Robust optimization is a distinct approach to optimization problems that allows for the incorporation of
uncertainty. The usefulness of robust optimization lies in its ability to produce solutions that remain feasible for every realization of the uncertain
parameters. As a result, the problem can be solved for the worst-case scenarios over the entire set...
Traditionally, robust optimization has solved problems based on static decisions that are predetermined by the
decision makers. Once the decisions were made, the problem was solved, and whenever a new uncertainty was
realized, it was incorporated into the original problem and the entire problem was solved again to
account...
Robust optimization is a subset of optimization theory that deals with a certain measure of robustness against uncertainty. The uncertainty
is represented as variability in the parameters of the problem at hand and/or in its solution [1]. In robust optimization, the modeler
aims to find decisions...
Fuzzy programming is one of many optimization models that deal with optimization under uncertainty. It can be applied when situations are not clearly
defined and thus involve uncertainty, or when an exact value is not critical to the problem. For example, categorizing people into young, middle-aged, and old is...
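As a minimal sketch of such a vague category (the age breakpoints below are assumed for the example, not taken from the text), membership of an age $t$ in the fuzzy set "middle aged" can be modeled with a triangular membership function:

\[
\mu_{\text{middle aged}}(t) =
\begin{cases}
0, & t \le 35 \ \text{or}\ t \ge 60,\\
(t - 35)/10, & 35 < t \le 45,\\
(60 - t)/15, & 45 < t < 60,
\end{cases}
\]

so that an age of 45 belongs fully to the set while ages of 35 and 60 do not belong at all.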
The chance-constrained method is one of the major approaches to solving optimization problems under
uncertainty. It formulates an optimization problem so as to ensure that the probability of satisfying a given
constraint is above a prescribed level. In other words, it restricts the feasible region so that the...
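In common notation, a chance constraint on a random parameter $\xi$ takes the form

\[
\min_{x} \; f(x)
\quad \text{s.t.} \quad
\Pr\!\left[\, g(x,\xi) \le 0 \,\right] \ge 1 - \epsilon,
\]

where $\epsilon \in (0,1)$ is the maximum allowed probability of violating the constraint.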
Non-differentiable optimization is a category of optimization that deals with objective functions that, for a variety of reasons,
are not differentiable. The functions in this class of optimization are generally non-smooth.
These functions, although continuous, often contain sharp points or corners that do not allow for the solution...
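A standard one-dimensional example is the absolute-value function,

\[
f(x) = |x|, \qquad \partial f(0) = [-1, 1],
\]

which is continuous (and convex) everywhere but has a kink at $x = 0$, where no gradient exists and any subgradient from $[-1,1]$ must be used in its place.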
Geometric programming was introduced in 1967 by Duffin, Peterson, and Zener. It is very useful in a variety
of optimization applications and falls under the general class of signomial problems [1]. It can be used to
solve large-scale, practical problems by casting them as a mathematical optimization...
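In its standard form (common notation, not quoted from the text), a geometric program is

\[
\begin{aligned}
\min_{x > 0} \quad & f_0(x) \\
\text{s.t.} \quad & f_i(x) \le 1, \quad i = 1,\dots,m,\\
& g_j(x) = 1, \quad j = 1,\dots,p,
\end{aligned}
\]

where each $f_i$ is a posynomial $\sum_k c_k\, x_1^{a_{1k}} \cdots x_n^{a_{nk}}$ with $c_k > 0$ and each $g_j$ is a monomial (a single such term).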
In this work, we will focus on the “at the same time” or direct transcription approach, which allows a simultaneous
treatment of the dynamic optimization problem. In particular, we formulate the dynamic optimization model with
orthogonal collocation methods. These methods can also be regarded as a special class of implicit...
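As a hedged sketch of the discretization (notation assumed here, following a Lagrange-polynomial representation on one finite element of length $h$), the state is approximated by a polynomial and the ODE is enforced only at the collocation points $\tau_k$:

\[
z(\tau) \approx \sum_{j=0}^{K} \ell_j(\tau)\, z_j,
\qquad
\sum_{j=0}^{K} \dot{\ell}_j(\tau_k)\, z_j \;=\; h\, f(z_k, u_k),
\quad k = 1,\dots,K,
\]

and these algebraic collocation equations enter the resulting NLP as equality constraints.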
Subgradient Optimization (or the Subgradient Method) is an iterative algorithm
for minimizing convex functions, used predominantly in nondifferentiable
optimization for functions that are convex but nondifferentiable. It is often slower
than Newton's Method when applied to convex differentiable functions, but it can
be used on convex nondifferentiable functions where Newton's Method will...
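A minimal sketch of the iteration $x_{k+1} = x_k - \alpha_k g_k$ with a diminishing step size, applied to an assumed toy problem (the 1-norm objective below is only for illustration):

```python
import numpy as np

def subgradient_min(f, subgrad, x0, steps=500):
    """Subgradient method sketch: x_{k+1} = x_k - a_k * g_k with
    diminishing steps a_k = 1/(k+1). The best iterate is tracked,
    since the objective need not decrease at every step."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(steps):
        g = subgrad(x)              # any subgradient at x
        x = x - g / (k + 1)         # diminishing step size
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Toy problem (assumed): minimize the nondifferentiable f(x) = ||x - c||_1
c = np.array([1.0, -2.0, 3.0])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)  # a valid subgradient of the 1-norm
x_best, f_best = subgradient_min(f, subgrad, x0=np.zeros(3))
print(x_best, f_best)               # approaches c, objective near 0
```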
Sequential quadratic programming (SQP) is a class of algorithms for solving real-world non-linear optimization
(NLP) problems. It is powerful enough for real problems because it can handle any degree of non-linearity,
including non-linearity in the constraints. The main disadvantage is that the method requires the evaluation of several
derivatives, which...
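As a hedged usage sketch (the toy problem and starting point below are assumed, not from the text), SciPy's SLSQP routine applies an SQP-type iteration to a small constrained NLP:

```python
import numpy as np
from scipy.optimize import minimize

# Toy NLP (assumed for illustration): nonlinear objective with one
# nonlinear equality and one linear inequality constraint.
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
constraints = [
    {"type": "eq",   "fun": lambda x: x[0] ** 2 + x[1] - 3.0},  # x0^2 + x1 = 3
    {"type": "ineq", "fun": lambda x: x[0] - 0.5},              # x0 >= 0.5
]

result = minimize(objective, x0=np.array([2.0, 0.0]),
                  method="SLSQP", constraints=constraints)
print(result.x, result.fun)  # point satisfying the KKT conditions
```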
Quadratic programming (QP) is the problem of optimizing a quadratic objective function and is one of the simplest forms of non-linear
programming. The objective function can contain bilinear or up to second-order polynomial terms, and the constraints are linear and can be both
equalities and inequalities. QP is widely...
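In standard form (common notation), a QP reads

\[
\begin{aligned}
\min_{x} \quad & \tfrac{1}{2}\, x^{\top} Q x + c^{\top} x \\
\text{s.t.} \quad & A x \le b, \qquad E x = d,
\end{aligned}
\]

where $Q$ is a symmetric matrix; when $Q$ is positive semidefinite the problem is convex.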
Quasi-Newton Methods (QNMs) are generally a class of optimization methods that are used in Non-Linear
Programming when the full Newton's Method is either too time-consuming or difficult to use. More specifically,
these methods are used to find a local minimum of a function f(x) that is twice-differentiable. There are distinct...
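One widely used quasi-Newton update, BFGS, replaces the exact Hessian with an approximation $B_k$ built from gradient differences only:

\[
B_{k+1} = B_k
- \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
+ \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
\qquad
s_k = x_{k+1} - x_k, \quad
y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
\]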
The conjugate gradient method is a mathematical technique that can be useful
for the optimization of both linear and non-linear systems. It is
generally used as an iterative algorithm; however, it can also be used as a direct
method, in which case it produces a numerical solution in a finite number of steps. Generally this method is...
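A minimal sketch of the iterative form, applied to a linear system $Ax = b$ with $A$ symmetric positive definite (the small system at the end is assumed for illustration):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """CG sketch for A x = b with A symmetric positive definite,
    i.e. for minimizing 0.5 x^T A x - b^T x. In exact arithmetic it
    terminates in at most n steps, hence the 'direct method' view."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x              # residual = negative gradient
    p = r.copy()               # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact minimizing step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # assumed SPD example
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))          # ~ [0.0909, 0.6364]
```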
The interior point (IP) method for nonlinear programming was pioneered by Anthony V. Fiacco and Garth P. McCormick in the
early 1960s. The basis of the IP method is to fold the constraints into the objective function (see duality,
http://en.wikipedia.org/wiki/Duality_%28optimization%29) by creating a barrier function. This limits potential solutions to
iterate in only...
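In common notation, the logarithmic barrier replaces the inequality constraints $g_i(x) \le 0$ by a penalty that blows up at the boundary of the feasible region:

\[
\min_{x} \;\; f(x) \;-\; \mu \sum_{i=1}^{m} \ln\!\bigl(-g_i(x)\bigr),
\]

where $\mu > 0$ is the barrier parameter; as $\mu \to 0$, the minimizers of the barrier problems trace the central path toward a solution of the original constrained problem.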
The trust-region method (TRM) is one of the most important numerical optimization methods for
solving nonlinear programming (NLP) problems. It works by first defining a region
around the current best solution, in which a certain model (usually a quadratic model) can to
some extent approximate the original objective...
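In common notation, each iteration solves the trust-region subproblem

\[
\min_{p} \;\; m_k(p) = f(x_k) + \nabla f(x_k)^{\top} p + \tfrac{1}{2}\, p^{\top} B_k p
\qquad \text{s.t.} \quad \|p\| \le \Delta_k,
\]

where $B_k$ approximates the Hessian and the radius $\Delta_k$ is enlarged or shrunk according to how well the model's predicted reduction matches the actual reduction in $f$.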
An algorithm is a line search method if it seeks the minimum of a defined nonlinear function by selecting a
reasonable direction vector that, when computed iteratively with a reasonable step size, will provide a function
value closer to the minimum of the function. Varying these choices will change the...
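A minimal sketch of one common step-size rule, backtracking with the Armijo condition (the quadratic test function is assumed for illustration):

```python
import numpy as np

def backtracking_line_search(f, grad, x, d, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink the step until f decreases sufficiently along the
    descent direction d (Armijo / sufficient-decrease condition)."""
    fx, gx = f(x), grad(x)
    while f(x + alpha * d) > fx + c * alpha * (gx @ d):
        alpha *= rho
    return alpha

# Assumed example: one steepest-descent step on f(x) = x0^2 + 10*x1^2
f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])

x = np.array([1.0, 1.0])
d = -grad(x)                             # descent direction
step = backtracking_line_search(f, grad, x, d)
print(step, f(x + step * d))             # objective drops below f(x) = 11
```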
Extended Cutting Plane (ECP) is an optimization method suggested by Westerlund and Pettersson in 1996 to solve
Mixed-Integer NonLinear Programming (MINLP) problems. ECP can be thought of as an extension of Kelley's
cutting plane method, which iteratively adds linear cuts to refine the feasible region and ultimately solve the problem
within a tolerable...
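The cuts themselves are first-order linearizations of violated convex constraints; in the notation assumed here, at iterate $x^{k}$ a violated constraint $g_j(x) \le 0$ contributes the cut

\[
g_j(x^{k}) + \nabla g_j(x^{k})^{\top}\!\left(x - x^{k}\right) \le 0,
\]

which is valid for every feasible point because the linearization of a convex function never overestimates it; the accumulated cuts progressively tighten the relaxed problem.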
Outer approximation is a basic approach for solving Mixed-Integer Nonlinear Programming (MINLP) models
suggested by Duran and Grossmann (1986). Based on principles of decomposition, outer approximation, and
relaxation, the proposed algorithm effectively exploits the structure of the original problems. The algorithm
consists of solving an alternating finite sequence...
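In the common convex-MINLP notation (assumed here, not quoted from the reference), the master problem collects linearizations of the objective and constraints at the points $(x^{k}, y^{k})$ returned by the NLP subproblems:

\[
\begin{aligned}
\min_{x,\,y,\,\eta} \quad & \eta \\
\text{s.t.} \quad
& \eta \ge f(x^{k}, y^{k}) + \nabla f(x^{k}, y^{k})^{\top}
\begin{pmatrix} x - x^{k} \\ y - y^{k} \end{pmatrix},
\quad k = 1,\dots,K,\\
& 0 \ge g(x^{k}, y^{k}) + \nabla g(x^{k}, y^{k})^{\top}
\begin{pmatrix} x - x^{k} \\ y - y^{k} \end{pmatrix},
\quad k = 1,\dots,K,
\end{aligned}
\]

so that its optimal value gives a lower bound, while the NLP subproblems (with the integer variables fixed) give upper bounds, and the two alternate until the bounds close.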