In this work, we will focus on the “at the same time” or direct transcription approach, which allows the dynamic optimization problem to be solved simultaneously. In particular, we formulate the dynamic optimization model with orthogonal collocation methods. These methods can also be regarded as a special class of implicit...
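As a minimal illustration of the collocation idea, the sketch below places a single collocation point per element at the right endpoint (one-point Radau collocation), which reduces to the implicit Euler scheme; the test problem dx/dt = -x and the fixed-point solver are illustrative assumptions, not part of the formulation above.

```python
import math

def radau1_step(f, x, h):
    # One-point Radau collocation (collocation point at t = h): enforce the
    # ODE at the right endpoint of the element. This is exactly implicit
    # Euler, the simplest member of the implicit schemes mentioned above.
    xk = x
    for _ in range(50):  # fixed-point iteration on x_new = x + h * f(x_new)
        xk = x + h * f(xk)
    return xk

# Integrate dx/dt = -x, x(0) = 1 over [0, 1] with 100 elements.
x, h = 1.0, 0.01
for _ in range(100):
    x = radau1_step(lambda y: -y, x, h)
# x approximates exp(-1)
```

Each finite element contributes one algebraic equation, so in a dynamic optimization model these residual equations simply become equality constraints of the NLP.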
Subgradient Optimization (or the Subgradient Method) is an iterative algorithm for minimizing convex functions, used predominantly in nondifferentiable optimization for functions that are convex but nondifferentiable. It is often slower than Newton's Method when applied to convex differentiable functions, but it can be used on convex nondifferentiable functions where Newton's Method will...
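A minimal sketch of the method on the nondifferentiable convex function f(x) = |x - 3|, using the classic diminishing step sizes 1/k; the test function and step-size rule are illustrative assumptions.

```python
def subgradient_min_abs(x0, target, iters=500):
    # Minimize f(x) = |x - target| by subgradient steps with sizes 1/k.
    # A subgradient of |x - target| is sign(x - target); any value in
    # [-1, 1] is a valid subgradient at the kink x = target.
    x = x0
    best = x
    for k in range(1, iters + 1):
        g = (x > target) - (x < target)  # sign of (x - target)
        x = x - (1.0 / k) * g
        # Subgradient steps are not monotone, so track the best iterate.
        if abs(x - target) < abs(best - target):
            best = x
    return best

xmin = subgradient_min_abs(0.0, 3.0)  # approaches the minimizer x = 3
```

Note the need to keep the best iterate seen so far: unlike gradient descent, a subgradient step can increase the objective.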
Sequential quadratic programming (SQP) is a class of algorithms for solving non-linear optimization problems (NLPs) that arise in practice. It is powerful enough for real problems because it can handle any degree of non-linearity, including non-linearity in the constraints. The main disadvantage is that the method requires several derivatives, which...
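As an illustrative sketch (omitting the line-search and inequality-handling machinery of a full SQP code), the example below applies SQP with an exact Lagrangian Hessian to a small equality-constrained problem chosen for demonstration.

```python
import numpy as np

def sqp_equality(x0, lam0=0.0, iters=20):
    # SQP sketch for: min x1^2 + x2^2  s.t.  c(x) = x1^2 + x2 - 1 = 0.
    # Each iteration solves the KKT system of a QP subproblem built from a
    # quadratic model of the Lagrangian and a linearized constraint.
    x = np.array(x0, dtype=float)
    lam = lam0
    for _ in range(iters):
        g = np.array([2*x[0], 2*x[1]])     # objective gradient
        a = np.array([2*x[0], 1.0])        # constraint gradient
        c = x[0]**2 + x[1] - 1.0           # constraint residual
        H = np.diag([2.0 - 2.0*lam, 2.0])  # Hessian of the Lagrangian
        K = np.block([[H, -a.reshape(2, 1)],
                      [a.reshape(1, 2), np.zeros((1, 1))]])
        rhs = -np.concatenate([g - lam*a, [c]])
        step = np.linalg.solve(K, rhs)
        x += step[:2]
        lam += step[2]
    return x, lam

# Converges to x = (1/sqrt(2), 1/2) with multiplier lam = 1.
x, lam = sqp_equality([1.0, 0.0])
```

The derivative burden mentioned above is visible here: every iteration needs the objective gradient, the constraint Jacobian, and (in this exact-Hessian variant) second derivatives of the Lagrangian.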
Quadratic programming (QP) is the problem of optimizing a quadratic objective function and is one of the simplest forms of non-linear programming. The objective function can contain bilinear or up to second order polynomial terms, and the constraints are linear and can be both equalities and inequalities. QP is widely...
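When only equality constraints are present, the KKT optimality conditions of a QP are themselves linear, so the problem reduces to a single linear solve; a minimal sketch on an illustrative example:

```python
import numpy as np

def solve_eq_qp(Q, c, A, b):
    # Equality-constrained QP: min 0.5 x^T Q x + c^T x  s.t.  A x = b.
    # The KKT conditions Qx + A^T lam = -c, Ax = b are linear, so one
    # solve of the (symmetric) KKT system gives the optimum.
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]  # primal solution x, Lagrange multipliers

# min x1^2 + x2^2  s.t.  x1 + x2 = 1   →   x = (0.5, 0.5)
x, lam = solve_eq_qp(np.diag([2.0, 2.0]), np.zeros(2),
                     np.array([[1.0, 1.0]]), np.array([1.0]))
```

Inequality constraints require an active-set or interior point treatment on top of this linear core.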
Quasi-Newton Methods (QNMs) are generally a class of optimization methods that are used in Non-Linear Programming when full Newton’s Methods are either too time-consuming or difficult to use. More specifically, these methods are used to find a local minimum of a function f(x) that is twice-differentiable. There are distinct...
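A minimal sketch of BFGS, one representative quasi-Newton method, paired with an Armijo backtracking line search and tested on the Rosenbrock function; the safeguards and tolerances are illustrative choices.

```python
import numpy as np

def bfgs(f, grad, x0, iters=200, tol=1e-10):
    # BFGS maintains an inverse-Hessian approximation H, updated from
    # gradient differences -- no second derivatives are ever computed.
    x = np.array(x0, dtype=float)
    H = np.eye(len(x))
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                      # quasi-Newton search direction
        t = 1.0                         # Armijo backtracking line search
        while f(x + t*p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        if y @ s > 1e-12:               # skip update if curvature test fails
            rho = 1.0 / (y @ s)
            I = np.eye(len(x))
            H = (I - rho*np.outer(s, y)) @ H @ (I - rho*np.outer(y, s)) \
                + rho*np.outer(s, s)
        x, g = x_new, g_new
    return x

# Rosenbrock function: minimum at (1, 1).
f = lambda v: (1 - v[0])**2 + 100*(v[1] - v[0]**2)**2
grad = lambda v: np.array([-2*(1 - v[0]) - 400*v[0]*(v[1] - v[0]**2),
                           200*(v[1] - v[0]**2)])
x = bfgs(f, grad, [-1.2, 1.0])
```

The appeal relative to full Newton is clear in the loop: only first derivatives are evaluated, and the curvature information is accumulated from successive gradients.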
The conjugate gradient method is a mathematical technique that can be useful for the optimization of both linear and non-linear systems. This technique is generally used as an iterative algorithm; however, for linear systems it terminates in finitely many steps in exact arithmetic, so it can also be used as a direct method, and it will produce a numerical solution. Generally this method is...
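A minimal sketch of CG as a linear solver for a symmetric positive definite system; the 2x2 system is an illustrative choice.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    # CG for A x = b with A symmetric positive definite. In exact
    # arithmetic it terminates in at most n steps (the "direct" view);
    # in practice it is iterated until the residual is small.
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # first search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact minimizer along direction p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new direction, A-conjugate to the old
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)   # exact after 2 steps for this 2x2 system
```

Since the loop runs at most n times on an n-variable system, this same code exhibits both the iterative and the direct behavior described above.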
The interior point (IP) method for nonlinear programming was pioneered by Anthony V. Fiacco and Garth P. McCormick in the early 1960s. The basis of the IP method is to incorporate the constraints into the objective function (duality ( http://en.wikipedia.org/wiki/Duality_%28optimization%29) ) by creating a barrier function. This limits potential solutions to iterate in only...
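A minimal sketch of the barrier idea on a one-variable problem chosen for illustration: the inequality x >= 1 is folded into the objective as -mu*log(x - 1), and the barrier weight mu is driven toward zero along the central path.

```python
import math

def barrier_min(mu, x, iters=50):
    # Minimize phi(x) = x**2 - mu*log(x - 1), the log-barrier version of
    # min x**2 s.t. x >= 1, by damped Newton steps on phi'(x) = 0.
    for _ in range(iters):
        d1 = 2*x - mu/(x - 1)          # phi'
        d2 = 2 + mu/(x - 1)**2         # phi''  (always positive here)
        step = d1 / d2
        while x - step <= 1.0:         # damping keeps the iterate feasible
            step *= 0.5
        x -= step
    return x

# Follow the central path: each solve warm-starts the next, smaller mu.
x = 2.0
for mu in [1.0, 0.1, 0.01, 0.001, 1e-4]:
    x = barrier_min(mu, x)
# x approaches the constrained minimizer x* = 1 from the interior
```

The barrier term blows up at the constraint boundary, which is exactly what keeps every iterate strictly inside the feasible region.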
The trust-region method (TRM) is one of the most important numerical optimization methods for solving nonlinear programming (NLP) problems. It works by first defining a region around the current best solution, in which a certain model (usually a quadratic model) can to some extent approximate the original objective...
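A minimal trust-region sketch that uses the Cauchy point as an (approximate) minimizer of the quadratic model inside the region; the test function, radius-update rules, and acceptance thresholds are illustrative choices.

```python
import numpy as np

def trust_region_cauchy(f, grad, hess, x0, delta=1.0, iters=100):
    # Basic trust-region loop: build a quadratic model at x, step to the
    # Cauchy point within radius delta, then grow or shrink the region
    # based on how well the model predicted the actual decrease.
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g, B = grad(x), hess(x)
        gn = np.linalg.norm(g)
        if gn < 1e-10:
            break
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, gn**3 / (delta * gBg))
        p = -tau * (delta / gn) * g          # Cauchy point step
        pred = -(g @ p + 0.5 * p @ B @ p)    # predicted model reduction
        rho = (f(x) - f(x + p)) / pred       # model-vs-function agreement
        if rho < 0.25:
            delta *= 0.25                    # poor model: shrink region
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2*delta, 10.0)       # good model at boundary: grow
        if rho > 0.1:
            x = x + p                        # accept the step
    return x

f = lambda v: v[0]**2 + 10*v[1]**2
grad = lambda v: np.array([2*v[0], 20*v[1]])
hess = lambda v: np.diag([2.0, 20.0])
x = trust_region_cauchy(f, grad, hess, [5.0, 1.0])  # minimum at the origin
```

The ratio rho is the heart of the method: it measures how far the quadratic model can be trusted, and the radius delta is adapted accordingly.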
An algorithm is a line search method if it seeks the minimum of a defined nonlinear function by selecting a reasonable direction vector that, when followed iteratively with a reasonable step size, will produce function values closer to the absolute minimum of the function. Varying these will change the...
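A minimal sketch of a line search method: steepest-descent directions combined with Armijo backtracking to choose the step size. The quadratic test function and the Armijo constants are illustrative choices.

```python
import numpy as np

def backtracking_step(f, g, x, p, alpha=1.0, beta=0.5, c=1e-4):
    # Armijo backtracking: shrink alpha until the sufficient-decrease
    # condition f(x + alpha*p) <= f(x) + c*alpha*g.p holds.
    while f(x + alpha*p) > f(x) + c * alpha * (g @ p):
        alpha *= beta
    return alpha

def gradient_descent(f, grad, x0, iters=200):
    # Line search method: pick a descent direction (here steepest descent),
    # then a step size along it, and repeat.
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        p = -g                               # a reasonable descent direction
        alpha = backtracking_step(f, g, x, p)
        x = x + alpha * p
    return x

f = lambda v: (v[0] - 2)**2 + (v[1] + 1)**2
grad = lambda v: np.array([2*(v[0] - 2), 2*(v[1] + 1)])
x = gradient_descent(f, grad, [10.0, 10.0])  # converges near (2, -1)
```

Varying the two ingredients (the direction rule and the step-size rule) yields the different line search methods discussed above.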