optimize.py
===========
A collection of general-purpose optimization routines using Numeric.
N-D Algorithms
===============
fmin --- Nelder-Mead Simplex algorithm (uses only function calls).
fmin_powell --- Powell (direction set) method (uses only function calls).
fmin_bfgs --- Quasi-Newton method (uses function and gradient).
fmin_ncg --- Line-search Newton Conjugate Gradient (uses function,
gradient and hessian (if it's provided)).
1-D Algorithms
===============
brent --- Use Brent's method (does not need an initial guess)
fminbound --- Bounded minimization for scalar functions on an
interval using Brent's parabolic/golden_mean method.
golden --- Use Golden Section method (does not need an initial guess)
bracket --- Find a bracket containing the minimum.
Version: 0.4.2
Variables
=========
Num = Numeric
max = MLab.max
min = MLab.min
abs = absolute
epsilon = 1e-8

Imports: Numeric, MLab, sys, absolute, sqrt, asarray, squeeze

fmin
====
Minimize a function using the downhill simplex algorithm.
Description:
Uses a Nelder-Mead simplex algorithm to find the minimum of a function
of one or more variables.
Inputs:
func -- the Python function or method to be minimized.
x0 -- the initial guess.
args -- extra arguments for func.
Outputs: (xopt, {fopt, iter, funcalls, warnflag})
xopt -- minimizer of function
fopt -- value of function at minimum: fopt = func(xopt)
iter -- number of iterations
funcalls -- number of function calls
warnflag -- Integer warning flag:
1 : 'Maximum number of function evaluations.'
2 : 'Maximum number of iterations.'
Additional Inputs:
xtol -- acceptable relative error in xopt for convergence.
ftol -- acceptable relative error in func(xopt) for convergence.
maxiter -- the maximum number of iterations to perform.
maxfun -- the maximum number of function evaluations.
full_output -- non-zero if fval and warnflag outputs are desired.
disp -- non-zero to print convergence messages.
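
A minimal usage sketch for the routine described above, assuming this module
is importable as `optimize`; the quadratic objective and the starting point
are purely illustrative.

    import optimize  # this module; assumed to be on the Python path

    def objective(x):
        # Illustrative quadratic bowl with its minimum at (1, 2).
        return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

    x0 = [0.0, 0.0]  # initial guess
    # With full_output non-zero, the extra outputs documented above are returned.
    xopt, fopt, iters, funcalls, warnflag = optimize.fmin(
        objective, x0, xtol=1e-6, ftol=1e-6, full_output=1, disp=0)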

Line searches
=============
Minimize the function f(xk + alpha*pk).
Uses the line search algorithm of Wright and Nocedal in
'Numerical Optimization', 1999, pg. 59-60.
Outputs: (alpha0, gc, fc)

Minimize, over alpha, the function f(xk + alpha*pk).
Uses the interpolation algorithm (Armijo backtracking) as suggested by
Wright and Nocedal in 'Numerical Optimization', 1999, pg. 56-57.
Outputs: (alpha, fc, gc)
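
The backtracking idea is simple enough to illustrate outside the module. The
sketch below is a generic, self-contained Armijo backtracking loop, not this
module's routine; the helper name `armijo_backtrack`, the shrink factor `rho`,
and the sufficient-decrease constant `c1` are illustrative choices, and NumPy
stands in for the Numeric arrays used here.

    import numpy as np  # stand-in for the Numeric arrays this module uses

    def armijo_backtrack(f, xk, pk, gk, alpha0=1.0, rho=0.5, c1=1e-4):
        # Shrink alpha until f(xk + alpha*pk) satisfies the Armijo
        # (sufficient decrease) condition; pk must be a descent direction.
        alpha = alpha0
        fk = f(xk)
        slope = np.dot(gk, pk)   # directional derivative along pk (negative)
        while f(xk + alpha * pk) > fk + c1 * alpha * slope:
            alpha *= rho         # backtrack: try a shorter step
        return alpha

    # Illustrative quadratic with its minimum at the origin.
    f = lambda x: 0.5 * np.dot(x, x)
    xk = np.array([2.0, -1.0])
    gk = xk          # gradient of the quadratic at xk
    pk = -gk         # steepest-descent direction
    alpha = armijo_backtrack(f, xk, pk, gk)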

fmin_bfgs
=========
Minimize a function using the BFGS algorithm.
Description:
Optimize the function f, whose gradient is given by fprime, using the
quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS).
See Wright and Nocedal, 'Numerical Optimization', 1999, pg. 198.
Inputs:
f -- the Python function or method to be minimized.
x0 -- the initial guess for the minimizer.
fprime -- a function to compute the gradient of f.
args -- extra arguments to f and fprime.
Outputs: (xopt, {fopt, func_calls, grad_calls, warnflag})
xopt -- the minimizer of f.
fopt -- the value of f(xopt).
func_calls -- the number of function calls.
grad_calls -- the number of gradient calls.
warnflag -- an integer warning flag:
1 : 'Maximum number of iterations exceeded.'
Additional Inputs:
avegtol -- the minimum occurs when fprime(xopt)==0. This specifies how
close to zero the average magnitude of fprime(xopt) needs
to be.
maxiter -- the maximum number of iterations.
full_output -- if non-zero then return fopt, func_calls, grad_calls,
and warnflag in addition to xopt.
disp -- print convergence message if non-zero.
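
A hedged usage sketch for fmin_bfgs, assuming the module imports as
`optimize` and that the Numeric package it depends on is available; the
objective, its analytic gradient, and the starting point are illustrative.

    import optimize             # this module; assumed importable
    from Numeric import array   # the array package this module is written against

    def f(x):
        # Illustrative quadratic bowl centered at (3, -1).
        return (x[0] - 3.0)**2 + 10.0 * (x[1] + 1.0)**2

    def fprime(x):
        # Analytic gradient of f, returned as an array.
        return array([2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)])

    x0 = array([0.0, 0.0])
    xopt, fopt, func_calls, grad_calls, warnflag = optimize.fmin_bfgs(
        f, x0, fprime=fprime, full_output=1, disp=0)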

fmin_ncg
========
Minimize a function using the Newton-CG method.
Description:
Minimize the function f, whose gradient is given by fprime, using the
Newton-CG method. fhess_p must compute the Hessian times an arbitrary
vector. If it is not given, finite differences on fprime are used to
compute it. See Wright and Nocedal, 'Numerical Optimization', 1999,
pg. 140.
Inputs:
f -- the Python function or method to be minimized.
x0 -- the initial guess for the minimizer.
fprime -- a function to compute the gradient of f: fprime(x, *args)
fhess_p -- a function to compute the Hessian of f times an
arbitrary vector: fhess_p(x, p, *args)
fhess -- a function to compute the Hessian matrix of f.
args -- extra arguments for f, fprime, fhess_p, and fhess (the same
set of extra arguments is supplied to all of these functions).
Outputs: (xopt, {fopt, fcalls, gcalls, hcalls, warnflag})
xopt -- the minimizer of f
fopt -- the value of the function at xopt: fopt = f(xopt)
fcalls -- the number of function calls.
gcalls -- the number of gradient calls.
hcalls -- the number of hessian calls.
warnflag -- algorithm warnings:
1 : 'Maximum number of iterations exceeded.'
Additional Inputs:
avextol -- Convergence is assumed when the average relative error in
the minimizer falls below this amount.
maxiter -- Maximum number of iterations to allow.
full_output -- If non-zero return the optional outputs.
disp -- If non-zero print convergence message.
Remarks:
Only one of fhess_p or fhess need be given. If fhess is provided,
then fhess_p will be ignored. If neither fhess nor fhess_p is
provided, then the hessian product will be approximated using finite
differences on fprime.
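
A hedged sketch for fmin_ncg using the Hessian-vector form, under the same
assumptions as above (the module imports as `optimize` and Numeric is
available); the quadratic, its gradient, and the Hessian-vector product are
illustrative.

    import optimize             # this module; assumed importable
    from Numeric import array   # the array package this module is written against

    def f(x):
        # Illustrative quadratic 0.5 * x'Ax with A = diag(1, 10).
        return 0.5 * (x[0]**2 + 10.0 * x[1]**2)

    def fprime(x):
        return array([x[0], 10.0 * x[1]])

    def fhess_p(x, p):
        # The Hessian diag(1, 10) applied to an arbitrary vector p.
        return array([p[0], 10.0 * p[1]])

    x0 = array([3.0, -2.0])
    # Only fhess_p is supplied; per the remarks above, fhess may be given instead.
    xopt = optimize.fmin_ncg(f, x0, fprime, fhess_p=fhess_p, disp=0)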

fminbound
=========
Bounded minimization for scalar functions.
Description:
Finds a local minimizer of the scalar function func in the interval
x1 < xopt < x2 using Brent's method. (See brent for auto-bracketing).
Inputs:
func -- the function to be minimized (must accept scalar input and return
scalar output).
x1, x2 -- the optimization bounds.
args -- extra arguments to pass to function.
xtol -- the convergence tolerance.
maxfun -- maximum function evaluations.
full_output -- Non-zero to return optional outputs.
disp -- Non-zero to print messages.
0 : no message printing.
1 : non-convergence notification messages only.
2 : print a message on convergence too.
3 : print iteration results.
Outputs: (xopt, {fval, ierr, numfunc})
xopt -- The minimizer of the function over the interval.
fval -- The function value at the minimum point.
ierr -- An error flag (0 if converged, 1 if maximum number of
function calls reached).
numfunc -- The number of function calls.
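
A minimal sketch for fminbound, again assuming the module imports as
`optimize`; the parabola and the interval are illustrative.

    import optimize  # this module; assumed importable

    # Minimize a simple parabola on the interval [0, 10]; the minimizer is 2.5.
    xopt, fval, ierr, numfunc = optimize.fminbound(
        lambda x: (x - 2.5)**2, 0.0, 10.0, full_output=1, disp=0)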

brent
=====
Given a function of one variable and a possible bracketing interval,
return the minimum of the function isolated to a fractional precision of
tol. A bracketing interval is a triple (a, b, c) where a < b < c and
func(b) < func(a), func(c). If bracket is two numbers, they are assumed
to be a starting interval for a downhill bracket search (see bracket).
Uses inverse parabolic interpolation when possible to speed up
convergence of the golden section method.

golden
======
Given a function of one variable and a possible bracketing interval,
return the minimum of the function isolated to a fractional precision of
tol. A bracketing interval is a triple (a, b, c) where a < b < c and
func(b) < func(a), func(c). If bracket is two numbers, they are assumed
to be a starting interval for a downhill bracket search (see bracket).
Uses an analog of the bisection method (golden section search) to
decrease the bracketed interval.
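
A hedged sketch of the two 1-D routines on the same scalar function, assuming
the module imports as `optimize`. Neither call supplies a bracket, since both
routines are documented as not needing an initial guess; the test function is
illustrative.

    import optimize  # this module; assumed importable

    func = lambda x: (x - 2.0)**2 + 1.0   # scalar function with its minimum at x = 2

    # No bracket is given here; a bracketing triple (a, b, c) with
    # func(b) < func(a), func(c) could be supplied to steer the search.
    xmin_brent = optimize.brent(func)
    xmin_golden = optimize.golden(func)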

fmin_powell
===========
Minimize a function using modified Powell's method.
Description:
Uses a modification of Powell's method to find the minimum of a
function of N variables.
Inputs:
func -- the Python function or method to be minimized.
x0 -- the initial guess.
args -- extra arguments for func.
Outputs: (xopt, {fopt, direc, iter, funcalls, warnflag})
xopt -- minimizer of function
fopt -- value of function at minimum: fopt = func(xopt)
direc -- current direction set
iter -- number of iterations
funcalls -- number of function calls
warnflag -- Integer warning flag:
1 : 'Maximum number of function evaluations.'
2 : 'Maximum number of iterations.'
Additional Inputs:
xtol -- line-search error tolerance.
ftol -- acceptable relative error in func(xopt) for convergence.
maxiter -- the maximum number of iterations to perform.
maxfun -- the maximum number of function evaluations.
full_output -- non-zero if fval and warnflag outputs are desired.
disp -- non-zero to print convergence messages.
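
A short sketch for fmin_powell, assuming the module imports as `optimize`;
the Rosenbrock function is a standard test problem and is used here only for
illustration.

    import optimize  # this module; assumed importable

    def rosen(x):
        # Rosenbrock banana function; its minimum is at (1, 1).
        return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

    x0 = [-1.2, 1.0]
    # full_output=1 would also return the direction set, counts, and warnflag.
    xopt = optimize.fmin_powell(rosen, x0, xtol=1e-6, ftol=1e-6, disp=0)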