
Module scipy_optimize


optimize.py

A collection of general-purpose optimization routines using Numeric


N-D Algorithms
===============
fmin        ---      Nelder-Mead Simplex algorithm (uses only function calls).
fmin_powell ---      Powell (direction set) method (uses only function calls).
fmin_bfgs   ---      Quasi-Newton method (uses function and gradient).
fmin_ncg    ---      Line-search Newton Conjugate Gradient (uses function, 
                     gradient and hessian (if it's provided)).

1-D Algorithms
===============
brent       ---      Use Brent's method (does not need initial guess)
fminbound   ---      Bounded minimization for scalar functions on an
                     interval using Brent's parabolic/golden_mean method.
golden      ---      Use Golden Section method (does not need initial guess)

bracket     ---      Find a bracket containing the minimum.


Version: 0.4.2
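
Example: a minimal sketch of typical use, assuming the module can be
imported as scipy_optimize (adjust the import to wherever optimize.py
lives in your tree) and that Numeric is installed.  The bundled
Rosenbrock test function rosen has its minimum at (1, ..., 1).

    # Sketch only -- the import name "scipy_optimize" is an assumption.
    import scipy_optimize as opt

    x0 = [1.3, 0.7, 0.8, 1.9, 1.2]      # arbitrary starting point
    xopt = opt.fmin(opt.rosen, x0)      # Nelder-Mead, function values only
    print(xopt)                         # expect values close to 1.0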

Functions

rosen(x)
rosen_der(x)
rosen_hess(x)
rosen_hess_prod(x, p)

fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None, full_output=0, disp=1)
    Minimize a function using the downhill simplex algorithm.

zoom(a_lo, a_hi)

line_search(f, fprime, xk, pk, gfk, args=(), c1=1e-4, c2=0.9, amax=50)
    Minimize the function f(xk+alpha pk)

line_search_BFGS(f, xk, pk, gfk, args=(), c1=1e-4, alpha0=1)
    Minimize over alpha, the function f(xk+alpha pk)

approx_fprime(xk, f, *args)

approx_fhess_p(x0, p, fprime, *args)

fmin_bfgs(f, x0, fprime=None, args=(), avegtol=1e-5, maxiter=None, full_output=0, disp=1)
    Minimize a function using the BFGS algorithm.

fmin_ncg(f, x0, fprime, fhess_p=None, fhess=None, args=(), avextol=1e-5, maxiter=None, full_output=0, disp=1)
    Minimize a function using the Newton-CG method.

fminbound(func, x1, x2, args=(), xtol=1e-5, maxfun=500, full_output=0, disp=1)
    Bounded minimization for scalar functions.

brent(func, args=(), brack=None, tol=1.48e-8, full_output=0, maxiter=500)
    Given a function of one variable and a possible bracketing interval,
    return the minimum of the function isolated to a fractional precision
    of tol.

golden(func, args=(), brack=None, tol=1.49e-8, full_output=0)
    Given a function of one variable and a possible bracketing interval,
    return the minimum of the function isolated to a fractional precision
    of tol.

bracket(func, xa=0.0, xb=1.0, args=(), grow_limit=110.0)
    Given a function and distinct initial points, search in the downhill
    direction (as defined by the initial points) and return new points
    xa, xb, xc that bracket the minimum of the function:
    f(xa) > f(xb) < f(xc)

_myfunc(alpha, func, x0, direc, args=())

_linesearch_powell(func, p, xi, args=(), tol=1e-4)

fmin_powell(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None, full_output=0, disp=1)
    Minimize a function using modified Powell's method.

_endprint(x, flag, fval, maxfun, xtol, disp)

Variables
  Num = Numeric
  max = MLab.max
  min = MLab.min
  abs = absolute
  epsilon = 1e-8

Imports: Numeric, MLab, sys, absolute, sqrt, asarray, squeeze


Function Details

fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None, full_output=0, disp=1)

Minimize a function using the downhill simplex algorithm.

Description:

  Uses a Nelder-Mead simplex algorithm to find the minimum of a function
  of one or more variables.

Inputs:

  func -- the Python function or method to be minimized.
  x0 -- the initial guess.
  args -- extra arguments for func.

Outputs: (xopt, {fopt, iter, funcalls, warnflag})

  xopt -- minimizer of function

  fopt -- value of function at minimum: fopt = func(xopt)
  iter -- number of iterations
  funcalls -- number of function calls
  warnflag -- Integer warning flag:
              1 : 'Maximum number of function evaluations.'
              2 : 'Maximum number of iterations.'

Additional Inputs:

  xtol -- acceptable relative error in xopt for convergence.
  ftol -- acceptable relative error in func(xopt) for convergence.
  maxiter -- the maximum number of iterations to perform.
  maxfun -- the maximum number of function evaluations.
  full_output -- non-zero if fval and warnflag outputs are desired.
  disp -- non-zero to print convergence messages.
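
Example (a sketch, assuming the import name scipy_optimize used above and
that the full_output tuple is ordered as documented):

    # Request the optional outputs from fmin; disp=0 suppresses messages.
    import scipy_optimize as opt

    x0 = [0.8, 1.2, 0.7]
    xopt, fopt, niter, funcalls, warnflag = opt.fmin(
        opt.rosen, x0, xtol=1e-6, ftol=1e-6, full_output=1, disp=0)
    # warnflag: 1 = max function evaluations hit, 2 = max iterations hit.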
  
  

line_search(f, fprime, xk, pk, gfk, args=(), c1=1e-4, c2=0.9, amax=50)


Minimize the function f(xk+alpha pk)

Uses the line search algorithm of Wright and Nocedal in 'Numerical Optimization', 1999, pg. 59-60

Outputs: (alpha0, gc, fc)
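
Example (a sketch of a single Wolfe line search along a descent
direction, assuming the import name scipy_optimize; rosen and rosen_der
are the bundled Rosenbrock function and its gradient):

    import Numeric
    import scipy_optimize as opt

    xk = Numeric.array([2.0, 2.0])
    gfk = opt.rosen_der(xk)              # gradient at the current point
    pk = -gfk                            # steepest-descent direction
    alpha, gc, fc = opt.line_search(opt.rosen, opt.rosen_der, xk, pk, gfk)
    x_new = xk + alpha * pk              # step accepted by the line search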

line_search_BFGS(f, xk, pk, gfk, args=(), c1=1e-4, alpha0=1)


Minimize over alpha, the function f(xk+alpha pk)

Uses the interpolation algorithm (Armijo backtracking) as suggested by Wright and Nocedal in 'Numerical Optimization', 1999, pg. 56-57

Outputs: (alpha, fc, gc)

fmin_bfgs(f, x0, fprime=None, args=(), avegtol=1e-5, maxiter=None, full_output=0, disp=1)

Minimize a function using the BFGS algorithm.

Description:

  Optimize the function, f, whose gradient is given by fprime using the
  quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS).
  See Wright and Nocedal, 'Numerical Optimization', 1999, pg. 198.

Inputs:

  f -- the Python function or method to be minimized.
  x0 -- the initial guess for the minimizer.
  fprime -- a function to compute the gradient of f.
  args -- extra arguments to f and fprime.

Outputs: (xopt, {fopt, func_calls, grad_calls, warnflag})

  xopt -- the minimizer of f.

  fopt -- the value of f(xopt).
  func_calls -- the number of function calls.
  grad_calls -- the number of gradient calls.
  warnflag -- an integer warning flag:
              1 : 'Maximum number of iterations exceeded.'

Additional Inputs:

  avegtol -- the minimum occurs when fprime(xopt)==0.  This specifies how
             close to zero the average magnitude of fprime(xopt) needs
             to be.
  maxiter -- the maximum number of iterations.
  full_output -- if non-zero then return fopt, func_calls, grad_calls,
                 and warnflag in addition to xopt.
  disp -- print convergence message if non-zero.                
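
Example (a sketch, assuming the import name scipy_optimize; rosen_der is
the bundled analytic gradient of the Rosenbrock function):

    # BFGS with an analytic gradient (fprime defaults to None).
    import scipy_optimize as opt

    x0 = [2.0, 2.0]
    xopt = opt.fmin_bfgs(opt.rosen, x0, fprime=opt.rosen_der, avegtol=1e-8)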
  

fmin_ncg(f, x0, fprime, fhess_p=None, fhess=None, args=(), avextol=1e-5, maxiter=None, full_output=0, disp=1)

Minimize a function using the Newton-CG method.

Description:

  Minimize the function, f, whose gradient is given by fprime using the
  Newton-CG method.  fhess_p must compute the hessian times an arbitrary
  vector. If it is not given, finite-differences on fprime are used to
  compute it. See Wright and Nocedal, 'Numerical Optimization', 1999,
  pg. 140.

Inputs:

  f -- the Python function or method to be minimized.
  x0 -- the initial guess for the minimizer.
  fprime -- a function to compute the gradient of f: fprime(x, *args)
  fhess_p -- a function to compute the Hessian of f times an
             arbitrary vector: fhess_p (x, p, *args)
  fhess -- a function to compute the Hessian matrix of f.
  args -- extra arguments for f, fprime, fhess_p, and fhess (the same
          set of extra arguments is supplied to all of these functions).

Outputs: (xopt, {fopt, fcalls, gcalls, hcalls, warnflag})

  xopt -- the minimizer of f
  
  fopt -- the value of the function at xopt: fopt = f(xopt)
  fcalls -- the number of function calls.
  gcalls -- the number of gradient calls.
  hcalls -- the number of hessian calls.
  warnflag -- algorithm warnings:
              1 : 'Maximum number of iterations exceeded.'

Additional Inputs:

  avextol -- Convergence is assumed when the average relative error in
             the minimizer falls below this amount.  
  maxiter -- Maximum number of iterations to allow.
  full_output -- If non-zero return the optional outputs.
  disp -- If non-zero print convergence message.                
  
Remarks:

  Only one of fhess_p or fhess need be given.  If fhess is provided,
  then fhess_p will be ignored.  If neither fhess nor fhess_p is
  provided, then the hessian product will be approximated using finite
  differences on fprime.
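
Example (a sketch, assuming the import name scipy_optimize; rosen_hess
and rosen_hess_prod are the bundled Hessian and Hessian-vector-product
helpers):

    # Newton-CG: supply either the full Hessian or a Hessian-vector
    # product.  If both are given, fhess is used and fhess_p is ignored.
    import scipy_optimize as opt

    x0 = [2.0, 2.0]

    # Using the full Hessian matrix:
    xopt = opt.fmin_ncg(opt.rosen, x0, opt.rosen_der, fhess=opt.rosen_hess)

    # Using only the Hessian-vector product:
    xopt = opt.fmin_ncg(opt.rosen, x0, opt.rosen_der,
                        fhess_p=opt.rosen_hess_prod)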

  

fminbound(func, x1, x2, args=(), xtol=1e-5, maxfun=500, full_output=0, disp=1)

Bounded minimization for scalar functions.

Description:

  Finds a local minimizer of the scalar function func in the interval
  x1 < xopt < x2 using Brent's method.  (See brent for auto-bracketing).

Inputs:

  func -- the function to be minimized (must accept scalar input and return
          scalar output).
  x1, x2 -- the optimization bounds.
  args -- extra arguments to pass to function.
  xtol -- the convergence tolerance.
  maxfun -- maximum function evaluations.
  full_output -- Non-zero to return optional outputs. 
  disp -- Non-zero to print messages.
          0 : no message printing.
          1 : non-convergence notification messages only.
          2 : print a message on convergence too.
          3 : print iteration results. 


Outputs: (xopt, {fval, ierr, numfunc})

  xopt -- The minimizer of the function over the interval.
  fval -- The function value at the minimum point.
  ierr -- An error flag (0 if converged, 1 if maximum number of
          function calls reached).
  numfunc -- The number of function calls.
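
Example (a sketch, assuming the import name scipy_optimize; the
quadratic below is a made-up test function):

    # Bounded scalar minimization on the interval [0, 4].
    import scipy_optimize as opt

    def f(x):
        return (x - 2.5)**2 + 1.0        # minimum at x = 2.5

    xopt, fval, ierr, numfunc = opt.fminbound(f, 0.0, 4.0,
                                              full_output=1, disp=0)
    # ierr is 0 if converged, 1 if maxfun was reached.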

brent(func, args=(), brack=None, tol=1.48e-8, full_output=0, maxiter=500)


Given a function of one variable and a possible bracketing interval,
return the minimum of the function isolated to a fractional precision of
tol.  A bracketing interval is a triple (a, b, c) where a < b < c and
func(b) < func(a), func(c).  If brack is a pair of numbers, they are
taken as a starting interval for a downhill bracket search (see bracket).

Uses inverse parabolic interpolation when possible to speed up
convergence of the golden section method.
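
Example (a sketch, assuming the import name scipy_optimize; the test
function and bracketing triple below are made up):

    # Brent's method with an explicit bracketing triple (a, b, c),
    # chosen so that f(b) < f(a) and f(b) < f(c).
    import scipy_optimize as opt

    def f(x):
        return (x + 1.0)**2              # minimum at x = -1

    xmin = opt.brent(f, brack=(-3.0, 0.0, 2.0))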

golden(func, args=(), brack=None, tol=1.49e-8, full_output=0)


Given a function of one variable and a possible bracketing interval,
return the minimum of the function isolated to a fractional precision of
tol.  A bracketing interval is a triple (a, b, c) where a < b < c and
func(b) < func(a), func(c).  If brack is a pair of numbers, they are
taken as a starting interval for a downhill bracket search (see bracket).

Uses an analog of the bisection method to decrease the bracketed
interval.
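
Example (a sketch, assuming the import name scipy_optimize; here brack
is given as two points, so a downhill bracket search is run first):

    # Golden-section search started from a two-point interval.
    import scipy_optimize as opt

    def f(x):
        return (x - 0.5)**4              # minimum at x = 0.5

    xmin = opt.golden(f, brack=(0.0, 2.0))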

fmin_powell(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None, full_output=0, disp=1)

Minimize a function using modified Powell's method.

Description:

  Uses a modification of Powell's method to find the minimum of a function
  of N variables.

Inputs:

  func -- the Python function or method to be minimized.
  x0 -- the initial guess.
  args -- extra arguments for func.

Outputs: (xopt, {fopt, direc, iter, funcalls, warnflag})

  xopt -- minimizer of function

  fopt  -- value of function at minimum: fopt = func(xopt)
  direc -- current direction set
  iter -- number of iterations
  funcalls -- number of function calls
  warnflag -- Integer warning flag:
              1 : 'Maximum number of function evaluations.'
              2 : 'Maximum number of iterations.'

Additional Inputs:

  xtol -- line-search error tolerance.
  ftol -- acceptable relative error in func(xopt) for convergence.
  maxiter -- the maximum number of iterations to perform.
  maxfun -- the maximum number of function evaluations.
  full_output -- non-zero if fval and warnflag outputs are desired.
  disp -- non-zero to print convergence messages.
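
Example (a sketch, assuming the import name scipy_optimize and that the
full_output tuple is ordered as documented above):

    # Powell's method needs only function values (no gradient).
    import scipy_optimize as opt

    x0 = [0.0, 0.0, 0.0]
    xopt, fopt, direc, niter, funcalls, warnflag = opt.fmin_powell(
        opt.rosen, x0, full_output=1, disp=0)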