Python:Extrema

Revision as of 17:42, 26 March 2019

This page discusses two different ways of getting Python to find the minimum of a function (versus a data set). Assuming the scipy.optimize module is loaded as opt, we will look at opt.fminbound and opt.fmin. The fminbound command can find a single independent value that will minimize a one-dimensional function over a specific domain. The fmin command can find a single array of values that will minimize a multi-dimensional function given some initial guess.

Preamble

The examples below assume

import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt

has run.

fminbound

The fminbound command can find a single value of a single variable that minimizes a function in a bounded interval. The function requires three arguments:

  • The function to minimize,
  • The left side of the boundary, and
  • The right side of the boundary,

and, by default, returns the value of the variable that minimizes the function on the interval. If the kwarg full_output is True, then the function returns four pieces of information:

  • The value of the variable that minimizes the function on the interval,
  • The value of the function at the minimum,
  • A flag that states if minimization worked (0) or not (1), and
  • How many times the algorithm had to call the function to find the minimum value.

The function can be defined any way Python functions can be defined, including using a lambda function in the first argument. For example, to find the minimum value of \(\cos(x)\) for values of \(x\) between 0 and 6, the following will work:

def fun(x):
    return np.cos(x)

out1 = opt.fminbound(fun, 0, 6)
out4 = opt.fminbound(fun, 0, 6, full_output=True)

and the outputs are:

In [1]: out1
Out[1]: 3.1415926891518526

In [2]: out4
Out[2]: (3.1415926891518526, -0.9999999999999993, 0, 9)
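
If you use full_output=True, you may find it convenient to store the four outputs in separately named variables rather than a single tuple. A minimal sketch (the variable names below are just for illustration):

# Unpack the four outputs listed above; variable names are illustrative
x_min, f_min, err_flag, num_calls = opt.fminbound(fun, 0, 6, full_output=True)

after which x_min holds the minimizing value of the variable and f_min holds the value of the function at that point.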

Using lambda functions, you can skip the formal definition of the function and go straight for:

out1a = opt.fminbound(lambda v: np.cos(v), 0, 6)
out4a = opt.fminbound(lambda v: np.cos(v), 0, 6, full_output=True)

which produces the same values.

If there are multiple local minima, the algorithm may fail to reach the "most minimum" minimum. For instance, the result of:

def fun2(x):
    return np.cos(x)+x/20

test1 = opt.fminbound(fun2, 0, 40, full_output=True)

is

In [1]: test1
Out[1]: (9.374756357078237, -0.5300113625734499, 0, 12)

even though there is a "more minimum" minimum in that domain. Specifically, note:

test2 = opt.fminbound(fun2, 0, 6, full_output=True)

yields:

In [2]: test2
Out[2]: (3.0915714695721217, -0.8441706279326544, 0, 9)

which is the actual minimum value of the function over the domain [0, 40].
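
If you are not sure where the local minima are in the first place, one option is to plot the function over the full domain and then choose bounds that bracket the dip you actually want. A minimal sketch, using the imports from the Preamble:

# Plot fun2 over [0, 40] to see where its local minima are before
# picking the bounds to hand to fminbound
x = np.linspace(0, 40, 1000)
plt.figure()
plt.plot(x, fun2(x))
plt.xlabel('x')
plt.ylabel('fun2(x)')
plt.grid(True)
plt.show()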

fmin

If you have a multi-variable function, you can solve for the minimum using the unbounded minimization function fmin. The function needs all your variables to be contained in a single list or array, however. For example, if you are trying to find the minimum of the surface:

\( f(x, y)=2+x-y+2x^2+2xy+y^2 \)

you can define the function in terms of two variables:

def f(x, y):
    return 2 + x - y + 2*x**2 + 2*x*y + y**2

but when you want to find the minimum, you need to give fmin a function of a single variable that contains multiple parts, along with an initial guess that has the same number of parts. For example:

min_loc = opt.fmin(lambda vec: f(vec[0], vec[1]), [0.5, 2.0])

looks for the minimum value of the function by starting at [0.5, 2.0]. It will return an array with two numbers in it. Given that, to get the value of the function at that location, you will need to deal out the values in the array using the * operator:

min_val = f(*min_loc)
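
For this particular surface you can check the numerical answer by hand: setting the partial derivatives \(1+4x+2y\) and \(-1+2x+2y\) to zero gives \(x=-1\), \(y=1.5\), where the function value is 0.75. A minimal sketch of that comparison (the tolerance below is just for illustration):

# Compare the numerical result with the analytic minimum at (-1, 1.5);
# the tolerance here is illustrative, not a scipy default
print(min_loc)                                   # approximately [-1.0, 1.5]
print(min_val)                                   # approximately 0.75
print(np.allclose(min_loc, [-1.0, 1.5], atol=1e-3))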