I am using the Scipy optimization module, specifically fmin_tnc and fmin_l_bfgs_b. However, I am receiving the message "IndexError: invalid index to scalar variable" when using either one.
What causes this error, and what does the message mean?
My practice code:
import numpy as np
import scipy.optimize as opt

def f01(para):
    para1, para2 = para
    return 1 + (para1 - 1)**2 + (para2 - 2)**2

para0 = np.array([10, 10])
mybounds = [(-40, 30), (-20, 15)]
opt.fmin_l_bfgs_b(f01, para0, bounds=mybounds)
Which returns:
Traceback (most recent call last):
  File "C:\Python27\mystuff\practice_optimize01.py", line 78, in <module>
    opt.fmin_l_bfgs_b(f01, para0, bounds = mybounds )
  File "C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py", line 174, in fmin_l_bfgs_b
    **opts)
  File "C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py", line 294, in _minimize_lbfgsb
    f, g = func_and_grad(x)
  File "C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py", line 249, in func_and_grad
    f = fun(x, *args)
  File "C:\Python27\lib\site-packages\scipy\optimize\optimize.py", line 55, in __call__
    self.jac = fg[1]
IndexError: invalid index to scalar variable.
Python 2.7.3, 32-bit. Numpy 1.6.2. Scipy 0.11.0b1. Windows XP and Vista.
fmin_l_bfgs_b expects your objective function to return both the function value and the gradient; yours returns only the function value, so the optimizer's attempt to index into the gradient (fg[1]) hits a scalar and raises the IndexError.
If you only return the function value and don't provide a gradient, then you need to set approx_grad=True so that fmin_l_bfgs_b uses a numerical approximation to it.
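For example, the code from the question runs without the error once approx_grad=True is passed (a minimal sketch based on the question's own function and bounds):

```python
import numpy as np
import scipy.optimize as opt

def f01(para):
    para1, para2 = para
    return 1 + (para1 - 1)**2 + (para2 - 2)**2

para0 = np.array([10.0, 10.0])
mybounds = [(-40, 30), (-20, 15)]

# approx_grad=True tells fmin_l_bfgs_b to estimate the gradient by
# finite differences, since f01 returns only the function value.
x, fmin, info = opt.fmin_l_bfgs_b(f01, para0, bounds=mybounds,
                                  approx_grad=True)
```

Here x converges to the minimizer (1, 2), which lies inside the given bounds.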
See the description of the options in the docstring.
From my reading of the documentation, fmin_tnc follows the same pattern, so it fails in the same way on your code.