I have a Fortran function that I want to minimize with SciPy, wrapping the function with ctypes. Is this possible? Perhaps I've done something wrong in my implementation. For example, assume I have:
module cost_fn
    use iso_c_binding, only: c_float
    implicit none
contains
    function sin_2_cos(x, y) bind(c)
        real(c_float) :: x, y, sin_2_cos
        sin_2_cos = sin(x)**2 * cos(y)
    end function sin_2_cos
end module cost_fn
that I compile with:
gfortran -fPIC -shared -g -o cost.so cost.f90
and then try to find a (local) minimum with:
#!/usr/bin/env python

from ctypes import *
import numpy as np
import scipy.optimize as sopt

cost = cdll.LoadLibrary('./cost.so')
cost.sin_2_cos.argtypes = [POINTER(c_float), POINTER(c_float)]
cost.sin_2_cos.restype = c_float

def f2(x):
    return cost.sin_2_cos(c_float(x[0]), c_float(x[1]))
    # return np.sin(x[0])**2 * np.cos(x[1])

# print(f2([1, 1]))
# print(f2([0.5 * np.pi, np.pi]))

print(sopt.minimize(f2, (1.0, 1.0), options={'disp': True}, tol=1e-8))
I expect a local minimum f2(pi/2, pi) = -1. When I call f2 with the cost.sin_2_cos return value, the "minimum" is just reported at the initial guess (1, 1). If I call f2 with the "Python" return value instead, optimize finds the correct minimum.
I've tried redefining sin_2_cos to take a dimension(2) array input, but saw similar behavior. Perhaps I need to pass sin_2_cos directly to minimize (but then how would I specify c_float for the arguments)? Any thoughts are appreciated!
Edit: To a comment below, note that the two commented print(f2(...)) lines produce the expected values. Thus, I believe the Fortran function is being properly called through the Python f2 function.
Your Fortran code uses single precision floating point values, i.e. 32-bit floats. (In C, float is single precision and double is double precision.) The default method used by scipy.optimize.minimize() approximates the derivatives of the function with finite differences: to estimate the derivative f'(x0), it computes (f(x0 + eps) - f(x0)) / eps, where eps is the step size. The default step size is approximately 1.49e-08. Unfortunately, that value is smaller than the spacing of single precision values around 1, so when the minimizer adds eps to 1, the result is still 1. The function is then evaluated at the same point, the finite difference comes out as 0, and since a zero gradient is a condition for a minimum, the solver decides it is done.
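You can see the truncation directly with NumPy. A minimal check (1.49e-08 is the default step mentioned above):

import numpy as np

eps = 1.49e-08                        # default finite-difference step
x0 = np.float32(1.0)                  # single precision, like the Fortran code
print(x0 + np.float32(eps) == x0)     # True: the step is swallowed in float32
print(np.float64(1.0) + eps == 1.0)   # False: double precision resolves it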
The solver option eps sets the finite difference step size. Set it to something larger than 1.19e-7. For example, if I use options={'disp': True, 'eps': 2e-6}, I get a solution.
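Concretely, only the options dictionary in the call from the question changes:

print(sopt.minimize(f2, (1.0, 1.0), options={'disp': True, 'eps': 2e-6}, tol=1e-8))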
By the way, you can find that value, 1.19e-7, by using numpy.finfo():

In [4]: np.finfo(np.float32(1.0)).eps
Out[4]: 1.1920929e-07
You'll also get a solution if you use the option method='nelder-mead' in the minimize() call. That method does not rely on finite differences.
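For example, with f2 exactly as defined in the question:

result = sopt.minimize(f2, (1.0, 1.0), method='nelder-mead', options={'disp': True})
print(result.x)  # should be close to [pi/2, pi]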
Finally, you could convert the Fortran code to use double precision:
module cost_fn
    use iso_c_binding, only: c_double
    implicit none
contains
    function sin_2_cos(x, y) bind(c)
        real(c_double) :: x, y, sin_2_cos
        sin_2_cos = sin(x)**2 * cos(y)
    end function sin_2_cos
end module cost_fn
Then change the Python code to use ctypes.c_double instead of ctypes.c_float.
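Here is a sketch of the corresponding Python script, the same structure as in the question with c_float replaced by c_double:

from ctypes import cdll, POINTER, c_double
import scipy.optimize as sopt

cost = cdll.LoadLibrary('./cost.so')
cost.sin_2_cos.argtypes = [POINTER(c_double), POINTER(c_double)]
cost.sin_2_cos.restype = c_double

def f2(x):
    # ctypes passes the c_double temporaries by reference to match
    # the POINTER(c_double) argtypes
    return cost.sin_2_cos(c_double(x[0]), c_double(x[1]))

print(sopt.minimize(f2, (1.0, 1.0), options={'disp': True}, tol=1e-8))

With double precision the default finite-difference step resolves correctly, so no eps option is needed.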