
MATLAB's 'fminsearch' differs from Octave's 'fmincg'

I am trying to get consistent answers for a simple optimization problem, between two functions in MATLAB and Octave. Here is my code:

  options = optimset('MaxIter', 500, 'Display', 'iter', 'MaxFunEvals', 1000);

  objFunc = @(t) lrCostFunction(t, X, y);

  [result1] = fminsearch(objFunc, theta, options);
  [result2] = fmincg(objFunc, theta, options);

(Bear in mind that X, y, and theta are defined earlier and are correct.) The problem is the following: when I run the above code in MATLAB using fmincg (with fminsearch commented out), I get the correct answer.

However, if I comment out fmincg and run fminsearch instead, I get no convergence whatsoever. In fact, the output looks like this:

 Iteration   Func-count     min f(x)         Procedure
   491          893         0.692991         reflect
   492          894         0.692991         reflect
   493          895         0.692991         reflect
   494          896         0.692991         reflect
   495          897         0.692991         reflect
   496          898         0.692991         reflect
   497          899         0.692991         reflect
   498          900         0.692991         reflect
   499          901         0.692991         reflect
   500          902         0.692991         reflect



Exiting: Maximum number of iterations has been exceeded
         - increase MaxIter option.
         Current function value: 0.692991 

Increasing the number of iterations doesn't do jack. In contrast, when using fmincg, I see it converging, and it finally gives me the correct result:

Iteration     1 | Cost: 2.802128e-001
Iteration     2 | Cost: 9.454389e-002
Iteration     3 | Cost: 5.704641e-002
Iteration     4 | Cost: 4.688190e-002
Iteration     5 | Cost: 3.759021e-002
Iteration     6 | Cost: 3.522008e-002
Iteration     7 | Cost: 3.234531e-002
Iteration     8 | Cost: 3.145034e-002
Iteration     9 | Cost: 3.008919e-002
Iteration    10 | Cost: 2.994639e-002
Iteration    11 | Cost: 2.678528e-002
Iteration    12 | Cost: 2.660323e-002
Iteration    13 | Cost: 2.493301e-002

.
.
.


Iteration   493 | Cost: 1.311466e-002
Iteration   494 | Cost: 1.311466e-002
Iteration   495 | Cost: 1.311466e-002
Iteration   496 | Cost: 1.311466e-002
Iteration   497 | Cost: 1.311466e-002
Iteration   498 | Cost: 1.311466e-002
Iteration   499 | Cost: 1.311466e-002
Iteration   500 | Cost: 1.311466e-002

This gives the correct answer.

So what gives? Why is fminsearch not working in this minimization case?

Additional context:

1) fmincg is an Octave function, by the way, but a quick Google search also turns up an implementation; my MATLAB can call either.

2) My problem has a convex error surface, and that surface is everywhere differentiable.

3) I only have access to fminsearch and fminbnd. I can't use fminbnd since this problem is multivariate, not univariate, so that leaves fminsearch. Thanks!
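For context, lrCostFunction here is assumed to be a logistic-regression cost function that returns both the cost and its gradient (fmincg uses the gradient; fminsearch requests only the first output). A hypothetical, unregularized sketch of such a function:

```matlab
function [J, grad] = lrCostFunction(theta, X, y)
  % Hypothetical stand-in for the real lrCostFunction:
  % logistic-regression cost J and its gradient grad.
  m = length(y);                        % number of training examples
  h = 1 ./ (1 + exp(-X * theta));       % sigmoid hypothesis
  J = (1/m) * sum(-y .* log(h) - (1 - y) .* log(1 - h));
  grad = (1/m) * (X' * (h - y));        % gradient w.r.t. theta
end
```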

Asked May 27 '12 by Spacey



2 Answers

I assume that fmincg implements a conjugate-gradient type optimization, while fminsearch is a derivative-free (Nelder-Mead simplex) method. So why do you expect them to give the same results? They are completely different algorithms.

That said, I would expect fminsearch to find the global minimum of a convex cost function. At least, that has been my experience so far.

The first line of fminsearch's output suggests that objFunc(theta) is ~0.69, but this value is very different from the cost values in fmincg's output. So I would look for possible bugs outside fminsearch. Make sure you are giving the same cost function and initial point to both algorithms.
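A minimal check along these lines (assuming X, y, and theta from the question are in the workspace) would confirm that both solvers start from the same cost:

```matlab
% Evaluate the cost once at the shared starting point. Both fminsearch
% and fmincg should report this same value at their first iteration;
% if they don't, the two calls are not seeing the same objFunc or theta.
objFunc = @(t) lrCostFunction(t, X, y);
J0 = objFunc(theta);
fprintf('Cost at initial theta: %e\n', J0);
```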

Answered Sep 18 '22 by emrea


This is a problem I've noticed sometimes with this algorithm. It may not be the answer you are looking for, but what seems to work for me in these cases is to modify the tolerance values at which it terminates. What I see is an oscillation between two points that give equal results. I know this happens in LabVIEW, and I can only speculate that it happens in MATLAB too.

Unless I see your data, I can't comment further, but that is what I suggest.

Note: by increasing the tolerance, the goal is to catch the algorithm before it reaches that state. The result becomes less precise, but usually the number of significant digits you need is rather small anyway.
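A sketch of that suggestion, using fminsearch's standard TolX and TolFun options (both default to 1e-4; the 1e-3 below is only an illustrative, loosened value to tune for your problem):

```matlab
% Loosen the termination tolerances so fminsearch stops before it
% starts oscillating between two equally good simplex points.
options = optimset('MaxIter', 500, 'MaxFunEvals', 1000, ...
                   'TolX', 1e-3, 'TolFun', 1e-3);
[result, fval] = fminsearch(objFunc, theta, options);
```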

Answered Sep 18 '22 by Rasman