When I run something like:

import numpy
from sklearn import linear_model

A = ...  # something
b = ...  # something
clf = linear_model.Lasso(alpha=0.015, fit_intercept=False, tol=1e-14,
                         max_iter=10000000000000, positive=True)
clf.fit(A, b)
I get the error:
/usr/local/lib/python2.7/dist-packages/scikit_learn-0.14.1-py2.7-linux-x86_64.egg/sklearn/linear_model/coordinate_descent.py:418: UserWarning: Objective did not converge. You might want to increase the number of iterations
' to increase the number of iterations')
The interesting thing is that A is never rank-deficient (I think).
Lasso on sklearn does not converge.
Lasso (Least Absolute Shrinkage and Selection Operator) is a regularisation technique that adds an L1 penalty, the sum of the absolute values of the coefficients, to the least-squares loss. This penalty shrinks coefficient values towards zero, which encourages simple, sparse models (i.e. models with fewer parameters).
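To make the objective and the shrinkage concrete, here is a minimal numpy sketch of the Lasso objective (in sklearn's 1/(2n) scaling) and a plain cyclic coordinate-descent solver built on the soft-thresholding operator. The function names are mine, not sklearn's; this is an illustration, not the library's implementation.

```python
import numpy as np

def soft_threshold(x, t):
    # Soft-thresholding operator: the proximal map of the L1 penalty.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_objective(A, b, w, alpha):
    # (1 / (2n)) * ||A w - b||^2 + alpha * ||w||_1, sklearn's scaling.
    n = A.shape[0]
    return 0.5 / n * np.sum((A @ w - b) ** 2) + alpha * np.sum(np.abs(w))

def lasso_cd(A, b, alpha, tol=1e-4, max_iter=1000):
    # Cyclic coordinate descent: update one coefficient at a time,
    # stopping when the largest coordinate update falls below tol.
    n, p = A.shape
    w = np.zeros(p)
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(max_iter):
        max_change = 0.0
        for j in range(p):
            # Residual with feature j's contribution added back in.
            r = b - A @ w + A[:, j] * w[j]
            rho = A[:, j] @ r
            w_new = soft_threshold(rho / n, alpha) / (col_sq[j] / n)
            max_change = max(max_change, abs(w_new - w[j]))
            w[j] = w_new
        if max_change < tol:
            break
    return w
```

The soft-thresholding step is what sets small coefficients exactly to zero, which is where the sparsity comes from.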
Try increasing tol.
From the documentation:
tol : float, optional
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
The default for tol is 0.0001 on my version of scikit-learn. I assume your tolerance is so small that the optimization never drives the dual gap below it.
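The effect of tol can be demonstrated directly. The sketch below, which assumes scikit-learn is installed and uses toy data standing in for the question's A and b, fits the same model twice: once with an extreme tol=1e-14 and a small iteration budget (which should trigger the ConvergenceWarning), and once with the default tol=1e-4 (which should converge quietly).

```python
import warnings

import numpy as np
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import Lasso

# Toy stand-in data for the question's A and b (hypothetical).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))
b = A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(100)

# Extremely small tol with few iterations: the dual gap never gets
# below the threshold, so sklearn emits a ConvergenceWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    Lasso(alpha=0.015, fit_intercept=False, tol=1e-14, max_iter=5).fit(A, b)
strict = any(issubclass(w.category, ConvergenceWarning) for w in caught)

# Default tol (1e-4) with a reasonable budget: converges, no warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    Lasso(alpha=0.015, fit_intercept=False, tol=1e-4, max_iter=1000).fit(A, b)
relaxed = any(issubclass(w.category, ConvergenceWarning) for w in caught)
```

On this toy problem the first fit warns and the second does not, which is the behavior the answer describes.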
The only thing that SOMETIMES helped me to get rid of the warning was increasing the number of iterations significantly (with a significant increase of training time).
Increasing the tolerance always led to the same warnings, but with larger values in them, and not to getting rid of the warnings. Not sure why.
As an important analytical side note, I interpret getting this warning initially when using Lasso regression as a bad sign, regardless of what happens next.
For me it practically always occurred in the situation when the model was over-fitting, meaning that it performed well on the full training set itself, but then poorly during cross-validation and testing.
Regardless of whether I had suppressed the warning (there is a way) or had gotten rid of it "naturally" by increasing the number of iterations, I almost always had to go back and simplify the set of features for Lasso to be effective (and in some cases to abandon Lasso altogether in favor of a different model).
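For completeness, the suppression the answer alludes to can be done with Python's standard warnings filter. A minimal sketch, again on hypothetical stand-in data, filtering out only sklearn's ConvergenceWarning so other warnings stay visible:

```python
import warnings

import numpy as np
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import Lasso

# Hypothetical stand-in data for the question's A and b.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))
b = A @ rng.standard_normal(10)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Ignore only the convergence warning; other warnings still surface.
    warnings.filterwarnings("ignore", category=ConvergenceWarning)
    # Deliberately non-converging settings: tiny tol, tiny budget.
    Lasso(alpha=0.015, fit_intercept=False, tol=1e-14, max_iter=5).fit(A, b)
silenced = len(caught) == 0
```

As the answer notes, silencing the warning does not fix the underlying problem; it only hides the symptom.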