I am working on using hyperopt to tune my ML model, but I am having trouble using qloguniform as the search space. I am using the example from the official wiki and changed only the search space.
import pickle
import time

import pandas as pd
import numpy as np
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials

def objective(x):
    return {
        'loss': x ** 2,
        'status': STATUS_OK,
        # -- store other results like this
        'eval_time': time.time(),
        'other_stuff': {'type': None, 'value': [0, 1, 2]},
        # -- attachments are handled differently
        'attachments':
            {'time_module': pickle.dumps(time.time)}
    }

trials = Trials()
best = fmin(objective,
            space=hp.qloguniform('x', np.log(0.001), np.log(0.1), np.log(0.001)),
            algo=tpe.suggest,
            max_evals=100,
            trials=trials)

pd.DataFrame(trials.trials)
But I am getting the following error:
ValueError: ('negative arg to lognormal_cdf', array([-3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764]))
I have tried it without the log transform, as below, but then the output values turn out to be exp-transformed (e.g. 1.017, 1.0008, 1.02456), which is not what I want, even though it is consistent with the documentation:
hp.qloguniform('x', 0.001,0.1, 0.001)
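Sampling this space directly (a quick check using pyll.stochastic.sample; the expected range in the comments follows from the formula in the docs) shows the same behaviour:
from hyperopt import pyll, hp

space = hp.qloguniform('x', 0.001, 0.1, 0.001)
samples = [pyll.stochastic.sample(space) for _ in range(5)]
# Each sample lies in [exp(0.001), exp(0.1)] ~= [1.001, 1.105],
# i.e. the bounds are exp-transformed, matching
# round(exp(uniform(low, high)) / q) * q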
Thanks
The issue seems to be in the last argument to hp.qloguniform, q, and in how tpe.suggest uses it.
First, let's discuss q. According to the documentation:
hp.qloguniform(label, low, high, q)
round(exp(uniform(low, high)) / q) * q
Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.
q here is a "quantizer" that limits the outputs from the defined space to multiples of q. For example, the following is what happens inside qloguniform:
from hyperopt import pyll, hp
import numpy as np

n_samples = 10

space = hp.loguniform('x', np.log(0.001), np.log(0.1))
evaluated = [pyll.stochastic.sample(space) for _ in range(n_samples)]
# Output: [0.04645754, 0.0083128 , 0.04931957, 0.09468335, 0.00660693,
#          0.00282584, 0.01877195, 0.02958924, 0.00568617, 0.00102252]

q = 0.005
qevaluated = np.round(np.array(evaluated) / q) * q
# Output: [0.045, 0.01 , 0.05 , 0.095, 0.005, 0.005, 0.02 , 0.03 , 0.005, 0.   ]
Compare evaluated and qevaluated here: qevaluated contains only multiples of q, or, put differently, it is quantized in "intervals" (or steps) of q. You can try changing the q value to learn more.
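For instance, reusing the evaluated samples from above with a coarser step (0.01 is just an illustrative choice):
q = 0.01
qevaluated = np.round(np.array(evaluated) / q) * q
# Output: [0.05, 0.01, 0.05, 0.09, 0.01, 0.  , 0.02, 0.03, 0.01, 0.  ]
# Every output is now a multiple of 0.01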
The q you defined in the question is extremely large in magnitude (and negative) compared to the range of the generated samples (0.001 to 0.1):
np.log(0.001)
# Output: -6.907755278982137
So all of the quantized values here will be 0:
q = np.log(0.001)
qevaluated = np.round(np.array(evaluated)/q) * q
# Output: [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]
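The same thing can be seen by sampling directly from the exact space defined in the question (a quick sanity check, assuming the same quantization formula as above; the sampler itself does not complain, it just quantizes everything to 0):
space = hp.qloguniform('x', np.log(0.001), np.log(0.1), np.log(0.001))
samples = [pyll.stochastic.sample(space) for _ in range(5)]
# Output: [0.0, 0.0, 0.0, 0.0, 0.0]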
Now coming to tpe.suggest (Section 4 of this paper): TPE uses a tree of different estimators to optimize the search process, during which it divides the search space depending on the generator of the space (in this case qloguniform). See the code here for details. To divide the space into multiple parts, it uses q.
But since all the points in your space collapse to 0.0 (as described above), this negative q generates invalid bounds for lognormal_cdf, which is not acceptable, hence the error.
So, long story short, your usage of q is not proper. As you already said in the comment:

Also, the q value should not be used inside the log uniform/log normal random sampling, according to round(exp(uniform(low, high)) / q) * q
So you should only supply values of q that are valid for your required space. Here, since you want to generate values between 0.001 and 0.1, the q value should be comparable to them.
I agree that you supply np.log(0.001) and np.log(0.1) inside qloguniform, but that is so that the output values are between 0.001 and 0.1. So don't use np.log for q; q should be chosen on the scale of the generated values.
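For example, one minimal fix (a sketch: keeping the log-transformed bounds and the objective from the question, but choosing a plain 0.001 step for q, on the output scale) would be:
trials = Trials()
best = fmin(objective,
            # bounds stay log-transformed, but q is on the output scale
            space=hp.qloguniform('x', np.log(0.001), np.log(0.1), 0.001),
            algo=tpe.suggest,
            max_evals=100,
            trials=trials)
# best['x'] is now a multiple of 0.001 within [0.001, 0.1]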