
Isolation Forest parameter tuning with GridSearchCV

I have multivariate time series data and want to detect anomalies with the Isolation Forest algorithm, using GridSearchCV to find the best parameters. The relevant code snippets are below.

The input data set is loaded with the snippet below.

import pandas as pd

df = pd.read_csv("train.csv")
y_train = df[['Anomaly']]  # the 'Anomaly' column holds the labels
df.drop(['dataTimestamp', 'Anomaly'], inplace=True, axis=1)
X_train = df

Define the Isolation Forest and the parameters for the grid search:

from sklearn.ensemble import IsolationForest
from sklearn.metrics import make_scorer, f1_score
from sklearn import model_selection

clf = IsolationForest(random_state=47, behaviour='new', score="accuracy")
param_grid = {'n_estimators': list(range(100, 800, 5)), 'max_samples': list(range(100, 500, 5)), 'contamination': [0.1, 0.2, 0.3, 0.4, 0.5], 'max_features': [5, 10, 15], 'bootstrap': [True, False], 'n_jobs': [5, 10, 20, 30]}

f1sc = make_scorer(f1_score)
grid_dt_estimator = model_selection.GridSearchCV(clf, param_grid, scoring=f1sc, refit=True, cv=10, return_train_score=True)
grid_dt_estimator.fit(X_train, y_train)

After executing fit, I got the error below.

ValueError: Target is multiclass but average='binary'. Please choose another average setting.

Can someone explain what this error means? I tried average='weight', but still no luck. Am I doing something wrong here? Please also let me know how to get the F-score.

Anantha asked May 10 '19

3 Answers

You get this error because you didn't set the average parameter when transforming f1_score into a scorer. In fact, as detailed in the documentation:

average : string, [None, ‘binary’ (default), ‘micro’, ‘macro’, ‘samples’, ‘weighted’] This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned.
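
To make that concrete, here is a small sketch (with toy multiclass labels made up for illustration) of what f1_score returns with and without an explicit average:

from sklearn.metrics import f1_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

f1_score(y_true, y_pred, average=None)     # one F1 score per class
f1_score(y_true, y_pred, average='micro')  # single aggregated F1 score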

Without an explicit average, f1_score produces one score per class rather than the single measure GridSearchCV needs, hence the error. The solution is to pass one of the possible values of the average parameter to f1_score, depending on your needs. I therefore refactored the code you provided into a possible solution:

from sklearn.ensemble import IsolationForest
from sklearn.metrics import make_scorer, f1_score
from sklearn import model_selection
from sklearn.datasets import make_classification

X_train, y_train = make_classification(n_samples=500, 
                                       n_classes=2)

clf = IsolationForest(random_state=47, behaviour='new')

param_grid = {'n_estimators': list(range(100, 800, 5)), 
              'max_samples': list(range(100, 500, 5)), 
              'contamination': [0.1, 0.2, 0.3, 0.4, 0.5], 
              'max_features': [5,10,15], 
              'bootstrap': [True, False], 
              'n_jobs': [5, 10, 20, 30]}

# pass average as a keyword to make_scorer; it is forwarded to f1_score at scoring time
f1sc = make_scorer(f1_score, average='micro')

grid_dt_estimator = model_selection.GridSearchCV(clf, 
                                                 param_grid,
                                                 scoring=f1sc, 
                                                 refit=True,
                                                 cv=10, 
                                                 return_train_score=True)
grid_dt_estimator.fit(X_train, y_train)
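
This also covers the second part of the question (how to get the F-score): once the search has finished, the best parameter combination and its mean cross-validated F1 can be read from the standard GridSearchCV attributes:

print(grid_dt_estimator.best_params_)  # parameter combination with the highest mean F1
print(grid_dt_estimator.best_score_)   # mean cross-validated F1 of that combination
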
Luca Massaron answered Sep 24 '22


Update the make_scorer call to this to get it working:

make_scorer(f1_score, average='micro')
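
Note the form: the metric function and its keyword argument are passed to make_scorer separately. Calling f1_score(average='micro') directly would invoke the metric immediately, with no predictions to score, whereas make_scorer stores the keyword and forwards it to f1_score each time the scorer is evaluated. A minimal sketch of the corrected scorer plugged into the search (names reused from the question, assumed already defined):

f1sc = make_scorer(f1_score, average='micro')
grid_dt_estimator = model_selection.GridSearchCV(clf, param_grid, scoring=f1sc, refit=True, cv=10)
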
Gayathri Manohar answered Sep 22 '22


Not all of the parameters you are tuning need to go into the grid.
For example:
contamination is the expected proportion of anomalies; you can determine the best value after fitting the model by tuning a threshold on model.score_samples (a rough sketch follows below).

n_jobs is just the number of CPU cores used for fitting, not a model hyperparameter, so there is no point searching over it.
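
A rough sketch of the contamination idea (the 5% cut-off and variable names are placeholders, not from the answer):

import numpy as np
from sklearn.ensemble import IsolationForest

iso = IsolationForest(n_estimators=100, random_state=47).fit(X_train)

# score_samples: the lower the score, the more anomalous the point
scores = iso.score_samples(X_train)

# instead of grid-searching 'contamination', pick a threshold yourself,
# e.g. flag the lowest 5% of scores as anomalies (placeholder value)
threshold = np.percentile(scores, 5)
y_pred = (scores < threshold).astype(int)  # 1 = anomaly, 0 = normal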

Joey Gao answered Sep 22 '22