Using Python scikit-learn's SVM: after running clf.fit(X, Y), you get the support vectors. Could I load these support vectors directly (passing them as a parameter) when instantiating an svm.SVC object? That way I would not need to run the fit() method each time before making predictions.
From the scikit-learn manual: http://scikit-learn.org/stable/modules/model_persistence.html

1.2.4 Model persistence

It is possible to save a model in scikit-learn by using Python's built-in persistence model, namely pickle.
>>> from sklearn import svm
>>> from sklearn import datasets
>>> clf = svm.SVC()
>>> iris = datasets.load_iris()
>>> X, y = iris.data, iris.target
>>> clf.fit(X, y)
SVC(kernel='rbf', C=1.0, probability=False, degree=3, coef0=0.0, eps=0.001,
cache_size=100.0, shrinking=True, gamma=0.00666666666667)
>>> import pickle
>>> s = pickle.dumps(clf)
>>> clf2 = pickle.loads(s)
>>> clf2.predict(X[0:1])
array([0])
>>> y[0]
0
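To connect this back to the question: the pickled object carries its entire fitted state, support vectors included, so the restored classifier can predict immediately without another fit() call. A minimal sketch using the same iris data:

```python
import pickle

import numpy as np
from sklearn import datasets, svm

# Fit once, then serialize the whole fitted estimator.
iris = datasets.load_iris()
X, y = iris.data, iris.target
clf = svm.SVC().fit(X, y)

# Round-trip through pickle: no retraining happens here.
clf2 = pickle.loads(pickle.dumps(clf))

# The fitted state survives the round trip: identical support vectors,
# so clf2 predicts without fit() ever being called on it.
assert np.array_equal(clf.support_vectors_, clf2.support_vectors_)
print(clf2.predict(X[:1]))
```

This is the supported way to "load the support vectors": you persist the whole fitted estimator rather than passing the vectors into the SVC constructor, which has no parameter for them.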
In the specific case of the scikit, it may be more interesting to use joblib’s replacement of pickle, which is more efficient on big data, but can only pickle to the disk and not to a string:
>>> from sklearn.externals import joblib
>>> joblib.dump(clf, 'filename.pkl')
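Note that in recent scikit-learn releases `sklearn.externals.joblib` has been removed; you import the standalone joblib package instead. A sketch of saving and reloading with the current import (the filename here is arbitrary):

```python
import joblib
from sklearn import datasets, svm

iris = datasets.load_iris()
X, y = iris.data, iris.target
clf = svm.SVC().fit(X, y)

joblib.dump(clf, "svc_model.pkl")           # serialize the fitted model to disk
clf_loaded = joblib.load("svc_model.pkl")   # ready to predict, no fit() needed
print(clf_loaded.predict(X[:1]))
```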