I am using sklearn's Pipeline to classify text.
In this example Pipeline I have a TfidfVectorizer and some custom features wrapped with FeatureUnion, plus a classifier, as the Pipeline steps. I then fit the training data and do the prediction:
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

X = ['I am a sentence', 'an example']
Y = [1, 2]
X_dev = ['another sentence']

# load custom features and FeatureUnion with Vectorizer
features = []
measure_features = MeasureFeatures()  # this class includes my custom features
features.append(('measure_features', measure_features))

countVecWord = TfidfVectorizer(ngram_range=(1, 3), max_features=4000)
features.append(('ngram', countVecWord))

all_features = FeatureUnion(features)

# classifier
LinearSVC1 = LinearSVC(tol=1e-4, C=0.1)

pipeline = Pipeline([
    ('all', all_features),
    ('clf', LinearSVC1),
])

pipeline.fit(X, Y)
y_pred = pipeline.predict(X_dev)
# etc.
The above code works just fine, but there is a twist: I want to do part-of-speech tagging on the text and use a different vectorizer on the tagged text.
X = ['I am a sentence', 'an example']
X_tagged = do_tagging(X)
# X_tagged = ['PP AUX DET NN', 'DET NN']
Y = [1, 2]
X_dev = ['another sentence']
X_dev_tagged = do_tagging(X_dev)

# load custom features and FeatureUnion with Vectorizer
features = []
measure_features = MeasureFeatures()  # this class includes my custom features
features.append(('measure_features', measure_features))

countVecWord = TfidfVectorizer(ngram_range=(1, 3), max_features=4000)
# new POS Vectorizer
countVecPOS = TfidfVectorizer(ngram_range=(1, 4), max_features=2000)

features.append(('ngram', countVecWord))
features.append(('pos_ngram', countVecPOS))

all_features = FeatureUnion(features)

# classifier
LinearSVC1 = LinearSVC(tol=1e-4, C=0.1)

pipeline = Pipeline([
    ('all', all_features),
    ('clf', LinearSVC1),
])

# how do I fit both X and X_tagged here?
# how can the different vectorizers get either X or X_tagged?
pipeline.fit(X, Y)
y_pred = pipeline.predict(X_dev)
# etc.
How do I properly fit this kind of data? How can the two vectorizers differentiate between the raw text and the POS-tagged text? What are my options?
I also have custom features: some of them take the raw text and others the POS text.
EDIT: Added MeasureFeatures()
from sklearn.base import BaseEstimator
from sklearn.preprocessing import StandardScaler
import numpy as np

class MeasureFeatures(BaseEstimator):

    def __init__(self):
        pass

    def get_feature_names(self):
        return np.array(['type_token', 'count_nouns'])

    def fit(self, documents, y=None):
        return self

    def transform(self, x_dataset):
        X_type_token = list()
        X_count_nouns = list()

        for sentence in x_dataset:
            # takes raw text and calculates the type/token ratio
            X_type_token.append(type_token_ratio(sentence))
            # takes POS-tagged text and counts the noun tags (NN, NNS, etc.)
            X_count_nouns.append(count_nouns(sentence))

        X = np.array([X_type_token, X_count_nouns]).T
        print X
        print X.shape

        if not hasattr(self, 'scaler'):
            self.scaler = StandardScaler().fit(X)
        return self.scaler.transform(X)
This feature transformer then needs either the tagged text (for the count_nouns() function) or the raw text (for type_token_ratio()).
I think you have to do a FeatureUnion over two transformers (a TF-IDF transformer and a POSTransformer). Of course, you need to define that POSTransformer yourself.
Maybe this article will help you.
Your pipeline might look like this:
pipeline = Pipeline([
    ('features', FeatureUnion([
        ('ngram_tf_idf', Pipeline([
            ('counts_ngram', CountVectorizer()),
            ('tf_idf_ngram', TfidfTransformer())
        ])),
        ('pos_tf_idf', Pipeline([
            ('pos', POSTransformer()),
            ('counts_pos', CountVectorizer()),
            ('tf_idf_pos', TfidfTransformer())
        ])),
        ('measure_features', MeasureFeatures())
    ])),
    ('classifier', LinearSVC())
])
And this assumes that MeasureFeatures and POSTransformer are transformers that conform to the sklearn API.
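For completeness, here is a minimal sketch of what such a POSTransformer could look like. This is an assumption, not code from the question: the `tagger` parameter stands in for the existing do_tagging function, i.e. any callable that maps a list of raw documents to a list of tag strings.

```python
from sklearn.base import BaseEstimator, TransformerMixin

class POSTransformer(BaseEstimator, TransformerMixin):
    """Maps raw documents to their POS-tag strings, so a downstream
    CountVectorizer sees e.g. 'PP AUX DET NN' instead of the words."""

    def __init__(self, tagger=None):
        # `tagger` is any callable that turns a list of raw documents
        # into a list of tag strings (e.g. your do_tagging)
        self.tagger = tagger

    def fit(self, X, y=None):
        # stateless transformer: nothing is learned from the data
        return self

    def transform(self, X):
        return self.tagger(X)
```

With a transformer like this at the top of the 'pos_tf_idf' sub-pipeline, you fit the whole pipeline on the raw X only; the tagging happens inside that branch, so you never have to pass X_tagged separately. The same idea works for the custom features: split MeasureFeatures into two transformers, one computing type_token_ratio() on the raw text and one computing count_nouns() after a POSTransformer.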