I'm self-studying how to use scikit-learn, and I decided to start the second task but with my own corpus. I obtained some bigrams by hand, let's say:
training_data = [[('this', 'is'), ('is', 'a'), ('a', 'text'), 'POS'],
                 [('and', 'one'), ('one', 'more'), 'NEG'],
                 [('and', 'other'), ('one', 'more'), 'NEU']]
I would like to vectorize them into a format that can be fed into one of the classification algorithms provided by scikit-learn (SVC, multinomial naive Bayes, etc.). This is what I tried:
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer(analyzer='word')
X = count_vect.transform(((' '.join(x) for x in sample)
                          for sample in training_data))
print X.toarray()
The problem with this is that I don't know how to treat the labels (i.e. 'POS', 'NEG', 'NEU'). Do I need to "vectorize" the labels too in order to pass training_data to a classification algorithm, or can I just leave them as strings like 'POS'? Another problem is that I'm getting this:
raise ValueError("Vocabulary wasn't fitted or is empty!")
ValueError: Vocabulary wasn't fitted or is empty!
So, how can I vectorize bigrams like those in training_data? I was also reading about DictVectorizer and sklearn-pandas; do you think using them could be a better approach for this task?
It should look like this. The identity preprocessor and tokenizer stop CountVectorizer from lowercasing and splitting your input, so each bigram tuple is counted as a single token:
>>> training_data = [[('this', 'is'), ('is', 'a'),('a', 'text'), 'POS'],
[('and', 'one'), ('one', 'more'), 'NEG'],
[('and', 'other'), ('one', 'more'), 'NEU']]
>>> count_vect = CountVectorizer(preprocessor=lambda x:x,
tokenizer=lambda x:x)
>>> X = count_vect.fit_transform(doc[:-1] for doc in training_data)
>>> print count_vect.vocabulary_
{('and', 'one'): 1, ('a', 'text'): 0, ('is', 'a'): 3, ('and', 'other'): 2, ('this', 'is'): 5, ('one', 'more'): 4}
>>> print X.toarray()
[[1 0 0 1 0 1]
[0 1 0 0 1 0]
[0 0 1 0 1 0]]
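The "Vocabulary wasn't fitted" error in your attempt comes from calling transform before fit (or fit_transform). Once the vectorizer has been fitted as above, you can reuse it on new bigram lists; a minimal sketch (the unseen document below is made up, and bigrams outside the fitted vocabulary are simply ignored):

new_doc = [('this', 'is'), ('one', 'more'), ('totally', 'new')]  # hypothetical unseen sample
X_new = count_vect.transform([new_doc])
print(X_new.toarray())  # with the vocabulary above this gives [[0 0 0 0 1 1]]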
Then put your labels in a target variable:
y = [doc[-1] for doc in training_data] # ['POS', 'NEG', 'NEU']
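To answer the label question: scikit-learn classifiers accept string labels like 'POS'/'NEG'/'NEU' directly, so you can leave y as it is. If you ever need integer codes instead, LabelEncoder can do the mapping; a small sketch:

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
y_encoded = le.fit_transform(y)         # ['POS', 'NEG', 'NEU'] -> array([2, 0, 1])
print(le.classes_)                      # ['NEG' 'NEU' 'POS']
print(le.inverse_transform(y_encoded))  # back to the original string labels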
Now you can train a model:
from sklearn.svm import SVC

model = SVC()
model.fit(X, y)
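To classify a new document, run its bigrams through the same fitted vectorizer before calling predict; a quick sketch (the test document is made up):

test_doc = [('one', 'more'), ('and', 'other')]  # hypothetical unseen sample
X_test = count_vect.transform([test_doc])
print(model.predict(X_test))                    # prints one of the labels from y, e.g. ['NEU']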