I am trying to train a Naive Bayes classifier with positive/negative words extracted from sentences, using the emoticons as labels. For example:
I love this movie :))
I hate when it rains :(
The idea is to extract positive or negative sentences based on the emoticons used, then train a classifier and persist it to a database.
The problem is that I have more than 1 million such sentences, so if I train it word by word, the database will go for a toss. I want to remove all non-relevant words, e.g. 'I', 'this', 'when', 'it', so that the number of database queries I have to make is smaller.
Please help me resolve this issue, or suggest better ways of doing it.
Thank you
There are two common approaches:

1. Use a stop word list to filter out words that carry little content.
2. Run a POS tagger over the sentences and keep only words with informative tags (e.g. adjectives and adverbs).
In both cases, determining which words/POS tags are relevant can be done using a measure such as pointwise mutual information (PMI).
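As a rough sketch of the PMI idea (not the only formulation; this one scores each word against each class by comparing their joint document frequency to what independence would predict):

```python
import math
from collections import Counter

def pmi_scores(docs, labels):
    """Score each (word, class) pair by pointwise mutual information.

    docs: list of token lists; labels: parallel list of class labels.
    Returns {(word, label): PMI}; higher means the word is more
    informative for that class, so low scorers can be dropped.
    """
    n_docs = len(docs)
    word_counts = Counter()    # number of docs containing each word
    label_counts = Counter(labels)
    joint_counts = Counter()   # number of docs containing word AND label
    for tokens, label in zip(docs, labels):
        for w in set(tokens):  # count each word once per document
            word_counts[w] += 1
            joint_counts[(w, label)] += 1
    scores = {}
    for (w, label), joint in joint_counts.items():
        p_joint = joint / n_docs
        p_word = word_counts[w] / n_docs
        p_label = label_counts[label] / n_docs
        scores[(w, label)] = math.log(p_joint / (p_word * p_label))
    return scores
```

On a toy corpus, a word like 'love' that only appears in positive sentences gets a higher PMI with the positive class than a word like 'it' that appears in both classes, which is exactly the signal you can use to prune your vocabulary before hitting the database.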
Mind you: standard stop lists from information retrieval may or may not work in sentiment analysis. I recently read a paper (no reference, sorry) in which it was claimed that '!' and '?', commonly removed by search engines, are valuable cues for sentiment analysis. (So may 'I' be, especially when you also have a neutral category.)
Edit: you can also safely throw away everything that occurs only once in the training set (so-called hapax legomena). Words that occur only once have little information value for your classifier but may take up a lot of space.
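Dropping hapaxes is a simple two-pass job, sketched here over a list of tokenized documents:

```python
from collections import Counter

def drop_hapaxes(tokenized_docs):
    """Remove words that occur exactly once across the whole training set."""
    counts = Counter(w for doc in tokenized_docs for w in doc)
    return [[w for w in doc if counts[w] > 1] for doc in tokenized_docs]
```

With a million sentences this can shrink the vocabulary (and hence the number of rows you persist) substantially, since natural-language corpora typically contain a long tail of one-off words.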
You might want to check out this book chapter on sentiment mining: http://books.google.com/books?id=CE1QzecoVf4C&lpg=PA390&ots=OHuYwLRhag&dq=sentiment%20%20mining%20for%20fortune%20500&pg=PA379#v=onepage&q=sentiment%20%20mining%20for%20fortune%20500&f=false