Let's say I pick a source like CNN. Would it be more advantageous to automatically sort scraped articles into categories based on keywords, or to scrape individual sections of the website for different categories (e.g. cnn.com/tech or /entertainment)? The second option isn't easily scalable; I wouldn't want to manually configure URLs for every source. How does Google News address this issue?
Here is a Google patent from 2005, "Systems and methods for improving the ranking of news articles", and an updated filing from 2012 under the same title.
If you wanted to build a simple system yourself, I would do something like this:
Take a bunch of news stories that are already classified into sports/tech/whatever.
Tokenize them into individual words and n-grams (short sequences of consecutive words).
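That tokenization step might look like this in Python (the regex and the n-gram length are just illustrative choices):

```python
import re

def tokenize(text, n=2):
    """Split text into lowercase word tokens plus n-grams up to length n."""
    words = re.findall(r"[a-z']+", text.lower())
    grams = []
    for size in range(2, n + 1):
        grams += [" ".join(words[i:i + size])
                  for i in range(len(words) - size + 1)]
    return words + grams

print(tokenize("The Canucks won the game", n=2))
# includes 'canucks', 'game', and bigrams like 'the canucks', 'won the'
```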
Create a really big table with the unique words and grams as columns and the individual stories as rows:

StoryId  Class   word1  word2  gram1  gram2  ...
1        sports  0      0.2    0.01   0
2        tech    0.5    0.01   0      0.3
3        sports  0      0.1    0.3    0.01
Here the values in the cells represent the raw frequency, binary occurrence, or TF-IDF score of each word in each document.
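As a rough sketch of how a TF-IDF cell value could be computed (the toy corpus and labels below are invented for illustration):

```python
import math

# toy corpus: story id -> (class label, tokens)
docs = {
    1: ("sports", "the canucks won the stanley cup final".split()),
    2: ("tech",   "apple ships the new phone this fall".split()),
    3: ("sports", "the final score was a shutout".split()),
}

def tf_idf(term, tokens, corpus):
    tf = tokens.count(term) / len(tokens)                  # term frequency in this story
    df = sum(term in toks for _, toks in corpus.values())  # stories containing the term
    return tf * math.log(len(corpus) / df) if df else 0.0  # weight down ubiquitous terms

# 'the' appears in every story, so its weight collapses to zero
print(tf_idf("the", docs[1][1], docs))      # 0.0
print(tf_idf("canucks", docs[1][1], docs))  # positive: rare and present
```

Binary occurrence (`1 if term in tokens else 0`) and raw counts are drop-in alternatives for the cell values; TF-IDF just discounts words that appear everywhere.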
Use a classification algorithm such as Naive Bayes or a Support Vector Machine to learn weights for the columns with respect to the class labels. This is your model.
When you get a new, unclassified document, tokenize it the same way as before, apply the model you created earlier, and it will give you the most likely class label of the document.
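Putting those steps together, here is a minimal, stdlib-only Naive Bayes sketch. The training stories are made up, and a real system would use a library like scikit-learn rather than hand-rolling this:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes over token lists, with add-one smoothing."""

    def fit(self, docs):
        # docs: list of (label, tokens) pairs
        self.labels = Counter(label for label, _ in docs)
        self.word_counts = defaultdict(Counter)
        for label, tokens in docs:
            self.word_counts[label].update(tokens)
        self.vocab = {w for counts in self.word_counts.values() for w in counts}
        self.totals = {l: sum(c.values()) for l, c in self.word_counts.items()}
        self.n_docs = len(docs)

    def predict(self, tokens):
        best, best_score = None, float("-inf")
        for label in self.labels:
            # log prior for the class
            score = math.log(self.labels[label] / self.n_docs)
            for w in tokens:
                count = self.word_counts[label][w]
                # add-one smoothed log likelihood of each token
                score += math.log((count + 1) /
                                  (self.totals[label] + len(self.vocab)))
            if score > best_score:
                best, best_score = label, score
        return best

training = [
    ("sports", "the canucks won the hockey game".split()),
    ("sports", "a late goal decided the final".split()),
    ("tech",   "apple announced a new phone".split()),
    ("tech",   "the startup released its software update".split()),
]
model = NaiveBayes()
model.fit(training)
print(model.predict("the team scored a goal".split()))  # → 'sports'
```

The same `fit`/`predict` split mirrors the description above: training builds the model from pre-labeled stories, and any new, unclassified document just gets tokenized and scored against it.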
Here is my video series, which includes a video on automatic document categorization:
http://vancouverdata.blogspot.ca/2010/11/text-analytics-with-rapidminer-loading.html