 

Plagiarism Analyzer (compared against Web Content)

Hi everyone all over the world,

Background

I am a final-year student of Computer Science. I've proposed my Final Double Module Project, which is a Plagiarism Analyzer, built with Java and MySQL.

The Plagiarism Analyzer will:

  1. Scan all the paragraphs of an uploaded document and report, for each paragraph, what percentage was copied and from which website.
  2. Highlight, in each paragraph, only the words that were copied exactly, together with the website they were copied from.

My main objective is to develop something like Turnitin, improved if possible.

I have less than 6 months to develop the program. I have scoped the following:

  1. Web Crawler Implementation. I will probably use the Lucene API or develop my own crawler (which is better in terms of development time and usability?).
  2. Hashing and Indexing. To speed up the searching and analysis.

Questions

Here are my questions:

  1. Can MySQL store that much information?
  2. Did I miss any important topics?
  3. What are your opinions concerning this project?
  4. Any suggestions or techniques for performing the similarity analysis?
  5. Can a paragraph be hashed, as well as words?

Thanks in advance for any help and advice. ^^

Mr CooL asked Oct 14 '09 16:10


2 Answers

1) Make your own web crawler? It looks like you could easily use all your available time just on this task. Try using a standard solution instead: the crawler is not the heart of your program.

You will still have the opportunity to build your own, or to try another one, afterwards (if you have time left!). Your program should work only on local files so that it is not tied to a specific crawler/API.

Maybe you'll even have to use different crawlers for different sites.

2) Hashing whole paragraphs is possible; you can hash any string. But of course that means you can only detect whole paragraphs that were copied exactly. Sentences would probably be a better unit to test. You should also "normalize" (transform) the sentences/paragraphs before hashing, to smooth out minor differences such as uppercase/lowercase.
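Here is a minimal Java sketch of that normalize-then-hash step (the normalization rules and the choice of SHA-256 are assumptions you would tune for your own corpus):

    import java.math.BigInteger;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class SentenceHasher {

        // Normalize so that trivial differences (case, punctuation, extra
        // whitespace) do not change the hash.
        static String normalize(String sentence) {
            return sentence
                    .toLowerCase()
                    .replaceAll("[^a-z0-9\\s]", " ")  // drop punctuation
                    .replaceAll("\\s+", " ")          // collapse whitespace
                    .trim();
        }

        // Hash the normalized sentence.
        static String hash(String sentence) throws NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(normalize(sentence).getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, digest).toString(16);
        }

        public static void main(String[] args) throws NoSuchAlgorithmException {
            // These two sentences produce the same digest after normalization.
            System.out.println(hash("The quick brown fox jumped over the lazy dog."));
            System.out.println(hash("the QUICK  brown fox jumped over the lazy dog"));
        }
    }

Two sentences that differ only in case, punctuation, or spacing collide on the same hash, which is exactly what you want for exact-copy detection.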

3) MySQL can store a lot of data.

The usual advice is: stick to standard SQL. If you discover you have far too much data, you will still have the option of moving to another SQL implementation.

But of course, if you have too much data, start by looking at ways to reduce it, or at least to reduce what goes into MySQL. For example, you could store the hashes in MySQL but keep the original pages (if needed) in plain files.
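A rough JDBC sketch of that split, with made-up table and column names and placeholder connection details: only the hash, the source URL, and a pointer to the page on disk go into MySQL, while the page body stays in a plain file.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class HashStore {
        public static void main(String[] args) throws SQLException {
            // Placeholder credentials: adjust to your own setup.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/plagiarism", "user", "password");

            // MySQL keeps the hash, the source URL, and the path of the
            // crawled page on disk; the full page text is not stored here.
            try (PreparedStatement create = conn.prepareStatement(
                    "CREATE TABLE IF NOT EXISTS sentence_hashes (" +
                    "  hash CHAR(64) NOT NULL," +
                    "  source_url VARCHAR(2048) NOT NULL," +
                    "  page_file VARCHAR(255) NOT NULL," +
                    "  INDEX idx_hash (hash))")) {
                create.executeUpdate();
            }

            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO sentence_hashes (hash, source_url, page_file) VALUES (?, ?, ?)")) {
                // Placeholder digest, e.g. the output of SentenceHasher.hash(...)
                String digest = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef";
                insert.setString(1, digest);
                insert.setString(2, "http://example.com/some-article");
                insert.setString(3, "/data/pages/000001.html");
                insert.executeUpdate();
            }

            conn.close();
        }
    }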

siukurnin answered Oct 18 '22 13:10


Have you considered another project that isn't doomed to failure on account of lack of resources available to you?

If you really want to go the "Hey, let's crawl the whole web!" route, you're going to need to break out things like HBase and Hadoop and lots of machines. MySQL will be grossly insufficient. TurnItIn claims to have crawled and indexed 12 billion pages. Google's index is more like [redacted]. MySQL, or for that matter, any RDBMS, cannot scale to that level.

The only realistic way you're going to be able to pull this off is if you do something astonishingly clever and figure out how to construct queries to Google that will reveal plagiarism of documents that are already present in Google's index. I'd recommend using a message queue and accessing the search API asynchronously. The message queue will also allow you to throttle your queries down to a reasonable rate.

Avoid stop words, but you're still looking for near-exact matches, so queries should look like: "* quick brown fox jumped over * lazy dog". Don't bother running queries that end up like: "* * went * * *". And ignore results that come back with 94,000,000 hits; those won't be plagiarism, they'll be famous quotes or overly general queries. You're looking for either under 10 hits, or a few thousand hits that all have an exact match on your original sentence, or some similar metric. And even then, this should only be a heuristic: don't flag a document unless there are lots of red flags. Conversely, if everything comes back as zero hits, the author is being unusually original. Book search typically needs more precise queries.

Sufficiently suspicious material should trigger HTTP requests for the original pages, and final decisions should always be the purview of a human being. If a document cites its sources, that's not plagiarism, and you'll want to detect that. False positives are inevitable, and will likely be common, if not constant.
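A small Java sketch of that query-shaping idea; the stop-word list and the threshold of three content words are placeholders, not recommendations:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class QueryBuilder {

        // A tiny illustrative stop-word list; a real one would be much larger.
        private static final Set<String> STOP_WORDS = new HashSet<>(Arrays.asList(
                "the", "a", "an", "and", "or", "of", "to", "in", "is", "was", "went"));

        // Turn a sentence into a quoted phrase query where stop words become
        // wildcards, e.g. "* quick brown fox jumped over * lazy dog".
        static String buildQuery(String sentence) {
            StringBuilder query = new StringBuilder("\"");
            int contentWords = 0;
            for (String word : sentence.toLowerCase().split("\\W+")) {
                if (word.isEmpty()) continue;
                if (STOP_WORDS.contains(word)) {
                    query.append("* ");
                } else {
                    query.append(word).append(' ');
                    contentWords++;
                }
            }
            // Too few content words produces queries like "* * went * * *";
            // skip those rather than sending them to the search API.
            if (contentWords < 3) return null;
            return query.toString().trim() + "\"";
        }

        public static void main(String[] args) {
            System.out.println(buildQuery("The quick brown fox jumped over the lazy dog"));
            System.out.println(buildQuery("He went to the store")); // null: too general
        }
    }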

  • Advanced Google query operators
  • API documentation for Google search
  • API documentation for Google book search

Be aware that the TOS prohibit permanently storing any portion of the Google index.

Regardless, you have chosen to do something exceedingly hard, no matter how you build it, and likely very expensive and time-consuming unless you involve Google.

Bob Aman answered Oct 18 '22 13:10