Hi everyone,
Background
I am a final-year Computer Science student. I've proposed a Plagiarism Analyzer, built with Java and MySQL, as my Final Double Module Project.
The Plagiarism Analyzer will:
My main objective is to develop something like Turnitin, improved if possible.
I have less than 6 months to develop the program. I have scoped the following:
Questions
Here are my questions:
Thanks in advance for any help and advice. ^^
1) Make your own web crawler? It looks like you could easily spend all your available time on that task alone. Use a standard solution instead: the crawler is not the heart of your program.
You will still have the opportunity to write your own, or try a different one, afterwards (if you have any time left!). Your program should operate only on local files so it isn't tied to a specific crawler/API.
You may even have to use different crawlers for different sites.
2) Hashing whole paragraphs is possible; you can hash any string. But of course that means you can only detect whole paragraphs copied exactly. Sentences would probably be a better unit to test. You should also "normalize" (transform) the sentences/paragraphs before hashing, to smooth over minor differences like uppercase/lowercase.
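A minimal sketch of the normalize-then-hash idea, assuming SHA-256 and a simple lowercase/strip-punctuation normalization (the class and method names are illustrative, not part of the proposed project):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public class SentenceHasher {

    /** Lowercase, strip punctuation, and collapse whitespace so that
     *  trivially different copies normalize to the same string. */
    static String normalize(String sentence) {
        return sentence.toLowerCase()
                .replaceAll("[^a-z0-9\\s]", "")
                .trim()
                .replaceAll("\\s+", " ");
    }

    /** SHA-256 of the normalized sentence, hex-encoded for easy storage
     *  in a MySQL column. */
    static String hash(String sentence) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(
                    normalize(sentence).getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always present
        }
    }

    public static void main(String[] args) {
        // Two superficially different copies hash identically after normalization.
        String a = "The Quick Brown Fox, jumped over the lazy dog.";
        String b = "the quick  brown fox jumped over the lazy dog";
        System.out.println(hash(a).equals(hash(b))); // prints "true"
    }
}
```

A real normalization step might go further (Unicode folding, stemming), but even this much defeats case and punctuation edits.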
3) MySQL can store a lot of data.
The usual advice is: stick to standard SQL. If you discover you have far too much data, you will still have the option of moving to another SQL implementation.
But of course, if you have too much data, start by looking for ways to reduce it, or at least to reduce what lives in MySQL. For example, you could store hashes in MySQL but keep the original pages (if needed) in plain files.
Have you considered another project that isn't doomed to failure on account of the lack of resources available to you?
If you really want to go the "Hey, let's crawl the whole web!" route, you're going to need to break out things like HBase and Hadoop and lots of machines. MySQL will be grossly insufficient. TurnItIn claims to have crawled and indexed 12 billion pages. Google's index is more like [redacted]. MySQL, or for that matter, any RDBMS, cannot scale to that level.
The only realistic way you're going to pull this off is if you do something astonishingly clever and figure out how to construct queries to Google that will reveal plagiarism of documents already present in Google's index. I'd recommend using a message queue and accessing the search API asynchronously; the message queue will also let you throttle your queries down to a reasonable rate. Avoid stop words, but you're still looking for near-exact matches, so queries should look like: "* quick brown fox jumped over * lazy dog"
Don't bother running queries that end up like: "* * went * * *"
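The query construction described above can be sketched as follows. The stop-word list and the "too many wildcards" threshold are illustrative assumptions, not part of the answer:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class QueryBuilder {

    // A tiny illustrative stop-word list; a real one would be much larger.
    static final Set<String> STOP_WORDS = Set.of(
            "the", "a", "an", "and", "or", "of", "to", "in", "is", "was");

    /** Turn a sentence into a quoted wildcard query, replacing stop words
     *  with "*", e.g. "* quick brown fox jumped over * lazy dog". */
    static String toQuery(String sentence) {
        String body = Arrays.stream(sentence.toLowerCase().split("\\s+"))
                .map(w -> STOP_WORDS.contains(w) ? "*" : w)
                .collect(Collectors.joining(" "));
        return "\"" + body + "\"";
    }

    /** Reject queries where wildcards outnumber real words;
     *  they carry too little signal and match everything. */
    static boolean worthRunning(String query) {
        List<String> terms = Arrays.asList(query.replace("\"", "").split("\\s+"));
        long wildcards = terms.stream().filter("*"::equals).count();
        return wildcards < terms.size() - wildcards;
    }

    public static void main(String[] args) {
        String q = toQuery("the quick brown fox jumped over the lazy dog");
        System.out.println(q);               // "* quick brown fox jumped over * lazy dog"
        System.out.println(worthRunning(q)); // true
        // "the cat is in the bag" degenerates to "* cat * * * bag" and is rejected.
        System.out.println(worthRunning(toQuery("the cat is in the bag"))); // false
    }
}
```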
And ignore results that come back with 94,000,000 hits. Those won't be plagiarism, they'll be famous quotes or overly general queries. You're looking for either under 10 hits or a few thousand hits that all have an exact match on your original sentence or some similar metric. And even then, this should just be a heuristic — don't flag a document unless there are lots of red flags. Conversely, if everything comes back as zero hits, they're being unusually original. Book search typically needs more precise queries. Sufficiently suspicious stuff should trigger HTTP requests for the original pages, and final decisions should always be the purview of a human being. If a document cites its sources, that's not plagiarism, and you'll want to detect that. False positives are inevitable, and will likely be common, if not constant.
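The hit-count heuristic above might look like this in code. The thresholds (under 10 hits, a few thousand exact matches) are taken loosely from the answer; the method and type names are hypothetical:

```java
public class HitFilter {

    enum Verdict { SUSPICIOUS, IGNORE }

    /** Classify one query's results: rare phrases and moderate hit counts
     *  where every hit is an exact match are suspicious; zero hits and
     *  famous-quote-scale hit counts are not. */
    static Verdict classify(long totalHits, long exactMatches) {
        if (totalHits == 0) return Verdict.IGNORE;      // nothing copied
        if (totalHits < 10) return Verdict.SUSPICIOUS;  // rare phrase found elsewhere
        if (totalHits < 10_000 && exactMatches == totalHits) {
            return Verdict.SUSPICIOUS;  // a few thousand hits, all exact matches
        }
        return Verdict.IGNORE;  // millions of hits: famous quote or too general
    }
}
```

Remember this is only one signal per sentence; per the answer, a document should be flagged only when many such red flags accumulate, and a human makes the final call.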
Be aware that Google's Terms of Service prohibit permanently storing any portion of its index.
Regardless, you have chosen to do something exceedingly hard, no matter how you build it, and likely very expensive and time-consuming unless you involve Google.