Free Large datasets to experiment with Hadoop


A few points about your question regarding crawling and Wikipedia.

You have linked to the Wikipedia data dumps; you can use the Cloud9 project from UMD to work with this data in Hadoop.

They have a page on this: Working with Wikipedia
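To give an idea of the shape of a job over that data, here is a minimal word-count sketch using only the stock Hadoop MapReduce API. It assumes the dump has already been flattened to plain text in HDFS (Cloud9's Wikipedia tools can do the parsing step); the class name, input path, and output path are just placeholders for illustration.

```java
// Minimal word-count sketch over a Wikipedia dump stored in HDFS.
// Assumes the dump has been converted to plain text beforehand;
// everything below is the stock Hadoop MapReduce API.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WikiWordCount {

  public static class TokenizerMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // Emit (word, 1) for every whitespace-separated token in the line.
      StringTokenizer tokens = new StringTokenizer(value.toString());
      while (tokens.hasMoreTokens()) {
        word.set(tokens.nextToken().toLowerCase());
        context.write(word, ONE);
      }
    }
  }

  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      // Sum the counts for each word.
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "wiki word count");
    job.setJarByClass(WikiWordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(SumReducer.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /data/wikipedia-text
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```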

Another datasource to add to the list is:

  • ClueWeb09 - about 1 billion web pages collected between January and February 2009, roughly 5 TB compressed.

Using a crawler to generate data should, I would say, be asked as a separate question from this one about Hadoop/MapReduce.


One obvious source: the Stack Overflow trilogy data dumps, which are freely available under a Creative Commons license.
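As a rough sketch of how you might feed that dump into Hadoop, here is a mapper that counts posts per tag. It assumes the dump's usual layout of one self-closing <row ... /> XML element per line with an XML-escaped Tags attribute; the class name is made up for illustration, and you would pair it with a sum reducer like the one in the word-count job above.

```java
// Sketch of a mapper over the Stack Overflow Posts dump, counting posts per tag.
// Assumes one <row ... /> element per line, with Tags stored XML-escaped,
// e.g. Tags="&lt;hadoop&gt;&lt;mapreduce&gt;".
import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TagCountMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private static final Pattern TAGS_ATTR = Pattern.compile("Tags=\"([^\"]*)\"");
  private final Text tag = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    Matcher m = TAGS_ATTR.matcher(value.toString());
    if (!m.find()) {
      return; // row has no Tags attribute (e.g. an answer row)
    }
    // Unescape the attribute, then split "<tag1><tag2>" into individual tags.
    String tags = m.group(1).replace("&lt;", "<").replace("&gt;", ">");
    for (String t : tags.split("><")) {
      t = t.replace("<", "").replace(">", "").trim();
      if (!t.isEmpty()) {
        tag.set(t);
        context.write(tag, ONE);
      }
    }
  }
}
```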


This is a collection of 189 datasets for machine learning (which is one of the nicest applications for Hadoop): http://archive.ics.uci.edu/ml/datasets.html


It's not a log file, but maybe you could use the planet file from OpenStreetMap: http://wiki.openstreetmap.org/wiki/Planet.osm

CC licence, about 160 GB (unpacked)

There are also smaller files for each continent: http://wiki.openstreetmap.org/wiki/World