 

Does anyone really sort terabytes of data?

Tags:

sorting

I recently spoke to someone who works for Amazon, and he asked me how I would go about sorting terabytes of data in a programming language.

I'm a C++ guy, and of course we spoke about merge sort. One possible technique is to split the data into smaller chunks, sort each chunk, and finally merge them.
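For concreteness, here is a minimal sketch of that technique in C++: an external merge sort over 64-bit integers. The chunk size, temp-file naming, and record format are illustrative assumptions, not anything from the conversation.

    // Phase 1: read fixed-size chunks, sort each in memory, spill to temp files.
    #include <algorithm>
    #include <cstdint>
    #include <fstream>
    #include <queue>
    #include <string>
    #include <vector>

    std::vector<std::string> make_sorted_runs(const std::string& input,
                                              std::size_t chunk_elems) {
        std::ifstream in(input, std::ios::binary);
        std::vector<std::string> runs;
        std::vector<std::int64_t> buf(chunk_elems);
        while (in) {
            in.read(reinterpret_cast<char*>(buf.data()),
                    buf.size() * sizeof(std::int64_t));
            std::size_t got = in.gcount() / sizeof(std::int64_t);
            if (got == 0) break;
            std::sort(buf.begin(), buf.begin() + got);
            std::string name = "run" + std::to_string(runs.size()) + ".tmp";
            std::ofstream out(name, std::ios::binary);
            out.write(reinterpret_cast<const char*>(buf.data()),
                      got * sizeof(std::int64_t));
            runs.push_back(name);
        }
        return runs;
    }

    // Phase 2: k-way merge of the sorted runs using a min-heap of run heads.
    void merge_runs(const std::vector<std::string>& runs,
                    const std::string& output) {
        struct Head { std::int64_t value; std::size_t run; };
        auto cmp = [](const Head& a, const Head& b) { return a.value > b.value; };
        std::priority_queue<Head, std::vector<Head>, decltype(cmp)> heap(cmp);

        std::vector<std::ifstream> ins;
        for (const auto& r : runs) ins.emplace_back(r, std::ios::binary);

        auto refill = [&](std::size_t i) {
            std::int64_t v;
            if (ins[i].read(reinterpret_cast<char*>(&v), sizeof v))
                heap.push({v, i});
        };
        for (std::size_t i = 0; i < runs.size(); ++i) refill(i);

        std::ofstream out(output, std::ios::binary);
        while (!heap.empty()) {
            Head h = heap.top();
            heap.pop();
            out.write(reinterpret_cast<const char*>(&h.value), sizeof h.value);
            refill(h.run);  // pull the next element from the same run
        }
    }

To use it, call make_sorted_runs with a chunk size that fits in RAM, then pass the returned run names to merge_runs; only one chunk plus one element per run is ever held in memory at a time.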

But in reality, do companies like Amazon or eBay sort terabytes of data? I know they store tons of information, but do they actually sort it?

In a nutshell, my question is: why wouldn't they keep the data sorted in the first place, instead of sorting terabytes of it?

asked Aug 06 '10 by nsivakr

People also ask

How many rows of data are in a terabyte?

Suppose each record occupies 100 bytes (0.1 KB); a terabyte of space can then accommodate about 10 billion records (10^12 / 10^2 = 10^10 rows).

What algorithm does Spark use?

TimSort: In Apache Spark 1.1, we switched our default sorting algorithm from quicksort to TimSort, a derivation of merge sort and insertion sort. It performs better than quicksort in most real-world datasets, especially for datasets that are partially ordered. We use TimSort in both the map and reduce phases.
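The reason TimSort-style algorithms win on partially ordered data is that such data contains long already-sorted "runs". A tiny illustrative counter (this is just a sketch of the idea, not Spark's code):

    // Count the "natural runs" (already-ascending stretches) in a sequence.
    // TimSort-style merge sorts detect and merge these runs directly.
    #include <cstddef>
    #include <vector>

    std::size_t count_runs(const std::vector<int>& v) {
        if (v.empty()) return 0;
        std::size_t runs = 1;
        for (std::size_t i = 1; i < v.size(); ++i)
            if (v[i] < v[i - 1]) ++runs;  // a descent starts a new run
        return runs;
    }

A fully sorted input has a single run and merges in near-linear time; a random input has roughly n/2 runs and costs about as much as an ordinary merge sort.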


2 Answers

But in reality, do companies like Amazon or eBay sort terabytes of data? I know they store tons of information, but do they actually sort it?

Yes. Last time I checked, Google processed over 20 petabytes of data daily.

Why wouldn't they keep the data sorted in the first place, instead of sorting terabytes of it?

EDIT: relet makes a very good point; you only need to keep indexes, and have those sorted. You can retrieve the data in sorted order easily and efficiently that way. You don't have to sort the entire dataset.
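A minimal sketch of that idea, assuming fixed records addressed by byte offset (the struct and field names are made up for illustration):

    // Sort a small index of (key, offset) pairs instead of shuffling the
    // multi-terabyte records themselves.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct IndexEntry {
        std::int64_t key;     // e.g. a product id extracted from the record
        std::int64_t offset;  // byte position of the record in the data file
    };

    void sort_index(std::vector<IndexEntry>& index) {
        // Only 16 bytes per record move here; the records stay where they are.
        std::sort(index.begin(), index.end(),
                  [](const IndexEntry& a, const IndexEntry& b) {
                      return a.key < b.key;
                  });
    }

To stream the records in sorted order, you then walk the index and seek to each offset; only the small index is ever sorted or rewritten.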

answered by NullUserException


Consider log data from servers: Amazon must have a huge amount of it. Log data is generally stored as it is received, that is, sorted by time. So if you want it sorted by product, you would need to sort the whole data set.

Another issue is that the data often needs to be sorted according to a processing requirement that is not known beforehand.

For example: though not a terabyte, I recently sorted around 24 GB of Twitter follower-network data using merge sort. The implementation I used was by Prof. Daniel Lemire.

http://www.daniel-lemire.com/blog/archives/2010/04/06/external-memory-sorting-in-java-the-first-release/

The data was sorted by user id, and each line contained a user id followed by the id of a user following him. In my case, however, I wanted to know who follows whom, so I had to re-sort it by the second user id in each line.
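A hedged sketch of that re-sort, assuming lines of the form "followed_id follower_id" (it is in-memory for clarity; the 24 GB case needs an external sort like Lemire's):

    // Order edge lines by the second id so the data answers "who follows whom".
    #include <algorithm>
    #include <cstdint>
    #include <sstream>
    #include <string>
    #include <vector>

    std::int64_t second_field(const std::string& line) {
        std::istringstream ss(line);
        std::int64_t first = 0, second = 0;
        ss >> first >> second;
        return second;
    }

    void sort_by_follower(std::vector<std::string>& lines) {
        std::stable_sort(lines.begin(), lines.end(),
                         [](const std::string& a, const std::string& b) {
                             return second_field(a) < second_field(b);
                         });
    }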

However, for sorting a full terabyte I would use MapReduce with Hadoop. Sorting is the default step after the map function, so I would choose the identity function as the mapper and the identity function as the reducer (it is the shuffle between map and reduce that sorts the data), and set up a streaming job, as sketched below.
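As a rough sketch of such a job (the jar path and HDFS paths are placeholders), using /bin/cat as both the identity mapper and the identity reducer so the framework's shuffle does all the sorting:

    hadoop jar hadoop-streaming.jar \
        -input /data/unsorted \
        -output /data/sorted \
        -mapper /bin/cat \
        -reducer /bin/cat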

Hadoop uses HDFS, which stores data in large blocks of 64 MB (this value can be changed), and by default it runs one map task per block. After the map function runs, the map output is sorted, presumably by an algorithm similar to merge sort.

Here is the link to the identity mapper: http://hadoop.apache.org/common/docs/r0.16.4/api/org/apache/hadoop/mapred/lib/IdentityMapper.html

If you want to sort by some element in the data, I would make that element the key and the whole line the value in the output of the map.
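For example, a hypothetical streaming mapper in C++ that keys each line on its third whitespace-separated field (the field position is an assumption; any field would do):

    // Streaming mapper: emit "key<TAB>line" so Hadoop's shuffle sorts the
    // lines by that key.
    #include <iostream>
    #include <sstream>
    #include <string>

    int main() {
        std::string line;
        while (std::getline(std::cin, line)) {
            std::istringstream ss(line);
            std::string field, key;
            for (int i = 0; i < 3 && ss >> field; ++i) key = field;
            std::cout << key << '\t' << line << '\n';  // key first, line as value
        }
        return 0;
    }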

answered by 5 revs, 2 users