 

Git + a large data set?


We're often working on a project where we've been handed a large data set (say, a handful of files that are 1GB each), and are writing code to analyze it.

All of the analysis code is in Git, so everybody can check changes in and out of our central repository. But what to do with the data sets that the code is working with?

I want the data in the repository:

  • When users first clone the repository, the data should come along with it.
  • The data isn't 100% read-only; now and then a data point is corrected, or a minor formatting change happens. If minor changes happen to the data, users should be notified at the next checkout.

However, I don't want the data in the git repository:

  • git cloning a spare copy (so I have two versions in my home directory) will pull a few GB of data I already have. I'd rather either have it in a fixed location [set a rule that data must be in ~/data] or add links as needed (a sketch of the symlink approach follows the question).
  • With data in the repository, copying to a thumb drive may be impossible, which is annoying when I'm just working on a hundred lines of code.
  • If an erroneous data point is fixed, I'm never going to look at the erroneous version again. Changes to the data set can be tracked in a plain text file or by the person who provided the data (or just not at all).

It seems that I need a setup with a main repository for code and an auxiliary repository for data. Any suggestions or tricks for gracefully implementing this, either within git or in POSIX at large? Everything I've thought of is in one way or another a kludge.
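A low-tech baseline for the "fixed location plus links" idea is to keep the data at an agreed path and symlink it into each working copy. A minimal sketch, with hypothetical paths, assuming the data directory itself is kept out of git:

    # One shared copy of the data, linked into each clone (paths hypothetical)
    mkdir -p ~/data
    cd ~/src/analysis-code
    ln -s ~/data data           # link the shared data into this working copy
    echo data >> .gitignore     # keep the link itself out of version control

This satisfies the "don't duplicate gigabytes" requirement, but not the "data comes with the clone" or "users are notified of changes" requirements, which is why a two-repository setup is attractive.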

asked Jun 07 '11 by bk.



2 Answers

Use submodules to isolate your giant files from your source code. More on that here:

http://git-scm.com/book/en/v2/Git-Tools-Submodules

The examples there talk about libraries, but the same mechanism works for large binary assets: data samples for testing, images, movies, and so on.
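A minimal sketch of that layout, assuming a separate data repository at a hypothetical URL, mounted at data/ inside the code repository:

    # Register the data repository as a submodule (URL and path hypothetical)
    git submodule add https://example.com/team/analysis-data.git data
    git commit -m "Track the data set as a submodule"

    # Fresh clones get code and data in one step...
    git clone --recurse-submodules https://example.com/team/analysis-code.git
    # ...or fetch the data after a plain clone:
    git submodule update --init data

A plain clone without --recurse-submodules stays small, which also covers the thumb-drive case: until you ask for the data, data/ is just an empty placeholder directory.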

You should be able to fly while developing, only pausing here and there if you need to look at new versions of giant data.
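Picking up a newer revision of the data is then an explicit, occasional step rather than something every pull drags in (continuing the hypothetical data submodule from above):

    # Only when you actually want the new data revision:
    git submodule update --remote data
    git add data
    git commit -m "Bump data set to the latest revision"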

Sometimes it's not even worthwhile to track changes to such things.

To address your concern about making additional clones of the data: if your git implementation supports hard links on your OS, this should be a breeze.
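For example, git hard-links object files by default when cloning from a local path, so a second working copy in your home directory costs almost no extra disk (a sketch; paths hypothetical):

    # A local clone hard-links the object files instead of copying them
    git clone ~/src/analysis-data ~/scratch/analysis-data-2

    # Related options:
    #   --no-hardlinks   force real copies of the objects
    #   --shared         borrow objects from the source repository instead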

The nature of your giant dataset also matters. When you change it, are you rewriting giant blobs or a few rows out of millions? That should determine how effective a VCS will be as a notification mechanism for the data.

Hope this helps.

answered Nov 12 '22 by Adam Dymitruk


This sounds like the perfect occasion to try git-annex:

git-annex allows managing files with git, without checking the file contents into git. While that may seem paradoxical, it is useful when dealing with files larger than git can currently easily handle, whether due to limitations in memory, checksumming time, or disk space.
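A quick feel for the workflow, as a hedged sketch (file names hypothetical; see the git-annex documentation for the real details):

    git init && git annex init "analysis box"   # one-time setup per repository
    git annex add samples-1.dat                 # content moves to the annex; git tracks a symlink
    git commit -m "Add data set via git-annex"

    git annex get samples-1.dat                 # fetch the content from another repository
    git annex drop samples-1.dat                # free local disk; content remains elsewhere

Only the small symlinks live in git history, so clones stay light and the multi-GB content moves around only on demand.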

answered Nov 12 '22 by adl