We're often working on a project where we've been handed a large data set (say, a handful of files that are 1 GB each) and are writing code to analyze it.
All of the analysis code is in Git, so everybody can check changes in and out of our central repository. But what to do with the data sets that the code is working with?
I want the data in the repository, so that everyone checks out a consistent code-plus-data snapshot. However, I don't want the data in the Git repository itself, because multi-gigabyte files would bloat every clone and slow down ordinary Git operations.
It seems that I need a setup with a main repository for code and an auxiliary repository for data. Any suggestions or tricks for gracefully implementing this, either within git or in POSIX at large? Everything I've thought of is in one way or another a kludge.
You can upload binary files to GitHub, but if you save your data in a text-based format, others can suggest changes directly and you can track changes more easily.
Every account using Git Large File Storage receives 1 GB of free storage and 1 GB per month of free bandwidth. If those quotas are not enough, you can purchase additional quota for Git LFS.
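A minimal sketch of what LFS tracking looks like under the hood (the repo name `lfs-demo` and the `*.csv` pattern are made up for illustration). Running `git lfs track "*.csv"` appends the attribute line below to `.gitattributes`; here it is written by hand so that plain `git check-attr` can confirm the filter applies even before git-lfs is installed:

```shell
# Hypothetical demo repo; "*.csv" stands in for your real data patterns.
mkdir lfs-demo && cd lfs-demo
git init -q

# `git lfs track "*.csv"` would append exactly this line to .gitattributes:
echo '*.csv filter=lfs diff=lfs merge=lfs -text' > .gitattributes
git add .gitattributes

# Confirm the LFS filter now applies to matching paths:
git check-attr filter -- data.csv   # prints: data.csv: filter: lfs
```

Because the tracking rules live in `.gitattributes`, they are versioned with the code, so every collaborator who clones the repo gets the same large-file handling.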
Use submodules to isolate your giant files from your source code. More on that here:
http://git-scm.com/book/en/v2/Git-Tools-Submodules
The examples talk about libraries, but this works for large bloated things like data samples for testing, images, movies, etc.
You should be able to fly while developing, only pausing here and there if you need to look at new versions of giant data.
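The setup above can be sketched with two throwaway local repositories (the names `data` and `analysis` are placeholders; the file-protocol override is needed on newer Git when adding a submodule from a local path):

```shell
# Placeholder repo names: "data" holds the giant files,
# "analysis" is the code repo that references it as a submodule.
mkdir demo && cd demo

git init -q data
git -C data -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "placeholder for giant data files"

git init -q analysis
git -C analysis -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "analysis code"
cd analysis

# Newer Git refuses file-protocol submodules unless explicitly allowed:
git -c protocol.file.allow=always submodule --quiet add "$PWD/../data" data

git status --short   # .gitmodules and the data/ gitlink are staged in the code repo
```

The code repo records only a commit pointer to the data repo, so day-to-day work in `analysis` never touches the big files unless you explicitly update the submodule.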
Sometimes it's not even worthwhile tracking changes to such things.
To address your concern about making more clones of the data: if your Git implementation supports hard links on your OS, this should be a breeze.
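A quick way to see this: a plain local-path clone hard-links the object files rather than copying them, so extra working copies of a giant dataset cost almost nothing on disk. A sketch, assuming a scratch directory and a made-up repo name:

```shell
# "dataset" and "working-copy" are invented names for this demo.
mkdir linkdemo && cd linkdemo

git init -q dataset
dd if=/dev/zero of=dataset/big.bin bs=1024 count=1024 2>/dev/null
git -C dataset add big.bin
git -C dataset -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "add a big file"

# A local-path clone hard-links object files instead of copying them:
git clone -q dataset working-copy

# Each line of output is an object shared (link count > 1) between the two repos:
find working-copy/.git/objects -type f -links +1
```

This only holds when both clones sit on the same filesystem; cloning over a `file://` URL or across filesystems falls back to copying.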
The nature of your giant dataset also matters. If you change some of it, are you changing giant blobs or a few rows in a set of millions? That determines how effective a VCS will be as a change-notification mechanism for it.
Hope this helps.
This sounds like the perfect occasion to try git-annex:
git-annex allows managing files with git, without checking the file contents into git. While that may seem paradoxical, it is useful when dealing with files larger than git can currently easily handle, whether due to limitations in memory, checksumming time, or disk space.
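A minimal session might look like the following (the repo and file names are invented, and git-annex must be installed). In its default mode, `git annex add` moves the content under `.git/annex/objects` and leaves a symlink in the work tree, so Git only ever versions the tiny link:

```shell
# Invented names throughout; requires git-annex to be installed.
mkdir annex-demo && cd annex-demo
git init -q
git config user.name demo
git config user.email demo@example.com
git annex init "demo repo"

dd if=/dev/zero of=big-sample.dat bs=1024 count=1024 2>/dev/null
git annex add big-sample.dat   # moves content under .git/annex/objects
ls -l big-sample.dat           # in the default mode this is now a symlink
git commit -q -m "add sample data via git-annex"
```

The upshot is that `git log`, branching, and merging stay fast, while git-annex separately tracks which machines actually hold each file's content.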