I just saw the first Git tutorial at http://blip.tv/play/Aeu2CAI.
How does Git store all the versions of all the files, and how can it still be more economical in space than Subversion, which saves only the latest version of the code?
I know this could be done with compression, but compression would come at the cost of speed, yet the tutorial also says that Git is much faster (though where it gains the most is the fact that most of its operations are offline).
So, my guess is that uncompression + work is still faster than network_fetch + work. Am I correct? Even close?
Git doesn't think of or store its data as a series of file-based deltas. Instead, Git thinks of its data more like a set of snapshots of a mini filesystem. Every time you commit, or save the state of your project in Git, it basically takes a picture of what all your files look like at that moment and stores a reference to that snapshot.
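You can see this snapshot model for yourself with git cat-file (a minimal sketch; run it inside any Git repository):

    # A commit records a single tree: the snapshot of the whole project
    git cat-file -p HEAD

    # The tree lists one entry per file or subdirectory in that snapshot
    git cat-file -p 'HEAD^{tree}'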
I assume you are asking how it is possible for a git clone (full repository + checkout) to be smaller than checked-out sources in Subversion. Or did you mean something else?
First, you should take into account that alongside the checkout (the working version) Subversion stores a pristine copy (the last version) in those .svn subdirectories, and that pristine copy is stored uncompressed.
Second, Git uses the following techniques to make the repository smaller (the sketch after this list shows how to observe them):

- Loose objects (file contents, directory trees, commits) are zlib-compressed on disk.
- Packfiles store objects delta-compressed against similar objects, so many versions of a file cost little more than one full copy plus small deltas.
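A minimal way to watch this in any repository (a sketch; git gc may take a moment on a large repo):

    # Count loose vs packed objects and their disk footprint
    git count-objects -v

    # Repack: loose objects are deltified and zlib-compressed into a packfile
    git gc

    # size-pack now shows the on-disk footprint of all packed history
    git count-objects -v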
As for speed: first, any operation that involves the network will be much slower than a local operation. For example, comparing the current state of the working area with some other version, or getting the log (the history), involves a network connection and transfer in Subversion but is a local operation in Git, so it is of course much slower in Subversion. By the way, this is the difference between centralized version control systems (client-server workflow) and distributed version control systems (peer-to-peer workflow) in general, not only between Subversion and Git.
Second, if I understand it correctly, nowadays the limiting factor is not CPU but I/O (disk access). Therefore it is possible that the gain from reading less data from disk thanks to compression (and being able to mmap it into memory) outweighs the cost of decompressing that data.
Third, Git was designed with performance in mind (see e.g. the GitHistory page on the Git Wiki):

- Git detects changed files using stat information cached in the index, rather than re-reading file contents (see the core.trustctime config variable).
- Delta chains in packfiles are limited in length (the pack.depth setting, which defaults to 50), so reconstructing an object never requires applying too many deltas; see the sketch after this list. Git also has a delta cache to speed up access, and a (generated) packfile index for fast access to objects in a packfile.
- Git tries to generate the output of git log as fast as possible: you see the first commits almost immediately, even if generating the full history would take more time; it doesn't wait for the full history to be generated before displaying it.

I am not a Git hacker, and I probably missed some techniques and tricks that Git uses for better performance. Note, however, that Git heavily uses POSIX features (like memory-mapped files) for this, so the gain might not be as large on MS Windows.
Not a complete answer, but those comments (from AlBlue) might help on the space management aspect of the question:
There's a couple of things worth clarifying here.
Firstly, it is possible to have a bigger Git repository than an SVN repository; I hope I didn't imply that that was never the case. However, in practice, it generally tends to be the case that a Git repository takes less space on disk than an equivalent SVN repository would.
One thing you cite is Apache's single SVN repository, which is obviously massive. However, one only has to look at git.apache.org, and you'll note that each Apache project has its own Git repository. What's really needed is a comparison of like-for-like; in other words, a checkout of the (abdera) SVN project vs the clone of the (abdera) Git repository.

I was able to check out git://git.apache.org/abdera.git; on disk, it consumed 28.8MB. I then checked out the SVN version http://svn.apache.org/repos/asf/abdera/java/trunk/, and it consumed 34.3MB.
Both numbers were taken from a separately mounted RAM-backed partition, and the figure quoted is the number of bytes consumed on disk.
If using du -sh as a means of testing, the Git checkout was 11MB and the SVN checkout was 17MB.

The Git version of Apache Abdera would let me work with any version of the history up to and including the current release; the SVN checkout would only have the pristine copy of the currently checked-out version. Yet it takes less space on disk.
How, you may ask?
Well, for one thing, SVN creates a lot more files. The SVN checkout has 2959 files; the corresponding Git repository has 845 files.
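A sketch of how those numbers can be reproduced, using the URLs from the answer above (the git:// protocol may no longer be served for this project, and today's figures will differ):

    # Git: full history plus working tree
    git clone git://git.apache.org/abdera.git abdera-git
    du -sh abdera-git

    # SVN: working copy plus pristine copies in .svn
    svn checkout http://svn.apache.org/repos/asf/abdera/java/trunk/ abdera-svn
    du -sh abdera-svn

    # Compare file counts (SVN's per-directory .svn metadata adds many files)
    find abdera-git -type f | wc -l
    find abdera-svn -type f | wc -l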
Secondly, whilst SVN has a .svn folder at each level of the hierarchy, a Git repo only has a single .git directory at the top level. This means (amongst other things) that renames from one directory to another have a relatively smaller impact in Git than in SVN, which admittedly already has a relatively small impact anyway.

Thirdly, Git stores its data as compressed objects, whereas SVN stores them as uncompressed copies. Go into any .svn/text-base directory, and you'll find uncompressed copies of the (base) files.
Git has a mechanism to compress all files (and indeed, all history) into pack files. In Abdera's case, .git/objects/pack/ has a single .pack file (containing all history) of just 4.8MB.
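You can inspect such a packfile yourself from the top of any clone (a sketch; the pack's hash-named files will differ per repository):

    # List the pack (and its index) on disk
    ls -lh .git/objects/pack/

    # Show each object in the pack; delta-compressed entries list their base object
    git verify-pack -v .git/objects/pack/pack-*.idx | head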
So the size of the repository is (roughly) the same as the currently checked-out code in this case, though I wouldn't expect that always to be the case.

Anyway, you're right that history can grow to be more than the total size of the current checkout; but because of the way SVN works, the history really has to approach twice that size to make much of a difference. Even then, disk-space reduction is not really the main reason to use a DVCS anyway; it's an advantage for some things, sure, but it's not the real reason why people use it.
Note that Git (and Hg, and other DVCSs) does suffer from a problem where (large) binaries are checked in and then deleted: they still show up in the repository and take up space, even if they're not current. The text compression takes care of this kind of thing for text files, but binary files are more of an issue. (There are administrative commands that can rewrite the contents of Git repositories, but they have a slightly higher overhead/administrative cost than in SVN; git filter-branch is like svnadmin dump/filter/load.)
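For instance, purging a large binary from all history with git filter-branch might look like the sketch below; big-file.bin is a hypothetical path, and all collaborators must re-clone afterwards:

    # Rewrite every commit, dropping the file from the index;
    # --ignore-unmatch keeps commits that never contained it from failing
    git filter-branch --index-filter \
        'git rm --cached --ignore-unmatch big-file.bin' -- --all

    # Expire the old refs and repack so the space is actually reclaimed
    rm -rf .git/refs/original/
    git reflog expire --expire=now --all
    git gc --prune=now --aggressive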
As for the speed aspect, I mentioned it in my "How fast is git over subversion with remote operations?" answer (as Linus said in his Google presentation, paraphrasing here: "anything involving the network will just kill performance").
And the GitBenchmark document mentioned by Jakub Narębski is a good addition, even though it doesn't deal directly with Subversion.
It does list the kind of operation you need to monitor on a DVCS performance-wise.
Other Git benchmarks are mentioned in this SO question.