Git is a version control system originally developed by Linus Torvalds that lets you track changes to a set of files. These files can be any type of file, including the menagerie of files that typically make up a data-oriented project (.pdf, .Rmd, …).
Linus Torvalds created Git in 2005 in order to continue development of the Linux kernel after the original team could no longer use BitKeeper. At the time, no other source control management (SCM) system met their specific requirements for a distributed system.
At present, he works full-time on the Linux kernel as part of the Linux Foundation.
As creator of the Linux operating system, Linus Torvalds is a leading supporter of Open Source software. An avid programmer, Torvalds wrote the kernel of the Linux operating system at age 21 from his mother's apartment in Helsinki.
In CVS, history was tracked on a per-file basis. A branch might consist of various files with their own various revisions, each with its own version number. CVS was based on RCS (Revision Control System), which tracked individual files in a similar way.
On the other hand, Git takes snapshots of the state of the whole project. Files are not tracked and versioned independently; a revision in the repository refers to a state of the whole project, not one file.
When Git refers to tracking a file, it means simply that it is to be included in the history of the project. Linus's talk was not referring to tracking files in the Git context, but was contrasting the CVS and RCS model with the snapshot-based model used in Git.
I agree with brian m. carlson's answer: Linus is indeed distinguishing, at least in part, between file-oriented and commit-oriented version control systems. But I think there is more to it than that.
In my book, which is stalled and might never get finished, I tried to come up with a taxonomy for version control systems. In my taxonomy, the term for what we're interested in here is the atomicity of the version control system. See what is currently page 22. When a VCS has file-level atomicity, there is in fact a history for each file. The VCS must remember the name of the file and what occurred to it at each point.
Git doesn't do that. Git has only a history of commits—the commit is its unit of atomicity, and the history is the set of commits in the repository. What a commit remembers is the data—a whole tree-full of file names and the contents that go with each of those files—plus some metadata: for instance, who made the commit, when, and why, and the internal Git hash ID of the commit's parent commit. (It is this parent, and the directed acyclic graph formed by reading all commits and their parents, that is the history in a repository.)
Note that a VCS can be commit-oriented, yet still store data file-by-file. That's an implementation detail, though sometimes an important one, and Git does not do that either. Instead, each commit records a tree, with the tree object encoding file names, modes (i.e., is this file executable or not?), and a pointer to the actual file content. The content itself is stored independently, in a blob object. Like a commit object, a blob gets a hash ID that is unique to its content—but unlike a commit, which can only appear once, the blob can appear in many commits. So the underlying file content in Git is stored directly as a blob, and then indirectly in a tree object whose hash ID is recorded (directly or indirectly) in the commit object.
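A quick way to see this layering for yourself in any repository is to pretty-print the objects directly (the hash IDs you get will differ, and README.md is just a placeholder path):
git cat-file -p HEAD             # the commit: tree hash, parent hash(es), author, committer, message
git cat-file -p HEAD^{tree}      # the tree: mode, type (blob or tree), hash, and name for each entry
git cat-file -p HEAD:README.md   # the blob: the file content stored for that path in that commit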
When you ask Git to show you a file's history using:
git log [--follow] [starting-point] [--] path/to/file
what Git is really doing is walking the commit history, which is the only history Git has, and not showing you a commit unless that commit actually changes the file when compared to the commit's parent (some of these conditions can be modified via additional git log options, and there's a very-difficult-to-describe side effect called History Simplification that makes Git omit some commits from the history walk entirely). The file history you see here does not exactly exist in the repository, in some sense: instead, it's just a synthetic subset of the real history. You'll get a different "file history" if you use different git log options!
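For example, on a repository where the file was renamed at some point, these three invocations can produce noticeably different "file histories" for the same path (path/to/file is a placeholder):
git log --oneline -- path/to/file                 # default: the current name only, with history simplification
git log --oneline --follow -- path/to/file        # also tries to continue past renames, by content similarity
git log --oneline --full-history -- path/to/file  # keeps commits that history simplification would drop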
The confusing bit is here:
Git never ever sees those as individual files. Git thinks of everything as full content.
Git uses 160-bit (SHA-1) hashes in place of objects throughout its own repository. A tree of files is basically a list of names and hashes associated with the content of each (plus some metadata).
But the 160-bit hash uniquely identifies the content (within the universe of the git database). So a tree with hashes as content includes the content in its state.
If you change the state of the content of a file, its hash changes. But if its hash changes, the hash associated with the file name's content also changes. Which in turn changes the hash of the "directory tree".
When a git database stores a directory tree, that directory tree implies and includes all of the content of all of the subdirectories and all of the files in it.
It is organized in a tree structure with (immutable, reusable) pointers to blobs or other trees, but logically it is a single snapshot of the entire content of the entire tree. The representation in the git database isn't the flat data contents, but logically it is all of its data and nothing else.
If you serialized the tree to a filesystem, deleted all .git folders, and told git to add the tree back into its database, you'd end up adding nothing to the database -- the elements would already be there.
It may help to think of git's hashes as reference-counted pointers to immutable data.
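A minimal sketch of that content-addressing, using throwaway file names: identical content always hashes to the identical object ID, no matter how many paths point at it.
echo 'same content' > a.txt
echo 'same content' > b.txt
git hash-object a.txt b.txt   # prints one blob ID twice: the content, not the file name, determines the hash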
If you built an application around that model, a document would be a bunch of pages, which have layers, which have groups, which have objects.
When you want to change an object, you have to create a completely new group for it. If you want to change a group, you have to create a new layer, which needs a new page, which needs a new document.
Every time you change a single object, it spawns a new document. The old document continues to exist. The new and old document share most of their content -- they have the same pages (except 1). That one page has the same layers (except 1). That layer has the same groups (except 1). That group has the same objects (except 1).
And by same, I mean logically a copy, but implementation-wise it is just another reference counted pointer to the same immutable object.
A git repo is a lot like that.
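You can observe that sharing directly: pick a path that was not touched by the most recent commit (docs/intro.md is a made-up example) and ask for its blob ID in two neighbouring commits.
git rev-parse HEAD:docs/intro.md     # blob ID of the file as stored in the latest commit
git rev-parse HEAD~1:docs/intro.md   # the same blob ID if the file did not change: both commits reuse one object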
This means that a given git changeset contains its commit message, it contains its work tree (as a hash code), and it contains its parent changes (as hash codes).
Those parent changes contain their parent changes, all the way back.
The part of the git repo that contains history is that chain of changes. That chain of changes is at a level above the "directory" tree -- from a "directory" tree, you cannot uniquely get to a changeset and the chain of changes.
To find out what happens to a file, you start with that file in a changeset. That changeset has a history. Often in that history, a file with the same name exists, sometimes with the same content. If the content is the same, there was no change to the file. If it is different, there is a change, and work needs to be done to work out exactly what changed.
Sometimes the file is gone; but the "directory" tree might have another file with the same content (same hash code), so we can track it that way (note: this is why you want a commit that moves a file to be separate from a commit that edits it). Or there may be a file with the same name whose content, after checking, is similar enough.
So git can patchwork together a "file history".
But this file history comes from efficient parsing of the "entire changeset", not from a link from one version of the file to another.
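For instance, Git's diff machinery reconstructs a rename after the fact by noticing that the same (or very similar) content vanished under one name and appeared under another; old.txt and new.txt below are placeholders.
git diff -M --summary HEAD~1 HEAD       # reports something like "rename old.txt => new.txt (100%)" when content matches
git log --oneline --follow -- new.txt   # the same similarity detection lets the log keep walking past the rename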
"git does not track files" basically means that git's commits consist of a file tree snapshot connecting a path in the tree to a "blob" and a commit graph tracking the history of commits. Everything else is reconstructed on-the-fly by commands like "git log" and "git blame". This reconstruction can be told via various options how hard it should look for file-based changes. The default heuristics can determine when a blob changes place in the file tree without change, or when a file is associated with a different blob than before. The compression mechanisms Git uses don't care a whole lot about blob/file boundaries. If the content is somewhere already, this will keep the repository growth small without associating the various blobs.
Now that is the repository. Git also has a working tree, and in this working tree there are tracked and untracked files. Only the tracked files are recorded in the index (staging area? cache?) and only what is tracked there makes it into the repository.
The index is file-oriented and there are some file-oriented commands for manipulating it. But what ends up in the repository is just commits in the form of file tree snapshots and the associated blob data and the commit's ancestors.
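You can inspect that file-oriented view of the index directly; each entry pairs a path with the blob that will go into the next commit's tree.
git ls-files --stage   # mode, blob ID, stage number, and path for every tracked file in the index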
Since Git does not track file histories and renames and its efficiency does not depend on them, sometimes you have to try a few times with different options until Git produces the history/diffs/blames you are interested in for non-trivial histories.
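A few progressively more aggressive attempts you might cycle through when the default output hides what you are after (path is a placeholder):
git log --oneline --follow -- path        # chase the file across renames
git log --oneline --full-history -- path  # keep commits that history simplification would otherwise drop
git blame -w -M -C path                   # ignore whitespace, detect moved lines, and detect lines copied from other files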
That's different with systems like Subversion which record rather than reconstruct histories. If it's not on record, you don't get to hear about it.
I actually built a differential installer at one time that just compared release trees by checking them into Git and then producing a script duplicating their effect. Since sometimes whole trees were moved, this produced much smaller differential installers than overwriting/deleting everything would have produced.
Git doesn't track a file directly, but tracks snapshots of the repository, and these snapshots happen to consist of files.
Here's a way to look at it.
In other version control systems (SVN, Rational ClearCase), you can right-click on a file and get its change history.
In Git, there is no direct command that does this. See this question; you'll be surprised at how many different answers there are. There is no one simple answer, because Git doesn't track a file the way SVN or ClearCase does.
Tracking "content", incidentally, is what led to not track empty directories.
That is why, if you git rm the last file of a folder, the folder itself gets deleted.
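A quick way to see that in a throwaway repository (names are arbitrary):
git init -q demo && cd demo
mkdir newdir
git status --short              # prints nothing: an empty directory is invisible to Git, not even "untracked"
echo hi > newdir/file.txt
git status --short              # now prints "?? newdir/": there is finally content to track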
That wasn't always the case, and only Git 1.4 (May 2006) enforced that "tracking content" policy with commit 443f833:
git status: skip empty directories, and add -u to show all untracked files
By default, we use --others --directory to show uninteresting directories (to get user's attention) without their contents (to unclutter output).
Showing empty directories do not make sense, so pass --no-empty-directory when we do so.
Giving -u (or --untracked) disables this uncluttering to let the user get all untracked files.
That was echoed years later in Jan. 2011 with commit 8fe533, Git v1.7.4:
This is in keeping with the general UI philosophy: git tracks content, not empty directories.
In the meantime, with Git 1.4.3 (Sept. 2006), Git starts limiting untracked content to non-empty folders, with commit 2074cb0:
it should not list the contents of completely untracked directories, but only the name of that directory (plus a trailing '/').
Tracking content is what allowed git blame, very early on (Git 1.4.4, Oct. 2006, commit cee7f24), to be more performant:
More importantly, its internal structure is designed to support content movement (aka cut-and-paste) more easily by allowing more than one path to be taken from the same commit.
That (tracking content) is also what put git add in the Git API, with Git 1.5.0 (Dec. 2006, commit 366bfcb):
make 'git add' a first class user friendly interface to the index
This brings the power of the index up front using a proper mental model without talking about the index at all.
See for example how all the technical discussion has been evacuated from the git-add man page.
Any content to be committed must be added together.
Whether that content comes from new files or modified files doesn't matter.
You just need to "add" it, either with git-add, or by providing git-commit with -a (for already known files only of course).
That is what made git add --interactive possible, with the same Git 1.5.0 (commit 5cde71d):
After making the selection, answer with an empty line to stage the contents of working tree files for selected paths in the index.
That is also why, to recursively remove all contents from a directory, you need to pass the -r option, not just the directory name as the <path> (still Git 1.5.0, commit 9f95069).
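In day-to-day use, those content-oriented commands look like this (file and directory names are placeholders):
git add report.txt        # stage the current content of a new or modified file
git add -p report.txt     # interactively pick which hunks of the change to stage
git commit -a -m "tweak"  # stage and commit modifications to already-tracked files in one step
git rm -r old-dir/        # recursively remove a directory's tracked contents from the index and working tree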
Seeing file content instead of the file itself is what allows a merge scenario like the one described in commit 1de70db (Git v2.18.0-rc0, Apr. 2018):
Consider the following merge with a rename/add conflict:
- side A: modify foo, add unrelated bar
- side B: rename foo->bar (but don't modify the mode or contents)
In this case, the three-way merge of original foo, A's foo, and B's bar will result in a desired pathname of bar with the same mode/contents that A had for foo.
Thus, A had the right mode and contents for the file, and it had the right pathname present (namely, bar).
Commit 37b65ce, Git v2.21.0-rc0, Dec. 2018, recently improved colliding conflict resolutions.
And commit bbafc9c further illustrates the importance of considering file content, by improving the handling of rename/rename(2to1) conflicts:
- Instead of storing files at collide_path~HEAD and collide_path~MERGE, the files are two-way merged and recorded at collide_path.
- Instead of recording the version of the renamed file that existed on the renamed side in the index (thus ignoring any changes that were made to the file on the side of history without the rename), we do a three-way content merge on the renamed path, then store that at either stage 2 or stage 3.
- Note that since the content merge for each rename may have conflicts, and then we have to merge the two renamed files, we can end up with nested conflict markers.
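If you ever hit such a collision, two commands can help you see what Git recorded for the colliding path (collide_path stands in for the real file name); this is a sketch, not part of the commit being quoted.
git ls-files -u collide_path                   # lists the stage 1/2/3 entries (base, ours, theirs) kept in the index
git checkout --conflict=diff3 -- collide_path  # rewrites the working-tree file with full, possibly nested, conflict markers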