I've got a test repository that I put under Git. Most of the files are pretty tiny, but there's a very large number of them, and simple Git operations like add and status are taking tens of minutes to complete. What are my options for putting these under revision control and getting reasonable performance? Should I attempt to use submodules, or should I steer clear of DVCSes?
The first thing to determine is whether the poor behavior is due to your machine or to your specific local copy of the repo. The files in your .git folder can affect performance in various ways: settings in .git/config, the presence of LFS files, commits that can be garbage collected, etc.
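A quick way to separate the two is to time the same operation in your existing copy and in a fresh clone on the same machine; a minimal sketch (the clone paths are placeholders):

    # Baseline in the suspect copy
    time git status

    # Inspect and clean up the object database
    git count-objects -vH
    git gc

    # Compare against a fresh clone; if this is fast, the problem
    # lives in your local copy rather than the machine.
    git clone /path/to/repo /tmp/repo-fresh
    cd /tmp/repo-fresh
    time git status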
Git operations like add and status require stat()ing every file in the working tree (to detect changes). Either you have a truly massive number of files (say, tens or hundreds of thousands of files), or you have a filesystem with a rather slow stat() operation.
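A rough way to check which case you're in (the counts and timings are whatever your system reports, nothing from the question itself):

    # How many tracked files does Git have to stat?
    git ls-files | wc -l

    # Time status twice; the second run hits a warm stat cache,
    # so a large gap suggests slow cold stat() calls rather than
    # sheer file count.
    time git status
    time git status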
In any case, if you need to work on a system where this is extremely slow, you can use the "assume unchanged" bit in the index, which tells Git not to bother stat()ing those files. If you do turn this on, you need to manually instruct Git to pick up changes in individual files, e.g. by passing them directly to git add; otherwise Git won't even know anything changed. You can turn this on by setting git config core.ignoreStat true and then running something like git reset --hard HEAD.
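Put together, the workflow looks roughly like this (file paths are placeholders):

    # Skip stat() by default: new index entries get the
    # "assume unchanged" bit.
    git config core.ignoreStat true
    git reset --hard HEAD

    # Git will no longer notice edits on its own, so stage
    # changed files explicitly:
    git add path/to/edited-file

    # The bit can also be flipped per file by hand:
    git update-index --assume-unchanged path/to/file
    git update-index --no-assume-unchanged path/to/file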