You need to:
1. Remove the file from your project's current file-tree.
2. Remove the file from repository history, rewriting Git history to delete the file from all commits containing it.
3. Remove all reflog history that refers to the old commit history.
4. Repack the repository, garbage-collecting the now-unused data using git gc.
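Putting those four steps together, a minimal sketch looks like this (the path path/to/big-file is a hypothetical placeholder; the same commands appear in the answers below):
# 1. Remove the file from the current working tree
git rm path/to/big-file
git commit -m "Remove big file from the working tree"
# 2. Rewrite history so that no commit still contains the file
git filter-branch --index-filter 'git rm --cached --ignore-unmatch path/to/big-file' --prune-empty -- --all
# 3. Drop the reflog entries that still reference the old commits
git reflog expire --expire=now --all
# 4. Repack and garbage-collect the now-unreferenced objects
git gc --prune=now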
Just run the rm command with the -f and -r switches to recursively remove the .git folder and all of the files and folders it contains.
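For example, from the top level of the working copy (note that this permanently deletes all local history and Git metadata):
rm -rf .git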
You should not delete all changes older than 30 days (I think it is somehow possible by exploiting Git, but it is really not recommended).
You can call git gc --aggressive --prune
, which will perform garbage collection in your repository and prune old objects. Do you have a lot of binary files (archives, images, executables) which change often? Those usually lead to huge .git folders (remember, Git stores snapshots for each revision, and binary files compress badly).
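A quick way to see how much space the packed objects actually occupy (a standard Git command, not something from the quoted answer) is:
git count-objects -v -H
The size-pack line in the output reports the total size of the pack files, which is usually where a bloated .git directory hides.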
Here is what Linus Torvalds, the creator of Git, has to say about how to shrink your Git repo:
The equivalent of "git gc --aggressive" - but done *properly* - is to do (overnight) something like
git repack -a -d --depth=250 --window=250
where that depth thing is just about how deep the delta chains can be (make them longer for old history - it's worth the space overhead), and the window thing is about how big an object window we want each delta candidate to scan.
And here, you might well want to add the "-f" flag (which is the "drop all old deltas" option), since you now are actually trying to make sure that this one actually finds good candidates.
source: http://gcc.gnu.org/ml/gcc/2007-12/msg00165.html
Will this get rid of binary data that is orphaned in my repo? "git repack" will not get rid of images or binary data that you have checked into your repo and then deleted. To delete that kind of data permanently from your repo you have to rewrite your history. A common example is accidentally checking your passwords into Git. You can go back and delete some files, but then you have to rewrite your history from then to now and force-push the new history to your origin.
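As a rough sketch of that recovery (config/secrets.yml is a hypothetical example; the full filter-branch invocation used in the walkthrough further down is more complete):
git filter-branch --index-filter 'git rm --cached --ignore-unmatch config/secrets.yml' --prune-empty -- --all
git push origin --force --all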
I tried these but my repository was still very large. The problem was I had accidentally checked in some generated large files. After some searching I found a great tutorial which makes it easy to delete the large generated files. This tutorial allowed me to shrink my repository from 60 MB to < 1 MB.
Steve Lorek, How to Shrink a Git Repository
Updated: Here's a copy-paste version of the blog post.
Our main Git repository had suddenly ballooned in size. It had grown overnight to 180MB (compressed) and was taking forever to clone.
The reason was obvious; somebody, somewhere, somewhen, somehow, had committed some massive files. But we had no idea what those files were.
After a few hours of trial, error and research, I was able to nail down a process to clean up the repository, described below.
This process should never be attempted unless you can guarantee that all team members can produce a fresh clone. It involves altering the history and requires anyone who is contributing to the repository to pull down the newly cleaned repository before they push anything to it.
If you don't already have a local clone of the repository in question, create one now:
git clone remote-url
Now, you may have cloned the repository, but you don't have all of the remote branches locally. Having them all is imperative to ensure a proper 'deep clean'. To fetch and track them, we'll need a little Bash script:
#!/bin/bash
for branch in `git branch -a | grep remotes | grep -v HEAD | grep -v master`; do
git branch --track ${branch##*/} $branch
done
Thanks to bigfish on StackOverflow for this script, which is copied verbatim.
Copy this code into a file, chmod +x filename.sh
, and then execute it with ./filename.sh
. You will now have all of the remote branches as well (it's a shame Git doesn't provide this functionality).
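As an aside, if all you need is a local copy of every ref so you can rewrite it and push it back, a bare mirror clone gets you there in one step; this is a general Git feature rather than part of the original post:
git clone --mirror remote-url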
Credit is due to Antony Stubbs here - his Bash script identifies the largest files in a local Git repository, and is reproduced verbatim below:
#!/bin/bash
#set -x
# Shows you the largest objects in your repo's pack file.
# Written for osx.
#
# @see http://stubbisms.wordpress.com/2009/07/10/git-script-to-show-largest-pack-objects-and-trim-your-waist-line/
# @author Antony Stubbs
# set the internal field separator to line break, so that we can iterate easily over the verify-pack output
IFS=$'\n';
# list all objects including their size, sort by size, take top 10
objects=`git verify-pack -v .git/objects/pack/pack-*.idx | grep -v chain | sort -k3nr | head`
echo "All sizes are in kB. The pack column is the size of the object, compressed, inside the pack file."
output="size,pack,SHA,location"
for y in $objects
do
# extract the size in bytes and convert to kB
size=$((`echo $y | cut -f 5 -d ' '`/1024))
# extract the compressed size in bytes and convert to kB
compressedSize=$((`echo $y | cut -f 6 -d ' '`/1024))
# extract the SHA
sha=`echo $y | cut -f 1 -d ' '`
# find the object's location in the repository tree
other=`git rev-list --all --objects | grep $sha`
#lineBreak=`echo -e "\n"`
output="${output}\n${size},${compressedSize},${other}"
done
echo -e $output | column -t -s ', '
Execute this script as before, and you'll see some output similar to the below:
All sizes are in kB. The pack column is the size of the object, compressed, inside the pack file.
size pack SHA location
1111686 132987 a561d25105c79aa4921fb742745de0e791483afa 08-05-2012.sql
5002 392 e501b79448b9e970ab89b048b3218c2853fdfc88 foo.sql
266 249 73fa731bb90b04dcf79eeea8fdd637ba7df4c089 app/assets/images/fw/iphone.fw.png
265 43 939b31c563bd40b1ca70e4f4a9f7d67c27c936c0 doc/models_complete.svg
247 39 03514d9e84418573f26b205bae7e4e57057c036f unprocessed_email_replies.sql
193 49 6e601c4067aaddb26991c4bd5fbddef003800e70 public/assets/jquery-ui.min-0424e108178defa1cc794ee24fc92d24.js
178 30 c014b20b6fed9f17a0b2809ac410d74f291da26e foo.sql
158 158 15f9e56bc0865f4f303deff053e21909661a716b app/assets/images/iphone.png
103 36 3135e15c5cec75a4c85a0636b154b83221020c97 public/assets/application-c65733a4a64a1a885b1c32694574b12a.js
99 85 c1c80bc4c09e692d5e2127e39c87ecacdb1e816f app/assets/images/fw/lovethis_logo_sprint.fw.png
Yep - looks like someone has been pushing some rather unnecessary files somewhere! Including a lovely 1.1GB present in the form of a SQL dump file.
Cleaning the file will take a while, depending on how busy your repository has been. You just need one command to begin the process:
git filter-branch --tag-name-filter cat --index-filter 'git rm -r --cached --ignore-unmatch filename' --prune-empty -f -- --all
This command is adapted from other sources—the principal addition is --tag-name-filter cat
which ensures tags are rewritten as well.
After this command has finished executing, your repository should now be cleaned, with all branches and tags intact.
Reclaim space
While we may have rewritten the history of the repository, those files still exist in there, stealing disk space and generally making a nuisance of themselves. Let's nuke the bastards:
rm -rf .git/refs/original/
git reflog expire --expire=now --all
git gc --prune=now
git gc --aggressive --prune=now
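To confirm the savings (a quick check, not part of the original post), compare the size of the .git directory before and after:
du -hs .git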
Now we have a fresh, clean repository. In my case, it went from 180MB to 7MB.
Now we need to push the changes back to the remote repository, so that nobody else will suffer the pain of a 180MB download.
git push origin --force --all
The --all
argument pushes all your branches as well. That's why we needed to clone them at the start of the process.
Then push the newly-rewritten tags:
git push origin --force --tags
Anyone else with a local clone of the repository will need to either use git rebase or create a fresh clone; otherwise, when they next push, those files will be pushed right back up and the repository will be reset to the state it was in before.
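For a collaborator who keeps their existing clone, the recovery looks roughly like this (master is just an example branch name):
git fetch origin
git rebase origin/master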
5GB vs 200MB is kind of weird. Try to run git gc
.
But no, unless you split your repository into modules, you can't decrease the size of the .git
directory.
Each clone of a Git repo is a full-fledged repository that can act as a server. That's the basic principle of distributed version control.
Do the following, in this order, going from least dangerous, most effective, and fastest to more dangerous, less effective, and slowest:
These test results are for a repo where du -hs --exclude=.git .
shows that the total repo size, NOT including the .git
dir, is about 80 GB, and du -hs .git
showed that the .git
folder alone started out at about 162 GB:
                                                  # Time it took    # Memory saved in .git dir
                                                  # -------------   --------------------------
time git lfs prune                                # 1~60 min        62 GB
time git gc                                       # 3 min           < 1 GB
time git prune                                    # 1 min           < 1 GB
time git repack -a -d --depth=250 --window=250    # 2 min           < 1 GB
time git gc --aggressive --prune                  # 1.25 hrs        < 1 GB
As you can see, the last command takes a very long time for very little benefit, so don't even run it!
First off, you need to know what in the .git folder is taking up so much space. One technique is to run the ncurses-based (GUI-like) ncdu
(NCurses Disk Usage) command inside your repo. Another way is to run this:
du -h --max-depth=1 .git
Side note: To see how big your repo is, NOT including your .git
folder, run this instead:
du -h --max-depth=1 --exclude=.git .
Sample output of the 1st command above:
$ du -h --max-depth=1 .git
158G .git/lfs
6.2M .git/refs
4.0K .git/branches
2.5M .git/info
3.7G .git/objects
6.2M .git/logs
68K .git/hooks
162G .git
As you can see, my total .git
folder size is 162 GB, but 158 GB of that is my .git/lfs
folder since I am using the 3rd-party "Git Large File Storage" (git lfs
) tool to store large binary files. So, run this to reduce that significantly. Note: the time
part of all commands below is optional:
time git lfs prune
(If git lfs prune
fails with "panic: runtime error: invalid memory address or nil pointer dereference", see my notes below.)
Source: How to shrink a git LFS repo
Official documentation: git-lfs-prune(1) -- Delete old LFS files from local storage
That took 60 seconds to run!
Now I've just freed up 62 GB! My .git/lfs
folder is now only 96 GB, as shown here:
$ du -h --max-depth=1 .git
96G .git/lfs
6.2M .git/refs
4.0K .git/branches
2.5M .git/info
3.0G .git/objects
6.2M .git/logs
68K .git/hooks
99G .git
Next, run this to shrink the .git/objects
folder by a few hundred MB to ~1 GB or so:
time git gc
time git prune
git gc
takes about 3 minutes to run, and git prune
takes about 1 minute.
Check your disk usage again with du -h --max-depth=1 .git
. If you'd like to save even more space, run this:
time git repack -a -d --depth=250 --window=250
That takes about 2 minutes and saves a few hundred more MB.
Now, you can stop here, OR you can run this final command:
time git gc --aggressive --prune
That final command will save a few hundred more MB but will take about 1.25 hours.
git lfs prune fails with "panic: runtime error: invalid memory address or nil pointer dereference"
If git lfs prune fails with:
panic: runtime error: invalid memory address or nil pointer dereference
then you may have an old version of git-lfs
installed and need to update it. Here is how:
First, check to see what version you have installed. Run man git-lfs
and scroll to the bottom to see the date. Maybe it says it is from 2017, for instance. Now, update your version with these commands. The first command comes from here: https://packagecloud.io/github/git-lfs/install.
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt update
sudo apt install git-lfs
Run man git-lfs
again and scroll to the bottom. I now see my date as "March 2021", when previously it was some date in 2017.
Also, if I run sudo apt install git-lfs
again, it tells me:
git-lfs is already the newest version (2.13.3).
So, the update for git-lfs
worked, and now the error is gone and git lfs prune
works again!
I first documented this in a comment on GitHub here: https://github.com/git-lfs/git-lfs/issues/3395#issuecomment-889393444.
Sources:
git lfs prune: How to shrink a git LFS repo
git repack -a -d --depth=250 --window=250: https://gcc.gnu.org/legacy-ml/gcc/2007-12/msg00165.html
rsync
, as I explain in my answer here. That being said, occasionally I use git
for synchronization too, as I explain for my sync_git_repo_from_pc1_to_pc2.sh
tool here, and in my other answer here: Work on a remote project with Eclipse via SSH.