Let me start with some context:
I had to upgrade a crucial Magento webshop to a new version. To be sure all existing code would still work after the upgrade, and to make some post-upgrade changes, I made a Git repository from the entire Magento installation (excluding obvious content like the 4.5GB of images, the ./var directory, etc.), pushed it to an origin and cloned it on a dev server. I made a new branch, performed the upgrades, made code changes, committed it all to the dev branch and pushed it back to origin.
Now the time has come to upgrade the 'real' shop, meaning I have to merge the master branch on the production server with the dev branch. And that's where everything goes wrong:
git fetch
works fine.
git branch
says: * master
git merge origin/dev
goes horribly wrong; the only output after some waiting is:
fatal: Out of memory? mmap failed: Cannot allocate memory
The same goes for git checkout dev, git rebase master origin/dev, etc.
I did some research in existing questions here on Stack Overflow and spent an evening trying suggestions, including (but not limited to):
git gc
Counting objects: 48154, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (37152/37152), done.
fatal: Out of memory, malloc failed (tried to allocate 527338875 bytes)
error: failed to run repack
and:
git repack -a -d --window-memory 10m --max-pack-size 20m
Counting objects: 48154, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (37152/37152), done.
fatal: Out of memory, malloc failed (tried to allocate 527338875 bytes)
In addition to the previous command, I also tried this (which is pretty similar). As the link mentions a possible issue with 32-bit systems, it's probably worth listing the specs of the three systems involved:
Does anyone know how I can recover from this? Does repacking on origin work? If it does, how can I convince the production server to fetch a new copy of the repository? Any help would be greatly appreciated!
The error you're getting comes from the large files in your repository. Git is trying to put the entire contents of the file in memory, which makes it croak.
Git 1.7.6 was released last month and has this lovely bit in its release notes:
Adding a file larger than core.bigfilethreshold (defaults to 1/2 Gig) using "git add" will send the contents straight to a packfile without having to hold it and its compressed representation both at the same time in memory.
Upgrading to 1.7.6 might enable you to run git gc and maybe even git merge, but I can't verify that because it's hard to get a repository into that state (the conditions must be just right).
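If you do upgrade, you can also lower the threshold so that your large files are written straight to the packfile instead of being delta-compressed in memory. A rough sketch; the 100m value is just an example, pick whatever makes sense for your files:

git --version                          # confirm you are on 1.7.6 or later
git config core.bigFileThreshold 100m  # treat files over ~100 MB as "big" (example value)
git gc                                 # retry the garbage collection / repack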
If upgrading Git doesn't help, you can try removing the large files from the repository using git filter-branch. Before you do that, back up the large files using git cat-file -p <commit_sha1>:path/to/large/file > /path/to/backup/of/large/file.
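A sketch of what that could look like; the file path below is a placeholder for whatever large file is actually in your history, and you should try this on a throwaway clone first:

# back up one large blob from a given commit (placeholder sha1 and path)
git cat-file -p <commit_sha1>:media/dump/huge-export.sql > /tmp/huge-export.sql
# rewrite all branches, dropping the file from every commit that contains it
git filter-branch --index-filter \
  'git rm --cached --ignore-unmatch media/dump/huge-export.sql' \
  --tag-name-filter cat -- --all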
You'll want to do these operations on your most beefy machine (lots of memory).
If this works, try re-cloning to the other machines (or simply rsync the .git directory).
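For the rsync route, something along these lines should do it; the host name and paths are placeholders for your own setup:

# copy the repacked/rewritten .git directory from the beefy machine to production
rsync -av --delete devserver:/var/www/shop/.git/ /var/www/shop/.git/
# then make the working tree match the new HEAD
git reset --hard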