Here is the problem:
I created a bare Git repository at my hosting partner's site, which I use as the reference repository for all the locations/computers I maintain my project from.
The thing is that my project uses a SQLite database file, which keeps growing steadily (it is about 150 MB now). As time passes, my .git folder gets bigger and bigger (lately around 1 GB), and my hosting space is limited.
I need the bare repository to contain the HEAD version of this database file, but I really do not need to keep its version history.
So, to gain some space, I periodically remove the database file from the history, clean the repository, and recreate the bare version. This works, but it is quite a pain.
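That periodic cleanup can be sketched roughly like this (the file name is illustrative, and this uses the built-in `git filter-branch`; it rewrites all history, so run it in a fresh clone and force-push afterwards):

```shell
# Keep the current version of the database aside.
cp yourfile.sqlite /tmp/yourfile.sqlite

# Strip the file from every commit on every branch.
git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch yourfile.sqlite' \
  --prune-empty -- --all

# Drop the backup refs and old reflog entries, then repack
# so the space is actually reclaimed.
rm -rf .git/refs/original
git reflog expire --expire=now --all
git gc --prune=now --aggressive

# Re-add the latest copy of the database.
cp /tmp/yourfile.sqlite yourfile.sqlite
git add yourfile.sqlite
git commit -m "Re-add current database"
```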
Is there a way to tell git to keep only the last version of a file and drop its history?
Short answer: no.
More useful answer: Git doesn't track files individually, so asking it to throw away the history of a single file would mean that it would have to rewrite all of its history completely upon every commit, and that leads to all kinds of ugly problems.
You can store a file in an annotated tag, but that's not very convenient. It basically goes like this:
ID=$(git hash-object -w yourfile.sqlite)
git tag -a -m "Tag database file" mytag "$ID"
In no way does that conveniently update (or even create) the database file in the working tree for you... you'd have to use hook scripts to emulate that.
Full disclosure: I'm not completely sure whether it's actually possible to push tagged blobs that aren't covered by the normal history. I suspect that it isn't, in which case this recipe would be a lot less than useful.
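For completeness, here is what storing and retrieving the blob would look like end to end (file and tag names are illustrative; `mytag^{}` peels the annotated tag down to the blob it points at):

```shell
# Store the current database file as a blob and tag it.
ID=$(git hash-object -w yourfile.sqlite)
git tag -a -m "Tag database file" mytag "$ID"

# Later, recreate the file from the tagged blob.
git cat-file blob 'mytag^{}' > yourfile.sqlite
```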
It sounds like you're looking for the solution to the wrong problem.
Large binary files often do need to be stored in repositories, but a SQLite database is not something you would really need to store in its binary form in a repository.
Rather, you should keep the schema in version control, and if you need to keep data too, serialize it (to XML, JSON, YAML...) and version that too. A build script can create the database and unserialize the data into it when necessary.
Because a text-based serialization format can be tracked efficiently by Git, you won't need to worry about the space overhead of keeping past versions, even if you don't think you need access to them.
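As a concrete sketch of that approach, SQLite can dump itself to plain SQL text, which is what you would version instead of the binary file ("data.sqlite" and "dump.sql" are illustrative names, and the `sqlite3` command-line tool is assumed to be installed):

```shell
# Serialize: schema plus data as plain SQL text, which Git can diff
# and delta-compress efficiently.
sqlite3 data.sqlite .dump > dump.sql

# Build step: recreate the binary database from the tracked text dump.
rm -f data.sqlite
sqlite3 data.sqlite < dump.sql
```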