Over the years, I've always stored binary dependencies in the \lib folder and checked it into source control with the rest of the project. I do this less now that we have NuGet and NuGet Package Restore.
I've heard that some companies enforce a rule that no binaries can be checked into source control. The reasons cited include:
Are there objective arguments for or against this practice for the vast majority of projects that use source-control?
You should use Git LFS if you need to store large or binary files in Git repositories. Because Git is decentralized, every developer clones the full change history onto their own machine, including every version of every binary ever committed.
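As a sketch of how this looks in practice (assuming the `git-lfs` extension is installed; the tracked patterns below are only examples):

```
# Run once per repository; "track" writes the lines below
# into .gitattributes, which you then commit:
#   git lfs install
#   git lfs track "*.dll" "*.png"
#
# Resulting .gitattributes entries -- Git history stores small
# pointer files, while the actual binary content lives in LFS storage:
*.dll filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
```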
I would strongly recommend that you NOT adopt the practice you describe (forbidding binaries in source control). In fact, I would call it an organizational anti-pattern.
The single most important rule is:
You should be able to check out a project on a new machine, and it has to compile out of the box.
If this can be done via NuGet, fine. If not, check in the binaries. If legal or license issues prevent that, you should at least have a text file (named how_to_compile.txt or similar) in your repo that contains all the required information.
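For illustration, such a file might look like the following sketch; every vendor name, version, and path here is a made-up placeholder, not a real reference:

```
how_to_compile.txt  (example layout, all entries hypothetical)
--------------------------------------------------------------
This project depends on binaries that may not be redistributed
in source control. Before building, obtain:

1. AcmeWidgets SDK 2.1 -- download from the vendor portal
   (license key required) and extract into \lib\acme.
2. FooBar.Runtime 1.4  -- restored automatically via NuGet
   Package Restore when you build the solution.

After that, the solution should compile out of the box.
```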
Another very strong reason to do it this way is to avoid versioning problems: without the binaries in the repo, can you say exactly which version of each dependency an old revision was built against?
Some other arguments against the above:
My own rule of thumb is that generated assets should not be version-controlled (regardless of whether they're binary or textual). Non-generated binaries such as images and audio/video files might well be checked in, and for good reason.
As for the specific points:
You can't merge these kinds of files, but they're usually replaced wholesale rather than merged piecewise. Diffing may be possible for some formats using custom diff drivers, but in general changes are tracked via metadata such as version numbers.
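As an example of such a custom diff driver, Git can route a binary format through a textconv command and diff the extracted text instead; this sketch assumes the `exiftool` utility is available:

```
# .gitattributes -- send image diffs through a driver named "exif":
*.png diff=exif

# One-time configuration mapping that driver to a textconv command:
#   git config diff.exif.textconv exiftool
#
# "git diff" now compares exiftool's metadata dump for each version
# instead of just reporting "Binary files differ".
```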
If you had a large text file, disk usage would not be an argument against version control, and the same applies here: the point is that changes to the file need to be tracked. In the worst case, you can put these assets in a separate repository (one that doesn't change very often) and include it in the current one using something like Git submodules.
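Running `git submodule add <url> assets` records that link in a `.gitmodules` file roughly like the following (the name, path, and URL are placeholders):

```
# .gitmodules -- the parent repo pins the asset repository at a
# specific commit, so old revisions of the main project still
# reference the matching version of the assets:
[submodule "assets"]
	path = assets
	url = https://example.com/project-assets.git
```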
This is simply not true. Operations on that specific file might be slower, but that's acceptable; the same would be true of a large text file.
I think having everything in version control increases the convenience provided by the repository manager.
This touches on my point that the files in question shouldn't be generated. If the files aren't generated, then checkout-and-build is a single step; there's no separate "download binary assets" stage.