My project is currently using a svn repository which gains several hundred new revisions per day. The repository resides on a Win2k3-server and is served through Apache/mod_dav_svn.
I now fear that over time the performance will degrade due to too many revisions.
Is this fear reasonable?
We are already planning to upgrade to 1.5, so having thousands of files in one directory will not be a problem in the long term.
Subversion only stores the delta (differences) between two revisions, so this saves a LOT of space, especially if you only commit code (text) and no binaries (images and docs).
Does that mean that in order to check out the revision 10 of the file foo.baz, svn will take revision 1 and then apply the deltas 2-10?
What type of repo do you have? FSFS or BDB?
(Let's assume FSFS for now, since that's the default.)
In the case of FSFS, each revision is stored as a diff against the previous. So, you would think that yes, after many revisions, it would be very slow.
However, this isn't the case. FSFS uses what are called "skip deltas" to avoid having to do too many lookups on previous revs.
(So, if you are using an FSFS repo, Brad Wilson's answer is wrong.)
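As a rough illustration (this is a simplified sketch of the scheme described in the skip-deltas notes, not actual Subversion code), FSFS picks each revision's delta base by clearing the lowest set bit of the revision number, so the chain of deltas needed to rebuild any revision stays O(log N):

```python
def skip_delta_chain(rev):
    """Simplified sketch of an FSFS-style skip-delta chain: each
    revision deltas against the revision obtained by clearing its
    lowest set bit, so any revision is reachable in O(log N) steps."""
    chain = [rev]
    while rev > 0:
        rev &= rev - 1  # clear the lowest set bit
        chain.append(rev)
    return chain

# Rebuilding revision 54 touches only 4 deltas, not 53:
print(skip_delta_chain(54))  # [54, 52, 48, 32, 0]
```

The chain length is bounded by the number of set bits in the revision number, which is why lookups stay fast even after tens of thousands of revisions.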
In the case of a BDB repo, the HEAD (latest) revision is full-text, and the earlier revisions are stored as a chain of diffs leading back from the head. This means the previous head has to be re-written as a diff after each commit.
For more info: http://svn.apache.org/repos/asf/subversion/trunk/notes/skip-deltas
P.S. Our repo is about 20GB, with about 35,000 revisions, and we have not noticed any performance degradation.
Subversion stores the most current version as full text, with backward-looking diffs. This means that updates to head are always fast, and what you incrementally pay for is looking farther and farther back in history.
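A rough sketch of that cost model (hypothetical helper, not Subversion's actual code): with backward-looking diffs, HEAD needs zero delta applications, and rebuilding an older revision costs one diff per step back from HEAD:

```python
def backward_delta_lookups(head, rev):
    """With backward-looking diffs, HEAD is stored as full text and
    revision `rev` is rebuilt by applying one diff per step back,
    so reconstruction cost is proportional to head - rev."""
    if not 0 <= rev <= head:
        raise ValueError("rev must be between 0 and head")
    return head - rev

# Checking out HEAD is free; older revisions cost progressively more:
print(backward_delta_lookups(1000, 1000))  # 0 diffs
print(backward_delta_lookups(1000, 10))    # 990 diffs
```

This is why everyday operations against HEAD stay fast regardless of repository age, and only deep history digs get slower.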