 

Which Version Control System would you use for a 1000+ developer organization? Why? [closed]

  • For such a huge installation, there are at least the following major requirements: data safety, maturity, robustness, scalability, price (a per-seat licence vs. open source always makes a huge difference, regardless of the price per seat), and ease of administration.
  • I would think that subversion would be just fine.
  • There is support available (from CollabNet, ClearVision, WANdisco, and others). You could ask them whether Subversion would be able to handle your task.
  • Subversion has a very mature storage backend, FSFS. It is absolutely rock solid, and since 1.5 it can handle very large numbers of revisions without performance degradation. Revisions are written as files in the file system, so the reliability of your Subversion repository depends on the quality of your file system, OS, and storage system.
  • This is why I would recommend Solaris 10 with ZFS as the file system. ZFS has really great file system features for production systems. But above all it provides data integrity checksumming. So with this amount of source code in the Subversion repository you won't have to worry about repository corruption from a silent hard drive bit error, or a controller or cable bit error. By now ZFS is mature enough to be safely used as a replacement for UFS or any other file system.
  • I don't know about the hardware requirements. Maybe CollabNet could give you advice.
  • But a really good start (which could be used as NFS storage or backup storage if it turns out to be too slow; you will definitely be able to make good use of it anyway) would be a 2nd-generation thumper, i.e. the Sun Fire X4540 server. In a nice 4U rack server for $80,000 (list price, likely negotiable) you get: 48 TB of disk space, 8 AMD Opteron CPU cores, 64 GB of RAM, Solaris 10 preinstalled, and 3 years of Platinum software and hardware support from Sun. So the mere hardware and support price for this server works out to roughly $27 per seat for your 3000 developers.
  • To assure really great data safety, you could partition the 48 hard drives as follows: 3 drives for the operating system (a 3-way RAID-1 mirror), 3 hot spares (idle, on standby in case another drive fails), and a ZFS pool of 14 3-way RAID-1 mirrors (14 × 3 = 42 drives) for the Subversion repository. If you fill the resulting 14 TB of ZFS mirror space to only 80%, that is approximately 10 tebibytes of real usable disk space for the repository, i.e. an average of over 3 GB per developer.
  • With this configuration (Subversion 1.6 on a Sun X4540 thumper with 10 TiB of 3-way RAID-1 redundant and checksummed ZFS disk space), you have a really serious start.
  • If the compute power isn't enough for 3000+ developers, then you could buy a beefier server which uses the disk space of the thumper. If the disk performance is too slow, you could hook up a huge array of fast SCSI drives to the compute server and use the thumper as a backup solution.
  • Certainly, it would make sense to get consulting services from CollabNet regarding the planning and deployment of this Subversion server, and to get Platinum support for the hardware and the Solaris operating system from Sun.
  • Edit (answer to comment #1): For distributed teams there is the possibility of a master-slave configuration: a WebDAV proxy. Each local team has a slave server which replicates the repository. The developers get all checkouts from this slave, and checkins are forwarded transparently from the slave to the master, so the master is always current. The vast majority of traffic is checkouts: every developer gets every checkin any developer commits, so with 3000 developers the checkout traffic should be about 99.97% of the total. With a local team of 50 developers, the checkout traffic over the slow line would be reduced by 98%. The checkins shouldn't be a problem: how fast can anybody type new code? Obviously, for a small team you won't buy a thumper; you just need a box with enough hard drive space (i.e. 10 TB if you intend to hold the whole repository). It can be a RAID-5 configuration, as data loss on a slave isn't the end of the company. You won't need Solaris either; you could put Linux on it if the local people are more comfortable with that. Again: ask a consultant like CollabNet whether this is really a sound concept. With this many seats it shouldn't be a problem to pay for a one-time consultation; they can set up the whole thing. Sun delivers the box with Solaris preinstalled and you have Sun support, so you won't need a Solaris guru on site, as the configuration shouldn't change for the next few years. This configuration means that
    • the slow line from the team to the headquarters won't be clogged with redundant checkout data, and
    • the members of the local team can get their checkouts quickly
    • it would dramatically reduce the load on the thumper; with that configuration you shouldn't have to worry at all about whether the thumper is capable of handling the load
    • it reduces the bandwidth costs
  • Edit (after the release of the M3000): A much more extreme hardware configuration targeted even more towards insane data integrity would be the combination of a M3000 server and a J4500 array:
    • the J4500 storage array is practically a thumper without the CPU power; its external storage interfaces enable it to be connected to a server.
    • The M3000 server is a SPARC64 server at a midrange price with high-end RAS features. Most data paths and even CPU registers are checksummed. The RAM is not only ECC-protected but has the equivalent of IBM's Chipkill feature, i.e. RAID for memory: not only are single-bit errors detected and corrected, but an entire memory chip can fail completely without any data loss, similar to a failing hard drive in a RAID array.
    • As ZFS checksums the data on the CPU itself (before it is written out and after it is read back), the quality of the storage controller and cabling of the J4500 is not important. What matters are the bit-error prevention and detection capabilities of the M3000's CPU, memory, memory controller, and so on.
    • Unfortunately, the high-quality memory sticks Sun uses to improve reliability even further are so expensive that the combination of the four-core (eight-thread) 4 GB RAM M3000 + 48 TB J4500 would cost roughly the same as the thumper, but if you would like to increase the server memory from 4 GB to 8, 16, or 32 GB for in-memory caching purposes, the price goes up steeply. Still, a 4 GB configuration might be enough if the master-slave configuration for distributed teams is used.
    • This hardware combination would be worth a thought if the management values the source code and data integrity of this 3000-developer repository extremely highly. Then it would also make sense to add two or more thumpers as a rotating backup solution (not necessary to protect against hardware failure, but to protect against administrator mistakes, or for off-site backups in case of physical disasters).
    • As this would be a SPARC and not an x86 solution, certified CollabNet Subversion binaries are freely available for this platform.
  • One of Subversion's advantages is also its excellent documentation: the O'Reilly book Version Control with Subversion is available for free in PDF and HTML versions.
  • To sum it up: with the combination of Subversion 1.6 + Solaris 10 + 3-way RAID-1 redundant and checksummed ZFS + thumper + master-slave server replication for local teams + Sun support + CollabNet/ClearVision/OrcaWare/Karl Fogel consultation + an excellent and free Subversion manual for all developers, you should have a solution which provides
    • Extremely high data safety (very important for this much source code: you do not want to corrupt your repository; bit errors do happen, and hard drives do fail!). You have one master data repository which holds all your versions/revisions really reliably: the main feature of a source control system.
    • Maturity - Subversion has been used by many, many companies and open source projects.
    • Scalability - With the master-slave replication you should not have a load problem on the master server: the load from checkins is negligible, and the checkouts are handled by the slaves.
    • No High Latency for local teams behind slow connections (because of the replication)
    • A low price: Subversion is free (no per-seat fee), the documentation is excellent and free, the master server costs only about $9 per seat per year in hardware and support over a three-year period, the slaves are cheap Linux boxes, the consultancy from CollabNet et al. is a one-time cost, and master-slave replication keeps bandwidth costs low.
    • Ease of administration: Essentially no administration of the master server. The Subversion consultant can deploy everything, and Sun staff will swap faulty hard drives, etc. Slaves can be Linux boxes or whatever matches the administration skills available at the local sites. And the Subversion documentation is excellent.
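For concreteness, the 48-drive layout described above could be sketched in ZFS commands roughly as follows. The c*t*d0 device names are hypothetical (a real X4540 would use its own 48 disk identifiers), and the 3-way mirrored OS pool is assumed to be set up at install time:

```shell
# Repository pool: start with two of the fourteen 3-way mirror vdevs...
zpool create svnpool \
  mirror c1t0d0 c2t0d0 c3t0d0 \
  mirror c1t1d0 c2t1d0 c3t1d0
# ...then repeat "zpool add svnpool mirror <d1> <d2> <d3>" until all
# fourteen 3-way mirrors (42 data disks) are in the pool.

# The three hot spares, kept idle until a data disk fails:
zpool add svnpool spare c4t0d0 c4t1d0 c4t2d0

# A dataset for the FSFS repositories; checksumming is on by default.
zfs create svnpool/repos
```

ZFS stripes writes across all fourteen mirror vdevs, so this layout gives both the 3-way redundancy and good aggregate throughput.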

Having worked at a few companies with 1000+ workers, I've found that, by and large, they all use Perforce.

I've asked, "Why don't you use something else? SVN? Git? Mercurial? Darcs?" The answer (the same at every company) was that when they made the decision to go with Perforce, the choice was between it, SourceSafe, and CVS; honestly, given those three, I'd have gone with Perforce too.

It's hard for 'more difficult' version control systems to gain traction with so many people, and a lot of the benefits of a DVCS matter less when the bulk of your software teams work within 18 feet of one another.

Perforce has a lot of API hooks for developers to use, and for a centralized system, it's got a lot of chutzpah.

I'm not saying that it's the best solution, but I've at least seen some very large companies where Perforce works, and well enough that it's almost ubiquitous.


Git was written for the Linux kernel, which might be the closest example to such a situation you can find public information on.


I want to say Git, but I don't think a company of that size is going to be all Linux (Windows support for Git still sucks). So go with the SCM that Linux used before Git, i.e. BitKeeper.


As of 2015, the most important factor is to use a Distributed Version Control System (DVCS). The main benefit of using a DVCS: allowing source code collaboration at many levels by reducing the friction of source code manipulation. This is especially important for a 1000+ developer organization.

Reducing Friction

Individual developer checkins are decoupled from collaboration activities. Lightweight checkins encourage clean units of independent work at a short-time scale (many checkins per hour or per day). Collaboration is naturally handled at a different, usually longer, time-scale (sync with others daily, weekly, monthly) as a system is built up in a distributed organization.
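A minimal Git sketch of this decoupling (paths and names are purely illustrative): checkins are instant local operations, and synchronization with the shared repository happens later, in bulk.

```shell
set -e
rm -rf /tmp/dvcs-demo && mkdir -p /tmp/dvcs-demo && cd /tmp/dvcs-demo

git init -q --bare central.git        # the shared "collaboration" repository
git clone -q central.git dev          # one developer's private clone
cd dev
git config user.email dev@example.com # local identity just for this demo
git config user.name "Demo Dev"

echo 'step 1' > module.txt
git add module.txt
git commit -qm "local checkin 1"      # instant; no server round-trip, no lock
echo 'step 2' > module.txt
git commit -qam "local checkin 2"     # many such checkins per hour are cheap

git push -q origin HEAD               # collaboration happens at a longer time-scale
```

Only the final push touches the shared repository; in a centralized system, every one of those checkins would have been a server operation.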

Use Git

Of the DVCS options, you should likely just use Git and take advantage of the great communities at GitHub or Bitbucket. For large private organizations, internal community and internal source code hosting may be important (there are vendors selling private hosting systems such as Atlassian Stash and probably others).

The main reason to use Git is that it is the most popular DVCS. Because of this:

  • Git is well-integrated into a wide range of development toolchains

  • Git is known and used by most developers

  • Git is well-documented

Or Mercurial

As an alternative to Git, Mercurial is also very good. Mercurial has a slightly cleaner, more orthogonal command set than Git. In the late 2000s it was better supported than Git on Windows, largely because it had core developers who cared more about Windows.

GUI

For those who would like to use a GUI instead of git and hg on the command line, SourceTree is a great Windows and OS X application that presents a clean interface to both Git and Mercurial.

Obsolete Recommendations

As of 2010, I recommended Mercurial with TortoiseHg: at the time it was the best combination of Windows support and distributed version control functionality.

From 2006-2009, I recommended Subversion (SVN) because it is free and has great integration with most IDEs. For those in the organization who travel or prefer a more distributed model, they can use Git for all their local work but still commit to the SVN repository when they want to share code. This is a great balance between a centralized and distributed system. See Git-SVN Crash Course to get started. The final and perhaps most important reason to use SVN is TortoiseSVN, a Windows client for SVN that makes accessing repositories a right-click away for anyone. At my company, this has proven a great way to give repository access to non-developers.