Beyond Git: The other version control systems developers use
At my first job out of college (pre-Y2K), I got my first taste of version control systems. We used Microsoft’s Visual SourceSafe (VSS), which kept a repository of all the files needed for a release; that release was then burned onto a disc and sent to people through the mail. If you wanted to work on one of those files, you had to check it out from the repo—literally, like a library book. That file would be locked until you checked it back in; no one else could edit it. In essence, VSS was a shield on top of a shared file folder.
Microsoft discontinued VSS in 2005, coincidentally the same year as the first release of Git. Technology has shifted and improved quite a bit since then, and Git has emerged as the dominant choice for version control. This year, we asked what version control systems people use, and Git came out as the clear overall winner.
But it’s not quite a blowout; there are two other systems on the list: SVN (Apache Subversion) and Mercurial. There was a time when both were prominent in the market, but not everyone remembers those days. Stack Overflow engineering has used both in the past, though we now use Git like almost everybody else.
This article will look at what those version control systems are and why they still have a hold on some engineering teams.
Apache Subversion
Subversion (SVN) is an open-source version control system that maintains source code on a central server; anyone looking to change code accesses those files from a client. This client-server model is an older style, compared to the distributed model Git uses, where changes can be stored locally, then distributed to the central history (and other branches) when pushed to an upstream repository. In fact, SVN builds on historical version control—it was initially intended to be a mostly compatible successor to CVS (Concurrent Versions System), which is itself a front end to and expansion of the Revision Control System (RCS), initially released way back in 1982.
This earlier generation of version control worked great for the way software was built ten to fifteen-plus years ago. A piece of software would be built from a central repository, with any and all feature additions merged into a trunk. Branches were rare and eventually absorbed into the mainline. Important files, particularly large binaries, could be “locked” to prevent other developers from changing them while you worked on them. And everything existed as directories—files, branches, tags, etc. This model worked great for a centrally located team that eventually shipped a release, whether as a disc or a download.
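If you never used this generation of tools, the whole workflow ran through the server. A minimal sketch, with a hypothetical repository URL:

```
# Check out a working copy from the central server
svn checkout https://svn.example.com/repo/trunk project
cd project

# Lock a binary so no one else can change it while you work
svn lock assets/logo.psd

# Commit sends the change straight to the server (and releases the lock)
svn commit -m "Update logo"
```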
SVN is a free, open-source version of this model. One of the paid client-server version control systems, Perforce (more on this below), had some traction at enterprise-scale companies, notably Google, but for those unwilling to pay the price for it, SVN was a good option. Plenty of smaller companies (including us at the beginning) used centralized version control to manage their code, and I’m sure plenty of folks still do, whether out of habit or preference.
But the way engineering organizations work has changed pretty drastically in the last dozen years. There is no longer a central dev team working on a single codebase; you have multiple independent teams, each responsible for one or more services. Stack Overflow user VonC has made himself a bit of a version control expert and has guided plenty of companies away from SVN. He sees it as a technology built for a less agile way of working. “It does get in the way, in terms of management, repository creation, registration, and the general development workflow. As opposed to a distributed model, which is much more agile in those aspects. I suspect the recent developments with remote working will not help those closed environment systems.”
The other reason SVN fell out of use is that Git showed how things could be better. Quentin Headen, Senior Software Engineer here at Stack Overflow, used SVN early in his career. “In my opinion, the two biggest drawbacks of SVN are that first, it is centralized, which requires the SVN server to be up for you to commit changes. If your internet is down, you can’t commit at all. Second, the branching is very heavy. Once a branch is created, you can’t delete it (if I remember correctly). I think there is a command to remove it, but it stays in history regardless. Git branches are cheap and can be deleted easily if need be.”
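The contrast is easy to see on the command line; a rough sketch (branch names invented):

```
# SVN: a branch is a server-side copy, recorded as a new revision;
# deleting it removes the path, but both operations stay in history
svn copy ^/trunk ^/branches/feature -m "Create branch"
svn delete ^/branches/feature -m "Remove branch"

# Git: a branch is just a movable pointer to a commit
git branch feature      # create
git branch -d feature   # delete; nothing is left behind once the commits are merged
```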
Clearly, SVN lost prominence when the new generation of version control arrived. But git wasn’t the only member of that generation.
Mercurial
Mercurial first arrived the same year as Git—2005—and the two quickly became the primary players in distributed version control. Early on, many people wondered what differences, if any, the two systems had. When Stack Overflow moved away from SVN, Mercurial won out mostly because we had easy access to hosting through Fog Creek Software (now Glitch), another of our co-founder Joel Spolsky’s companies. Eventually, we too gave in to Git.
Initially, Mercurial seemed to be the natural fit for developers coming from earlier VC systems. VonC notes, “It’s the story of VHS versus Betamax.”
I reached out to Raphaël Gomès and Pierre-Yves David, both Mercurial core developers, about where Mercurial fits into the VC landscape. They said that plenty of large companies still use Mercurial in one form or another, including Mozilla, Facebook (though they may have moved to a Mercurial fork ported to Rust called Eden), Google (though as part of a custom VC codebase called Piper), Nokia, and Jane Street. “One of the main advantages of Mercurial these days is its ability to scale on a very large project (millions of commits, millions of files). Over the years, companies have contributed performance improvements and dedicated features that make Mercurial a viable option for extreme-scale monorepos.”
Ry4an Brase, who works at Google and uses their VC, expanded on why: “git is wed to the file system. Even GitHub accesses repositories as files on disk. The concurrency requirements of very large user bases on a single repo scale past filesystem access, and both Google and Facebook found Mercurial could be adapted to a database-like datastore and git could not.” However, with the recent release of Git v2.38 and Scalar, that advantage may be lessened.
But another reason that Mercurial may stay at these companies with massive monorepos is that it’s portable and extendable. It’s written in Python, which means it doesn’t need to be compiled to native code, and therefore it can be a viable VC option on any OS with a Python interpreter. It also has a robust extension system. “The extension system allows modifying any and all aspects of Mercurial and is usually greatly appreciated in corporate contexts to customize behavior or to connect to existing systems,” said Gomès and David.
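To give a flavor of that, here is a minimal sketch of a toy extension; the command name and output are invented for illustration:

```python
# hello.py: a toy Mercurial extension (hypothetical example).
# Mercurial's internal strings are bytes, hence the b'' literals.
from mercurial import registrar

cmdtable = {}
command = registrar.command(cmdtable)

@command(b'hello', [], b'hg hello')
def hello(ui, repo, **opts):
    """print a greeting and the repository's revision count"""
    ui.write(b'hello from %s\n' % repo.root)
    ui.write(b'%d revisions\n' % len(repo))

# Enable it in your hgrc:
#   [extensions]
#   hello = /path/to/hello.py
```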
Mercurial still has some big fans. Personally, I had never heard of it until some very enthusiastic Mercurialists commented on an article of ours, A look under the hood: how branches work in Git.
babaloomer: Branches in mercurial are so simple and efficient! You never struggle to find the origin of a branch. Each commit has the name of its branch embedded in it, you can’t get lost! I don’t know how many times I had to drill down git history just to find the origin of a branch.
Scott: Mercurial did this much more intuitively than Git. You can tell the system is flawed when the standard practice in many workflows is to use “push -f” to force things. As with any tool, if you have to force it something is wrong.
Of course, different developers have different takes on this. Brase doesn’t think that Mercurial’s branching is necessarily better. “Mercurial has four ways to do branches,” he said, “and the one that was exactly like git’s was called ‘bookmarks’, which the core developers were slow to support. What Mercurial called branches have no equivalent in git (every commit is on one and only one branch, and it’s part of the commit info and revision hash), but no one wanted that kind.” Well, maybe not no one.
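For the curious, the two models look roughly like this (branch and bookmark names invented):

```
# Named branch: the name is recorded in every commit made on it, permanently
hg branch feature
hg commit -m "Work on feature"
hg log -r . --template "{branch}\n"   # prints: feature

# Bookmark: a movable pointer to a commit, the closest analogue to a git branch
hg bookmark my-feature
```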
Mercurial is still an active project, as Gomès and David attest. They contribute to the code, manage the release cycles, and hold yearly conferences. While not the leading tool, it still has a place.
Other version control systems
In talking to people about version control, I found a few other interesting use cases, primarily around paid version control products.
Remember when I said I’d have more on Perforce? Several people mentioned it, even though it didn’t register on our survey. It turns out that Perforce has a strong presence in the video game industry—some even consider it the standard there. Rob Oates, an industry veteran who is currently the senior director of technology and partnerships at Exploding Kittens, said, “Perforce still sees use in the game industry because video game projects (by variety, count, and total volume of assets) are almost entirely not code.”
He gave four requirements that any version control system would need to fulfill in order to work for video game development:
- Must be usable by laypersons – Artists and designers will be working in this system day-to-day.
- Must lock certain files/types on checkout – Many of our files cannot be conceptually or technically merged.
- Must be architected to handle many large files as the primary use case – Many of our files will be dozens or hundreds of megabytes.
- Must avoid degenerate case with delta compression schemes – Many of our large files change entirely between revisions.
Perforce, because of its centralized server and file-locking mechanism, fits perfectly. So why not separate the presentation layer from the simulation logic, storing the big binary assets in one place and the code in a distributed system that excels at merging changes? Because the code in video games often depends on the assets. “For example, it would not be unusual for a game’s combat system to depend on the driving code, the animations, the models, and the tuning data,” said Oates. “Or a pathfinding system may depend on a navigation mesh generated from the level art. Keeping these concerns in one repo is faster and less confusing when a team of programmers, artists, and designers are working to rapidly iterate on the ‘feel’ of the combat system.”
The engineers at these companies often prefer git. When they have projects that don’t have artists and designers, they can git what they want. “Game engines and middleware have an easier time living on distributed version control as their contributors are mostly, if not entirely, engineers,” said Oates. Unfortunately for the devs on video games, most projects have a lot of people creating non-code assets.
Another system that came up was Team Foundation Version Control (TFVC). It’s a Microsoft product originally included in Team Foundation Server and still supported in Azure DevOps. It’s considered the spiritual successor to VSS and is another central-server-style VC system. Art Gola, a solutions architect with Federated Hermes, told me about it. “It was great for its time. It had an API, was supported on Linux (Team [Explorer] Everywhere), and had tons of people using it that no one ever heard from since they were enterprise devs.”
But Gola’s team is actively trying to move their code out of the TFVC systems they have, and he suspects that a lot of other enterprise shops are too. Compared to the agility git provides, TFVC felt clunky. “It requires you to have a connection to the central server. Later versions allow you to work offline, but you only had the latest version of the code, unlike git. There is no built in pull request type of process. Branching was a pain.”
You might assume that, with the age of centralized version control waning and distributed version control ascendant, there is no innovation left in the VC space. But you’d be mistaken. “There are a lot of cool experiments in the VCS space,” said Patrick Thomson, a GitHub engineer who compared Git and Mercurial in 2008. “Pijul and the theory of patch algebra, especially—but Git, being the most performant DVCS, is the only one I use in industry. I work on very large codebases.”
Why did Git win?
After seeing what the version control landscape looks like in 2022, it may be obvious why distributed version control won out as the VC of choice for software developers. But it may not be immediately obvious why Git has such a commanding share of the market over Mercurial. Both came out around the same time and have similar, though certainly not one-to-one, features. And plenty of people prefer Mercurial. “For personal projects, I pick Mercurial. If I was starting another company, I’d use Git to avoid having to retrain and argue with new hires,” said Brase.
In fact, Mercurial should have had an advantage: it was familiar to SVN users and the centralized way of doing things. “Mercurial was certainly the most easy to use and more familiar to use because it was a bit like using Subversion, but in a distributed fashion,” said VonC. But that fealty to the old ways may have hurt it as well. “That is also one aspect which was ultimately against Mercurial, because just having the vision of using an old tool in a distributed fashion was not necessarily the best fit to develop in a decentralized way.”
The short answer for why Git won comes down to a strong platform and a built-in user base. “Mercurial lost the popularity battle in the early 2010s to Git. It’s something we attribute in large part to the soaring rise of GitHub at that time, and to the natural endorsement of Git by the Linux community,” said Gomès and David.
Mercurial may have started out in a better position, but it may have lost ground over time. “Mercurial’s original fit was a curated, coherent user experience with a built-in web UI,” said Brase. “GitHub gave git the good web UI, and ‘coherent’ couldn’t beat the feature avalanche from Git contributors and the star power of its founder.”
That feature avalanche and focus on user needs may have been a hidden factor in pushing adoption. Thomson, in his comparison nearly fifteen years ago, likened Git to MacGyver and Mercurial to James Bond. Git let you scrape together a bespoke solution to nearly every problem if you were a command-line wizard, while Mercurial—if given the right job—could be fast and efficient. So where does Thomson stand now? “My main objection to Git—the UI—has improved over the years (I now use an Emacs-based Git frontend, which is terrific), whereas Mercurial’s primary drawback, its slow speed on large repositories, is still, as far as I can tell, an extant problem.”
Like MacGyver, Git has been improvising and adapting to fit whatever challenges come its way. Like James Bond, Mercurial has its way of doing things. It works great for some situations, but it has a distinct point of view. “My favorite example of a difference in how git and Mercurial approach new features is the `config` command,” said Brase. “Both `git config` and `hg config` are commands to edit settings such as the user’s email address. The `git config` command modifies `~/.gitconfig` for you and usually gets it right. The Mercurial author refused all contributions that edited a config file for you. Instead `hg config` launched your text editor on `~/.hgrc`, saying ‘What is it with coders who are intimidated by text-based config files? Like doctors that can’t stand blood.’”
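In practice, the difference looks something like this (the email address below is a placeholder):

```
# Git edits the config file for you
git config --global user.email "dev@example.com"   # writes to ~/.gitconfig

# Mercurial hands you the file instead
hg config --edit   # opens your user config (e.g., ~/.hgrc) in $EDITOR
```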
Regardless, it seems that while Git feels like the only version control game in town, it isn’t. Options for how to solve your problems are always a plus, so if you’ve been frustrated with the way it seems that everyone does things, know that there are other ways of working, and commit to learning more.
Tags: git, mercurial, perforce, svn, version control
34 Comments
I was a volunteer contributor to bzr, mainly working on the qbzr GUI. Having put much hard work into bzr, I was very emotionally tied to seeing it “win out” over git and hg.
But around 2010 I could see that the writing was on the wall: git was going to win the DVCS war. I gave a lot of thought to why that was. My conclusions:
* Network effect. git took a slight early lead, I think through Linus and Linux’s visibility, and that early lead snowballed through network effect.
* While the git UI was bad and difficult to learn, it was flexible and could cater to different requirements (even if this required hacks). bzr and hg, on the other hand, had strong opinions on how they should be used, and this reduced flexibility.
Jelmer Vernooij, another bzr contributor, wrote an excellent retrospective that gives some other insights to git’s win: https://www.jelmer.uk/pages/bzr-a-retrospective.html
It’s a bit sad that Fossil (https://fossil-scm.org) wasn’t mentioned.
It is definitely more obscure than Mercurial, and it came to the party last (well, maybe Veracity was announced, and then died, later), but it has unique properties to show.
I would say, for a one-man gig this could easily be a weapon of choice.
I, for one, while I am a Git aficionado, have a sweet spot for this DVCS in my heart 😉
+1 for getting more eyes on fossil-scm. I’d add that it was created by the creator of SQLite (and in support of it).
Another (D)VCS that is interesting to me is https://www.plasticscm.com/ – oriented around Game/Media dev.
Another point I wanted to add to my comment:
Atlassian’s Bitbucket was Mercurial-first, IIRC, or at least Mercurial was a first-class citizen there. When they dropped that support and went Git-first/only, that’s when I, too, gave up on HG.
My former employer used SVN in their organization. The “trunk” was the active development branch, from which a “version_XYZ” branch broke out about once a month. Those version branches would gradually fall behind as fewer and fewer commits to the trunk were merged “down” into the versions. It was a feasible way to manage “versioning” software. But what really killed the joy was that you had to check out all branches into their own directories, duplicating files. So if you wanted to merge a critical bug fix six versions back (which is common in enterprise software with support contracts for older versions), that meant you had to download the whole repository six times onto your drive, which in turn was so much data it wasn’t feasible to do it on the SSD but had to be done on the slower, bigger HDD. Using Git, checkout or switching of branches could’ve been done in a matter of seconds, because 95% of the files didn’t change from version to version.
Today, I’m only ever using Git. But sometimes I miss SVN, especially the more linear commit history, where a commit meant a guaranteed globally consistent sort order of changes, instead of changes committed a month ago but merged today appearing at the top.
Not a git issue, though? If you “only merged today”, then the previous month’s change was never integrated and is therefore fresh.
Look into trunk-based development, the continuous integration that comes with it, or at the very least not letting your branches go stale.
There are tons of ways of keeping a linear history in git, including rebasing, it just isn’t worth the trouble most of the time.
Ummm . . . no. To merge something six versions back, SVN just downloads one old copy of the system, and if you’re merging into six different versions you should be doing and testing them one at a time anyway. If you’re limited by SSD vs HDD, so sad; six versions back, people probably didn’t even have SSDs, and for a hundred bucks you can probably buy another SSD and plug it into your system anyway, so that’s not a sensible objection (and BTW, go buy some more RAM, it’s more bang for the buck anyway). These aren’t problems with SVN, they’re problems with practical work on your computer. But that’s what we get paid for.
You could of course checkout a branch into a separate directory. But there was also “svn switch” to switch a working copy to a different branch by applying only the changes.
https://svnbook.red-bean.com/en/1.7/svn.ref.svn.c.switch.html
There’s no need to have that many copies in svn. The svn switch command lets you switch to a different branch in place.
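For example (branch path hypothetical):

```
svn switch ^/branches/release-1.2   # rewrites the working copy in place to match the branch
```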
Of course, Perforce (Helix) can merge files where required, and often, I find, better than Git can. In fact, P4Merge is one of the best merge resolution tools I’ve seen. You can pick this change from revision A, and that change from revision B, and the other change from both revisions. And if you need to manually resolve changes in a non-obvious way, you can do it right inline in the P4Merge window. Click save, and the local file is updated to match the changes. And all of this happens before you’ve committed your change, so you don’t need to worry about rebasing or opening another pull request to handle the merge.
We have also found Perforce (Helix) to be superior to Git when it comes to cherry picking integrations. Our use-case requires environment-based server-side source control where people making changes don’t have the luxury of private branches. With Perforce cherry picking integration we can offer users the flexibility of moving their change forward out of the shared dev branch once they feel it is ready, regardless of order of submission – a very powerful (and often required) option to have when it comes to large complex projects with many people making changes on the server. We also created git-based tools for this use-case but found that they are much less flexible due to challenges with automating cherry-picking integration in git. We do use git in other places, but the majority of our work for environment-based branch workflows is in Perforce due to its unique strengths for supporting that model.
P4Merge is the first thing I install over git 🙂
[merge]
    tool = p4merge
[mergetool "p4merge"]
    cmd = "C:/Program Files/Perforce/p4merge.exe" "$BASE" "$LOCAL" "$REMOTE" "$MERGED"
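With that in place, `git mergetool` launches P4Merge on each conflicted file.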
It was such a great tool 10 years ago.
It is still a good one, but it’s a shame it hasn’t seen feature improvements in at least 10 years! See https://www.perforce.com/perforce/r23.1/user/p4vmergenotes.txt
After years of usage, I took the time years ago to write a list of improvements and fixes that could be made.
After an encouraging answer, nothing changed.
The main issue and request that I remember, and that I think could be fixed in one day of work, is that for a conflict, when you want both changes and shift+click on both sides, p4merge should respect the order of the clicks and put the first-clicked change first.
Because at the moment the result is always the same, and it ends up that if the order is important, you have to edit the result manually 😕
I spent like 10 years before git, and tried like 4 different VCSs. Why would you even want to think about going back to one of those?
Because they worked? Because the company has a long-installed base and long-stable user training and long-experienced users and that’s what everyone is comfortable with – and it’s easier to train fresh-outs to use Subversion than to transfer archives and history and experience and stability to something new? Stop thinking that projects are done at the end of a class; start thinking about life-safety or medical projects that have been stable for a *decade* and require certifying authority review for any change because it might kill someone. No, we’re not changing the archive and losing the history on that.
It was mentioned above, but a huge part of git’s appeal early on was its power to act as a platform. The tools it provides for distributed file management (and it provides them well) were practically non-existent (or not as consolidated/open/powerful) at the time. Now, before I tackle any project related to distributed text file management (excluding database/SQLite problems), I try to see how much I can leverage git’s internals, protocols, and tools, just providing fancy wrappers for the specific application needs. (Or how many projects are even feasible to spec or implement, purely because of git.)
An excellent article on the platforming power of git: https://apenwarr.ca/log/20080131
As for git’s domination of the VCS space: in my experience in the early 2010s, the relatively simple command-line UI/UX and spin-up, with or without GitHub etc., was why I used it, aside from large binary files, where (Tortoise)SVN/Perforce are still king.
@Andrew J Leer: I don’t think they actually meant “beyond” (the author is probably a Toy Story fan). Consider the word as “not yet using”, like I did, and the piece reads better.
Wow, it’s interesting to hear about your experience with VSS and how it has evolved over the years. It’s clear that Git has become the dominant choice for version control systems, and it’s amazing to see how technology has advanced since VSS’s discontinuation in 2005. It’s great to see that Git has been able to adapt and improve to meet the needs of developers.
“My code speaks for itself” is not quite as bad as “if it was difficult to write, it should be difficult to read” ;-)
idk, git isn’t really anything special. Git seems to be the de facto standard, yet every company I work for (big/small) seems to have little to no understanding of how to properly resolve merge conflicts when merging many disparate feature branches together.
You figure, Torvalds created git because he’s just an antisocial curmudgeon. While that may work for one developer working more or less autonomously, who does not need to be in fairly frequent communication with team members, it does not translate well to big companies where repos may have hundreds of concurrent feature branches in flight at any given point.
Basically, I don’t think the source control system is as important as there being a centralized understanding and means of conflict resolution between team members. Tools, plugins, etc. try to manage this with “hooks”, but then you are just placing the problem in a different-sized box (how do I make the thing work with the hook), and when issues bubble up, you are now one more step further from understanding the root cause and resolution. If anything, it just creates more points of failure, because one is now relying on the support of any infrastructure/libraries for those hooks; so then the company makes an architecture/devops/BRM (build release management) team, and that creates yet another abstracted box that developers assume “I don’t need to understand it, let them figure it out”, and before long the bottlenecks absolutely choke all forward movement.
It would be better to focus more on clearly documenting objectives, ensuring all team members understand everything, and frequent check-ins to avoid stale logic: when you work on a project long enough, you end up unwittingly introducing regressions into the code because you are not holding yourself accountable for every line of code; one just assumes “I accomplished what I needed to in my near past and know what I need for today”. That myopia just does not work in any context.
I know everyone doesn’t like talking to each other and to different teams (and that it’s basically become socially acceptable for programmers to be total asses to anyone they don’t think can “understand”), but if you cannot communicate well, you sure as shoot ain’t going to be getting a clear understanding of how/what/when/where/why you’re coding some way and what the true end result should be.
And also, programmers ain’t special, we’re mostly just learning frameworks anymore and all actual critical thinking has been abstracted away, so everyone needs to put the egos away and actually talk through issues and stop assuming “my code speaks for itself”
I like Mercurial. But you should also have a look at Fossil (https://www2.fossil-scm.org/home/doc/trunk/www/index.wiki), which is what I use now.
Everything in this discussion is about PEOPLE, not about TECHNOLOGY. Do the people even DO source control and version control properly? When I started at one company, they did “version control” for each *version*: not archiving their work every day, or at every successful code-test-verify cycle, but every *version*, after multiple weeks or months of work. The idea of committing their work every day or two was an unheard-of imposition of time and effort.
I introduced Subversion, and stayed with it for years until I left that company. Most of the people at the hardware-focused company didn’t even understand or properly use this old-fashioned, simplistic system; the prospect of using git was terrifying to our user base, aside from the rare fresh-out who had never heard of anything before git and thought it was the only way of doing anything. And since we really WERE centralized in the office, with central servers, because that was the only way that many hardware-focused companies worked before about 15 years ago, SVN was fine for the purpose. Also, we didn’t have too many variants of code, because people didn’t understand variants and therefore didn’t implement them well.
Yes, this may sound like “How did you do things back in the dark ages, Grandpa?” But I used even simpler and more human-work-intensive systems like PVCS (released 1985) where you had to edit the change list by hand – keeping “pure” copies of the original text around would have taken up too much space on the tiny HDDs of the era. The important thing is, Do you properly archive your work?, and Do you understand and use the features of your tools?
I’ve been following Pijul for a while and I think it’s worth the look. https://pijul.org/
bit.dev 🙂
Just as an aside, if you’re going to refer to 93.87 as a whole number, it would be 94, not 93.
I seriously miss monotone (www.monotone.ca).
*Loved* that I could back up my day’s work in the office onto a USB key (as a single SQLite DB file) and restore it on the train ride home without a network.
Found that it handled merge conflicts better than other tools I was using, including git.
Worth mentioning that Torvalds played with it before starting git, but found it too heavy for his intended use, among other complaints (https://lwn.net/Articles/132000/ and https://lwn.net/Articles/250012/)
Have used git, mercurial, perforce, svn, vss, dvcs at one point or another – much prefer git by a large margin.
Isn’t the Git vs. Mercurial comparison missing something? I remember reading somewhere years ago that Mercurial offered a better way to deal with libraries than Git. And in my more recent uses of Git, I see nothing to address the difficulty with libraries.
For most of my own projects I prefer to use Subversion. I still use RCS when I’m working on something that is only a single file. I only use git because open source projects I use or work on use it, but I don’t like it. It is too easy to get into merge conflicts, or other problems, that are a pain to handle. I have lost work on several occasions while using git, which is something that has never happened in my years of using RCS, CVS, or Subversion. Over the time I’ve been using git, I have learned how to deal with some of the problems it throws at me and how to mitigate others.
Have you been to Japan? It’s still prehistoric here… 90% of the companies still use SVN (and their engineers think that they are competent!)
A section of your description of SVN is just plain wrong. And some of the comments saying that to work on different branches in SVN you have to check out each one separately shows that misunderstandings about SVN are widespread.
The section I disagree with is this one “Second, the branching is very heavy. Once a branch is created, you can’t delete it (if I remember correctly). I think there is a command to remove, but it stays in history regardless”.
The branching in SVN is actually basically free, because it has a “copy on write” methodology. If you make a branch or tag (in SVN both are just “copy” operations), then all that happens is that a new revision is created. No data is copied (as should be obvious to anyone who has committed a server-side branch or tag creation for a big SVN repository: it happens virtually instantaneously, which shows that there is no way gigabytes of data are being copied).
And one big driver for version control is to prevent accidental deletion – so I would argue that a version control system where an established branch can be easily “really deleted” is potentially dangerous.
And a story for why a distributed version control system is not always good. I worked for a company where a rather disorganised development team was disbanded. They were using Git for their source control system. When the team that was going to take over what they had been working on came in, they discovered over 100 different instances of the Git repository, all with different (presumably mostly incompatible) changes, and sorting out which was the right version of all the different parts of the code took ages.
With a centralised version control system like SVN, it is virtually impossible for a team to end up in a mess like that, but in an environment where anyone can clone a repository, make and commit local changes, and carry on working without ever having to push things back to a central location, it is really easy for inexperienced or poorly disciplined development teams to end up in a huge mess.
The initial comment on Mercurial:
“One of the main advantages of Mercurial these days is its ability to scale on a very large project (millions of commits, millions of files). Over the years, companies have contributed performance improvements and dedicated features that make Mercurial a viable option for extreme-scale monorepos.”
Thomson says lower down:
“…whereas Mercurial’s primary drawback, its slow speed on large repositories, is still, as far as I can tell, an extant problem.”
So which way is it?