Microservices are the software architecture of the moment. They’ve been adopted by everyone from eBay to the Government Digital Service, and – unless you’ve been on another planet – the hype has been hard to avoid.
Microservices can be extremely useful, but they aren’t right for everyone. As with any trend, it can be all too easy to get caught up in the excitement and commit to an architecture that causes your team trouble down the line.
At Segment, we’ve experienced both sides of this choice, having adopted microservices early on and subsequently made the shift to a monolith. Here’s what we learnt from our experience:
Weighing your Options
The choice between microservices and a monolithic architecture is anything but easy.
Microservices are like Lego blocks – small, single-purpose processes that can be combined to build applications within a service-oriented architecture. Their proponents claim that they provide better scalability because they are so easy to decouple and recombine. Since they can be developed independently, they have also been touted as a way to easily divide responsibility across engineering teams.
A monolithic architecture does what it says on the tin. Here, a large amount of functionality lives within a single service that is tested, deployed, and scaled as one unit. It’s a structure that has existed for far longer, so it is somewhat less fashionable. But for what monoliths lack in vogueishness and modularity, they are celebrated for their performance advantages, along with the time they can save engineering teams.
The Problem with Microservices
Microservices have many obvious benefits, and they took off fast. We, like many other startups, saw the architecture’s potential and were quick to embrace it.
Our product ingests thousands of events per second, forwarding them on to partner APIs like Salesforce and Google Analytics. Because there are more than a hundred of these server-side destinations, microservices seemed like the obvious choice for us. It meant that when one destination experienced issues, only its queue would jam with requests waiting to be processed. Isolated in this way, no other destinations would be affected, and we could tend to the problem without adding delays to the rest of our pipeline.
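To make that isolation concrete – this is purely an illustrative sketch with made-up names, not our actual code – the fan-out looked roughly like one queue per destination:

```typescript
// Illustrative sketch only – hypothetical names, not Segment's code.
// Each destination gets its own queue, so a struggling partner API
// only backs up its own queue while the others keep draining.
interface TrackedEvent {
  type: string;
  payload: Record<string, unknown>;
}

interface Queue {
  enqueue(event: TrackedEvent): void;
}

// Trivial in-memory queue, standing in for a real broker.
class InMemoryQueue implements Queue {
  private items: TrackedEvent[] = [];
  enqueue(event: TrackedEvent): void {
    this.items.push(event);
  }
}

// One queue per destination, e.g. "salesforce", "google-analytics", ...
const destinationQueues = new Map<string, Queue>([
  ["salesforce", new InMemoryQueue()],
  ["google-analytics", new InMemoryQueue()],
]);

function fanOut(event: TrackedEvent): void {
  for (const queue of destinationQueues.values()) {
    // If one destination is down, only its queue grows; nothing
    // here blocks delivery to the rest of the pipeline.
    queue.enqueue(event);
  }
}
```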
So far, so good. But after six years of building our product, we discovered a problem. As the product scaled, the time and effort required to maintain our codebases multiplied, and the original benefits of microservices started to fade into distant memory. With more than 15,000 customers relying on Segment’s customer data infrastructure, our architecture was becoming a distraction, demanding ever more of our team’s time when we could have been improving our product.
Pressed for time and faced with a sprawling collection of codebases, our engineers had been updating some destinations but not others, leading to divergences and discrepancies. On top of that, load was not spread evenly between destinations. Though we had auto-scaling in place, spikes often required manual intervention. Developer productivity was suffering, and our operational overhead was climbing with every destination we added. Something had to give.
The Monolith Returns
Though microservices have their benefits, we had come face to face with one of their biggest downsides: the sheer complexity of managing an ever-growing catalogue of services and shared libraries. In this light, a monolith suddenly seemed much more attractive – but it wasn’t going to be easy to make the move.
First, we had to replace over a hundred destination queues with a central system responsible for sending events to a single monolithic service. We also had to undertake the messy task of moving our destination code into a single repo, merging the dependencies and tests for 120 endpoints.
Piece of cake? Hardly. But once we had moved all destinations to a single monolithic service, our developer productivity rocketed. It’s now fast and easy to make changes to our shared libraries, with a single engineer able to do in minutes what had previously required the deployment of over 140 services. The monolith has also made scaling easier, with a good mix of CPU-intensive and memory-intensive destinations living together in one service.
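To picture the new shape – again a hedged sketch with hypothetical names rather than our real implementation – the monolith replaces all those per-destination queues with a single in-process dispatch:

```typescript
// Hypothetical sketch of the monolithic shape: one process owns every
// destination handler and dispatches each event in-process, instead of
// routing it through a hundred-plus separate queues and services.
interface TrackedEvent {
  type: string;
  payload: Record<string, unknown>;
}

type Handler = (event: TrackedEvent) => Promise<void>;

// All destination code now lives in one repo and registers here, so a
// change to a shared library reaches every destination in one deploy.
const handlers = new Map<string, Handler>();

async function dispatch(event: TrackedEvent): Promise<void> {
  // Every destination is served by this one service.
  await Promise.all(
    [...handlers.values()].map((handler) => handler(event))
  );
}
```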
The Trade-Off
Moving to a monolith wasn’t just a major undertaking – it also required trade-offs. Neither architecture is perfect, so it’s something to think long and hard about before jumping in.
For one thing, fault isolation is more difficult in a monolith: a bug introduced in one destination can crash the service for every destination at once. Automated testing can certainly help you out, but it will only get you so far.
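One common way to soften this – our illustration here, not a description of Segment’s production code – is to catch and log each destination’s errors at the dispatch boundary, so a thrown exception fails one delivery rather than the whole process:

```typescript
// Illustrative mitigation, not a description of Segment's code: catch
// and log per-destination errors at the dispatch boundary, so a thrown
// exception fails one delivery instead of crashing everything.
type Handler = (event: { type: string }) => Promise<void>;

async function safeDeliver(
  name: string,
  handler: Handler,
  event: { type: string }
): Promise<boolean> {
  try {
    await handler(event);
    return true;
  } catch (err) {
    // Only this destination's delivery fails; the rest carry on.
    console.error(`destination ${name} failed:`, err);
    return false;
  }
}
```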
In-memory caching is also less effective in a monolith, and though it’s possible to address this with something like Redis, that’s another point of scaling you’ll have to account for.
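As a rough sketch of what that looks like – the key names, TTL, and the loadSettingsFromDb helper below are all hypothetical – a shared Redis cache (here via the ioredis client) stands in for each process’s in-memory one:

```typescript
// Sketch of sharing cache state via Redis (using the ioredis client)
// instead of relying on each process's in-memory cache.
import Redis from "ioredis";

const redis = new Redis(); // defaults to localhost:6379

async function getUserSettings(userId: string): Promise<string | null> {
  // Check the shared cache first; key naming here is illustrative.
  const cached = await redis.get(`settings:${userId}`);
  if (cached !== null) return cached;

  const fresh = await loadSettingsFromDb(userId);
  await redis.set(`settings:${userId}`, fresh, "EX", 300); // 5-minute TTL
  return fresh;
}

// Hypothetical stand-in for the real database lookup.
async function loadSettingsFromDb(userId: string): Promise<string> {
  return JSON.stringify({ userId, plan: "default" });
}
```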
Finally, when updating one dependency to a new version, a monolithic architecture means you might unwittingly break multiple destinations. This can be harder to untangle than it would be in a microservices set-up, but an automated test suite can quickly show you where problems lie – overall, this is one trade-off that feels worth it for the simplicity you’ll gain.
Back to the Future
Our move from microservices to monolith may be unusual, but it’s a lesson for anyone trying to cut through the hype to decide on the right architecture for their business.
In fact, we expect to see other technology companies taking a similar journey to ours in the coming years.
Microservices worked for us initially, solving performance challenges and isolating destinations in a way that made sense in the early days of our business.
But their biggest weakness was scalability, with the growth of our product leading to exploding complexity, technical debt, and a significant drop-off in developer productivity.
In the end, it made most sense for us to look back so that we could move forward. Although microservices won’t disappear – we still use them for plenty of other use cases – just like any architecture, they are not one-size-fits-all.
Sometimes, a good old monolith is still your best option.