Unpopular opinion: The software monolith is wrongly maligned
Mon, 11th Nov 2019

Article by Segment chief technology officer Calvin French-Owen and software engineer Alexandra Noonan

It's fair to say that microservices are having their moment.

The hype is hard to escape.

In fact, according to IDC, 90% of all apps will feature microservices architectures by 2022.

This is because microservices work well for many companies.

When deployed wisely, they can add significant benefits in helping companies adapt at the pace of more nimble competitors.

However, the flip side is that microservices are not a one-size-fits-all solution.

As with any technology, their effectiveness depends entirely on the use case.

This is what happened to us at Segment.

In some cases, microservices worked brilliantly for us.

In others -- and where it really counted -- they led to an explosion in complexity, technical debt, and a serious dip in developer productivity.

This whole experience taught us where microservices are and aren't a good architectural fit.

After careful consideration, we decided to make the shift back to a monolith.

Here is what we learned along the way:

Choices, choices

You can think of microservices as being like building blocks.

Each is a small component that can be combined with large numbers of others to build powerful applications within a service-oriented architecture.

Microservices provide a variety of benefits to an organisation.

They are easy to scale and can be combined in many different ways to achieve desired outcomes. Also, because each can be developed in isolation, they allow coding workloads to be effectively shared across engineering teams.

These days, monolithic architectures are often couched in “legacy” terms.

If you're a startup - at least so the story goes - you're able to skip them entirely, creating modern microservices-based applications from day one.

Like many other startups, we found this alluring.

We saw the potential of microservices and were quick to embrace it.

Our product - which helps companies to clean, collect, and control first-party customer data - ingests thousands of events per second, forwarding them on to partner APIs like Salesforce and Google Analytics.

Microservices seemed like an obvious choice for us, because there are more than a hundred of these server-side destinations.

Taking this approach meant that, when one of the destinations experienced issues, only its queue would jam with requests waiting to be processed, while the others would continue without interruption.
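
To make that isolation concrete, here is a rough sketch in Go of per-destination queues; the event shape, queue size, and sender stubs are illustrative only, not our production code.

```go
package main

import (
	"fmt"
	"time"
)

// Event is a simplified stand-in for an analytics event.
type Event struct {
	Destination string // e.g. "salesforce" or "google-analytics"
	Payload     string
}

// startDestinationWorker gives each destination its own buffered queue and
// worker, so a jammed destination only delays its own events.
func startDestinationWorker(send func(Event) error) chan<- Event {
	queue := make(chan Event, 1000) // per-destination buffer
	go func() {
		for ev := range queue {
			// Retry inside this destination's worker; the other
			// destinations keep draining their own queues.
			for send(ev) != nil {
				time.Sleep(time.Second)
			}
		}
	}()
	return queue
}

func main() {
	// One queue per partner API (the senders here are stubs).
	queues := map[string]chan<- Event{
		"salesforce": startDestinationWorker(func(e Event) error {
			fmt.Println("sent to salesforce:", e.Payload)
			return nil
		}),
		"google-analytics": startDestinationWorker(func(e Event) error {
			fmt.Println("sent to google-analytics:", e.Payload)
			return nil
		}),
	}

	// Incoming events are routed to the queue for their destination.
	ev := Event{Destination: "salesforce", Payload: `{"event":"signup"}`}
	queues[ev.Destination] <- ev

	time.Sleep(100 * time.Millisecond) // give the worker time to drain (demo only)
}
```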

However, after six years of following this strategy, problems started to appear.

Our architecture became more of a distraction, and the benefits microservices had initially provided started to decline.

While we could still see that they delivered significant advantages, we also had to acknowledge one of their biggest downsides.

This was the complexity of managing an ever-growing catalogue of services and shared libraries. Time we could better spend improving our product was spent updating all these tiny codebases.

We want … a monolith?

Suddenly, having an old-style monolithic architecture seemed like a more attractive proposition. However, getting there clearly wasn't going to be an easy journey.

The first step was to replace more than 100 destination queues with a new central system that could send events to a single monolithic service.

We also had to undertake the complex task of migrating our destination code into a single repo, merging the dependencies and tests for 120 endpoints.
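
In simplified terms, the new shape looks something like the sketch below, where every destination is just a module behind a common interface inside one process; the interface and names are illustrative, not our actual code.

```go
package main

import "fmt"

// Destination is the interface each destination's code implements once it
// lives in the single shared repo (all names here are illustrative).
type Destination interface {
	Name() string
	Send(payload string) error
}

type salesforce struct{}

func (salesforce) Name() string { return "salesforce" }
func (salesforce) Send(payload string) error {
	fmt.Println("salesforce <-", payload)
	return nil
}

type googleAnalytics struct{}

func (googleAnalytics) Name() string { return "google-analytics" }
func (googleAnalytics) Send(payload string) error {
	fmt.Println("google-analytics <-", payload)
	return nil
}

func main() {
	// The monolith registers every destination in one process...
	registry := map[string]Destination{}
	for _, d := range []Destination{salesforce{}, googleAnalytics{}} {
		registry[d.Name()] = d
	}

	// ...and routes events from one central queue in-process, instead of
	// fanning out to more than 100 separately deployed services and queues.
	events := []struct{ dest, payload string }{
		{"salesforce", `{"event":"signup"}`},
		{"google-analytics", `{"event":"page_view"}`},
	}
	for _, ev := range events {
		if d, ok := registry[ev.dest]; ok {
			_ = d.Send(ev.payload)
		}
	}
}
```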

After completing this work, we quickly started to see some clear benefits.

The productivity of our development team was immediately boosted, and we also found it fast and easy to make changes to shared libraries.

Interestingly, just one engineer was able to do in minutes what had previously required the deployment of more than 140 services.

There is always a trade-off

Still, it is important to note that neither architecture is perfect; there is no panacea.

Trade-offs were required.

For example -- fault isolation is more difficult in a monolith.

A single bug can sometimes crash your entire service.

Automated testing can definitely help with this, but there is still always a risk.
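
One way to soften this, sketched below in simplified Go, is to wrap each destination handler so a panic is logged and contained instead of crashing the whole process; this illustrates the idea rather than our production error handling, and it cannot contain every failure mode (for example, memory exhaustion).

```go
package main

import (
	"fmt"
	"log"
)

// safeSend wraps a destination handler so a panic in one destination is
// logged and converted to an error rather than crashing the monolith.
func safeSend(dest string, send func(string) error, payload string) (err error) {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("destination %s panicked: %v", dest, r)
			err = fmt.Errorf("destination %s failed: %v", dest, r)
		}
	}()
	return send(payload)
}

func main() {
	buggy := func(payload string) error { panic("nil pointer in mapping code") }
	healthy := func(payload string) error {
		fmt.Println("delivered:", payload)
		return nil
	}

	// The buggy destination fails, but the process keeps serving the others.
	_ = safeSend("buggy-partner", buggy, `{"event":"signup"}`)
	_ = safeSend("healthy-partner", healthy, `{"event":"signup"}`)
}
```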

In-memory caching suffers as well.

Again -- there are tools to help improve this (e.g., Redis), but it is another thing your team have to worry about.
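
A common pattern, sketched below with the open-source go-redis client, is to move hot lookups into a shared Redis cache so the larger service no longer depends on each instance's in-process memory; the address, key, and TTL here are placeholders rather than our real configuration.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// A shared cache outside the process, so redeploying the large service
	// doesn't throw away warm in-memory state.
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // placeholder address

	key := "settings:workspace:123" // hypothetical cache key

	val, err := rdb.Get(ctx, key).Result()
	if err == redis.Nil {
		// Cache miss: load from the source of truth (stubbed here),
		// then store it with a TTL.
		val = `{"enabled_destinations":["salesforce"]}`
		if err := rdb.Set(ctx, key, val, 10*time.Minute).Err(); err != nil {
			fmt.Println("cache write failed:", err)
		}
	} else if err != nil {
		fmt.Println("cache read failed:", err)
		return
	}

	fmt.Println("settings:", val)
}
```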

Finally, when updating a dependency, the chances are higher that you'll unintentionally break multiple destinations.

There is no panacea

As understanding of the benefits of microservices gains traction, their usage is expected to rapidly increase.

Departmental managers will need to explain the business benefits of the strategy to the board to ensure sufficient resources are made available for rapid development and deployment.

It should be noted that following a microservices-based strategy worked for us initially.

It solved performance challenges and isolated destinations in a way that made sense in the early days of our business.

However, we found that the biggest weakness of microservices was how poorly the approach scaled with the growth of our own product.

That growth led to exploding complexity, technical debt, and a significant drop in developer productivity.

In the end, it made the most sense for us to look back so that we could move forward, and so we made the shift from microservices back to a monolith.

This doesn't mean that microservices have disappeared from our environment altogether, and we still use them for plenty of other use cases.

But, as is the case with any architecture, we have found that one size certainly does not fit all.