Why Use a Microservice Architecture
By Matt Sicker
Microservice architecture is the latest fad in software development, and as such it comes with numerous conflicting definitions. To help clear up this confusion, we’ll discuss what microservices are, how they relate to older development architecture patterns, and why and when they are useful.
In the enterprise world, service-oriented architecture (SOA) was the established paradigm in software development, and it typically incorporated the SOAP standard and web services. Some concepts from SOA, such as loose coupling and domain-driven design, translate well into microservices, but other concepts such as global transactions and global data consistency do not scale well in such an architecture.
So what are some of the major pain points in SOA that microservices help address?
- Monolithic applications and the issues behind deploying new versions of them
- Widely used but mutually incompatible implementations of SOAP, which tend to nullify the benefits of using a web services standard for loose coupling
- Expensive proprietary software, a cost that has not yet reappeared to a similar extent in the microservices world
- Data scalability and schema evolution in monolithic architectures, which are often more manageable in a microservices architecture
Microservices as SOA
Due in part to the complexity of SOA in practice, many developers eventually found themselves trapped by applications that could not be iterated on rapidly in an agile development environment. Deployments required complicated build systems easily described as “magic”, and entire teams had to be formed just to ensure a single deployment worked properly.
This sort of process worked well enough in the waterfall development days (or at least as well as waterfall development itself worked). But as the software industry migrated to agile development methodologies, the friction of maintaining, deploying, and onboarding new developers into overly complicated monoliths became untenable.
Microservices and HTTP
At this point, some developers began experimenting with what would later be named microservices. They standardized on ubiquitous web technologies: individual services were written to communicate over HTTP using XML, and later JSON, relying on extensive library support for these formats. This borrowed the concept of RESTful APIs, in which services provide resources identified by URIs, and each resource defines the HTTP methods (or verbs) that can be performed on it.
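The resource-plus-verbs idea can be illustrated with a minimal sketch in Python. The "orders" resource and its fields are hypothetical, and the dispatcher stands in for what a real HTTP framework would do:

```python
# Minimal sketch of a RESTful resource: a URI identifies it, and HTTP
# methods (verbs) define the operations allowed on it. The "orders"
# resource here is purely illustrative.
orders = {}      # in-memory store standing in for the service's database
next_id = [1]

def handle(method, uri, body=None):
    """Dispatch an HTTP-style request against the orders resource."""
    if method == "POST" and uri == "/orders":
        order_id = next_id[0]
        next_id[0] += 1
        orders[order_id] = body
        return 201, {"id": order_id, **body}      # 201 Created
    if method == "GET" and uri.startswith("/orders/"):
        order_id = int(uri.rsplit("/", 1)[1])
        if order_id in orders:
            return 200, {"id": order_id, **orders[order_id]}
        return 404, {"error": "not found"}
    return 405, {"error": "method not allowed"}

status, created = handle("POST", "/orders", {"item": "widget"})
status2, fetched = handle("GET", "/orders/1")
```

A real service would put this dispatch behind an HTTP server, but the shape is the same: one resource, a small set of verbs, and well-known status codes on the boundary.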
There are other ideas on how to implement microservices besides HTTP (e.g., OSGi in the Java world provides its own style of microservices that communicate in-memory but are still physically separated within the JVM via separate ClassLoaders), but the general concept behind them all is that each service requires physical separation and must communicate over well-defined boundaries, such as through a network socket. This provides several advantages over traditional monoliths, many of which we will discuss.
It’s important to note that microservice architecture provides more benefits the larger an organization becomes. A small enterprise with only a team or two working on these services may find it overly complicated to break their systems into microservices, particularly with purely CRUD-style applications. Many of the advantages can be achieved in small monolithic applications through developer discipline: modularity, loose coupling of functionality, and high cohesion of individual units of code organized into modules.
First of all, breaking up a monolith into smaller pieces simplifies the individual build and deployment systems of each microservice. While the overall complexity of the system may increase, the savings in developer time and pain are well worth the cost. Each individual microservice is much simpler to understand in isolation, which in theory decreases the time it takes to onboard new developers into a system. This could require the adoption of DevOps practices in an organization, generally considered a good thing regardless.
Choosing appropriate frameworks for each service is also an important advantage as developers no longer have to rely on overly broad enterprise style frameworks. This allows more room for experimentation with newer technology and design patterns, which can then be shared with other developers to help improve the overall system.
Choosing appropriate programming languages, libraries, and frameworks is only one of the many decisions required for each microservice. Since most applications contain some sort of persistent state, they also require a database or other persistent storage. When separating an application into microservices, each microservice can and should own the data it needs while providing access to it through its own public APIs.
A common problem in many real-world implementations of microservices is sharing databases between services. Not only does this tightly couple those applications with regard to scaling, schema management, and other administrative tasks, but it also destroys many of the advantages of a microservice architecture. In practice, a single database cluster can serve multiple microservices, provided each microservice uses its own database within that cluster; physically separating such a database later is much easier than when multiple applications share one database, even if they use their own tables.
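The database-per-service rule can be sketched concretely. In this Python sketch (using SQLite in place of a real database cluster, with hypothetical user and order services), each service owns its database, and the order service reaches user data only through the user service's public API, never through SQL:

```python
# Sketch: two microservices each owning a separate database. Separate
# in-memory SQLite databases stand in for separate databases within one
# shared cluster. Service names and schemas are hypothetical.
import sqlite3

users_db = sqlite3.connect(":memory:")    # owned by the user service
orders_db = sqlite3.connect(":memory:")   # owned by the order service
users_db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
orders_db.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, item TEXT)")

def get_user_name(user_id):
    """The user service's public API -- the only way in for other services."""
    row = users_db.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    return row[0] if row else None

def describe_order(order_id):
    """The order service joins across services via the API, not via SQL."""
    user_id, item = orders_db.execute(
        "SELECT user_id, item FROM orders WHERE id = ?", (order_id,)).fetchone()
    return f"{item} for {get_user_name(user_id)}"

users_db.execute("INSERT INTO users VALUES (1, 'Ada')")
orders_db.execute("INSERT INTO orders VALUES (1, 1, 'widget')")
```

Because no SQL statement ever crosses a service boundary, either database could be moved to its own cluster without touching the other service.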
The choice of database, cache, and other infrastructure add-ons should be left to the individual microservices, though to keep the operations side of things practical, it is generally a good idea to limit the number of different choices in use. For example, an organization may wish to standardize on a single type of relational database, a single NoSQL database, a single time-series database, a single distributed cache, and so on. Of course, this should remain a guideline rather than a hard rule, to prevent wasted effort fighting the tools instead of using what works best in each case.
The physical separation between microservices also makes it harder to build poorly cohesive systems. Code can no longer simply be dumped wherever a developer feels like it. Instead, reusable libraries can be promoted where appropriate, and business logic remains exactly where it belongs: within its appropriate domain.
This can also provide a prime opportunity for an organization to contribute back to the greater community via free and open source software built from those common library components (incidentally, this is how many Apache projects are started). Developers must make a conscious decision to make remote calls into other microservices, which helps enforce loose coupling. Even when developers attempt to tightly couple microservices, the problem becomes apparent much sooner, rather than at the most inopportune moment, such as the day a service hits a usage spike.
This leads to another advantage of microservices: the ability to independently build, deploy, and scale these services. As alluded to before, build scripts can be immensely simplified simply because less code needs to be packaged into a single distributable artifact. Deployments are simplified out of the necessity of supporting numerous microservices, a goal that is often ignored when deploying monoliths.
As for scalability, each microservice can be independently copied and deployed multiple times across a cluster of servers. For microservices to communicate in this scenario, some form of client-side or server-side load balancing is required so that all instances of each microservice are used evenly. This relates to the elasticity aspect of reactive architecture, where services scale up and down to meet the physical requirements of that service at any given time. In general, scaling horizontally like this is simpler than vertical scaling because it doesn’t require specialized or expensive hardware.
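Client-side load balancing in its simplest form is just round-robin rotation over known instances. A minimal Python sketch, assuming a hypothetical inventory service with three deployed copies:

```python
# Sketch of client-side load balancing: rotate round-robin over the known
# instances of a (hypothetical) inventory service so scaled-out copies are
# used evenly. Host names are illustrative.
import itertools

instances = ["inventory-1:8080", "inventory-2:8080", "inventory-3:8080"]
next_instance = itertools.cycle(instances).__next__

def call_inventory(path):
    host = next_instance()              # pick the next instance in rotation
    # A real client would issue the HTTP request here; we return the
    # request line to show which instance was chosen.
    return f"GET http://{host}{path}"

calls = [call_inventory("/stock/42") for _ in range(4)]
```

Real deployments usually delegate this to a service registry plus a smarter balancing policy (least connections, health-aware), but the even-spread goal is the same.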
The double-edged sword
There are disadvantages to using a microservice architecture, however. Deployments are a double-edged sword: they may be simpler on their own, but in the real world this doesn’t eliminate the need for communication with other teams. Because RESTful services provide no machine-enforced contract, maintaining backwards compatibility can become difficult.
Since REST APIs aren’t statically typed, programmatic refactoring across service boundaries is not currently possible. If a backwards-incompatible change is required in one service, its dependent services may also need to deploy updates at the same time. This can slowly lead back into the problems of coordinating monolithic deployments.
Renewed research into RPC-style protocols is working on solving these types of problems, but there is no silver bullet. Following semantic versioning strictly can help smooth migrations between incompatible versions of APIs, but this comes with its own set of limitations, such as the need to support multiple versions of an API while migrations take place, as well as challenges in migrating or synchronizing different versions of the data stores behind the APIs.
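The semantic versioning rule for APIs boils down to a simple check: a client built against one version remains compatible with any later version sharing the same major number. A small sketch:

```python
# Sketch of the semantic versioning compatibility rule: breaking changes
# bump the major version, so a client and server are compatible when they
# share a major version and the server is at least as new as the client.
def parse(version):
    """Turn '1.2.3' into the comparable tuple (1, 2, 3)."""
    return tuple(int(part) for part in version.split("."))

def compatible(client_version, server_version):
    """Same major version, and the server is at least as new as the client."""
    c, s = parse(client_version), parse(server_version)
    return c[0] == s[0] and s >= c
```

Under this rule a `1.2.0` client can talk to a `1.5.3` server, but a major-version bump to `2.0.0` signals exactly the coordinated-deployment problem described above.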
Splitting too much
A common pitfall when breaking a monolith into microservices is splitting them up too finely. This can be compared to the similar problem of dropping relational databases entirely in exchange for a NoSQL-style database. While this pattern lends itself well to certain domains, other domains are relational in nature and can’t be physically decoupled without introducing numerous performance problems.
If this happens, not all is lost! Using a distributed tracing system such as Zipkin can help identify tightly coupled microservices and performance issues in the context of the physical flow of data for particular requests or events. After identifying these parts of a system, they can be rejoined, or the overall architecture of how the data relate to each other can be revisited.
Backend vs. frontend
Another difficult issue in microservice architectures is that the divide between frontend and backend needs can grow ever greater. While the backend attempts to remain logically pure, the frontend must satisfy UI requirements that join data from numerous microservices into a single page. Supporting such UIs is generally much simpler in monolithic applications, but the microservice world is quickly researching and experimenting with alternative approaches to exposing APIs to applications.
An approach that stays true to RESTful APIs is to layer them, similar to how code may be organized in a traditional SOA application. The base layer of APIs is made up of the individual microservices in a system. Another layer of APIs can be created to compose those microservices into more specific aggregate domains. A third layer can then be added to expose a more stable, UI-centric API that in essence translates all the underlying domains into synthetic ones relevant to the user experience.
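The three layers can be sketched as plain functions. Everything here is hypothetical (service names, fields, and the page shape) and the base services are stubbed rather than called over HTTP, but the layering is the point:

```python
# Sketch of layered APIs: base microservices at the bottom, an aggregate
# domain composing them, and a UI-centric layer on top. All names and
# fields are illustrative stubs.

# Layer 1: base microservices (stubbed; really remote HTTP calls).
def user_service(user_id):
    return {"id": user_id, "name": "Ada"}

def order_service(user_id):
    return [{"item": "widget", "price": 10}, {"item": "gadget", "price": 25}]

# Layer 2: an aggregate domain composing the base services.
def account_summary(user_id):
    return {"user": user_service(user_id), "orders": order_service(user_id)}

# Layer 3: a UI-centric API translating domains into what one page needs.
def profile_page(user_id):
    summary = account_summary(user_id)
    return {
        "greeting": f"Hello, {summary['user']['name']}",
        "order_count": len(summary["orders"]),
        "total_spent": sum(o["price"] for o in summary["orders"]),
    }
```

Because the UI layer owns the translation, the base services can be refactored without the page-facing contract changing, and the middle layers are natural places to cache.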
This approach is generally more complicated, of course, but composite API layers, a common design pattern from enterprise application integration, can be easily implemented using integration libraries. This also allows for more intelligent caching and prefetching of data based on common usage patterns.
Another approach is to use an alternative method of interacting with APIs, such as GraphQL. Instead of exposing RESTful APIs directly to the frontend, a GraphQL endpoint lets clients specify the data structures required for a page, along with filters on individual fields of the data, and individual fields can be updated via the same mechanism.
While this is not RESTful in the slightest, it does tend to work better in practice when defining an ever-changing API that both frontend and backend developers can agree on. This also makes it much simpler to refactor individual microservices without disrupting the frontend as the GraphQL server itself performs the data translations into the requested structure. It provides an interesting opportunity for caching and prefetching of data.
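The core idea, the client naming exactly the fields it wants, can be mimicked in a few lines of Python. This hand-rolled resolver only imitates GraphQL's field selection over an illustrative data shape; a real deployment would use an actual GraphQL server library:

```python
# Toy illustration of the GraphQL idea: the frontend names exactly the
# fields it wants, and a single endpoint resolves them (possibly from
# several microservices). Data and field names are hypothetical.
backend_data = {  # stand-in for data gathered from multiple microservices
    "user": {"name": "Ada", "email": "ada@example.com", "orders": 2},
}

def resolve(selection, data):
    """Return only the selected fields, recursing into nested objects."""
    result = {}
    for field, subselection in selection.items():
        value = data[field]
        result[field] = resolve(subselection, value) if subselection else value
    return result

# Roughly the equivalent of the query: { user { name orders } }
query = {"user": {"name": None, "orders": None}}
response = resolve(query, backend_data)
```

The frontend never sees the `email` field it didn't ask for, which is what makes it possible to restructure backend data without breaking existing queries.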
A fully asynchronous approach (which does not currently have a buzzword name that I’m aware of) may be to rely entirely on message queues and topics from frontend to backend. This idea was originally inspired by a joke I made: implement an “everything” API endpoint that simply returns all the user data we know about, so the frontend developers would stop requesting more and more combined REST APIs.
To take this idea to its logical conclusion, the only realistic way I could think to implement it (without an extremely long response latency) would be to interpret the request as a message. Then, using WebSockets, individual pieces of the user’s data could be sent back as messages that feed into a Redux-style state management library, filling in relevant UI elements as the data loads. Existing messaging protocols such as STOMP may be useful in implementing this idea.
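The Redux-style half of that idea is just a pure reducer folding each incoming message into frontend state. A sketch in Python, with hypothetical message shapes standing in for what would arrive over the socket:

```python
# Sketch of the message-driven idea: pieces of user data arrive as
# individual messages (e.g., over a WebSocket) and a Redux-style pure
# reducer folds each one into frontend state, so UI fills in as data
# loads. Topic names and payloads are hypothetical.
def reduce(state, message):
    """Pure reducer: return a new state with the message's payload merged in."""
    return {**state, message["topic"]: message["payload"]}

incoming = [  # messages as they might arrive, in any order, over time
    {"topic": "profile", "payload": {"name": "Ada"}},
    {"topic": "orders", "payload": [{"item": "widget"}]},
]

state = {}
for message in incoming:
    state = reduce(state, message)  # each message updates one slice of UI
```

Because the reducer is pure and each message is independent, the UI can render whatever slices have arrived so far without waiting for a single combined response.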
While this approach of preloading all of a user’s data is not exactly realistic, the underlying idea of passing messages between the frontend and backend is interesting and should be considered as another alternative to traditional synchronous request/response APIs.
Embracing interconnected technology
The patterns behind microservice architectures generally parallel the wider internet. Instead of attempting to abstract away distributed systems, developers should be embracing them. A monolithic architecture attempts to group everything together under the illusion that a simpler local programming model will work out. In reality, these patterns do not scale well on the web, in big data, in the internet of things, in artificial intelligence, or really any non-trivial application of software engineering or computer science.
As technology continues to improve, the past assumptions that allowed monolithic software to thrive no longer hold for most applications. Embrace the horizontal nature of interconnected technology; this is still only the beginning, and the distributed systems of the future will continue to spread out.