Roger Ngo's Website

Personal thoughts about life and tech written down.

Microservices Architecture


In recent years, systems old and new have increasingly adopted the microservice architecture. It all began with the need to scale large web applications that started out as single, multi-tiered instances. We define this large, multi-tiered single application instance as the "monolith". In the effort to scale such systems, much work has been done over time to find a new way to model applications that must adapt to change with high velocity. The result of these efforts is the microservice architecture.

Microservices, from a technical perspective, are not new. It is the implementation of the idea in practice that is novel. As of 2018, many large-scale web applications have been constructed with the microservice pattern. Organizations which have successfully implemented some form of microservice architecture include Netflix, Amazon Web Services, eBay and Facebook.

In this article, I will discuss the microservice system architecture as a survey for those who are not familiar with the architectural pattern. I will also compare and contrast microservices with their monolithic sibling.

Monolithic Architecture

Until recently, dynamic web applications were built as multi-tiered applications. The typical web application consisted of a web interface layer for client interaction, a service layer which handled business logic, and a database layer which housed the data flowing between client and server.

Figure 1. Monolithic Architecture

The monolithic system is quite convenient to develop on, simply because the entire development process -- conception, design, specification, implementation and testing -- can be managed in one place. A change to the system simply requires that a new build be distributed back to the clients.

This pattern is simple and works for most. However, a business will naturally grow, and eventually outgrow the development processes that a monolithic architecture allows. As business logic evolves, the software which implements it must accommodate. Most organizations can absorb these constant changes while they stay relatively small in scale.

For some, though, there will come a time when, as the monolith continues to grow, the implementation must adapt to ever more complex business rules. Development then becomes difficult due to a large, highly coupled codebase whose various modules tend to extend beyond the knowledge base of any single developer.

Implementation is not the only problem when the monolith scales; distribution also becomes several degrees more complex. In a basic web application built as a monolith, the typical method of distribution is to host the web application on a web server.

Methods exist to solve the scaling problem in the monolith. A typical approach is to add more instances -- in this case, web servers hosting the application -- and put a load balancer in front of them to distribute network requests across the application hosts. If the database is under too much load, then spin up more database servers with some sort of replication technique or, if complexity permits, use database sharding to assign specific clients to a database.

Figure 2. Spawning additional application hosts to fulfill a larger request load.

Figure 3. Scaling to accommodate database load with database sharding.
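As a concrete illustration of the sharding idea above, here is a minimal sketch of hash-based shard selection, where each client is deterministically assigned to one database server. The shard names are hypothetical placeholders, not part of the original text.

```python
import hashlib

# Hypothetical pool of database shards.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]

def shard_for(client_id: str) -> str:
    """Pick a shard by hashing the client id, so the same client
    always lands on the same database server."""
    digest = hashlib.sha256(client_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Because the mapping is deterministic, no lookup table is needed; the trade-off is that adding or removing shards remaps clients unless a scheme such as consistent hashing is used.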

These methods of adding scalability to the monolithic system can become quite costly over time. Since the monolith is one big application, it must be scaled as a whole, even though not all modules within it need scaling to accommodate large loads.

For example, a monolithic eCommerce website may have lots of traffic and load on its payment processor, but not much load on its customer support module. Since the application is a monolith, the eCommerce website doesn't have much of a choice in scaling one module and leaving the other alone. Assuming the company decides to solve its scalability problem by adding more application hosts, the monolithic nature of the application requires that the entire system be deployed onto each new server. This is quite costly, as many modules become redundant copies without any actual need to scale them.

If 5% of the modules within a monolithic application need to scale out and the other 95% do not get much load, more resources are still needed to accommodate the system as a whole. But what if there were a way to construct your system out of what we have always referred to as "modules" -- where everything is broken down into independent services, each deployable as its own entity? This is where the microservice architecture comes into play.

Microservices Architecture

The conceptual basis of the microservice derives from the UNIX philosophy: each tool, or program should do one thing and do it well.

In a generalized manner, a microservice is a service that fulfills the operations of a functional unit of the overall business logic required by the application, or system.

A system composed of many services means smaller components, which leads to a separation of concerns. The payment processor is its own service, the customer support module is its own service, the shopping cart is its own service, and so on. We can then take a view of the system and interpret it as decentralized.

Figure 4. The monolith usually has a single entry point that can be traced for each request.
Figure 5. A request can be serviced by many different microservices depending on the invocation type.

With a typical monolith, it is easy to tell what the flow of an application function is. With microservices, things are more modular, and the consumption of the application becomes functional. The overall system constructed from these services is said to be loosely coupled and highly cohesive.

To define: coupling is the degree to which services depend on logic originating in other services. Loosely coupled services are likely to function independently, without dependencies on other services, without much trouble. By contrast, a tightly coupled set of services depend on one another to be fully functional.

Figure 6. Loose coupling.

Figure 7. Tight coupling.

Cohesion is defined to be either low or high. A system composed of services with low cohesion has services that do not communicate with one another to form an overall bigger service, or system. A system composed of services with high cohesion makes many interactions between those services to fulfill a request. We want a system composed of microservices to be highly cohesive, as cohesion is ultimately the indicator of whether the logic split out into independent modules is actually of use as independent services.

Figure 8. Low cohesion.

Figure 9. High cohesion.

By separating the modules of the system into many different services, the system becomes more loosely coupled; that is, the services become independent components themselves and can be thought of as independent applications. Individual services then fulfill their purpose, and fulfill it well, due to the following qualities:

  1. A well-constructed service serves a "purpose" which adheres to the philosophy of doing one thing and doing it well.
  2. An individual service is smaller in size, and thus easier to change and maintain.
  3. A set of services which are loosely coupled is less likely to affect other areas in the overall system.

A microservice architecture implicitly brings other development advantages for the organization. Apart from the main advantage of having smaller teams managing a single service, the choice of technology stack to implement these services is flexible. Since microservices are independent from one another in the system, the technology stack can be whatever set of technologies is needed to implement the service in a pragmatic manner. Because of this, systems composed with a microservice architecture tend to be polyglot, or "of several languages".

For example, one service can be implemented on a JavaScript-based stack, while another can make use of a more traditional stack such as LAMP. Ultimately, the decision comes down to which technology will work best to achieve the goal, and that decision need not be stringent within the organization.

Since many services now compose the system, rather than one large cumulative application fulfilling all business needs, several challenges arise in composing the various microservices into a coherent system that fulfills all operations of the business.

With various services implemented in many different types of technologies, loosely coupled and distributed, the question now is how to approach communication within the system, and how to orchestrate these services to achieve the overall business goals.

Microservice Communication

If the system is composed of many independent services managed by various teams, built on various technologies, backed by various data stores and spread across many different servers, how is cohesion achieved to externally create a functional, unified system that fulfills the business needs of the organization?

The microservice system performs communication through message passing. By exposing API endpoints in specific services, each microservice can communicate cohesively by passing messages to the others. The particular choice of message passing protocol -- say, REST as opposed to SOAP -- is not critically important; rather, the interpretation of the messages from service to service is key to creating the functional system.

With a message passing system that a number of services can use, the result is a distributed system, and the decision of whether to handle message requests synchronously or asynchronously becomes very important, especially when data integrity is a priority for the business goals.

Performing message operations synchronously results in a simple, clear and straightforward implementation of workflows. However, due to the blocking nature of synchronous operations, the system sacrifices scalability and performance in exchange for natural data integrity. Therefore, the general approach to message processing is to handle requests asynchronously.

Figure 10. Synchronous Flow

Figure 11. Asynchronous Flow

Of course, with asynchronous operations comes greater complexity. Since operations are now performed in an ad-hoc manner, we must be wary of data integrity issues caused by race conditions, which become more likely as the volume of requests and the number of services in the system increase.

How one handles this complexity varies and is a decision for the development team. The team can choose to use an event-queuing mechanism paired with a good design pattern such as the publisher-subscriber pattern. There are many libraries to handle asynchronous message passing, but the main point is to be aware of the complexity involved in this type of communication.
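To make the publisher-subscriber idea concrete, here is a minimal in-process sketch of an event queue. A real system would use a message broker; the topic name and handler here are hypothetical.

```python
from collections import defaultdict
from queue import Queue

class EventBus:
    """Toy pub/sub bus: publishers enqueue events; subscribers are
    invoked later when the queue is drained."""

    def __init__(self):
        self.subscribers = defaultdict(list)
        self.queue = Queue()

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Enqueue rather than invoke directly: the publisher does not
        # block on, or even know about, the consuming services.
        self.queue.put((topic, message))

    def drain(self):
        # In a real system, a worker loop would run this continuously.
        while not self.queue.empty():
            topic, message = self.queue.get()
            for handler in self.subscribers[topic]:
                handler(message)

bus = EventBus()
received = []
bus.subscribe("order.placed", lambda msg: received.append(msg))
bus.publish("order.placed", {"order_id": 42})
bus.drain()
```

The decoupling is the point: the publishing service only knows the topic name, never the identity of the services consuming the event.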

In order for message communication to be maintainable and robust, the API of the communication system must be well thought out and implemented carefully. Constructing a good API to sanely manage and route messages to the appropriate service methods is challenging.

A common approach to API invocation is to use an API Gateway to orchestrate incoming requests. An API Gateway allows us to wrap various handlers to handle different types of requests, manage public and private endpoints and route API calls to the appropriate service for the response.

Figure 12. API Gateway
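The routing role of the API Gateway described above can be sketched minimally as a prefix-matching route table. The paths and service names below are hypothetical examples, not a real gateway's API.

```python
# Hypothetical route table mapping public path prefixes to backend services.
ROUTES = {
    "/payments": "payment-service",
    "/cart": "cart-service",
    "/support": "customer-support-service",
}

def route(path: str) -> str:
    """Return the backend service responsible for an incoming request path."""
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return service
    return "not-found"
```

A production gateway additionally handles concerns like authentication, rate limiting and public versus private endpoints, but the core job is this mapping from external request to owning service.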

Data Stores

Since each microservice is an individual unit of deployment, the question to consider is how data is managed across the system. Generally, the microservice is an independent functional unit; therefore, we must minimize the amount of resources that must be shared in the system. In the case of data stores, each microservice in the system should, as far as possible, be backed by its own independent data store.

Figure 13. Microservices using different types of data stores.

This allows the team managing the microservice to focus on delivery, reliability and maintenance of their service without worrying about whether a schema change within their data store could affect others.

Another advantage of having independent data stores across different services is the ability to choose the appropriate data store technology for the data. One service could be backed by a relational database such as MySQL, while another could be backed by a NoSQL, document-driven data store such as MongoDB.

Microservice Design Philosophy

The microservice design philosophy brings a challenge for systems composed of many business rules of varying domains. To build a system composed of microservices, one must identify functional units which can potentially operate independently. To effectively identify a functional unit within the system, knowledge of the business processes is critically important.

Although the ultimate goal of the microservice is to create a system enabled for a higher velocity of development and change to business needs, it can be difficult to construct the system composed of microservices from inception.

Several reasons explain the difficulties:

  1. In greenfield projects, business requirements are presented, but it is not immediately understood how the specific business functions operate with each other.
  2. Which functions of the business are highly susceptible to change is also not well understood.
  3. Tooling to meet specific needs of testing, deployment and monitoring has not yet been developed.

Therefore, the general recommendation is to actually start the system as a monolith, then break it into many different services as domains within the business become better understood.

As the application becomes larger and more complex, development of the monolithic application starts to become more difficult as the size of the codebase and its dependencies grow. This makes it hard to scale.

A larger codebase results in a longer ramp-up time for incoming developers. Dependencies within the monolith also bring a greater need to understand a specific domain, or context, when working in an area of code. This is the motivation for converting a monolith into microservices, and it is generally done over time as the application matures and business needs become well understood.

Suppose we can take our business and separate it into multiple functional domains. These functional domains can form bounded contexts. The bounded contexts form modular units around which your microservices can be constructed.

Figure 14. Bounded Contexts

The bounded contexts group several interrelated microservices together. This is analogous to the "department" within a company, or organization. Microservices within a bounded context allow closer interaction with each other during development and testing. They can be maintained by a single team, or a number of teams working closely with each other.

Microservices allow faster scaling in development. If your team is big and your application is big, it is useful to split your code into modules bounded by context, or business domain areas. This enables each team to be smaller, which is conducive to a more agile development paradigm.

So how can we take a monolith and split it into microservices?

As outlined above, we start by separating the business into functional domains, whose bounded contexts form the modular units our microservices are constructed around.

Going even further, we can continue to divide the bounded contexts, inspecting each function and analyzing its coupling with the others to see how we can build an individual microservice for that function. The goal is to achieve loose coupling between services.

The goals to be achieved for a microservice architecture:

  1. Create highly cohesive, loosely coupled services that can be independently deployed to achieve a business goal.
  2. Enable your application to be ready for change and have it easily adapt to business requirements that shift in high velocity.
  3. A system that shows scalability by being reliable, testable and easily deployable as the number of services grows.

Testing, Deploying and Managing the Application Composed of Microservices

If microservices are meant to be independently operated and deployable, testing infrastructure becomes an important piece in the overall system. With higher rates of change across many different components, tests need to be able to adapt at the same pace at which the system changes due to new microservice deployments.

  1. Keep tests up to date.
  2. Create a strong automated testing framework which can be easily run, developed on and deployed.

The above will encourage your team to continuously develop and run tests to keep up with the pace at which the system changes due to microservices.
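One way to keep tests cheap to run and easy to automate, in the spirit of the points above, is a small contract-style test that verifies the response shape other services depend on. The handler and field names below are hypothetical, and the handler is exercised in-process rather than over a live HTTP call.

```python
def get_order(order_id: int) -> dict:
    """Hypothetical service handler under test."""
    return {"order_id": order_id, "status": "shipped"}

def test_get_order_contract():
    """Verify the response contract that consuming services rely on:
    exactly these fields, with the requested id echoed back."""
    response = get_order(7)
    assert set(response) == {"order_id", "status"}
    assert response["order_id"] == 7

test_get_order_contract()
```

Because the contract, not the implementation, is what other microservices consume, tests like this can survive internal rewrites of the service while still catching breaking changes to its API.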

Since microservices communicate through message passing, which is often asynchronous, careful consideration must be taken to come up with the best possible set of tests to confidently ensure that the microservice being developed can operate correctly within the system.

Microservices can easily be updated, thrown away and created anew for the simple, natural reason that they are small. Because of this, changes to a microservice, especially at high frequency, carry greater risk of breaking changes, or even unintentional bugs.

In order to minimize the potential impact of changes that can go wrong, it is good practice to have a sophisticated deployment system.

With a sophisticated deployment system, we can practice blue/green deployment with deployment ramp-up. Blue/green deployment essentially means deploying the new service first and initially routing only a small percentage of requests to the new service endpoint. As confidence in the new service increases, we can increase the share of requests going to it. When the share of requests flowing into the new service reaches 100%, the system can terminate the old service.
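The ramp-up described above can be sketched as a weighted routing decision. This is an illustrative sketch only; real deployment systems implement this at the load balancer or service mesh layer.

```python
import random

def pick_deployment(green_percent: int, rng=random.random) -> str:
    """Route a single request to the new ("green") deployment with
    probability green_percent/100; otherwise route to the old ("blue")."""
    return "green" if rng() * 100 < green_percent else "blue"
```

At `green_percent=0` all traffic stays on blue; raising it gradually shifts load, and at 100 the blue deployment receives nothing and can be terminated.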

This makes DevOps a very important team in your organization. Pushing to production also becomes more interesting, as the number of things which can go wrong grows rapidly with the number of components interacting with each other.

How can we ensure we are doing our best to create reliable services? Create tooling to simulate production catastrophes, and create workflows to address the potential problems found. A good example of this is Netflix and their Simian Army. The result is a more resilient system whose data can actually be used to develop self-healing services.

Cost must also be considered, as the question becomes whether an organization should deploy all these microservices onto individual bare-metal servers. Generally this is not an ideal approach; it can be prohibitively expensive. Instead, the general practice has been to use containers and virtual machines. Microservices can then be distributed across many instances sitting on virtual machines, which in turn sit on different physical machines.

The most interesting advantage that microservices bring over the monolithic system is the relationship between scalability and performance. Monoliths are hard to scale in that the general approach has always been to spawn complete instances of the monolithic application onto new physical servers, then load-balance requests with an algorithm that leads the client to a specific instance of the application.

Figure 15. Scaling Microservices.

What if only a subset of functions within the monolith needs to scale? There is no choice in the matter, as the monolith is a single unit.

With microservices, scaling becomes more economical: we can identify the specific services that actually need to scale and allocate new instances of just those small services, without wasting additional resources on services that don't need scaling. If we want to spin up more instances only to handle payment processing, we can easily do so without replicating the other services in the system. This is the beauty of microservices!

Team Organization and Conclusion

Finally, let's talk about the organization of the team. Conway's Law states that a system will eventually mirror the structure of the organization which designed and implemented it.

The team composition is important as the members of the team will affect the overall lifecycle of your service. The team must be agile and responsive. Understanding the problem domain is important and thus the communication across members is vital. To make communication easier, it is generally better to have smaller teams. A way to approach this thought is to ask: If the team managing your service is large, then is your service actually "micro"?

How small is small, though? That depends. Is the team managing your service fulfilling a business need? Again, it is purely dependent on the business requirements.

The key guideline is that a system architected through microservices is never complete. It is almost always being extended, updated and cleaned up as business requirements and your organization evolve. Never be afraid to throw out old code and start anew if the service no longer serves any purpose. Your services are small and built to adapt. So, what's next? Serverless.
