Top Ten Considerations When Planning Docker-Based Microservices
- October 09, 2016
As AWS consultants steeped in DevOps best practices, Docker, and the forward edge of new technologies and architectures, we often get asked about microservices. One of the most common questions we field is about potential stumbling blocks to a Docker-based microservices approach. It is a smart question, as there are several considerations that, when not thought through in advance, can cause real headaches down the road.
Before we talk through these top considerations, however, let's first review why so many organizations are considering microservices in the first place. As you likely know, the idea behind microservices is that instead of writing an application as a single monolithic code base, developers can break it into smaller, autonomous services. This gives different teams more agility and greater autonomy, letting them work in parallel and accomplish more in less time.
Moreover, smaller components mean that code is less brittle, making it easier to change, test and update. Last, a microservices approach shortens the onboarding journey, as new hires need only learn the ins and outs of the particular service they'll be working on, not the entirety of a monolithic application. (For additional background on microservices architectures, please reference our article here.)
Docker-Based Microservices
Docker is a natural fit for microservices as it inherently features autonomy, automation, and portability. Specifically, Docker is known for its ability to encapsulate a particular application component and all its dependencies, enabling teams to work independently without requiring the underlying infrastructure to support every single component they use.
In addition, Docker makes it easy to create lightweight, isolated containers that work with each other while remaining very portable. Because the application is decoupled from the underlying substrate, it can be moved between environments with little effort. Last, it is very easy to create a new set of containers; Docker orchestration solutions such as Docker Swarm, Kubernetes, or AWS ECS make it easy to spin up new services composed of multiple containers, all in a fully automated way. Docker thus becomes a natural substrate on which to run a microservices architecture.
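As a rough illustration of that automation, here is a minimal sketch using the Docker SDK for Python to spin up a small two-container service on its own isolated network; the image and network names are hypothetical.

```python
import docker

client = docker.from_env()

# Create an isolated bridge network for the service (names here are hypothetical).
client.networks.create("orders-net", driver="bridge")

# Spin up the service's containers on that network, fully scripted.
client.containers.run("redis:6-alpine", name="orders-cache", network="orders-net", detach=True)
client.containers.run("orders-api:latest", name="orders-api", network="orders-net", detach=True)
```

An orchestrator such as Swarm, Kubernetes, or ECS takes the same idea further, adding scheduling, scaling, and self-healing on top.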
There are several process and technology design points to consider when architecting a Docker-based microservices solution. Here we share the top five process considerations; in a second blog post later this week, we will share the top five technology considerations. Both are based on our experience with organizations of various sizes and across industries.
Process Considerations
- How will an existing microservice be updated?
Why is this important?
Recall that the fundamental reason we use microservices is to allow faster development, which in turn increases the number of updates we have to perform on each microservice. To leverage microservices fully, it is critical that this process be optimized.
What are some choices?
Several components make up this process, and each step brings its own decisions. Let us explain with the help of three examples.
First, there is the question of whether to set up continuous deployment or a dashboard where a person presses a button to deploy a new version. The tradeoff is higher agility with continuous deployment versus tighter governance with manually triggered deployment. Automation can reconcile the two, implementing security alongside agility so that both benefits co-exist. Each company must decide on its workflows and determine what automation it requires, and where.
Second, it is important for businesses to consider where the actual container will be built. Will it be built locally, pushed, and then travel through the pipeline? Or will the code first be converted into artifacts, and then into a Docker image that travels all the way to production? If you go with a solution where the container is built in the pipeline, it is important to consider where it will be built and what tools will be used around it.
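If the container is built in the pipeline, the build-and-push step might look something like the following sketch, which uses the Docker SDK for Python; the source path, registry address, and tag are hypothetical.

```python
import docker

client = docker.from_env()

# Build the image from the service's source checkout (path, registry, and tag are hypothetical).
image, build_logs = client.images.build(
    path="./orders-service",
    tag="registry.example.com/orders-service:1.4.2",
)

# Push the tagged image so every later pipeline stage deploys the exact artifact built here.
for line in client.images.push(
    "registry.example.com/orders-service", tag="1.4.2", stream=True, decode=True
):
    print(line)
```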
Third, the actual deployment strategy must also be thought through. Specifically, you can update a microservices architecture through a blue-green deployment, where a new set of containers is spun up and then the old ones are taken down. Or, you can opt for a rolling update that works through the service's containers, creating one new container and putting it in service while taking one of the old ones out.
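On AWS ECS, for example, a rolling update can be expressed as a single service update with a deployment configuration that controls how many old and new tasks run side by side. The sketch below is a minimal example; the cluster, service, and task definition names are hypothetical.

```python
import boto3

ecs = boto3.client("ecs")

# Rolling update: ECS gradually replaces old tasks with new ones while the service
# stays at or above the minimum healthy percentage. Names and revision are hypothetical.
ecs.update_service(
    cluster="prod-cluster",
    service="orders-service",
    taskDefinition="orders-service:42",
    deploymentConfiguration={
        "minimumHealthyPercent": 100,  # never drop below current capacity
        "maximumPercent": 200,         # allow new tasks to start before old ones stop
    },
)
```

A blue-green deployment, by contrast, typically stands up a second, parallel set of containers behind its own target group and switches traffic over only once the new set is healthy.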
What are the considerations?
The actual decisions are multi-faceted and require consideration of factors including current flows, skill levels of operators, and any technology inclinations.
- How will developers start a brand new service?
Why is this important?
Starting a new service is a fundamental requirement of microservices. As a result, the process for starting a brand new service should be made as easy as possible.
What are some typical choices?
An important question to ask is, how will you enable developers to start a new service in a self-service fashion without compromising security and governance? Will it require going through an approval process such as filing an IT request? Or, will it be a fully automated process?
What are the considerations?
While our consultants at Flux7 will always err on the side of using as much automation as possible, this is definitely a process point you will want to think through in advance to ensure you correctly balance the need for security, governance and self-service.
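As an illustration of the fully automated option, the sketch below uses boto3 to bootstrap a new service: it creates a container registry, registers a first task definition, and creates the ECS service. The names, the account ID in the image URI, and the sizing values are hypothetical, and a real implementation would layer approval and governance checks on top.

```python
import boto3

ecr = boto3.client("ecr")
ecs = boto3.client("ecs")

def bootstrap_service(name):
    """Self-service bootstrap for a brand new microservice: registry, task definition, service."""
    # A private registry for the service's images.
    ecr.create_repository(repositoryName=name)

    # A first task definition pointing at the service's image; the account ID is hypothetical.
    task_def = ecs.register_task_definition(
        family=name,
        containerDefinitions=[{
            "name": name,
            "image": f"123456789012.dkr.ecr.us-east-1.amazonaws.com/{name}:latest",
            "memory": 256,
            "essential": True,
        }],
    )

    # The long-running ECS service that will keep the containers up.
    ecs.create_service(
        cluster="prod-cluster",
        serviceName=name,
        taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
        desiredCount=2,
    )

bootstrap_service("orders-service")
```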
- How will services get a URL assigned?
Why is this important?
This question really goes hand-in-hand with starting a brand new service. A new URL or subcontext (e.g., myurl.com/myservice) needs to be assigned each time a service is created, and the process for assigning it should ideally be automated.
What are some typical choices?
Options can include a self-service portal for assigning URLs manually or a process whereby the URL is automatically assigned and pulled from the name of the Docker container and any tags that are applied to the Docker container.
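For the automated option, one sketch of the idea on AWS is to derive a path pattern from the service name and register it as a listener rule on an existing Application Load Balancer. The ARNs and rule priority below are hypothetical inputs that your platform tooling would supply.

```python
import boto3

elbv2 = boto3.client("elbv2")

def assign_service_url(service_name, listener_arn, target_group_arn, priority):
    """Derive a subcontext (e.g., myurl.com/<service>) from the service name and
    route it to the service's target group on an existing load balancer listener."""
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=priority,
        Conditions=[{"Field": "path-pattern", "Values": [f"/{service_name}/*"]}],
        Actions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
    )
```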
What are the considerations?
As with starting a new service, Flux7 AWS consultants err on the side of using as much automation as possible though this is definitely a design point that needs to be thought through in advance.
- How will container failure be detected and dealt with?
Why is this important?
One of the key requirements for modern infrastructure today is that it doesn't require "babysitting"; it can self-heal and self-recover if it goes down. As a result, it is paramount to have a process to detect failure and a plan for how failure will be handled when it does occur.
What are some typical choices?
For example, it is important to have a defined process for detecting that a container application is no longer running, whether through a networking check or log parsing. Additionally, there should be a defined process for handling the failure, such as replacing the failed container with a new one.
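As a minimal sketch of the detect-and-replace approach, the following uses the Docker SDK for Python to find exited containers for a given image and start replacements. In practice an orchestrator usually owns this loop; the image name is hypothetical.

```python
import docker

client = docker.from_env()

def replace_failed_containers(image):
    """Find containers for the given image that have exited and start replacements."""
    failed = client.containers.list(all=True, filters={"ancestor": image, "status": "exited"})
    for container in failed:
        print(f"Replacing failed container {container.name}")
        container.remove()
        client.containers.run(image, detach=True)

replace_failed_containers("orders-api:latest")
```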
What are the considerations?
While there are many approaches to this process, the design point is to make sure that the requirements are met, ideally via automation.
- How will the code for each microservice be structured?
Why is this important?
We want a fully automated process for building and deploying new services. Yet if the number of services grows large, managing a separate process for each one can quickly become cumbersome.
What are some typical choices?
Typically, multiple copies of the process are created, one for each service. In that case, it is imperative that each copy be kept homogeneous.
What are the considerations?
A very important decision here is how each microservice will be structured. For example, the Dockerfile should always appear in the exact same place, and whatever is specific to the service should be contained within the Dockerfile. In this way, the process can be made microservice agnostic. Similarly, other files such as a Docker Compose file or a task definition for AWS ECS should consistently be put in the same place across all services, so that processes can run consistently in a homogeneous fashion.
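A simple layout check can help enforce that homogeneity across repositories. The sketch below assumes a hypothetical convention in which each service keeps a Dockerfile, a docker-compose.yml, and an ECS task definition at its repository root; adjust the list to your own standard.

```python
from pathlib import Path

# Files each service repository is expected to keep at its root so the shared
# pipeline can stay microservice agnostic (the exact list is a hypothetical convention).
REQUIRED_FILES = ["Dockerfile", "docker-compose.yml", "taskdef.json"]

def check_service_layout(repo_root):
    """Fail fast if a service repository does not follow the common layout."""
    missing = [name for name in REQUIRED_FILES if not (Path(repo_root) / name).is_file()]
    if missing:
        raise SystemExit(f"{repo_root}: missing {', '.join(missing)}")
    print(f"{repo_root}: layout OK")

check_service_layout(".")
```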
As you can see, there are many important process design points that should be thought through before diving headlong into a Docker-based microservices deployment. In our next installment of this two-part blog, we will take a look at the top five technology considerations when planning a Docker-based microservices deployment. Want to make sure you don’t miss it? Sign up to receive our blog directly in your inbox.