Top 5 Technology Design Points for Docker-Based Microservices

  • October 16, 2017

In our last article, we looked at why Docker is a natural fit for microservices and at the top five process design points to consider when planning a Docker-based microservices deployment. Today we dive into the top five technology design points to weigh during planning. Thinking these through in advance will help you avoid stumbling blocks that can otherwise cause real headaches down the road.


Indeed, as AWS consultants with deep experience in DevOps, Docker and the surrounding ecosystem of technologies, we frequently encounter several important technology and tool decisions. These decisions should be considered carefully, as they have long-term implications for organizations embarking on a journey to a microservices architecture. They include:

  1. What tool will be used to schedule containers on compute nodes?
    Why is this important?
    Schedulers are important tools: they allocate the resources needed to execute a job and assign work to those resources, while orchestrators ensure that the resources necessary to perform the work are available when needed.
    What are some choices?
    There are many tool choices for container orchestration. The top contenders our AWS consultants consider are ECS for customers in AWS, and Docker Swarm or Kubernetes for those who would like a vendor-agnostic solution.

    What are the considerations?
    There are several angles for organizations to weigh in making this decision, including portability, compatibility, ease of setup, ease of maintenance, the ability to plug and play, and having a holistic solution. At Flux7, our microservices experts help our customers navigate this decision by asking a series of questions about their requirements and then making a recommendation.
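
    To make the scheduler's role concrete, below is a minimal sketch, assuming Python with the boto3 library and an existing ECS cluster; the region, cluster name and task definition are hypothetical placeholders, not values from this article.

        import boto3

        # Hypothetical region and names; substitute your own.
        ecs = boto3.client("ecs", region_name="us-east-1")

        # Ask the ECS scheduler to place two copies of the task
        # (i.e., two containers) on compute nodes in the cluster.
        response = ecs.run_task(
            cluster="my-cluster",               # hypothetical cluster name
            taskDefinition="orders-service:3",  # hypothetical task definition
            count=2,                            # two copies for availability
        )

        # The scheduler reports where each task was placed and its status.
        for task in response["tasks"]:
            print(task["taskArn"], task["lastStatus"])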

  2. What tool will be used to load balance requests between the containers of the same service?
    Why is this important?
    High availability and the ability to run multiple containerized services in the environment make it critical to support more than one container per microservice, which in turn requires a way to balance requests among those containers.
    What are some choices?
    For services that are non-clustered, for example web-based microservices developed in house, an external load balancer is needed to balance incoming traffic among the different containers of the same service. There are several options for load balancing within a service, from taking advantage of AWS ELB in Amazon environments to open source tools that can act as load balancers, such as NGINX or HAProxy.

    What are the considerations?
    This is an important technology decision that should be thought through carefully. Some salient design points to consider as you make it: requirements for session stickiness; the number of services you plan to have; the number of containers you have per service; and any load balancing algorithms you would like to use.
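
    To illustrate, here is a minimal sketch, assuming Python with boto3 and an existing ALB target group; the ARN, instance ID and ports are hypothetical. (When an ECS service is attached to a target group, ECS performs this registration automatically.)

        import boto3

        elbv2 = boto3.client("elbv2", region_name="us-east-1")

        # Register two containers of the same service, published on different
        # host ports of one instance, so the load balancer can spread
        # requests between them.
        elbv2.register_targets(
            TargetGroupArn=("arn:aws:elasticloadbalancing:us-east-1:"
                            "123456789012:targetgroup/orders/abc123"),  # hypothetical
            Targets=[
                {"Id": "i-0123456789abcdef0", "Port": 32768},
                {"Id": "i-0123456789abcdef0", "Port": 32769},
            ],
        )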

  3. What tool will be used to route traffic to the correct service?
    Why is this important?
    This design point goes hand-in-hand with the previous one: once traffic reaches the cluster, it must be routed to the correct microservice, which is the domain of application-level load balancing.
    What are some choices?
    As we discussed earlier, individual URLs or sub-contexts are assigned per service. When traffic hits the microservices cluster, the next task is to ensure that incoming traffic is routed to the right microservice given the URL it is addressed to. Here we can apply HAProxy, NGINX or the AWS Application Load Balancer (ALB).

    What are the considerations?
    AWS ALB was introduced in August 2016, and in the short time it has been available, a debate has emerged as to which tool is best for application load balancing. Our AWS consultants analyze on a case-by-case basis which approach is best for our customers. Two key questions we ask to make the right decision are: how many microservices do you plan to have, and how complex do you want your routing mechanism to be?
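
    As an illustration, here is a minimal sketch, assuming Python with boto3 and an existing ALB listener, of a path-based rule that routes traffic addressed to one service's sub-context to that service's target group; the ARNs, priority and path pattern are hypothetical.

        import boto3

        elbv2 = boto3.client("elbv2", region_name="us-east-1")

        # Route any request whose path matches /orders/* to the orders
        # service's target group; other traffic falls through to other
        # rules or the listener's default action.
        elbv2.create_rule(
            ListenerArn=("arn:aws:elasticloadbalancing:us-east-1:"
                         "123456789012:listener/app/demo/abc/def"),  # hypothetical
            Priority=10,
            Conditions=[{"Field": "path-pattern", "Values": ["/orders/*"]}],
            Actions=[{
                "Type": "forward",
                "TargetGroupArn": ("arn:aws:elasticloadbalancing:us-east-1:"
                                   "123456789012:targetgroup/orders/abc123"),
            }],
        )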

  4. What tool will be used to deliver secrets to code?
    Why is this important?
    With the number of microservices in a given application expected to increase over time, and modern applications relying more and more on external SaaS solutions, security becomes both more important and more difficult to manage. For microservices to communicate with each other, they typically rely on certificates and API keys to authenticate themselves with the target service. These API keys, also known as secrets, need to be managed securely and carefully. As they proliferate, traditional solutions, such as manually injecting secrets at deployment time, no longer work. There are frankly just too many secrets to manage, and microservices require automation.
    What are some choices?
    Organizations need to settle on an automated way to get secrets to containers that need them. There are a few potential solutions including:
    • An in-house solution built to save secrets in encrypted storage, decrypt them on the fly and inject them into the containers using environment variables.
    • AWS IAM roles, which can inject AWS API credentials. However, this solution is limited to AWS API keys and can only be used to access secrets stored in other Amazon services.
    • HashiCorp Vault, which uses automation to effectively handle both dynamic and static secrets. (For additional depth, please see our article, “Handling Secrets in Microservices.”)

    What are the considerations?
    Your answer to this technology question depends on how many secrets you have; how you expect that number to grow; your security and compliance needs; and how willing you are to change your application code to facilitate secret handling.
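
    For illustration, here is a minimal sketch of the Vault option, assuming Python with the hvac client library, a reachable Vault server and a KV version 2 secrets engine; the URL, token and secret path are hypothetical, and in practice the token would come from an automated auth method rather than being hard-coded.

        import hvac  # HashiCorp Vault client library for Python

        client = hvac.Client(url="https://vault.example.com:8200")  # hypothetical URL
        client.token = "s.xxxxxxxx"  # placeholder; use an auth method in practice

        # Fetch a secret from the KV v2 engine and hand it to the application,
        # rather than baking it into the image or a config file.
        secret = client.secrets.kv.v2.read_secret_version(path="orders/db")
        db_password = secret["data"]["data"]["password"]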

  5. Where will SSL be terminated?
    Why is this important?
    One question that arises frequently, especially for microservices that serve web traffic, is: where should SSL be terminated? Typical design factors to consider include your security and compliance requirements.
    What are some choices?
    Typical options are to terminate SSL at the application or network load balancer, for example at AWS ELB or ALB. A second option is to terminate SSL at an intermediate layer such as NGINX, or at the application container itself.

    Certain compliance initiatives, like HIPAA, require that all traffic be encrypted. Thus, even if you decrypt at the load balancer, traffic needs to be re-encrypted before it is sent to the containers running the application. On the flip side, the advantage of terminating at the load balancer is that you have a central place for handling SSL certificates, and fewer things have to be touched when an SSL certificate expires or needs to be rotated.

    What are the considerations?
    Elements to consider as you make a design decision include your specific compliance and security requirements; the ability of your applications to encrypt and decrypt data; and your container orchestration platform, as some platforms can encrypt data seamlessly. The combination of all the above should be the basis for your SSL termination decision.
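
    As one concrete example, here is a minimal sketch of terminating SSL at the load balancer, assuming Python with boto3, an existing ALB and a certificate already stored in ACM; all ARNs are hypothetical. Note that this listener forwards decrypted traffic to the target group, so under requirements like HIPAA you would instead re-encrypt on the way to the containers.

        import boto3

        elbv2 = boto3.client("elbv2", region_name="us-east-1")

        # Terminate SSL at the ALB: clients connect over HTTPS on port 443,
        # and the listener decrypts and forwards traffic to the target group.
        elbv2.create_listener(
            LoadBalancerArn=("arn:aws:elasticloadbalancing:us-east-1:"
                             "123456789012:loadbalancer/app/demo/abc"),  # hypothetical
            Protocol="HTTPS",
            Port=443,
            Certificates=[{
                "CertificateArn": ("arn:aws:acm:us-east-1:"
                                   "123456789012:certificate/example-id"),  # hypothetical
            }],
            DefaultActions=[{
                "Type": "forward",
                "TargetGroupArn": ("arn:aws:elasticloadbalancing:us-east-1:"
                                   "123456789012:targetgroup/orders/abc123"),
            }],
        )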

 

While all these design points may feel overwhelming, making the right choices will have long-term implications to your organization’s success with its microservices architecture. Having built many Docker-based microservices environments in AWS, our consultants are well-versed in what’s best based on your specific business and technology requirements. That’s why we walk each of our clients through our proprietary decision support process designed to ensure the best fit decision at each point in the process. For more information on assessing your needs for a microservices architecture and help ensuring you’ve made the right decisions for long-term success, reach out to us today.

 
