Why You Need a Cloud-Native Application Platform
- August 03, 2021
Approaching microservices, containerization and container orchestration with the right framework matters. With too many services and moving parts to manage, it's easy to lose track of essentials like security, monitoring and governance. That's why you need a platform to help you manage the complexity of cloud-native applications.
Based on our experience working with dozens of cloud-native projects, we recommend an approach with four tiers: Evaluation, Discovery, Gap Assessment and Delivery.
- Evaluation (workshop) – This tier starts you off on the right foot with a series of questions about business requirements and the current and desired state of your infrastructure and applications. It includes questions about today's monitoring and metrics, service mesh, API gateway, security, and more. Your answers will inform the right orchestration platform strategy and tools for your unique business needs.
- Discovery - Capture and document your existing landscape as it relates to tools and solutions in place that may or may not be used in a public cloud or hybrid cloud landscape. The list should include applications, infrastructure dependencies, DevOps and automation, security and compliance, monitoring and analysis, governance, network and content delivery, legacy and any proprietary integrations.
- Gap Assessment – This tier identifies gaps arising from the discovery process. In particular, it identifies and categorizes infrastructure, process and application improvements in three categories:
- Severe - requiring action before or as part of a cloud migration
- Medium - requiring action during or shortly after a cloud migration
- Low - can be migrated to the cloud but will require action in the near future.
- Planning for action items also takes place at this tier; be sure to identify tools, measures and reporting mechanisms.
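The triage above can be sketched as a small classifier. The field names, findings and threshold logic here are illustrative assumptions for the example, not a prescribed tool:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One improvement identified during discovery (fields are illustrative)."""
    name: str
    blocks_migration: bool        # cannot migrate safely until this is resolved
    needs_prompt_follow_up: bool  # migratable, but must be scheduled soon after

def categorize(finding: Finding) -> str:
    """Map a finding onto the three severity buckets described above."""
    if finding.blocks_migration:
        return "Severe"
    if finding.needs_prompt_follow_up:
        return "Medium"
    return "Low"

# Hypothetical findings from a discovery exercise.
findings = [
    Finding("unencrypted secrets in config", True, True),
    Finding("no centralized logging", False, True),
    Finding("legacy cron jobs", False, False),
]
report = {f.name: categorize(f) for f in findings}
```

In a real assessment the predicates would come from checklists and tooling, but the output is the same: a per-finding severity that drives the action plan.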
- Delivery – Based on inputs and outputs from the prior three tiers, tools and processes should be implemented to address the entire microservices stack, including:
- Core business services that should be implemented as containerized microservices which can be independently developed, tested, deployed, and scaled.
- A managed interface that exposes services to users.
- Cross functional services that use micro gateways to keep services independent from one another.
- A governance layer that includes runtime governance and design time governance. This layer is essential due to the number of different technologies at play and the amount of autonomy they have.
- A monitoring and analytics layer that allows you to easily backtrack issues.
- A security layer that provides authentication and authorization, traceability and auditability.
- Automation like automated deployment through CI/CD.
- Load balancing.
- Service mesh as needed for traffic flow between microservices and cross-functional services.
- And, last, container orchestration.
After following the evaluation, discovery and assessment steps, you may find, for example, that Kubernetes automation is the best fit for you. In that case, the ideal setup might provision clusters with the proper specifications and use pipelines that initiate cluster configurations, so that Kubernetes extensions can be installed as modules during cluster setup. This can be coupled with a monitoring module that watches cluster state, component state and basic application metrics. Additional layers would include a Kubernetes ingress layer so that the application can be exposed on the internet.
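A monitoring module like the one described might start by checking node health. This sketch walks a payload shaped like the Kubernetes API's NodeList (the field names follow the real API, e.g. the JSON from `kubectl get nodes -o json`, but the sample data is hand-written):

```python
def unhealthy_nodes(node_list: dict) -> list[str]:
    """Return names of nodes whose Ready condition is not 'True'."""
    bad = []
    for node in node_list.get("items", []):
        name = node["metadata"]["name"]
        conditions = node["status"].get("conditions", [])
        # Each node reports several conditions; "Ready" is the overall signal.
        ready = next((c for c in conditions if c["type"] == "Ready"), None)
        if ready is None or ready["status"] != "True":
            bad.append(name)
    return bad

# Hand-written sample mirroring a NodeList response.
sample = {
    "items": [
        {"metadata": {"name": "node-a"},
         "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
        {"metadata": {"name": "node-b"},
         "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
    ]
}
```

A production module would fetch this data via the Kubernetes client and feed the result into alerting rather than returning a list.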
Next, you need a CI/CD pipeline that lets users build, test and deploy Dockerized applications to different environments, with quality gates to ensure code quality. Also included are application containerization and onboarding, and an automated Docker registry that stores images in a private, secure registry. And, last, a mesh network that allows applications to discover and communicate with one another across Kubernetes clusters.
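The quality gates in such a pipeline amount to a set of predicates that must all pass before a build is promoted. A minimal sketch, with made-up gate names and thresholds rather than any specific CI product's API:

```python
# Each gate is a (label, predicate) pair over the build's metrics.
GATES = [
    ("tests pass", lambda m: m["failed_tests"] == 0),
    ("coverage >= 80%", lambda m: m["coverage"] >= 0.80),
    ("no critical vulnerabilities", lambda m: m["critical_vulns"] == 0),
]

def evaluate_gates(metrics: dict) -> tuple[bool, list[str]]:
    """Return (promote?, labels of the gates that failed)."""
    failed = [label for label, check in GATES if not check(metrics)]
    return (not failed, failed)

# Hypothetical metrics emitted by a build job.
build = {"failed_tests": 0, "coverage": 0.86, "critical_vulns": 0}
ok, failures = evaluate_gates(build)
```

The value of expressing gates as data is that teams can add or tighten checks without touching the promotion logic itself.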
A key thing you may notice in this container orchestration platform is that its architecture emphasizes vendor neutrality – that is, we help clients choose tools that are the best fit for their business, process and technology needs. Moreover, should something change, any component can be removed – whether it is open source or proprietary – and replaced with another without disrupting the architecture. This also means that components can deploy and run independently of each other.
Other design principles we’ve incorporated, with a hat tip to Chanaka Fernando, are the use of:
- Loose coupling
- Standard interfaces – e.g., REST, an architectural style for APIs that many systems can communicate through.
- Agile development – emphasizes a steady flow of quality code rather than big product launches every six or 12 months.
- Resiliency – architectures designed to withstand failures.
- Open-source software
- Modularity and replaceability – ensures that components are interchangeable.
- Computing efficiency
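Resiliency, in particular, often comes down to concrete patterns such as retrying transient failures with exponential backoff. A minimal sketch (the attempt count and delays are arbitrary example values):

```python
import time

def retry(attempts: int = 3, base_delay: float = 0.1):
    """Decorator that retries a flaky call with exponential backoff.

    A sketch of the pattern, not a production library: real services would
    add jitter, a retry budget, and only retry retriable error types.
    """
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the failure
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3, base_delay=0.01)
def flaky():
    """Simulated dependency that fails twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"
```

The same idea generalizes to circuit breakers and timeouts, which service meshes typically provide without application code changes.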
Cloud-native platform in action
An e-commerce company moved its monolithic application to microservices and needed help implementing a cloud-native application platform in support. We helped them create a data-driven, metrics-based deployment platform that decreased time to market while increasing code quality. Specifically, we deployed Kubernetes clusters as infrastructure as code (IaC) within the Google Cloud Platform (GCP). Using Keptn, a control-plane for DevOps automation of cloud-native applications, we defined service level objectives (SLO) and service level indicators (SLI) for its microservices, using Keptn automation to only deploy those services that meet or exceed the SLO/SLI metrics.
Out of the box, Keptn supports automated testing tools. For example, it supports JMeter to run performance tests, Litmus to run chaos tests, and Jenkins. Keptn also gathers metrics for each application from the monitoring system; it supports the Prometheus and Dynatrace monitoring systems. Based on application metrics, developers can set SLIs and SLOs for quality gates and automatically promote an application to the next stage (e.g., from dev to staging). This gives the client greater confidence to release faster to production; they now release new features multiple times a day.
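The SLO/SLI quality-gate idea can be illustrated in isolation. The metric names and targets below are invented for the example; Keptn itself expresses SLOs declaratively in YAML, with weighted scores and pass/warning bands rather than this all-or-nothing check:

```python
# Hypothetical SLOs: each SLI must satisfy its comparator against the target.
SLOS = {
    "error_rate": ("<=", 0.01),     # at most 1% of requests may fail
    "p95_latency_ms": ("<=", 300),  # 95th-percentile latency under 300 ms
    "throughput_rps": (">=", 50),   # sustain at least 50 requests/second
}

def gate(slis: dict) -> bool:
    """Promote only if every measured SLI meets its SLO."""
    ops = {"<=": lambda value, target: value <= target,
           ">=": lambda value, target: value >= target}
    return all(ops[op](slis[name], target)
               for name, (op, target) in SLOS.items())

# Hypothetical SLI values gathered from a staging test run.
staging_run = {"error_rate": 0.004, "p95_latency_ms": 210, "throughput_rps": 72}
```

When the gate passes, the deployment automation promotes the service to the next stage; when it fails, the build stops before reaching users.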
To ensure application health throughout the application lifecycle, we also deployed Dynatrace. As a SaaS monitoring solution, Dynatrace simplifies the monitoring of cloud-native configurations; once enabled for the company’s clusters, Dynatrace observes standard metrics such as CPU or memory requests, across the application. In addition to standard metrics, Dynatrace supports OpenTelemetry metrics that give full observability for cloud-native applications.
OpenTelemetry consists of a few different components, including an API, an SDK and the Collector.
(Figure omitted; source: Based on "OpenTelemetry: beyond getting started")
Together Keptn and Dynatrace provide a solid foundation for the health of the company’s cloud-native applications, benefiting its end users with more stable applications.
A fear of missing out has many people diving headlong into cloud-native application development. However, doing so without a platform for success can quickly become overwhelming, as the complexity of managing security, governance and orchestration can grow exponentially. With these best-practice tiers, you can be more assured of obtaining the benefits of cloud-native applications – achieving greater reliability, avoiding lock-in, and releasing quality code faster.
Stay up-to-date on the latest cloud-native best practices and subscribe to our blog below.