Serving streaming media to millions of consumers every day requires a strong technology infrastructure with extreme scalability and availability. However, when this entertainment company was founded over a dozen years ago, commercially available tools to build infrastructure at this level were lacking. As a result, developers at the media firm created their own solutions. Fast-forward several years, and the business found itself with several proprietary solutions that created significant challenges across development and deployment. Consequently, the company sought to standardize its infrastructure by replatforming for the public cloud.
Home-grown doesn’t scale
The company was running containerized applications in its own data center on a proprietary container platform, while simultaneously running other applications on AWS with Amazon ECS. Indeed, the development team had created its own PaaS and a microservices architecture that ran on it. While the team was quite mature in its technology use, its web of custom solutions had grown too large to navigate effectively.
For example, with some services running in the on-premises data center, some running in AWS, some using Docker, and others using the company’s proprietary build platforms, the team lacked a consistent way to deploy microservices. Building an environment was time-consuming for developers because of the many cross-dependencies that had to be untangled beforehand. Eventually, it simply became too unruly to manage, and the company reached out for help, seeking to replatform and standardize on AWS technologies.
Invest in a bright spot
Our approach is to find a bright spot: a project within the overarching project that can result in a quick win, delivering business value and demonstrating the value of the engagement. The customer agreed and suggested a group of microservices that served as the platform for its primary revenue-generating activities. Together, we determined that the goal of the bright spot would be a microservices migration from the company’s on-premises data center to AWS, using AWS technology to build the supporting infrastructure. This would effectively replace the company’s proprietary solutions with commercial services that could be easily navigated.
Assess the application
We began by assessing the application architecture, determining for each application tier what could (or should) be replaced with cloud native services. For example, each application stack includes the application code itself, a load balancer, a database, and more. For each component, we analyzed what technologies the application currently used and whether they could be replaced with a commercial AWS alternative.
Similarly, we assessed the technology architecture that would support the microservices to determine the business value gained from native cloud services, e.g., automated scaling based on demand, self-healing, and lower maintenance overhead. Ultimately, the teams determined that they would use several native cloud managed services. Specifically:
- Amazon ECR – for storing and managing container images
- Amazon EKS – for building, securing, operating, and maintaining Kubernetes clusters
- Amazon Aurora and Amazon ElastiCache – as the managed database and Redis caching services for the microservices, complemented by other native services such as Kinesis, Redshift, and Amazon Elasticsearch Service where needed
- Helm charts – for deploying code to Kubernetes declaratively, much as Terraform does for infrastructure. Helm charts make artifacts easy to manage and maintain, and helped the Operations team deploy to EKS faster, moving them toward their automation goal.
- Spinnaker – selected for deploying to Amazon EKS due to its built-in support for rendering Helm charts and managing Kubernetes deployments.
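Because Helm charts play the same role for Kubernetes that Terraform plays for infrastructure, a chart release can even be driven from the same IaC toolchain. In this engagement Spinnaker rendered and deployed the charts; purely as an illustration of the chart-as-artifact idea, a minimal sketch using the Terraform helm provider might look like this (the service name, chart path, and values are hypothetical, not the customer’s actual configuration):

```hcl
# Illustrative only: chart name, path, and values are assumptions.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # kubeconfig pointing at the EKS cluster
  }
}

resource "helm_release" "orders_service" {
  name      = "orders-service"          # hypothetical microservice
  chart     = "./charts/orders-service" # chart kept alongside the service code
  namespace = "services"

  # Values flow in the same way Terraform variables do,
  # so the image tag can differ per environment.
  set {
    name  = "image.tag"
    value = var.image_tag
  }
}
```

Managing the release this way keeps the chart versioned and reviewable like any other IaC artifact.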
Infrastructure as Code (IaC)
With a plan in hand, the next step was to make any code changes needed for the applications to use the chosen cloud native services. Because the microservices were already running in a cloud native (albeit proprietary) architecture, minimal work was required here. This allowed the teams to move swiftly to automating the deployment of infrastructure and cloud native services with IaC, using HashiCorp Terraform Enterprise. The same workflow governs infrastructure deployment to each environment (e.g., Dev, Stage, Production), making it easy for the Operations team to maintain the required infrastructure.
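As a rough sketch of what such Terraform code can look like (resource names, engine choices, instance sizes, and variables below are illustrative assumptions, not the customer’s actual configuration), the container registry, Kubernetes cluster, database, and cache can all be declared together:

```hcl
# Illustrative Terraform sketch; all names, sizes, and variables are assumptions.

resource "aws_ecr_repository" "service" {
  name = "orders-service" # hypothetical microservice image repository
}

resource "aws_eks_cluster" "platform" {
  name     = "media-platform"
  role_arn = aws_iam_role.eks_cluster.arn # cluster IAM role defined elsewhere

  vpc_config {
    subnet_ids = var.private_subnet_ids
  }
}

resource "aws_rds_cluster" "db" {
  cluster_identifier = "orders-db"
  engine             = "aurora-postgresql" # Aurora engine choice is an assumption
  master_username    = var.db_username
  master_password    = var.db_password
}

resource "aws_elasticache_replication_group" "cache" {
  replication_group_id = "orders-cache"
  description          = "Redis cache for the microservices"
  engine               = "redis"
  node_type            = "cache.t3.medium"
}
```

In Terraform Enterprise, separate workspaces (e.g., Dev, Stage, Production) can each apply the same configuration with environment-specific variable values, which is one common way to manage per-environment deployment.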
Automated deployment pipelines
Next, we automated code deployment for the applications. Doing so allows developers to quickly create dev environments without involving other teams, or at least with far less involvement from them. With an automated solution in place, developers now have automated deployment to Amazon EKS.
Finally, we migrated the microservice applications and their data to the new cloud native AWS technologies. As a result, the customer now has an auto-scaling, self-healing environment that can scale up to meet growing demand and scale down to reduce cost when demand is lower. Moreover, developers can now quickly deploy services in a dev environment without untangling cross-dependencies, freeing them to spend much more time on innovation that drives customer satisfaction.
*This was originally written by Flux7 Inc., which has become Flux7, an NTT DATA Services Company as of December 30, 2019
Publication date: 2020-04-09