Microservices on Amazon ECS with Abby Fuller (AWS) & Mike Lang (Weaveworks)
Abby Fuller returned as the guest speaker to talk about Microservices on Amazon ECS. She provides an overview of ECS, how it works, and best practices.
On March 28, 2017, the bi-monthly Weaveworks Online User Group met to cover “Microservices on Amazon ECS.”
Abby Fuller (AWS) returned as the guest speaker to talk about Microservices on ECS. In her talk, she:
- Recapped her last talk
- Provided an ECS overview
- Took a closer look at how ECS works
- Covered specific ECS best practices
Recap of the last episode
This blog post recaps Part 1 of Abby’s presentation.
Paraphrasing Amazon’s website definition, Amazon EC2 Container Service (ECS) is a highly scalable container management service that allows you to run your applications on a managed cluster of Amazon EC2 instances.
ECS provides a platform to run container-based workloads easily and allows those workloads to integrate with other AWS features. There is also an open-sourced API for ECS management. The whole idea is to provide a platform for:
- Deep AWS Integration
- Container Orchestration
- Cluster Management
Currently, ECS is widely used by many different companies.
A common question about ECS is how it maps back to traditional EC2 workloads. To understand this mapping, it is best to look at ECS as a stack made up of an Instance, a Service, and a Task.
At the lowest layer of the stack, a regular EC2 instance registers itself with a cluster, indicating that containers can run on it. After registration, the instance is managed by another layer called the Service. The Service manages container startup and shutdown, along with container placement. Finally, there is the Task, which acts as a container wrapper with configuration and runs on the Instance. This diagram shows the relationship of Instance, Service, and Task:
AWS manages the mapping and, from the user’s point of view, runs containers seamlessly on traditional EC2 instances.
How ECS Works
Here is a high-level architecture view of ECS:
One or more EC2 instances make up an ECS cluster. A load balancer (ELB or ALB) routes traffic to the cluster instances. One or more services run in an ECS cluster. Within the ECS cluster, Task Definitions define the different aspects of a container, such as environment variables, resource allocation, and other parameters. The AWS documentation page has a more detailed description of Services and Task Definitions. Autoscaling works on two levels in this architecture: cluster-level scaling and scaling of the individual services running within the ECS cluster.
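To make the Task Definition concrete, here is a minimal sketch; the family name, image, and values below are illustrative assumptions, not taken from the talk:

```json
{
  "family": "web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "example/web-app:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ],
      "environment": [
        { "name": "ENVIRONMENT", "value": "production" }
      ]
    }
  ]
}
```

A Service then references this Task Definition and keeps the desired number of copies of it running in the cluster.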
To better understand how ECS works, let’s move from a high level view of ECS architecture to take a closer look at its individual features.
AWS ECS Task Placement
To utilize resources effectively, AWS ECS allows a user to specify where to place the task and/or which task to terminate based on different attributes. Some of the attributes are:
- AMI ID
- Availability Zone
- Instance Type
- Distinct Instances
Task Placement Strategies and Task Placement Constraints can be used separately or in combination to control task placement.
Briefly, AWS ECS supports three task placement strategies: binpack (pack tasks to leave the least unused CPU or memory), random, and spread (distribute tasks evenly based on a specified value, such as Availability Zone). For task placement, strategies and constraints are applied in the following order:

1. Cluster constraints
2. Custom constraints
3. Placement strategies
4. Apply filter
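As a hedged illustration of how constraints and strategies combine, a service definition or RunTask request can carry both; the field names below follow the ECS API, while the specific expressions and values are illustrative:

```json
{
  "placementConstraints": [
    { "type": "memberOf", "expression": "attribute:ecs.instance-type =~ t2.*" },
    { "type": "distinctInstance" }
  ],
  "placementStrategy": [
    { "type": "spread", "field": "attribute:ecs.availability-zone" },
    { "type": "binpack", "field": "memory" }
  ]
}
```

Here the constraints restrict placement to t2-family instances and forbid co-locating tasks on the same instance, while the strategies spread tasks across Availability Zones and then bin-pack by memory within each zone.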
ECS Event Stream for CloudWatch Events
Monitoring is an important aspect of operations. AWS ECS provides near real-time updates on container states, cluster states, as well as tasks running on those container instances. Integrated with CloudWatch Events, this provides a powerful and effective monitoring tool, as existing CloudWatch Events targets such as AWS Lambda or Amazon Simple Queue Service can be used.
Documentation for this feature can be found here.
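As an illustration, a CloudWatch Events rule can match ECS task state changes with an event pattern like the sketch below; it follows the documented event format, and the stoppedReason value is just one example:

```json
{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Task State Change"],
  "detail": {
    "lastStatus": ["STOPPED"],
    "stoppedReason": ["Essential container in task exited"]
  }
}
```

A rule with this pattern could, for example, invoke a Lambda function whenever a task stops unexpectedly.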
IAM Roles for Tasks
IAM roles can be assigned to a task. The benefits of using IAM Roles for tasks are:
- Credential Isolation: a container can only retrieve credentials for the IAM role defined in its Task Definition
- Authorization: containers in different tasks cannot access each other’s credentials
- Auditability: access and event logging through CloudTrail can be traced back to the task
Refer to the AWS documentation page for more detail on this feature.
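For illustration, a task role is attached through the taskRoleArn field of the Task Definition; the account ID, role name, and image below are placeholders:

```json
{
  "family": "web-app",
  "taskRoleArn": "arn:aws:iam::123456789012:role/ecs-task-role",
  "containerDefinitions": [
    { "name": "web", "image": "example/web-app:latest", "memory": 512 }
  ]
}
```

Containers in this task can then call AWS APIs with the permissions granted to ecs-task-role, without instance-wide credentials.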
Fast and Hassle-free Deployment
AWS ECS inherently provides easy deployment and rapid scalability, thanks to its extensible API for deployment and other management tasks. For instance, a Continuous Integration tool can trigger a deployment whenever a commit lands on a branch in GitHub.
AWS ECS also has extra protection for service deployment.
On a rolling update, for example, a new Task Definition must first pass automated health checks before AWS ECS drains the existing connections of the previously running Task Definition and updates to the new one.
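The rolling-update behavior is controlled by the service’s deploymentConfiguration; as a sketch with illustrative values:

```json
{
  "serviceName": "web-app",
  "desiredCount": 4,
  "deploymentConfiguration": {
    "minimumHealthyPercent": 50,
    "maximumPercent": 200
  }
}
```

With these values, ECS may stop up to half of the old tasks before starting replacements, or run up to twice the desired count temporarily, which lets it start new tasks before draining old ones.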
Flexible Scaling for Performance
Services can be scaled up or down based on CloudWatch alarms, and Autoscaling is built-in during the service registration process. Besides service-level Autoscaling, the ECS cluster can be scaled up or down depending on workload or resources.
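Service-level Autoscaling is driven through the Application Auto Scaling API. As a sketch, registering a service’s desired count as a scalable target might look like the following, where the cluster name (default), service name, and capacity limits are placeholders:

```json
{
  "ServiceNamespace": "ecs",
  "ResourceId": "service/default/web-app",
  "ScalableDimension": "ecs:service:DesiredCount",
  "MinCapacity": 2,
  "MaxCapacity": 10
}
```

A scaling policy tied to a CloudWatch alarm (for example, on service CPU utilization) then adjusts DesiredCount within these bounds.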
ALB vs ELB
In the high-level AWS ECS architecture, there is a load balancer. Elastic Load Balancing (ELB) is the classic load balancer for AWS, introduced in 2009. In 2016, AWS introduced the Application Load Balancer (ALB) as an option under Elastic Load Balancing.
Classic Elastic Load Balancing defines routing rules based on protocol and port number. An Application Load Balancer defines routing rules based on content, which allows ECS to allocate port numbers dynamically. Dynamic port allocation removes a resource constraint: multiple tasks from the same service can run on a single instance. And because a single ALB can handle multiple services, it also saves money.
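Dynamic port mapping is requested by setting hostPort to 0 in the container’s port mappings; ECS then picks an ephemeral host port and registers it with the ALB target group. A minimal sketch of the relevant fragment of a container definition:

```json
{
  "portMappings": [
    { "containerPort": 80, "hostPort": 0 }
  ]
}
```

Because each task gets its own host port, several copies of the same container can coexist on one instance behind the same ALB.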
For details of ALB, check out this website.
Abby ended her talk with a discussion of ECS best practices, which you can read about in part 2 of this blog post. For a deeper dive into Kubernetes and AWS, visit our what you need to know page and how we are managing clusters on AWS.
Watch the full talk: