We recently attended a one-day workshop at the Amazon offices in downtown Washington, DC, focused on the AWS container services ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service). Although our team has been exposed to Docker and Kubernetes, we had only recently touched on managed container services through AWS and Google Cloud.
The labs for EKS & ECS both used the AWS Cloud9 IDE, which is accessible through a browser, to organize YAML, Python scripts, AWS CLI commands, and other AWS functions. Up until this point, we had not utilized Cloud9 to this extent, but after this workshop, we will! We had initially used Cloud9 only for managing serverless code functions in Lambda, but after the lab we realized there are many other use cases that would enhance DevOps environments built around AWS. There are many benefits to Cloud9 that we will have to cover in another blog 🙂
Our workshop began with the following agenda:
Learn how Amazon EKS makes deploying Kubernetes on AWS simple and scalable, including networking, security, monitoring, and logging.
Go from 0 to fully deployed using Amazon EKS. You’ll learn how to define and launch a Kubernetes control plane using Amazon EKS, including setting up kubectl, networking, and security access controls. Next, you’ll see how to launch worker nodes using Amazon EC2 and check them into your EKS cluster. We’ll finish up by running a sample application on our new, fully-managed Kubernetes cluster.
I will provide the links to the labs later in the blog.
When you deploy Kubernetes in a non-managed environment, you need hosts to support each node in a cluster. In the hierarchy of Kubernetes lingo, the main components are:
Kubernetes Master & Nodes
Master: maintains the desired state of the cluster.
Nodes: the machines or VMs (virtual machines) in a cluster that run your applications and cloud workflows. The Kubernetes master controls each node and manages all cluster functions.
Each node contains the following main structures:
Pod : The basic building block; runs one or more application containers along with shared resources like storage and networking.
Service : Defines a logical set of pods and a policy on how to access them. Useful when accessing the front end of applications that use replicas of pods for load balancing.
Volume : Storage abstraction for the shared resources of containers. A volume's data survives container restarts within a pod, but a plain (non-persistent) volume is removed when its pod is deleted; persistent volumes are needed for data that must outlive the pod.
Namespace : Used to divide resources across clusters between multiple users.
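To make these structures concrete, here is a minimal manifest sketch showing a namespace, a pod, and a service together. The names, labels, and image are our own illustrations, not from the workshop labs:

```yaml
# A namespace to isolate the demo resources
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
# A pod running a single nginx container
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: demo
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx:1.15
      ports:
        - containerPort: 80
---
# A service exposing any pods labeled app=web
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

Applying a file like this with `kubectl apply -f` would give you the pod and a stable service address in front of it; adding replicas behind the same label is how the load-balancing scenario above works.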
If you are setting up your own Kubernetes clusters, you need to deploy and configure all of the underlying VMs to support the structures listed above. This can be an administrative burden and can tie up operations and developer resources. The purpose of containerizing applications is to allow quick deployments, easy changes, and multiple environments that are scaled and managed by a PaaS-like service. After you define your applications' limitations and requirements, Kubernetes handles the rest, automating the deployment process and the management of resources so teams can develop, test, and validate applications in a shared environment.
AWS Managed Service Features
Taking advantage of the available AWS managed services allows an organization to leverage reliable and secure services without having to manage the underlying infrastructure and high-availability requirements.
Multi-AZ – The control plane (the master API server) and the backend database (etcd) run in high availability across three Availability Zones. Control planes are monitored, patched, and updated for you.
IAM integration – EKS uses the Heptio Authenticator to map IAM roles and users to Kubernetes access, so administrators have fine-grained control over clusters and resources.
Load Balancers – Traffic for the clusters can be routed through Network and Application Load Balancers or a Classic ELB.
EBS – Kubernetes persistent volumes for cluster storage are implemented as EBS volumes.
Route 53 – Clusters can be routed through Route 53 requests and load balanced across regions.
Auto Scaling – Clusters can use scaling features to grow and shrink on demand.
Container Network – The container network interface uses ENIs (Elastic Network Interfaces) to provide secondary IP addresses for Kubernetes pods.
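As an example of how the IAM integration surfaces inside the cluster, worker nodes are checked into an EKS cluster by mapping their IAM role in the `aws-auth` ConfigMap in the `kube-system` namespace. The account ID and role name below are placeholders, not values from the labs:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder ARN: substitute the instance role of your worker nodes
    - rolearn: arn:aws:iam::111122223333:role/eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

Until this mapping exists, EC2 worker nodes can boot but will never appear in `kubectl get nodes`.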
At a high level, AWS manages the control plane (the master controllers), while the worker nodes are managed by the end user.
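The EBS integration mentioned above works through ordinary Kubernetes persistent volume claims. A claim like the following sketch (the claim name is illustrative) would be satisfied on EKS by a dynamically provisioned EBS volume:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce          # EBS volumes attach to one node at a time
  storageClassName: gp2      # EBS-backed general-purpose SSD storage class
  resources:
    requests:
      storage: 10Gi
```

A pod that references `data-claim` in its volume spec then keeps its data across pod restarts, unlike the ephemeral volumes described earlier.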
AWS ECS Fargate
ECS allows you to run Docker containers as a fully managed service that can scale without you having to run your own container orchestration, manage and scale a cluster, or maintain the underlying virtual machines that support it. ECS provides somewhat tighter integration with AWS managed services such as security groups, VPCs, ECR, CodeStar, CloudWatch, CloudFormation templates, and CloudTrail logs.
When first opening ECS, you are immediately prompted to create a cluster. After choosing the type of cluster and whether Fargate or EC2 will provide the underlying infrastructure, you define tasks: groupings of application Docker containers that specify images, resource limits, and network port mappings. You can mix existing EC2 AMIs, imported Docker images, or the pre-defined application images AWS makes available.
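Under the hood a task definition is just JSON. A minimal sketch for a single nginx container on Fargate might look like the following (the family name and image are our own examples):

```json
{
  "family": "web-demo",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx:latest",
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "protocol": "tcp" }
      ]
    }
  ]
}
```

The `cpu` and `memory` values are the task-level resource limits the wizard asks for, and `awsvpc` networking is what gives each Fargate task its own ENI and IP address.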
In terms of usability, the ECS service abstracts more of the initial setup for your cluster environment in the AWS console and integrates more of the existing AWS services into the setup wizards.
IAM Role to access resources
In order for the ECS service to deploy resources on your behalf, the IAM user that is deploying clusters/containers will need to have certain rights.
Create & Modify Rights to:
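The exact list of services from the workshop isn't reproduced here, but a hedged sketch of an IAM policy granting that kind of access might look like the following; the specific actions chosen are our assumption, not the workshop's:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:*",
        "ec2:Describe*",
        "elasticloadbalancing:*",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
```

In practice you would scope the `Resource` and actions down to what your clusters actually need rather than using wildcards.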
The first-run wizard walks you through, step by step, defining a cluster, service, task, and container.
Once you define your task (i.e., your application and its parameters), define a service (load balancer) if needed, name your cluster, validate the settings, and click create. Within a few moments you will have a running application with network-accessible public and private IP addresses. You have now launched Docker containers without having to configure and manage the underlying compute resources.
AWS ECS is free to use, but when EC2 instances are deployed you are billed for the normal EC2 compute time. AWS EKS is billed at $0.20/hour per cluster, plus the cost of the EC2 instances you use to run your worker nodes.
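To put that EKS charge in perspective, the fixed control-plane cost works out as follows (a quick sketch using the $0.20/hour rate above and an average 730-hour month; EC2 worker-node costs are extra):

```python
# EKS control-plane pricing sketch (rate from this post; EC2 costs not included)
EKS_HOURLY_RATE = 0.20   # dollars per hour, per cluster
HOURS_PER_MONTH = 730    # average hours in a month

monthly_cost = EKS_HOURLY_RATE * HOURS_PER_MONTH
print(f"EKS control plane: ${monthly_cost:.2f}/month per cluster")  # $146.00/month
```

So each EKS cluster carries roughly $146/month of fixed cost before any worker nodes, while an equivalent ECS cluster's control plane is free.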
Overall, both managed services have pros and cons. The pros of ECS are that it is highly integrated with AWS services and functions, which gives you flexibility in how you deploy your containers, and the service itself is free apart from EC2 costs. The con of ECS is that it would not easily port to other cloud providers, whereas EKS/Kubernetes would port to Google Cloud, Azure, or on-premises environments with far less friction, adding a layer of abstraction to your multi-cloud strategy.
To find out how Adela Technologies can help you migrate your Docker or Kubernetes platform to AWS as a managed service or to migrate your workloads to the cloud, contact us HERE.