Ever since Docker containers were first introduced in 2013, the transition to microservice-based, containerized architectures has been one of the most popular topics in the industry.
Organizations that leverage container technology to run their mission-critical applications do so in order to adopt microservice architectures, which increase speed, reliability, and availability.
When talking about containerized applications, the two technologies that will always come up in the conversation are Kubernetes and Amazon ECS (Elastic Container Service).
While Kubernetes is an open-source container orchestration platform that was originally developed by Google, Amazon ECS is AWS’ proprietary, managed container orchestration service.
Amazon ECS is a provider-specific solution for running and managing Docker containers in the public cloud. During re:Invent 2017, AWS also announced support for Kubernetes with its EKS offering (Elastic Kubernetes Service). Amazon EKS is AWS’ managed service for Kubernetes, with which users can easily deploy a highly available, scalable, and fully managed Kubernetes control plane on AWS.
With both services offering similar yet distinct concepts for running and managing containers on AWS, how can engineering teams decide whether Amazon ECS or Amazon EKS is the more suitable choice for their workloads?
In this blog post, we’ll cover the differences between the two platforms and highlight the advantages of each, so you will be fully equipped to choose the best option for you.
Why use a Container Orchestrator?
In the past few years, containers have dramatically changed the way organizations develop, package and deploy applications.
Running applications in containers rather than in traditional VMs brings great value, as containers are easily scalable and ephemeral.
However, when managing large clusters, that scalability can quickly become an operational overhead for engineering teams.
When operating at scale, a container orchestration platform that automates the deployment, management, scaling, networking, and availability of container clusters becomes a necessity.
Container orchestration is all about managing the lifecycle of containers in large environments, and that includes various tasks such as:
- Provisioning and deployment of containers on instances
- Redundancy and availability of containers
- Scaling up/down based on load
- Resource allocation
- Health monitoring of containers and hosts
- Seamless deployment of new application versions
Networking
Being able to assign a network interface directly to a task/pod might not seem like a big deal at first sight; however, it effectively provides a higher level of security. The user can attach a Security Group dedicated to that individual task/pod, rather than simply opening all network ports on the hosting EC2 instance. (Some might argue that using a load balancer eliminates this challenge, as the user only needs to open a security group to the load balancer itself.)
With ECS, you have the option to associate an ENI directly with a task by launching the task in “awsvpc” network mode. However, the maximum number of ENIs (i.e., virtual NICs) that can be attached to each EC2 instance varies by instance type, ranging from 8-15 ENIs per instance, potentially not enough to support all the containers we wish to run on that particular instance.
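For illustration, opting into this mode is a single field in the task definition. A minimal sketch is shown below; the family name, account ID, and image are placeholders, and the awsvpc mode also requires that you pass subnet and security-group IDs when the task is launched:

```json
{
  "family": "web-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
      "memory": 512,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```

Note that in “awsvpc” mode no host port mapping is needed: the task’s ENI gets its own private IP, so the container port is reachable directly, guarded by the task’s own Security Group.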
Lately, AWS has increased this limit for ECS clusters running in “awsvpc” mode (a feature known as ENI trunking): users can now assign 3 to 8 times more ENIs than the previous limit (depending on the instance type), therefore increasing elasticity and enhancing container security.
With EKS, the user has the option to assign a dedicated network interface to a Pod, which practically means that all containers inside that Pod share the same internal network and public IP. On top of that, with EKS it’s also possible to share an ENI between several Pods, enabling the user to place many more Pods per instance (EKS allows up to 750 Pods, depending on instance size), significantly more than ECS, which supports a maximum of 120 tasks per instance.
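The per-instance Pod capacity on EKS follows directly from the instance’s ENI limits. With the AWS VPC CNI, the documented formula is maxPods = ENIs × (IPv4 addresses per ENI − 1) + 2. A quick sketch, using the published ENI limits for two m5 instance types (verify the numbers for your type against AWS’s ENI limits table):

```python
def eks_max_pods(max_enis: int, ipv4_per_eni: int) -> int:
    """Max Pods per EKS node with the AWS VPC CNI:
    each ENI contributes (IPs - 1) Pod addresses, since one IP
    is reserved for the ENI itself, plus 2 for host-network Pods."""
    return max_enis * (ipv4_per_eni - 1) + 2

# ENI limits per AWS's documentation for the m5 family
print(eks_max_pods(3, 10))   # m5.large:    3 ENIs x 10 IPs -> 29 Pods
print(eks_max_pods(8, 30))   # m5.4xlarge:  8 ENIs x 30 IPs -> 234 Pods
```

This is why larger instance types can host dramatically more Pods, and why ENI capacity, not CPU or memory, is often the binding constraint on Pod density.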
Pricing
Generally speaking, both ECS and EKS clusters running on EC2 instances incur the same compute costs, based on the instance types they use and the running time of those instances.
The only cost difference between ECS and EKS is that EKS carries the additional cost of running the master nodes (across 3 Availability Zones) on On-Demand instances. This is because the Kubernetes control plane must always be available and redundant, and EKS manages the master nodes separately from the worker nodes.
The service cost of EKS is $0.20 per hour, which amounts to roughly $144 per month per Kubernetes cluster.
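The arithmetic behind that figure, assuming the $0.20/hour rate quoted above and a 30-day month:

```python
# EKS control-plane cost at the rate quoted above ($0.20/hour)
hourly_rate = 0.20
hours_per_month = 24 * 30          # 720 hours in a 30-day month
monthly_cost = hourly_rate * hours_per_month
print(f"${monthly_cost:.2f} per cluster per month")  # $144.00
```

This charge applies per cluster regardless of size, so consolidating workloads into fewer clusters dilutes it.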
Ease of Deployment
As mentioned in the intro to this blog, Kubernetes and ECS are quite similar in their orchestration concepts; however, another key differentiator between the two is ease of deployment.
While ECS is considered an “out of the box” solution for container orchestration due to its deployment simplicity, deploying Kubernetes clusters on EKS is trickier and requires more complex deployment configuration and expertise.
For starters, both ECS and EKS can be initially set up via the AWS management console. Because ECS is AWS’ native solution, there is no control plane for the user to manage, as there is in EKS. After the initial cluster setup, the deployment simplicity of ECS kicks in: the user can configure and deploy tasks directly from the AWS management console, whereas in EKS, the user must configure and deploy Pods through Kubernetes tooling such as kubectl.
To summarize, orchestrating containers via Kubernetes requires more expertise from DevOps engineers, whereas ECS is considered easier to operate.
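To make the contrast concrete: where an ECS user fills in a task definition in the console, an EKS user typically writes a manifest like the minimal, hypothetical one below and applies it with `kubectl apply -f nginx-pod.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo        # hypothetical example name
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
```

The manifest-driven workflow is more to learn up front, but it is also what makes Kubernetes deployments portable and easy to keep in version control.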
Security
Securing cloud infrastructure has always been a challenge for engineering teams, and securing containers even more so.
Both ECS and EKS can store their Docker container images securely in ECR (Elastic Container Registry), AWS’ service for storing Docker images. Every time a container spins up, it securely pulls its image directly from ECR.
The main security differentiator between ECS and EKS is that ECS supports IAM roles per task, whereas per-pod IAM roles are not supported in EKS at the moment.
The ability to assign an IAM role per task/container provides an additional layer of security by granting each container access only to the specific AWS services it needs, such as S3, DynamoDB, Redshift, SQS, and more.
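In practice this is a single field in the ECS task definition. A sketch is shown below; the family, role ARN, and image are placeholders, and the referenced role must already exist with a trust policy allowing ecs-tasks.amazonaws.com to assume it:

```json
{
  "family": "reports-worker",
  "taskRoleArn": "arn:aws:iam::123456789012:role/reports-task-role",
  "containerDefinitions": [
    {
      "name": "worker",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/worker:latest",
      "memory": 256
    }
  ]
}
```

AWS SDKs inside the container then pick up the task role’s credentials automatically, so no long-lived access keys need to be baked into the image.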
Flexibility
As the cloud-compute world evolves, more and more organizations are distributing their workloads across multiple cloud providers, thereby benefiting from the different services and pricing each cloud offers.
ECS is an AWS-native service, meaning it can only be used on AWS infrastructure, resulting in vendor lock-in.
EKS, on the other hand, is based on Kubernetes, an open-source project available to users running on multiple clouds (AWS, GCP, Azure) and even on-premises. This provides extra flexibility and allows users to remain more cloud-agnostic about where they run their workloads.
Summary
As discussed in this blog, a container orchestrator is not an add-on; it’s a real necessity when running production workloads in the public cloud.
We have covered the key differentiators between ECS and EKS, and now the only thing left is to decide which is the most suitable choice for your team.
Well, there is no right or wrong when selecting a container orchestration platform, as each has its pros and cons.
If you are new to containers and are looking for a simple way to set up and deploy clusters, perhaps ECS is the easier choice.
On the other hand, if you are experienced and are looking for a better way to scale your clusters and avoid vendor lock-in, perhaps EKS is the solution for you.
If you already have containers running on Kubernetes or want an advanced orchestration solution with more compatibility, you should use Amazon EKS.
When you’re looking for a solution that combines simplicity and availability, and you want to have advanced control over your infrastructure, then ECS is the right choice for you.
While ECS offers tighter integration with AWS services, users who run Kubernetes get the chance to enjoy the additional capabilities which derive from working in an open-source ecosystem.
Spotinst Ocean: Abstracting containers from the Infrastructure
Spotinst Ocean is our serverless compute engine that provides end-to-end management of the data plane (worker nodes), abstracting Kubernetes Pods and Amazon ECS tasks from the underlying VMs and infrastructure. Spotinst Ocean relieves engineering teams of the overhead of managing the most complicated part of Kubernetes or ECS clusters by dynamically provisioning, scaling, and managing the data-plane components (worker nodes and EC2 instances).
Spotinst Ocean takes advantage of multiple compute purchasing options, such as Reserved and Spot Instances, and uses On-Demand instances only as a fallback, providing up to an 80% reduction in cloud infrastructure costs while maintaining high availability for production and mission-critical applications.
Please check out this blog post explaining the capabilities you can achieve with Spotinst Ocean and why it is considered the go-to product for running container workloads in the public cloud while significantly lowering cloud-compute costs.
Are you already running containers and looking to automate your workloads at the lowest possible cost while gaining deeper visibility into your clusters?
Take Ocean for a spin!