“Cluster schedulers must meet a number of goals simultaneously: high resource utilization, user-supplied placement constraints, rapid decision making, and various degrees of “fairness” and business importance – all while being robust and always available.”
— Google's Omega paper
Large-scale compute clusters are expensive, so it is important to use them well. Utilization and efficiency can be increased by running a mix of workloads on the same machines: CPU- and memory-intensive jobs, small and large ones, and a mix of batch and low-latency jobs – ones that serve end-user requests or provide infrastructure services such as storage, naming or locking.
In distributed cluster managers such as Apache Mesos, Amazon ECS, Docker Swarm, and Google Kubernetes, the scheduler is one of the most critical components, as it is responsible for scheduling tasks (containers). This process requires optimizing resource assignments to maximize the intended goals.
When multiple resource assignments are possible, picking one versus another can lead to significantly different outcomes in terms of scalability, performance, and cost. As such, efficient assignment selection is a crucial aspect of a scheduler. For example, picking assignments by evaluating every pending task against every available resource is computationally prohibitive.
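To make this concrete, here is a minimal sketch (illustrative only, not Spotinst code) of a first-fit heuristic: instead of scoring every task against every possible combination of machines, each task is placed on the first machine with enough free capacity, which keeps the cost to one pass over the machines per task. The machine names and sizes are made up.

```python
machines = {"m1": 4.0, "m2": 2.0}   # free vCPUs per machine (illustrative)
tasks = [1.5, 2.0, 1.0]             # vCPUs requested per task

def first_fit(machines, tasks):
    """Place each task on the first machine with enough free capacity.
    One pass over the machines per task, instead of evaluating every
    possible combination of assignments."""
    free = dict(machines)
    placement = {}
    for i, need in enumerate(tasks):
        for name, cap in free.items():
            if cap >= need:
                free[name] = cap - need   # reserve the capacity
                placement[i] = name
                break
        else:
            placement[i] = None           # no machine can host the task
    return placement

print(first_fit(machines, tasks))
```

First-fit can leave capacity stranded (it is a greedy bin-packing heuristic), but it illustrates the trade-off between decision speed and placement quality that real schedulers must balance.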
Tasks (Containers) Layer vs Infrastructure (Servers) Layer
It is important to understand that the Containers layer is almost completely decoupled from the Infrastructure layer: both layers should scale up and down horizontally, and, most importantly, independently of each other.
In order to design a solid autoscaling environment within a distributed cluster, we should create different scaling policies for each layer.
For example, we would want to scale out the number of our running Docker containers based on our Application Latency, while we would want to scale out more physical hardware based on the CPU usage or available Memory in the cluster.
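The two policies above can be sketched as separate decision functions, one per layer. This is a hypothetical illustration; the metric names and thresholds are assumptions, not a Spotinst API.

```python
def scale_containers(app_latency_ms, threshold_ms=200):
    """Container layer: scale out replicas when application latency is high.
    The 200 ms threshold is an illustrative assumption."""
    return "scale_out" if app_latency_ms > threshold_ms else "hold"

def scale_infrastructure(cluster_cpu_pct, free_mem_gib,
                         cpu_high=80.0, mem_low=4.0):
    """Infrastructure layer: add machines on CPU pressure or low free memory.
    Thresholds are illustrative assumptions."""
    if cluster_cpu_pct > cpu_high or free_mem_gib < mem_low:
        return "add_machine"
    return "hold"

print(scale_containers(350))        # latency-driven container scaling
print(scale_infrastructure(60, 2))  # memory-driven infrastructure scaling
```

Because the two functions read different metrics, either layer can scale without forcing the other to, which is exactly the decoupling described above.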
Figure A shows an example of the different types of tasks a scheduler should assign to the running machines, as well as the two decoupled layers: the Containers layer (Tasks & Services) and the Infrastructure layer.
Homogeneous vs Heterogeneous
One of the most common challenges in managing large fleets of distributed clusters is enforcing continuous optimization. Homogeneous clusters (clusters built from machines of a single size and type) tend to become inefficient as you run various types of tasks and containers. Heterogeneous clusters can use resources more wisely, but it is hard to manage them and allocate the right task to the right machine.
Distributed-Heterogeneous Management With Spotinst – The Rise Of The Tetris
Spotinst Elastigroup aims to be the best place to set up and manage distributed clusters. Thus, we are happy to introduce our support for AWS ECS and Google Kubernetes: a designated cluster scheduler that matches Tasks (Containers) with Machines (Resources).
Figure B shows how the Spotinst Scheduler communicates with the ECS cluster layer and makes infrastructure decisions by picking the right machine for the right job.
For example, suppose your cluster runs 10 machines of c3.large (2 vCPUs and 3.8 GiB of RAM) and 10 machines of c4.xlarge (4 vCPUs and 7.5 GiB of RAM). Your total capacity is 60 vCPUs (60 × 1,024 = 61,440 ECS CPU units) and 113 GiB of RAM. What happens if a single task requires 16 GiB of RAM? Although the cluster has plenty of aggregate RAM and CPU, the task won't start: no single machine has 16 GiB available.
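The scenario above can be checked in a few lines: aggregate capacity is misleading, because a task must fit on a single machine. This sketch uses the instance sizes from the example.

```python
# Cluster from the example: 10 x c3.large (3.8 GiB) + 10 x c4.xlarge (7.5 GiB)
machines = [{"type": "c3.large", "mem_gib": 3.8}] * 10 \
         + [{"type": "c4.xlarge", "mem_gib": 7.5}] * 10

task_mem = 16.0   # GiB required by a single task

total = sum(m["mem_gib"] for m in machines)
fits_somewhere = any(m["mem_gib"] >= task_mem for m in machines)

print(f"aggregate RAM: {total:.1f} GiB")               # plenty in total...
print(f"task fits on some machine: {fits_somewhere}")  # ...but on no single host
```

The cluster holds 113 GiB in aggregate, yet the largest machine offers only 7.5 GiB, so the 16 GiB task is unschedulable without new hardware.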
In this scenario, however, our Infrastructure-aware Scheduler provisions a machine that meets the task's requirements, and also terminates the machine when it's no longer needed.
Figure C shows how the Spotinst Infrastructure-aware Scheduler deals with multiple and varied task types and assigns them to the right machine; if needed, it spins up a machine with the right resources for the task to be completed.
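One way to picture the "spin up the right machine" step is a smallest-fit lookup over an instance catalog: when no running machine fits a task, choose the smallest instance type that does. This is a hypothetical sketch, not the actual Spotinst algorithm; the catalog is a tiny illustrative subset of EC2 instance types.

```python
catalog = [   # (type, vCPUs, mem GiB), sorted smallest-first by memory
    ("c3.large",  2,  3.8),
    ("c4.xlarge", 4,  7.5),
    ("r3.xlarge", 4, 30.5),
]

def provision_for(task_cpu, task_mem):
    """Return the first (smallest) instance type that satisfies the task,
    or None if nothing in the catalog is big enough."""
    for itype, cpu, mem in catalog:
        if cpu >= task_cpu and mem >= task_mem:
            return itype
    return None

# The 16 GiB task from the earlier example: neither running instance type
# fits, so a memory-optimized machine would be provisioned for it.
print(provision_for(2, 16.0))
```

Because the catalog is sorted smallest-first, the scheduler never over-provisions more than one catalog step beyond what the task needs.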
We are happy to introduce our Infrastructure-aware Scheduler support for AWS ECS and Google Kubernetes, with more fun planned for Docker Swarm and Mesos later this year.
It is Generally Available and can be used directly from our Elastigroup creation wizard: simply specify your ECS cluster ID or Kubernetes API token, and feel the difference.
The Spotinst Team.