Spotinst Ocean is our serverless compute engine that abstracts containers from the underlying infrastructure, letting engineering teams focus their time and effort on building applications and shipping containers rather than selecting VMs, keeping them utilized, and configuring scaling policies for peak traffic.
In this blog post, we will present 11 reasons why engineering teams should adopt Spotinst Ocean as the go-to platform to run container workloads in the cloud.
Danny Ocean: “Now there’s eleven of us, each with an equal share. You do the math”.
1. A Serverless experience
Spotinst Ocean provides a true "serverless" experience for engineering teams running containers in the cloud. It removes the burden of managing the underlying nodes by automatically selecting the most suitable ones, while preserving full flexibility over the infrastructure.
Compared to other managed serverless services, you are not required to pay an extra fee for the service itself; you pay only a fixed fee taken out of the savings.
With Spotinst Ocean, there is also no vendor lock-in: it can manage containers on AWS (EKS, kops, and native Kubernetes), on GKE, and soon on ECS.
2. Pod-Driven Autoscaling
Spotinst Ocean leverages its Pod-Driven Autoscaler, developed by Spotinst, which is responsible for scheduling pods and scaling the cluster.
The Autoscaler watches for pods that are scheduled to run and, when there are insufficient resources in the cluster, launches a new node to accommodate them. In addition, Spotinst's Autoscaler provides the option to maintain a buffer of spare capacity (vCPU and memory), known as headroom. Headroom ensures that the cluster has the capacity and elasticity to quickly schedule additional pods without waiting for new nodes to be provisioned and registered to the cluster.
Besides simplifying cluster scale-up, the Autoscaler helps reduce costs by automatically scaling down to the minimum number of instances when resources are not required. Spotinst Ocean automatically reschedules pods from under-utilized nodes onto other nodes to raise resource utilization, optimizing the cluster for both performance and cost without any action required on the user's end.
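The headroom idea above can be sketched as a simple capacity check. This is an illustrative model only, not Ocean's actual implementation; the function name and the resource units are assumptions:

```python
# Minimal sketch of headroom-aware scale-up logic (illustrative only).

def needs_new_node(free_cpu, free_mem, pod_cpu, pod_mem,
                   headroom_cpu=0, headroom_mem=0):
    """Return True if scheduling the pod would eat into the reserved
    headroom buffer, i.e. a new node should be provisioned."""
    remaining_cpu = free_cpu - pod_cpu
    remaining_mem = free_mem - pod_mem
    return remaining_cpu < headroom_cpu or remaining_mem < headroom_mem

# 4 vCPU / 8 GiB free; pod asks 1 vCPU / 2 GiB; headroom 2 vCPU / 4 GiB
print(needs_new_node(4, 8, 1, 2, headroom_cpu=2, headroom_mem=4))  # False
print(needs_new_node(4, 8, 3, 2, headroom_cpu=2, headroom_mem=4))  # True
```

The point of the buffer is visible in the second call: the pod itself still fits, but scheduling it would leave less spare capacity than the configured headroom, so a new node is requested ahead of demand.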
3. Right-Sizing
One of our latest enhancements to Spotinst Ocean is Right-Sizing. Managing containerized clusters is challenging, and it can be difficult for engineering teams to estimate pods' resource requirements in terms of vCPU and memory. Even when development teams achieve an accurate estimate of their application's resource consumption, those measurements are likely to differ in production environments. Spotinst Ocean's Right-Sizing feature compares the CPU and memory requests of the pods against their actual consumption in production, and from that comparison provides suggestions to improve the resource configuration of the deployments. Applying correct pod resources helps prevent both over-provisioning (extra space on the node, leading to under-utilization and higher cluster costs) and under-provisioning (too few resources, leading to errors in the cluster such as OOM events).
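The request-versus-consumption comparison can be illustrated with a toy function. The 15% safety margin and the verdict labels are assumptions for the sketch, not Ocean's actual thresholds:

```python
# Illustrative right-sizing check: compare a pod's memory request
# against its observed p95 consumption in production.

def rightsizing_suggestion(requested_mib, observed_p95_mib, slack=0.15):
    """Suggest a new memory request: observed p95 plus a safety margin.
    Flags over- and under-provisioning (thresholds are illustrative)."""
    suggested = round(observed_p95_mib * (1 + slack))
    if requested_mib > suggested:
        verdict = "over-provisioned"
    elif requested_mib < observed_p95_mib:
        verdict = "under-provisioned (OOM risk)"
    else:
        verdict = "ok"
    return verdict, suggested

print(rightsizing_suggestion(2048, 600))  # pod asks far more than it uses
print(rightsizing_suggestion(500, 600))   # pod asks less than it actually uses
```

The same logic applies symmetrically to vCPU requests; the interesting part is that both directions of misconfiguration cost you, one in money and one in stability.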
Rusty Ryan: “I need the reason. Don’t say money. Why do this?”
Danny Ocean: “Why not do it?”
4. Showback
In the last couple of years we have witnessed the transition from running applications on VMs to a containerized approach. One of the challenges of running workloads in a microservice container architecture is that multiple applications and services share the underlying infrastructure, making it extremely difficult to distinguish the costs of different workloads. With Spotinst Ocean's Showback feature, the user gets a granular view of the cluster and can dive into the cost breakdown (compute and storage) of each of the cluster's resources (deployments, StatefulSets, DaemonSets, cron jobs, and pods).
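A minimal model of showback is splitting a node's cost across the pods it hosts in proportion to their resource requests. This sketch weighs vCPU only; a real breakdown like Ocean's also accounts for memory and storage:

```python
# Illustrative cost allocation: divide a node's hourly cost among
# its pods proportionally to their vCPU requests.

def showback(node_cost_per_hour, pods):
    """Return a per-pod cost share for one node (vCPU-weighted sketch)."""
    total_cpu = sum(p["cpu"] for p in pods) or 1
    return {p["name"]: round(node_cost_per_hour * p["cpu"] / total_cpu, 4)
            for p in pods}

pods = [{"name": "api", "cpu": 2},
        {"name": "worker", "cpu": 1},
        {"name": "cron", "cpu": 1}]
print(showback(0.40, pods))  # 'api' bears half of the $0.40/hour node
```

Summing these shares across every node in the cluster, grouped by deployment, is what turns an opaque infrastructure bill into a per-workload cost breakdown.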
5. Spotinst Horizontal Pod Autoscaling (HPA)
One of the many challenges DevOps teams face when running containers is scaling during peak traffic. To address this, we have empowered Spotinst Ocean with an additional layer of scaling elasticity: Horizontal Pod Autoscaling (Spotinst HPA). While the Pod-Driven Autoscaler uses a Tetris-style bin-packing approach that scales nodes when they reach a utilization threshold, Spotinst HPA scales the number of pod replicas in a deployment based on observed network latency or custom metrics. When a deployment crosses its defined threshold, the HPA replicates the pods belonging to that deployment.
6. Run Workloads
As part of their daily routine, DevOps teams are required to work with several provisioning tools and platforms, which can become an overhead. To ease and automate Kubernetes operations for our users, we have developed the Run Workloads feature, which allows DevOps engineers to run their applications directly from the Spotinst Ocean console. This capability reduces the overhead of managing Kubernetes clusters through multiple interfaces and adds a simple, convenient way to run applications. With Run Workloads it is possible to create the following Kubernetes workloads: Deployment, Pod, and DaemonSet.
7. Diverse Workloads on 1 Cluster
With Spotinst Ocean, the user can define custom launch specifications that allow multiple workload types to run on the same Ocean cluster. The challenge of running multiple workload types on one Kubernetes cluster has been applying a unique configuration to each workload in a heterogeneous environment. In EKS, for example, the user must manage every workload separately in different Auto Scaling groups, making the deployment process more challenging.
By applying custom launch specifications to the Spotinst Ocean cluster, the user can operate diverse workloads on the same Ocean cluster, removing the need to split them across separate clusters.
As part of these launch specs, the user can configure sets of labels and taints, along with a custom AMI, user-data script, and security group, which will be applied to the nodes that serve the matching labeled pods. This additional layer of orchestration lets the user run any type of workload on the same Ocean cluster.
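Conceptually, a pod's node selector is matched against each launch spec's labels to decide which node configuration should serve it. This is a simplified sketch; the field names and matching rules here are assumptions, not Ocean's API:

```python
# Illustrative label matching between pods and launch specifications.

def match_launch_spec(pod_node_selector, launch_specs):
    """Return the name of the first launch spec whose node labels
    satisfy every key/value in the pod's nodeSelector, else None."""
    for spec in launch_specs:
        if all(spec["labels"].get(k) == v
               for k, v in pod_node_selector.items()):
            return spec["name"]
    return None

specs = [
    {"name": "gpu-nodes", "labels": {"workload": "gpu"}},
    {"name": "general",   "labels": {"workload": "general"}},
]
print(match_launch_spec({"workload": "gpu"}, specs))  # gpu-nodes
```

Taints work in the opposite direction of labels: they keep non-matching pods off a launch spec's nodes, so the two mechanisms together fence each workload type onto its own node configuration within the one cluster.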
8. Cluster roll
Cluster Roll allows the user to roll the cluster in a single click, replacing running instances in a blue-green manner according to a batch percentage configured by the user. Spotinst Ocean's roll takes into account the pods currently running in the cluster and is aware of any new workload entering it.
Spotinst Ocean simplifies the roll process so the user can perform it in a single click while maintaining full visibility into its progress.
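The batch-percentage idea can be shown with a few lines: the cluster's instances are partitioned into groups of the configured size, and each group is replaced (and its pods rescheduled) before the next one starts. An illustrative sketch, not Ocean's implementation:

```python
import math

def roll_batches(instance_ids, batch_percent):
    """Split a cluster's instances into replacement batches of
    batch_percent size each, for a gradual blue-green roll."""
    batch_size = max(1, math.ceil(len(instance_ids) * batch_percent / 100))
    return [instance_ids[i:i + batch_size]
            for i in range(0, len(instance_ids), batch_size)]

nodes = [f"i-{n:03d}" for n in range(10)]
print(roll_batches(nodes, 25))  # 4 batches of 3, 3, 3, and 1 instances
```

A smaller batch percentage means a slower but safer roll, since most of the cluster keeps serving traffic while each batch is drained and replaced.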
9. Visualization & Visibility
Spotinst Ocean provides a broad, rich dashboard where the user can drill down and gain better visibility into the clusters. The user receives visual insights into the cluster's status in terms of CPU/memory utilization, cost breakdown, running instance types, pod distribution, cluster health, and more.
10. All life-cycles in one place
By using Spotinst Ocean, the user can maintain On-Demand, Reserved, and Spot Instances in one pool, gaining extra elasticity in selecting which instance types make up the cluster. If the user has pre-purchased RIs in the resource pool, Spotinst Ocean will use those nodes before leveraging Spot Instances, ensuring that the user first utilizes what has already been paid for. In addition, to keep the cluster highly available, Spotinst Ocean falls back to On-Demand instances when the Spot market is unstable or unavailable.
Moreover, there is no need to manage multiple instance groups under the hood anymore: the user simply provides an annotation on the pods, and Spotinst Ocean places them according to the preferred life cycle.
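Annotation-driven placement can be modeled as below. Note that the annotation key and the lifecycle values used here are assumptions for illustration, not necessarily Ocean's actual annotation:

```python
# Illustrative lifecycle-aware placement. The annotation key below is
# an assumed example, not necessarily Ocean's real annotation.
LIFECYCLE_KEY = "spotinst.io/node-lifecycle"

def pick_node(pod_annotations, nodes):
    """Place the pod on a node whose lifecycle ('spot' or 'od') matches
    the pod's annotation; default to any node when none is set."""
    wanted = pod_annotations.get(LIFECYCLE_KEY)
    for node in nodes:
        if wanted is None or node["lifecycle"] == wanted:
            return node["name"]
    return None

nodes = [{"name": "n1", "lifecycle": "spot"},
         {"name": "n2", "lifecycle": "od"}]
print(pick_node({LIFECYCLE_KEY: "od"}, nodes))  # n2
```

This is what removes the need for separate instance groups: the lifecycle preference travels with the pod itself instead of being baked into the infrastructure layout.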
11. Auto-Healing & Node Healthiness
To ensure that the nodes in the cluster are healthy and functioning properly, Spotinst Ocean validates their health and initiates node replacements to prevent application downtime. The health checks can incorporate ALB/ELB health checks when the cluster resides in AWS. In addition, the Kubernetes master monitors the worker nodes in the cluster and assigns each a condition describing its status; the condition types are OutOfDisk, Ready, MemoryPressure, DiskPressure, and NetworkUnavailable. Spotinst Ocean checks node status every 30 seconds, and if a node is flagged as unhealthy, it is scheduled for replacement.
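The periodic health sweep can be sketched as a pass over node conditions: a node is flagged when it is not Ready or reports one of the pressure conditions listed above. A simplified illustrative model, not Ocean's actual check:

```python
# Simplified model of the periodic node-health sweep.
UNHEALTHY_CONDITIONS = {"OutOfDisk", "MemoryPressure",
                        "DiskPressure", "NetworkUnavailable"}

def nodes_to_replace(node_statuses):
    """Flag nodes that are not Ready or report a pressure condition.
    node_statuses maps node name -> set of active condition types."""
    flagged = []
    for name, conditions in node_statuses.items():
        if "Ready" not in conditions or conditions & UNHEALTHY_CONDITIONS:
            flagged.append(name)
    return flagged

status = {"node-a": {"Ready"},
          "node-b": {"Ready", "MemoryPressure"},
          "node-c": set()}
print(nodes_to_replace(status))  # ['node-b', 'node-c']
```

Running a sweep like this every 30 seconds and draining the flagged nodes is the essence of the auto-healing loop: unhealthy capacity is replaced before applications notice it.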
Since the DevOps sphere adopted a containerized state of mind, various challenges have emerged for engineering teams running production workloads: management and orchestration, cluster autoscaling, right-sizing pod resource requirements, cost efficiency, visibility, and many more.
In this blog post we covered Ocean's top 11 capabilities and how they have changed the way engineering teams run container workloads in the cloud while maintaining a cost-efficient mindset.
Would you like to know more? Contact us for a demo
Want to take it for a spin? Get Started now for a free trial!