Six months ago, we announced the general availability of Ocean, a Spotinst service that abstracts containers from VMs. Ocean is a “Nodeless,” or “Serverless,” engine for Kubernetes that lets companies focus on building applications and shipping containers rather than on managing and scaling the underlying infrastructure. Spotinst is thrilled to announce that Ocean is now available for Google Kubernetes Engine.
Spotinst Ocean is growing fast
Spotinst Ocean has been well received in the past 6 months:
- Ocean customer adoption increased 20x in Q1 2019
- Ocean manages over 100M resource hours per week
- Ocean manages some of the world’s largest production container deployments, consisting of thousands of pods.
- Ocean is being used across a wide range of industry verticals, including financial services, e-commerce, ad-tech, and more.
Serverless, “Nodeless” Containers
The ease of deployment brought by containers, combined with the benefits of “Serverless” infrastructure (e.g., utility billing, scale-out per request, scale-down to zero, and no infrastructure to manage), highlights why Ocean is seen as the nirvana that developers and DevOps engineers have sought for years.
Cost, Efficiency, and SLA
Cost optimization is one of the dominant pillars that Ocean is built upon. Ocean ensures that all containers are placed on the best possible mix of Preemptible instances, yielding savings of up to 80% over on-demand compute costs.
To optimize clusters for cost, availability, and performance, Ocean continuously monitors GCP VMs and predicts interruptions. When an interruption is predicted, Ocean automatically migrates containers to new Preemptible VMs or, if necessary, to On-Demand instances. Standing behind the Spotinst SLA for infrastructure availability is of the highest importance.
Kubernetes topology comes first
Spotinst Ocean recognizes activity in the Kubernetes control plane and provides pod-driven autoscaling to meet application demands. To ensure that all services, deployments, and pods have the exact capacity they require, Ocean scales GKE cluster capacity accordingly.
Ocean for Google Kubernetes Engine
Pod Driven Auto Scaling
Ocean scales Kubernetes clusters and pods based on predefined container requirements and helps reduce costs by automatically scaling down to zero instances when resources are not required.
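Ocean’s scaling decisions are driven by the resource requests declared on containers in standard Kubernetes manifests. A minimal sketch of such a Deployment (all names, images, and values here are illustrative, not taken from Ocean’s documentation):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                 # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/web:latest   # illustrative image
        resources:
          requests:
            cpu: "500m"       # Ocean sizes cluster capacity from these requests
            memory: "256Mi"
```

If the requested capacity cannot be satisfied by the current nodes, Ocean adds instances; when pods are removed and nodes sit idle, it scales back down.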
Abstracting the underlying infrastructure
Ocean provides an abstraction of GCP instance groups by utilizing instance types from different families and sizes. Multiple groups appear as one pool of compute resources. This increases the cluster’s performance and efficiency, even with specific requirements such as GPU and Persistent Volume Claims.
When pods are deployed, Ocean identifies their constraints, labels, and compute requirements, then adds optimal instances to the cluster.
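For example, a pod that requests a GPU using the standard Kubernetes resource syntax signals Ocean to provision a GPU-capable instance type (pod name and image below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job                 # illustrative name
spec:
  containers:
  - name: trainer
    image: gcr.io/my-project/trainer:latest   # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1     # standard GPU resource; satisfying it requires a GPU-capable VM
```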
Heterogeneous Instance Groups
With Ocean, it is possible to mix On-Demand and Preemptible instances in one pool by adding a pod annotation that specifies the desired lifecycle (the default is Preemptible). Ocean will then place each pod according to its lifecycle preference.
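A sketch of such a pod, pinned to On-Demand capacity. The annotation key and value below are assumptions for illustration only, since the original post does not spell them out; consult the Ocean documentation for the exact key:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-pod                    # illustrative name
  annotations:
    spotinst.io/node-lifecycle: od      # assumed key/value; intent: run on On-Demand only
spec:
  containers:
  - name: app
    image: gcr.io/my-project/app:latest # illustrative image
```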
Enhanced monitoring & visibility
Ocean is also a powerful prediction and dashboarding tool that provides deep visibility into Kubernetes clusters. Exposing a powerful view into the cost of running containerized workloads, the Ocean dashboard displays an accurate and actionable view of container utilization and spend.
Ocean now supports Google Kubernetes Engine (GKE), and is available in all Google Cloud regions.
All you need to do is connect your GKE cluster in a few simple steps, and Ocean will manage all the underlying infrastructure.
Take Ocean for a spin today.