Google Kubernetes Engine

Many public clouds are investing heavily in compute services that utilize containerization technologies, often referred to as containers as a service (CaaS). These technologies solve entire classes of problems by abstracting away underlying virtual machines and networking components, allowing developers to build and deploy applications inside Linux containers.

Container technologies such as Docker let developers manage services at a layer between the application and the underlying system, packaging the entire runtime environment: the application itself, its external dependencies, and the required operating system components.
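To make this concrete, the following is a minimal sketch using the Docker SDK for Python (docker-py) to build and run such an image. The image tag gcr.io/my-project/hello-app:v1, the local build context, and the port mapping are assumptions for illustration; the same packaging step is more commonly performed with the docker command-line tool.

import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Build an image from a Dockerfile in the current directory (assumed to exist).
# The resulting image bundles the application, its dependencies, and the
# OS-level components it needs.
image, build_logs = client.images.build(path=".", tag="gcr.io/my-project/hello-app:v1")

# Run the packaged application locally, mapping container port 8080 to the host.
container = client.containers.run(image.id, detach=True, ports={"8080/tcp": 8080})
print(container.id)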

For Google Cloud, CaaS takes the form of Google Kubernetes Engine (GKE). Built on top of the open source Kubernetes project, GKE allows developers to package and deploy applications as Docker containers while Google manages the underlying VM clusters and the Kubernetes installation. This level of abstraction has quickly grown in popularity because it preserves many of the benefits of IaaS while providing much of the convenience found at higher levels of abstraction.
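As a rough illustration of what deploying a packaged application to a GKE cluster can look like programmatically, the sketch below uses the official Kubernetes Python client to create a Deployment. It assumes cluster credentials have already been fetched into the active kubeconfig (for example with gcloud container clusters get-credentials); the image name, labels, and replica count are illustrative assumptions.

from kubernetes import client, config

# Read the active kubeconfig context, which points at the GKE cluster.
config.load_kube_config()

# Describe the containerized application to run.
container = client.V1Container(
    name="hello-app",
    image="gcr.io/my-project/hello-app:v1",
    ports=[client.V1ContainerPort(container_port=8080)],
)

# Pod template and Deployment wrapping three replicas of the container.
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "hello-app"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
        template=template,
    ),
)

# Submit the Deployment to the cluster's default namespace.
apps_v1 = client.AppsV1Api()
apps_v1.create_namespaced_deployment(namespace="default", body=deployment)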

GKE started life as Container Engine and was rebranded as Kubernetes Engine in 2017, solidifying its core reliance on Kubernetes. Around the same time, the platform matured considerably, adding functionality and deeper integration with other GCP services. Some of the key features and GCP services that support GKE are:

  • Load balancing for Compute Engine instances 
  • Node pools to organize groups of nodes within a cluster for added flexibility 
  • Automatic scaling of cluster nodes (illustrated in the sketch following this list) 
  • Automatic upgrades for cluster software 
  • Health checks and automatic repair to keep nodes healthy and available 
  • Logging and monitoring with Stackdriver 
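As a rough sketch of how several of these features come together, the example below uses the google-cloud-container Python client to request a cluster whose node pool has autoscaling, auto-repair, and auto-upgrade enabled. The project, zone, machine type, and node counts are assumptions for illustration; the same cluster is more commonly created through the Cloud Console or gcloud.

from google.cloud import container_v1

cluster_client = container_v1.ClusterManagerClient()

# A node pool with autoscaling, auto-repair, and auto-upgrade enabled,
# mirroring the GKE capabilities listed above.
node_pool = container_v1.NodePool(
    name="default-pool",
    initial_node_count=3,
    config=container_v1.NodeConfig(machine_type="e2-medium"),
    autoscaling=container_v1.NodePoolAutoscaling(
        enabled=True, min_node_count=3, max_node_count=10
    ),
    management=container_v1.NodeManagement(auto_repair=True, auto_upgrade=True),
)

cluster = container_v1.Cluster(name="demo-cluster", node_pools=[node_pool])

# Request cluster creation in an assumed project and zone; this returns a
# long-running operation that can be polled for completion.
operation = cluster_client.create_cluster(
    parent="projects/my-project/locations/us-central1-a",
    cluster=cluster,
)
print(operation.status)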

The deep integration between GKE and the rest of the GCP ecosystem makes it a very compelling managed environment for containerized workloads.