Introduction to Google Kubernetes Engine


A common problem with production applications is "it worked on my machine", which is usually caused by version inconsistencies between environments.

Kubernetes Engine is Google Cloud Platform's fully managed Kubernetes service, which originated on a quest to solve that problem and more. It is similar to Amazon Elastic Kubernetes Service and Azure Kubernetes Service.

In this article, you will learn how to utilise Kubernetes Engine so you can focus more on your product. We will cover the following concepts:

  • What Kubernetes Engine is and what it does
  • Packaging your application for Kubernetes Engine
  • Clusters
  • Managing clusters
  • Cluster availability and scaling
  • Workloads
  • Auto-scaling workloads
  • Logging and monitoring

A Short Story on GKE

It all started with containers. With containerization, you can package code in a way that is highly portable, makes very efficient use of resources, and stays consistent across different environments.

Manually managing one container might not be much of a headache, but you need a better way once you have many. Kubernetes provides a portable and extensible open-source platform for managing containerized applications.

When managing your containers with Kubernetes yourself, you can run into problems with installation, provisioning, upgrading, scaling, and more. Many cloud providers have services that help with these challenges so you can focus more on your software instead. Kubernetes Engine is GCP's fully managed container offering: Google operates it for you while it remains under your administrative control.

GKE helps you manage Kubernetes by providing a managed environment for deploying, scaling, and managing your containerized applications, along with advanced cluster management features including cluster creation, load balancing, autoscaling, auto-repair, monitoring, and logging.

Now that you understand what GKE is, let's see how to use it.

Package your application for GKE

GKE works with containerized applications, so the first step is to package your application into a container image; one way to do this is with Docker. You can then move your source code into a source code repository like GitHub.
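As an illustration, a minimal Dockerfile for a hypothetical Node.js service might look like this (the base image, port, and file names are placeholders, not from the article):

```dockerfile
# Hypothetical example: containerize a simple Node.js app
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Building this image with `docker build` produces the containerized application that GKE will run.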


Next, you create your cluster. A Kubernetes cluster consists of a control plane and worker nodes; with GKE, the cluster is created through GKE rather than by installing Kubernetes yourself, because GKE provisions and manages the control plane for you. You can create your cluster using the Cloud Console UI or the gcloud command-line tool.
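A sketch of cluster creation with the gcloud CLI (the cluster name and zone are placeholders; this assumes an authenticated gcloud setup with a project configured):

```
# Create a zonal cluster with three nodes
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --num-nodes 3

# Fetch credentials so kubectl can talk to the new cluster
gcloud container clusters get-credentials my-cluster --zone us-central1-a
```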

Manage GKE clusters

Managing your clusters involves scaling, starting, stopping, autoscaling, and inspecting your applications, among other tasks. This is broadly known as orchestration.

You can orchestrate your clusters directly from the Google Cloud console or using the kubectl tool.


kubectl is a command-line tool for managing Kubernetes clusters; it works by running your commands against a cluster's API server. kubectl is not tied to GKE, and it comes preinstalled in Cloud Shell.
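A few common kubectl commands, as a sketch (the resource names are hypothetical):

```
kubectl get pods                        # list pods in the current namespace
kubectl describe deployment web-app     # inspect a deployment's state and events
kubectl apply -f deployment.yaml        # create or update resources from a manifest
kubectl logs web-app-7d4f8-abcde        # read logs from a specific pod
```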

Cluster Availability and Scaling

GKE provides two types of clusters you can choose from to manage availability: regional clusters and zonal clusters.

Regional clusters have multiple replicas of the control plane, deployed across several zones within a region.

Zonal clusters have just one control plane, deployed to your selected zone.

The difference is that regional clusters are highly available, but deploying changes takes longer since updates must roll out across all control-plane replicas, so depending on which replica serves them, there is a possibility users reach different versions of your application while you are making updates. Zonal clusters don't provide as much availability, so they should be chosen when flexibility is more important than availability.
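The choice between the two comes down to a single flag at creation time. A sketch (cluster names, region, and zone are placeholders):

```
# Zonal cluster: one control plane in a single zone
gcloud container clusters create my-zonal-cluster --zone us-central1-a

# Regional cluster: control plane replicated across the region's zones
gcloud container clusters create my-regional-cluster --region us-central1
```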


Workloads

Kubernetes categorizes the different types of containerized applications as workloads. In other words, workloads generally refers to containerized applications.

Pods are the smallest unit of deployment; a standalone pod is basically an unmanaged process, with nothing to restart or reschedule it if it fails.

You will mostly interact with Kubernetes through a Kubernetes controller, which introduces higher-level functionality related to the management and lifecycle of pods.

There are different types of Kubernetes controllers, including deployments, stateful sets, and daemon sets. The controller type determines the type of the workload.

Deployments create and manage identical pods called replicas, based on a pod template. If a pod stops or becomes unresponsive, the deployment's controller will replace it. Deployments are mostly used for stateless applications.
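A minimal Deployment manifest illustrating this — three identical replicas created from one pod template (the names and image are placeholders for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                 # the controller keeps three identical pods running
  selector:
    matchLabels:
      app: web-app
  template:                   # pod template the replicas are created from
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` creates the Deployment; if any replica stops, the controller replaces it.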

Stateful sets create and manage pods that are expected to have a level of persistence. They allow for stable network identifiers and persistent storage.

Daemon sets are useful for background tasks such as monitoring by creating one pod per node.

Workloads can be deployed to specific nodes by using node selectors, node affinity selectors, and resource requirements.
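For example, a pod spec fragment that pins pods to nodes carrying a particular label and declares resource requests the scheduler uses when placing the pod (the label and values are hypothetical):

```yaml
spec:
  nodeSelector:
    disktype: ssd            # hypothetical node label; only matching nodes are eligible
  containers:
    - name: web-app
      image: nginx:1.25
      resources:
        requests:            # the scheduler picks a node with this much free capacity
          cpu: "250m"
          memory: "256Mi"
```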

Autoscaling workloads

When using a Kubernetes controller such as a deployment, we can specify the number of pod replicas that we want to deploy. If we want to increase or decrease that number, all we have to do is manually change it and deploy the change.
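Manually, that change can also be made in one command; a sketch with a hypothetical deployment name:

```
kubectl scale deployment web-app --replicas=5
```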

Instead of manually scaling your workloads, GKE provides pod autoscalers for managing them. There are two types: the horizontal pod autoscaler and the vertical pod autoscaler.

Horizontal pod autoscaler

The horizontal pod autoscaler manages an existing set of pods by monitoring utilization metrics and adjusting the replica count as needed.
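A sketch of a HorizontalPodAutoscaler targeting the hypothetical Deployment above, scaling on CPU utilization (all names and thresholds are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:            # the workload whose replica count is managed
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # add replicas when average CPU exceeds 60%
```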

Vertical pod autoscaler

Kubernetes also provides a resource for vertical pod autoscaling. Vertical scaling allows the autoscaler to manage CPU and memory limits for a pod. When creating a vertical pod autoscaler, setting the update mode to Auto allows the autoscaler to adjust the resource requirements of a pod by removing existing pods and creating new ones.

To avoid experiencing an excessive number of pod restarts during vertical pod autoscaling, you can set a pod disruption budget.
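Both pieces can be sketched as manifests (names are placeholders; the VerticalPodAutoscaler resource assumes the VPA components are installed, which GKE provides when vertical pod autoscaling is enabled on the cluster):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  updatePolicy:
    updateMode: "Auto"   # allows evicting pods and recreating them with new requests
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-app-pdb
spec:
  minAvailable: 2        # never evict below two running pods at a time
  selector:
    matchLabels:
      app: web-app
```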

A Kubernetes cluster consists of nodes, and nodes have finite resources. This means your deployments could use up all of the available resources, at which point you need to add more nodes to make room.

Cluster autoscaler


GKE provides a mechanism for dynamically adding and removing nodes to and from a pool to match the resource requests of your pods. This is called the cluster autoscaler. If you deploy your workload and pods remain unscheduled due to a lack of resources, the cluster autoscaler can dynamically provision more nodes. When it is time to get rid of nodes, the autoscaler performs connection draining, allowing a window of ten minutes for pending connections, after which the node is removed.

To enable cluster autoscaling, you only need to specify a minimum and maximum number of nodes for a zonal deployment; if your cluster is regional, you specify the minimum and maximum number of nodes per zone. Keep in mind that for deployments with regional availability, the minimum and maximum numbers you set apply to each of your zones. For example, with a cluster deployed across three zones, if you declare a minimum of three nodes and a maximum of six nodes per zone, the cluster will have a minimum of nine nodes and a maximum of eighteen nodes in total.
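A sketch of enabling the cluster autoscaler at creation time (cluster name, zone, and bounds are placeholders):

```
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --enable-autoscaling \
    --min-nodes 3 \
    --max-nodes 6
```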

The cluster autoscaler works by monitoring resource requests. When there aren't enough nodes in the pool to fulfill the demand, the autoscaler adds new nodes. Once the cluster can manage the workload with fewer nodes, the autoscaler drains connections to the surplus nodes and then terminates them.

Logging and monitoring

Application logs help you understand what's happening inside your application. Logs are undoubtedly important for production applications, since you can't directly observe every running instance of your application. They are particularly useful for debugging problems and monitoring cluster activity. Kubernetes follows the standard established by Docker, the simplest and most widely adopted logging method for containerized applications: the application writes to the standard output and standard error streams.

This native functionality provided by a container engine or runtime is usually not enough for a complete logging solution. For instance, if a container crashes, a pod is evicted, or a node dies, you'll usually still want to access your application's logs. For such reasons, logs should have separate storage and a lifecycle independent of nodes, pods, or containers.

Cluster-level logging requires a separate backend to store, analyze, and query logs. Kubernetes provides no native storage solution for log data, but provides methods to integrate many existing logging solutions into your Kubernetes cluster.

GKE clusters natively integrate with Cloud Logging and Cloud Monitoring: both are enabled by default when you create a GKE cluster, so you have cluster-level logging implemented out of the box. You get a monitoring dashboard specifically tailored for Kubernetes, and your logs are sent to Cloud Logging's datastore, where they are indexed for both search and visualization within the Cloud Logs Viewer.
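For example, recent container logs can be queried from the command line with the gcloud CLI (the cluster name is a placeholder; this assumes Cloud Logging is enabled on the project):

```
gcloud logging read \
    'resource.type="k8s_container" AND resource.labels.cluster_name="my-cluster"' \
    --limit 10
```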


Google Kubernetes Engine offers a fast and easy solution for deploying a Kubernetes cluster to production in a matter of minutes with a few clicks.

With GKE, your containerized applications and services run in a fully managed Kubernetes environment. This means you get to focus more on your code and Google manages the rest for you while making it easy to gain insight into how your application is running.

Thank you for reading, I hope you learned a lot already. I'm Azeez Lukman and here's a developer's journey building something awesome every day. Please let's meet on Twitter, LinkedIn, GitHub and anywhere else @robogeeek95

