
How to provision and deploy a Google Kubernetes Engine cluster


Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It facilitates both declarative configuration and automation.

In plain English, Red Hat technology evangelist Gordon Haff explains Kubernetes as "an open-source platform that automates Linux container operations. It eliminates many of the manual processes involved in deploying and scaling containerized applications. In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters."

Containerization is simply a way to package code so that it is highly portable and makes very efficient use of resources.

Google Kubernetes Engine (GKE) is Google's fully managed Kubernetes service that lets you run these containerized applications on Google Cloud. Google manages the cluster infrastructure for you while it stays under your administrative control.

This article walks you through creating a Google Kubernetes Engine cluster that contains several containers, each running a web server.

Overview

This guide is designed to help you get started with Google Kubernetes Engine (GKE). We will use Google Cloud Shell to set up the GKE cluster and host a multi-container application. The guide walks through the steps to:

  • Enable the needed APIs
  • Create a Kubernetes Engine cluster
  • Configure and resize the cluster
  • Run and deploy workloads in containers
  • Deploy applications

Enable the needed APIs

There are two APIs you have to enable before you can provision a Kubernetes Engine cluster:

  • Kubernetes Engine API
  • Container Registry API

You need to enable them first if they are not already enabled. To do this, follow these steps:

  • In the GCP Console, open the Navigation menu, then click APIs & Services.
  • Search for each of the APIs, click Enable APIs and Services at the top, and enable them for your project if they are not already enabled.
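
If you prefer the command line, you can also enable both APIs from Cloud Shell with the gcloud tool. This is a quick alternative to the console steps above, assuming your project is already set as the active gcloud project:

gcloud services enable container.googleapis.com containerregistry.googleapis.com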

Create a Kubernetes Engine cluster

A Kubernetes cluster is a set of nodes that run containerized applications. With Kubernetes clusters, your containers run across multiple machines and environments, whether virtual, physical, cloud-based, or on-premises. In our case we will run our Kubernetes Engine cluster in a cloud-based environment, specifically on GCP.

A cluster consists of at least one cluster master machine and multiple worker machines called nodes. A node is a Compute Engine virtual machine (VM) instance that runs the Kubernetes processes necessary to make it part of the cluster. You deploy applications to clusters, and the applications run on the nodes.

Before you create a cluster, let's learn about Cloud Shell, as this is the tool we will use.

Cloud Shell

Cloud Shell is a shell environment for managing resources hosted on Google Cloud. Cloud Shell comes pre-installed with the gcloud command-line tool and kubectl command-line tool. The gcloud tool provides the primary command-line interface for Google Cloud, and kubectl provides the primary command-line interface for running commands against Kubernetes clusters.

To launch Cloud Shell, perform the following steps:

  • From the upper-right corner of the console, click Activate Cloud Shell.
[Image: Activate Cloud Shell]
  • Click Continue.
[Image: Cloud Shell ready]

This will launch Google Cloud Shell. We are now ready to launch a cluster and deploy an application.
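
Once Cloud Shell is open, you can confirm that both tools are pre-installed by printing their versions:

gcloud --version
kubectl version --client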

Create the cluster

gcloud container clusters create firstkubapp --zone us-central1-a --num-nodes 2

Using the gcloud command-line tool, run the command above to create a Kubernetes cluster. The cluster is named firstkubapp and is configured to run 2 nodes in the us-central1-a zone.
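
Cloud Shell normally configures kubectl credentials for the new cluster automatically, but if kubectl cannot reach the cluster you can fetch the credentials explicitly (the cluster name and zone here match the create command above):

gcloud container clusters get-credentials firstkubapp --zone us-central1-a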

View your Kubernetes Engine clusters

After the command finishes, you can view your cluster in the console under Kubernetes Engine > Clusters, and the cluster's nodes under Compute Engine > VM instances.
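
You can also list the cluster and its nodes directly from Cloud Shell:

gcloud container clusters list
kubectl get nodes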

Resize your Kubernetes Engine cluster

You can resize a cluster to increase or decrease the number of nodes it contains using the resize command, specifying the node pool, the zone, and the number of nodes you want to scale to.

To increase the size of your cluster

gcloud container clusters resize firstkubapp --node-pool default-pool \
--num-nodes 4 --zone us-central1-a

To decrease the size of your cluster

gcloud container clusters resize firstkubapp --node-pool default-pool \
--num-nodes 2 --zone us-central1-a
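
After either resize, you can confirm the new node count from Cloud Shell:

kubectl get nodes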

Deploy your container

You can now deploy a containerized application to the cluster you have just created. We will create a simple web app instance with nginx. Nginx is a popular open-source web server used for web serving, reverse proxying, caching, load balancing, media streaming, and more.

Launch an nginx container instance

From Cloud Shell, launch a single instance of the nginx container by running the command

kubectl create deploy nginx --image=nginx:1.17.10

In Kubernetes, all containers run in pods. This use of the kubectl create command causes Kubernetes to create a deployment of a single pod in the cluster you created. The pod contains the nginx container.

A Kubernetes deployment keeps a given number of pods up and running even in the event of failures among the nodes on which they run. In this command, you launched the default number of pods, which is 1.
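
You can check the state of the deployment and how many of its pods are ready with:

kubectl get deployment nginx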

View the pod running the nginx container

From Cloud Shell, run this

kubectl get pods

You should see a single pod, with a name beginning with nginx, running on your cluster.
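
The pod's full name includes a generated suffix because it is managed by the deployment. To also see which node the pod was scheduled on, you can use the wide output format:

kubectl get pods -o wide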

Expose the nginx container to the Internet

You have deployed your application but you need to expose it to the internet so that users can access it. You can expose your application by creating a Service, a Kubernetes resource that exposes your application to external traffic.

kubectl expose deployment nginx --port 80 --type LoadBalancer
  • The --type LoadBalancer flag creates a Compute Engine load balancer for your container.
  • The --port flag exposes port 80 of the container to the internet.

The command created a service and an external load balancer with a public IP address attached to it. The IP address remains the same for the life of the service. Any network traffic to that public IP address is routed to pods behind the service: in this case, the nginx pod.

View the new service:

kubectl get services

You can use the displayed external IP address to test and contact the nginx container remotely.

It may take a few seconds before the External-IP field is populated for your service. This is normal. Just re-run the kubectl get services command every few seconds until the field is populated.
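
Alternatively, you can watch the service until the external IP appears, then press Ctrl+C to stop watching:

kubectl get services --watch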

Open a new web browser tab and paste your service's external IP address into the address bar. The default Nginx welcome page is displayed.

[Image: Welcome to nginx]
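
You can also test the endpoint directly from Cloud Shell with curl. Here EXTERNAL_IP is a placeholder for the address shown by kubectl get services:

curl http://EXTERNAL_IP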

Scale up the number of pods running on your service

kubectl scale deployment nginx --replicas 3

Scaling up a deployment is useful when you want to increase the resources available to an application that is becoming more popular.

Confirm that Kubernetes has updated the number of pods

kubectl get pods
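
If you want to confirm that the load balancer is now routing traffic to all three pods, you can list the endpoints behind the service (the service shares the nginx name of the deployment it exposes):

kubectl get endpoints nginx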

And that's it: you have just provisioned a Kubernetes cluster in Google Kubernetes Engine and deployed an application to it.

Conclusion

With this headstart guide to Google Kubernetes Engine, you have configured your first GKE cluster, fully managed for you by Google. You populated the cluster with pods containing an application, resized the cluster, exposed the application to the internet, and scaled it up.
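
If you were only following along as an exercise, you may want to clean up afterwards to avoid ongoing charges. A minimal cleanup would be to delete the service (which removes the load balancer) and then the cluster itself:

kubectl delete service nginx
gcloud container clusters delete firstkubapp --zone us-central1-a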

Thank you for reading. I'm Azeez Lukman, and here's a developer's journey building something awesome every day. Let's meet on Twitter, LinkedIn, GitHub, and anywhere else @robogeeek95.

