
What is Kubernetes?

Updated: Jun 13, 2020



Kubernetes is an open-source container orchestration platform originally developed by Google. Developers use it to deploy, scale, and manage containerized applications across a cluster of machines.


In this article, I will dive into why Kubernetes was developed and how it differs from ordinary compute resources. I will also give a brief overview of containers and walk through setting up a Kubernetes environment on Google Cloud.


In the article on Cloud Computing, which you can access here, we covered the building blocks of computing resources, including what a server is and how it relates to computing power. Now that we have an idea of servers and virtual machines (for cloud computing), the next port of call is how we use these resources to deploy our applications.


Compute Clusters

A cluster is a group of servers that pool their compute resources to deploy our applications. Clusters came into existence because, back in the day, developers deployed their applications on a single server, and even if that server had the best processing power and storage capacity, it was far too risky to rely on one machine.


In a cluster, each server retains ownership and management of its own devices and holds a copy of the image that runs the cluster. In this way, the servers work together to improve data protection and maintain the consistency of the cluster at all times.


For example, you might need to host a web application that will be accessed by an entire country. Regardless of the size of the country, that is a lot of traffic to deal with. One challenge is that you cannot predict when people will access the application, so you need to provision sufficient resources ahead of time.


To accomplish this, you would acquire a number of computers with a server configuration, all configured identically.


These computers are then set up to work together. A component called a load balancer receives all incoming requests and distributes them across the available servers in the cluster. If any server becomes unresponsive, it is simply skipped over.


Pretty neat, as this helps us distribute load among our servers.


A major downside of these clusters is that they are quite restrictive: every server in the cluster runs the same image, which means every application deployed on it must share the same configuration. This is a major issue, as modern applications are made up of various services that perform different functions and require different configurations.





Kubernetes Introduces Flexibility

Kubernetes helps solve this problem by providing clusters of machines/servers that can run containers.


Containers are a highly portable, lightweight means of distributing and scaling our applications without replicating the guest OS. If you need a refresher, please see the previous article on containers here.


The configuration of a container is independent of the cluster image on the host running it.


Kubernetes gives developers the ability to effortlessly develop and deploy complex applications made up of multiple services. It does this using objects.


Set Up a GKE Cluster

In this section, I will walk you through how to set up a Kubernetes cluster on Google Cloud Platform (GCP).


Kubernetes Engine is GCP's managed Kubernetes service. It lets users create and maintain their own Kubernetes clusters without having to manage the Kubernetes control plane themselves.


To make use of Kubernetes Engine, you have to log in to your GCP account. If you don't have one, you can create it here. You also have to enable billing, but if your account is new you will get a complimentary $300 worth of computing credit (as of the time of writing), so you can use that.


A Kubernetes cluster can be created on GCP via either the Cloud Console GUI or the command line. For this post, we will use the Cloud Console GUI.
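For reference, the command-line equivalent would look something like the following; the cluster name, node count, and zone here are illustrative choices, not required values.

gcloud container clusters create cluster-1 --num-nodes=3 --zone=us-central1-a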


The Kubernetes Engine service can be found in the drop-down behind the hamburger menu on the left-hand side of the home page.



After selecting the Kubernetes Engine icon, its home page opens up. Here you will see various options such as clusters, workloads, and applications. We want to create a cluster, so we will select Clusters.



The next page shows the various options available when creating a cluster. Here we specify the name of our cluster. The Cluster basics tab lets us choose where we want our VMs to be located and whether the cluster is zonal or regional. We can also specify the number of nodes for the cluster by clicking the default-pool tab.



When you are done specifying all the necessary cluster parameters, click Create to create your cluster. Creating a cluster may take a few minutes; once it has been created, you will see a green check mark signifying that your cluster was successfully created.


From the clusters page, you can edit, delete, and connect to a cluster. You can click Connect to receive a gcloud command to connect to the cluster from the command line.


There you have it: we have just created a Kubernetes cluster on GCP that is ready to deploy our applications.


Connect to an Existing Cluster

Now that you have a GKE cluster, you are ready to connect. You can do that from the GCP command line, which is called Cloud Shell.


To access Cloud Shell, click on the Cloud Shell button at the top right of the GCP console.


When you click that button, the Cloud Shell will open up at the bottom.


Before you can connect to the cluster called cluster-1, you will need to get its authentication credentials. You can do that by running the following command from Cloud Shell.


gcloud container clusters get-credentials cluster-1
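If your project has no default zone configured, you may need to add a --zone flag to the command above. To confirm that kubectl is now pointed at the cluster, you can list its nodes:

kubectl get nodes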
 

Kubernetes Objects

The Kubernetes ecosystem is made of various objects. We will talk about three of them in this article.


Pods

These are the smallest deployable units in Kubernetes: single instances of a running process in a cluster. A pod contains at least one container.


Pods that contain more than one container share resources such as networking and storage, and they share a single IP address.


A pod allows its containers to behave as if they are running on an isolated VM, sharing common storage, an IP address, and a set of ports.


Kubernetes can treat a set of identical pods as a single entity managed by a Deployment. This helps improve the availability and redundancy of the applications running in those pods.


The simplest way of running a pod is the kubectl run command. Recall the Docker container which we set up in the previous article here. That container image was built and saved to Google Container Registry in a project. Here is the Dockerfile again.
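Since the original file lives in the previous article, here is a minimal sketch of what such a Dockerfile might look like, assuming a simple Python web app listening on port 8080; the base image and file names are illustrative, not the exact contents of that file.

# start from a small Python base image (illustrative choice)
FROM python:3.8-slim
WORKDIR /app
# copy the application code and install its dependencies
COPY . .
RUN pip install -r requirements.txt
# the app is assumed to listen on port 8080
EXPOSE 8080
CMD ["python", "app.py"]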


You would run it as a pod using the following command from Cloud Shell.


kubectl run my-image --image=gcr.io/[PROJECT-ID]/my_image --port=8080
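Note that Kubernetes resource names must be valid DNS labels, so the pod is named my-image with a hyphen even though the image path keeps its underscore. You can confirm the pod is up with:

kubectl get pods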

Pods can also be defined using a yaml template. We will go more into that in another article.
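As a quick taste, a minimal pod manifest equivalent to the kubectl run command above might look like this; it is an illustrative sketch, not a file from this project.

apiVersion: v1
kind: Pod
metadata:
  name: my-image
spec:
  containers:
  - name: my-image
    # same image and port as the kubectl run command above
    image: gcr.io/[PROJECT-ID]/my_image
    ports:
    - containerPort: 8080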


Deployments

A deployment describes the desired state for your infrastructure. This could be the state of a pod or a ReplicaSet.


The idea behind this is that pods are ephemeral: they can become unhealthy and be terminated at any time, and they are not intended to be treated as durable entities. As a result, pods won't survive node failures or other disruptions such as resource exhaustion or node maintenance.


Creating a deployment shields you against the ephemeral nature of pods and increases redundancy by shifting load to other pods. Thanks to Kubernetes' self-healing nature, replacement pods in the deployment start up almost immediately, without the end user noticing any downtime. Deployments also support auto-scaling by increasing or reducing the number of running pods.


Deployments create workloads on GKE. To view your deployments, go to the Workloads sub-menu under Kubernetes Engine. If you don't have any deployments, you will get a screen that looks similar to the following.



For example, you could create a deployment using the following command:

kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
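You can also confirm the deployment from Cloud Shell before heading to the console:

kubectl get deployments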

Afterward, you could visit the Workloads section of Kubernetes Engine, and you would expect to see the following.



You can see from the information displayed that one pod was configured for this deployment. You can also see that it was deployed onto the cluster-1 cluster.
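If you wanted more than one pod backing the deployment, you could scale it manually or hand the decision to an autoscaler; the replica count and CPU target below are illustrative choices, not recommendations.

kubectl scale deployment hello-server --replicas=3
kubectl autoscale deployment hello-server --min=1 --max=5 --cpu-percent=80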


Recall that the deployment is the desired state. That state is defined in a yaml file with the following structure:
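The exact file varies by application, but a minimal sketch for the hello-server deployment above might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  # desired number of identical pods
  replicas: 1
  selector:
    matchLabels:
      app: hello-server
  # template for the pods this deployment manages
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080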


Deployments are a type of controller object. We will go more into those in another article.


Services

These are objects that provide a stable IP address and API endpoint through which applications can reach the pods running a particular application.


Services also act as load balancers that direct traffic down to a collection of pods. Services are essential because pods are ephemeral and can be terminated at any time; a replacement pod comes up with a different IP address, so anything that addressed the old pod directly would lose access. A service provides a level of indirection: a stable IP address and API endpoint that admits traffic to whichever pods currently back the application, so long as access is granted.


For example, before our users can access the deployment we created above, we need to set up a service. Consider the screen below, which we access by clicking on the name of the deployment.


You can create a service by running the following command:


kubectl expose deployment hello-server --type=LoadBalancer --port 8080
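Provisioning the load balancer's external IP can take a minute or two; until then, it shows as pending. You can watch for it from Cloud Shell:

kubectl get service hello-server --watch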

By running the snippet above and refreshing the information page of that deployment, you will see something new, as shown below.


At this point, you can browse the Services & Ingress sub-menu, and you will see the following.


At this point, you can see the IP address and port number, which you can use to access your deployment. You can also get that information using the following command.


kubectl get service

The output of that command looks similar to the following.
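The column layout is standard kubectl output; the values below are illustrative placeholders, and yours will differ.

NAME           TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)          AGE
hello-server   LoadBalancer   10.0.12.34   35.200.10.11   8080:30123/TCP   2m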


At this point, you have sufficient information to browse to your deployment. Note that the external IP is listed under EXTERNAL-IP in the output.
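For example, from Cloud Shell you could fetch the page with curl, substituting your own external IP for the placeholder:

curl http://[EXTERNAL-IP]:8080

The hello-app sample responds with a short greeting that includes its version and the hostname of the serving pod.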


 

Clean Up

You must clean up after yourself; otherwise, you will find yourself spending a lot of money.


If you are done working with your cluster, you should delete it using the following command.


gcloud container clusters delete cluster-1

You already know how to recreate the cluster, so nothing is lost by cleaning up.

 

If you enjoyed the article and haven't subscribed, please do so to stay informed and receive our monthly newsletter.
