You will learn:

  • Communicate with the cluster.
  • Create and cancel a simple application.
  • Check the status of the cluster and of the application.
  • Publish an application port using a Service object.

Create a work environment

If you have Docker Desktop, you can enable a local Kubernetes cluster directly in its settings. If you have VirtualBox, use Minikube instead.

Note that a Kubernetes cluster running in the background can slow down your computer.
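
For Minikube users, a minimal sketch of starting and stopping the local cluster might look like this (the --driver flag and the virtualbox driver name are assumptions; check the documentation of your Minikube version):

# Start a local single-node cluster using the VirtualBox driver
minikube start --driver=virtualbox

# Stop the cluster when you no longer need it, to free up resources
minikube stop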

You will communicate with the cluster using the kubectl command. Kubectl talks over the network to the API server of the selected Kubernetes cluster, so it must have credentials that authorize it to control the cluster.

First, verify that the client works and can connect to the cluster. Which cluster are we communicating with?

kubectl cluster-info
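
If you want to double-check which context kubectl is using and whether the cluster's nodes are reachable, these standard commands may help (they are not required by the rest of this guide):

# Show the context (cluster + credentials) kubectl is currently using
kubectl config current-context

# List the nodes of the cluster and their status
kubectl get nodes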

Deploy the application using the Deployment object

To begin with, we will deploy a simple application that echoes the requests it receives. The application runs on a specific port inside its container and responds to requests over HTTP.

The Kubernetes cluster takes care of the tasks related to running the application. The application can scale and restart on any node as needed.

Special Kubernetes objects take care of the application. They express the desired and the current state of the application. We create objects either directly with the kubectl client or from configuration files in YAML format.

When deploying the first application, a Kubernetes object of type Deployment is created, which takes care of running the application. Our Deployment object will be called hello-kube.

You can read more about the Deployment object in the tutorial.

We will create the first Kubernetes application with the command:

kubectl create deployment hello-kube --image=k8s.gcr.io/echoserver:1.10
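
For completeness, here is a minimal sketch of the equivalent YAML manifest applied with kubectl apply. The label app: hello-kube and the container name echoserver are assumptions that mirror what kubectl create deployment typically generates:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kube          # the selector must match the pod template labels
  template:
    metadata:
      labels:
        app: hello-kube
    spec:
      containers:
      - name: echoserver       # container name assumed; any name works
        image: k8s.gcr.io/echoserver:1.10
EOF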

Cluster status check

The state of objects in a cluster changes over time. The system assigns a suitable node, pulls the container image, and starts a new process according to our requirements.

Use the describe command to learn more about the deployment process:

kubectl describe deployment/hello-kube
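
If you simply want to wait until the rollout has finished, the rollout status subcommand can be used as well:

# Block until the Deployment's pods are up to date and available
kubectl rollout status deployment/hello-kube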

If the deployment was successful, we can view the container's startup logs using the logs command:

kubectl logs deployment/hello-kube

Check the new state of the cluster and see the objects that were created:

kubectl get all

You should see something like this:

NAME READY STATUS RESTARTS AGE
pod/hello-kube-5d764c75f7-ffv6l 1/1 Running 0 11m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d16h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hello-kube 1/1 1 1 11m

NAME DESIRED CURRENT READY AGE
replicaset.apps/hello-kube-5d764c75f7 1 1 1 11m

You will see that, in addition to hello-kube, there are several other objects. Each object has its own name and kind.
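
To see which object kinds your cluster knows about, kubectl can list them:

# List all object kinds (and their short names) supported by the cluster
kubectl api-resources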

Pod

The basic unit of a cloud application in the Kubernetes system is an object of type Pod. A Pod is made up of one or more containers and shared volumes. It is guaranteed that a single Pod runs on exactly one node. This allows its containers to easily communicate with each other using local ports and shared volumes.

We can list a specific type of object in the cluster:

kubectl get pods

Knowing the pod's name, the command

kubectl describe pods/<pod_name>

shows us the current state of the pod in the cluster.

Troubleshooting

We can use the describe and logs commands on all object types to find and fix problems when the cluster is not in the desired state.

We can display the pod's log messages using:

kubectl logs pods/<pod_name>

You can connect to the running pod using this guide.
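
A minimal sketch of connecting to the pod, assuming the echoserver image contains a shell at /bin/sh:

# Open an interactive shell inside the running pod
kubectl exec -it <pod_name> -- /bin/sh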

In addition to the Pod, a ReplicaSet is created, which monitors one or more pods. Using a ReplicaSet object, we can scale the application - add or remove pod replicas at runtime.

In our example, the Deployment object took care of creating both the ReplicaSet and the Pod.

The Deployment watches over the Pods and ReplicaSets. Its task is to ensure that rolling out a new version of the pods proceeds smoothly, without interrupting the running application.

If the new version of the pod does not work, it rejects the update and keeps running the old version.
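
A hedged sketch of how such an update and rollback could look for our Deployment; the container name echoserver is an assumption (it is the name kubectl create deployment typically derives from the image), and the image tag is only illustrative:

# Roll out a different image version
kubectl set image deployment/hello-kube echoserver=k8s.gcr.io/echoserver:1.9

# Watch the progress of the rollout
kubectl rollout status deployment/hello-kube

# Return to the previous version if the new one misbehaves
kubectl rollout undo deployment/hello-kube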

Deleting Kubernetes objects

If we no longer need an object, we should remove it so that it does not take up system resources unnecessarily.

Removal is performed using the command:

kubectl delete <object type>/<object name>

Try to delete the created Pod and verify the new state of the cluster:

kubectl delete pods/<pod name>
kubectl get all

Similarly, try deleting the created ReplicaSet.

  • If we delete the Pod, the ReplicaSet will start a new one.
  • If we delete the ReplicaSet, the Deployment will recreate it.
  • When deleting objects, we should therefore proceed in reverse order.
  • If we delete the Deployment, Kubernetes will remove its ReplicaSet and Pods as well.
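
A short sketch of the first point, deleting a pod and watching the ReplicaSet start a replacement:

# Delete the pod, then watch a new one being created in its place
kubectl delete pods/<pod name>
kubectl get pods --watch    # stop watching with Ctrl+C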

Services - Service object

In order for the application to be usable, we must determine how to access it. Containers within one pod can easily communicate with each other via local ports, but otherwise they are closed to external access.

The process of creating a service is similar to registering a DNS name for your server. A DNS record valid within the cluster is created, which we can use to establish connections between Pods.

The service's functionality is provided by a group of Pods, which can run on any nodes of the cluster. Kubernetes makes sure that one of them responds to each request.

If there are more pods, we must design them so that they are arbitrarily interchangeable - it does not matter which pod responds to a request. We achieve this by separating the data from the application. If the data lives in one place, e.g. in a database, it does not matter where the processing is performed.
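
Since our echoserver keeps no state of its own, its pods are interchangeable and we can scale them freely; a small sketch:

# Run three interchangeable replicas of the echo application
kubectl scale deployment hello-kube --replicas=3

# Scale back down to a single replica
kubectl scale deployment hello-kube --replicas=1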

We need to say how, and under what name, we will access the service the application provides. An object of type Service is used for this.

We create a Service object to publish a service to other applications in the cluster or to the public outside the cluster.

       +----------+
       | Service  |
       | DNS name |
       +----------+
       ^    ^     ^
      /     |      \
     /      |       \
    |       |       |
+-------+ +-------+ +-------+
| Pod 1 | | Pod 2 | | Pod 3 |
+-------+ +-------+ +-------+

A symbolic DNS name is created under which the service is accessible.

Important types of services:

  • ClusterIP: accessible only from within the cluster.
  • NodePort: accessible from outside on a port of every node.
  • LoadBalancer: accessible from outside via the provider's load balancer.

We create a service named hello-kube of the NodePort type with the command:

kubectl expose deployment hello-kube --type=NodePort --port=8080
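
As an aside, recent kubectl versions can print the Service manifest that expose would create without actually creating it (the --dry-run=client flag is an assumption about your kubectl version; older releases used plain --dry-run):

# Print the generated Service manifest instead of creating it
kubectl expose deployment hello-kube --type=NodePort --port=8080 --dry-run=client -o yaml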

Verify the new status of your cluster using the command:

kubectl get all

We can also take a closer look at the services:

kubectl get services

We will see a list of services, from which we can read the port on which the service is exposed.

We get something like this:

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-kube NodePort 10.111.243.205 <none> 8080:32453/TCP 5s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d22h

The public port number is after the colon. NodePort means that the service is reachable on every node of our cluster. In this case, the public port number is automatically generated and may be different in your case. The public port can also be set according to our requirements.

We can try it using a web browser or the curl command:

curl -X GET http://localhost:32453

The echoserver's response to our request should appear on the screen.
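
Since the port number will most likely differ in your cluster, here is a sketch of looking it up automatically, assuming the cluster runs on Docker Desktop where nodes are reachable via localhost (with Minikube, use minikube service hello-kube --url instead):

# Read the automatically assigned NodePort of the hello-kube service
NODE_PORT=$(kubectl get service hello-kube -o jsonpath='{.spec.ports[0].nodePort}')

# Call the echo service on that port
curl "http://localhost:${NODE_PORT}"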

Text-based interface

If you have more than one application running on the cluster, you will appreciate k9s, a console tool for monitoring the health of your cluster:

You can easily install it:

wget https://github.com/derailed/k9s/releases/download/v0.24.2/k9s_Linux_x86_64.tar.gz
tar zxf k9s_Linux_x86_64.tar.gz
sudo mv k9s /usr/local/bin
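
After installation, you start the interface simply by running the binary; by default it connects to the same cluster as kubectl:

# Launch the text-based cluster dashboard
k9s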
