
Kubernetes Part 6 of n : Health Checks / Readiness Probes / Scaling Deployments

So this post will continue to build upon the example pod/deployment/service that we have been using for the entire series of posts. Just a reminder, the rough outline is as follows:

 

  1. What is Kubernetes / Installing Minikube
  2. What are pods/labels, declaring your first pod
  3. Services
  4. Singletons, such as a DB
  5. ConfigMaps/Secrets
  6. LivenessProbe/ReadinessProbe/Scaling Deployments (this post)

 

Where is the code again?

 

 

What did we talk about last time?

Last time we looked at config maps/secrets.

 

So what is this post about?

This time we will focus our attention on the following:

  • Liveness probes
  • Readiness probes
  • Scaling Deployments

 

Liveness Probes

Liveness probes are used by Kubernetes to work out whether a container is healthy. There are several reasons a container might be deemed unhealthy, such as:

  • A deadlock
  • Unable to make progress

Kubernetes allows us to use either a TCP or an HTTP endpoint to test for liveness. Here is an example of what the pod definition should contain for an HTTP based livenessProbe (a TCP based alternative is sketched just after the explanation list further down):

 

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: X-Custom-Header
      value: Awesome
  initialDelaySeconds: 3
  periodSeconds: 3

 

Let's talk about some of that:

  • httpGet states that our container exposes an HTTP GET endpoint at path “/healthz” on port “8080”
  • initialDelaySeconds : the amount of time we instruct Kubernetes to wait before performing the first probe
  • periodSeconds : simply the time between two successive probes
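
As mentioned above, the probe does not have to be HTTP based. Purely as a hedged sketch (the port here is illustrative, not taken from our deployment.yaml), a TCP based livenessProbe would look something like this:

livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3

With a tcpSocket probe Kubernetes simply tries to open a TCP connection to the given port; if the connection succeeds the container is considered healthy, and if it cannot connect the probe fails.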

 

 

 

Changing Our App To Support This

So for our demo app I wanted to make a change such that when the livenessProbe GET endpoint is called, the result is either a Success (HTTP status 200) or a BadRequest (HTTP status 400), picked at random.

 

Ok, so to alter our ongoing demo app to support this, let's add a new model for the route:

using ServiceStack;

namespace sswebapp.ServiceModel
{
    [Route("/healthcheck","GET")]
    public class HealthCheck 
    {
        
    }

}

 

And then update the ServiceStack service

using System;
using System.Collections.Generic;
using System.Net;
using System.Text;
using MySql.Data.MySqlClient;
using ServiceStack;
using sswebapp.ServiceModel;

namespace sswebapp.ServiceInterface
{
    public class MyServices : Service
    {

        private Random rand = new Random();

		......
		......
		......
		

        public object Get(HealthCheck healthCheck)
        {
            // Randomly return 200 (healthy) or 400 (unhealthy) so we can
            // watch Kubernetes react to a failing livenessProbe
            var someRandom = rand.Next(10);
            return new HttpResult()
            {
                StatusCode = someRandom > 5 ? HttpStatusCode.OK : HttpStatusCode.BadRequest,
                ContentType = "application/json"
            };
        }
    }
}

 

 

Why would we do this?

Well, it's obviously just to satisfy the demo: when we ask for more than 1 copy of the app by scaling up the deployment, we can simulate some random unhealthy behaviour, which should cause Kubernetes to kill the offending pod and create a new one to satisfy our deployment requirements. You would not do this in a real health check endpoint.
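
For completeness, the livenessProbe in our deployment.yaml now points at this new /healthcheck route rather than the /healthz docs example shown earlier. I have not reproduced the full deployment.yaml here, but a minimal sketch of the relevant section would look something like this (the port and timing values are assumptions, match them to the containerPort and tolerances in your own file):

livenessProbe:
  httpGet:
    path: /healthcheck
    port: 5000
  initialDelaySeconds: 10
  periodSeconds: 5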

 

 

Readiness Probes

Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup. In such cases, you don’t want to kill the application, but you don’t want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.

 

Readiness probes are configured similarly to liveness probes. The only difference is that you use the readinessProbe field instead of the livenessProbe field.

 

Taken from: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ on 10/04/2018

 

Another way to think about a readinessProbe may be that your app has other dependencies that also need to be started before it can serve traffic. Say you need to be able to communicate with a database that also needs to be running before your app is deemed ready; you could work this logic into the response you return from a readinessProbe.

 

Both readiness and liveness probes can be used in parallel for the same container, which should ensure that traffic does not reach your container until it is actually ready (readinessProbe), and that the container is restarted when it is deemed unhealthy (livenessProbe).
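
To make the “in parallel” point concrete, here is a hedged sketch of what a container spec carrying both probes might look like (the name, image, paths, port and timings are illustrative only, they are not taken from our actual deployment.yaml):

containers:
- name: some-app
  image: some-app:latest
  ports:
  - containerPort: 8080
  readinessProbe:           # gates whether the Service sends traffic to this pod
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 5
  livenessProbe:            # restarts the container if it becomes unhealthy
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 3
    periodSeconds: 3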

 

 

Running The App

As we have done before, we need to launch our pod/service, starting with a single instance running, which we will look to scale out in a moment.

 

As always, we need to ensure Minikube is running first:

cd\
.\minikube.exe start --kubernetes-version="v1.9.0" --vm-driver="hyperv" --memory=1024 --hyperv-virtual-switch="Minikube Switch" --v=7 --alsologtostderr 

 

Then we need to run our pod/deployment (this will start a single instance of the ServiceStack REST endpoint pod, which is what we have defined/asked for in the deployment.yaml file):

.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post6_SimpleServiceStackPod_HealthChecks\sswebapp\deployment.yaml
.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post6_SimpleServiceStackPod_HealthChecks\sswebapp\service.yaml
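
If you want to double check that both objects were created before carrying on, the standard kubectl listing commands will do the trick:

.\kubectl.exe get deployments
.\kubectl.exe get services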

 

Ok, with that in place, we do our usual thing, where we grab the URL for the service and hit it up:

 

.\minikube service simple-sswebapi-service --url 

 

Grab the URL and use Postman to test the app, say the XXXX/Hello/sacha route (where XXXX is the service URL returned above).

 

If that is all working, we can then move on to look at how to scale our deployment up, and how it interacts with the livenessProbe we set up above (which may or may not fail when hit)

 

 

Scaling Deployments

So now that we have our deployment/service up and running with what is defined in the deployment.yaml, let's type this in:

.\kubectl.exe describe deployment simple-sswebapi-pod-v1

We should see something like this output

[image: kubectl describe deployment output for simple-sswebapi-pod-v1]

Let's also check the pods:
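
(For the screenshot below I am simply using the standard pod listing command, .\kubectl.exe get pods)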

[image: kubectl get pods output showing a single pod running]

All good, only 1 instance of our pod there too.

 

Now let's try and scale it up using this command:

.\kubectl scale deployment simple-sswebapi-pod-v1 --replicas=5
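
As an aside, the same end result can be achieved declaratively by bumping the replica count in the deployment.yaml spec and re-running the kubectl apply command from earlier; the kubectl scale command is just the quicker, imperative way of doing the same thing. The relevant part of the spec would simply read:

spec:
  replicas: 5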

Let's check the deployment now:

[image: deployment status after scaling, showing 5 desired replicas]

Aha, we now have 5 desired, 5 total, 2 available, and 3 still starting to come up.

Now let's check the pods. Here I issued the command a couple of times in quick succession, and you can see the pods starting to come up.

[image: two kubectl get pods listings, the pods starting up and then all 5 running]

And in the second listing, all 5 are running.

So now what we should be able to do is grab the URL of the service endpoint, such as:

.\minikube service simple-sswebapi-service --url

[image: minikube service command output showing the service URL]

Now grab that URL and try the livenessProbe route; for this example the full route would be http://192.168.0.29:32383/healthcheck. Try that in Postman, and keep checking the pods any time you get an HTTP status code of 400. Kubernetes should try to keep our desired state of 5 pods up and running by restarting the pod that gave us the 400 status code for the livenessProbe HTTP GET.
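
A handy way to watch this happening is to leave a console open streaming pod changes:

.\kubectl.exe get pods --watch

Bear in mind that by default Kubernetes waits for a few consecutive probe failures (failureThreshold, which defaults to 3) before it actually restarts the container, so a single 400 will not necessarily trigger a restart straight away.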

 

 

Conclusion

So that's it, we have reached the end of this mini Kubernetes journey. I hope you have had fun following along.

 

I have had fun (and some tense moments) writing this series.

 

There were some things I did not cover, which you may like to research on your own. Happy Kubernetes-ing!

 

 

Up next for me is some more Azure stuff, then it's either Scala Cats, or Embedded Kafka and some more Kafka Streams stuff.
