
Kubernetes Part 6 of n : Health Checks / Readiness Probes / Scaling Deployments

So this post will continue to build upon our example pod/deployment/service that we have been using for the entire series of posts. Just a reminder, the rough outline is as follows:

 

  1. What is Kubernetes / Installing Minikube
  2. What are pods/labels, declaring your first pod
  3. Services
  4. Singletons, such as a DB
  5. ConfigMaps/Secrets
  6. LivenessProbe/ReadinessProbe/Scaling Deployments (this post)

 

Where is the code again?

 

 

What did we talk about last time?

Last time we looked at config maps/secrets.

 

So what is this post about?

This time we will focus our attention on the following:

  • Liveness probes
  • Readiness probes
  • Scaling Deployments

 

Liveness Probes

Liveness probes are used by Kubernetes to work out whether a container is healthy. There are several reasons a container might be deemed unhealthy, such as:

  • A deadlock
  • Unable to make progress

Kubernetes allows us to configure either a TCP or HTTP endpoint that can be used to test for liveness (an exec command probe is also supported). Here is an example of what the pod definition should contain for a livenessProbe (a TCP variant is sketched after the field explanations below):

 

livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3

 

Let's talk about some of that:

  • httpGet states that our container exposes an HTTP GET endpoint at path "/healthz" on port 8080
  • initialDelaySeconds : the amount of time we instruct Kubernetes to wait before performing the first probe
  • periodSeconds : simply the time between two successive probes
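For reference, the TCP flavour mentioned above is configured in much the same way; here is a minimal sketch (the port is just an assumption for illustration), where Kubernetes simply tries to open a TCP connection to the container on the given port and treats a failed connection as a failed probe:

livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3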

 

 

 

Changing Our App To Support This

So for our demo app I wanted to make a change such that when the livenessProbe GET endpoint is called, the result is either Success (HTTP status 200) or BadRequest (HTTP status 400), picked at random.

 

Ok, so to alter our ongoing demo app to support this, let's add a new model for the route:

using ServiceStack;

namespace sswebapp.ServiceModel
{
    [Route("/healthcheck","GET")]
    public class HealthCheck 
    {
        
    }

}

 

And then update the ServiceStack service

using System;
using System.Collections.Generic;
using System.Net;
using System.Text;
using MySql.Data.MySqlClient;
using ServiceStack;
using sswebapp.ServiceModel;

namespace sswebapp.ServiceInterface
{
    public class MyServices : Service
    {

        private Random rand = new Random();

		......
		......
		......
		

        // Randomly return 200 (OK) or 400 (BadRequest) to simulate an
        // intermittently unhealthy container for the livenessProbe demo
        public object Get(HealthCheck healthCheck)
        {
            var someRandom = rand.Next(10);
            return new HttpResult()
            {
                StatusCode = someRandom > 5 ? HttpStatusCode.OK : HttpStatusCode.BadRequest,
                ContentType = "application/json"
            };
        }
    }
}

 

 

Why would we do this?

Well, it's obviously just to satisfy the demo: when asking for more than one copy of the app while scaling up the deployment, we can simulate some random unhealthy behaviour, which should cause Kubernetes to kill the offending pod and create a new one to satisfy our deployment requirements. You would not do this in a real health check endpoint.
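For completeness, to have Kubernetes actually call this new route, the container section of the series' deployment.yaml needs a livenessProbe pointing at it. Here is a minimal sketch based on the post 5 deployment, assuming the app still listens on port 5000 and reusing that post's image tag purely as a placeholder (the real file in the repo may differ slightly):

      containers:
      - name: sswebapi-pod-v1
        image: sachabarber/sswebapp-post-5:v1   # placeholder image tag for illustration
        ports:
        - containerPort: 5000
        livenessProbe:
          httpGet:
            path: /healthcheck
            port: 5000
          initialDelaySeconds: 3
          periodSeconds: 3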

 

 

Readiness Probes

Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup. In such cases, you don’t want to kill the application, but you don’t want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.

 

Readiness probes are configured similarly to liveness probes. The only difference is that you use the readinessProbe field instead of the livenessProbe field.

 

Taken from https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ on 10/04/2018
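In practice, then, a minimal readinessProbe sketch looks almost identical to the liveness example above (reusing the same hypothetical /healthz endpoint):

readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5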

 

Another way to think about a readinessProbe may be that your app has other dependencies that also need to be started before your app is ready. Say you need to be able to communicate with a database, which also needs to be running before your app is deemed ready. You could work this logic into the value you return from a readinessProbe.

 

Both readiness and liveness probes can be used in parallel for the same container, which should ensure that traffic does not reach your container until it is actually ready (readinessProbe) and that the container is restarted when it is deemed unhealthy (livenessProbe).

 

 

Running The App

As we have done before, we need to launch our pod/service, starting with a single instance running, which we will look to scale out in a moment.

 

As always we need to ensure MiniKube is running first

cd\
.\minikube.exe start --kubernetes-version="v1.9.0" --vm-driver="hyperv" --memory=1024 --hyperv-virtual-switch="Minikube Switch" --v=7 --alsologtostderr 

 

Then we need to run our pod/deployment (this will start a single instance of the ServiceStack pod rest endpoint, which is what we have defined/asked for in the deployment.yaml file)

.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post6_SimpleServiceStackPod_HealthChecks\sswebapp\deployment.yaml
.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post6_SimpleServiceStackPod_HealthChecks\sswebapp\service.yaml

 

Ok with that in place, we do our usual thing, where we can grab the url for the service and hit it up

 

.\minikube service simple-sswebapi-service --url 

 

Grab the URL and use Postman to test the app, say the XXXX/Hello/sacha route.

 

If that is all working, we can then move on to look at how to scale our deployment up, and how it interacts with the livenessProbe we set up above (which may or may not fail when hit)

 

 

Scaling Deployments

So now that we have our deployment/service up and running with what is defined in the deployment.yaml, let's type this in:

.\kubectl.exe describe deployment simple-sswebapi-pod-v1

We should see something like this output

image

Lets also check the pods

image

All good only 1 instance of our pod there too.

 

Now let's try to scale it up using this command:

.\kubectl scale deployment simple-sswebapi-pod-v1 --replicas=5
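As an aside, the same result can be achieved declaratively by bumping the replicas value in the deployment spec and re-applying the file with kubectl apply; the relevant fragment of the YAML would simply be:

spec:
  replicas: 5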

Lets check the deployment now

image

Aha, we now have 5 desired, 5 total, 2 available, and 3 still coming up.

Now let's check the pods; here I issued the command a couple of times quickly, and you can see the pods starting to come up

image

And in the second screenshot, all 5 are running.

So now we should be able to grab the URL of the service endpoint, such as:

.\minikube service simple-sswebapi-service --url

image

Now grab that URL and try the livenessProbe route; for this example the full route would be http://192.168.0.29:32383/healthcheck. Try that in Postman, and then keep checking the pods any time you get an HTTP status code of 400. Kubernetes should try to keep our desired state of 5 pods up and running by restarting the pod that gave us the 400 status code for the livenessProbe HTTP GET.
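Rather than issuing kubectl get pods over and over by hand, you can also watch the restarts happen (the RESTARTS column ticking up) using the standard watch flag:

.\kubectl.exe get pods --watch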

 

 

Conclusion

So that’s it, we have reached the end of this mini Kubernetes journey. I hope you had fun following along.

 

I have had fun (and some tense moments) writing this series.

 

There were some things I did not cover, which you may like to research on your own. Happy Kubernetes!

 

 

Up next for me is some more Azure stuff, then it’s either Scala Cats, or Embedded Kafka and some more Kafka Streams stuff.


Kubernetes Part 5 of n : Config Maps and Secrets

 

So this post will continue to build upon our example pod/deployment/service that we have been using for the entire series of posts. Just a reminder, the rough outline is as follows:

 

  1. What is Kubernetes / Installing Minikube
  2. What are pods/labels, declaring your first pod
  3. Services
  4. Singletons, such as a DB
  5. ConfigMaps/Secrets (this post)
  6. LivenessProbe/ReadinessProbe/Scaling Deployments

 

Where is the code again?

 

 

What did we talk about last time?

Last time we continued to use services, showing how you can use a service to expose a singleton, such as a database, where we expect there to be only a single pod behind the service. We also talked about DNS lookups to aid discovery, and finally using ClusterIP: None, which allowed us to address the pod directly.

 

So what is this post about?

This time we will focus our attention on two more bits of Kubernetes functionality:

  • Config Maps
  • Secrets

We will continue to adjust the example pod that we have been using for the entire series so far. But before we do that, let's just talk about some of the concepts behind ConfigMaps/Secrets in Kubernetes.

 

Before we Start

Let's make sure Minikube is up and running by using this:

cd\
.\minikube.exe start --kubernetes-version="v1.9.0" --vm-driver="hyperv" --memory=1024 --hyperv-virtual-switch="Minikube Switch" --v=7 --alsologtostderr 

 

 

What is a ConfigMap?

This is what the official docs say

 

ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. This page provides a series of usage examples demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps.

 

From https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ on 29/03/18

 

There are many ways to create ConfigMaps, such as from directories of files, from individual files, from literal key/value pairs on the command line, or directly from a YAML manifest.

 

I am just going to concentrate on “Create ConfigMaps from files”, as that fits the bill for what I wanted to do with the pod that I have been altering for this series of posts. So let's have a look at that, shall we?
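For completeness, the literal-value flavour would look something like this (the ConfigMap name, key and value here are made up purely for illustration):

.\kubectl.exe create configmap some-demo-config --from-literal=SomeKey=SomeValue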

 

Let's say I had a file called “sachaserver-properties”, which held this content:

 

{
    "K8sConfigSetting1": "**** Config from K8s!!! ****"
}

 

I could easily try and create a config map from this file as follows:

 

.\kubectl.exe create configmap server-config --from-file=C:\Users\sacha\Desktop\KubernetesExamples\Post5_SimpleServiceStackPod_ConfigMapsSecrets\configmap_secrets\sachaserver-properties

So lets see what happens when I run that in PowerShell

 

image

 

Mmm, I was not expecting that. So the filename is important, and has a regex associated with it of [-._a-zA-Z0-9]+

 

But hang on, my file is called “sachaserver-properties“, so it should match the regex required by Kubernetes. Can you spot what’s wrong? That's right: the full path is being used as part of the filename, which Kubernetes doesn’t like. We will see why in just a minute.

 

So the answer is to just copy the file to the same folder as kubectl.exe and then try this command:

 

.\kubectl.exe create configmap server-config --from-file=C:\sachaserver-properties

This seems to go better

image

Lets have a closer look at the config map that we created here

.\kubectl.exe describe configmaps server-config

image

 

.\kubectl.exe get configmaps server-config -o yaml

image

 

There are a couple of things to note here:

  • The ConfigMap has a Data section, which is really just a key-value store
  • What’s really interesting is that the original file name is used as the name of the key inside this ConfigMap Data section (and had we used multiple files, we would have ended up with multiple keys in the Data section). This is WHY Kubernetes baulked at us earlier when we were using the FULL path to try and create the ConfigMap: it needs to be a simple key, which will end up as a key in this Data section
  • The other interesting thing is what happens with the original file content: it is used as the value for the key (representing the file) inside the Data section, as sketched below. This “sachaserver-properties” key does indeed contain the original contents of the file that I showed above.
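Since the screenshots are not reproduced here, the YAML returned by the get configmaps command above is roughly of this shape (a sketch rather than the exact output; most of the metadata is omitted):

apiVersion: v1
kind: ConfigMap
metadata:
  name: server-config
  namespace: default
data:
  sachaserver-properties: |
    {
        "K8sConfigSetting1": "**** Config from K8s!!! ****"
    }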

 

So that’s pretty cool. But what can I do with this ConfigMap?

Well, now that we have a ConfigMap, we need to think about how to use it inside our own pods. Again there are several different ways; I will concentrate on one of them: mounting the ConfigMap as a volume.

 

Can we test this out before we move onto changing our main pod?

It’s not a lot of work to upload a new image to Docker Cloud, but I am the sort of guy that likes to know things work before I upload it somewhere, and then have to tear it down, and redo it again. So is there a way that I can check this ConfigMap stuff is working as expected locally using some existing Docker image before I change the code for the pod that we have been using to date for this series of posts?

 

Well, actually yes there is: our old friend busybox (the Swiss Army knife of containers). We can use a busybox pod in a couple of ways to verify the ConfigMap is working as expected

  • We can check the mount works ok
  • We can check that the data contents of the ConfigMap are as expected

 

Lets see how

 

Checking the mount works ok

So lets say we have a busybox pod that looks like this (this is busybox-ls-pod.yaml in source code)

apiVersion: v1
kind: Pod
metadata:
  name: busybox-ls-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /k8s/config" ]
      volumeMounts:
      - name: config-volume
        mountPath: /k8s/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: server-config
  restartPolicy: Never

 

We can then run this like

.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post5_SimpleServiceStackPod_ConfigMapsSecrets\configmap\busybox-ls-pod.yaml

And check its logs like this

.\kubectl logs busybox-ls-pod

Which gives us output like this

image

Ok, so that is good; it looks like the mount is working ok.

 

Checking the data contents is ok

So now that we know the mount is ok, how about the data contents of the mounted ConfigMap? Let's see an example of how we can check that using another busybox pod (this is busybox-cat-pod.yaml in the source code)

apiVersion: v1
kind: Pod
metadata:
  name: busybox-cat-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh","-c","cat /k8s/config/sachaserver-properties" ]
      volumeMounts:
      - name: config-volume
        mountPath: /k8s/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: server-config
  restartPolicy: Never

 

We can then run this like

.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post5_SimpleServiceStackPod_ConfigMapsSecrets\configmap\busybox-cat-pod.yaml

And check its logs like this

.\kubectl logs busybox-cat-pod

Where we get output like this

image

Cool this looks to be working as expected as well

 

 

What is a Secret?

Right now there is not much difference between Secrets and ConfigMaps in Kubernetes. The only real difference is how you create the data that you want stored in the first place, where the recommendation is to use Base64 encoded values for your secrets.

 

There is a slightly different command line to run, and the way you mount the volume in your pod is also slightly different, but conceptually it's not that different (right now anyway, though I would imagine this might change to use some other mechanism over time).

 

Base64 Encoding

So the recommendation is to base64 encode our secret values, and if you are a Linux/bash user this is how you can do it

image

 

Or you could just use one of the many base64 encoding/decoding sites online, such as https://www.base64decode.org/
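As the screenshot above may not be visible here, a typical bash one-liner for this (producing exactly the value used in the file below) would be:

echo -n '**** Secret from K8s ****' | base64

which outputs KioqKiBTZWNyZXQgZnJvbSBLOHMgKioqKg==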

 

So once you have done that for whatever you want to keep as a secret, we can put the values in a file such as this one (this is the sachaserver-secrets-properties file in the demo source code)

{
    "K8sSecret1": "KioqKiBTZWNyZXQgZnJvbSBLOHMgKioqKg=="
}


In here the base64 encoded string is really “**** Secret from K8s ****”.

 

As before, the secrets file path must not contain any \\ or \ characters, which means moving it to C:\ for me (at least on Windows anyway). So here is the script that copies the file and also creates the secret from the input secret file sachaserver-secrets-properties

 

Remove-Item c:\sachaserver-secrets-properties
Copy-Item C:\Users\sacha\Desktop\KubernetesExamples\Post5_SimpleServiceStackPod_ConfigMapsSecrets\secrets\sachaserver-secrets-properties -Destination c:\sachaserver-secrets-properties
.\kubectl.exe create secret generic server-secrets --from-file=sachaserver-secrets-properties

 

We can do some simple tests to make sure it looks ok, but we will also use our favorite busybox pod to really check it out. For now lets run some rudimentary tests

.\kubectl.exe describe secrets server-secrets
.\kubectl.exe get secrets server-secrets -o yaml

 

But as we just said we can/should use busybox to confirm everything is ok before we start to make adjustments to our own pod. Lets move on to see what the busybox stuff looks like for this secrets stuff

 

We can use a busybox pod in the same way we used it with the ConfigMap, to verify the secret we just created is working as expected

  • We can check the mount works ok
  • We can check that the data contents of the secret are as expected

 

Lets see how

 

Checking the mount works ok

So lets say we have a busybox pod that looks like this (this is busybox-secrets-ls-pod.yaml in source code)

apiVersion: v1
kind: Pod
metadata:
  name: busybox-secrets-ls-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /k8s/secrets" ]
      volumeMounts:
      - name: secrets-volume
        mountPath: /k8s/secrets
  volumes:
    - name: secrets-volume
      secret:
        # Provide the name of the Secret containing the files you want
        # to add to the container
        secretName: server-secrets
  restartPolicy: Never

 

We can then run this like

.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post5_SimpleServiceStackPod_ConfigMapsSecrets\secrets\busybox-secrets-ls-pod.yaml

And check its logs like this

.\kubectl logs busybox-secrets-ls-pod

Which gives us output like this

image

 

Ok, so that is good; it looks like the mount is working ok.

 

Checking the data contents is ok

So now that we know the mount is ok, how about the data contents of the mounted secrets? Let's see an example of how we can check that using another busybox pod (this is busybox-secrets-cat-pod.yaml in the source code)

apiVersion: v1
kind: Pod
metadata:
  name: busybox-secrets-cat-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh","-c","cat /k8s/secrets/sachaserver-secrets-properties" ]
      volumeMounts:
      - name: secrets-volume
        mountPath: /k8s/secrets
  volumes:
    - name: secrets-volume
      secret:
        # Provide the name of the Secret containing the files you want
        # to add to the container
        secretName: server-secrets
  restartPolicy: Never

 

We can then run this like

.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post5_SimpleServiceStackPod_ConfigMapsSecrets\secrets\busybox-secrets-cat-pod.yaml

And check its logs like this

.\kubectl logs busybox-secrets-cat-pod

Where we get output like this

image

 

Cool this looks to be working as expected as well

 

 

What’s changed in the demo app?

Ok so far no blockers, what have we got working so far?

Well we now have this working

  • A working configmap
  • A correctly proved out volume mapping for the configmap
  • The ability to confirm the configmap file is present in the mapped volume for the configmap
  • The ability to read the contents of the configmap file within the mapped volume for the configmap
  • A working secret
  • A correctly proved out volume mapping for the secret
  • The ability to confirm the secret file is present in the mapped volume for the secret
  • The ability to read the contents of the secret file within the mapped volume for the secret

I demonstrated ALL of this above. So now we should be in a good place to adjust our pod / deployment that we have been working on for this entire series. Just to remind ourselves of what the demo pod did here is what we have working so far

 

 

So what do we need to change to support the configmap/secret stuff that we are trying to demo for this post?

 

Change The Deployment For The Pod

We obviously need to make changes to the demo deployment/pod definition to support the configmap/secrets stuff, so this is the new deployment file for this post

 

Post5_SimpleServiceStackPod_ConfigMapsSecrets\sswebapp\deployment.yaml

 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: simple-sswebapi-pod-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: run=sswebapi-pod-v1
    spec:
      containers:
      - name: sswebapi-pod-v1
        image: sachabarber/sswebapp-post-5:v1
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: config-volume
          mountPath: /k8s/config
        - name: secrets-volume
          mountPath: /k8s/secrets          
      volumes:
        - name: config-volume
          configMap:
            # Provide the name of the ConfigMap containing the files you want
            # to add to the container
            name: server-config
        - name: secrets-volume
          secret:
            # Provide the name of the Secret containing the files you want
            # to add to the container
            secretName: server-secrets            

 

Something That Reads The Input Files

So we established above that once we have the volumes mapped, the configmap/secret files should be available at the mount paths specified above. So how do we read these files? Previously we were using busybox and cat, but now we are in .NET Core land. Mmmm

 

Luckily the .NET Core configuration system works just fine reading these files; here is how we do it in the adjusted Startup.cs class

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;
using Funq;
using ServiceStack;
using ServiceStack.Configuration;
using sswebapp.ServiceInterface;
using ServiceStack.Api.Swagger;
using System.Text;

namespace sswebapp
{
    public class Startup
    {
        public static IConfiguration Configuration { get; set; }

        public Startup(IConfiguration configuration) => Configuration = configuration;

        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddJsonFile("/k8s/config/sachaserver-properties", optional: true, reloadOnChange: true)
                .AddJsonFile("/k8s/secrets/sachaserver-secrets-properties", optional: true, reloadOnChange: true)
                .AddEnvironmentVariables();

            Configuration = builder.Build();

            //Set the Static on MyServices, which is very poor design, but it is just for a
            //demo so I am letting it slide
            MyServices.AllVariablesFromStartup = Configuration.AsEnumerable();

            app.UseServiceStack(new AppHost
            {
                AppSettings = new NetCoreAppSettings(Configuration)
            });
        }
    }

    public class AppHost : AppHostBase
    {
       .....
    }
}

The most important lines are these ones, where it can be seen that we read from the mounted configmap/secret files:

var builder = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .....
	.....
	.AddJsonFile("/k8s/config/sachaserver-properties", optional: true, reloadOnChange: true)
    .AddJsonFile("/k8s/secrets/sachaserver-secrets-properties", optional: true, reloadOnChange: true)
    .AddEnvironmentVariables();

Configuration = builder.Build();

//Set the Static on MyServices, which is very poor design, but it is just for a
//demo so I am letting it slide
MyServices.AllVariablesFromStartup = Configuration.AsEnumerable();
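Once the configuration is built, an individual value can be read in the usual .NET Core way via the IConfiguration indexer; for example (using the keys from the configmap/secret files shown earlier):

// Read one of the values supplied via the Kubernetes ConfigMap volume
var configSetting = Configuration["K8sConfigSetting1"];

// Secrets arrive the same way; note the value here is still the base64 string
// that we placed in the sachaserver-secrets-properties file
var secretSetting = Configuration["K8sSecret1"];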

 

 

A New Route

Ok, so now that we are reading in these file values, we obviously want to ensure it is all working, so let's have a new route to expose all the settings that have been read in by the .NET Core configuration system, where we are obviously expecting our configmap/secrets items to be part of that.

 

Since we are using ServiceStack this is what our new route looks like

using ServiceStack;
using System.Collections.Generic;

namespace sswebapp.ServiceModel
{
    [Route("/showsettings","GET")]
    public class ShowSettingRequest : IReturn<ShowSettingResponse>
    {
        
    }

    public class ShowSettingResponse
    {
        public IEnumerable<string> Results { get; set; }
    }
    
}

 

 

A New Method To Support The New Route

Ok so we now have a new route, but we need some service code to support this new route. So here we have it

using System;
using System.Collections.Generic;
using System.Text;
using MySql.Data.MySqlClient;
using ServiceStack;
using sswebapp.ServiceModel;

namespace sswebapp.ServiceInterface
{
    public class MyServices : Service
    {
        /// <summary>
        /// Set in <c>sswebapp.Startup.cs</c>. This is just for demo purposes only
        /// this is not a great design, but for this quick and dirty demo it does the job
        /// </summary>
        public static IEnumerable<KeyValuePair<string,string>> AllVariablesFromStartup { get; set; }

		....
		....
		....
		....
		....
		

        public object Get(ShowSettingRequest request)
        {
            try
            {
                var allVars = new List<string>();
                foreach (var kvp in AllVariablesFromStartup)
                {
                    allVars.Add($"Key: {kvp.Key}, Value: {kvp.Value}");
                }

                return new ShowSettingResponse { Results = allVars };
            }
            catch(Exception ex)
            {
                return new ShowSettingResponse { Results = new List<string>() {ex.Message } };
            }
        }
    }
}

 

Testing It Out

So we have talked about creating the configmap/secrets and how to test them out using our friend, busybox. We have also talked about how we modified the ongoing pod/deployment that this series of posts has worked with from the beginning, where we have exposed a new route to allow us to grab all the settings the .NET Core configuration subsystem can see.

 

So we should be in a good position to test it out for real, lets proceed

 

As usual we expect minikube to be running

c:\
cd\
.\minikube.exe start --kubernetes-version="v1.9.0" --vm-driver="hyperv" --memory=1024 --hyperv-virtual-switch="Minikube Switch" --v=7 --alsologtostderr 

Ok so now lets create the pod/deployment

.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post5_SimpleServiceStackPod_ConfigMapsSecrets\sswebapp\deployment.yaml

 

And then the service

.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post5_SimpleServiceStackPod_ConfigMapsSecrets\sswebapp\service.yaml

Then lets grab the url for the service

.\minikube service simple-sswebapi-service --url 

 

So for me this would allow me to use this url for the running service, which should show ALL the settings that the ServiceStack app read in when using the new showsettings route, including the Kubernetes configmap/secret values we looked at above

 

 

http://192.168.0.29:32383/showsettings?format=json

 

image

 

Not the best formatting, I give you that, so lets just take that JSON into http://jsonprettyprint.com/json-pretty-printer.php which tidies it up into this

 

image

 

Aha, our configmap and secret values are there. Superb, it's all working. Obviously for the secrets we would still need to decode the base64 string to get our original value, but this does show everything is working just fine.

 

Conclusion

As with most of the posts in this series so far, I have found Kubernetes to be most intuitive, and it just kind of works, to be honest. This post has been particularly straightforward: I just wrote the YAML for the ConfigMap, then wrote a test busybox pod, and it just worked. Doesn't happen that often... so yay.


Azure WebApp : Checking the deployed WebApp file system

 

So at work we are using Azure a lot, and one of the things we use a heck of a lot is web apps. We have a lot of these, some full-blown web sites, some simple ServiceStack REST APIs. We orchestrate the deployment of these Azure WebApps via standard VSTS build/release steps and some custom ARM templates.

 

Just for a quick diversion this is what ARM templates are if you have not heard of them

 

What is Azure Resource Manager

Azure Resource Manager (ARM) allows you to provision your applications using a declarative template. In a single template, you can deploy multiple services along with their dependencies. You use the same template to repeatedly deploy your application during every stage of the application life cycle.

 

This post is NOT about ARM templates, but I just thought it worth calling out what they were, in case you have not heard of them before.

 

So what is this post about?

Well, as I say, we have a bunch of WebApps that we deploy to Azure, which most of the time is just fine, and we rarely need to check up on this automated deployment mechanism; it just works. However, as most engineers will attest, the shit fairy does occasionally come to town, and when she does she is sure to stop by and sprinkle a little chaos on your lovely deployment setup.

 

Now I don’t really believe in fairies, but I have certainly witnessed first hand that things have ended up deployed all wrong, and I have found myself in a situation where I needed to check the following things to make sure what I think I have configured is what is actually happening when I deploy

 

  1. VSTS steps are correct
  2. VSTS variables are correct
  3. Web.Config/AppSettings.json have the correct values in them when deployed

 

Items 1 and 2 from that list are easy, as we can check them inside VSTS. However, item 3 requires us to get onto the actual file system of the VM running the Azure WebApp that we tried to deploy. This is certainly not possible (to my knowledge) from VSTS.

 

So how can we see what was deployed for our Azure WebApp?

So it would seem we need to get access to the filesystem of the VM running the Azure WebApp. Now, if you know anything about scale sets and how Azure deals with WebApps, you will know that you can't really trust that the VMs that are there right now are guaranteed to be the exact same VMs tomorrow. Azure just doesn't work that way. If a VM is deemed unhealthy, it can and will be taken away, and a new one will be provisioned under the covers.

 

You generally don’t have to care about this, Azure just does its magic and we are blissfully unaware. Happy days.

 

However, if we do need to do something like check a deployment, how do we do that? What VM should I try to gain access to? Will that VM be the same one tomorrow? Maybe, maybe not. So we can't really write any funky scripts with fixed VM host names in them, as we may not be getting the same VM to host our WebApp from one day to the next. So how do we deal with this exactly?

 

Luckily there is a magic button in the Azure Portal that allows us to do just what we want.

 

Say we have a WebApp that we have created via some automated VSTS deployment setup

 

image

 

We can open the web app, and drill into its blade and look for the Advanced Tools

 

image

 

Then select the “Go” option from the panel that is displayed in the portal. Once you have done that a new tab will open in your browser that shows something like this

 

image

 

It can be seen that this opens up the Kudu portal for the selected Azure WebApp.

 

But just what is Kudu?

Kudu is the engine behind git/hg deployments, WebJobs, and various other features in Azure Web Sites. It can also run outside of Azure.

 

Anyway, once we have the Kudu portal open for our selected WebApp we can do various things; the one that we are interested in for this post is the Debug Console –> PowerShell option.

 

image

 

So lets launch that, we then get something like this

 

image

 

Aha, a file system for the WebApp. Cool. So now all we need to do is explore it, say by changing directory to site\wwwroot. From there we could have a look at the Web.config (this is a standard .NET web site, so no AppSettings.json for this one).

 

We could examine the Web.config content with the Get-Content PowerShell cmdlet, for example as sketched below.
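If the screenshot is not visible, a typical exchange in the Kudu PowerShell console (paths assumed) would be along these lines:

cd site\wwwroot
Get-Content .\Web.config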

 

image

 

 

 

Conclusion

So that’s it. This was a small post, but I hope it showed you that even though the VMs you may be running on from one day to the next may change, you still have the tools you need to get in there and have a look around. Until next time then….