
Kubernetes Part 4 of n, Singletons

 

So this post builds upon the Services we looked at last time; we will see how we can use a Service to act as a singleton for something that should only exist once, such as a database. Just a reminder, the rough outline is as follows:

 

  1. What is Kubernetes / Installing Minikube
  2. What are pods/labels, declaring your first pod
  3. Services
  4. Singletons, such as a DB (this post)
  5. ConfigMaps/Secrets
  6. LivenessProbe/ReadinessProbe/Scaling Deployments

 

Where is the code again?

 

 

What did we talk about last time?

Last time we talked about Services and how they act as a stable front for a single pod or a group of pods. Services also expose a DNS address, which we looked at in the last post. This time we will carry on looking at Services, but we will tackle how we can set up a singleton such as a database.

 

What are we going to cover?

This time we will build upon what we did in post 2 and post 3, but we will get the REST API pod/service that we have crafted to work with a single MySql instance. If you recall from the last posts, the REST API was built with ServiceStack, which you may or may not be familiar with.

 

Why did I choose MySql?

I chose MySql as it is fairly self-contained, can be run as a single pod/service in Kubernetes, has been around for a long time, and has stable Docker images available. To be honest, it’s a nice simple thing to use for a demonstration.

 

Changes to the ongoing REST API pod that we are working with

The REST API pod we have been working with is all good; nothing changes for its deployment from what we did in post 2 and post 3. However, what we want to be able to do is get it to talk to a MySql instance that we will host in another service. So let’s look at what changes from last time to enable us to do that, shall we?

 

The first thing we need to do is update the sswebapp.ServiceInterface project to use the MySql instance. We need to update it to include the relevant NuGet package https://www.nuget.org/packages/MySql.Data/6.10.6, which (at the time of writing) was the most up-to-date .NET driver for MySql.
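For an SDK-style project, that boils down to a package reference something like the following (the exact project file name is an assumption on my part; the version matches the NuGet link above):

```xml
<!-- sswebapp.ServiceInterface.csproj (assumed file name) -->
<ItemGroup>
  <PackageReference Include="MySql.Data" Version="6.10.6" />
</ItemGroup>
```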

 

image

 

So now that we have that in place, we just need to create a new route for this MySql stuff.

From the stuff we already had working, we had a simple route called “Hello”, which would match routes like this:

 

/hello

/hello/{Name}

 

That still works; we have not touched it. But we do wish to add another route. The new route is one that takes the following parameters and is a POST request:

Host

Port
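For reference, here is a sketch of what the request/response DTOs for this route might look like in the sswebapp.ServiceModel project. The “/mysql” route path and the SqlProps wrapper type are assumptions on my part, inferred from the handler code further down rather than copied from the repo; only the Host/Port parameters and the Results list are implied by that code.

```csharp
// Hypothetical sketch of the DTOs for the new route. The "/mysql" route
// path and the SqlProps wrapper type are assumptions; only the Host/Port
// parameters and the Results list are implied by the handler code.
using System.Collections.Generic;
using ServiceStack;

[Route("/mysql", "POST")]
public class MySqlRequest : IReturn<MySqlResponse>
{
    public SqlProps SqlProps { get; set; }
}

public class SqlProps
{
    public string Host { get; set; }
    public int Port { get; set; }
}

public class MySqlResponse
{
    public List<string> Results { get; set; }
}
```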

 

image

 

Here is an example of usage from Postman, which is a tool I use a lot for testing out my REST endpoints.
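The JSON body of the POST request would look something like this (the property names are assumed to mirror the request DTO, and “mysql” here is the Service DNS name we will create later in this post):

```json
{
    "SqlProps": {
        "Host": "mysql",
        "Port": 3306
    }
}
```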

image

 

So now that we have a route declared, we need to write some server-side code to deal with it. That is done by adding the following code to the sswebapp.ServiceInterface project’s MyServices.cs code file:

using System;
using System.Collections.Generic;
using MySql.Data.MySqlClient;
using ServiceStack;
using sswebapp.ServiceModel;

namespace sswebapp.ServiceInterface
{
    public class MyServices : Service
    {
        //OTHER EXISTING ROUTES HERE SUCH AS
        //Hello/{Name}


        public object Post(MySqlRequest request)
        {
            MySqlConnection connection = null;
            try
            {
                var server = request.SqlProps.Host;
                var port = request.SqlProps.Port;
                var uid = "root";
                var password = "password";
                string connectionString = $"server={server};port={port};user={uid};password={password};";

                using (connection = new MySqlConnection(connectionString))
                {
                    connection.Open();
                    var query = @"SELECT table_name, table_schema 
                                FROM INFORMATION_SCHEMA.TABLES 
                                WHERE TABLE_TYPE = 'BASE TABLE';";

                    using (MySqlCommand cmd = new MySqlCommand(query, connection))
                    {
                        //Create a data reader and Execute the command
                        using (MySqlDataReader dataReader = cmd.ExecuteReader())
                        {

                            //Read the data and store them in the list
                            var finalResults = new List<string>();
                            while (dataReader.Read())
                            {
                                finalResults.Add($"Name = '{dataReader["table_name"]}', Schema = '{dataReader["table_schema"]}'");
                            }

                            //close Data Reader
                            dataReader.Close();

                            return new MySqlResponse
                            {
                                Results = finalResults
                            };
                        }
                    }
                }
            }
            catch(Exception ex)
            {
                return new MySqlResponse
                {
                    Results = new List<string>() {  ex.Message  + "\r\n" + ex.StackTrace}
                };
            }
            finally
            {
                if(connection != null)
                {
                    if(connection.State == System.Data.ConnectionState.Open)
                    {
                        connection.Close();
                    }
                }
            }
        }
    }
}

 

So now we have a route and the above server-side code to handle it. But what exactly does the code above do? Well, it’s quite simple; let’s break it down into a couple of points:

 

  • We wish to have a MySql service/pod created outside the scope of this service/pod, and it would be nice to try to connect to this MySql instance via both its DNS name and its IP address, just to ensure both of those work as expected in Kubernetes
  • Since we would like to use either the MySql service/pod DNS name or its IP address, I thought it made sense to pass that into the REST API request as parameters, and use it to try to connect to the MySql instance. By doing this we use the same logic above no matter what the DNS name or IP address ends up being; the code will just work as is, and we don’t need to change anything. You may say this is not very real-world like, which is true, but I am trying to demonstrate concepts at this stage, which is why I am keeping things simple/obvious
  • So we use the incoming parameters to establish a connection to the MySql service/pod (which obviously should be running in Kubernetes), and we just try to SELECT some stuff from the default MySql databases and return that to the user, just to show that the connection is working

 

So that is all we need to change in the existing REST API service/pod we have been working with.

 

The MySql Service/pod

OK, so now that we have our ongoing REST API service/pod modified (and already uploaded to Docker Hub for you: https://hub.docker.com/r/sachabarber/sswebapp-post-4/), we need to turn our attention to how to craft a MySql Kubernetes service/pod.

 

Luckily there is a nice official MySql image available on Docker Hub already: https://hub.docker.com/_/mysql/, so we can certainly start with that.

 

For the MySql instance to work, we will need to be able to store data on disk, so the service/pod needs to be stateful, which is something new. To do this we can create a Kubernetes Deployment for MySql and connect it to an existing PersistentVolume using a PersistentVolumeClaim.

 

Before we begin

Before we start, let’s make sure Minikube is running (see post 1 for more details):

cd\
.\minikube.exe start --kubernetes-version="v1.9.0" --vm-driver="hyperv" --memory=1024 --hyperv-virtual-switch="Minikube Switch" --v=7 --alsologtostderr 

 

Creating the PersistentVolume

So for this post, you can use the mysql\nfs-volume.yaml file to create the PersistentVolume you need, which looks like this:

 

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-volume
  labels:
    volume: my-volume
spec:
  accessModes:
    - ReadWriteOnce
  capacity: 
    storage: 1Gi
  nfs:
    server: 192.169.0.1
    path: "/exports"

 

And it can be deployed like this:

.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post4_SimpleServiceStackPod_MySQLStatefulService\mysql\nfs-volume.yaml

 

That will create a PersistentVolume of 1Gi that can then be used by the service/pod that will run the MySql instance.
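Before moving on, it is worth checking that the PersistentVolume actually registered; it should report a STATUS of Available until a claim binds to it:

```shell
# Check the PersistentVolume exists (STATUS should be "Available"
# until the PersistentVolumeClaim below binds to it)
.\kubectl.exe get pv mysql-volume
```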

 

Creating the MySql Instance

Let’s see how we can create a MySql service/volumeClaim and deployment/pod. We can use the file src\mysql-deployment.yaml to do all this, which looks like this:

 

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: mysql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      volume: my-volume
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        livenessProbe:
          tcpSocket:
            port: 3306
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
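This whole file can then be applied in one go (the path shown assumes the same folder layout as the nfs-volume.yaml example above; adjust it to wherever you have the examples):

```shell
# Applies all three objects (Service, PersistentVolumeClaim, Deployment)
# defined in the single YAML file
.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post4_SimpleServiceStackPod_MySQLStatefulService\src\mysql-deployment.yaml
```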

 

There are quite a few things to note here, so let’s go through them one by one.

 

PersistentVolumeClaim

 

That’s this part

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      volume: my-volume

 

  • This matches the PersistentVolume we just set up, where we ask for the 1Gi of storage

 

To understand the difference between PersistentVolumes and PersistentVolumeClaims, this excerpt from the official docs may help:

 

A PersistentVolume (PV) is a piece of storage in the cluster that has been manually provisioned by an administrator, or dynamically provisioned by Kubernetes using a StorageClass. A PersistentVolumeClaim (PVC) is a request for storage by a user that can be fulfilled by a PV. PersistentVolumes and PersistentVolumeClaims are independent from Pod lifecycles and preserve data through restarting, rescheduling, and even deleting Pods.

 

https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/  (as of 19/03/18)

 

Deployment/pod

That’s this part

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        livenessProbe:
          tcpSocket:
            port: 3306
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

 

OK, so moving on, let’s look at the deployment itself; this is the most complex part.

  • We use the mysql:5.6 image to create the pod
  • We set the root-level MySql password to “password” (relax, this is just a demo; in one of the next posts I will show you how to use Kubernetes Secrets)
  • We declare a livenessProbe that checks the MySql port of 3306 (a future post will cover probes in more detail)
  • We expose the standard MySql port of 3306
  • We set up a volumeMount with a mount path of /var/lib/mysql, which is where MySql stores its data
  • We set up the pod’s volume to use the PersistentVolumeClaim we created already

 

The MySql service

That’s this part

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: mysql

Which I think is fairly self-explanatory.

 

So what about the REST API service/deployment/pod?

So we have talked about what changes we had to make to ensure that the REST API could talk to a MySql instance running in Kubernetes, but we have not talked about how we run this modified REST API in Kubernetes. Luckily this has not really changed since last time; we just do this:

 

c:\
cd\
.\kubectl run simple-sswebapi-pod-v1 --replicas=1 --labels="run=sswebapi-pod-v1" --image=sachabarber/sswebapp-post-4:v1  --port=5000
.\kubectl expose deployment simple-sswebapi-pod-v1 --type=NodePort --name=simple-sswebapi-service
.\kubectl get services simple-sswebapi-service
.\minikube service simple-sswebapi-service --url 

 

So now that we have done all that, let’s check that our services are there.
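From the command line that is simply:

```shell
# Lists all Services in the default namespace; we expect to see
# both "mysql" and "simple-sswebapi-service" here
.\kubectl.exe get services
```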

image

 

OK, so now let’s use the trick of running busybox to check the DNS name for the running MySql instance, so that we can provide it to the new REST API endpoint.

 

.\kubectl run -i --tty busybox --image=busybox --restart=Never

Then we can do something like

nslookup mysql

Which should give us something like this

image

OK, so putting this all together, we should be able to hit our modified REST API endpoint to test this MySql instance using its DNS name. Let’s see that.

This is what we see using Postman

image

Woohoo, we get a nice 200 OK, and this is the full JSON response we got:

{
    "results": [
        "Name = 'columns_priv', Schema = 'mysql'",
        "Name = 'db', Schema = 'mysql'",
        "Name = 'event', Schema = 'mysql'",
        "Name = 'func', Schema = 'mysql'",
        "Name = 'general_log', Schema = 'mysql'",
        "Name = 'help_category', Schema = 'mysql'",
        "Name = 'help_keyword', Schema = 'mysql'",
        "Name = 'help_relation', Schema = 'mysql'",
        "Name = 'help_topic', Schema = 'mysql'",
        "Name = 'innodb_index_stats', Schema = 'mysql'",
        "Name = 'innodb_table_stats', Schema = 'mysql'",
        "Name = 'ndb_binlog_index', Schema = 'mysql'",
        "Name = 'plugin', Schema = 'mysql'",
        "Name = 'proc', Schema = 'mysql'",
        "Name = 'procs_priv', Schema = 'mysql'",
        "Name = 'proxies_priv', Schema = 'mysql'",
        "Name = 'servers', Schema = 'mysql'",
        "Name = 'slave_master_info', Schema = 'mysql'",
        "Name = 'slave_relay_log_info', Schema = 'mysql'",
        "Name = 'slave_worker_info', Schema = 'mysql'",
        "Name = 'slow_log', Schema = 'mysql'",
        "Name = 'tables_priv', Schema = 'mysql'",
        "Name = 'time_zone', Schema = 'mysql'",
        "Name = 'time_zone_leap_second', Schema = 'mysql'",
        "Name = 'time_zone_name', Schema = 'mysql'",
        "Name = 'time_zone_transition', Schema = 'mysql'",
        "Name = 'time_zone_transition_type', Schema = 'mysql'",
        "Name = 'user', Schema = 'mysql'",
        "Name = 'accounts', Schema = 'performance_schema'",
        "Name = 'cond_instances', Schema = 'performance_schema'",
        "Name = 'events_stages_current', Schema = 'performance_schema'",
        "Name = 'events_stages_history', Schema = 'performance_schema'",
        "Name = 'events_stages_history_long', Schema = 'performance_schema'",
        "Name = 'events_stages_summary_by_account_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_stages_summary_by_host_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_stages_summary_by_thread_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_stages_summary_by_user_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_stages_summary_global_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_statements_current', Schema = 'performance_schema'",
        "Name = 'events_statements_history', Schema = 'performance_schema'",
        "Name = 'events_statements_history_long', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_by_account_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_by_digest', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_by_host_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_by_thread_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_by_user_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_global_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_waits_current', Schema = 'performance_schema'",
        "Name = 'events_waits_history', Schema = 'performance_schema'",
        "Name = 'events_waits_history_long', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_by_account_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_by_host_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_by_instance', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_by_thread_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_by_user_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_global_by_event_name', Schema = 'performance_schema'",
        "Name = 'file_instances', Schema = 'performance_schema'",
        "Name = 'file_summary_by_event_name', Schema = 'performance_schema'",
        "Name = 'file_summary_by_instance', Schema = 'performance_schema'",
        "Name = 'host_cache', Schema = 'performance_schema'",
        "Name = 'hosts', Schema = 'performance_schema'",
        "Name = 'mutex_instances', Schema = 'performance_schema'",
        "Name = 'objects_summary_global_by_type', Schema = 'performance_schema'",
        "Name = 'performance_timers', Schema = 'performance_schema'",
        "Name = 'rwlock_instances', Schema = 'performance_schema'",
        "Name = 'session_account_connect_attrs', Schema = 'performance_schema'",
        "Name = 'session_connect_attrs', Schema = 'performance_schema'",
        "Name = 'setup_actors', Schema = 'performance_schema'",
        "Name = 'setup_consumers', Schema = 'performance_schema'",
        "Name = 'setup_instruments', Schema = 'performance_schema'",
        "Name = 'setup_objects', Schema = 'performance_schema'",
        "Name = 'setup_timers', Schema = 'performance_schema'",
        "Name = 'socket_instances', Schema = 'performance_schema'",
        "Name = 'socket_summary_by_event_name', Schema = 'performance_schema'",
        "Name = 'socket_summary_by_instance', Schema = 'performance_schema'",
        "Name = 'table_io_waits_summary_by_index_usage', Schema = 'performance_schema'",
        "Name = 'table_io_waits_summary_by_table', Schema = 'performance_schema'",
        "Name = 'table_lock_waits_summary_by_table', Schema = 'performance_schema'",
        "Name = 'threads', Schema = 'performance_schema'",
        "Name = 'users', Schema = 'performance_schema'"
    ]
}

Very nice, looks like it’s working.

 

ClusterIP:None : A bit better

Whilst the above is very cool, we can go one better. We keep everything the same as above, with the exception of the Service part of the src\mysql-deployment.yaml file, which we would now write like this:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: mysql
  clusterIP: None

See how we are using clusterIP: None; this is known as a headless Service. This option lets the Service DNS name resolve directly to the Pod’s IP address, which is optimal when you have only one Pod behind a Service and you don’t intend to increase the number of Pods. This allows us to hit the SQL instance just like this:

image

 

Where you can use the dashboard (or command line) to find the endpoints
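From the command line, the equivalent is to ask for the Service’s endpoints; with a headless Service the endpoint listed is the Pod’s own IP:

```shell
# Shows the endpoint(s) behind the "mysql" Service; with clusterIP: None
# this resolves straight to the single MySql pod's IP
.\kubectl.exe get endpoints mysql
```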

image

 

That’s pretty cool huh? No crazy long DNS name to deal with; “mysql” just rolls off the tongue a bit easier than “mysql.default.svc.cluster.local” (well, in my opinion it does anyway).

 

Conclusion

Again, Kubernetes has proven itself to be more than capable of doing what we want, namely exposing one service to another. We did not need to resort to the inferior technique of using environment variables; we were able to just use DNS to resolve the MySql instance. This is way better than links in Docker, and I like it a whole lot more, as I can control my different deployments independently and they can discover each other using DNS. In Docker (I could be wrong here though) this would all need to be done in a single Docker Compose file.
