
Kubernetes Part 4 of n, Singletons

 

So this post will build upon the Services we looked at last time; we will see how we can use a Service to act as a singleton for something that should only exist once, such as a database. Just as a reminder, the rough outline is as follows:

 

  1. What is Kubernetes / Installing Minikube
  2. What are pods/labels, declaring your first pod
  3. Services
  4. Singletons, such as a DB (this post)
  5. ConfigMaps/Secrets
  6. LivenessProbe/ReadinessProbe/Scaling Deployments

 

Where is the code again?

As with the previous posts, the code is available here: https://github.com/sachabarber/KubernetesExamples

 

What did we talk about last time?

Last time we talked about Services and how they provide a stable abstraction over a single pod or a group of pods. Services also expose a DNS address, which we looked at in the last post. This time we will carry on looking at Services, but we will tackle how to set up a singleton such as a database.

 

What are we going to cover?

This time we will build upon what we did in posts 2 and 3, and get the REST API pod/service that we have crafted to work with a single MySQL instance. If you recall from the last posts, the REST API was built with ServiceStack, which you may or may not be familiar with.

 

Why did I choose MySql?

I chose MySql as it is fairly self-contained, can be run as a single pod/service in Kubernetes, has been around for a long time, has stable Docker images available, and, to be honest, it is a nice simple thing to use for a demonstration.

 

Changes to the ongoing REST API pod that we are working with

The REST API pod we have been working with is all good; nothing changes in how it is deployed from what we did in posts 2 and 3. However, what we want to be able to do is get it to talk to a MySql instance that we will host in another service. So let's look at what changes from last time to enable us to do that, shall we?

 

The first thing we need to do is update the sswebapp.ServiceInterface project to use the MySql instance. We need to update it to include the relevant NuGet package https://www.nuget.org/packages/MySql.Data/6.10.6 which (at the time of writing) was the most up to date .NET driver for MySql
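If you prefer the command line to the Visual Studio NuGet UI, something like this should do it (a sketch, assuming an SDK-style .NET Core project and that you run it from the solution folder; adjust the project path to suit your layout):

dotnet add sswebapp.ServiceInterface package MySql.Data --version 6.10.6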

 

image

 

So now that we have that in place we just need to create a new route for this MySql stuff.

From what we already had working, there was a simple route called “Hello”, which would match routes like this

 

/hello

/hello/{Name}

 

That still works, we have not touched that, but we do wish to add another route. The new route will be one that takes the following parameters and is a POST request:

Host

Port

 

image
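The screenshot above shows the actual DTOs from the repo; purely for reference, a minimal sketch of what the request/response classes in the sswebapp.ServiceModel project might look like is shown below. The route path “/mysql” and the exact class/property names are assumptions inferred from the server-side code further down, not necessarily the exact code from the repo.

using System.Collections.Generic;
using ServiceStack;

namespace sswebapp.ServiceModel
{
    // Hypothetical route path; the real one is in the screenshot above
    [Route("/mysql", "POST")]
    public class MySqlRequest : IReturn<MySqlResponse>
    {
        // Connection details supplied by the caller (DNS name or IP address, plus port)
        public SqlProps SqlProps { get; set; }
    }

    public class SqlProps
    {
        public string Host { get; set; }
        public int Port { get; set; }
    }

    public class MySqlResponse
    {
        // One line per table found, or the exception details if the connection failed
        public List<string> Results { get; set; }
    }
}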

 

Here is an example of usage from Postman, which is a tool I use a lot for testing out my REST endpoints

image

 

So now that we have a route declared, we need to write some server-side code to deal with it. That is done by adding the following code to the sswebapp.ServiceInterface.MyService.cs code file

using System;
using System.Collections.Generic;
using MySql.Data.MySqlClient;
using ServiceStack;
using sswebapp.ServiceModel;

namespace sswebapp.ServiceInterface
{
    public class MyServices : Service
    {
        //OTHER EXISTING ROUTES HERE SUCH AS
        //Hello/{Name}

        public object Post(MySqlRequest request)
        {
            MySqlConnection connection = null;
            try
            {
                var server = request.SqlProps.Host;
                var port = request.SqlProps.Port;
                var uid = "root";
                var password = "password";
                string connectionString = $"server={server};port={port};user={uid};password={password};";

                using (connection = new MySqlConnection(connectionString))
                {
                    connection.Open();
                    var query = @"SELECT table_name, table_schema 
                                FROM INFORMATION_SCHEMA.TABLES 
                                WHERE TABLE_TYPE = 'BASE TABLE';";

                    using (MySqlCommand cmd = new MySqlCommand(query, connection))
                    {
                        //Create a data reader and Execute the command
                        using (MySqlDataReader dataReader = cmd.ExecuteReader())
                        {

                            //Read the data and store them in the list
                            var finalResults = new List<string>();
                            while (dataReader.Read())
                            {
                                finalResults.Add($"Name = '{dataReader["table_name"]}', Schema = '{dataReader["table_schema"]}'");
                            }

                            //close Data Reader
                            dataReader.Close();

                            return new MySqlResponse
                            {
                                Results = finalResults
                            };
                        }
                    }
                }
            }
            catch(Exception ex)
            {
                return new MySqlResponse
                {
                    Results = new List<string>() {  ex.Message  + "\r\n" + ex.StackTrace}
                };
            }
            finally
            {
                if(connection != null)
                {
                    if(connection.State == System.Data.ConnectionState.Open)
                    {
                        connection.Close();
                    }
                }
            }
        }
    }
}

 

So now we have a route and the above server-side code to handle it. But what exactly does the code above do? Well, it's quite simple; let's break it down into a few points.

 

  • We wish to have a MySql service/pod created outside the scope of this service/pod, and it would be nice to try and connect to this MySql instance via its DNS name and via its IP address, just to ensure both of those elements work as expected in Kubernetes
  • Since we would like to use either the MySql service/pod DNS name or its IP address, I thought it made sense to pass that into the REST API request as parameters and use it to try and connect to the MySql instance. By doing this we just use the same logic above no matter what the DNS name or IP address ends up being; the above code will just work as is, and we don't need to change anything. You may say this is not very real-world like, which is true, however I am trying to demonstrate concepts at this stage, which is why I am keeping things simple/obvious (an example request is shown just after this list)
  • So we use the incoming parameters to establish a connection to the MySql service/pod (which obviously should be running in Kubernetes), and we just try and SELECT some stuff from the default MySql databases and return it to the user, just to show that the connection is working
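To make that concrete, a request to the new endpoint might look something like the following (the route path is the same assumed “/mysql” as above; the body shape simply follows the MySqlRequest DTO used by the server-side code). The Host value here is the DNS name of the MySql service we will create below, but an IP address works just as well.

POST /mysql HTTP/1.1
Content-Type: application/json

{
  "SqlProps": {
    "Host": "mysql",
    "Port": 3306
  }
}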

 

So that is all we need to change in the existing REST API service/pod we have been working with.

 

The MySql Service/pod

Ok, so now that we have our ongoing REST API service/pod modified (and already uploaded to Docker Hub for you: https://hub.docker.com/r/sachabarber/sswebapp-post-4/), we need to turn our attention to how to craft a MySql Kubernetes service/pod.

 

Luckily there is a nice official MySql image available on Docker Hub already: https://hub.docker.com/_/mysql/, so we can certainly start with that.

 

For the MySql instance to work we will need to be able to store stuff to disk, so the service/pod needs to be stateful, which is something new. To do this we can create a Kubernetes deployment for MySql and connect it to an existing PersistentVolume using a PersistentVolumeClaim.

 

Before we begin

Before we start, let's make sure Minikube is running (see post 1 for more details)

cd\
.\minikube.exe start --kubernetes-version="v1.9.0" --vm-driver="hyperv" --memory=1024 --hyperv-virtual-switch="Minikube Switch" --v=7 --alsologtostderr 

 

Creating the PersistentVolume

So for this post, you can use the mysql\nfs-volume.yaml file to create the PersistentVolume you need, which looks like this

 

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-volume
  labels:
    volume: my-volume
spec:
  accessModes:
    - ReadWriteOnce
  capacity: 
    storage: 1Gi
  nfs:
    server: 192.169.0.1
    path: "/exports"

 

And can be deployed like

.\kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post4_SimpleServiceStackPod_MySQLStatefulService\mysql\nfs-volume.yaml

 

That will create a 1Gi PersistentVolume that can then be used by the service/pod that will run the MySql instance.
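If you want to double-check it, listing the PersistentVolumes should show the new volume and its status:

.\kubectl.exe get pv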

 

Creating the MySql Instance

Let's see how we can create a MySql service/PersistentVolumeClaim and deployment/pod. We can use the file src\mysql-deployment.yaml to do all of this, which looks like this

 

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: mysql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      volume: my-volume
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        livenessProbe:
          tcpSocket:
            port: 3306
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

 

There are quite a few things to note here, so let's go through them one by one.

 

PersistentVolumeClaim

 

That’s this part

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      volume: my-volume

 

  • This matches the PersistentVolume we just set up, where we ask for 1Gi

 

To understand the difference between PersistentVolumes and PersistentVolumeClaims, this excerpt from the official docs may help

 

A PersistentVolume (PV) is a piece of storage in the cluster that has been manually provisioned by an administrator, or dynamically provisioned by Kubernetes using a StorageClass. A PersistentVolumeClaim (PVC) is a request for storage by a user that can be fulfilled by a PV. PersistentVolumes and PersistentVolumeClaims are independent from Pod lifecycles and preserve data through restarting, rescheduling, and even deleting Pods.

 

https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/  up on date 19/03/18

 

Deployment/pod

That’s this part

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        livenessProbe:
          tcpSocket:
            port: 3306
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

 

Ok, so moving on, let's look at the deployment itself; this is the most complex part (the commands just after this list show how to check it all came up correctly)

  • We use the mysql:5.6 image to create the pod
  • We set the root-level MySql password to “password” (relax, this is just a demo; in one of the next posts I will show you how to use Kubernetes secrets)
  • We add a livenessProbe that checks the standard MySql port of 3306 via a TCP socket (a future post will cover probes)
  • We expose the standard MySql port of 3306 on the container
  • We set up the volumeMounts so that the MySql data directory (/var/lib/mysql) is backed by a mounted volume
  • We point that volume at the PersistentVolumeClaim we created above
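As promised above, once the yaml file has been applied, a few quick checks along these lines should confirm that the claim bound and the pod and service started (the path and the default namespace are assumptions here; adjust to wherever the file lives in your setup):

.\kubectl.exe apply -f .\mysql-deployment.yaml
.\kubectl.exe get pvc mysql-pv-claim
.\kubectl.exe get pods -l app=mysql
.\kubectl.exe get service mysql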

 

The MySql service

That’s this part

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: mysql

Which I think is fairly self-explanatory.

 

So what about the REST API service/deployment/pod?

So we have talked about what changes we had to make to ensure that the REST API could talk to a MySql instance running in Kubernetes, but we have not talked about how we run this modified REST API in Kubernetes. Luckily, this has not really changed since last time; we just do this:

 

c:\
cd\
.\kubectl run simple-sswebapi-pod-v1 --replicas=1 --labels="run=sswebapi-pod-v1" --image=sachabarber/sswebapp-post-4:v1  --port=5000
.\kubectl expose deployment simple-sswebapi-pod-v1 --type=NodePort --name=simple-sswebapi-service
.\kubectl get services simple-sswebapi-service
.\minikube service simple-sswebapi-service --url 

 

So now that we have done all that, let’s check our services are there

image

 

Ok, so now let's use the busybox trick to check the DNS name for the running MySql instance, so that we can provide it to the new REST API endpoint.

 

.\kubectl run -i --tty busybox --image=busybox --restart=Never

Then we can do something like

nslookup mysql

Which should give us something like this

image

Ok, so putting this all together, we should be able to hit our modified REST API endpoint to test this MySql instance using this DNS name. Let's see that.

This is what we see using Postman

image

Woohoo, we get a nice 200 OK (that's nice), and this is the full JSON response we got

{
    "results": [
        "Name = 'columns_priv', Schema = 'mysql'",
        "Name = 'db', Schema = 'mysql'",
        "Name = 'event', Schema = 'mysql'",
        "Name = 'func', Schema = 'mysql'",
        "Name = 'general_log', Schema = 'mysql'",
        "Name = 'help_category', Schema = 'mysql'",
        "Name = 'help_keyword', Schema = 'mysql'",
        "Name = 'help_relation', Schema = 'mysql'",
        "Name = 'help_topic', Schema = 'mysql'",
        "Name = 'innodb_index_stats', Schema = 'mysql'",
        "Name = 'innodb_table_stats', Schema = 'mysql'",
        "Name = 'ndb_binlog_index', Schema = 'mysql'",
        "Name = 'plugin', Schema = 'mysql'",
        "Name = 'proc', Schema = 'mysql'",
        "Name = 'procs_priv', Schema = 'mysql'",
        "Name = 'proxies_priv', Schema = 'mysql'",
        "Name = 'servers', Schema = 'mysql'",
        "Name = 'slave_master_info', Schema = 'mysql'",
        "Name = 'slave_relay_log_info', Schema = 'mysql'",
        "Name = 'slave_worker_info', Schema = 'mysql'",
        "Name = 'slow_log', Schema = 'mysql'",
        "Name = 'tables_priv', Schema = 'mysql'",
        "Name = 'time_zone', Schema = 'mysql'",
        "Name = 'time_zone_leap_second', Schema = 'mysql'",
        "Name = 'time_zone_name', Schema = 'mysql'",
        "Name = 'time_zone_transition', Schema = 'mysql'",
        "Name = 'time_zone_transition_type', Schema = 'mysql'",
        "Name = 'user', Schema = 'mysql'",
        "Name = 'accounts', Schema = 'performance_schema'",
        "Name = 'cond_instances', Schema = 'performance_schema'",
        "Name = 'events_stages_current', Schema = 'performance_schema'",
        "Name = 'events_stages_history', Schema = 'performance_schema'",
        "Name = 'events_stages_history_long', Schema = 'performance_schema'",
        "Name = 'events_stages_summary_by_account_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_stages_summary_by_host_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_stages_summary_by_thread_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_stages_summary_by_user_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_stages_summary_global_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_statements_current', Schema = 'performance_schema'",
        "Name = 'events_statements_history', Schema = 'performance_schema'",
        "Name = 'events_statements_history_long', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_by_account_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_by_digest', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_by_host_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_by_thread_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_by_user_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_statements_summary_global_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_waits_current', Schema = 'performance_schema'",
        "Name = 'events_waits_history', Schema = 'performance_schema'",
        "Name = 'events_waits_history_long', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_by_account_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_by_host_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_by_instance', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_by_thread_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_by_user_by_event_name', Schema = 'performance_schema'",
        "Name = 'events_waits_summary_global_by_event_name', Schema = 'performance_schema'",
        "Name = 'file_instances', Schema = 'performance_schema'",
        "Name = 'file_summary_by_event_name', Schema = 'performance_schema'",
        "Name = 'file_summary_by_instance', Schema = 'performance_schema'",
        "Name = 'host_cache', Schema = 'performance_schema'",
        "Name = 'hosts', Schema = 'performance_schema'",
        "Name = 'mutex_instances', Schema = 'performance_schema'",
        "Name = 'objects_summary_global_by_type', Schema = 'performance_schema'",
        "Name = 'performance_timers', Schema = 'performance_schema'",
        "Name = 'rwlock_instances', Schema = 'performance_schema'",
        "Name = 'session_account_connect_attrs', Schema = 'performance_schema'",
        "Name = 'session_connect_attrs', Schema = 'performance_schema'",
        "Name = 'setup_actors', Schema = 'performance_schema'",
        "Name = 'setup_consumers', Schema = 'performance_schema'",
        "Name = 'setup_instruments', Schema = 'performance_schema'",
        "Name = 'setup_objects', Schema = 'performance_schema'",
        "Name = 'setup_timers', Schema = 'performance_schema'",
        "Name = 'socket_instances', Schema = 'performance_schema'",
        "Name = 'socket_summary_by_event_name', Schema = 'performance_schema'",
        "Name = 'socket_summary_by_instance', Schema = 'performance_schema'",
        "Name = 'table_io_waits_summary_by_index_usage', Schema = 'performance_schema'",
        "Name = 'table_io_waits_summary_by_table', Schema = 'performance_schema'",
        "Name = 'table_lock_waits_summary_by_table', Schema = 'performance_schema'",
        "Name = 'threads', Schema = 'performance_schema'",
        "Name = 'users', Schema = 'performance_schema'"
    ]
}

Very nice, it looks like it's working.

 

ClusterIP:None : A bit better

Whilst the above is very cool, we can go one better, using everything the same as above with the exception of the Service part of the src\mysql-deployment.yaml file, which we would now write like this

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: mysql
  clusterIP: None

See how we are using clusterIP: None. This gives us what is known as a headless service: the Service DNS name resolves directly to the Pod's IP address rather than to a separate cluster IP. This is optimal when you have only one Pod behind a Service and you don't intend to increase the number of Pods. This allows us to hit the MySql instance just like this

image

 

Where you can use the dashboard (or the command line) to find the endpoints

image
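If you prefer the command line to the dashboard, the endpoints behind the headless service can be listed like this:

.\kubectl.exe get endpoints mysql
.\kubectl.exe describe service mysql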

 

That’s pretty cool huh, no crazy long DNS name to deal with; “mysql” just rolls off the tongue a bit easier than “mysql.default.svc.cluster.local” (well, in my opinion it does anyway).

 

Conclusion

Again Kubernetes has proven itself to be more than capable of doing what we want, namely exposing one service to another. We did not need to use the inferior technique of environment variables or anything like that; we were able to just use DNS to resolve the MySql instance. This is way better than links in Docker, and I like it a whole lot more, as I can control my different deployments and they can discover each other using DNS. In Docker (I could be wrong here though) this would all need to be done in a single Docker Compose file.


Kubernetes Part 3 of n, Services

So this will be a shorter post than most of the others in this series; this time we will be covering Services. The rough road map is as follows:

 

  1. What is Kubernetes / Installing Minikube
  2. What are pods/labels, declaring your first pod 
  3. Services (this post)
  4. Singletons (such as a DB)
  5. ConfigMaps/Secrets
  6. LivenessProbe/ReadinessProbe/Scaling Deployments

 

Although this post is not going to be a very long one, do not underestimate the importance of Services in Kubernetes.

 

 

So Just What Is A Service?

So before we go into what a Service is, let's just discuss PODs a bit. A POD is the lowest unit of deployment in Kubernetes. PODs are ephemeral; that is, they are expected to die and may be restarted if they are deemed unhealthy. So with all this happening, how could we really expose an IP address/DNS name of something to another POD if it is in such a state of flux and may be recreated at any time?

 

Services are the answer to this question. Services provide an abstraction over a single POD or a group of PODs that match a label selector. Services, unlike PODs, are supposed to live a VERY long time; their IP address/DNS name and associated environment variables do not disappear or change until the service itself is deleted.

 

Here is an example: we have a requirement to do some image processing which could all be done in parallel, and we don't care which POD picks this work up, providing we can reach a POD. This is something a Service will give you: you specify a label selector so that it can match the PODs carrying that label, and those PODs will then have their endpoints associated with the Service.

 

 

Let's try and visualize this using a diagram or two

 

image

 

image

 

From this diagram we can see that we had 2 deployments, labelled app=A and app=B, and we exposed the PODs that run in these deployments using 2 Services which use the app=A/app=B label selectors. So you can see above that we have ended up with 3 PODs that matched app=B in one Service, and just one POD that matched app=A in the other Service.

 

The cool thing about services is that they are always watching for new PODs, so if you did one of the following the service would end up knowing about it

  • Scale the number of PODs, either using a ReplicationController or a Deployment (the Service would see the new PODs, or know which ones to remove)
  • Change the labels associated with a POD in some way which means it should NOW be included as a POD by the Service, or should be removed from the Service because the labels no longer match the Service's POD selection criteria. This one is particularly powerful: we can have some PODs that are exposed by a Service running just fine, then create a new deployment whose PODs carry a new label, say version=2, and alter the Service selector to only pick up PODs that are NOW labelled version=2. Then, when we are happy, we can remove the old PODs. This is pretty awesome and we will be discussing it more in a future post (a small sketch of such a selector change is shown just after this list)
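As mentioned in the last bullet, here is a small hedged sketch of what such a selector switch might look like (the names and labels are illustrative only, not from the example that goes with this post); only the selector changes, the Service itself stays put:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
    version: "2"    # was "1"; only PODs carrying version=2 are now selected
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000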

 

 

What Did The Example Service Look Like Again?

Let's just remind ourselves of what the Service looked like for the last post.

 

And here is the code : https://github.com/sachabarber/KubernetesExamples

 

And here is the Docker Cloud repo we used last time that we can still use for this post : https://hub.docker.com/r/sachabarber/sswebapp/

 

We used this to create the deployment and expose the service

kubectl run simple-sswebapi-pod-v1 --replicas=1 --labels="run=sswebapi-pod-v1" --image=sachabarber/sswebapp:v1  --port=5000
kubectl expose deployment simple-sswebapi-pod-v1 --type=NodePort --name=simple-sswebapi-service

 

or via YAML

 

Deployment

kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post2_SimpleServiceStackPod\sswebapp\deployment.yaml

 

Where the YAML looks like this, note those labels

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: simple-sswebapi-pod-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: sswebapi-pod-v1
    spec:
      containers:
      - name: sswebapi-pod-v1
        image: sachabarber/sswebapp:v1
        ports:
        - containerPort: 5000

 

 

Service

kubectl.exe apply -f C:\Users\sacha\Desktop\KubernetesExamples\Post2_SimpleServiceStackPod\sswebapp\service.yaml

 

Where the YAML looks like this, note the selector, see how it matches the POD labels section

 

apiVersion: v1
kind: Service
metadata:
  name: simple-sswebapi-service
spec:
  selector:
    run: sswebapi-pod-v1
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
  type: NodePort

 

In the above example I am using type: NodePort. This means that this service will be exposed on each node in the Kubernetes cluster. We can get the bits of information we want using these command lines.

 

Grab the single node IP address for Minikube cluster

.\minikube.exe ip

 

Which gives us this

image

 

Grab the port that the service is exposed on (the NodePort)

.\kubectl describe service simple-sswebapi-service

 

Which gives us this

 

image

 

So we can now try and hit the service on the port it exposes, like so

 

image

 

That is one Service type; we will talk about them all below in more detail, but this proves it's working.

Types Of Service

For some parts of your application (e.g. frontends) you may want to expose a Service onto an external (outside of your cluster) IP address.

Kubernetes ServiceTypes allow you to specify what kind of service you want. The default is ClusterIP.

Type values and their behaviors are:

 

  • ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
  • NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
  • LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
  • ExternalName: Maps the service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. This requires version 1.7 or higher of kube-dns.

 

Taken from https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types up on date 05/03/18
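NodePort is what we are using in this post, and ClusterIP is the default we get if we say nothing. Just to illustrate the last type in the list above, a minimal ExternalName Service might look something like this (the names here are made up for illustration only):

apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  type: ExternalName
  externalName: foo.bar.example.com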

 

Environment Variables

 

When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It supports both Docker links compatible variables (see makeLinkVariables) and simpler {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores.

For example, the Service “redis-master” which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11 produces the following environment variables:

REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11

 

This does imply an ordering requirement – any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this restriction

 

Taken from https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables up on date 02/03/18

 

Let's try this for the code that goes with this post. I am assuming that you have recreated the PODs/Deployment after the service was created (as the environment variables can only be injected into PODs AFTER the service is created, as just stated above). This may mean, for this post, that we delete the deployment we initially created and create it again once the service is up and running (one way to do that is shown below).
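One way to do that, reusing the same commands from earlier in this post, would be something like this:

.\kubectl delete deployment simple-sswebapi-pod-v1
.\kubectl run simple-sswebapi-pod-v1 --replicas=1 --labels="run=sswebapi-pod-v1" --image=sachabarber/sswebapp:v1  --port=5000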

 

We can use this command line to get ALL the pods

 

.\kubectl.exe get pods

 

Which will then give us something like this for this POD we have used so far

 

image

 

So now let's see what happens when we examine this POD for its environment variables (remember, this is after the service has been created for the deployment, and I have recreated the deployment once or twice to ensure that the environment variables ARE NOW available).

 

.\kubectl.exe exec simple-sswebapi-pod-v1-f7f8764b9-xs822 env

Which for this POD (so far in this post we have only ever asked for 1 replica, so we have only ever had this single POD) gives us the following environment variables, which we could use in our code.

 

image

You can see that we ran this command in the context of the demo POD that we are working with, yet that POD can see environment variables from all the different services that are currently available (or were running when the demo POD was created, at any rate).

 

Obviously we would have to deal with the fact that these are NOT available to PODs UNTIL the service is created. These env variables may serve you well for, say, having a service over one set of PODs and then having another service/PODs that might want to use the 1st service; that would work, as the 1st service would hopefully be running before we start the 2nd service/PODs (a small sketch of reading one of these variables from code is shown below).
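As a small sketch of what consuming these from .NET code might look like: the variable names below simply follow the {SVCNAME}_SERVICE_HOST / {SVCNAME}_SERVICE_PORT convention quoted above for our simple-sswebapi-service, and assume that service already existed when the POD started.

using System;

public static class ServiceEnvironmentExample
{
    public static void PrintServiceAddress()
    {
        // Kubernetes injects {SVCNAME}_SERVICE_HOST / {SVCNAME}_SERVICE_PORT for each Service
        // that existed when this POD was created (name upper-cased, dashes become underscores)
        var host = Environment.GetEnvironmentVariable("SIMPLE_SSWEBAPI_SERVICE_SERVICE_HOST");
        var port = Environment.GetEnvironmentVariable("SIMPLE_SSWEBAPI_SERVICE_SERVICE_PORT");

        if (host == null || port == null)
        {
            // The Service did not exist when this POD started, so the variables are missing
            Console.WriteLine("simple-sswebapi-service environment variables not found");
            return;
        }

        Console.WriteLine($"simple-sswebapi-service is reachable at {host}:{port}");
    }
}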

 

Nonetheless, DNS is a better option. We will look at that next.

 

 

DNS

An optional (though strongly recommended) cluster add-on is a DNS server. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all Pods should be able to do name resolution of Services automatically.

For example, if you have a Service called “my-service” in Kubernetes Namespace “my-ns” a DNS record for “my-service.my-ns” is created. Pods which exist in the “my-ns” Namespace should be able to find it by simply doing a name lookup for “my-service“. Pods which exist in other Namespaces must qualify the name as “my-service.my-ns“. The result of these name lookups is the cluster IP.

Kubernetes also supports DNS SRV (service) records for named ports. If the “my-service.my-ns” Service has a port named “http” with protocol TCP, you can do a DNS SRV query for “_http._tcp.my-service.my-ns” to discover the port number for “http”.

 

The Kubernetes DNS server is the only way to access services of type ExternalName.

 

Taken from https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables up on date 02/03/18

 

So above, where we were talking about environment variables, we saw that when you create a new service and have some PODs created afterwards, the service will inject environment variables into those PODs. However, we also saw that the PODs need to be started after the service in order to receive the correct environment variables. This is a bit of a limitation, which is solved by DNS.

 

Kubernetes runs a special DNS POD called kube-dns; this is made available thanks to a Kubernetes addon called DNS.

 

You can check that the addon is installed like this

.\minikube addons list

Which should show something like this

image

 

So now that we know we have the DNS addon installed, how do we see if we have the kube-dns pod running? Well, we can simply do this

.\kubectl get pod -n kube-system

Which should show something like this

image

So now that we have the DNS stuff and we know it's running, just how do we use it? Essentially what we would like to do is confirm that DNS lookups work within the cluster. Ideally we would like to run a DNS nslookup command directly in the demo container. The demo container in this case is a simple .NET Core REST API, so it won't have anything more than that on board. At first I could not see how to do this, but I asked a few dumb questions on Stack Overflow, and then it came to me: all the demos of using DNS nslookup in Kubernetes that I had seen seemed to use busybox.

 

What is BusyBox you ask?

Coming in somewhere between 1 and 5 Mb in on-disk size (depending on the variant), BusyBox is a very good ingredient to craft space-efficient distributions.

BusyBox combines tiny versions of many common UNIX utilities into a single small executable. It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc. The utilities in BusyBox generally have fewer options than their full-featured GNU cousins; however, the options that are included provide the expected functionality and behave very much like their GNU counterparts. BusyBox provides a fairly complete environment for any small or embedded system.

 

Taken from https://hub.docker.com/_/busybox/ up on date 02/03/18

 

Ok, great, how does that help us? Well, what we can now do is just run the BusyBox Docker Hub image as a POD and use that POD to run our nslookup inside of, to see if DNS lookups within the cluster are working.

 

Here is how to run the BusyBox Docker Hub image as a POD

.\kubectl run -i --tty busybox --image=busybox --restart=Never

This will run and give us a command prompt; then we can do this to test the DNS for the demo service that goes with this post, like so

nslookup simple-sswebapi-service

Which gives us this

image

 

Cool, so we should be able to use these DNS names or IP addresses from within the cluster. Nice (a small sketch of calling the service by its DNS name from another POD is shown below).
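As a final hedged sketch (this is not code from the repo), this is roughly what calling our demo service by its DNS name from another POD in the cluster could look like, using the /hello/{Name} route and port 5000 that the service exposes:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class DnsLookupExample
{
    public static async Task CallHelloAsync()
    {
        using (var client = new HttpClient())
        {
            // Within the same namespace the short name works; from another namespace
            // we would need simple-sswebapi-service.<namespace> instead
            var response = await client.GetStringAsync(
                "http://simple-sswebapi-service:5000/hello/World?format=json");
            Console.WriteLine(response);
        }
    }
}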

 

Conclusion

So there you go, this was a shorter post than some of the others will be, but I hope you can see just why you need the Service abstraction and why it is such a useful construct. Services have one more trick up their sleeve, which we will be looking at in the next post.