C#

Setting Up GrayLog For Use With NLog

Introduction

At work at the moment we have a number of Microservices which we are slowly trying to transition to containers, where we will likely use Kubernetes to run the containers. Right now our logging framework of choice is NLog, where we typically just log to a file.

Now this works fine when you are on premise or have a dedicated share set up, but you will need to do more work when you move to containers.

For example, if you just log to some path relative to the Microservice, you will find that when you run it in a container the logs will be wiped out if your container dies, as it's effectively just logging into the container's own storage. There are of course answers to this, such as

  • Volumes/Mounts in Docker/K8S, where we could mount volumes and use these to log to from inside the container

Or you could use a more radical solution such as

  • Mounting a share where all your files will be written to, but this is also really just a volume based solution as far as Docker/K8S is concerned

But there is a whole other category of solution that we could consider, which are dedicated logging solutions, where the main players are really

  • ELK (elastic, logstash, kibana)
  • EFK (elastic, fluentD, kibana)
  • Graylog

These logging solutions typically all use elastic, and come with certain ingestors, or input adaptors, that allow the log data to flow into elastic where it is indexed, and made available to either Kibana (ELK/EFK stacks) or Graylog if you use the Graylog stack.

Both Kibana and GrayLog offer a very nice UI which allows you to build up nice dashboards and conduct searches over the indexed data, and also include structural tag searches which are made available by the logger itself. NLog, for example, supports structured tag logging.

So that is the overview. In this post I will show you how to quickly set up a .NET Core 3.1 application using NLog talking to GrayLog; in a subsequent post I will also demonstrate the EFK stack. You might ask why not the ELK stack; well, quite simply K8S has very good support for just piping the console output through FluentD, so it's a good choice when working with K8S

NLog

Ok so lets start with creating a dead simple .NET Core 3.1 Console app, and add the following NuGet package : NLog.Gelf (I used 1.1.4)

Then you will need to add an NLog.config file, this is what mine looks like

<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <add assembly="NLog.Gelf" />
  </extensions>

  <targets>
    <target name="console" xsi:type="Console" layout="${date:format=dd/MM/yyyy HH\:mm\:ss.fff} | ${level:uppercase=true} | ${message}${exception:format=ToString}" />
    <target name="Gelf" type="GelfHttp" serverUrl="http://localhost:12201/gelf" facility="sachas app"/>
  </targets>

  <rules>
    <logger name="*" minLevel="Trace" appendTo="Gelf, console"/>
  </rules>
</nlog>

The key thing to note there is that we use the special Gelf target and point it at the endpoint http://localhost:12201/gelf, which is how log entries will be sent to GrayLog.

Then finally you will need to write some code that actually produces some logging output

using System;
using System.Threading;
using NLog;

namespace NLogGrayLogDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            var logger = LogManager.GetCurrentClassLogger();
            int counter = 0;

            // Log a message every 5 seconds, forever
            while (true)
            {
                logger.Debug($"YO its nice, This one is from NLog Gelf logging, index = {counter++}");
                Thread.Sleep(5000);
            }
        }
    }
}

Ok so now that we have a simple .NET Core app, we need an instance of GrayLog to test things with.

GrayLog

GrayLog also uses Elastic as its search/indexer, and it also stores some data in Mongo. As such, to have a working GrayLog instance you will need Mongo, Elastic and GrayLog itself

Luckily our friends at GrayLog have made this very simple with their support for Docker, we can pretty much follow this quickstart : https://docs.graylog.org/en/3.3/pages/installation/docker.html

One thing to note though is that if you want to actually persist the data you will need to use the Docker Compose file as shown here : https://docs.graylog.org/en/3.3/pages/installation/docker.html#persisting-data. For this demo however I will just be using simple docker run commands, and as such the data in GrayLog/Mongo and Elastic will be ephemeral and will be lost should the containers be restarted.
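For reference, the persistence side of that Compose file boils down to attaching named volumes to each service's data directory. This is just a sketch (the volume names are illustrative, and the container paths are the usual data directories for these images; see the linked docs for the full, authoritative file):

```yaml
version: '2'
services:
  mongo:
    image: mongo:3
    volumes:
      - mongo_data:/data/db
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.10
    environment:
      - http.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - es_data:/usr/share/elasticsearch/data
  graylog:
    image: graylog/graylog:3.3
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal
volumes:
  mongo_data:
  es_data:
  graylog_journal:
```

With named volumes in place, restarting or recreating the containers no longer wipes out the indexed log data.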

 

So lets get started, first thing you will need to do is ensure you are using Linux Containers, so make sure your Docker Desktop is using Linux containers, then issue these commands

docker run --name mongo -d mongo:3
docker run --name elasticsearch -e "http.host=0.0.0.0" -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" -d docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.10

This will give you a running Mongo and Elastic, but for GrayLog we need a little bit more work.

Firstly we need to create an Admin password hash, which we can do using WSL on Windows, so from a WSL command line we can issue this command

echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | sha256sum | cut -d" " -f1
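As a sanity check, hashing the literal password "admin" with the same approach produces exactly the SHA2 value used in the docker run command below:

```shell
# sha256 of the literal string "admin" (note: no trailing newline)
printf '%s' 'admin' | sha256sum | cut -d' ' -f1
# -> 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
```

If your hash doesn't match, the most common culprit is accidentally including the trailing newline in the input.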

Where you will enter "admin" (or any password you want to use), which will then be hashed ready for use when creating GrayLog. Grab the value and use it in the next command line, where we set up the following ports

  • 9000 TCP : Graylog Web UI
  • 12201 HTTP : Gelf HTTP Input that will need to be set up inside your GrayLog instance running in docker after you issue the command line below
  • 5555 TCP : Raw/Plaintext TCP Input that will need to be set up inside your GrayLog instance running in docker after you issue the command line below
docker run --name graylog --link mongo --link elasticsearch -p 9000:9000 -p 12201:12201 -p 5555:5555 -e GRAYLOG_HTTP_EXTERNAL_URI="http://127.0.0.1:9000/" -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 -d graylog/graylog:3.3

Here we use the ports we specified above and the admin root password hash we generated above

Ok so you should now have a running GrayLog, so we can

  • open the Graylog Web UI at http://127.0.0.1:9000/ and enter admin and whatever password you generated above
  • Go to System/Inputs menu and add 2 new inputs
    • Gelf Http : 12201
    • Raw/PlainText :  5555

So you now have a running GrayLog, time to test it out

 

Testing using Netcat

As before we can use WSL on Windows, so from a WSL command line we can issue these test commands

Plain text input

echo 'Plain Message1' | nc localhost 5555

Gelf Http input

curl -X POST -H 'Content-Type: application/json' -d '{ "version": "1.1", "host": "example.org", "short_message": "Gelf Message1", "level": 5, "_some_info": "foo" }' 'http://localhost:12201/gelf'

Now if you go back into GrayLog and conduct a search for say “Gelf” you should see the test messages

 

image

Now you should also be able to run the .NET Core app and it should also be able to produce messages straight into GrayLog

 

That is all for this post; as I said, in a subsequent post I will show you how to use the EFK stack too

Uncategorized

Restoring from an Azure Artifacts NuGet feed from inside a Docker Build

So today I saw a lovely Twitter/blog post by Mike Hadlow : Restoring from an Azure Artifacts NuGet feed from inside a Docker Build, which I felt I just had to re-post here.

This is something that has caught me out before, but I solved it in a different (um, crappier) way, so I thought I would link to Mike's blog here, so I can always refer back to this.

It involves restoring from an Azure Devops feed when you are using a Docker container, and I too have read/re-read all the documentation around this, and read about the Nuget credential provider, and was not so into that approach.

Luckily what Mike provides is a simple solution that is also build friendly and will get you where you need to go; check it out. Nice one Mike

Azure, Kafka

Azure Event Hubs With Kafka

So it’s been a while since I wrote a post about Kafka (and Azure too actually, at work we use AWS)

 

But anyway, someone mentioned to me the other day that Azure EventHubs come with the ability to interop with Kafka Producer/Consumer code with very little change. Naturally I could not skip trying that for myself, as I am a big fan of Kafka, and am actually using MSK (AWS's managed streaming for Kafka service) at work right now.

 

There are a couple of good videos on this

And there is also this really good starting project

 

So probably the best place to start is to create yourself a new EventHub in the Azure portal. Once you have done that you will need to grab the following 2 bits of information

  • You will need the connection string from the portal as well as the FQDN that points to your Event Hub namespace. The FQDN can be found within your connection string as follows:

Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=XXXXXX;SharedAccessKey=XXXXXX

  • You will need the namespace portion of the connection string
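Since both values live inside the connection string, you can pull them out with a little shell. This is just a convenience sketch (the sed pattern assumes the standard Endpoint=sb://... format shown above, and 9093 is the Kafka endpoint port):

```shell
conn='Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=XXXXXX;SharedAccessKey=XXXXXX'

# The FQDN is the host part of the Endpoint; Kafka clients talk to it on port 9093
fqdn=$(printf '%s' "$conn" | sed -E 's|Endpoint=sb://([^/]+)/.*|\1|')
echo "${fqdn}:9093"    # -> mynamespace.servicebus.windows.net:9093

# The namespace is the first label of that FQDN
echo "${fqdn%%.*}"     # -> mynamespace
```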

 

Other considerations when creating the EventHub would be

  • Partition count : this indicates the number of parallel consumers that you can have processing the events.
  • Message retention : this is how long you wish the messages to be retained. Event Hubs don't actually delete the messages, but rather expire them after the retention period

 

Getting the starter project

So once you have done that, you can simply grab the code from this repo : https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/quickstart/dotnet

Update the App.Config

Update the values of the EH_FQDN and EH_CONNECTION_STRING in App.config to direct the application to the Event Hubs Kafka endpoint with the correct authentication. Default values for the Event Hub/topic name (test) and consumer group ($Default) have been filled in, but feel free to change those as needed.

So for example this for my hub looks like this

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="EH_FQDN" value="sbkafkaeventhubtest.servicebus.windows.net:9093"/>
    <add key="EH_CONNECTION_STRING" value="Endpoint=sb://sbkafkaeventhubtest.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=DY96clYXl6MJINCbS0yBDN91h7EMDIsV4/a7FHz7ENY="/>
    <add key="EH_NAME" value="test"/>
    <add key="CONSUMER_GROUP" value="$Default"/>
    <add key="CA_CERT_LOCATION" value=".\cacert.pem"/>
  </appSettings>
</configuration>

One important note in there is the CA_CERT_LOCATION, which points to a cacert.pem file which, if you open it, you will see contains all the PUBLIC CA certificates. This can be obtained as a "cache" of CA certificates from the Mozilla project here.

With all this in place what does the code look like?

Coding A Producer

Making sure that you have the Confluent.Kafka NuGet package, a producer is as simple as this

public static async Task Producer(string brokerList, string connStr, string topic, string cacertlocation)
{
	try
	{
		var config = new ProducerConfig
		{
			BootstrapServers = brokerList,
			SecurityProtocol = SecurityProtocol.SaslSsl,
			SaslMechanism = SaslMechanism.Plain,
			SaslUsername = "$ConnectionString",
			SaslPassword = connStr,
			SslCaLocation = cacertlocation,
			//Debug = "security,broker,protocol"        //Uncomment for librdkafka debugging information
		};
		using (var producer = new ProducerBuilder<long, string>(config).SetKeySerializer(Serializers.Int64).SetValueSerializer(Serializers.Utf8).Build())
		{
			Console.WriteLine("Sending 10 messages to topic: " + topic + ", broker(s): " + brokerList);
			for (int x = 0; x < 10; x++)
			{
				var msg = string.Format("Sample message #{0} sent at {1}", x, DateTime.Now.ToString("yyyy-MM-dd_HH:mm:ss.ffff"));
				var deliveryReport = await producer.ProduceAsync(topic, new Message<long, string> { Key = DateTime.UtcNow.Ticks, Value = msg });
				Console.WriteLine(string.Format("Message {0} sent (value: '{1}')", x, msg));
			}
		}
	}
	catch (Exception e)
	{
		Console.WriteLine(string.Format("Exception Occurred - {0}", e.Message));
	}
}

The only changes to the code above versus connecting to a native Kafka broker list are these small ones, which are mandatory when using EventHub, but optional (though still recommended) when using Kafka

  • SecurityProtocol = SecurityProtocol.SaslSsl
  • SaslMechanism = SaslMechanism.Plain
  • SaslUsername = "$ConnectionString"
  • SaslPassword = connStr
  • SslCaLocation = cacertlocation

 

Coding A Consumer

And here is a consumer, where the same small set of changes are required, otherwise the code is exactly how it would be using native Kafka brokers

public static void Consumer(string brokerList, string connStr, string consumergroup, string topic, string cacertlocation)
{
	var config = new ConsumerConfig
	{
		BootstrapServers = brokerList,
		SecurityProtocol = SecurityProtocol.SaslSsl,
		SocketTimeoutMs = 60000,                //this corresponds to the Consumer config `request.timeout.ms`
		SessionTimeoutMs = 30000,
		SaslMechanism = SaslMechanism.Plain,
		SaslUsername = "$ConnectionString",
		SaslPassword = connStr,
		SslCaLocation = cacertlocation,
		GroupId = consumergroup,
		AutoOffsetReset = AutoOffsetReset.Earliest,
		BrokerVersionFallback = "1.0.0",        //Event Hubs for Kafka Ecosystems supports Kafka v1.0+, a fallback to an older API will fail
		//Debug = "security,broker,protocol"    //Uncomment for librdkafka debugging information
	};

	using (var consumer = new ConsumerBuilder<long, string>(config).SetKeyDeserializer(Deserializers.Int64).SetValueDeserializer(Deserializers.Utf8).Build())
	{
		CancellationTokenSource cts = new CancellationTokenSource();
		Console.CancelKeyPress += (_, e) => { e.Cancel = true; cts.Cancel(); };

		consumer.Subscribe(topic);

		Console.WriteLine("Consuming messages from topic: " + topic + ", broker(s): " + brokerList);

		while (true)
		{
			try
			{
				var msg = consumer.Consume(cts.Token);
				Console.WriteLine($"Received: '{msg.Value}'");
			}
			catch (ConsumeException e)
			{
				Console.WriteLine($"Consume error: {e.Error.Reason}");
			}
			catch (Exception e)
			{
				Console.WriteLine($"Error: {e.Message}");
			}
		}
	}
}

So to prove this all works, here is a small screenshot after a few runs, where we can see some monitoring in the Azure portal for the EventHub showing the messages sent

AzureEVentHub

 

Final points

One final point I wanted to touch on: just because the EventHub is able to accept Kafka messages from a Kafka Producer doesn't mean you have to consume them with a Kafka Consumer. You can still have the EventHub act like a regular EventHub for downstream services, but just get its inputs from a Kafka Producer. This is really nice I think

 

And that’s it, I am happy to see how easy this was to do, hope you enjoyed it

docker

Docker container for windows

    It's been a long time since I wrote a post, but this one caught me out a few more times than I care to remember, so I thought it worth a post

    So the context of this article is that we have some legacy code that is NOT .NET Core, and we need to run this code in Docker. That means we have to use a full .NET Framework compatible Docker base image to run .NET 4.7.2

    Sounds simple enough right?

    Well yes and no, did I mention

    • That we are using Kubernetes on AWS (EKS, where we have a cluster of Windows and Linux based AMI EC2 instances)

    • We need to use Windows Containers not Linux ones

    • That all devs have a fixed machine that is not a VM, and it has a specific version of windows, this is important

        Ok lets park that for a minute, lets assume we have a .NET Framework project (lets call it a WebApi project for simplicity) that we want to Dockerize

          Ok cool, so lets craft a docker file, maybe something like this

        FROM mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-1803
        ADD . /app
        WORKDIR /app
        RUN DIR
        ENTRYPOINT "WebApiService.exe"
        

        So when I ran this locally all was great. Then I tried to prepare it for EKS, where we would run it on a shared EKS cluster which has Linux and Windows EC2 instances. And then I got this

        The operating system of the container does not match the operating system of the host

        Hmm, this worked locally, what could be wrong? Lets dig into that

        So if I run this command locally on MY DEV VM from PowerShell

        [System.Environment]::OSVersion.Version
        

        I get this

        Major  Minor  Build  Revision
        -----  -----  -----  --------
        10     0      17134  0

        Hmm ok

        So this is interesting, lets keep digging. Now that we have this, we can check it against the known Docker base images at https://hub.docker.com/_/microsoft-dotnet-framework-runtime/, where you may need to refer to that page to find your version in the list, and then

        That should give you a tag like 1803, which is based on the Major/Build number of your OSVersion; see how that is 10.0.17134, which matches my DEV machine. This is why the docker base image runtime:4.8-windowsservercore-1803 works from my VM

                    1803     multiarch           No Dockerfile    10.0.17134.1305 10/06/2018 05:11:43      02/19/2020 02:35:44

        Which leads to this base image for our Dockerfile `mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-1803`

        OK great, so why wont it work in AWS EKS??? Lets see what’s up with those pesky EC2 instances

        So after remote desktopping onto the EC2 instances (which are Windows VMs after all) I can repeat the PowerShell above and see something like this

        eks windows worker AMI: Windows_Server-2019-English-Core-EKS_Optimized-1.16-2020.05.13 (ami-0fe735a36ec87b442)
        
        
        

        So in order to make this work we need to change the Dockerfile base image to match, which leads me to `mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019`; I basically got to this from the list https://hub.docker.com/_/microsoft-dotnet-framework-runtime/
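The matching we just walked through can be sketched as a small shell helper. The two entries here are the builds from this post (my dev machine's build 17134 maps to tag 1803, and Windows Server 2019, as on the EKS worker AMI, is build 17763, mapping to ltsc2019); Microsoft's tag list has many more:

```shell
# Map a Windows build number to its matching Server Core image tag
tag_for_build() {
  case "$1" in
    17134) echo "1803"     ;;  # Windows 10 1803 - my dev machine
    17763) echo "ltsc2019" ;;  # Windows Server 2019 - the EKS worker AMI
    *)     echo "unknown"  ;;
  esac
}

tag_for_build 17134   # -> 1803
tag_for_build 17763   # -> ltsc2019
```

The point being: the container's base image tag must match the host's build, not your dev machine's.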

        What have we learned??

        We must be VERY careful with Windows Docker images/files and base images when combined with cloud services and VM instances, lest we get burnt. Don't assume the host is the same spec as your dev machine; in fact I don't know why I did

        Uncategorized

        React/Redux demo app using Hooks

        It has been a while since I wrote an article at CodeProject, and since I stopped writing articles I have seen many interesting ones, many by Honey The Code Witch, and I thought it was time I started writing articles again. This one is around React/Redux/TypeScript, which I know there are already lots of articles on. But what I wanted to do is explore using React hooks and Redux hooks. As such this article will be based around a simple WebApi backend and a fairly straightforward React front end that uses Redux and hooks where possible.

        So with that in mind I came up with a simple demo app, which you can read about here : https://www.codeproject.com/Articles/5266424/Demo-app-using-React-Redux-Typescript-and-hooks

         

        C#, CodeProject

        .NET Core/Standard Auto Incrementing Versioning

        Using The AutoGenerated .NET Core/Standard AssemblyInfo.cs

        When you create a new .NET Core/.NET Standard project you will get an auto-generated set of attributes, which is based on these settings for the project

        Which you can see in effect in the obj folder, where an auto-generated xxxxAssemblyInfo.cs class is created for you.

        //------------------------------------------------------------------------------
        // <auto-generated>
        // This code was generated by a tool.
        // Runtime Version:4.0.30319.42000
        //
        // Changes to this file may cause incorrect behavior and will be lost if
        // the code is regenerated.
        // </auto-generated>
        //------------------------------------------------------------------------------
        
        using System;
        
        using System.Reflection;
        
        [assembly: System.Reflection.AssemblyCompanyAttribute("SomeDemoCode.Std")]
        [assembly: System.Reflection.AssemblyConfigurationAttribute("Debug")]
        [assembly: System.Reflection.AssemblyFileVersionAttribute("1.0.0.0")]
        [assembly: System.Reflection.AssemblyInformationalVersionAttribute("1.0.0")]
        [assembly: System.Reflection.AssemblyProductAttribute("SomeDemoCode.Std")]
        [assembly: System.Reflection.AssemblyTitleAttribute("SomeDemoCode.Std")]
        [assembly: System.Reflection.AssemblyVersionAttribute("1.0.0.0")]
        
        // Generated by the MSBuild WriteCodeFragment class.
        

        So when we build the project we would end up with this version being applied to the resulting output for the project

        Ok that explains how the AssemblyInfo stuff works in .NET Core/Standard, but what is the .NET Core/Standard way of doing auto-incrementing versions?

        Well believe it or not there is not a native feature for this, there are various efforts/attempts at this which you can read more about

        · https://stackoverflow.com/questions/43019832/auto-versioning-in-visual-studio-2017-net-core

        · https://andrewlock.net/version-vs-versionsuffix-vs-packageversion-what-do-they-all-mean/

        After reading all of those, the best approach seemed to be to stick with the auto-generated AssemblyInfo stuff, but to come up with some scheme that would aid in the generation of the assembly version at build time.

        What that means is you want to ensure your .csproj file looks something like this, where it can be seen that some of the .NET Core/Standard auto generated AssemblyInfo stuff is made available directly in the project file

        <Project Sdk="Microsoft.NET.Sdk">
        
        <PropertyGroup>
          <TargetFramework>netstandard2.0</TargetFramework>
          <Platforms>x64</Platforms>
        </PropertyGroup>
        
        <PropertyGroup>
          <DefineConstants>STD</DefineConstants>
          <PlatformTarget>x64</PlatformTarget>
          <AssemblyName>SomeDemoCode.Std</AssemblyName>
          <RootNamespace>SomeDemoCode.Std</RootNamespace>
          <VersionSuffix>1.0.0.$([System.DateTime]::UtcNow.ToString(mmff))</VersionSuffix>
          <AssemblyVersion Condition=" '$(VersionSuffix)' == '' ">0.0.0.1</AssemblyVersion>
          <AssemblyVersion Condition=" '$(VersionSuffix)' != '' ">$(VersionSuffix)</AssemblyVersion>
          <Version Condition=" '$(VersionSuffix)' == '' ">0.0.1.0</Version>
          <Version Condition=" '$(VersionSuffix)' != '' ">$(VersionSuffix)</Version>
          <Company>SAS</Company>
          <Authors>SAS</Authors>
          <Copyright>Copyright © SAS 2020</Copyright>
          <Product>Demo 1.0</Product>
        </PropertyGroup>
        
        </Project>
        

        With this in place you will get an auto-versioned assembly using just the .NET Core/Standard approach

        But what about InternalsVisibleTo?

        Quite often we want to expose our .NET Standard projects' internals to test projects. But if .NET Core/Standard projects auto-generate the AssemblyInfo based on either defaults or properties in the actual .csproj file, how do we add an InternalsVisibleTo? This doesn't seem to be covered by the auto-generated AssemblyInfo that gets created for a .NET Core/Standard project, nor does it seem to be available as a csproj-level MSBuild property. So how do we do this?

        Luckily this is quite simple: we just need the following in a custom file, which you can call anything you want; you can even call it "AssemblyInfo.cs" if you like

        using System.Runtime.CompilerServices;
        
        [assembly: InternalsVisibleTo("SomeDemoCode.IntegrationTests")]
        

        Opting Out Of the Auto Assembly Info Generation Process And Using Our Own Auto Increment Scheme

        If you want to use the .NET Framework approach to auto versioning, this is normally done with a wildcard in the AssemblyVersion attribute

        [assembly: AssemblyVersion("1.0.*")]
        

        So you might think, hmm, I can just override this by adding my own AssemblyInfo.cs. But this will not work; you will get this when you build

        > Error CS0579: Duplicate 'System.Reflection.AssemblyFileVersionAttribute' attribute
        > Error CS0579: Duplicate 'System.Reflection.AssemblyInformationalVersionAttribute' attribute
        > Error CS0579: Duplicate 'System.Reflection.AssemblyVersionAttribute' attribute
        

        Luckily we can opt out of this auto generation process, where we have to add the following to our .NET Core/.NET standard csproj file

        <PropertyGroup>
          <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
          <Deterministic>false</Deterministic>
        </PropertyGroup>
        

        You need the deterministic property otherwise you will get this error

        Wildcards are only allowed if the build is not deterministic, which is the default for .Net Core projects. Adding <Deterministic>False</Deterministic> to the csproj fixes the issue.
        

        With this in place we can now include a custom AssemblyInfo.cs which could, for example, use an auto-incrementing version number, where we use a wildcard when specifying the AssemblyVersion

        using System.Reflection;
        
        using System.Runtime.InteropServices;
        
        // General Information about an assembly is controlled through the following
        // set of attributes. Change these attribute values to modify the information
        // associated with an assembly.
        [assembly: AssemblyTitle("SomeDemoCode.Std")]
        [assembly: AssemblyDescription("SomeDemoCode .NET Standard")]
        [assembly: AssemblyConfiguration("")]
        [assembly: AssemblyCompany("SAS")]
        [assembly: AssemblyProduct("SAS 1.0")]
        [assembly: AssemblyCopyright("Copyright © SAS 2020")]
        [assembly: AssemblyTrademark("")]
        [assembly: AssemblyCulture("")]
         
        
        // Setting ComVisible to false makes the types in this assembly not visible
        // to COM components. If you need to access a type in this assembly from
        // COM, set the ComVisible attribute to true on that type.
        [assembly: ComVisible(false)]
        
        // The following GUID is for the ID of the typelib if this project is exposed to COM
        [assembly: Guid("CA7543D7-0F0F-4B48-9398-2712098E9324")]
         
        
        // Version information for an assembly consists of the following four values:
        //
        // Major Version
        // Minor Version
        // Build Number
        // Revision
        // You can specify all the values or you can default the Build and Revision Numbers
        // by using the '*' as shown below:
        // [assembly: AssemblyVersion("1.0.*")]
        [assembly: AssemblyVersion("1.0.*")]
        

         

        Now when we build you will get some nice auto incrementing assembly versions inside your .NET Core/Standard projects

        CodeProject

        Setting up Prometheus and Grafana Monitoring

        In this post I want to talk about how to get started with some very nice monitoring tools, namely Prometheus and Grafana

        At the job I was at previously we used these tools a lot. They served as the backbone of our monitoring for the system overall, and we were chucking a lot of metrics into them; without these tools I think it's fair to say we would not have had such good insight into where our software had performance issues/bottlenecks. These 2 tools really did help us a lot.

        But what exactly are these tools?

        Prometheus

        Prometheus is the metrics capture engine, and comes with an inbuilt query language known as PromQL. Prometheus is set up to scrape metrics from your own apps; it does this by way of a config file in which you list your chosen applications. You can read the getting started guide here : https://prometheus.io/docs/prometheus/latest/getting_started/ and read more about the queries here : https://prometheus.io/docs/prometheus/latest/querying/basics/

         

        As I just said, you configure Prometheus to scrape your apps using a config file. This file is called prometheus.yml; a small example is shown below. This example is set to scrape Prometheus' own metrics and also another app which is running on port 9000. I will show you that app later on.

        # my global config
        global:
          scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
          evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
          # scrape_timeout is set to the global default (10s).
        
        # Alertmanager configuration
        alerting:
          alertmanagers:
          - static_configs:
            - targets:
              # - alertmanager:9093
        
        # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
        rule_files:
          # - "first_rules.yml"
          # - "second_rules.yml"
        
        # A scrape configuration containing exactly one endpoint to scrape:
        # Here it's Prometheus itself.
        scrape_configs:
          # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
          - job_name: 'prometheus'
        
            # metrics_path defaults to '/metrics'
            # scheme defaults to 'http'.
        
            static_configs:
            - targets: ['localhost:9090','localhost:9000']
        

        So lets assume you have something like this in place. You should then be able to launch Prometheus using a command like prometheus.exe --config.file=prometheus.yml, and then navigate to the following url to test out the Prometheus setup : http://localhost:9090/

        This should show you something like this

        image

        From here you can also go to this url, which will show you some of the inbuilt Prometheus metrics : http://localhost:9090/metrics. Taking one of these, say go_memstats_alloc_bytes, we could go back to http://localhost:9090/ and build a simple graph

         

        image

         

        So once you have verified this bit looks ok. We can then turn our attention to getting Grafana to work.

        Grafana

        As we just saw Prometheus comes with fairly crude graphing. However Grafana offers a richer way to setup graphs, and it also comes with inbuilt support for using Prometheus as a data source. To get started you can download from here  : https://grafana.com/docs/grafana/latest/guides/getting_started/. And to start Grafana you just need to launch the bin\grafana-server.exe, but make sure you also have Prometheus running as shown in the previous step. Once you have both Prometheus and Grafana running, we can launch the Grafana UI from http://localhost:3000/

         

        Then what we can do is add Prometheus as a data source into Grafana, which can be done as follows:

        image

        image

        image

        So once you have done that we can turn our attention into adding a new Graph, this would be done using the “Add Query” button below.

        image

        If we stick with the example inbuilt metrics that come with Prometheus we can use the go_memstats_alloc_bytes one, add a new Panel and use the "Add Query" button above, where we can enter the following metric: go_memstats_alloc_bytes{instance="localhost:9090",job="prometheus"}. Once configured we should see a nice little graph like this

        image
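        The query box accepts any PromQL expression, not just a bare metric name. A couple of slightly richer examples against the inbuilt metrics (illustrative only; inbuilt metric names can vary between Prometheus versions):

```
# Average of the gauge over the last 5 minutes
avg_over_time(go_memstats_alloc_bytes[5m])

# Per-second HTTP request rate to Prometheus itself, grouped by response code
sum by (code) (rate(prometheus_http_requests_total[5m]))
```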

        I won’t go into every option of how to create graphs in Grafana, but one thing I can tell you is very, very useful is the ability to grab one graph’s JSON representation and use it to create other graphs. You can also duplicate a graph, which is equally useful. Both of these can be done using the dropdown at the top of the graph in question

        image

        This will save you a lot of time when you are trying to create your own panels.

        Labels

        I also just wanted to spend a bit of time talking about labels in Grafana and Prometheus. Labels allow you to group items together on one chart but distinguish them in some way based on the label value. For example, suppose we wanted to monitor the number of GETs vs PUTs through some API: we could have a single metric for this and apply a label value to each. We will see how we can do this using our own .NET code in the next section, but for now this is the sort of thing you can get with labels

        image

        This one was setup like this in Grafana

        image

        We will see how we did this in the next section.

        Custom Metrics Inside Your Own .NET App

        To use Prometheus in your own .NET code is actually fairly simple thanks to Prometheus.NET, which is a nice Nuget package that you can read more about at the link just provided. Once you have the Nuget installed, it’s simply a matter of setting up the metrics you want and adding your app to the list of apps that are scraped by Prometheus, which we saw above in the Yaml file at the start of this post. Essentially Prometheus.NET will expose a webserver at the port you choose, which you can then configure to be scraped.
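        The scrape entry for our own app is just another job in prometheus.yml. Assuming the demo app below serves its metrics on localhost:9000 (Prometheus.NET exposes them on the /metrics path by default), the addition would be something along these lines (the job name is an assumption):

```yaml
# Extra scrape job for the demo app's metric server on port 9000
scrape_configs:
  - job_name: 'sampleapp'
    static_configs:
      - targets: ['localhost:9000']
```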

        Let’s see an example of configuring a metric. We will use a Gauge type, which is a metric that can go up and down in value. We will also make use of labels, which is what is driving the chart shown above.

        using Prometheus;
        using System;
        using System.Threading;
        
        namespace PrometheusDemo
        {
            class Program
            {
        
                private static readonly Gauge _gauge =
                    Metrics.CreateGauge("sampleapp_ticks_total", 
                        "Just keeps on ticking",new string[1] { "operation" });
                private static double _gaugeValue = 0;
                private static Random _rand = new Random();
        
                static void Main(string[] args)
                {
                    var server = new MetricServer(hostname: "localhost", port: 9000);
                    server.Start();
                    while (true)
                    {
                    if(_gaugeValue > 100)
                    {
                        _gaugeValue = 0;
                        // Reset both labelled series; once label names are declared,
                        // values must be set via a labelled instance
                        _gauge.WithLabels("PUT").Set(0);
                        _gauge.WithLabels("GET").Set(0);
                    }
                        _gaugeValue = _gaugeValue + 1;
                        if(_rand.NextDouble() > 0.5)
                        {
                            _gauge.WithLabels("PUT").Set(_gaugeValue);
                        }
                        else
                        {
                            _gauge.WithLabels("GET").Set(_gaugeValue);
                        }
        
                    Console.WriteLine($"Setting gauge value to {_gaugeValue}");
                        Thread.Sleep(TimeSpan.FromSeconds(1));
                    }
        
                }
            }
        }
        

        That is the entire listing, and it is enough to expose the single metric sampleapp_ticks_total with one label, operation, which takes 2 values

        • PUT
        • GET

        These are then used to display 2 individual line graphs for the same metric inside of Prometheus
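        If you browse to http://localhost:9000/metrics while the app is running, you should see the gauge in Prometheus’ text exposition format, with one series per label value (the sample values here are illustrative):

```
# HELP sampleapp_ticks_total Just keeps on ticking
# TYPE sampleapp_ticks_total gauge
sampleapp_ticks_total{operation="GET"} 42
sampleapp_ticks_total{operation="PUT"} 17
```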

        Conclusion

        And that is all there is to it. Obviously for more complex queries you will need to dig into the Prometheus PromQL query syntax, but this gets you started. The other thing I have not shown here is that Grafana is also capable of creating other types of displays, such as these

        image


        Elasticsearch

        So most people would have probably heard of Elasticsearch by now. So what exactly is Elasticsearch?

        Elasticsearch is a distributed, open source search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. Elasticsearch is built on Apache Lucene and was first released in 2010 by Elasticsearch N.V. (now known as Elastic). Known for its simple REST APIs, distributed nature, speed, and scalability, Elasticsearch is the central component of the Elastic Stack, a set of open source tools for data ingestion, enrichment, storage, analysis, and visualization. Commonly referred to as the ELK Stack (after Elasticsearch, Logstash, and Kibana), the Elastic Stack now includes a rich collection of lightweight shipping agents known as Beats for sending data to Elasticsearch.

        https://www.elastic.co/what-is/elasticsearch

        Essentially it is a great tool for analysing data stored within indexes inside a NoSQL-type database that is clustered/sharded and fault tolerant. As the blurb above states, it is built on top of Lucene. For those that are interested, I wrote a small article in the past on using Lucene.Net: https://www.codeproject.com/Articles/609980/Small-Lucene-NET-Demo-App

         

        Anyway, this post will talk you through downloading Elasticsearch for Windows, and will show you how to use the high level C# client called NEST.

         

        We will be learning how to do the following things:

        • Create and index new documents
        • Search for documents
        • Update documents
        • Delete documents

        So let’s carry on and learn how we can download Elasticsearch.

        Download

        You can download it from here: https://www.elastic.co/downloads/elasticsearch. For my setup (Windows), once downloaded we can simply open the bin folder from the download and use the BAT file shown in the image below to start it.

         

        image

        Once you run that BAT file and wait a while, you should see something like this appear

        image

        Demo

        For this set of demos I am using Visual Studio 2019 (Community), and have installed the following Nuget package for Elasticsearch:

        <PackageReference Include="Elasticsearch.Net" Version="7.5.1" />
        <PackageReference Include="NEST" Version="7.5.1" />
        

        So with those in place, let’s proceed to the meat of this post: how do we do the things we said we would do at the start? As mentioned, this demo will use the high level Elasticsearch .NET client NEST, which you can read more about here: https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/nest.html

        Indexing documents

        The first step is to get some data into Elasticsearch, so we need to craft some data and also index it. Elastic is clever enough to infer the data/field types that should be used when it indexes, but you can override this should you want to. Let’s see an example.

        We will use this class (ONLY) during this demo to do all our operations with

        using System;

        namespace ElasticDemoApp_CSharp.Models
        {
            public class Person
            {
                public string Id { get; set; }
                public string FirstName { get; set; }
                public string LastName { get; set; }
                public bool IsManager { get; set; }
                public DateTime StartedOn { get; set; }
            }
        }
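        As an aside, if you did want to override the inferred mapping for this class, you would create the index with explicit mappings before indexing anything. In Elasticsearch’s REST console syntax that looks roughly like this (field names assume NEST’s default camelCase serialization):

```
PUT /people
{
  "mappings": {
    "properties": {
      "firstName": { "type": "text" },
      "lastName":  { "type": "text" },
      "isManager": { "type": "boolean" },
      "startedOn": { "type": "date" }
    }
  }
}
```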
        

        It can be seen that there is an Id field in that POCO object. This field is fairly important, and we will see why later. Let’s see how we can get some data in.

        var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
            .DefaultIndex("people");
        
        var client = new ElasticClient(settings);
        
        //CREATE
        var person = new Person
        {
            Id = "1",
            FirstName = "Tom",
            LastName = "Laarman",
            StartedOn = new DateTime(2016, 1, 1)
        };
        
        var people = new[]
        {
            new Person
            {
                Id = "2",
                FirstName = "Tom",
                LastName = "Pand",
                StartedOn = new DateTime(2017, 1, 1)
            },
            new Person
            {
                Id = "3",
                FirstName = "Tom",
                LastName = "grand",
                StartedOn = new DateTime(2017, 5, 4)
            }
        };
        
        client.IndexDocument(person);
        client.IndexMany(people);
        
        var manager1 = new Person
        {
            Id = "4",
            FirstName = "Tom",
            LastName = "Foo",
            StartedOn = new DateTime(2017, 1, 1)
        };
        
        client.Index(manager1, i => i.Index("managerpeople"));
        

        The code above shows you how to create the initial client, how to insert a single document, and how to insert many documents. Elastic has a few ways of doing the same thing, so it’s up to you which API syntax you prefer, but the examples above largely do the same thing: they get data into Elastic at certain indexes. You can read more about indexing here: https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/indexing-documents.html
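        Under the covers, IndexDocument uses the Id property as the document id and calls Elasticsearch’s index API. Roughly, indexing the first person issues something like this (shown in console syntax; camelCase field names per NEST’s defaults):

```
PUT /people/_doc/1
{
  "id": "1",
  "firstName": "Tom",
  "lastName": "Laarman",
  "isManager": false,
  "startedOn": "2016-01-01T00:00:00"
}
```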

        Query

        So now that we have some data in, we may want to search for it. Elastic comes with a rich query API, which you can read about here: https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/search.html

        So here is an example of querying the data we just stored in Elastic. Note the use of the “&&” operator to form complex queries, which you can read about here: https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/bool-queries.html#binary-and-operator. It’s worth getting to know these operators, as they will make your queries more readable.

        //SEARCH
        var searchResponse = client.Search<Person>(s => s
            .From(0)
            .Size(10)
            .AllIndices()
            .Query(q =>
                    q.Match(m => m
                    .Field(f => f.FirstName)
                    .Query("Tom")
                    ) &&
                    q.DateRange(r => r
                    .Field(f => f.StartedOn)
                    .GreaterThanOrEquals(new DateTime(2017, 1, 1))
                    .LessThan(new DateTime(2018, 1, 1))
                    )
            )
        );
        
        var matches = searchResponse.Documents;
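        For reference, NEST translates the “&&” of the two query parts into a bool query with two must clauses; the JSON sent to Elasticsearch is roughly:

```
GET /_all/_search
{
  "from": 0,
  "size": 10,
  "query": {
    "bool": {
      "must": [
        { "match": { "firstName": { "query": "Tom" } } },
        { "range": { "startedOn": { "gte": "2017-01-01T00:00:00", "lt": "2018-01-01T00:00:00" } } }
      ]
    }
  }
}
```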
        

        Update

        So now that we have some data and we can search it, let’s turn our hand to updating it. Here are a few examples, where I mix in some queries to check the updated data.

        //UPDATE 
        
        //update the "Tom" person (Id 1) in the "people" index
        person.FirstName = "Tim";
        client.UpdateAsync(new DocumentPath<Person>(person.Id),
            u => u.Index("people")
            .DocAsUpsert(true)
            .Doc(person)
            .Refresh(Elasticsearch.Net.Refresh.True))
            .ConfigureAwait(false).GetAwaiter().GetResult();
        
        searchResponse = client.Search<Person>(s => s
            .From(0)
            .Size(10)
            .AllIndices()
            .Query(q =>
                    q.Match(m => m
                    .Field(f => f.FirstName)
                    .Query("Tim")
                    )
            )
        );
        
        matches = searchResponse.Documents;
        
        //update "Tim" to "Samantha" using different update method
        client.UpdateAsync<Person, object>(new DocumentPath<Person>(1),
            u => u.Index("people")
                .DocAsUpsert(true)
                .RetryOnConflict(3)
                .Doc(new { FirstName = "Samantha" })
                .Refresh(Elasticsearch.Net.Refresh.True))
                .ConfigureAwait(false).GetAwaiter().GetResult();
        
        
        searchResponse = client.Search<Person>(s => s
            .From(0)
            .Size(10)
            .AllIndices()
            .Query(q =>
                q.Match(m => m
                    .Field(f => f.FirstName)
                    .Query("Samantha")
                )
            )
        );
        
        matches = searchResponse.Documents;
        

        There is not much more to say there, apart from perhaps pay special attention to how we use the fluent DSL Doc(…) to apply partial updates. We also use Refresh(..), which refreshes the shards that hold this data, making the change visible to new searches.

        Deleting data

        So now we have put data in, queried it, and updated it, I guess we should talk about deletes. This is done as follows:

        //DELETE
        client.DeleteAsync<Person>(1,
            d => d.Index("people")
                .Refresh(Elasticsearch.Net.Refresh.True))
                .ConfigureAwait(false).GetAwaiter().GetResult();
        
        searchResponse = client.Search<Person>(s => s
            .From(0)
            .Size(10)
            .AllIndices()
            .Query(q =>
                q.Match(m => m
                    .Field(f => f.Id)
                    .Query("1")
                )
            )
        );
        
        
        matches = searchResponse.Documents;
        
        //delete using a query (note: a term query is not analysed, so for a
        //text field we match against the lower-cased token; the match query
        //version below is usually the better choice for text fields)
        client.DeleteByQueryAsync<Person>(
            d => d.AllIndices()
                .Query(qry => qry.Term(t => t.Field(p => p.FirstName).Value("tom")))
                .Refresh(true)
                .WaitForCompletion())
                .ConfigureAwait(false).GetAwaiter().GetResult();
        
        var response = client.DeleteByQueryAsync<Person>(
            q => q
                .AllIndices()
                .Query(rq => rq
                    .Match(m => m
                    .Field(f => f.FirstName)
                    .Query("Tom")))
                .Refresh(true)
                .WaitForCompletion())
                .ConfigureAwait(false).GetAwaiter().GetResult();
        
        searchResponse = client.Search<Person>(s => s
        .From(0)
        .Size(10)
        .AllIndices()
        .Query(q =>
                q.Match(m => m
                .Field(f => f.FirstName)
                .Query("Tom")
                )
            )
        );
        
        
        matches = searchResponse.Documents;
        

        As before, I have included queries in here to check the deletes. Hopefully you get the idea: as shown above, we can delete by an Id, or by using a query where we look to match N-many records.
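        For reference, the match-based DeleteByQueryAsync call above corresponds to Elasticsearch’s _delete_by_query endpoint, roughly:

```
POST /_all/_delete_by_query?refresh=true
{
  "query": {
    "match": { "firstName": { "query": "Tom" } }
  }
}
```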

        Demo Project

        Anyway that is all I wanted to show this time, hopefully it gives you a small taste of using the .NET Elastic client. You can download a demo project from here : https://github.com/sachabarber/Elasticdemo


        React router

        I plan on getting a lot better at a few things this year, my current list is

        • Really getting to know React
        • Really getting to know AWS
        • Really getting to know Azure

         

        As such you can expect some posts on all these subjects over time. But this is the here and now, so in this post I wanted to start by looking at React Router.

         

        What is React Router?

        React Router is a set of React components that help with all your navigation concerns when it comes to working with React.

         

        Main Components

        It comes with the following main components

         

        BrowserRouter

        A <Router> that uses the HTML5 history API (pushState, replaceState and the popstate event) to keep your UI in sync with the URL.

         

        HashRouter

        A <Router> that uses the hash portion of the URL (i.e. window.location.hash) to keep your UI in sync with the URL.

         

        Link

        Provides declarative, accessible navigation around your application.

         

        NavLink

        A special version of the <Link> that will add styling attributes to the rendered element when it matches the current URL.

         

        MemoryRouter

        A <Router> that keeps the history of your “URL” in memory (does not read or write to the address bar). Useful in tests and non-browser environments like React Native.

         

        Redirect

        Rendering a <Redirect> will navigate to a new location. The new location will override the current location in the history stack, like server-side redirects (HTTP 3xx) do.

         

        Route

        The Route component is perhaps the most important component in React Router to understand and learn to use well. Its most basic responsibility is to render some UI when its path matches the current URL.

         

        Router

        The common low-level interface for all router components. Typically apps will use one of the high-level routers instead:

        The most common use-case for using the low-level <Router> is to synchronize a custom history with a state management lib like Redux or Mobx. Note that this is not required to use state management libs alongside React Router, it’s only for deep integration.

         

        StaticRouter

        A <Router> that never changes location.

        This can be useful in server-side rendering scenarios when the user isn’t actually clicking around, so the location never actually changes. Hence, the name: static. It’s also useful in simple tests when you just need to plug in a location and make assertions on the render output.

         

        Switch

        Renders the first child <Route> or <Redirect> that matches the location.

         

        How is this different than just using a bunch of <Route>s?

        <Switch> is unique in that it renders a route exclusively. In contrast, every <Route> that matches the location renders inclusively.

         

        Hooks

        The current version of the React Router also comes with these hooks

        useHistory

        The useHistory hook gives you access to the history instance that you may use to navigate.

        import { useHistory } from "react-router-dom";
        
        function HomeButton() {
          let history = useHistory();
        
          function handleClick() {
            history.push("/home");
          }
        
          return (
            <button type="button" onClick={handleClick}>
              Go home
            </button>
          );
        }
        

        useLocation

        The useLocation hook returns the location object that represents the current URL. You can think about it like a useState that returns a new location whenever the URL changes.

        import React from "react";
        import ReactDOM from "react-dom";
        import {
          BrowserRouter as Router,
          Switch,
          useLocation
        } from "react-router-dom";
        
        function usePageViews() {
          let location = useLocation();
          React.useEffect(() => {
            // e.g. report a page view to your analytics tool here
            console.log(`Navigated to ${location.pathname}`);
          }, [location]);
        }
        

        useParams

        useParams returns an object of key/value pairs of URL parameters. Use it to access match.params of the current <Route>.

        import React from "react";
        import ReactDOM from "react-dom";
        import {
          BrowserRouter as Router,
          Switch,
          Route,
          useParams
        } from "react-router-dom";
        
        function BlogPost() {
          let { slug } = useParams();
          return <div>Now showing post {slug}</div>;
        }
        

        useRouteMatch

        The useRouteMatch hook attempts to match the current URL in the same way that a <Route> would. It’s mostly useful for getting access to the match data without actually rendering a <Route>.

        import { useRouteMatch } from "react-router-dom";
        
        function BlogPost() {
          let match = useRouteMatch("/blog/:slug");
        
          // Do whatever you want with the match...
          return <div />;
        }
        

         

        So those are the main components and hooks that are available; let’s proceed and see how we can use them.

         

        How do we use it?

        This largely boils down to a few steps.

         

        We need an actual React App

        Firstly we need an actual app. There are many ways you could do this, but by far the easiest is to use Create React App. I prefer to work with TypeScript where possible, so we can use something like this: https://create-react-app.dev/docs/adding-typescript/, which will create a simple skeleton React app that allows the use of TypeScript.

         

        Let’s create some routes

        Once we have created an app, we need to create some actual routes and components to render for those routes. I think the best way to do this is to show a full sample, then discuss the various parts as we go. So here is a full example of how to use React Router (note that you will need to have installed this via NPM as react-router-dom)
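        If you are starting from a fresh Create React App project, that install would be along these lines (the @types package supplies the v5 typings used by the sample below):

```
npm install react-router-dom
npm install --save-dev @types/react-router-dom
```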

        import React, {Component } from "react";
        import { RouteComponentProps, useHistory } from 'react-router';
        import {
            BrowserRouter as Router,
            Switch,
            Route,
            Link,
            useParams,
            HashRouter,
            BrowserRouter,
            NavLink
        } from "react-router-dom";
        
        
        //Non bootstrap version
        export default function AppRouter() {
        
            return (
                <BrowserRouter >
                    <div>
                        <nav>
                            <ul>
                                <li>
                                    <NavLink to={{ pathname: "/" }} activeStyle={{
                                        fontWeight: "bold",
                                        color: "red"
                                    }}>Home</NavLink>
                                </li>
                                <li>
                                    <NavLink to="/about" activeStyle={{
                                        fontWeight: "bold",
                                        color: "red"
                                    }}>About</NavLink>
                                </li>
                                <li>
                                    <NavLink to="/aboutComponentUsingFunction" activeStyle={{
                                        fontWeight: "bold",
                                        color: "red"
                                    }}>AboutComponentUsingFunction</NavLink>
                                </li>
                                <li>
                                    <NavLink to="/aboutComponentUsingRenderFunction" activeStyle={{
                                        fontWeight: "bold",
                                        color: "red"
                                    }}>AboutComponentUsingRenderFunction</NavLink>
                                </li>
                                <li>
                                    <NavLink to="/users/1" activeStyle={{
                                        fontWeight: "bold",
                                        color: "red"
                                    }}>Users1</NavLink>
                                </li>
                                <li>
                                    <NavLink to="/users/2" activeStyle={{
                                        fontWeight: "bold",
                                        color: "red"
                                    }}>Users2</NavLink>
                                </li>
                                <li>
                                    <NavLink to="/users2/1" activeStyle={{
                                        fontWeight: "bold",
                                        color: "red"
                                    }}>Users As Class With History link</NavLink>
                                </li>
                            </ul>
                        </nav>
        
                        {/* A <Switch> looks through its children <Route>s and
                            renders the first one that matches the current URL. */}
                        <Switch>
                            <Route path="/about">
                                <About />
                            </Route>
                            <Route path="/aboutComponentUsingFunction"
                                //This is bad though due to this statement from the docs
                                //When you use the component props, the router uses React.createElement 
                                //to create a new React element from the given component. 
                                //That means if you provide an inline function to the component attribute, 
                                //you would create a new component every render. 
                                //This results in the existing component unmounting and the new component 
                                //mounting instead of just updating the existing component
        
                                component={(props: any) => <About {...props} isAuthed={true} />}>
                            </Route>
                            <Route path="/aboutComponentUsingRenderFunction"
                                //This is better as are using render rather than component, which does not 
                                //suffer from the issue mentioned above
                                render={(props: any) => <About {...props} isAuthed={true} />}>
                            </Route>
                            <Route path="/users/:id" children={<Users />} />
                            <Route path="/users2/:id" component={Users2} />
                            <Route path="/">
                                <Home />
                            </Route>
                        </Switch>
                    </div>
                </BrowserRouter >
            );
        }
        
        
        function Home() {
            return <h2>Home</h2>;
        }
        
        function About(props: any) {
        
            console.log(`In render method of About`);
            console.log(props);
            return <h2>About</h2>;
        }
        
        function Users() {
            // We can use the `useParams` hook here to access
            // the dynamic pieces of the URL.
            let { id } = useParams<{ id: string }>();
            let history = useHistory();
        
            const handleClick = () => {
                history.push("/home");
            };
        
            return (
                <div>
                    <h3>ID: {id}</h3>
                    <button type="button" onClick={handleClick}>Go home</button>
                </div>
            );
        }
        
        class Users2 extends React.Component<RouteComponentProps, any> {
        
            constructor(props: any) {
                super(props);
            }
        
            render() {
                return (
                    <div>
                        <h1>Hello {(this.props.match.params as any).id}!</h1 >
                        <button
                            type='button'
                            onClick={() => { this.props.history.push('/users/1') }} >
                            Go to users/1
                        </button>
                    </div>
                );
            }
        }
        

        When run this should look like this

        image 

         

        So from the above code there are a couple of points that deserve special callouts, so let’s go through them

         

        NavLink usage

        We make use of NavLink to declare our actual routes. This includes the to prop, which is the actual route we wish to be rendered. Some examples would be

        • /
        • /about
        • /aboutComponentUsingFunction
        • /users/1

         

        Here is one such example of this

        <NavLink to="/about" activeStyle={{
            fontWeight: "bold",
            color: "red"
        }}>About</NavLink>
        

        Switch usage

        The next thing we need to make React Router work correctly is to include a Switch block, where we declare all the routes. A Route should have, as a minimum, a path and some way of actually rendering the component, such as render/component/children, each of which works slightly differently.

         

        The path is where you would be able to pick up the parameters for the matched route. An example of a route that expects some route parameters may look like this

        <Route path="/users/:id" children={<Users />} />
        <Route path="/users2/:id" component={Users2} />
        
        function Users() {
            // We can use the `useParams` hook here to access
            // the dynamic pieces of the URL.
            let { id } = useParams<{ id: string }>();
            let history = useHistory();
        
            const handleClick = () => {
                history.push("/home");
            };
        
            return (
                <div>
                    <h3>ID: {id}</h3>
                    <button type="button" onClick={handleClick}>Go home</button>
                </div>
            );
        }
        
        
        class Users2 extends React.Component<RouteComponentProps, any> {
        
            constructor(props: any) {
                super(props);
            }
        
            render() {
                return (
                    <div>
                        <h1>Hello {(this.props.match.params as any).id}!</h1 >
                        <button
                            type='button'
                            onClick={() => { this.props.history.push('/users/1') }} >
                            Go to users/1
                        </button>
                    </div>
                );
            }
        }
        

        It can be seen that when we use react-router, the router provides a prop called “match”, which we can either expect as props when using class based components, or which we may extract using the useParams hook when using a functional React component.

         

        It can also be seen from these 2 examples how we can navigate to different routes from within our components. This is done using the history object, which you can read more about here: https://reacttraining.com/react-router/web/api/history

         

        A match object contains information about how a <Route path> matched the URL. match objects contain the following properties:

        • params – (object) Key/value pairs parsed from the URL corresponding to the dynamic segments of the path
        • isExact – (boolean) true if the entire URL was matched (no trailing characters)
        • path – (string) The path pattern used to match. Useful for building nested <Route>s
        • url – (string) The matched portion of the URL. Useful for building nested <Link>s

         

        In the example provided here I have tried to use a mixture of render/component/children, each of which works differently. Let’s go through how these work.

         

        render

        This allows for convenient inline rendering and wrapping without the undesired remounting explained above.

        Instead of having a new React element created for you using the component prop, you can pass in a function to be called when the location matches. The render prop function has access to all the same route props (match, location and history) as the component render prop.

        component

        A React component to render only when the location matches. It will be rendered with route props.

        When you use component (instead of render or children) the router uses React.createElement to create a new React element from the given component. That means if you provide an inline function to the component prop, you would create a new component every render. This results in the existing component unmounting and the new component mounting instead of just updating the existing component. When using an inline function for inline rendering, use the render or the children prop.
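        The reference-equality point behind this can be shown without React at all. A minimal sketch in plain TypeScript (the names here are my own, for illustration): every render creates a brand-new inline function, and two distinct functions are never reference-equal, so the router sees a “different” component type each time and remounts.

```typescript
// A stable component: a single function reference that never changes.
const About = (): string => "About";

// Simulates what happens when you pass an inline function to `component`:
// each call to render() manufactures a brand-new wrapper function.
const render = () => () => About();

const first: unknown = render();
const second: unknown = render();

// Two wrappers from two "renders" are never the same reference,
// which is exactly why the router unmounts and remounts.
console.log(first === second); // false
console.log(render()());       // "About" - it still renders the same output
```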

        children

        Sometimes you need to render whether the path matches the location or not. In these cases, you can use the function children prop. It works exactly like render except that it gets called whether there is a match or not. The children render prop receives all the same route props as the component and render methods, except that when a route fails to match the URL, match is null.
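        The essence of the children contract can be sketched as a plain function (the Match type and menuLinkClass name here are my own, for illustration): it is always called, and the only difference between “matched” and “not matched” is whether match is null.

```typescript
// Sketch of the children contract: the function always runs, but
// `match` is null when the route does not match the current URL.
type Match = { url: string } | null;

// A hypothetical children render function that always produces output,
// toggling an "active" marker depending on whether the route matched.
const menuLinkClass = ({ match }: { match: Match }): string =>
    match ? "menu-link active" : "menu-link";

console.log(menuLinkClass({ match: { url: "/about" } })); // "menu-link active"
console.log(menuLinkClass({ match: null }));              // "menu-link"
```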

         

        What about passing extra props to our component?

        While the methods described above will give you access to the standard react-router props, what do we do if we want extra props passed to our component? Is this possible? Well yes it is. Let’s see an example of this. One such route is this one, where we use the standard react-router props but also supply an extra “isAuthed” prop to the component being rendered for the route

         

        <Route path="/aboutComponentUsingRenderFunction"
            render={(props: any) => <About {...props} isAuthed={true} />}>
        </Route>
        
        function About(props: any) {
        
            console.log(`In render method of About`);
            console.log(props);
            return <h2>About</h2>;
        }
        

        Which when clicked on in the browser will look like this in the console

        image

         

         

        Let’s mount our route-based navigation component

        We then just need to import this top-level React component AppRouter and use it to mount to the DOM, which can be done as follows:

        import React from 'react';
        import ReactDOM from 'react-dom';
        import './index.css';
        import AppRouter from './AppRouter'
        
        
        ReactDOM.render(<AppRouter />, document.getElementById('root'));
        

         

        What about bootstrap Navigation?

        There are probably quite a lot of people out there that like to use Bootstrap to aid in their website design. There is a React version of Bootstrap called react-bootstrap which has been built specifically for React, so we can use this; it is as simple as installing it via NPM with npm install react-bootstrap

         

        With that installed, we can also craft a more traditional Bootstrap-based navigation component using something like this full example. Note how we now use the react-bootstrap Navbar and other components to do the actual navigation, rather than just nav and list elements as we did in the 1st full example above

        import React, { Component } from "react";
        import { RouteComponentProps, useHistory } from 'react-router';
        import {
            Switch,
            Route,
            useParams,
            BrowserRouter,
            Link
        } from "react-router-dom";
        
        
        import 'bootstrap/dist/css/bootstrap.min.css';
        import {
            Nav,
            Navbar
        } from "react-bootstrap";
        
        
        function Navigation() {
            return (
                <BrowserRouter >
                    <div>
                        <Navbar bg="light" expand="lg">
                            <Navbar.Brand href="#home">React-Bootstrap</Navbar.Brand>
                            <Navbar.Toggle aria-controls="basic-navbar-nav" />
                            <Navbar.Collapse id="basic-navbar-nav">
                                <Nav className="mr-auto">
                                    <Nav.Link as={Link} to="/">Home</Nav.Link>
                                    <Nav.Link as={Link} to="/about">About</Nav.Link>
                                    <Nav.Link as={Link} to="/aboutComponentUsingFunction">AboutComponentUsingFunction</Nav.Link>
                                    <Nav.Link as={Link} to="/aboutComponentUsingRenderFunction">AboutComponentUsingRenderFunction</Nav.Link>
                                    <Nav.Link as={Link} to="/users/1">Users1</Nav.Link>
                                    <Nav.Link as={Link} to="/users/2">Users2</Nav.Link>
                                    <Nav.Link as={Link} to="/users2/1">Users As Class With History link</Nav.Link>
        
                                </Nav>
                            </Navbar.Collapse>
                        </Navbar>
        
                        {/* A <Switch> looks through its children <Route>s and
                            renders the first one that matches the current URL. */}
                        <Switch>
                            <Route path="/about">
                                <About />
                            </Route>
                            <Route path="/users/:id" render={() => <Users />}/>
                            <Route path="/users2/:id" component={Users2} />
                            <Route path="/">
                                <Home />
                            </Route>
                        </Switch>
                    </div>
                </BrowserRouter >
            );
        }
        
        
        class AppRouterBootstrap extends Component {
            render() {
                return (
                    <div id="App">
                        <Navigation />
                    </div>
                );
            }
        }
        
        export default AppRouterBootstrap;
        
        function Home() {
            return <h2>Home</h2>;
        }
        
        function About() {
            return <h2>About</h2>;
        }
        
        function Users() {
            // We can use the `useParams` hook here to access
            // the dynamic pieces of the URL.
            let { id } = useParams();
            let history = useHistory();
        
            const handleClick = () => {
                history.push("/home");
            };
        
            return (
                <div>
                    <h3>ID: {id}</h3>
                    <button type="button" onClick={handleClick}>Go home</button>
                </div>
            );
        }
        
        
        class Users2 extends React.Component<RouteComponentProps, any> {
        
            render() {
                return (
                    <div>
                        <h1>Hello {(this.props.match.params as any).id}!</h1 >
                        <button
                            type='button'
                            onClick={() => { this.props.history.push('/users/1') }} >
                            Go to users/1
                        </button>
                    </div>
                );
            }
        }
        

        The main thing to note here is that we make use of the react-bootstrap components such as Navbar/Nav and Nav.Link.

         

        One particular callout is how we use Nav.Link, where it can be seen that we use it like this

        import {
            Switch,
            Route,
            useParams,
            BrowserRouter,
            Link
        } from "react-router-dom";
        
        
        import 'bootstrap/dist/css/bootstrap.min.css';
        import {
            Nav,
            Navbar
        } from "react-bootstrap";
        
        <Nav.Link as={Link} to="/">Home</Nav.Link>
        

         

        See how we use “as={Link}”; we do this to ensure that a full server round trip doesn’t occur. If you use the typical approach of an href with the react-bootstrap Nav.Link, which is typically as follows

        <Nav.Link href="/home">Active</Nav.Link>
        

         

        You would find that a FULL server round trip is done, which is not what we want. So we instead use the Link component from react-router with the react-bootstrap Nav.Link. This ensures that NO full server round trip is done and the redirect/render occurs purely client side

         

        Since the rest of the example is the same as the main example above, I won’t go through it all again.

         

        To use this version we would just need to use this instead when we mount the main component to the DOM

        ReactDOM.render(<AppRouterBootstrap />, document.getElementById('root'));
        

         

        When run, this should look like this

        image

         

        That’s it for now

        Anyways that is all I wanted to say for now. In future React posts I want to explore React-Redux (for which there are obviously many resources on the internet already, but this is my own journey, so perhaps I will have something useful to say, who knows) and how you can properly test React-Redux apps

        Azure

        Azure DevOps : Setting up and pushing nuget package

        So it’s been a while since I posted. But I thought it would be good to finally add the 2nd part of this, where this time we will look at how to create and push to Azure DevOps hosted NuGet feeds.

         

        Creating an Azure DevOps project

        The 1st step is to create a new Azure DevOps project, and from there you will need to go into the newly created project and turn on the following

        • Repos : which allows you to host repos in Azure DevOps
        • Artifacts : which allows you to create new NuGet feeds in Azure DevOps

        image

         

        Once you have done that you can grab the repo’s remote clone details. So for a new project this may look something like this

         

        image

         

        So for me, I then cloned the repo and created a simple .NET Standard 2.0 library that simply adds numbers

        image

        From there I simply pushed up the code to the remote repo.

         

        So all good so far; we should have a new project which supports feeds and has our new code in it. Let’s carry on

         

        Creating a Nuget feed

        So now that we have some code pushed up, how do we make it available on our own hosted NuGet feed? There are a couple of steps to perform here

         

        Firstly we need to create a new feed, which is done by going into the Artifacts menu and clicking the “Create Feed” button

         

        image

         

        You need to give the feed a name. Let’s suppose I chose “foofeed2” as the name; you should see something like this, where you will now need to go into the feed settings

         

        image

         

        If you click on the drop down you should see that the feed is created as “project scoped”, which means that it belongs to the project. Until very recently this was not the case, and all new feeds used to be scoped at the organizational level, which affects how the build definition works. This was a bug in Azure DevOps, which you can read more about in this StackOverflow question which I created: https://stackoverflow.com/questions/58856604/azure-devops-publish-nuget-to-hosted-feed

         

        This was quite a weird bug to have, but it did mean that all of a sudden any NuGet feed you created and tried to publish to would not work. This is now fixed, and I will explain the difference between pushing to a project scoped NuGet feed and an organizational one later when we discuss the build definition

         

        For now you should make sure that the permissions for your feed look something like this. Please forgive me, but I am using a screenshot here from my actual project, where above I am just showing you what a new project will look like

         

        image

         

        I really do urge you to read the StackOverflow page, as there are some really valuable discussions in there about scoping and what extra permissions you need to ensure are there

         

        Build Pipeline

        So once we have the feed set up with the correct permissions, we can focus our attention to the build side of things.

         

        This is the complete source code for a build pipeline that does the following

        • Restores NuGet packages
        • Builds
        • Packages
        • Pushes to the Azure DevOps feed

         

        # ASP.NET Core
        # Build and test ASP.NET Core projects targeting .NET Core.
        # Add steps that run tests, create a NuGet package, deploy, and more:
        # https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core
        
        trigger:
        - master
        
        pool:
          vmImage: 'ubuntu-latest'
        
        variables:
          buildConfiguration: 'Release'
          Major: '1'
          Minor: '0'
          Patch: '0'
        
        steps:
        
        - task: DotNetCoreCLI@2
          displayName: 'Restore'
          inputs:
            command: restore
            projects: '**/MathsLib.csproj'
        
        
        - task: DotNetCoreCLI@2
          displayName: Build
          inputs:
            command: build
            projects: '**/MathsLib.csproj'
            arguments: '--configuration $(buildConfiguration)' # Update this to match your needs
        
        
        - task: DotNetCoreCLI@2
          displayName: 'Pack'
          inputs:
            command: 'pack'
            projects: '**/MathsLib.csproj'
            outputDir: '$(Build.ArtifactStagingDirectory)'
            versioningScheme: 'byPrereleaseNumber'
            majorVersion: '$(Major)'
            minorVersion: '$(Minor)'
            patchVersion: '$(Patch)'
        
        
        - task: NuGetCommand@2
          displayName: 'nuget push'
          inputs:
            command: 'push'
            feedsToUse: 'select'
            packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg;!$(Build.ArtifactStagingDirectory)/**/*.symbols.nupkg'
            nuGetFeedType: 'internal'
            vstsFeed: 'nugetprojects/anotherfeed'
            publishVstsFeed: 'nugetprojects/anotherfeed'
            versioningScheme: 'off'
            allowPackageConflicts: true
        

         

        Project Scoped Feed

        image

        It should be noted that this is using a “project scoped” NuGet feed. See the ‘nugetprojects/anotherfeed’ entries; that project/feed format is the syntax you need to use when using project scoped NuGet feeds.

         

        Organizational Scoped Feed

        image

         

        In contrast to that, if you use an organizational feed you will need to ensure the feed settings in the YAML above just have the name of your feed, so the push step would be this instead

        - task: NuGetCommand@2
          displayName: 'nuget push'
          inputs:
            command: 'push'
            feedsToUse: 'select'
            packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg;!$(Build.ArtifactStagingDirectory)/**/*.symbols.nupkg'
            nuGetFeedType: 'internal'
            vstsFeed: 'testfeed'
            publishVstsFeed: 'testfeed'
            versioningScheme: 'off'
            allowPackageConflicts: true
        

        As I say, this was a VERY recent change introduced in Azure DevOps, and you can read more in the StackOverflow post that I refer to above

         

        For now let’s test this pipeline using the project scoped feed “anotherfeed” above

        image

         

        Consuming the feed from a new project

        With all this in place we should now be able to consume the feed from a new project, so let’s see how we can do that. We can go to the feed to grab how to connect to it

         

        image

        Then pick the Visual Studio settings (or NuGet if you want to use nuget.config to configure your sources)
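        If you do go the nuget.config route, a minimal sketch looks something like this. Note this is an illustrative placeholder: the {organization} and {project} segments and the "foofeed2" key stand in for your own values, and you should copy the real source URL from your feed’s “Connect to feed” page

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Placeholder URL: copy the real one from your feed's "Connect to feed" page -->
    <add key="foofeed2"
         value="https://pkgs.dev.azure.com/{organization}/{project}/_packaging/foofeed2/nuget/v3/index.json" />
  </packageSources>
</configuration>
```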

         

        I’ll use the Visual Studio one

         

        image

         

        I can then set this up in Visual Studio as a new NuGet feed

        image

        And search for my NuGet package (I did not create a proper release, so you need to check “include pre-releases”)

        image

         

        Cool, all working. So that’s it for this time