App.Config Transforms Outside Of Web Project

This is a weird post in some ways, as it is new for me but certainly VERY old for others. I imagine web developers have known for years how to use the Web.Config XDT transforms MSBuild task. If you have not heard of this: quite simply, it allows you to have a single Web.Config file and a number of other config files where ONLY the transformations are declared, such that when the transforms MSBuild task runs, it takes the source Web.config file along with a transformation .config file and produces a new .config file, which you would use as part of your deployment process.

 

I have myself known about this for years too; I have even known about the Microsoft MSBuild team's Slow Cheetah project, which allows you to use this same technique outside of web projects. Thing is, what I have always done is have a bunch of .config files (so one for XXX.LIVE.config, one for XXX.QA.config) that I would rename and deploy by some clever scripts.

 

I recently had to do a bit of work on a project that made use of the Web.Config XDT transforms, and I could clearly see in the MSBuild file that this just used an MSBuild task. So I thought this must be easy enough to use standalone. Turns out it is; you DO NOT really need to use Slow Cheetah at all. You just need to know where the transforms MSBuild task (TransformXml) lives and how to use it.

 

The rest of this post will talk you through how to do this.

 

Suppose you have this App.Config you wish to transform

 

We will concentrate on just a few areas here, those areas being the ones that are going to change between environments:

 

<?xml version="1.0" encoding="utf-8"?>
<configuration>
 
  <configSections>
    <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog" />
    <section name="shareSettings" type="SimpleConfig.Section, SimpleConfig" />
  </configSections>
 
  <shareSettings
      productName="Shipping"
      ftpPath="D:\ShippingRoutes">
  </shareSettings>
 
  <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" 
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <extensions>
      <add assembly="Gelf4NLog.Target"/>
    </extensions>
    <targets async="true">
      <target name="graylog"
          xsi:type="graylog"
          hostip="dev-logging"
          hostport="12200"
          Facility="CoolService">
        <parameter name="exception" layout="${exception:format=tostring}" optional="true" />
        <parameter name="processname" layout="${processname}" />
        <parameter name="logger" layout="${logger}" />
        <parameter name="treadid" layout="${threadid}" />
      </target>
      <target name="file" xsi:type="File"
              layout="${longdate} | ${level} | ${message}${onexception:${newline}EXCEPTION\:${exception:format=tostring,StackTrace}}"
              fileName="c:/temp/CoolService-${shortdate}.log" />
    </targets>
    <rules>
      <logger name="NHibernate.*" minlevel="Off" writeTo="graylog" final="true" />
      <logger name="NHibernate.*" minlevel="Error" writeTo="file" final="true" />
      <logger name="*" minlevel="Off" writeTo="graylog" />
      <logger name="*" minlevel="trace" writeTo="file" />
    </rules>
  </nlog>
 
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
 
  <system.serviceModel>
    <diagnostics performanceCounters="All" />
 
    <bindings>
      <netTcpBinding>
        <binding name="tcpBinding" 
		maxReceivedMessageSize="2147483647" 
		closeTimeout="00:59:00" 
		openTimeout="00:59:00" 
		receiveTimeout="00:59:00" 
		sendTimeout="00:59:00">
          <security mode="None" />
          <readerQuotas maxStringContentLength="8192" 
			maxArrayLength="20971520" />
        </binding>
      </netTcpBinding>
    </bindings>
 
 
    <client>
      <!-- CoolService -->
      <endpoint name="coolServiceEndpoint" 
	        address="net.tcp://localhost:63006/CoolService" 
		binding="netTcpBinding"
                bindingConfiguration="tcpBinding" 
                contract="Services.ICoolService" />
    </client>
  </system.serviceModel>
 
  <system.diagnostics>
    <sources>
      <source 	name="System.ServiceModel" 
		switchValue="All" 
		propagateActivity="true">
        <listeners>
          <add name="traceListener" 
		type="System.Diagnostics.XmlWriterTraceListener" 
		initializeData="c:\temp\CoolService.svclog"/>
        </listeners>
      </source>
    </sources>
  </system.diagnostics>
 
 
</configuration>

 

  • Custom config section (NOTE I am using SimpleConfig to do that, which is awesome)
  • NLog logging settings
  • WCF client section
  • Diagnostics WCF section

 

So Now Show Me The Transformations

Now this post will not (and is not meant to) teach you all about the Web.Config XDT transforms MSBuild task, but rather shall show you an example. So on with the example: suppose we want to create a LIVE config file where we change the following:

 

  • Custom config section (NOTE I am using SimpleConfig to do that, which is awesome) (CHANGE ATTRIBUTES)
  • NLog logging settings (CHANGE Logger/Target)
  • WCF client section (CHANGE ADDRESS)
  • Diagnostics WCF section (REMOVE IT)

 

Here is how we could do that (say it's called “CoolService.LIVE.config”):

 

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <shareSettings xdt:Transform="SetAttributes" 
		xdt:Locator="Match(productName)"  
		productName="Shipping"
      		ftpPath="\\shipping\ShippingRoutes" />
                 
  <nlog xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 		
        xmlns="http://www.nlog-project.org/schemas/NLog.xsd">
      <targets>
      	<target xdt:Transform="SetAttributes" 
		xdt:Locator="Match(name)" 
		name="graylog" 
		hostip="app-logging" />
                                               
      	<target xdt:Transform="SetAttributes" 
		xdt:Locator="Match(name)" 
		name="file" 
		fileName="D:/logs/CoolService-${shortdate}.log" />
     </targets>
     <rules>
     	<logger xdt:Transform="SetAttributes" 
		xdt:Locator="Match(writeTo)" 
		minlevel="trace" 
		writeTo="graylog"/>
     </rules>
  </nlog>
 
  <system.serviceModel>
    <client>
      <endpoint xdt:Transform="SetAttributes" 
		xdt:Locator="Match(name)"
		name="coolServiceEndpoint" 		
	        address="net.tcp://appCoolService:63006/CoolService"  />
    </client>
  </system.serviceModel>
 
  <system.diagnostics xdt:Transform="Remove" />

</configuration>

 

 

So How Do We Apply These Transforms

To actually apply these transforms, we can easily craft a simple MSBuild project file, such as (say it's called “Transforms.proj”):

 

<Project ToolsVersion="4.0" 
	DefaultTargets="Release" 
	xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <UsingTask 
	TaskName="TransformXml" 
	AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.Tasks.dll"/>
 
    <ItemGroup>
        <Config Include="LIVE"><Environment>LIVE</Environment></Config>
        <Config Include="QA"><Environment>QA</Environment></Config>       
    </ItemGroup>
 
    <Target Name="Release">
        <MakeDir Directories="CoolService\Configuration\%(Config.Environment)"/>
 
        <TransformXml Source="App.config"
                     Transform="CoolService.%(Config.Identity).config"
                     Destination="CoolService\Configuration\%(Config.Environment)\CoolService.exe.config"/>
    </Target>
</Project>

 

Where $(MSBuildExtensionsPath) will likely be something like “C:\Program Files (x86)\MSBuild\”. So once we have an MSBuild file like this in place, it is just a simple matter of running MSBuild, something like:

 

msbuild Transforms.proj

 

Which will result in the following being produced:

 

<?xml version="1.0" encoding="utf-8"?>
<configuration>
 
  <configSections>
    <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog" />
    <section name="shareSettings" type="SimpleConfig.Section, SimpleConfig" />
  </configSections>
 
  <shareSettings
      productName="Shipping"
      ftpPath="\\shipping\ShippingRoutes">
  </shareSettings>
 
  <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" 
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <extensions>
      <add assembly="Gelf4NLog.Target"/>
    </extensions>
    <targets async="true">
      <target name="graylog"
          xsi:type="graylog"
          hostip="app-logging"
          hostport="12200"
          Facility="CoolService">
        <parameter name="exception" layout="${exception:format=tostring}" optional="true" />
        <parameter name="processname" layout="${processname}" />
        <parameter name="logger" layout="${logger}" />
        <parameter name="treadid" layout="${threadid}" />
      </target>
      <target name="file" xsi:type="File"
              layout="${longdate} | ${level} | ${message}${onexception:${newline}EXCEPTION\:${exception:format=tostring,StackTrace}}"
              fileName="D:/logs/CoolService-${shortdate}.log" />
    </targets>
    <rules>
      <logger name="NHibernate.*" minlevel="trace" writeTo="graylog" final="true" />
      <logger name="NHibernate.*" minlevel="Error" writeTo="file" final="true" />
      <logger name="*" minlevel="trace" writeTo="graylog" />
      <logger name="*" minlevel="trace" writeTo="file" />
    </rules>
  </nlog>
 
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
 
  <system.serviceModel>
    <diagnostics performanceCounters="All" />
 
    <bindings>
      <netTcpBinding>
        <binding name="tcpBinding" 
		maxReceivedMessageSize="2147483647" 
		closeTimeout="00:59:00" 
		openTimeout="00:59:00" 
		receiveTimeout="00:59:00" 
		sendTimeout="00:59:00">
          <security mode="None" />
          <readerQuotas maxStringContentLength="8192" 
			maxArrayLength="20971520" />
        </binding>
      </netTcpBinding>
    </bindings>
 
 
    <client>
      <!-- CoolService -->
      <endpoint name="coolServiceEndpoint" 
	        address="net.tcp://appCoolService:63006/CoolService" 
		binding="netTcpBinding"
                bindingConfiguration="tcpBinding" 
                contract="Services.ICoolService" />
    </client>
  </system.serviceModel>
 
 
</configuration>
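
As an aside, if you ever need to apply the same transform from C# rather than from MSBuild (say, in a custom deployment tool), the XDT engine behind the TransformXml task is also available as the Microsoft.Web.Xdt NuGet package. Here is a minimal sketch, assuming that package is referenced and reusing the file names from above:

using Microsoft.Web.XmlTransform; //from the Microsoft.Web.Xdt NuGet package

class TransformRunner
{
    static void Main()
    {
        using (var source = new XmlTransformableDocument())
        using (var transform = new XmlTransformation("CoolService.LIVE.config"))
        {
            source.PreserveWhitespace = true;
            source.Load("App.config");

            //Apply returns false if the transform could not be applied
            if (transform.Apply(source))
            {
                source.Save("CoolService.exe.config");
            }
        }
    }
}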

A Look At Akka.NET

A while back I wrote an actor model for NetMQ (the .NET port of ZeroMQ), which is now part of the live codebase; I was happy with this.

 

I do like the idea of actor models, where you spin up and talk to an actor rather than worrying about locks, semaphores, etc.

 

It just gels with me rather well. To this end I have been experimenting with Akka.NET, which is a pretty complete port of the original Akka. It is a lot of fun, and a really nice way to write distributed multithreaded code if you ask me.

 

I have written a small article, which should be viewed as an introduction to Akka.NET. If you like the sound of this, you can read the full article over at CodeProject:

 

http://www.codeproject.com/Articles/1007161/A-Look-At-Akka-NET

 

Enjoy

Getting LineNumber(s) in your XLINQ

At work the other day I had to deal with some XML fragments, which I decided to do using XLinq.

 

I wanted to validate a certain fragment, and also get line numbers out of the fragment when it was deemed invalid. Say I had this XML:

<?xmlversion="1.0" encoding="utf-8"?>
<Clients>
  <Client>
    <FirstName>Travis</FirstName>
    <LastName>Bickle</LastName>
  </Client>
  <Client>
    <FirstName>Francis</FirstName>
    <LastName>Bacon</LastName>
  </Client>
</Clients>

XLinq actually supports line numbers by way of the IXmlLineInfo interface.

So say you had some code like this, which grabbed an XText node and wanted to use its line number:

XText travis = (from x in xml.DescendantNodes().OfType<XText>()
                where x.Value == "Travis"
                select x).Single();

var lineInfo = (IXmlLineInfo)travis;
Console.WriteLine("{0} appears on line {1}", travis, lineInfo.LineNumber);

What I was finding though was that my line numbers were always coming out as 0. Turns out there is an easy fix for this; it is to do with how I was initially loading the XDocument. I was doing this:

var xml = XDocument.Load(file);

Which is bad, and will not load the line numbers. You need to do this instead

var xml = XDocument.Load(file, LoadOptions.SetLineInfo);
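
As an aside, the same cast works for elements as well as text nodes, and IXmlLineInfo.HasLineInfo() lets you guard against nodes that carry no position data. A minimal sketch against the Clients XML above (the file name is hypothetical):

using System;
using System.Xml;
using System.Xml.Linq;

class LineInfoDemo
{
    static void Main()
    {
        //SetLineInfo is what makes the IXmlLineInfo cast return real positions
        var xml = XDocument.Load("Clients.xml", LoadOptions.SetLineInfo);
        foreach (XElement client in xml.Descendants("Client"))
        {
            var info = (IXmlLineInfo)client;
            if (info.HasLineInfo())
            {
                Console.WriteLine("Client element at line {0}, position {1}",
                    info.LineNumber, info.LinePosition);
            }
        }
    }
}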

For a much better write-up on all of this, check out this old post by Charlie Calvert; it's much better than my post, and I wish I had found it first:

http://blogs.msdn.com/b/charlie/archive/2008/09/26/linq-farm-linq-to-xml-and-line-numbers.aspx

Xml Schemas From Code / XML validation against Schema File And More

I don’t know about you lot, but I work with XML files a bit, though I don’t have to mess around with XSD (XML schema) files that often. And it seems like every time I do, I forget what I did last time. To this end I thought I would write this up somewhere, so I can refer back to it.

 

So what will we cover here?

I will be covering these things:

  1. Create an XML file from C# objects
  2. Create an XSD from an XML file using C#
  3. Validate an XML file against an XSD schema file using C#

 

1. Create an XML file from C# objects

So let's say we have created the following objects in C# that we wish to serialize to XML.

public class OrderList
{
    public OrderList()
    {
        Orders = new List<Order>();
    }

    public List<Order> Orders { get; set; }
}

public class Order
{

    public OrderSummary OrderSummary { get; set; }
    public Customer Customer { get; set; }
}


public class Address
{
    public string AddressLine1 { get; set; }
    public string AddressLine2 { get; set; }
    public string AddressLine3 { get; set; }
    public string City { get; set; }
    public string County { get; set; }
    public string PostCode { get; set; }
}

public class Customer
{
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
    public string Phone { get; set; }

}


public class OrderSummary
{
    public OrderSummary()
    {
        OrderLines = new List<OrderLine>();
    }

    public List<OrderLine> OrderLines { get; set; }
    public Address DeliveryAddress { get; set; }
    public DateTime DeliveryDate { get; set; }

}

public class OrderLine
{
    public decimal ItemQuantity { get; set; }
    public string ItemName { get; set; }
}

 

How do we then take that and save it to XML? Turns out this is very easy; we can just use some code like this:

 

public static void CreateXmlFile(string filename)
{
    Address add = new Address()
    {
        AddressLine1 = "AddressLine1",
        AddressLine2 = "AddressLine2",
        AddressLine3 = "AddressLine3",
        City = "City",
        County = "County",
        PostCode = "PostCode"
    };

    Customer cust = new Customer()
    {
        Email = "Email",
        FirstName = "John",
        LastName = "Barnes",
        Phone = "13311",
        Title = "Mr"
    };


    OrderList orders = new OrderList();


    var orderSummary = new OrderSummary()
    {
        DeliveryAddress = add,
        DeliveryDate = DateTime.Now,
        OrderLines = new List<OrderLine>()
        {
            new OrderLine() {ItemQuantity = 150, ItemName = "TestItem1" },
            new OrderLine() {ItemQuantity = 250, ItemName = "TestItem2" },
            new OrderLine() {ItemQuantity = 4, ItemName = "TestItem3" },
        },
    };


    //order1
    Order order1 = new Order();
    order1.Customer = cust;
    order1.OrderSummary = orderSummary;
    orders.Orders.Add(order1);


    //order2
    Order order2 = new Order();
    order2.Customer = cust;
    order2.OrderSummary = orderSummary;
    orders.Orders.Add(order2);

    XmlSerializer xmlSerializer = new XmlSerializer(typeof(OrderList));

    using (FileStream stream = File.OpenWrite(filename))
    {
        xmlSerializer.Serialize(stream, orders);
    }
}

 

That is enough to write an XML file to disk that matches your C# objects. Cool so far. Let's continue.

 

2. Create an XSD from an XML file using C#

Now there are many ways to do this. Here are some choices

  • Double click a valid XML file in Visual Studio and use the XML menu to create a schema
  • Use the xsd.exe command line tool, something like xsd.exe SomeXmlFile.xml
  • Use C# to programmatically write out an XSD file that matches some object definition

 

In the past I would have used the xsd.exe command line tool to do this. But a strange thing happens when I take the output of that and try to include it in Visual Studio: Visual Studio tries to create a strongly typed DataSet out of the XSD file. I don’t want this; I want it to remain an XSD schema file. I think there is a way to stop this happening by altering the file produced by xsd.exe, but there are 2 other ways. For this I have chosen to use a programmatic approach, which is as follows:

 

//Requires the System.Collections.Generic, System.IO, System.Linq,
//System.Xml.Schema and System.Xml.Serialization namespaces
public static void CreateSchemaFromXml(string fileName)
{
    //CREATE SCHEMA FROM XML

    XmlSerializer xmlSerializer = new XmlSerializer(typeof(OrderList));

    XmlSchemas schemas = new XmlSchemas();
    XmlSchemaExporter exporter = new XmlSchemaExporter(schemas);

    XmlTypeMapping mapping = new XmlReflectionImporter()
        .ImportTypeMapping(typeof(OrderList));
    exporter.ExportTypeMapping(mapping);
    var schemasData = TrimSchema(schemas);

    using (FileStream stream = File.OpenWrite(fileName))
    {
        schemasData.First().Write(stream);
    }
}

private static List<XmlSchema> TrimSchema(XmlSchemas schemas)
{
    List<XmlSchema> schemasData = new List<XmlSchema>(
        schemas.Where(s => s.TargetNamespace != "http://www.w3.org/2001/XMLSchema" &&
        s.TargetNamespace != "http://microsoft.com/wsdl/types/"));

    return schemasData;
}

 

This will produce a valid XSD file on disk that you may then fiddle with, say by adding more restrictions. So now all we need to do is carry out some validation of XML file(s) against this XSD file.

 

3. Validate an XML file against an XSD schema file using C#

 

This is easily achieved using some test code. But before we look at that, this is what the project looks like in Visual Studio

 

[Image: the Visual Studio solution, showing the Xml folder with good and bad example files alongside the XSD file]

 

So you can see that there is a folder with good and bad files that I wish to test against the XSD file. Here is the complete test case code:

 

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Text;
using System.Threading.Tasks;
using System.Xml.Linq;
using System.Xml.Schema;
using NUnit.Framework;

namespace XmlTests
{
    [TestFixture]
    public class StaticXmlFileTests
    {

        
        //Baddies
        [TestCase(@"\Xml\BadAgainstSchema\OrdersBADExampleFileNoOrderSummary.xml", false)]
        [TestCase(@"\Xml\BadAgainstSchema\OrdersBADExampleFile_AddressLineTooLong.xml", false)]


        //Goodies
        [TestCase(@"\Xml\Good\OrdersGOODExampleFile_FullFeatureSet.xml", true)]
        [TestCase(@"\Xml\Good\OrdersGOODExampleFile_MultipleOrderLines.xml", true)]
        [TestCase(@"\Xml\Good\OrdersGOODExampleFile_MultipleOrders.xml", true)]
        [TestCase(@"\Xml\Good\OrdersGOODExampleFile_SingleOrder.xml", true)]
        [TestCase(@"\Xml\Good\OrdersGOODExampleFile_SingleOrderLine.xml", true)]
        public void TestFileProducesExpectedSchemaValidationResult(string filename, bool expectedValidationResult)
        {


            var xmlFile = ObtainFullFilePath(filename);
            var xsdFile = ObtainFullFilePath(@"\Xml\OrdersExampleFile.xsd");

            //VALIDATE XML AGAINST SCHEMA C#
            var xdoc = XDocument.Load(xmlFile);
            var schemas = new XmlSchemaSet();
            using (FileStream stream = File.OpenRead(xsdFile))
            {
                schemas.Add(XmlSchema.Read(stream, (s, e) =>
                {
                    var x = e.Message;
                }));
            }

            bool isvalid = true;
            StringBuilder sb = new StringBuilder();
            try
            {
                xdoc.Validate(schemas, (s, e) => 
                    {
                        isvalid = false;
                        sb.AppendLine(string.Format("Line : {0}, Message : {1} ", 
                            e.Exception.LineNumber, e.Exception.Message));
                    });
            }
            catch (XmlSchemaValidationException)
            {
                isvalid = false;
            }

            var validationErrors = sb.ToString();
            Assert.AreEqual(expectedValidationResult, isvalid);
            if (expectedValidationResult)
            {
                Assert.AreEqual(string.Empty, validationErrors);
            }
            else
            {
                Assert.AreNotEqual(string.Empty, validationErrors);
            }

        }



        private string ObtainFullFilePath(string fileName)
        {
            var path = TestContext.CurrentContext.TestDirectory;
            return string.Format("{0}{1}", path, fileName);
        }
    }
}

 

And here is a screen shot of all the tests working as expected:

 

[Image: the NUnit test runner showing all test cases passing]

 

 

You can find a small project for this on github:  https://github.com/sachabarber/XmlTests

Azure : Upload and stream video content to WPF from blob storage

A while back when Azure first came out I toyed with the idea of uploading video content to Azure Blob Storage, and having it play back in my WPF app. At the time (can’t recall exactly when that was, but quite a while ago) I had some major headaches doing this. The problem stemmed from the fact that the WPF MediaElement and the Azure Blob Storage did not play nicely together.

You just could not seek (that is, jump to an unbuffered / not-yet-downloaded segment of the video) and play. It just did not work; you would have to wait for the video to download ALL the content up to the point you requested.

 

There is a very good post that discusses this old problem right here : http://programmerpayback.com/2013/01/30/hosting-progressive-download-videos-on-azure-blobs/

 

Previously you had to set the Blob storage API version; starting from the 2011-08-18 version, you can do partial and pause/resume downloads on blob objects. The nice thing is that your client code doesn’t have to change to achieve this.
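
For completeness, if you are ever stuck on an account that still answers with an older service version, it can be bumped through the storage client. A minimal sketch, assuming the same Microsoft.WindowsAzure.Storage SDK used in the demo below (swap in your own connection string):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

class BumpServiceVersion
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT_HERE;AccountKey=YOUR_KEY_HERE");
        var blobClient = account.CreateCloudBlobClient();

        //read the current service-level settings, raise the default version, write them back
        ServiceProperties properties = blobClient.GetServiceProperties();
        properties.DefaultServiceVersion = "2011-08-18"; //or any later version
        blobClient.SetServiceProperties(properties);
    }
}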

 

Luckily this is no longer a problem, so nowadays it is as simple as following these steps:

 

  1. Upload a video (say MP4) to Azure Blob Storage
  2. Grab the Uri of the uploaded video
  3. Use that Uri for a WPF MediaElement

 

I have created a small demo app for you; here is what it looks like after I have uploaded a video and pressed the play button.

 

[Image: the demo WPF app playing an uploaded video]

 

The code is dead simple. Here is the XAML (it's a WPF app):

 

<Window x:Class="WpfMediaPlayerFromBlobstorage.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525" WindowState="Maximized">
    <Grid>
        <DockPanel LastChildFill="True">
           
            <StackPanel Orientation="Horizontal" DockPanel.Dock="Top">
                <Button x:Name="btnUpload" 
                        Click="BtnUpload_OnClick" 
                        Content="Pick MP4 file to upload" 
                        Width="Auto" 
                        Margin="5"
                        Height="23"/>
                <StackPanel Orientation="Horizontal" Margin="50,5,5,5">
                    <StackPanel x:Name="controls" 
                                HorizontalAlignment="Center" 
                                Orientation="Horizontal">

                        <Button x:Name="btnPlay" 
                                Height="23" 
                                Content="Play" 
                                VerticalAlignment="Center"
                                Margin="5"
                                Click="BtnPlay_OnClick" />
                        <Button x:Name="btnPause" 
                                Height="23" 
                                Content="Pause" 
                                VerticalAlignment="Center"
                                Margin="5"
                                Click="BtnPause_OnClick" />
                        <Button x:Name="btnStop" 
                                Height="23" 
                                Content="Stop" 
                                VerticalAlignment="Center"
                                Click="BtnStop_OnClick"
                                Margin="5" />

                        <TextBlock VerticalAlignment="Center" 
                                   Text="Seek To"
                                   Margin="5" />
                        <Slider Name="timelineSlider" 
                                Margin="5" 
                                Height="23"
                                VerticalAlignment="Center"
                                Width="70"
                                ValueChanged="SeekToMediaPosition" />

                    </StackPanel>
                </StackPanel>
            </StackPanel>
            <MediaElement x:Name="player" 
                          Volume="1"
                          LoadedBehavior="Manual"
                          UnloadedBehavior="Manual"
                          HorizontalAlignment="Stretch" 
                          VerticalAlignment="Stretch"
                          Margin="10"
                          MediaOpened="Element_MediaOpened" 
                          MediaEnded="Element_MediaEnded"/>
        </DockPanel>
    </Grid>
</Window>

And here is the code behind (for simplicity I did not use MVVM for this demo)

 

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;

using Microsoft.Win32;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

namespace WpfMediaPlayerFromBlobstorage
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        private static string blobStorageConnectionString =
            "DefaultEndpointsProtocol=http;AccountName=YOUR_ACCOUNT_HERE;AccountKey=YOUR_KEY_HERE";
        private Uri uploadedBlobUri=null;


        public MainWindow()
        {
            InitializeComponent();
            this.controls.IsEnabled = false;
        }

        private async void BtnUpload_OnClick(object sender, RoutedEventArgs e)
        {
            this.controls.IsEnabled = false;
            OpenFileDialog fd = new OpenFileDialog();
            fd.InitialDirectory=@"c:\";
            var result = fd.ShowDialog();
            if (result.HasValue && result.Value)
            {
                try
                {
                    var storageAccount = CloudStorageAccount.Parse(blobStorageConnectionString);
                    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
                    CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
                    container.CreateIfNotExists();
                    CloudBlockBlob blockBlob = container.GetBlockBlobReference("myblob");
                    container.SetPermissions(
                        new BlobContainerPermissions
                        {
                            PublicAccess =
                                BlobContainerPublicAccessType.Blob
                        }
                    );

                    using (var fileStream = File.OpenRead(fd.FileName))
                    {
                        await blockBlob.UploadFromStreamAsync(fileStream);
                        uploadedBlobUri = blockBlob.Uri;
                        this.controls.IsEnabled = true;
                        MessageBox.Show("File uploaded ok");
                    }
                }
                catch (Exception exception)
                {
                    MessageBox.Show("Ooops : " + exception.Message);
                }
            }


           
        }

        private void BtnPlay_OnClick(object sender, RoutedEventArgs e)
        {
            player.Source = uploadedBlobUri;
            player.Play();
            timelineSlider.Value = 0;
        }

        private void BtnPause_OnClick(object sender, RoutedEventArgs e)
        {
            player.Pause();
        }

        private void BtnStop_OnClick(object sender, RoutedEventArgs e)
        {
            player.Stop();
            timelineSlider.Value = 0;
        }

        private void Element_MediaOpened(object sender, EventArgs e)
        {
            timelineSlider.Maximum = player.NaturalDuration.TimeSpan.TotalMilliseconds;
        }

        private void Element_MediaEnded(object sender, EventArgs e)
        {
            player.Stop();
            timelineSlider.Value = 0;
        }


        private void SeekToMediaPosition(object sender, 
		RoutedPropertyChangedEventArgs<double> args)
        {
            int sliderValue = (int)timelineSlider.Value;
            TimeSpan ts = new TimeSpan(0, 0, 0, 0, sliderValue);
            player.Position = ts;
        }
    }
}

And there you have it, a very simple media player that allows play/pause/stop and seek on a video uploaded to Azure Blob Storage.

You can grab this project (you will need to fill in the Azure Blob Storage connection string details with your own account settings) from my github account here : https://github.com/sachabarber/WpfMediaPlayerFromBlobstorage

 

NOTE : If you want more control over encoding/streaming etc. you should check out Azure Media Services.

Azure : Event Hub A First Look

Over the next few weeks I am going to be looking at a couple of things I have had on my backlog for a while (I need to get these things done, so I can make my pushy work colleague happy by learning Erlang). One of the things on my backlog is having a look at Azure Event Hubs.

 

Event Hubs come under the Azure Service Bus umbrella, but are quite different. They are a high throughput pub/sub at massive scale, with low latency and high reliability. To be honest this post will not add much more than you could find on MSDN; in fact even the demo associated with this post is one directly from MSDN. However, in the next series of post(s) I will be showing you some more novel uses of working with Event Hub(s), which will be my own material.

 

I guess if you have not heard of Azure Event Hubs there will still be some goodness in here, even if I have poached a lot of the content for this post (please forgive me) from MSDN.

 

Event Hubs provides a message stream handling capability and though an Event Hub is an entity similar to queues and topics, it has very different characteristics than traditional enterprise messaging. Enterprise messaging scenarios commonly require a number of sophisticated capabilities such as sequencing, dead-lettering, transaction support, and strong delivery assurances, while the dominant concern for event ingestion is high throughput and processing flexibility for event streams. Therefore, the Azure Event Hubs capability differs from Service Bus topics in that it is strongly biased towards high throughput and event processing scenarios. As such, Event Hubs does not implement some of the messaging capabilities that are available for topics. If you need those capabilities, topics remain the optimal choice.

An Event Hub is created at the namespace level in Service Bus, similar to queues and topics. Event Hubs uses AMQP and HTTP as its primary API interfaces.

 

https://msdn.microsoft.com/library/azure/dn836025.aspx

 

Partitions

In order to create such a high throughput ingestor (Event Hub), Microsoft used the idea of partitions. I like to use this set of images to help me understand what partitions bring to the table.

 

Regular messaging may be something like this:

[Image: a single-lane road analogy]

 

Whilst an Event Hub may be more like this (many lanes):

[Image: a multi-lane road analogy]

What I am trying to show there is that with only one lane, less traffic may travel; by having more lanes, more traffic will flow.

Event Hubs get their throughput by holding n-many partitions. Using the Azure portal the maximum number of partitions you may allocate is 16; this may be extended if you contact the Microsoft Azure Service Bus team. Each partition can be thought of as a queue (FIFO) of messages. Messages are held for a configurable amount of time. This setting is global across the entire Event Hub, and as such affects messages held across ALL partitions.

In order to use partitions from your code you should assign a partition key, which ensures that the correct partition gets used. If your publishing code does not supply a partition key, a round robin assignment will be used, ensuring that each partition is fairly balanced in terms of throughput.
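
Jumping ahead slightly to the publisher code shown below, sticky routing with a partition key would look something like this minimal sketch (deviceId is a hypothetical grouping value, and eventHubClient is the client created in that code):

//assumes: using System.Text; and the eventHubClient from the publisher below
string deviceId = "device-42"; //hypothetical grouping value
EventData eventData = new EventData(Encoding.UTF8.GetBytes("some payload"))
{
    //all events sharing a partition key land on the same partition
    PartitionKey = deviceId
};
eventHubClient.Send(eventData);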

Stream Offsets

Within each partition an offset is held; this offset can be thought of as a client side cursor, giving the position in the message stream that has been dealt with. This offset should be maintained by the event consumer, and may be used to indicate the position in the stream to start processing from, should communications to the Event Hub be lost.

Checkpoints

Checkpoints are the responsibility of the consumer, and mark or commit their position within a partition event stream. The consumer can inform the Event Hub when it considers an event stream complete. If a consumer disconnects from a partition, when connection is re-established it begins reading at the checkpoint that was previously submitted. Due to the fact that event data is held for a specified period, it is possible to return older data by specifying a lower offset from this checkpointing process. Through this mechanism, checkpointing enables both failover resiliency and controlled event stream replay.

So How About A Demo

I simply followed the getting started example, which you can find here : https://azure.microsoft.com/en-gb/documentation/articles/service-bus-event-hubs-csharp-ephcs-getstarted/

The Publisher

Here is the entire code for a FULLY working Event Hub publisher

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;


using System.Threading;
using Microsoft.ServiceBus.Messaging;

namespace Sender
{
    class Program
    {

        static string eventHubName = "{Your hub name}";
        static string connectionString = "{Your hub connection string}";
   

        static void Main(string[] args)
        {
            Console.WriteLine("Press Ctrl-C to stop the sender process");
            Console.WriteLine("Press Enter to start now");
            Console.ReadLine();
            SendingRandomMessages();
        }



        static void SendingRandomMessages()
        {
            var eventHubClient = 
                EventHubClient.CreateFromConnectionString(connectionString, eventHubName);
            while (true)
            {
                try
                {
                    var message = Guid.NewGuid().ToString();
                    Console.WriteLine("{0} > Sending message: {1}", 
                        DateTime.Now, message);

                    EventData eventData = new EventData(
                        Encoding.UTF8.GetBytes(message));

                    //This is how you can include metadata
                    //eventData.Properties["someProp"] = "MyEvent"

                    //this is how you would set the partition key
                    //eventData.PartitionKey = 1.ToString();
                    eventHubClient.Send(eventData);
                }
                catch (Exception exception)
                {
                    Console.ForegroundColor = ConsoleColor.Red;
                    Console.WriteLine("{0} > Exception: {1}", 
                        DateTime.Now, exception.Message);
                    Console.ResetColor();
                }

                Thread.Sleep(5000);
            }
        }
    }
}

 

It can be seen above that there is an EventHubClient class that you may use to send events. The code above also shows how you create a new event using the EventData class. Although I have not used these features, the code above also shows (in the comments) how to associate metadata with the event, and how to set a partition key for the message.

The Consumer

The consumer is a little trickier, but not too much; there are only 2 classes of interest in the demo app. The main entry point contains an EventProcessorHost, about which the Service Bus team says:

In an effort to alleviate this overhead the Service Bus team has created EventProcessorHost, an intelligent agent for .NET consumers that manages partition access and per partition offsets for consumers.

To use this class you first must implement the IEventProcessor interface, which has three methods: OpenAsync, CloseAsync, and ProcessEventsAsync. We will see a simple implementation of that shortly; first, here is the main entry point that registers it with an EventProcessorHost.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

using Microsoft.ServiceBus.Messaging;
using Microsoft.Threading;

namespace Receiver
{
    class Program
    {
        static void Main(string[] args)
        {
            AsyncPump.Run(MainAsync);
        }


        static async Task MainAsync()
        {
            string eventHubConnectionString = "{Your hub connection string}";
            string eventHubName = "{Your hub name}";
            string storageAccountName = "{Your storage account name}";
            string storageAccountKey = "{Your storage account key}";
            string storageConnectionString = 
                string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}",
                storageAccountName, storageAccountKey);

            string eventProcessorHostName = Guid.NewGuid().ToString();
            EventProcessorHost eventProcessorHost = 
                new EventProcessorHost(
                    eventProcessorHostName, 
                    eventHubName, 
                    EventHubConsumerGroup.DefaultGroupName, 
                    eventHubConnectionString, storageConnectionString);
            var epo = new EventProcessorOptions()
            {
                MaxBatchSize = 100,
                PrefetchCount = 1,
                ReceiveTimeOut = TimeSpan.FromSeconds(20)
            };
            await eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>(epo);


            Console.WriteLine("Receiving. Press enter key to stop worker.");
            Console.ReadLine();
        }
    }
}

 

After implementing this class, instantiate EventProcessorHost, providing the necessary parameters to the constructor:

  • Hostname – be sure not to hard code this; each instance of EventProcessorHost must have a unique value for this within a consumer group.
  • EventHubPath – this is an easy one.
  • ConsumerGroupName – also an easy one; “$Default” is the name of the default consumer group, but it generally is a good idea to create a consumer group for your specific aspect of processing.
  • EventHubConnectionString – this is the connection string to the particular event hub, which can be retrieved from the Azure portal. This connection string should have Listen permissions on the Event Hub.
  • StorageConnectionString – this is the storage account that will be used for partition distribution and leases. When checkpointing, the latest offset values will also be stored here.

Finally call RegisterEventProcessorAsync on the EventProcessorHost and register your implementation of IEventProcessor. At this point the agent will begin obtaining leases for partitions and creating receivers to read from them. For each partition that a lease is acquired for, an instance of your IEventProcessor class will be created and then used for processing events from that specific partition.

 

http://blogs.msdn.com/b/servicebus/archive/2015/01/16/event-processor-host-best-practices-part-1.aspx 

Lease management

Checkpointing is not the only use of the storage connection string performed by EventProcessorHost.  Partition ownership (that is reader ownership) is also performed for you.  This way only a single reader can read from any given partition at a time within a consumer group.  This is accomplished using Azure Storage Blob Leases and implemented using Epoch.  This greatly simplifies the auto-scale nature of EventProcessorHost.  As an instance of EventProcessorHost starts it will acquire as many leases as possible and begin reading events. As the leases draw near expiration EventProcessorHost will attempt to renew them by placing a reservation. If the lease is available for renewal the processor continues reading, but if it is not the reader is closed and CloseAsync is called – this is a good time to perform any final cleanup for that partition.

EventProcessorHost has a member PartitionManagerOptions. This member allows for control over lease management. Set these options before registering your IEventProcessor implementation.

 

Controlling the runtime

Additionally the call to RegisterEventProcessorAsync allows for a parameter EventProcessorOptions. This is where you can control the behavior of the EventProcessorHost itself. There are four properties and one event that you should be aware of; these are pulled together in a short sketch after this list.

 

  • MaxBatchSize – this is the maximum size of the collection the user wants to receive in an invocation of ProcessEventsAsync. Note that this is not the minimum, only the maximum. If there are not this many messages to be received the ProcessEventsAsync will execute with as many as were available.
  • PrefetchCount – this is a value used by the underlying AMQP channel to determine the upper limit of how many messages the client should receive. This value should be greater than or equal to MaxBatchSize.
  • InvokeProcessorAfterReceiveTimeout – setting this parameter to true will result in ProcessEventsAsync being called when the underlying call to receive events on a partition times out. This is useful for taking time based actions during periods of inactivity on the partition.
  • InitialOffsetProvider – this allows a function pointer or lambda expression to be set that will be called to provide the initial offset when a reader begins reading a partition. Without setting this the reader will start at the oldest event unless a JSON file with an offset has already been saved in the storage account supplied to the EventProcessorHost constructor. This is useful when you want to change the behavior of reader start up. When this method is invoked the object parameter will contain the partition id that the reader is being started for.
  • ExceptionReceived  – this event allows you to receive notification of any underlying exceptions that occur in the EventProcessorHost. If things aren’t working as you expect, this is a great place to start looking.
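
Pulling those options together, here is a minimal sketch (the values are illustrative rather than recommendations, and it assumes the eventProcessorHost instance from the earlier listing):

var options = new EventProcessorOptions
{
    MaxBatchSize = 100,
    PrefetchCount = 200, //keep this >= MaxBatchSize
    InvokeProcessorAfterReceiveTimeout = false,
    //start new readers from "now" rather than from the oldest retained event
    InitialOffsetProvider = partitionId => DateTime.UtcNow
};
options.ExceptionReceived += (sender, e) =>
    Console.WriteLine("Error during '{0}': {1}", e.Action, e.Exception.Message);

await eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>(options);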

 

 

Here is the demo code's implementation:

 

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

using Microsoft.ServiceBus.Messaging;

namespace Receiver
{
    class SimpleEventProcessor : IEventProcessor
    {
        Stopwatch checkpointStopWatch;

        async Task IEventProcessor.CloseAsync(PartitionContext context, CloseReason reason)
        {
            Console.WriteLine("Processor Shutting Down. Partition '{0}', Reason: '{1}'.", 
                context.Lease.PartitionId, reason);
            if (reason == CloseReason.Shutdown)
            {
                await context.CheckpointAsync();
            }
        }

        Task IEventProcessor.OpenAsync(PartitionContext context)
        {
            Console.WriteLine("SimpleEventProcessor initialized.  Partition: '{0}', Offset: '{1}'", 
                context.Lease.PartitionId, context.Lease.Offset);
            this.checkpointStopWatch = new Stopwatch();
            this.checkpointStopWatch.Start();
            return Task.FromResult<object>(null);
        }

        async Task IEventProcessor.ProcessEventsAsync(PartitionContext context,
            IEnumerable<EventData> messages)
        {
            foreach (EventData eventData in messages)
            {
                string data = Encoding.UTF8.GetString(eventData.GetBytes());

                Console.WriteLine(string.Format("Message received.  Partition: '{0}', Data: '{1}'",
                    context.Lease.PartitionId, data));
            }

            //Call checkpoint every 5 minutes, so that worker can resume processing 
            //from the 5 minutes back if it restarts.
            if (this.checkpointStopWatch.Elapsed > TimeSpan.FromMinutes(5))
            {
                await context.CheckpointAsync();
                this.checkpointStopWatch.Restart();
            }
        }
    }
}

 

This code probably needs a little explanation, and one of the best explanations you are likely to find is over on the Service Bus team's web site, which again I will blatantly steal here:

Thread safety & processor instances
It’s important to know that by default EventProcessorHost is thread safe and will behave in a synchronous manner as far as your instance of IEventProcessor is concerned. When events arrive for a particular partition ProcessEventsAsync will be called on the IEventProcessor instance for that partition and will block further calls to ProcessEventsAsync for the particular partition.  Subsequent messages and calls to ProcessEventsAsync will queue up behind the scenes as the message pump continues to run in the background on other threads.  This thread safety removes the need for thread safe collections and dramatically increases performance.
 
Receiving Messages
Each call to ProcessEventsAsync will deliver a collection of events.  It is your responsibility to do whatever it is you intend to do with these events.  Keep in mind you want to keep whatever it is you’re doing relatively fast – i.e. don’t try to do many processes from here – that’s what consumer groups are for.  If you need to write to storage and do some routing it is generally better to use two consumer groups and have two IEventProcessor implementations that run separately.
 
At some point during your processing you’re going to want to keep track of what you have read and completed.  This will be critical if you have to restart reading – so you don’t start back at the beginning of the stream.  EventProcessorHost greatly simplifies this with the concept of Checkpoints.  A Checkpoint is a location, or offset, for a given partition, within a given consumer group, where you are satisfied that you have processed the messages up to that point. It is where you are currently “done”. Marking a checkpoint in EventProcessorHost is accomplished by calling the CheckpointAsync method on the PartitionContext object.  This is generally done within the ProcessEventsAsync method but can be done in CloseAsync as well.
 
CheckpointAsync has two overloads: the first, with no parameters, checkpoints to the highest event offset within the collection returned by ProcessEventsAsync.  This is a “high water mark” in that it is optimistically assuming you have processed all recent events when you call it.  If you use this method in this way be aware that you are expected to perform this after your other event processing code has returned.  The second overload allows you to specify an EventData instance to checkpoint to.  This allows you to use a different type of watermark to checkpoint to.  With this you could implement a “low water mark” – the lowest sequenced event you are certain has been processed. This overload is provided to enable flexibility in offset management.

 
When the checkpoint is performed a JSON file with partition specific information, the offset in particular, is written to the storage account supplied in the constructor to EventProcessorHost.  This file will be continually updated.  It is critical to consider checkpointing in context – it would be unwise to checkpoint every message.  The storage account used for checkpointing probably wouldn’t handle this load, but more importantly checkpointing every single event is indicative of a queued messaging pattern for which a Service Bus Queue may be a better option than an Event Hub.  The idea behind Event Hubs is that you will get at least once delivery at great scale.  By making your downstream systems idempotent it is easy to recover from failures or restarts that result in the same events being received multiple times.
 
Shutting down gracefully
Finally EventProcessorHost.UnregisterEventProcessorAsync allows for the clean shut down of all partition readers and should always be called when shutting down an instance of EventProcessorHost. Failure to do this can cause delays when starting other instances of EventProcessorHost due to lease expiration and Epoch conflicts.

 

http://blogs.msdn.com/b/servicebus/archive/2015/01/16/event-processor-host-best-practices-part-1.aspx
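
In the receiver's MainAsync shown earlier, that graceful shutdown amounts to one extra line after the Console.ReadLine() (a minimal sketch):

Console.WriteLine("Receiving. Press enter key to stop worker.");
Console.ReadLine();
//cleanly release the partition leases before the process exits
await eventProcessorHost.UnregisterEventProcessorAsync();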

 

 

When you run this demo code you will see that 16 partitions are initialized and then messages are dispatched to the partitions.

 

You can grab a starter for this demo from here : https://github.com/sachabarber/EventHubDemo though you WILL need to create an Event Hub in Azure as well as a Storage account. Like I say, full instructions are available on MSDN for this one; I simply followed the getting started example, which you can find here : https://azure.microsoft.com/en-gb/documentation/articles/service-bus-event-hubs-csharp-ephcs-getstarted/

 

 

[Image: console output from the sender and receiver demo apps]

 

 

This post adds absolutely ZERO to the example shown in the link above, and I have borrowed A LOT of material from MSDN; that said, if you have not heard of the Azure Event Hub you may have learnt something here. In my next post however (which may become an article, where I like to show original work), I will be looking to use an Azure Event Hub along with the Azure Stream Analytics service, which I think should be quite cool, and original. I am however sorry this post is so borrowed… a case of could not have said it better myself.

PowerShell : Get Process Stats

At work the other day I found myself needing to gain some minimal process information. I mainly use .NET for my day to day existence (trying to learn Erlang right now, argghh), but yeah, day to day right now it's .NET, so I am obviously aware of the .NET Process class and what it brings to the table.

 

I am also aware of Windows Management Instrumentation (WMI), which is the infrastructure for management data and operations on Windows-based operating systems. As such you may also use WMI queries from .NET, which you may read more about here: http://www.codeproject.com/Articles/12138/Process-Information-and-Notifications-using-WMI
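
As an illustration of that WMI-from-.NET route (not what I ended up using), a minimal sketch with System.Management might look like this:

using System;
using System.Management; //reference System.Management.dll

class ProcessStats
{
    static void Main()
    {
        //WQL query for the same information the PowerShell below grabs
        var searcher = new ManagementObjectSearcher(
            "SELECT Name, HandleCount FROM Win32_Process WHERE Name = 'notepad.exe'");

        foreach (ManagementObject process in searcher.Get())
        {
            Console.WriteLine("{0} : {1} handles", process["Name"], process["HandleCount"]);
        }
    }
}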

 

Thing is, for my purpose I had to use a purely command line solution, so using .NET code was out. Sure, I could have created a small console app, but I just felt there was a better tool for the job. Enter PowerShell.

 

My Requirements

I simply wanted to grab all processes matching a certain name, grab a property value from each process matching that name, and export that to either CSV or XML. I figured this should be something that I could easily do in PowerShell. So let's examine this step by step.

 

Step 1 : Grabbing all processes matching a criteria

Let's say we want to grab all instances of the “notepad” process. This is how we could do that in PowerShell:

 

Get-Process "notepad"

 

This yields this result

 

[Image: Get-Process output listing the notepad process and its properties]

 

Cool, so we have some properties for each process. Mmm, interesting; I wonder if we can grab some of those properties, which is what I need to do.

 

Step 2 : Grabbing the property value of interest for the matched processes

One of the really great things about PowerShell is that you can pipe the results of one operation to the next (using F# a bit, I have come to love the pipe operator). That means I should be able to take these processes and pipe them into another PowerShell operation that extracts one of the properties. Let's try that next. I came up with this (say I wanted to grab just the “Handles” information):

 

Get-Process "notepad" | Select-Object Handles

 

Which yields this result. See how “Handles” is the only thing remaining for the “notepad” process:

[Image: Get-Process output reduced to just the Handles column]

 

If you wanted to be a bit more verbose about how you get these results, the following also work, yielding exactly the same results as those above:

Get-Process | Where-Object {$_.ProcessName -eq 'notepad'} | format-table -property Handles
Get-Process | Where-Object {$_.ProcessName -eq 'notepad'} | Select-Object Handles

 

Ok so what have we achieved so far?

 

Well, we have managed to grab only the processes we are interested in by name, and grab the value of only the property we care about for each of these processes (note only one instance of notepad.exe was running for the screen shots above). Ok, so we are doing well; not much left to do, we simply need to export this to CSV or XML.

 

Step 3a : Export to CSV

Ok, let's export the data to CSV; there must be a way, right? Sure enough PowerShell doesn't let us down. Here is how:

 

Get-Process "notepad" | Select-Object Handles  | Export-Csv file.csv

 

Which gives us the following CSV file

 

#TYPE Selected.System.Diagnostics.Process
"Handles"
"421"

 

 

Step 3b : Export to XML

Exporting to an XML file is just as easy; here is how:

 

Get-Process "notepad" | Select-Object Handles  | Export-Clixml file.xml

 

Which gives us the following XML file

<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">
  <Obj RefId="0">
    <TN RefId="0">
      <T>Selected.System.Diagnostics.Process</T>
      <T>System.Management.Automation.PSCustomObject</T>
      <T>System.Object</T>
    </TN>
    <MS>
      <I32 N="Handles">421</I32>
    </MS>
  </Obj>
</Objs>

 

And there you go, job done. Hope that helped at least someone.

Grunt.js Examination

Lately I have been looking at VS2015 / ASP vNext, and it did not take a genius to see that you need to know NPM/Bower and Gulp/Grunt. I have used NPM before and Bower is easy to pick up. I have not used (but have heard of) Gulp and Grunt before.

I looked at both of these over the past couple of weeks, and decided I liked Grunt better. For those that have not heard of it, Grunt is a task runner for running repetitive tasks. There are lots of examples/resources available for Grunt, but I kind of wanted to look at / try it myself. I have written up my findings in the following article:

http://www.codeproject.com/Articles/995334/Small-Grunt-js-examination

Like I say, this is nothing new, and I expect most web developers would be like “yeah, obviously”; it was however interesting for me as a Grunt newbie, which others may be too.

CQRS Demo

For a while now I have found myself becoming interested in CQRS, and I am fortunate enough to work with a practitioner of CQRS. As such it seemed like a good time to try and learn a bit more about this pattern.

I have created a small demo app that is a fully asynchronous CQRS example.

If this sounds like it may be of interest to you, you can read more about it over at CodeProject: CQRS : A Cross Examination Of How It Works

Git protocol errors when using Bower package manager

I have just got back from a month long holiday (which was great). Anyway, back to work now… sigh.

So the other day I was trying to get Yeoman to scaffold a new angular.js app for me, which worked fine. I then wanted to use the Bower package manager to download a package, and whoever created the package had hosted it on Git. Bower can deal with this just fine, but if, like me, your network is locked down with all sorts of firewall/proxy rules, you may not be able to use the git protocol.

Luckily this is an easy fix; all you need to do is issue this command to have git add a configuration rule that rewrites git URLs to https:

git config --global url."https://".insteadOf git://

What Changes Did This Command Make?

Take a look at your global configuration using:

git config --list

You’ll see the following line in the output:

url.https://.insteadof=git://

You can see how this looks on file by taking a peek at ~/.gitconfig, where you should now see that the following two lines have been added:

[url "https://"]
    insteadOf = git://

And that is all there is to it, everything just worked after that.
