Category Archives: Azure

Small Azure EventGrid + Azure Functions Demo

I am a big fan of reactive programming, and love things like Rx, Akka, and service buses, and I have been meaning to try the new (preview) Azure EventGrid service out for a while.

 

To this end I have given it a little go, hooking it up to an Azure Function, and have written a small article about it here : https://www.codeproject.com/Articles/1220389/Azure-EventGrid-Azure-Function-demo


Azure Service Fabric Demo App

At work we have made use of Azure Service Fabric, and I thought it might be nice to write up some of the fun we had with it. To this end I have written an article on it at codeproject.com, which you can read here : https://www.codeproject.com/Articles/1217885/Azure-Service-Fabric-demo

The article covers :

  • Service Fabric basics
  • IOC
  • Logging (Serilog/Seq)
  • Encryption of connection strings

Anyway, hope you like it.

I’m going to write up this big Scala thing I have been doing, then I may post some more Azure bits and bobs. Adios until then.

 

 

Azure : Upload and stream video content to WPF from blob storage

A while back when Azure first came out I toyed with the idea of uploading video content to Azure Blob Storage, and having it play back in my WPF app. At the time (can’t recall exactly when that was, but quite a while ago) I had some major headaches doing this. The problem stemmed from the fact that the WPF MediaElement and the Azure Blob Storage did not play nicely together.

You just could not seek to an unbuffered / not-yet-downloaded segment of the video and play from there. It simply did not work; you had to wait for the video to download ALL the content up to the point you requested.

 

There is a very good post that discusses this old problem right here : http://programmerpayback.com/2013/01/30/hosting-progressive-download-videos-on-azure-blobs/

 

Previously you had to set the Blob storage API version. Starting from the 2011-08-18 version, you can do partial and pause/resume downloads on blob objects. The nice thing is that your client code doesn’t have to change to achieve this. 
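As a quick aside, you can verify this behaviour yourself with a plain HTTP range request against a blob. This is a minimal sketch of my own (the blob URL is a placeholder), not part of the original demo:

using System;
using System.Net;

class RangeCheck
{
    static void Main()
    {
        // hypothetical blob URL - substitute one from your own storage account
        var request = (HttpWebRequest)WebRequest.Create(
            "https://youraccount.blob.core.windows.net/mycontainer/myvideo.mp4");

        // ask for just the second megabyte of the blob
        request.AddRange(1024 * 1024, (2 * 1024 * 1024) - 1);

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // 206 PartialContent means the range was honoured, which is
            // exactly what MediaElement relies on when seeking
            Console.WriteLine(response.StatusCode);
            Console.WriteLine(response.ContentLength);
        }
    }
}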

 

Luckily this is no longer a problem, so nowadays it is as simple as following these steps:

 

  1. Upload a video (say MP4) to Azure Blob Storage
  2. Grab the Uri of the uploaded video
  3. Use that Uri for a WPF MediaElement

 

I have created a small demo app for you; here is what it looks like after I have uploaded a video and pressed the play button.

 

image

 

The code is dead simple. Here is the XAML (it’s a WPF app):

 

<Window x:Class="WpfMediaPlayerFromBlobstorage.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525" WindowState="Maximized">
    <Grid>
        <DockPanel LastChildFill="True">
           
            <StackPanel Orientation="Horizontal" DockPanel.Dock="Top">
                <Button x:Name="btnUpload" 
                        Click="BtnUpload_OnClick" 
                        Content="Pick MP4 file to upload" 
                        Width="Auto" 
                        Margin="5"
                        Height="23"/>
                <StackPanel Orientation="Horizontal" Margin="50,5,5,5">
                    <StackPanel x:Name="controls" 
                                HorizontalAlignment="Center" 
                                Orientation="Horizontal">

                        <Button x:Name="btnPlay" 
                                Height="23" 
                                Content="Play" 
                                VerticalAlignment="Center"
                                Margin="5"
                                Click="BtnPlay_OnClick" />
                        <Button x:Name="btnPause" 
                                Height="23" 
                                Content="Pause" 
                                VerticalAlignment="Center"
                                Margin="5"
                                Click="BtnPause_OnClick" />
                        <Button x:Name="btnStop" 
                                Height="23" 
                                Content="Stop" 
                                VerticalAlignment="Center"
                                Click="BtnStop_OnClick"
                                Margin="5" />

                        <TextBlock VerticalAlignment="Center" 
                                   Text="Seek To"
                                   Margin="5" />
                        <Slider Name="timelineSlider" 
                                Margin="5" 
                                Height="23"
                                VerticalAlignment="Center"
                                Width="70"
                                ValueChanged="SeekToMediaPosition" />

                    </StackPanel>
                </StackPanel>
            </StackPanel>
            <MediaElement x:Name="player" 
                          Volume="1"
                          LoadedBehavior="Manual"
                          UnloadedBehavior="Manual"
                          HorizontalAlignment="Stretch" 
                          VerticalAlignment="Stretch"
                          Margin="10"
                          MediaOpened="Element_MediaOpened" 
                          MediaEnded="Element_MediaEnded"/>
        </DockPanel>
    </Grid>
</Window>

And here is the code behind (for simplicity I did not use MVVM for this demo)

 

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;

using Microsoft.Win32;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

namespace WpfMediaPlayerFromBlobstorage
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        private static string blobStorageConnectionString =
            "DefaultEndpointsProtocol=http;AccountName=YOUR_ACCOUNT_HERE;AccountKey=YOUR_KEY_HERE";
        private Uri uploadedBlobUri=null;


        public MainWindow()
        {
            InitializeComponent();
            this.controls.IsEnabled = false;
        }

        private async void BtnUpload_OnClick(object sender, RoutedEventArgs e)
        {
            this.controls.IsEnabled = false;
            OpenFileDialog fd = new OpenFileDialog();
            fd.InitialDirectory=@"c:\";
            var result = fd.ShowDialog();
            if (result.HasValue && result.Value)
            {
                try
                {
                    var storageAccount = CloudStorageAccount.Parse(blobStorageConnectionString);
                    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
                    CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
                    container.CreateIfNotExists();
                    CloudBlockBlob blockBlob = container.GetBlockBlobReference("myblob");
                    container.SetPermissions(
                        new BlobContainerPermissions
                        {
                            PublicAccess =
                                BlobContainerPublicAccessType.Blob
                        }
                    );

                    using (var fileStream = File.OpenRead(fd.FileName))
                    {
                        await blockBlob.UploadFromStreamAsync(fileStream);
                        uploadedBlobUri = blockBlob.Uri;
                        this.controls.IsEnabled = true;
                        MessageBox.Show("File uploaded ok");
                    }
                }
                catch (Exception exception)
                {
                    MessageBox.Show("Ooops : " + exception.Message);
                }
            }


           
        }

        private void BtnPlay_OnClick(object sender, RoutedEventArgs e)
        {
            player.Source = uploadedBlobUri;
            player.Play();
            timelineSlider.Value = 0;
        }

        private void BtnPause_OnClick(object sender, RoutedEventArgs e)
        {
            player.Pause();
        }

        private void BtnStop_OnClick(object sender, RoutedEventArgs e)
        {
            player.Stop();
            timelineSlider.Value = 0;
        }

        private void Element_MediaOpened(object sender, EventArgs e)
        {
            timelineSlider.Maximum = player.NaturalDuration.TimeSpan.TotalMilliseconds;
        }

        private void Element_MediaEnded(object sender, EventArgs e)
        {
            player.Stop();
            timelineSlider.Value = 0;
        }


        private void SeekToMediaPosition(object sender, 
		RoutedPropertyChangedEventArgs<double> args)
        {
            int sliderValue = (int)timelineSlider.Value;
            TimeSpan ts = new TimeSpan(0, 0, 0, 0, sliderValue);
            player.Position = ts;
        }
    }
}

And there you have it: a very simple media player that allows play/pause/stop and seek on a video uploaded to Azure Blob Storage.

You can grab this project (you will need to fill in the Azure Blob Storage connection string details with your own account settings) from my github account here : https://github.com/sachabarber/WpfMediaPlayerFromBlobstorage

 

NOTE : If you want more control over encoding/streaming etc., you should check out Azure Media Services.

Azure : Event Hub A First Look

Over the next few weeks I am going to be looking at a couple of things I have had on my backlog for a while (I need to get these things done, so I can make my pushy work colleague happy by learning Erlang). One of the things on that backlog is having a look at Azure Event Hubs.

 

Event Hubs come under the Azure Service Bus umbrella, but are quite different. They are high-throughput pub/sub at massive scale, with low latency and high reliability. To be honest this post will not add much more than you could find on MSDN; in fact even the demo associated with this post is one directly from MSDN. However, in the next series of post(s) I will be showing you some more novel uses of Event Hub(s), which will be my own material.

 

I guess if you have not heard of Azure Event Hubs there will still be some goodness in here, even if I have poached a lot of the content for this post (please forgive me) from MSDN.

 

Event Hubs provides a message stream handling capability and though an Event Hub is an entity similar to queues and topics, it has very different characteristics than traditional enterprise messaging. Enterprise messaging scenarios commonly require a number of sophisticated capabilities such as sequencing, dead-lettering, transaction support, and strong delivery assurances, while the dominant concern for event ingestion is high throughput and processing flexibility for event streams. Therefore, the Azure Event Hubs capability differs from Service Bus topics in that it is strongly biased towards high throughput and event processing scenarios. As such, Event Hubs does not implement some of the messaging capabilities that are available for topics. If you need those capabilities, topics remain the optimal choice.

An Event Hub is created at the namespace level in Service Bus, similar to queues and topics. Event Hubs uses AMQP and HTTP as its primary API interfaces.

 

https://msdn.microsoft.com/library/azure/dn836025.aspx

 

Partitions

In order to create such a high-throughput ingestor (Event Hub), Microsoft used the idea of partitions. I like to use this set of images to help me understand what partitions bring to the table.

 

Regular messaging may be something like this:

image

 

Whilst an Event Hub may be more like this (many lanes)

 

 

What I am trying to show there is that with only one lane, less traffic can travel, but with more lanes, more traffic will flow.

Event Hubs get their throughput by holding n-many partitions. Using the Azure portal the maximum number of partitions you may allocate is 16, though this may be extended if you contact the Microsoft Azure Service Bus team. Each partition can be thought of as a queue (FIFO) of messages. Messages are held for a configurable amount of time; this setting is global across the entire Event Hub, and as such will affect messages held across ALL partitions.

In order to use partitions from your code you should assign a partition key, which ensures that the correct partition gets used. If your publishing code does not supply a partition key, a round-robin assignment will be used, ensuring that each partition is fairly balanced in terms of throughput.
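As a tiny sketch of my own (using the same Microsoft.ServiceBus.Messaging API as the publisher code later in this post; the key value is made up), supplying a key looks like this:

// events sharing a partition key are guaranteed to land on the same partition
EventData eventData = new EventData(Encoding.UTF8.GetBytes("some payload"));
eventData.PartitionKey = "device-42"; // hypothetical key
eventHubClient.Send(eventData);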

Stream Offsets

Within each partition an offset is maintained; this offset can be thought of as a client-side cursor giving the position in the message stream that has been dealt with. The offset should be maintained by the event consumer, and may be used to indicate the position in the stream to start processing from, should communications to the Event Hub be lost.
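As a hedged sketch of my own (using the low-level Microsoft.ServiceBus.Messaging receiver API, not the EventProcessorHost used later in this post), resuming from a saved offset might look like this:

var client = EventHubClient.CreateFromConnectionString(connectionString, eventHubName);
var group = client.GetDefaultConsumerGroup();

// hypothetical literal; in reality you would load this from your own durable store
string savedOffset = "1234";
var receiver = group.CreateReceiver("0" /* partition id */, savedOffset);
EventData next = receiver.Receive();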

Checkpoints

Checkpoints are the responsibility of the consumer, and mark or commit their position within a partition event stream. The consumer can inform the Event Hub when it considers an event stream complete. If a consumer disconnects from a partition, when connection is re-established it begins reading at the checkpoint that was previously submitted. Due to the fact that event data is held for a specified period, it is possible to return older data by specifying a lower offset from this checkpointing process. Through this mechanism, checkpointing enables both failover resiliency and controlled event stream replay.

So How About A Demo

I simply followed the getting started example, which you can find here : https://azure.microsoft.com/en-gb/documentation/articles/service-bus-event-hubs-csharp-ephcs-getstarted/

The Publisher

Here is the entire code for a FULLY working Event Hub publisher

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;


using System.Threading;
using Microsoft.ServiceBus.Messaging;

namespace Sender
{
    class Program
    {

        static string eventHubName = "{Your hub name}";
        static string connectionString = "{Your hub connection string}";
   

        static void Main(string[] args)
        {
            Console.WriteLine("Press Ctrl-C to stop the sender process");
            Console.WriteLine("Press Enter to start now");
            Console.ReadLine();
            SendingRandomMessages();
        }



        static void SendingRandomMessages()
        {
            var eventHubClient = 
                EventHubClient.CreateFromConnectionString(connectionString, eventHubName);
            while (true)
            {
                try
                {
                    var message = Guid.NewGuid().ToString();
                    Console.WriteLine("{0} > Sending message: {1}", 
                        DateTime.Now, message);

                    EventData eventData = new EventData(
                        Encoding.UTF8.GetBytes(message));

                    //This is how you can include metadata
                    //eventData.Properties["someProp"] = "MyEvent";

                    //this is how you would set the partition key
                    //eventData.PartitionKey = 1.ToString();
                    eventHubClient.Send(eventData);
                }
                catch (Exception exception)
                {
                    Console.ForegroundColor = ConsoleColor.Red;
                    Console.WriteLine("{0} > Exception: {1}", 
                        DateTime.Now, exception.Message);
                    Console.ResetColor();
                }

                Thread.Sleep(5000);
            }
        }
    }
}

 

It can be seen above that there is an EventHubClient class that you may use to send events, and that a new event is created using the EventData class. Although I have not used these features, the (commented out) code above also shows how to associate metadata with the event, and how to set a partition key for the message.
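On the consuming side, that metadata comes back via the EventData.Properties dictionary. A tiny sketch of my own, reading back the hypothetical "someProp" key from the commented-out line above:

object someProp;
if (eventData.Properties.TryGetValue("someProp", out someProp))
{
    Console.WriteLine("someProp = {0}", someProp);
}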

The Consumer

The consumer is a little trickier, but not by much; there are only 2 classes of interest in the demo app. The main entry point creates an EventProcessorHost, which the Service Bus team describes like this:

In an effort to alleviate this overhead the Service Bus team has created EventProcessorHost, an intelligent agent for .NET consumers that manages partition access and per-partition offsets for consumers.

To use this class you first must implement the IEventProcessor interface, which has three methods: OpenAsync, CloseAsync, and ProcessEventsAsync. The host setup code is shown below; the simple IEventProcessor implementation (SimpleEventProcessor) follows a little later in the post.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

using Microsoft.ServiceBus.Messaging;
using Microsoft.Threading;

namespace Receiver
{
    class Program
    {
        static void Main(string[] args)
        {
            AsyncPump.Run(MainAsync);
        }


        static async Task MainAsync()
        {
            string eventHubConnectionString = "{Your hub connection string}";
            string eventHubName = "{Your hub name}";
            string storageAccountName = "{Your storage account name}";
            string storageAccountKey = "{Your storage account key}";
            string storageConnectionString = 
                string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}",
                storageAccountName, storageAccountKey);

            string eventProcessorHostName = Guid.NewGuid().ToString();
            EventProcessorHost eventProcessorHost = 
                new EventProcessorHost(
                    eventProcessorHostName, 
                    eventHubName, 
                    EventHubConsumerGroup.DefaultGroupName, 
                    eventHubConnectionString, storageConnectionString);
            var epo = new EventProcessorOptions()
            {
                MaxBatchSize = 100,
                // as noted later in this post, PrefetchCount should be >= MaxBatchSize
                PrefetchCount = 100,
                ReceiveTimeOut = TimeSpan.FromSeconds(20)
            };
            await eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>(epo);


            Console.WriteLine("Receiving. Press enter key to stop worker.");
            Console.ReadLine();
        }
    }
}

 

After implementing this class, instantiate EventProcessorHost, providing the necessary parameters to the constructor:

  • Hostname – be sure not to hard code this; each instance of EventProcessorHost must have a unique value for this within a consumer group.
  • EventHubPath – this is an easy one.
  • ConsumerGroupName – also an easy one; “$Default” is the name of the default consumer group, but it generally is a good idea to create a consumer group for your specific aspect of processing.
  • EventHubConnectionString – this is the connection string to the particular Event Hub, which can be retrieved from the Azure portal. This connection string should have Listen permissions on the Event Hub.
  • StorageConnectionString – this is the storage account that will be used for partition distribution and leases. When checkpointing, the latest offset values will also be stored here.

Finally call RegisterEventProcessorAsync on the EventProcessorHost and register your implementation of IEventProcessor. At this point the agent will begin obtaining leases for partitions and creating receivers to read from them. For each partition that a lease is acquired for, an instance of your IEventProcessor class will be created and then used for processing events from that specific partition.

 

http://blogs.msdn.com/b/servicebus/archive/2015/01/16/event-processor-host-best-practices-part-1.aspx 

Lease management

Checkpointing is not the only use of the storage connection string performed by EventProcessorHost.  Partition ownership (that is reader ownership) is also performed for you.  This way only a single reader can read from any given partition at a time within a consumer group.  This is accomplished using Azure Storage Blob Leases and implemented using Epoch.  This greatly simplifies the auto-scale nature of EventProcessorHost.  As an instance of EventProcessorHost starts it will acquire as many leases as possible and begin reading events. As the leases draw near expiration EventProcessorHost will attempt to renew them by placing a reservation. If the lease is available for renewal the processor continues reading, but if it is not the reader is closed and CloseAsync is called – this is a good time to perform any final cleanup for that partition.

EventProcessorHost has a member PartitionManagerOptions. This member allows for control over lease management. Set these options before registering your IEventProcessor implementation.
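Something along these lines, as a hedged sketch of my own (the exact property names on PartitionManagerOptions may differ between SDK versions, so treat them as assumptions):

// assumed property names - check the PartitionManagerOptions surface in your SDK
eventProcessorHost.PartitionManagerOptions = new PartitionManagerOptions
{
    LeaseInterval = TimeSpan.FromSeconds(30), // how long a lease is held
    RenewInterval = TimeSpan.FromSeconds(10)  // how often to try to renew it
};
// must be set before calling RegisterEventProcessorAsync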

 

Controlling the runtime

Additionally, the call to RegisterEventProcessorAsync accepts an EventProcessorOptions parameter. This is where you can control the behavior of the EventProcessorHost itself. There are four properties and one event that you should be aware of.

 

  • MaxBatchSize – this is the maximum size of the collection the user wants to receive in an invocation of ProcessEventsAsync. Note that this is not the minimum, only the maximum. If there are not this many messages to be received, ProcessEventsAsync will execute with as many as were available.
  • PrefetchCount – this is a value used by the underlying AMQP channel to determine the upper limit of how many messages the client should receive. This value should be greater than or equal to MaxBatchSize.
  • InvokeProcessorAfterReceiveTimeout – setting this parameter to true will result in ProcessEventsAsync being called when the underlying call to receive events on a partition times out. This is useful for taking time-based actions during periods of inactivity on the partition.
  • InitialOffsetProvider – this allows a function pointer or lambda expression to be set that will be called to provide the initial offset when a reader begins reading a partition. Without setting this, the reader will start at the oldest event, unless a JSON file with an offset has already been saved in the storage account supplied to the EventProcessorHost constructor. This is useful when you want to change the behavior of reader start up. When this method is invoked, the object parameter will contain the partition id that the reader is being started for.
  • ExceptionReceived – this event allows you to receive notification of any underlying exceptions that occur in the EventProcessorHost. If things aren’t working as you expect, this is a great place to start looking; a small sketch of wiring it up follows this list.
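Here is a minimal sketch of my own, wiring that event onto the options object used in the host setup code earlier:

var epo = new EventProcessorOptions();
// surface underlying EventProcessorHost failures rather than losing them silently
epo.ExceptionReceived += (sender, e) =>
    Console.WriteLine("EventProcessorHost error: {0}", e.Exception);
await eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>(epo);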

 

 

Here is the demo code's implementation:

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

using Microsoft.ServiceBus.Messaging;
using System.Diagnostics;

namespace Receiver
{
    class SimpleEventProcessor : IEventProcessor
    {
        Stopwatch checkpointStopWatch;

        async Task IEventProcessor.CloseAsync(PartitionContext context, CloseReason reason)
        {
            Console.WriteLine("Processor Shutting Down. Partition '{0}', Reason: '{1}'.", 
                context.Lease.PartitionId, reason);
            if (reason == CloseReason.Shutdown)
            {
                await context.CheckpointAsync();
            }
        }

        Task IEventProcessor.OpenAsync(PartitionContext context)
        {
            Console.WriteLine("SimpleEventProcessor initialized.  Partition: '{0}', Offset: '{1}'", 
                context.Lease.PartitionId, context.Lease.Offset);
            this.checkpointStopWatch = new Stopwatch();
            this.checkpointStopWatch.Start();
            return Task.FromResult<object>(null);
        }

        async Task IEventProcessor.ProcessEventsAsync(PartitionContext context,
            IEnumerable<EventData> messages)
        {
            foreach (EventData eventData in messages)
            {
                string data = Encoding.UTF8.GetString(eventData.GetBytes());

                Console.WriteLine(string.Format("Message received.  Partition: '{0}', Data: '{1}'",
                    context.Lease.PartitionId, data));
            }

            //Call checkpoint every 5 minutes, so that worker can resume processing 
            //from the 5 minutes back if it restarts.
            if (this.checkpointStopWatch.Elapsed > TimeSpan.FromMinutes(5))
            {
                await context.CheckpointAsync();
                this.checkpointStopWatch.Restart();
            }
        }
    }
}

 

This code probably needs a little explanation, and one of the best explanations you are likely to find is over on the Service Bus team's blog, which again I will blatantly steal here:

Thread safety & processor instances
It’s important to know that by default EventProcessorHost is thread safe and will behave in a synchronous manner as far as your instance of IEventProcessor is concerned. When events arrive for a particular partition ProcessEventsAsync will be called on the IEventProcessor instance for that partition and will block further calls to ProcessEventsAsync for the particular partition.  Subsequent messages and calls to ProcessEventsAsync will queue up behind the scenes as the message pump continues to run in the background on other threads.  This thread safety removes the need for thread safe collections and dramatically increases performance.
 
Receiving Messages
Each call to ProcessEventsAsync will deliver a collection of events.  It is your responsibility to do whatever it is you intend to do with these events.  Keep in mind you want to keep whatever it is you’re doing relatively fast – i.e. don’t try to do many processes from here – that’s what consumer groups are for.  If you need to write to storage and do some routing it is generally better to use two consumer groups and have two IEventProcessor implementations that run separately.
 
At some point during your processing you’re going to want to keep track of what you have read and completed.  This will be critical if you have to restart reading – so you don’t start back at the beginning of the stream.  EventProcessorHost greatly simplifies this with the concept of Checkpoints.  A Checkpoint is a location, or offset, for a given partition, within a given consumer group, where you are satisfied that you have processed the messages up to that point. It is where you are currently “done”. Marking a checkpoint in EventProcessorHost is accomplished by calling the CheckpointAsync method on the PartitionContext object.  This is generally done within the ProcessEventsAsync method but can be done in CloseAsync as well.
 
CheckpointAsync has two overloads: the first, with no parameters, checkpoints to the highest event offset within the collection returned by ProcessEventsAsync.  This is a “high water mark” in that it is optimistically assuming you have processed all recent events when you call it.  If you use this method in this way be aware that you are expected to perform this after your other event processing code has returned.  The second overload allows you to specify an EventData instance to checkpoint to.  This allows you to use a different type of watermark to checkpoint to.  With this you could implement a “low water mark” – the lowest sequenced event you are certain has been processed. This overload is provided to enable flexibility in offset management.

 
When the checkpoint is performed a JSON file with partition specific information, the offset in particular, is written to the storage account supplied in the constructor to EventProcessorHost.  This file will be continually updated.  It is critical to consider checkpointing in context – it would be unwise to checkpoint every message.  The storage account used for checkpointing probably wouldn’t handle this load, but more importantly checkpointing every single event is indicative of a queued messaging pattern for which a Service Bus Queue may be a better option than an Event Hub.  The idea behind Event Hubs is that you will get at least once delivery at great scale.  By making your downstream systems idempotent it is easy to recover from failures or restarts that result in the same events being received multiple times.
 
Shutting down gracefully
Finally EventProcessorHost.UnregisterEventProcessorAsync allows for the clean shut down of all partition readers and should always be called when shutting down an instance of EventProcessorHost. Failure to do this can cause delays when starting other instances of EventProcessorHost due to lease expiration and Epoch conflicts.

 

http://blogs.msdn.com/b/servicebus/archive/2015/01/16/event-processor-host-best-practices-part-1.aspx

 

 

When you run this demo code you will see that 16 partitions are initialized, and then messages are dispatched to the partitions.

 

You can grab a starter for this demo from here : https://github.com/sachabarber/EventHubDemo though you WILL need to create an Event Hub in Azure as well as a Storage account. Like I say, full instructions are available on MSDN for this one; I simply followed the getting started example, which you can find here : https://azure.microsoft.com/en-gb/documentation/articles/service-bus-event-hubs-csharp-ephcs-getstarted/

 

 

image

 

 

This post adds absolutely ZERO to the example shown in the link above, and I have borrowed A LOT of material from MSDN. That said, if you have not heard of Azure Event Hubs you may have learnt something here. In my next post (which may become an article, where I like to show original work), I will be looking to use an Azure Event Hub along with the Azure Stream Analytics service, which I think should be quite cool, and original. I am, however, sorry this post is so borrowed… a case of could not have said it better myself.

Azure Cloud Service : Inter role communications

I have been doing a bit more with Azure of late, and one of the things I wanted to try out was inter-role communications between cloud service roles. You can obviously use the Azure Service Bus, where topic-based subscriptions may make some sense, but I felt there might be a lighter approach, and set out to explore this.

I was not alone in this thinking, and found some good stuff out there, which I have reworked into the article below:

 

http://www.codeproject.com/Articles/888469/Azure-Cloud-Service-Inter-role-communications

 

I hope some of you find it useful

Azure : Redis Cache

This is a new post in a series of beginners articles on how to do things in Azure. This series will be for absolute beginners, and if you are not one of those this will not be for you.

You can find the complete set of posts that make up this series here :

https://sachabarbs.wordpress.com/azure/

This time we will look at how to use the preview of the Azure Redis Cache.

For this one you will need access to the new Azure portal:

https://portal.azure.com/

Why Redis Cache

I have used Redis in the past when I was looking into different NoSQL options, which you can read about here :

https://sachabarbs.wordpress.com/2012/05/21/document-dbs-a-quick-look-at-some-of-them/

Redis was very easy to use, and I found it very easy to store complex JSON objects in it; at the time I was using the ServiceStack.Redis client libs.

Here is what Scott Guthrie's blog has to say about the preview Redis Cache for Azure:

Unlike traditional caches which deal only with key-value pairs, Redis is popular for its support of high performance data types, on which you can perform atomic operations such as appending to a string, incrementing the value in a hash, pushing to a list, computing set intersection, union and difference, or getting the member with highest ranking in a sorted set.  Other features include support for transactions, pub/sub, Lua scripting, keys with a limited time-to-live, and configuration settings to make Redis behave more like a traditional cache.

Finally, Redis has a healthy, vibrant open source ecosystem built around it. This is reflected in the diverse set of Redis clients available across multiple languages. This allows it to be used by nearly any application, running on either Windows or Linux, that you host inside of Azure.

http://weblogs.asp.net/scottgu/azure-redis-cache-disaster-recovery-to-azure-tagging-support-elastic-scale-for-sqldb-docdb

 

Creating the Cache

image

This will launch a wizard where you can pick the name, pricing plan, etc.

It may take a few minutes to configure the cache for the first time; you can see this in the portal, as shown below.

image

You may also check on the status of all your configured Azure items in the new Portal, using the BROWSE button, as shown below:

image

Once you have successfully created a Cache, you will see something like this:

image

From here you can grab the keys for your account, which we will use in the next bit of this post

Using A Created Cache

So once you have created a Redis Cache using the preview portal, you will likely want to connect to and use it. So let's have a look at that, shall we?

We start by getting the connection details, which requires 2 steps:

1. Get the URL, which is easy to grab using the properties window, as shown below

image

2. Once you have copied the host name URL, copy the password, which you can grab from the keys area

image

Once you have these 2 bits of information, we are able to start using the Azure Redis Cache.

Code Code Code

You will need to add the following NuGet package : StackExchange.Redis

NuGet command line > Install-Package StackExchange.Redis

Once you have that, it's finally time to start coding. So what do you need to use the cache?

Not much, as it turns out. The following is a fully working program, which stores and retrieves some basic string values, and also a fully serialized object.

NOTE : To see what types you can use with the Redis Cache read this link : http://redis.io/topics/data-types

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
using StackExchange.Redis;

namespace AzureRedisCacheDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(
                "sachabarberredistest.redis.cache.windows.net,ssl=true,password=*** YOUR ACCOUNT KEY HERE ***");

            IDatabase cache = connection.GetDatabase();

            // Perform cache operations using the cache object...
            // Simple put of integral data types into the cache
            cache.StringSet("key1", "dsyavdadsahda");
            cache.StringSet("key2", 25);

            Foo foo1 = new Foo() {Age = 1, Name = "Foo1"};
            var serializedFoo = JsonConvert.SerializeObject(foo1);
            cache.StringSet("serializedFoo", serializedFoo);

            Foo foo3 = new Foo() { Age = 1, Name = "Foo3" };
            // serialize foo3 here (the original code serialized foo1 by mistake,
            // which would make the equality check below fail)
            var serializedFoo3 = JsonConvert.SerializeObject(foo3);
            cache.StringSet("serializedFoo3", serializedFoo3);


            // Simple get of data types from the cache
            string key1 = cache.StringGet("key1");
            int key2 = (int)cache.StringGet("key2");


            var foo2 = JsonConvert.DeserializeObject<Foo>(cache.StringGet("serializedFoo"));
            bool areEqual = foo1 == foo2;


            var foo4 = JsonConvert.DeserializeObject<Foo>(cache.StringGet("serializedFoo3"));
            bool areEqual2 = foo3 == foo4;


            Console.ReadLine();
        }
    }


    
    public class Foo : IEquatable<Foo>
    {
        public string Name { get; set; }
        public int Age { get; set; }

        public bool Equals(Foo other)
        {
            if (ReferenceEquals(null, other)) return false;
            if (ReferenceEquals(this, other)) return true;
            return string.Equals(Name, other.Name) && Age == other.Age;
        }

        public override bool Equals(object obj)
        {
            if (ReferenceEquals(null, obj)) return false;
            if (ReferenceEquals(this, obj)) return true;
            if (obj.GetType() != this.GetType()) return false;
            return Equals((Foo) obj);
        }

        public override int GetHashCode()
        {
            unchecked
            {
                return ((Name != null ? Name.GetHashCode() : 0)*397) ^ Age;
            }
        }

        public static bool operator ==(Foo left, Foo right)
        {
            return Equals(left, right);
        }

        public static bool operator !=(Foo left, Foo right)
        {
            return !Equals(left, right);
        }
    }
}

When you run this code you should see something like this:

image

It is a VERY simple demo, but demonstrates the cache nicely I feel.

In this example I am using JSON.Net to serialize objects to strings, which are then stored in the Redis Cache. You may have some other serializer you prefer, but this does illustrate the point of a working Redis Cache OK, I feel.
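Note that the demo above only uses plain strings. Redis (via StackExchange.Redis) also exposes the richer data types mentioned in the Scott Guthrie quote earlier; here is a tiny sketch of my own, with made-up key names:

cache.StringIncrement("pageViews");              // atomic counter
cache.ListRightPush("recentOrders", "order-1");  // push onto a list
cache.HashSet("user:1", "name", "sacha");        // set a single hash field
string name = cache.HashGet("user:1", "name");   // read the field back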

 

StackExchange.Redis Pub/Sub

Redis may also be used as a pub/sub framework, which you can use as follows:

Here is a basic publisher

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
using StackExchange.Redis;

namespace AzureRedisCachePublisher
{
    class Program
    {
        static void Main(string[] args)
        {
            ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(
                "sachabarberredistest.redis.cache.windows.net,ssl=true,password=*** YOUR ACCOUNT KEY HERE ***");
            ISubscriber sub = connection.GetSubscriber();

            Console.WriteLine("Press a key to pubish");
            Console.ReadLine();



            sub.Publish("messages", "This is from the publisher");

            Console.ReadLine();
        }
    }

}

And here is a basic Subscriber:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
using StackExchange.Redis;

namespace AzureRedisCacheSubscriber
{
    class Program
    {
        static void Main(string[] args)
        {
            ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(
                "sachabarberredistest.redis.cache.windows.net,ssl=true,password=*** YOUR ACCOUNT KEY HERE ***");
            ISubscriber sub = connection.GetSubscriber();

            sub.Subscribe("messages", (channel, message) =>
            {
                Console.WriteLine((string)message);
            });


            Console.ReadLine();
        }
    }


 }

Which when run will give you something like this:

 

image

Anyway, that is all for now; enjoy until next time.

 

 

This has barely scratched the surface of working with StackExchange.Redis; if you want to know more, read the documentation here: https://github.com/StackExchange/StackExchange.Redis

Azure : Blob Storage / Retrieval

This is a new post in a series of beginners articles on how to do things in Azure. This series will be for absolute beginners, and if you are not one of those this will not be for you.

You can find the complete set of posts that make up this series here :

https://sachabarbs.wordpress.com/azure/

This time we will look at how to use Azure blob storage for uploading things like files (images, word documents, whatever you like really).

Introduction To Blob Storage

What exactly is Blob Storage? Well, it is actually very simple: it is Azure-hosted storage that allows you to upload large amounts of unstructured data (typically binary data, so bytes) that may then be shared publicly using http/https.

Typical usage may be:

  • Storing images
  • Storing documents
  • Storing videos
  • Storing music

A typical Azure blob service would make use of these components

image

Account

This would be your Azure storage account. You must create a storage account through the portal : https://manage.windowsazure.com (we will see more on this in just a minute).

Container(s)

These belong to an account, and are used to group blob(s). Each account can have an unlimited number of containers, and each container may contain an unlimited number of blobs.

Blob(s)

Blobs are the Azure-hosted items that hold the originally uploaded binary data. It is the blobs that you eventually end up loading when you share an Azure blob storage url.

There are 2 types of blob storage available

Block Blobs

These are binary blobs of up to 200GB, built from blocks, where you can upload 64MB at one time. Typically for a larger blob you would split things up and upload them in chunks using multiple threads, and Azure will reassemble them, making them available as a single blob; a rough sketch of this follows below.
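Here is a sketch of my own (not from the original post) of what that chunked approach looks like with PutBlock/PutBlockList; it assumes the container object created further down in this post, and the file path and 4MB chunk size are arbitrary:

CloudBlockBlob blockBlob = container.GetBlockBlobReference("big-video.mp4");
var blockIds = new List<string>();
const int blockSize = 4 * 1024 * 1024; // 4MB chunks for this sketch

using (var stream = File.OpenRead(@"C:\temp\big-video.mp4")) // hypothetical file
{
    var buffer = new byte[blockSize];
    int read, index = 0;
    while ((read = stream.Read(buffer, 0, blockSize)) > 0)
    {
        // block ids must be base64 strings of equal length within a blob
        string blockId = Convert.ToBase64String(BitConverter.GetBytes(index++));
        blockIds.Add(blockId);
        blockBlob.PutBlock(blockId, new MemoryStream(buffer, 0, read), null);
    }
}

// commit the blocks, in order, as a single blob
blockBlob.PutBlockList(blockIds);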

Page Blobs

These can be up to 1TB and consist of a collection of 512-byte pages. You set a maximum size when creating the page blob. I have not used these so much, and personally I think they are here to support other Azure features like Virtual Hard Drives (VHDs), which are stored as page blobs in Azure Storage.

 

Url Syntax

The actual blob url format is as shown below:

http://<storage account>.blob.core.windows.net/<container>/<blob>
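For example, with a (hypothetical) account called mystorage, a container called mycontainer and a blob called mandrill.jpg, the url would be:

http://mystorage.blob.core.windows.net/mycontainer/mandrill.jpg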

 

How To Get An Azure Storage Account

The first thing you will need to do is create a storage account, which is easily achieved using the portal. Go to the portal : https://manage.windowsazure.com, and then click new.

image

Then pick storage, and go through creating a new Storage Account

image

 

Then once you have that, you can open the newly created storage account and click the dashboard, which will show you the relevant connection strings that you may use in your application.

image

Using The Storage Emulator

NOTE : If you just want to try things out without going through the process of creating a Storage Account, you can actually use the Storage Emulator, which you can do using an account connection string something like this:

<!-- TODO : This would need to change to live azure value when deployed -->
    <add key="azureStorageConnectionString" value="UseDevelopmentStorage=true;" />

Since I am writing the bulk of these posts on a train without any internet connectivity, I will be using the storage emulator in any demos in this post.

If you want to use the emulator, ensure that it is running, and that you have enabled the storage emulator.

image

image

 

Getting The Right NuGet Package

You will need to install the “WindowsAzure.Storage” NuGet package to work with Azure Blob Storage.

 

Creating A Container

This is done using the following sort of code

// NOTE : the key name must match the one used in your App.config
// (we used "azureStorageConnectionString" above)
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    ConfigurationManager.AppSettings["azureStorageConnectionString"]);

CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

// get blob container
CloudBlobContainer container = blobClient.GetContainerReference("testcontainer5");
container.CreateIfNotExists();


container.SetPermissions(
    new BlobContainerPermissions
    {
        PublicAccess = BlobContainerPublicAccessType.Blob
    });

Uploading A File To Azure Blob Storage

This is done using the following code

CloudBlockBlob cloudBlockBlob = container.GetBlockBlobReference(
    "mandrillBlobUploaedToAzure.jpg");
cloudBlockBlob.Metadata["TypeOfImage"] = "Animal";

// Create or overwrite the "myblob" blob with contents from a local file.
using (var fileStream = System.IO.File.OpenRead(@"C:\Users\User\Pictures\mandrill.jpg"))
{
    cloudBlockBlob.UploadFromStream(fileStream);
    Console.WriteLine("Blob Url : {0}", cloudBlockBlob.Uri);
}

 

Listing All Blobs In A Container

This is done using the following code

// NOTE : container names must be all lower case, so reuse the container created above
container = blobClient.GetContainerReference("testcontainer5");

// Loop over items within the container and output the length and URI.
foreach (IListBlobItem item in container.ListBlobs(null, false))
{
    if (item.GetType() == typeof(CloudBlockBlob))
    {
        CloudBlockBlob blob = (CloudBlockBlob)item;

        Console.WriteLine("Block blob of length {0}: {1}", blob.Properties.Length, blob.Uri);

    }
}

Download Blobs

This is done using this sort of code

CloudBlockBlob blockBlob = container.GetBlockBlobReference("mandrillBlobUploaedToAzure.jpg");

// Save blob contents to a file.
using (var fileStream = System.IO.File.OpenWrite(@"C:\Users\User\Pictures\XXX.jpg"))
{
    blockBlob.DownloadToStream(fileStream);
}

Blob Metadata

Blobs support metadata via a dictionary, which is available through the Metadata property; each entry is a simple key/value pair.
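As a small sketch of my own, reusing the blob from the upload step above, writing metadata and reading it back looks like this:

CloudBlockBlob blob = container.GetBlockBlobReference("mandrillBlobUploaedToAzure.jpg");
blob.Metadata["TypeOfImage"] = "Animal";
blob.SetMetadata();      // push the metadata up to the service

blob.FetchAttributes();  // pull properties and metadata back down
Console.WriteLine(blob.Metadata["TypeOfImage"]);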

 

How To Add Descriptive Metadata To A Blob

This is actually very easy: you just store extra data in either SQL Azure or Azure table storage, where you would include the blob's Url. Job done.

 

Deleting A Blob From A Container

This is done using this sort of code

blockBlob = container.GetBlockBlobReference("mandrillBlobUploaedToAzure.jpg");

// Delete the blob.
blockBlob.Delete();

Azure : SQL Azure

This is a new post in a series of beginners articles on how to do things in Azure. This series will be for absolute beginners, and if you are not one of those this will not be for you.

You can find the complete set of posts that make up this series here :

https://sachabarbs.wordpress.com/azure/

This time we will look at how to create a new SQL Server database within Azure (in a later article we will look at using Microsoft's NoSQL database “DocumentDB”).

Anyway, step 1 is to open up the portal:

https://manage.windowsazure.com

 

From there you can click on “SQL Databases” and choose the “Create a SQL Database” hyperlink

image

From there you need to fill in your preferences within the wizard

image

image

Once this wizard has completed you will see a new database has been created

image

IMPORTANT : When the database is created, you will need to ensure that the standard port 1433 is opened. One of the easiest ways to do that is to use the Azure portal to query the database (even though there are no tables in the database yet).

image

This little cheat will prompt you to open up the firewall ports, which is great; let's just let the Azure portal do this work for us.

image

So once the port is open, you will be redirected to an app in the browser (a Silverlight app at present) that lets you use the connection details you chose.

image

When you successfully connect you should see something like this

image

Now there is no data in the SQL database yet. We could use this Silverlight app to add some tables and data; however I would prefer to do that in Visual Studio, so let's go back to the portal and open the connection strings, as shown below.

image

image

We are interested in the ADO .NET one, where the highlighted part is the important bit you need.

image
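It will be something along these lines (server name, database, user and password are all placeholders here, not real values):

Server=tcp:yourserver.database.windows.net,1433;Database=yourdatabase;User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;Connection Timeout=30;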

So, grabbing the connection address to the Azure SQL Server instance, let's connect via Visual Studio and create a table.

image

Once you have a connection in Visual Studio, let's create a new table using the context menu.

image

When you are happy with the table, click the “Update” button, which will push the changes to Azure. This is only a demo; for a real app you would likely have some sort of scripts, or would use the Entity Framework migrations facility to manage changes (a sketch of that flow follows the screenshots below).

image

image
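As an aside, if you did go the migrations route, the usual Package Manager Console flow is something like this (the migration name is made up):

PM> Enable-Migrations
PM> Add-Migration CreateOrdersTable
PM> Update-Database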

So now let's check everything worked by connecting to the Azure SQL database from SQL Server Management Studio.

image

As we can see, the table we just created above is there.

image

And let's also check the Azure portal query app.

image

Yep, the table looks good; there is no data there yet, as expected for a new table. So let's now turn our attention to getting some data into the new table.

Let's use a new Entity Framework model to talk to the new SQL Azure database/table we just created.

image

I went with the defaults but you can choose what you like

image

This will result in a few files being created in the demo app, such as these, as well as an entry in the App.Config file to point to the SQL Azure database instance

image

And here is some code that will do some basic CRUD operations using the Entity Framework context that was created for us.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace SQLAzureTest
{
    class Program
    {
        static void Main(string[] args)
        {

            //insert
            using (var sachaAzureSQLEntities = new SachaAzureSQLEntities())
            {
                sachaAzureSQLEntities.Orders.Add(new Order()
                {
                    //note we are spelling this wrong so we can update it later
                    Description = "Buzz Lighyear toy",
                    Quanitity = 1
                });
                sachaAzureSQLEntities.SaveChanges();



                //select
                var order = sachaAzureSQLEntities.Orders.Single(
                    x => x.Description == "Buzz Lighyear toy");
                Console.WriteLine("Order : Id: {0}, Description: {1}, Quanity {2}",
                    order.Id, order.Description, order.Quanitity);

                //update
                order.Description = "Buzz Lightyear toy";
                sachaAzureSQLEntities.SaveChanges();

                var exists = sachaAzureSQLEntities.Orders.Any(
                    x => x.Description == "Buzz Lighyear toy");
                Console.WriteLine("Buzz Lighyear toy exists :  {0}", exists);

                order = sachaAzureSQLEntities.Orders.Single(
                    x => x.Description == "Buzz Lightyear toy");
                Console.WriteLine("Order : Id: {0}, Description: {1}, Quanity {2}",
                    order.Id, order.Description, order.Quanitity);



                //delete
                sachaAzureSQLEntities.Orders.Remove(order);
                sachaAzureSQLEntities.SaveChanges();

                Console.WriteLine("Orders count :  {0}", 
                    sachaAzureSQLEntities.Orders.Count());


            }

            Console.ReadLine();
        }
    }
}

And here is the results of this against the SQL Azure instance we just created.

image

Azure : Provisioning a Virtual Machine

This is a new post in a series of beginners articles on how to do things in Azure. This series will be for absolute beginners, and if you are not one of those this will not be for you.

You can find the complete set of posts that make up this series here :

https://sachabarbs.wordpress.com/azure/

This time we will look at how to create a new Virtual Machine using the Compute element of Microsoft Azure.

This again will be quite a screenshot-heavy post (it's only a couple of the posts that will be like this; a lot of the subsequent ones will be much more codey, which is probably a good thing, as there is a new version of the Azure portal waiting in the wings that has a different look and feel).

Anyway, step 1 is to open up the portal:

https://manage.windowsazure.com

From there you will want to create a new Virtual Machine; this can be done by clicking on the “New” button in the Azure portal.

image

From here you can choose what new thing you want to create in Azure. As this post is all about Virtual Machines, we will choose to create a Virtual Machine using the Azure compute element, as can be seen in the screenshot below. Azure comes with a whole load of pre-canned images, where the most common ones are shown in the list. You can also choose to examine more images, which is what we will be doing in this post, so you would use the “More images” drop-down item, as shown below.

image

This will take you to a wizard that allows you to choose your Virtual Machine image and setup, and username/password

image

 

image

image

image

Once you have completed the wizard, a new Virtual Machine should be listed in the Azure portal for you, as shown below for this post's example.

image

You can go into the Virtual Machine and view information about it, such as its public IP address, its DNS name, etc.

image

More important is the “Connect” button. When you click that, a new Remote Desktop item will be created and downloaded for you. You can then use that to gain access to the Virtual Machine you set up.

One word of warning though: the Status of the Virtual Machine MUST be “Running”. The status of the Virtual Machine can be seen in the screenshot above.

image

image

Here is me connected to a Virtual Machine (a standard Windows 2008 server) that I created another time.

image

Anyway, hope that helps someone; another step along the Azure highway.

Azure : How to publish a web site from VS2013 to Azure

This is the first post in a series of beginners articles on how to do things in Azure. This series will be for absolute beginners, and if you are not one of those this will not be for you.

You can find the complete set of posts that make up this series here :

 https://sachabarbs.wordpress.com/azure/

This time we will be looking at how to create and publish a web site to Azure using the Azure web portal (there is a new version of this coming soon, but for now it looks as shown below).

The first thing you will need is an Azure subscription; if you don’t have one of these you will need to get one. Once you have that you can sign into the portal.

Once you have signed into the Azure portal you simply need to create a new web site. I personally use the “Quick Create” option, where you then need to pick the Url and region for the new site.

image

This will then create a new website placeholder in Azure. You now need to go into the newly created placeholder web site, which will take you into the Azure page for the new web site.

image

From there you will be able to download the publish profile. This will be a single file which you should store somewhere where you will remember it.

image

Now you can go into Visual Studio 2013 and create a new website. I chose to go with one of the standard templates, where I simply altered the content a bit, so that I would know the publishing step we do later actually worked.

image

Once you are happy with your awesome web site you can right click and use the “Publish Web Site” menu.

image

This will then present you with some choices. You can use the “Import” option, and then browse to the publish profile settings file you downloaded in the Azure web portal step.

image

All you then need to do is follow the wizard through, and make sure you click “publish” at the end; you should then end up with a lovely website hosted in Azure, as can be seen from the screenshot below.

image

Not rocket science I know, but as I say this is for absolute Azure beginners.