AWS

AWS SWF

What are we talking about this time?

Last time we talked about the Simple Email Service (SES). This time we will talk about AWS SWF.

Initial setup

If you did not read the very first part of this series of posts, I urge you to go and read that one now as it shows you how to get started with AWS, and create an IAM user : https://sachabarbs.wordpress.com/2018/08/30/aws-initial-setup/

Where is the code

The code for this post can be found here in GitHub : https://github.com/sachabarber/AWS/tree/master/Compute/SWF

What are we talking about this time?

This time we will be talking about SWF (which stands for Simple Workflow Service), about which Amazon has this to say:

Introduction To SWF

A growing number of applications are relying on asynchronous and distributed processing. The scalability of such applications is the primary motivation for using this approach. By designing autonomous distributed components, developers have the flexibility to deploy and scale out parts of the application independently if the load on the application increases. Another motivation is the availability of cloud services. As application developers start taking advantage of cloud computing, they have a need to combine their existing on-premises assets with additional cloud-based assets. Yet another motivation for the asynchronous and distributed approach is the inherent distributed nature of the process being modeled by the application; for example, the automation of an order-fulfillment business process may span several systems and human tasks.

Developing such applications can be complicated. It requires that you coordinate the execution of multiple distributed components and deal with the increased latencies and unreliability inherent in remote communication. To accomplish this, you would typically need to write complicated infrastructure involving message queues and databases, along with the complex logic to synchronize them.

The Amazon Simple Workflow Service (Amazon SWF) makes it easier to develop asynchronous and distributed applications by providing a programming model and infrastructure for coordinating distributed components and maintaining their execution state in a reliable way. By relying on Amazon SWF, you are freed to focus on building the aspects of your application that differentiate it.

Simple Workflow Concepts

The basic concepts necessary for understanding Amazon SWF workflows are introduced below and are explained further in the subsequent sections of this guide. The following discussion is a high-level overview of the structure and components of a workflow.

The fundamental concept in Amazon SWF is the workflow. A workflow is a set of activities that carry out some objective, together with logic that coordinates the activities. For example, a workflow could receive a customer order and take whatever actions are necessary to fulfill it. Each workflow runs in an AWS resource called a domain, which controls the workflow’s scope. An AWS account can have multiple domains, each of which can contain multiple workflows, but workflows in different domains can’t interact.

When designing an Amazon SWF workflow, you precisely define each of the required activities. You then register each activity with Amazon SWF as an activity type. When you register the activity, you provide information such as a name and version, and some timeout values based on how long you expect the activity to take. For example, a customer may have an expectation that an order will ship within 24 hours. Such expectations would inform the timeout values that you specify when registering your activities.

In the process of carrying out the workflow, some activities may need to be performed more than once, perhaps with varying inputs. For example, in a customer-order workflow, you might have an activity that handles purchased items. If the customer purchases multiple items, then this activity would have to run multiple times. Amazon SWF has the concept of an activity task that represents one invocation of an activity. In our example, the processing of each item would be represented by a single activity task.

An activity worker is a program that receives activity tasks, performs them, and provides results back. Note that the task itself might actually be performed by a person, in which case the person would use the activity worker software for the receipt and disposition of the task. An example might be a statistical analyst, who receives sets of data, analyzes them, and then sends back the analysis.

Activity tasks—and the activity workers that perform them—can run synchronously or asynchronously. They can be distributed across multiple computers, potentially in different geographic regions, or they can all run on the same computer. Different activity workers can be written in different programming languages and run on different operating systems. For example, one activity worker might be running on a desktop computer in Asia, whereas a different activity worker might be running on a hand-held computer device in North America.

The coordination logic in a workflow is contained in a software program called a decider. The decider schedules activity tasks, provides input data to the activity workers, processes events that arrive while the workflow is in progress, and ultimately ends (or closes) the workflow when the objective has been completed.

The role of the Amazon SWF service is to function as a reliable central hub through which data is exchanged between the decider, the activity workers, and other relevant entities such as the person administering the workflow. Amazon SWF also maintains the state of each workflow execution, which saves your application from having to store the state in a durable way.

The decider directs the workflow by receiving decision tasks from Amazon SWF and responding back to Amazon SWF with decisions. A decision represents an action or set of actions which are the next steps in the workflow. A typical decision would be to schedule an activity task. Decisions can also be used to set timers to delay the execution of an activity task, to request cancellation of activity tasks already in progress, and to complete or close the workflow.

The mechanism by which both the activity workers and the decider receive their tasks (activity tasks and decision tasks respectively) is by polling the Amazon SWF service.

Amazon SWF informs the decider of the state of the workflow by including, with each decision task, a copy of the current workflow execution history. The workflow execution history is composed of events, where an event represents a significant change in the state of the workflow execution. Examples of events would be the completion of a task, notification that a task has timed out, or the expiration of a timer that was set earlier in the workflow execution. The history is a complete, consistent, and authoritative record of the workflow’s progress.

Amazon SWF access control uses AWS Identity and Access Management (IAM), which allows you to provide access to AWS resources in a controlled and limited way that doesn’t expose your access keys. For example, you can allow a user to access your account, but only to run certain workflows in a particular domain.

Workflow Execution

Bringing together the ideas discussed in the preceding sections, here is an overview of the steps to develop and run a workflow in Amazon SWF:

  • Write activity workers that implement the processing steps in your workflow.
  • Write a decider to implement the coordination logic of your workflow.
  • Register your activities and workflow with Amazon SWF.
  • You can do this step programmatically or by using the AWS Management Console.
  • Start your activity workers and decider.
  • These actors can run on any computing device that can access an Amazon SWF endpoint. For example, you could use compute instances in the cloud, such as Amazon Elastic Compute Cloud (Amazon EC2); servers in your data center; or even a mobile device, to host a decider or activity worker. Once started, the decider and activity workers should start polling Amazon SWF for tasks.
  • Start one or more executions of your workflow.
  • Executions can be initiated either programmatically or via the AWS Management Console.
  • Each execution runs independently and you can provide each with its own set of input data. When an execution is started, Amazon SWF schedules the initial decision task. In response, your decider begins generating decisions which initiate activity tasks. Execution continues until your decider makes a decision to close the execution.
  • View workflow executions using the AWS Management Console.
  • You can filter and view complete details of running as well as completed executions. For example, you can select an open execution to see which tasks have completed and what their results were.

Taken from https://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dg-intro-to-swf.html
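As an aside on the IAM point mentioned above, a policy that lets a user run workflows only in a particular domain might look something like the sketch below. This is illustrative only (the account ID and domain name are placeholders), and the full set of supported actions and condition keys is in the SWF developer guide:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "swf:*",
      "Resource": "arn:aws:swf:*:123456789012:/domain/SwfDemoDomain"
    }
  ]
}
```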

So after reading that, we know there are a few things we would need to implement ourselves. We will look at each of these now.

Initiator

This is not official lingo, just a phrase I am coining, but in my lingo the initiator is the piece that registers the domain, the activities and the workflow, and starts the workers/decider. There is quite a lot of boilerplate code here, so it's best to just examine it. One important point is that when you start the workflow you may pass in some state using the Input field, which can hold serialized data to send to the workers.

Here is the code

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Linq;
using Amazon;
using Amazon.SimpleWorkflow;
using Amazon.SimpleWorkflow.Model;
using System.Threading;

namespace SwfInitiator
{
    class Program
    {
        static string domainName = "SwfDemoDomain";
        static IAmazonSimpleWorkflow SwfClient = AWSClientFactory.CreateAmazonSimpleWorkflowClient();

        public static void Main(string[] args)
        {
            Console.Title = "INITIATOR";

            string workflowName = "SwfDemo Workflow";

            // Setup
            RegisterDomain();
            RegisterActivity("Activity1A", "Activity1");
            RegisterActivity("Activity1B", "Activity1");
            RegisterActivity("Activity2", "Activity2");
            RegisterWorkflow(workflowName);

            //// Launch workers to service Activity1A and Activity1B
            ////  This is achieved by sharing the same tasklist name (i.e. "Activity1")
            StartProcess(@"..\..\..\Worker\bin\Debug\SwfWorker", new[] { "Activity1" });
            StartProcess(@"..\..\..\Worker\bin\Debug\SwfWorker", new[] { "Activity1" });

            //// Launch Workers for Activity2
            StartProcess(@"..\..\..\Worker\bin\Debug\SwfWorker", new[] { "Activity2" });
            StartProcess(@"..\..\..\Worker\bin\Debug\SwfWorker", new[] { "Activity2" });

            //// Start the Deciders, which defines the structure/flow of Workflow
            StartProcess(@"..\..\..\Decider\bin\Debug\SwfDecider", null);
            Thread.Sleep(1000);

            //Start the workflow
            Task.Run(() => StartWorkflow(workflowName));

            Console.Read();
        }

        static void StartProcess(string processLocation, string[] args)
        {
            var p = new System.Diagnostics.Process();
            p.StartInfo.FileName = processLocation;
            if (args != null)
                p.StartInfo.Arguments = String.Join(" ", args);
            p.StartInfo.RedirectStandardOutput = false;
            p.StartInfo.UseShellExecute = true;
            p.StartInfo.WindowStyle = System.Diagnostics.ProcessWindowStyle.Normal;
            p.Start();
        }

        static void RegisterDomain()
        {
            // Register if the domain is not already registered.
            var listDomainRequest = new ListDomainsRequest()
            {
                RegistrationStatus = RegistrationStatus.REGISTERED
            };

            if (SwfClient.ListDomains(listDomainRequest).DomainInfos.Infos.FirstOrDefault(
                                                      x => x.Name == domainName) == null)
            {
                var request = new RegisterDomainRequest()
                {
                    Name = domainName,
                    Description = "Swf Demo",
                    WorkflowExecutionRetentionPeriodInDays = "1"
                };

                try
                {
                    SwfClient.RegisterDomain(request);
                    Console.WriteLine("INITIATOR: Created Domain - " + domainName);
                }
                catch (DomainAlreadyExistsException)
                {
                    // Another process registered the domain first; safe to ignore
                }
            }
        }

        static void RegisterActivity(string name, string tasklistName)
        {
            // Register activities if it is not already registered
            var listActivityRequest = new ListActivityTypesRequest()
            {
                Domain = domainName,
                Name = name,
                RegistrationStatus = RegistrationStatus.REGISTERED
            };

            if (SwfClient.ListActivityTypes(listActivityRequest).ActivityTypeInfos.TypeInfos.FirstOrDefault(
                                          x => x.ActivityType.Version == "2.0") == null)
            {
                var request = new RegisterActivityTypeRequest()
                {
                    Name = name,
                    Domain = domainName,
                    Description = "Swf Demo Activities",
                    Version = "2.0",
                    DefaultTaskList = new TaskList() { Name = tasklistName }, // Workers poll based on this task list
                    DefaultTaskScheduleToCloseTimeout = "300",
                    DefaultTaskScheduleToStartTimeout = "150",
                    DefaultTaskStartToCloseTimeout = "450",
                    DefaultTaskHeartbeatTimeout = "NONE",
                };
                try
                {
                    SwfClient.RegisterActivityType(request);
                    Console.WriteLine($"INITIATOR: Created Activity Name - {request.Name}");
                }
                catch (TypeAlreadyExistsException)
                {
                    // Another process registered the activity type first; safe to ignore
                }
            }
        }

        static void RegisterWorkflow(string name)
        {
            // Register workflow type if not already registered
            var listWorkflowRequest = new ListWorkflowTypesRequest()
            {
                Name = name,
                Domain = domainName,
                RegistrationStatus = RegistrationStatus.REGISTERED
            };

            if (SwfClient.ListWorkflowTypes(listWorkflowRequest)
                .WorkflowTypeInfos
                .TypeInfos
                .FirstOrDefault(x => x.WorkflowType.Version == "2.0") == null)
            {
                var request = new RegisterWorkflowTypeRequest()
                {
                    DefaultChildPolicy = ChildPolicy.TERMINATE,
                    DefaultExecutionStartToCloseTimeout = "300",
                    DefaultTaskList = new TaskList()
                    {
                        Name = "SwfDemo" // The decider polls this task list
                    },
                    DefaultTaskStartToCloseTimeout = "150",
                    Domain = domainName,
                    Name = name,
                    Version = "2.0"
                };
                try
                {
                    SwfClient.RegisterWorkflowType(request);
                    Console.WriteLine($"INITIATOR: Registered Workflow Name - {request.Name}");
                }
                catch (TypeAlreadyExistsException)
                {
                    // Another process registered the workflow type first; safe to ignore
                }
            }
        }

        static void StartWorkflow(string name)
        {
            string workflowID = $"Swf DemoID - {DateTime.Now.Ticks}";
            SwfClient.StartWorkflowExecution(new StartWorkflowExecutionRequest()
            {
                Input = "{\"inputparam1\":\"value1\"}", // Serialize input to a string
                WorkflowId = workflowID,
                Domain = domainName,
                WorkflowType = new WorkflowType()
                {
                    Name = name,
                    Version = "2.0"
                }
            });
            Console.WriteLine($"INITIATOR: Workflow Instance created ID={workflowID}");
        }
    }
}

 

Decider

So the decider is the piece that schedules activities for workers; it is also responsible for listening for the workers' responses and working out how to act on that information. It does this by examining the history of decision task events, which it gets by polling the SWF engine. From these historical events it can deduce whether certain activities have completed, and based on that it may complete the workflow or schedule more tasks. Again, the code does a better job of explaining this than I do.

 

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.SimpleWorkflow;
using Amazon.SimpleWorkflow.Model;

namespace SwfDeciderDecider
{
    class Program
    {
        static string domainName = "SwfDemoDomain";
        static IAmazonSimpleWorkflow SwfDeciderClient =
                    AWSClientFactory.CreateAmazonSimpleWorkflowClient();

        public static void Main(string[] args)
        {
            Console.Title = "DECIDER";
            // Start the Deciders, which defines the structure/flow of Workflow
            Task.Run(() => Decider());
            Console.Read();
        }


        // Simple logic:
        //  Creates four activities at the beginning,
        //  waits for them to complete, then completes the workflow
        static void Decider()
        {
            int activityCount = 0; // This refers to total number of activities per workflow
            while (true)
            {
                Console.WriteLine("DECIDER: Polling for decision task ...");
                var request = new PollForDecisionTaskRequest()
                {
                    Domain = domainName,
                    TaskList = new TaskList() { Name = "SwfDemo" }
                };

                var response = SwfDeciderClient.PollForDecisionTask(request);
                if (response.DecisionTask.TaskToken == null)
                {
                    Console.WriteLine("DECIDER: NULL");
                    continue;
                }

                int completedActivityTaskCount = 0, totalActivityTaskCount = 0;
                foreach (HistoryEvent e in response.DecisionTask.Events)
                {
                    Console.WriteLine($"DECIDER: EventType - {e.EventType}" +
                        $", EventId - {e.EventId}");
                    if (e.EventType == "ActivityTaskCompleted")
                        completedActivityTaskCount++;
                    if (e.EventType.Value.StartsWith("Activity"))
                        totalActivityTaskCount++;
                }
                Console.WriteLine($"completedCount={completedActivityTaskCount}");

                var decisions = new List<Decision>();
                if (totalActivityTaskCount == 0) // Create these only at the beginning
                {
                    ScheduleActivity("Activity1A", decisions);
                    ScheduleActivity("Activity1B", decisions);
                    ScheduleActivity("Activity2", decisions);
                    ScheduleActivity("Activity2", decisions);
                    activityCount = 4;
                }
                else if (completedActivityTaskCount == activityCount)
                {
                    var decision = new Decision()
                    {
                        DecisionType = DecisionType.CompleteWorkflowExecution,
                        CompleteWorkflowExecutionDecisionAttributes =
                          new CompleteWorkflowExecutionDecisionAttributes
                          {
                              Result = "{\"Result\":\"WF Complete!\"}"
                          }
                    };
                    decisions.Add(decision);

                    Console.WriteLine("DECIDER: WORKFLOW COMPLETE");
                }
                var respondDecisionTaskCompletedRequest =
                    new RespondDecisionTaskCompletedRequest()
                    {
                        Decisions = decisions,
                        TaskToken = response.DecisionTask.TaskToken
                    };
                SwfDeciderClient.RespondDecisionTaskCompleted(respondDecisionTaskCompletedRequest);
            }
        }

        static void ScheduleActivity(string name, List<Decision> decisions)
        {
            var decision = new Decision()
            {
                DecisionType = DecisionType.ScheduleActivityTask,
                ScheduleActivityTaskDecisionAttributes =  
                  new ScheduleActivityTaskDecisionAttributes()
                  {
                      ActivityType = new ActivityType()
                      {
                          Name = name,
                          Version = "2.0"
                      },
                      ActivityId = name + "-" + System.Guid.NewGuid().ToString(),
                      Input = "{\"activityInput1\":\"value1\"}"
                  }
            };
            Console.WriteLine($"DECIDER: ActivityId={decision.ScheduleActivityTaskDecisionAttributes.ActivityId}");
            decisions.Add(decision);
        }
    }
}
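One decision type the decider above does not use is the timer. As a sketch (this is not part of the sample repo), a decider could delay work by adding a StartTimer decision to its decisions list; when the timer fires, a TimerFired event with the matching TimerId appears in the workflow history and the decider receives a new decision task to react to. The types below are from the Amazon.SimpleWorkflow.Model namespace:

```csharp
// Sketch only: schedule a timer that fires after 60 seconds.
var timerDecision = new Decision()
{
    DecisionType = DecisionType.StartTimer,
    StartTimerDecisionAttributes = new StartTimerDecisionAttributes()
    {
        // The TimerId lets you match the later TimerFired history event
        TimerId = "WaitBeforeRetry-" + Guid.NewGuid(),
        StartToFireTimeout = "60" // seconds, expressed as a string
    }
};
decisions.Add(timerDecision);
```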

Worker

So as stated above, the workers poll for activity tasks, which they do using a PollForActivityTaskRequest. If they get a task that matches their task list, they do the work described by the PollForActivityTaskResponse, whose ActivityTask has an Input property that can contain serialized data for the worker. Once the worker is done, it submits a RespondActivityTaskCompletedRequest back to the SWF engine using the SwfClient.RespondActivityTaskCompleted(...) method.

This will certainly be easier to understand with some code, which is shown here

using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.SimpleWorkflow;
using Amazon.SimpleWorkflow.Model;

namespace SwfWorker
{
    class Program
    {
        static string domainName = "SwfDemoDomain";
        static IAmazonSimpleWorkflow SwfClient = AWSClientFactory.CreateAmazonSimpleWorkflowClient();

        public static void Main(string[] args)
        {
            string tasklistName = args[0];
            Console.Title = tasklistName.ToUpper();
            Task.Run(() => Worker(tasklistName));
            Console.Read();
        }


        static void Worker(string tasklistName)
        {
            string prefix = string.Format("WORKER {0}:{1:x} ", tasklistName,
                                  System.Threading.Thread.CurrentThread.ManagedThreadId);
            while (true)
            {
                Console.WriteLine($"{prefix} : Polling for activity task ...");
                var pollForActivityTaskRequest =
                    new PollForActivityTaskRequest()
                    {
                        Domain = domainName,
                        TaskList = new TaskList()
                        {
                            // Poll only the tasks assigned to me
                            Name = tasklistName
                        }
                    };

                var pollForActivityTaskResponse =        
                    SwfClient.PollForActivityTask(pollForActivityTaskRequest);

                if (pollForActivityTaskResponse.ActivityTask.ActivityId == null)
                {
                    Console.WriteLine($"{prefix} : NULL");
                }
                else
                {
                    Console.WriteLine($"{prefix} : saw Input {pollForActivityTaskResponse.ActivityTask.Input}");

                    var respondActivityTaskCompletedRequest = new RespondActivityTaskCompletedRequest()
                        {
                            Result = "{\"activityResult1\":\"Result Value1\"}",
                            TaskToken = pollForActivityTaskResponse.ActivityTask.TaskToken
                        };

                    var respondActivityTaskCompletedResponse =
                        SwfClient.RespondActivityTaskCompleted(respondActivityTaskCompletedRequest);
                    Console.WriteLine($"{prefix} : Activity task completed. ActivityId - " +
                        pollForActivityTaskResponse.ActivityTask.ActivityId);
                }
            }
        }


    }
}

SWF Console

So the console has full support for things like

  • Registering domains
  • Register new Activity type
  • Register new Workflow type
  • Listing/re-run executions

But as we just saw, a lot of this can be done in code. The code presented in this post runs on my own PC, but it could run anywhere that has access to an Amazon SWF endpoint.

Here is a taster of what the SWF console looks like

image

Running the demo

So if you open the demo code, ensure your user profile is set up correctly in the App.config settings, set the Initiator as the startup project, and run it, you should see output like the following, with each process started in its own window.

Initiator

INITIATOR: Created Domain – SwfDemoDomain
INITIATOR: Created Activity Name – Activity1A
INITIATOR: Created Activity Name – Activity1B
INITIATOR: Created Activity Name – Activity2
INITIATOR: Registered Workflow Name – SwfDemo Workflow
INITIATOR: Workflow Instance created ID=Swf DemoID – 636779091969304227

Worker typical output

WORKER Activity1:3  : saw Input {"activityInput1":"value1"}
WORKER Activity1:3  : Activity task completed. ActivityId – Activity1B-2c1cec6d-90c8-4146-8c3a-1ba15fed9628
WORKER Activity1:3  : Polling for activity task …
WORKER Activity1:3  : NULL
WORKER Activity1:3  : Polling for activity task …
WORKER Activity1:3  : NULL
WORKER Activity1:3  : Polling for activity task …
WORKER Activity1:3  : NULL
WORKER Activity1:3  : Polling for activity task …
Decider

DECIDER: Polling for decision task …
DECIDER: EventType – WorkflowExecutionStarted, EventId – 1
DECIDER: EventType – DecisionTaskScheduled, EventId – 2
DECIDER: EventType – DecisionTaskStarted, EventId – 3
completedCount=0
DECIDER: ActivityId=Activity1A-1221249f-1c3e-4f4e-baa3-7f2ae4db12d1
DECIDER: ActivityId=Activity1B-2c1cec6d-90c8-4146-8c3a-1ba15fed9628
DECIDER: ActivityId=Activity2-0721eb07-9790-46f5-93c9-142fcc572555
DECIDER: ActivityId=Activity2-51dd0fa7-45de-43b0-b810-8185e12f5ab3
DECIDER: Polling for decision task …
DECIDER: EventType – WorkflowExecutionStarted, EventId – 1
DECIDER: EventType – DecisionTaskScheduled, EventId – 2
DECIDER: EventType – DecisionTaskStarted, EventId – 3
DECIDER: EventType – DecisionTaskCompleted, EventId – 4
DECIDER: EventType – ActivityTaskScheduled, EventId – 5
DECIDER: EventType – ActivityTaskScheduled, EventId – 6
DECIDER: EventType – ActivityTaskScheduled, EventId – 7
DECIDER: EventType – ActivityTaskScheduled, EventId – 8
DECIDER: EventType – ActivityTaskStarted, EventId – 9
DECIDER: EventType – ActivityTaskStarted, EventId – 10
DECIDER: EventType – ActivityTaskStarted, EventId – 11
DECIDER: EventType – ActivityTaskStarted, EventId – 12
DECIDER: EventType – ActivityTaskCompleted, EventId – 13
DECIDER: EventType – DecisionTaskScheduled, EventId – 14
DECIDER: EventType – ActivityTaskCompleted, EventId – 15
DECIDER: EventType – ActivityTaskCompleted, EventId – 16
DECIDER: EventType – ActivityTaskCompleted, EventId – 17
DECIDER: EventType – DecisionTaskStarted, EventId – 18
completedCount=4
DECIDER: WORKFLOW COMPLETE
DECIDER: Polling for decision task …

AWS

AWS Simple Email Service (SES)

What are we talking about this time?

Last time we talked about Step Functions. This time we will be talking about Simple Email Service (SES). I don’t want to cover everything about this service, but as a bare minimum I do want to show you how you can use it to send emails.

Initial setup

If you did not read the very first part of this series of posts, I urge you to go and read that one now as it shows you how to get started with AWS, and create an IAM user : https://sachabarbs.wordpress.com/2018/08/30/aws-initial-setup/

Where is the code

The code for this post can be found here in GitHub : https://github.com/sachabarber/AWS/tree/master/AppServices/SES

What are we talking about this time?

Ok, so as I stated above, this time we are going to be talking about SES. This will be a fairly self-contained, small post; I just want to show how we can use SES to send emails from our own apps.

Setting It Up And Sending An Email

Before we start, it is important to note that the SES service is only available in a limited set of regions. My normal EU-WEST2 is NOT supported, so I have to use EU-WEST1.

So the first step is to setup a verified email which you can do in the SES console here

image

Once you have done that, a verification email gets sent to the address you entered, which you can use to complete the verification process. Then you should be good to use this email address as a sender for SES. You can read more about this process here : https://docs.aws.amazon.com/ses/latest/DeveloperGuide/setting-up-email.html

Assuming you have done this step, it really is as simple as making sure you are using the correct region, and then using the verified email address you just created with some code like this

using Amazon;
using System;
using System.Collections.Generic;
using System.Configuration;
using Amazon.SimpleEmail;
using Amazon.SimpleEmail.Model;
using Amazon.Runtime;

namespace SESSender
{
    class Program
    {
        // Set the sender's email address here (must be a verified address in SES)
        static readonly string senderAddress = "";

        // Set the receiver's email address here.
        static readonly string receiverAddress = "";

        static void Main(string[] args)
        {
            if (CheckRequiredFields())
            {

                using (var client = new AmazonSimpleEmailServiceClient(RegionEndpoint.EUWest1))
                {
                    var sendRequest = new SendEmailRequest
                    {
                        Source = senderAddress,
                        Destination = new Destination { ToAddresses = new List<string> { receiverAddress } },
                        Message = new Message
                        {
                            Subject = new Content("Sample Mail using SES"),
                            Body = new Body { Text = new Content("Sample message content.") }
                        }
                    };
                    try
                    {
                        Console.WriteLine("Sending email using AWS SES...");
                        var response = client.SendEmail(sendRequest);
                        Console.WriteLine("The email was sent successfully.");
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("The email was not sent.");
                        Console.WriteLine("Error message: " + ex.Message);

                    }
                }
            }

            Console.Write("Press any key to continue...");
            Console.ReadKey();
        }

        static bool CheckRequiredFields()
        {
            var appConfig = ConfigurationManager.AppSettings;

            if (string.IsNullOrEmpty(appConfig["AWSProfileName"]))
            {
                Console.WriteLine("AWSProfileName was not set in the App.config file.");
                return false;
            }
            if (string.IsNullOrEmpty(senderAddress))
            {
                Console.WriteLine("The variable senderAddress is not set.");
                return false;
            }
            if (string.IsNullOrEmpty(receiverAddress))
            {
                Console.WriteLine("The variable receiverAddress is not set.");
                return false;
            }
            return true;
        }
    }
}

Remember this service IS NOT available in all regions, so make sure you have the correct region set.

Anyway here is the result of sending the email

image

 

SMTP

To use SMTP instead, you need to set up an SMTP user in the SES console

image

Once you have that, you can then just use code more like this

 

using System;
using System.Net;
using System.Net.Mail;
using System.Configuration;


namespace SESSMTPSample1
{
    class Program
    {
        // Set the sender's email address here.
        static string senderAddress = null;

        // Set the receiver's email address here.
        static string receiverAddress = null;
        
        // Set the SMTP user name in App.config 
        static string smtpUserName = null;

        // Set the SMTP password in App.config 
        static string smtpPassword = null;

        static string host = "email-smtp.eu-west-1.amazonaws.com";

        static int port = 587;

        static void Main(string[] args)
        {
            if (CheckRequiredFields())
            {
                var smtpClient = new SmtpClient(host, port);
                smtpClient.EnableSsl = true;
                smtpClient.Credentials = new NetworkCredential(smtpUserName, smtpPassword);

                var message = new MailMessage(
                                from: senderAddress,
                                to: receiverAddress,
                                subject: "Sample email using SMTP Interface",
                                body: "Sample email.");

                try
                {
                    Console.WriteLine("Sending email using SMTP interface...");
                    smtpClient.Send(message);
                    Console.WriteLine("The email was sent successfully.");
                }
                catch (Exception ex)
                {
                    Console.WriteLine("The email was not sent.");
                    Console.WriteLine("Error message: " + ex.Message);
                }
            }

            Console.Write("Press any key to continue...");
            Console.ReadKey(); 
        }

        static bool CheckRequiredFields()
        {
            var appConfig = ConfigurationManager.AppSettings;

            smtpUserName = appConfig["AwsSesSmtpUserName"];
            if (string.IsNullOrEmpty(smtpUserName))
            {
                Console.WriteLine("AwsSesSmtpUserName is not set in the App.config file.");
                return false;
            }

            smtpPassword = appConfig["AwsSesSmtpPassword"];
            if (string.IsNullOrEmpty(smtpPassword))
            {
                Console.WriteLine("AwsSesSmtpPassword is not set in the App.config file.");
                return false;
            }
            if (string.IsNullOrEmpty(senderAddress))
            {
                Console.WriteLine("The variable senderAddress is not set.");
                return false;
            }
            if (string.IsNullOrEmpty(receiverAddress))
            {
                Console.WriteLine("The variable receiverAddress is not set.");
                return false;
            }
 
            return true;
        }
    }
}
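Both samples read their settings from App.config. A minimal sketch of the relevant appSettings section (the key names are taken from the code above; the values are placeholders) might look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Used by the SES API sample -->
    <add key="AWSProfileName" value="default" />
    <!-- Used by the SMTP sample -->
    <add key="AwsSesSmtpUserName" value="YOUR_SMTP_USERNAME" />
    <add key="AwsSesSmtpPassword" value="YOUR_SMTP_PASSWORD" />
  </appSettings>
</configuration>
```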

 

And that is all I really wanted to show in this short post. Next time we will likely dip back into Compute stuff, where we will look at AWS SWF.

AWS

AWS : Step functions

What are we talking about this time?

Last time we talked about the “Serverless Framework” and how it could help us deploy functions with the minimum of fuss. This time we will be looking at Step Functions.

Initial setup

If you did not read the very first part of this series of posts, I urge you to go and read that one now as it shows you how to get started with AWS, and create an IAM user : https://sachabarbs.wordpress.com/2018/08/30/aws-initial-setup/

Where is the code

The code for this post can be found here in GitHub : https://github.com/sachabarber/AWS/tree/master/Compute/SimpleStepFunction

What are Step Functions?

AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using Step Functions, you can design and run workflows that stitch together services such as AWS Lambda and Amazon ECS into feature-rich applications. Workflows are made up of a series of steps, with the output of one step acting as input into the next. Application development is simpler and more intuitive using Step Functions, because it translates your workflow into a state machine diagram that is easy to understand, easy to explain to others, and easy to change. You can monitor each step of execution as it happens, which means you can identify and fix problems quickly. Step Functions automatically triggers and tracks each step, and retries when there are errors, so your application executes in order and as expected

https://aws.amazon.com/step-functions/, retrieved 23/10/18

How do I get started with Step Functions?

Ok, the first thing you need to do is the initial setup stuff at the top of this post; then we can use the AWS .NET Toolkit, which we also installed as part of the very first post in this series. Let's see what that looks like out of the tin.

 

We start by choosing the “AWS Serverless Application with Tests”

image

This will then give us a screen like this

image

Where we choose the “Step Functions Hello World”, so after doing that we would see something like this shown to us inside Visual Studio.

image

Let's ignore the Tests project for now, and instead have a look at the actual project to try and understand what you get out of the box.

 

state-machine.json

So as we stated above, Step Functions are workflows that are able to chain functions together. Lambda functions by themselves would not know how to do this; they need some other machinery above them orchestrating the flow. This is the job of “state-machine.json”. If we look at the out of the box example, this is the file's contents:

{
  "Comment": "State Machine",
  "StartAt": "Greeting",
  "States": {
    "Greeting": {
      "Type": "Task",
      "Resource": "${GreetingTask.Arn}",
      "Next": "WaitToActivate"
    },
    "WaitToActivate": {
      "Type": "Wait",
      "SecondsPath": "$.WaitInSeconds",
      "Next": "Salutations"
    },
    "Salutations": {
      "Type": "Task",
      "Resource": "${SalutationsTask.Arn}",
      "End": true
    }
  }
}

There are a couple of take away points here:

  • It can be seen that this file represents the possible states, and also expresses how to move from one state to the next
  • The other key thing here is the use of the ${…Arn} notations. These are placeholders that will be resolved via values in the serverless.template. When the project is deployed the contents of state-machine.json are copied into the serverless.template (which we will look at very soon). The insertion location is controlled by the --template-substitutions parameter. The project template presets the --template-substitutions parameter in aws-lambda-tools-defaults.json. The format of the value for --template-substitutions is <json-path>=<file-name>.

    For example this project template sets the value to be:

    --template-substitutions $.Resources.StateMachine.Properties.DefinitionString.Fn::Sub=state-machine.json
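In other words, the JSON path on the left identifies the fragment of serverless.template shown below; at deployment time the tooling injects the serialized contents of state-machine.json into the empty Fn::Sub string, and CloudFormation then resolves the ${…Arn} placeholders. This is a trimmed fragment for illustration; the full template appears later in this post:

```json
{
  "Resources": {
    "StateMachine": {
      "Type": "AWS::StepFunctions::StateMachine",
      "Properties": {
        "DefinitionString": { "Fn::Sub": "" }
      }
    }
  }
}
```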

 

State.cs

using System;
using System.Collections.Generic;
using System.Text;

namespace SimpleStepFunction
{
    /// <summary>
    /// The state passed between the step function executions.
    /// </summary>
    public class State
    {
        /// <summary>
        /// Input value when starting the execution
        /// </summary>
        public string Name { get; set; }

        /// <summary>
        /// The message built through the step function execution.
        /// </summary>
        public string Message { get; set; }

        /// <summary>
        /// The number of seconds to wait between calling the Salutations task and Greeting task.
        /// </summary>
        public int WaitInSeconds { get; set; } 
    }
}

What is a state machine without state? This is the state for the state machine defined in the state-machine.json file.
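For example, starting an execution with an input like the following (the name value is purely illustrative) gives the state machine its initial State object; the Message and WaitInSeconds values are then filled in by the Greeting task as the execution progresses:

```json
{
  "Name": "World"
}
```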

 

StepFunctionTasks.cs

As we saw above, we define our state machine flow in the state-machine.json file, but we still need the actual AWS Lambda functions to call. The functions can reside in any file, as long as each is a valid Lambda function. For the out of the box example, this is the file that goes with the state-machine.json file.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Threading.Tasks;

using Amazon.Lambda.Core;


// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace SimpleStepFunction
{
    public class StepFunctionTasks
    {
        /// <summary>
        /// Default constructor that Lambda will invoke.
        /// </summary>
        public StepFunctionTasks()
        {
        }


        public State Greeting(State state, ILambdaContext context)
        {
            state.Message = "Hello";

            if(!string.IsNullOrEmpty(state.Name))
            {
                state.Message += " " + state.Name;
            }

            // Tell Step Function to wait 5 seconds before calling 
            state.WaitInSeconds = 5;

            return state;
        }

        public State Salutations(State state, ILambdaContext context)
        {
            state.Message += ", Goodbye";

            if (!string.IsNullOrEmpty(state.Name))
            {
                state.Message += " " + state.Name;
            }

            return state;
        }
    }
}

A couple of points to note there:

  • There are 2 Functions here
    • Greeting
    • Salutations
  • There is the State class passed around as state, which may be written to in the functions

 

aws-lambda-tools-defaults.json

This file contains the defaults that will be used to do the deployment; it also shows how to integrate the state-machine.json with the serverless.template file.

{
  "Information" : [
    "This file provides default values for the deployment wizard inside Visual Studio and the AWS Lambda commands added to the .NET Core CLI.",
    "To learn more about the Lambda commands with the .NET Core CLI execute the following command at the command line in the project root directory.",

    "dotnet lambda help",

    "All the command line options for the Lambda command can be specified in this file."
  ],
  "profile":"default",
  "region" : "eu-west-2",
  "configuration" : "Release",
  "framework"     : "netcoreapp2.1",
  "s3-prefix"     : "SimpleStepFunction/",
  "template"      : "serverless.template",
  "template-parameters" : "",
  "template-substitutions" : "$.Resources.StateMachine.Properties.DefinitionString.Fn::Sub=state-machine.json",
  "s3-bucket"              : "",
  "stack-name"             : ""
}

 

Take a look at the properties.

  • profile is the AWS credentials profile you use to connect to AWS.
  • region is the AWS region you are going to deploy to.
  • configuration is the build configuration being deployed, for example Release or Debug.
  • framework is the .NET framework you wish to use.
  • s3-prefix is the prefix for the S3 bucket used to store the deployed code artifacts.
  • template is the name of the AWS CloudFormation template to use.
  • template-parameters are parameters for the deployment.
  • template-substitutions identifies what to replace within the template. We mentioned this above.
  • s3-bucket is the AWS S3 bucket that will be used for the deployed artifacts.
  • stack-name is the name you will see within the AWS CloudFormation console for the deployment.

 

serverless.template

image 

This is the CloudFormation template that describes your infrastructure requirements. For this example that would include:

  • 2 Lambda functions
    • Greeting
    • Salutations
  • An IAM role for Lambda
  • The state machine

 

Ok, so now that we understand a bit more about the files, let's see how we can deploy this. We can use Visual Studio to do this, but in real life you would use the AWS CLI.

image

Where we can just work through the wizard

image

image

image

Once it's published we should be able to go into the AWS console and have a look at a few things

 

We can look at the Step Functions console in AWS, and we should see something like this

image

We can drill into this, and use “Start Execution” to test it out

image

image

So let's click the button and see what we get

image

image

Ok, cool, we can see that it worked nicely. So what about all that CloudFormation stuff, how does that fit in? Let's go have a look at the CloudFormation console in AWS

image

So that’s all looking good.

 

What about more complex examples?

Ok so we have seen the out of the box example, but what else can be done using step functions?

 

Well if we refer to the documentation on the states, which shows you all the possible state types, you can quickly see we could come up with some pretty cool workflows:

 

  • Pass: A Pass state (“Type”: “Pass”) simply passes its input to its output, performing no work. Pass states are useful when constructing and debugging state machines.
  • Task: A Task state (“Type”: “Task”) represents a single unit of work performed by a state machine.
  • Choice: A Choice state (“Type”: “Choice”) adds branching logic to a state machine.
  • Wait: A Wait state (“Type”: “Wait”) delays the state machine from continuing for a specified time. You can choose either a relative time, specified in seconds from when the state begins, or an absolute end-time, specified as a timestamp.
  • Succeed: A Succeed state (“Type”: “Succeed”) stops an execution successfully. The Succeed state is a useful target for Choice state branches that don’t do anything but stop the execution. Because Succeed states are terminal states, they have no Next field, nor do they need an End field.
  • Fail: A Fail state (“Type”: “Fail”) stops the execution of the state machine and marks it as a failure. The Fail state only allows the use of the Type and Comment fields from the set of common state fields.
  • Parallel: The Parallel state (“Type”: “Parallel”) can be used to create parallel branches of execution in your state machine.

You can read more about these states, and their various parameters here : https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-parallel-state.html
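As a flavour of what is possible, here is a hypothetical Parallel state sketched in the Amazon States Language (the state and branch names are made up for illustration):

```json
{
  "LookupEverything": {
    "Type": "Parallel",
    "Next": "MergeResults",
    "Branches": [
      {
        "StartAt": "LookupCustomer",
        "States": {
          "LookupCustomer": { "Type": "Pass", "End": true }
        }
      },
      {
        "StartAt": "LookupOrder",
        "States": {
          "LookupOrder": { "Type": "Pass", "End": true }
        }
      }
    ]
  }
}
```

Both branches run to completion, and the Parallel state's output is an array containing the output of each branch.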

 

So for now let's expand upon our simple out of the box example, and adjust it to do the following:

 

  • Start with an initial state, where we examine the incoming state object, and set the “IsMale” property to 1 if the Name starts with “Mr”
  • Enter a pass state (do nothing)
  • Enter a choice state: if the incoming state object’s “IsMale” is 1, the next state is “PrintMaleInfo”; if it is 0, “PrintFemaleInfo” is called; if it is neither 0 nor 1, “PrintInfo” is the next state
  • PrintMaleInfo/PrintFemaleInfo/PrintInfo are all terminal states

 

Here is what the revised state-machine.json looks like

{
  "Comment": "State Machine",
  "StartAt": "Initial",
  "States": {
    "Initial": {
      "Type": "Task",
      "Resource": "${InitialTask.Arn}",
      "Next": "WaitToActivate"
    },
    "WaitToActivate": {
      "Type": "Wait",
      "SecondsPath": "$.WaitInSeconds",
      "Next": "Pass"
    },
    "Pass": {
      "Type": "Task",
      "Resource": "${PassTask.Arn}",
      "Next": "ChoiceStateX"
    },

    "ChoiceStateX": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.IsMale",
          "NumericEquals": 1,
          "Next": "PrintMaleInfo"
        },
        {
          "Variable": "$.IsMale",
          "NumericEquals": 0,
          "Next": "PrintFemaleInfo"
        }
      ],
      "Default": "PrintInfo"
    },
    "PrintMaleInfo": {
      "Type": "Task",
      "Resource": "${PrintMaleInfoTask.Arn}",
      "End": true
    },
    "PrintFemaleInfo": {
      "Type": "Task",
      "Resource": "${PrintFemaleInfoTask.Arn}",
      "End": true
    },
    "PrintInfo": {
      "Type": "Task",
      "Resource": "${PrintInfoTask.Arn}",
      "End": true
    }
  }
}

And here is the revised serverless.template file

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Transform" : "AWS::Serverless-2016-10-31",
  "Description" : "An AWS Serverless Application.",

  "Resources" : {
    "InitialTask" : {
        "Type" : "AWS::Lambda::Function",
        "Properties" : {
            "Handler" : "MoreRealWorldStepFunction::MoreRealWorldStepFunction.StepFunctionTasks::Initial",
            "Role"    : {"Fn::GetAtt" : [ "LambdaRole", "Arn"]},
            "Runtime" : "dotnetcore2.1",
            "MemorySize" : 256,
            "Timeout" : 30,
            "Code" : {
                "S3Bucket" : "",
                "S3Key" : ""
            }
        }
    },
	"PassTask" : {
        "Type" : "AWS::Lambda::Function",
        "Properties" : {
            "Handler" : "MoreRealWorldStepFunction::MoreRealWorldStepFunction.StepFunctionTasks::Pass",
			"Role"    : {"Fn::GetAtt" : [ "LambdaRole", "Arn"]},
            "Runtime" : "dotnetcore2.1",
            "MemorySize" : 256,
            "Timeout" : 30,
            "Code" : {
                "S3Bucket" : "",
                "S3Key" : ""
            }
        }
    },
	"PrintInfoTask" : {
        "Type" : "AWS::Lambda::Function",
        "Properties" : {
            "Handler" : "MoreRealWorldStepFunction::MoreRealWorldStepFunction.StepFunctionTasks::PrintInfo",
			"Role"    : {"Fn::GetAtt" : [ "LambdaRole", "Arn"]},
            "Runtime" : "dotnetcore2.1",
            "MemorySize" : 256,
            "Timeout" : 30,
            "Code" : {
                "S3Bucket" : "",
                "S3Key" : ""
            }
        }
    },
	"PrintMaleInfoTask" : {
        "Type" : "AWS::Lambda::Function",
        "Properties" : {
            "Handler" : "MoreRealWorldStepFunction::MoreRealWorldStepFunction.StepFunctionTasks::PrintMaleInfo",
			"Role"    : {"Fn::GetAtt" : [ "LambdaRole", "Arn"]},
            "Runtime" : "dotnetcore2.1",
            "MemorySize" : 256,
            "Timeout" : 30,
            "Code" : {
                "S3Bucket" : "",
                "S3Key" : ""
            }
        }
    },
	"PrintFemaleInfoTask" : {
        "Type" : "AWS::Lambda::Function",
        "Properties" : {
            "Handler" : "MoreRealWorldStepFunction::MoreRealWorldStepFunction.StepFunctionTasks::PrintFemaleInfo",
			"Role"    : {"Fn::GetAtt" : [ "LambdaRole", "Arn"]},
            "Runtime" : "dotnetcore2.1",
            "MemorySize" : 256,
            "Timeout" : 30,
            "Code" : {
                "S3Bucket" : "",
                "S3Key" : ""
            }
        }
    },
    "StateMachine" : {
        "Type" : "AWS::StepFunctions::StateMachine",
        "Properties": {
            "RoleArn": { "Fn::GetAtt": [ "StateMachineRole", "Arn" ] },
            "DefinitionString": { "Fn::Sub": "" }
        }
    },
    "LambdaRole" : {
        "Type" : "AWS::IAM::Role",
        "Properties" : {
            "AssumeRolePolicyDocument" : {
                "Version" : "2012-10-17",
                "Statement" : [
                    {
                        "Action" : [
                            "sts:AssumeRole"
                        ],
                        "Effect" : "Allow",
                        "Principal" : {
                            "Service" : [
                                "lambda.amazonaws.com"
                            ]
                        }
                    }
                ]
            },
            "ManagedPolicyArns" : [
                "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
            ]
       }
    },
    "StateMachineRole" : {
        "Type" : "AWS::IAM::Role",
        "Properties" : {
            "AssumeRolePolicyDocument" : {
              "Version": "2012-10-17",
              "Statement": [
                {
                  "Effect": "Allow",
                  "Principal": {
                    "Service": {"Fn::Sub" : "states.${AWS::Region}.amazonaws.com"}
                  },
                  "Action": "sts:AssumeRole"
                }
              ]
            },
            "Policies" : [{
                "PolicyName": "StepFunctionLambdaInvoke",
                "PolicyDocument": {
                  "Version": "2012-10-17",
                  "Statement": [
                    {
                      "Effect": "Allow",
                      "Action": [
                        "lambda:InvokeFunction"
                      ],
                      "Resource": "*"
                    }
                  ]
                }
            }]
        }
    }
  },
  "Outputs" : {
  }
}

And finally here are the revised functions that are used by the state machine

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Threading.Tasks;

using Amazon.Lambda.Core;


// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace MoreRealWorldStepFunction
{
    public class StepFunctionTasks
    {
        /// <summary>
        /// Default constructor that Lambda will invoke.
        /// </summary>
        public StepFunctionTasks()
        {
        }


        public State Initial(State state, ILambdaContext context)
        {
            state.Message = $"Hello-{Guid.NewGuid().ToString()}";

            LogMessage(context, state.ToString());


            state.IsMale = state.Name.StartsWith("Mr") ? 1 : 0;


            // Tell Step Function to wait 5 seconds before calling 
            state.WaitInSeconds = 5;

            return state;
        }

        public State PrintMaleInfo(State state, ILambdaContext context)
        {
            LogMessage(context, "IS MALE");
            return state;
        }

        public State PrintFemaleInfo(State state, ILambdaContext context)
        {
            LogMessage(context, "IS FEMALE");
            return state;
        }


        public State Pass(State state, ILambdaContext context)
        {
            return state;
        }


        public State PrintInfo(State state, ILambdaContext context)
        {
            LogMessage(context, state.ToString());
            return state;
        }


        void LogMessage(ILambdaContext ctx, string msg)
        {
            ctx.Logger.LogLine(
                string.Format("{0}:{1} - {2}",
                    ctx.AwsRequestId,
                    ctx.FunctionName,
                    msg));
        }
    }
}
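Note that the revised tasks reference an IsMale property and call state.ToString(), neither of which exists on the State class shown earlier. A minimal sketch of the extended state (names inferred from the code above, not necessarily the exact class from the repo) might be:

```csharp
namespace MoreRealWorldStepFunction
{
    /// <summary>
    /// The state passed between the step function executions,
    /// extended with the flag used by the Choice state.
    /// </summary>
    public class State
    {
        public string Name { get; set; }

        public string Message { get; set; }

        public int WaitInSeconds { get; set; }

        // 1 = male, 0 = female; drives the "ChoiceStateX" NumericEquals rules
        public int IsMale { get; set; }

        // Used by the LogMessage calls in the tasks above
        public override string ToString() =>
            $"Name={Name}, Message={Message}, IsMale={IsMale}";
    }
}
```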

So with all that in place, let's check it out in the Step Functions console and see what the definition looks like and whether it runs ok. This is what it looks like from the console

image

And when we try and execute it in the Step Function console, we can see this

image

So when we run this, it does indeed go down the “IsMale” choice state of “PrintMaleInfo”

image

Starting An Execution Using C# Code

So being able to upload a Step Function into AWS and run it via the Step Functions AWS console is cool and all, but what would be better is if we were able to run it via our own code. This is quite an interesting one, as there are quite a few different ways to do it; I will discuss a few of them.

 

Permissioning an IAM user with the correct privileges

Throughout this series I have been using the single IAM user I created in the very first post. You may not want to do this in real life, but if you want that user to be able to start a state machine execution, you could add an inline policy something like this

 

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "states:*",
            "Resource": "*"
        }
    ]
}

 

Or you could just give the IAM user the AWSStepFunctionsFullAccess managed policy. Either of these should allow you to kick off a step function using code something like this

static void ExecuteStepFunctionUsingDefaultProfileWithIAMStepFunctionsFullAccessInIAMConsole()
{
    var options = new AWSOptions()
    {
        Profile = "default",
        Region = RegionEndpoint.EUWest2
    };

    var amazonStepFunctionsConfig = new AmazonStepFunctionsConfig { RegionEndpoint = RegionEndpoint.EUWest2 };
    using (var amazonStepFunctionsClient = new AmazonStepFunctionsClient(amazonStepFunctionsConfig))
    {
        var state = new State
        {
            Name = "MyStepFunctions"
        };
        var jsonData1 = JsonConvert.SerializeObject(state);
        var startExecutionRequest = new StartExecutionRequest
        {
            Input = jsonData1,
            Name = $"SchedulingEngine_{Guid.NewGuid().ToString("N")}",
            StateMachineArn = "arn:aws:states:eu-west-2:464534050515:stateMachine:StateMachine-z8hrOwmL9CiG"
        };
        var taskStartExecutionResponse = amazonStepFunctionsClient.StartExecutionAsync(startExecutionRequest).ConfigureAwait(false).GetAwaiter().GetResult();
    }


    Console.ReadLine();
}

 

The thing with this is that you are having to add the policies to your IAM user, which is fine, but another way may be to use an existing state machine role that was created by a previously deployed Step Function (say from a Visual Studio deploy).

 

Assuming Step Function Role

To do this you would need to assume the step function role. This would need code something like this

static void ExecuteStepFunctionUsingAssumedExistingStateMachineRole()
{
    var options = new AWSOptions()
    {
        Profile = "default",
        Region = RegionEndpoint.EUWest2
    };

    var assumedRoleResponse = ManualAssume(options).ConfigureAwait(false).GetAwaiter().GetResult();
    var assumedCredentials = assumedRoleResponse.Credentials;
    var amazonStepFunctionsConfig = new AmazonStepFunctionsConfig { RegionEndpoint = RegionEndpoint.EUWest2 };
    // Temporary STS credentials also require the session token, so wrap all
    // three values in a SessionAWSCredentials (from Amazon.Runtime)
    using (var amazonStepFunctionsClient = new AmazonStepFunctionsClient(
        new SessionAWSCredentials(
            assumedCredentials.AccessKeyId,
            assumedCredentials.SecretAccessKey,
            assumedCredentials.SessionToken), amazonStepFunctionsConfig))
    {
        var state = new State
        {
            Name = "MyStepFunctions"
        };
        var jsonData1 = JsonConvert.SerializeObject(state);
        var startExecutionRequest = new StartExecutionRequest
        {
            Input = jsonData1,
            Name = $"SchedulingEngine_{Guid.NewGuid().ToString("N")}",
            StateMachineArn = "arn:aws:states:eu-west-2:XXXXX:stateMachine:StateMachine-XXXXX"
        };
        var taskStartExecutionResponse = amazonStepFunctionsClient
			.StartExecutionAsync(startExecutionRequest)
			.ConfigureAwait(false)
			.GetAwaiter()
		    .GetResult();
    }

    Console.ReadLine();
}


public static async Task<AssumeRoleResponse> ManualAssume(AWSOptions options)
{
    var stsClient = options.CreateServiceClient<IAmazonSecurityTokenService>();
    var assumedRoleResponse = await stsClient.AssumeRoleAsync(new AssumeRoleRequest()
    {
        RoleArn = "arn:aws:iam::XXXXX:role/SimpleStepFunction-StateMachineRole-XXXXX",
        RoleSessionName = "test"
    });

    return assumedRoleResponse;

}

 

The important things there are

  • The Name of the execution should be unique
  • You get the ARN for the state machine from the AWS console

 

Within A Lambda

You could imagine using code like the first example above (for the permissioned IAM user) in a Lambda that you expose using the API Gateway, though you would need to provide the key/secret in code for the IAM user used by the AmazonStepFunctionsClient, as the default profile won’t be available in the actual AWS cloud (profiles are stored locally on your PC).
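As a sketch of that idea (the environment variable names here are my own invention, not from the repo), you could construct the client with explicit credentials instead of a profile:

```csharp
using System;
using Amazon;
using Amazon.Runtime;
using Amazon.StepFunctions;

// Inside a Lambda there is no local credentials file, so supply the IAM
// user's key/secret explicitly - ideally via environment variables rather
// than hard-coded strings.
var credentials = new BasicAWSCredentials(
    Environment.GetEnvironmentVariable("STEPFN_ACCESS_KEY"),
    Environment.GetEnvironmentVariable("STEPFN_SECRET_KEY"));

using (var client = new AmazonStepFunctionsClient(credentials, RegionEndpoint.EUWest2))
{
    // ...build and send a StartExecutionRequest exactly as in the earlier example
}
```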

 

You will be pleased to know there is already good support for this scenario using API Gateway directly to execute a step function; you can read more about this here: https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-api-gateway.html (this is also a good read on the topic: https://stackoverflow.com/questions/41113666/how-to-invoke-aws-step-function-using-api-gateway)

 

In fact it doesn’t stop there: the Serverless Framework that we looked at last time also has good support for step functions and triggering them via HTTP. You can read more about this approach here: https://serverless.com/blog/how-to-manage-your-aws-step-functions-with-serverless/ (you may need to combine that with the code in my last article, as that link assumes a Node.js based Lambda, whereas my article showed a C# Serverless Framework example).

 

See ya later, not goodbye

Ok that’s it for now until the next post

AWS

AWS Using Serverless Framework To Create A Lambda Function

What are we talking about this time?

This time we are going to talk about how to expose our AWS Lambda function over HTTP, which is exactly what I did in my last AWS article; the difference is that there I did it all by hand. This time we will be using the “Serverless Framework”.

Initial setup

If you did not read the very first part of this series of posts, I urge you to go and read that one now as it shows you how to get started with AWS, and create an IAM user : https://sachabarbs.wordpress.com/2018/08/30/aws-initial-setup/

Where is the code

The code for this post can be found here in GitHub : https://github.com/sachabarber/AWS/tree/master/Compute/ServerlessFrameworkLambda

What Is The Serverless Framework?

A picture says a thousand words and all that, so here is a quick intro picture for the Serverless Framework.

image

At its heart it is an abstraction layer between your code and the cloud. It is cloud agnostic: your code may not be, but Serverless itself is, and can be used quite happily against Azure, AWS, etc.

You should be asking yourself just how it does this abstraction over these cloud vendors. That’s a pretty neat trick, isn’t it? Let's see how it works.

It's mainly down to a rather clever abstraction file called “serverless.yml”, which tells the framework what it should provision for you, how things communicate, and how they should be set up.

Here is the example one for this demo app, which is a simple AWS Lambda exposed as a GET REST API using AWS API Gateway:

# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
#    docs.serverless.com
#
# Happy Coding!

service: ServerlessFrameworkLambda # NOTE: update this with your service name

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"

provider:
  name: aws
  runtime: dotnetcore2.1

# you can overwrite defaults here
#  stage: dev
  region: eu-west-2

# you can add statements to the Lambda function's IAM Role here
#  iamRoleStatements:
#    - Effect: "Allow"
#      Action:
#        - "s3:ListBucket"
#      Resource: { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "ServerlessDeploymentBucket" } ] ]  }
#    - Effect: "Allow"
#      Action:
#        - "s3:PutObject"
#      Resource:
#        Fn::Join:
#          - ""
#          - - "arn:aws:s3:::"
#            - "Ref" : "ServerlessDeploymentBucket"
#            - "/*"

# you can define service wide environment variables here
#  environment:
#    variable1: value1

# you can add packaging information here
package:
  artifact: bin/release/netcoreapp2.1/deploy-package.zip
#  exclude:
#    - exclude-me.js
#    - exclude-me-dir/**

functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello

#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
    events:
      - http:
          path: gettime
          method: get
          cors: true
#      - s3: ${env:BUCKET}
#      - schedule: rate(10 minutes)
#      - sns: greeter-topic
#      - stream: arn:aws:dynamodb:region:XXXXXX:table/foo/stream/1970-01-01T00:00:00.000
#      - alexaSkill: amzn1.ask.skill.xx-xx-xx-xx
#      - alexaSmartHome: amzn1.ask.skill.xx-xx-xx-xx
#      - iot:
#          sql: "SELECT * FROM 'some_topic'"
#      - cloudwatchEvent:
#          event:
#            source:
#              - "aws.ec2"
#            detail-type:
#              - "EC2 Instance State-change Notification"
#            detail:
#              state:
#                - pending
#      - cloudwatchLog: '/aws/lambda/hello'
#      - cognitoUserPool:
#          pool: MyUserPool
#          trigger: PreSignUp

#    Define function environment variables here
#    environment:
#      variable2: value2

# you can add CloudFormation resource templates here
#resources:
#  Resources:
#    NewResource:
#      Type: AWS::S3::Bucket
#      Properties:
#        BucketName: my-new-bucket
#  Outputs:
#     NewOutput:
#       Description: "Description for the output"
#       Value: "Some output value"

You can see from the commented lines how you might configure some of the other functionality you may need, and the docs are pretty decent. You can see more examples here : https://github.com/serverless/examples

But isn’t this quite limited? No, not really. For example, this is what the framework’s AWS offering looks like for serverless functions, which is pretty much everything AWS itself offers without the use of the Serverless Framework.

image

And just to contrast, here is the Azure offering.

image

Now, as I say, all you care about is the code and the serverless.yml file that governs the deployment/update/rollback; the Serverless Framework will deal with the cloud provider for you. But just how do you get started with this framework?

The rest of this post will talk you through that.

Installation

The Serverless Framework is a Node-based command line tool, so you will need to install Node if you don’t already have it; download it from https://nodejs.org/en/download/. Once you have that, simply open a Node command line window and install the Serverless Framework as follows

npm install -g serverless
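
Assuming the install went through, you can sanity check it from the same command line (the `sls` shorthand works too):

```shell
# verify the framework is installed and on the path
serverless --version

# the short alias does the same job
sls --version
```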

Credentials

The next thing you will need to do is associate your cloud provider credentials with the Serverless Framework command line, which you can read about here : https://serverless.com/framework/docs/providers/aws/guide/credentials/

For AWS this would look something like this

serverless config credentials --provider aws --key ANN7EXAMPLE --secret wJalrXUtnFYEXAMPLEKEY
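
If you would rather not pass keys on the command line, the framework also picks up the standard AWS SDK credential chain, so (as a sketch, reusing the same example key values) the usual environment variables work too:

```shell
# standard AWS SDK environment variables, picked up by the Serverless Framework
export AWS_ACCESS_KEY_ID=ANN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFYEXAMPLEKEY

# alternatively, a named profile from ~/.aws/credentials can be selected
export AWS_PROFILE=default
```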

Create A Project

Ok, now that you are that far in, you can create a project. This is easily done as follows, where this one uses an AWS C# template.

serverless create --template aws-csharp --path myService

NOTE : I found that I could not include “.” in the name of my service

Changes I Made At This Point

At this point I updated the serverless.yml file to the one I showed a minute ago, and I then updated the C# function to match almost exactly the code from my last AWS article.

using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Net;

[assembly:LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace AwsDotnetCsharp
{
    public class Handler
    {
        ITimeProcessor processor = new TimeProcessor();

        public APIGatewayProxyResponse Hello(
           APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
        {
            LogMessage(context, "Processing request started");
            APIGatewayProxyResponse response;
            try
            {
                var result = processor.CurrentTimeUTC();
                response = CreateResponse(result);

                LogMessage(context, "Processing request succeeded.");
            }
            catch (Exception ex)
            {
                LogMessage(context,
                    string.Format("Processing request failed - {0}", ex.Message));
                response = CreateResponse(null);
            }

            return response;
        }

        APIGatewayProxyResponse CreateResponse(DateTime? result)
        {
            int statusCode = (result != null) ?
                (int)HttpStatusCode.OK :
                (int)HttpStatusCode.InternalServerError;

            string body = (result != null) ?
                JsonConvert.SerializeObject(result) : string.Empty;

            var response = new APIGatewayProxyResponse
            {
                StatusCode = statusCode,
                Body = body,
                Headers = new Dictionary<string, string>
                {
                    { "Content-Type", "application/json" },
                    { "Access-Control-Allow-Origin", "*" }
                }
            };
            return response;
        }

        /// <summary>
        /// Logs messages to cloud watch
        /// </summary>
        void LogMessage(ILambdaContext ctx, string msg)
        {
            ctx.Logger.LogLine(
                string.Format("{0}:{1} - {2}",
                    ctx.AwsRequestId,
                    ctx.FunctionName,
                    msg));
        }
    }

  
}

Package A Project

Once you have edited your code, and potentially the serverless.yml file, you need to package it, which for a C# project means running the build.cmd script in PowerShell. Internally this runs the AWS .NET command line extensions for Lambda (https://github.com/aws/aws-extensions-for-dotnet-cli), so you may find you also need to install those. See my last AWS article for details on how to do that.
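
For reference, the packaging step essentially boils down to a `dotnet lambda package` call. A rough equivalent (assuming the Amazon.Lambda.Tools global tool is installed, and using the artifact path from the serverless.yml above) would be:

```shell
# package the function ready for the Serverless Framework to upload
dotnet lambda package --configuration release --framework netcoreapp2.1 --output-package bin/release/netcoreapp2.1/deploy-package.zip
```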

Deploy The Function

Open the node command prompt in the folder where you have your serverless.yml file and run serverless deploy. You should see something like this, where it provisions the various cloud provider items needed by your serverless.yml file.

image

Let’s go and have a look at what it created.

image

Ok, so we see 1 matching AWS function. Cool.

image

Let’s drill in a bit further, on the matching function.

image

We see the function does indeed have a matching API Gateway (just like in the previous AWS post I did).

image

Ok, so now let’s drill into the API Gateway.

image

So far so good; we can see the resource created matches what we specified in our serverless.yml file.

image

So let’s test the endpoint. Woohoo, we see the time, it’s working, and this was significantly less hassle than the last article where I had to jump into IAM settings, API Gateway configuration, Lambda configuration, and publish from Visual Studio/command line wizards.

We have only really scratched the surface of using the Serverless Framework for Lambda/functions in the cloud. As I say, the documentation is pretty good; it really is worth a try if you have not played with it before.
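
For day-to-day use, a handful of Serverless Framework commands cover most of the lifecycle. A quick reference (run from the folder containing serverless.yml):

```shell
serverless deploy        # provision/update the whole stack
serverless info          # show the deployed endpoints and functions
serverless logs -f hello # fetch CloudWatch logs for the 'hello' function
serverless remove        # tear the whole stack down again
```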


See ya later, not goodbye

Ok that’s it for now until the next post

AWS

AWS Lambda exposed via ApiGateway


What are we talking about this time?

This time we are going to talk about how to expose our AWS lambda function over HTTP.  This is actually fairly simple to do so this will not be a big post, and will certainly build on what we saw in the last post where I introduced how to create and publish a new AWS lambda function.

Initial setup

If you did not read the very first part of this series of posts, I urge you to go and read that one now as it shows you how to get started with AWS, and create an IAM user : https://sachabarbs.wordpress.com/2018/08/30/aws-initial-setup/

Where is the code

The code for this post can be found here in GitHub : https://github.com/sachabarber/AWS/tree/master/Compute/Lambda.ApiGateway.DemoApp

What is AWS API Gateway?

Amazon API Gateway is an AWS service that enables developers to create, publish, maintain, monitor, and secure APIs at any scale. You can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud.

image

In practical terms, API Gateway lets you create, configure, and host a RESTful API to enable applications to access the AWS Cloud. For example, an application can call an API in API Gateway to upload a user’s annual income and expense data to Amazon Simple Storage Service or Amazon DynamoDB, process the data in AWS Lambda to compute tax owed, and file a tax return via the IRS website.

As shown in the diagram, an app (or client application) gains programmatic access to AWS services, or a website on the internet, through one or more APIs, which are hosted in API Gateway. The app is at the API’s frontend. The integrated AWS services and websites are located at the API’s backend. In API Gateway, the frontend is encapsulated by method requests and method responses, and the backend is encapsulated by integration requests and integration responses.

With Amazon API Gateway, you can build an API to provide your users with an integrated and consistent developer experience to build AWS cloud-based applications.

https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html, as of 07/10/18

Writing a Lambda that is exposed via API Gateway

Ok, so now that we know what API Gateway is, how do we write an AWS Lambda to use it? Well, as before there are APIGatewayEvents that can be used inside of a Lambda function. Let’s see the relevant code shall we:

using System;
using System.Collections.Generic;
using System.Net;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using Newtonsoft.Json;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace Lambda.ApiGateway.DemoApp
{
    public class Function
    {

        ITimeProcessor processor = new TimeProcessor();

        /// <summary>
        /// Default constructor. This constructor is used by Lambda to construct
        /// the instance. When invoked in a Lambda environment
        /// the AWS credentials will come from the IAM role associated with the
        /// function and the AWS region will be set to the
        /// region the Lambda function is executed in.
        /// </summary>
        public Function()
        {

        }

        public APIGatewayProxyResponse FunctionHandler(
            APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
        {
            LogMessage(context, "Processing request started");
            APIGatewayProxyResponse response;
            try
            {
                var result = processor.CurrentTimeUTC();
                response = CreateResponse(result);

                LogMessage(context, "Processing request succeeded.");
            }
            catch (Exception ex)
            {
                LogMessage(context, 
                    string.Format("Processing request failed - {0}", ex.Message));
                response = CreateResponse(null);
            }

            return response;
        }

        APIGatewayProxyResponse CreateResponse(DateTime? result)
        {
            int statusCode = (result != null) ?
                (int)HttpStatusCode.OK :
                (int)HttpStatusCode.InternalServerError;

            string body = (result != null) ?
                JsonConvert.SerializeObject(result) : string.Empty;

            var response = new APIGatewayProxyResponse
            {
                StatusCode = statusCode,
                Body = body,
                Headers = new Dictionary<string, string>
                {
                    { "Content-Type", "application/json" },
                    { "Access-Control-Allow-Origin", "*" }
                }
            };
            return response;
        }

        /// <summary>
        /// Logs messages to cloud watch
        /// </summary>
        void LogMessage(ILambdaContext ctx, string msg)
        {
            ctx.Logger.LogLine(
                string.Format("{0}:{1} - {2}",
                    ctx.AwsRequestId,
                    ctx.FunctionName,
                    msg));
        }

    }
}

It can be seen that we need to use a specialized APIGatewayProxyRequest/APIGatewayProxyResponse pair.

In this example we are exposing the AWS Lambda as a GET only operation. If you wanted to accept POST/DELETE/PUT data you could use the APIGatewayProxyRequest.Body to get the data representing the request.
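
To make that concrete, here is an illustrative sketch only (a hypothetical SaveTime handler, not part of the demo code) of what a POST-style handler pulling data out of the Body might look like, dropped into the same class:

```csharp
// Hypothetical POST handler sketch (not in the demo repo): the raw payload
// posted by the client arrives as a JSON string in apigProxyEvent.Body
public APIGatewayProxyResponse SaveTime(
    APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
{
    // deserialize whatever shape the client posted - a dictionary here just for illustration
    var payload = JsonConvert.DeserializeObject<Dictionary<string, string>>(
        apigProxyEvent.Body);

    context.Logger.LogLine($"Received {payload.Count} fields");

    // echo it back, reusing the same response shape as the GET handler
    return new APIGatewayProxyResponse
    {
        StatusCode = (int)HttpStatusCode.OK,
        Body = JsonConvert.SerializeObject(payload),
        Headers = new Dictionary<string, string> { { "Content-Type", "application/json" } }
    };
}
```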

Ok, so now we have the code. Let’s assume the Lambda has been published to AWS (see the last article for a detailed explanation of how to do that) and that it is available in the AWS console; we now need to configure the API Gateway part of it.

This starts with just telling the ApiGateway trigger which stage to run in. This is shown below

image

Once we have configured the ApiGateway trigger for the published lambda, we should see it shown something like the screen shot below. We then need to move on to setting up the actual ApiGateway resources themselves and how they relate to the Lambda call.

image

We can do this either by following the link shown within the Api Gateway section of our lambda as shown above, or via the AWS console where we just search for the Api Gateway. Both paths are valid, and should lead you to a screen something like the one below. It is from this screen that we will add new resources.

image

So we wish (at least for this demo) to create a GET resource that will call our Lambda. We can do this by using the Actions menu, and creating a new GET from the drop down options. We then set up the GET resource to call the lambda (the one for this demo). This is all shown in the screen shot below.

image

Once we have added the resource we should be able to test it out using the Test button (the one shown below with the lightning bolt on it). This will test the Api resource; for this demo it should call the Lambda and GET a new time returned to the Api Gateway call, and we should see a status code of 200 (OK).

image

So if that tests out just fine, we are almost there. All we need to do now is ensure that the Api Gateway is deployed. This can be done using the “Deploy API” menu option from the Api Gateway portal, as shown below.

image

With all that done, we should be able to test our deployed Api Gateway pointing to our Lambda. So let’s grab the public endpoint for the Api Gateway, which we can do by examining the Stages menu, then finding our resource (GET in this case) and getting hold of the Invoke Url.

image

So for me this looks like this

image

Cool, looks like it’s working.
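
You can exercise it from outside the console too, with a quick curl against the Invoke Url (the URL below is a made-up placeholder; substitute the one from your own Stages screen):

```shell
# hypothetical invoke URL - replace with the one from your Stages screen
curl https://abcd1234.execute-api.eu-west-2.amazonaws.com/default/
```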

See ya later, not goodbye

Ok that’s it for now until the next post

AWS

AWS : Lambda

What are we talking about this time?

This time we are going to talk about AWS Lambda. Believe it or not this is a fairly big subject for such a seemingly simple topic, and will probably require a few posts. This is what I would like to cover:

  • SQS source Lambda writing to S3 bucket
  • Using Serverless framework
  • Kinesis Firehose Lambda writing to S3 bucket
  • Step functions using Lambda functions

Initial setup

If you did not read the very first part of this series of posts, I urge you to go and read that one now as it shows you how to get started with AWS, and create an IAM user : https://sachabarbs.wordpress.com/2018/08/30/aws-initial-setup/

Where is the code

The code for this post can be found here in GitHub : https://github.com/sachabarber/AWS/tree/master/Compute/Lambda.SQS.DemoApp

What is AWS Lambda?

AWS Lambda is an event-driven, serverless computing platform provided by Amazon as part of Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code. There are of course servers behind the scenes, but these are provisioned when needed, and you only pay for the compute time you actually use.

AWS Lambda Limits

There are some limitations when using Lambda that one should be aware of before getting started. These are shown below

image

image

AWS Lambda will dynamically scale capacity in response to increased traffic, subject to the concurrent executions limit noted previously. To handle any burst in traffic, AWS Lambda will immediately increase your concurrently executing functions by a predetermined amount, dependent on which region it’s executed, as noted below:

image

Getting started with AWS Lambda

In this post we are going to build this pipeline in AWS

image

It is worth noting that there are MANY event sources for Lambda, and you can find out more here : https://github.com/aws/aws-lambda-dotnet, but for those of you that just want to know now, here is the current list

image

Don’t try and click this, it’s a screenshot

AWS Toolkit

So the easiest way to get started with your own Lambda function is to use the AWS Toolkit, which, if you followed part 1, you will have installed. So let’s use the wizard to create an SQS triggered Lambda.

image

Once you have run through this wizard you will be left with the shell of a Lambda project that is triggered via an SQS event, and has unit tests for that. I however wanted my Lambda to write to S3, and I also wanted an SQS publisher that I could use to push messages to my SQS queue to test my lambda for real later on.

So the final solution for me looks like this

image

SQS Publisher

Let’s start with the simple bit, the SQS Publisher, which is as follows

using System;
using Amazon.SQS;
using Amazon.SQS.Model;
using Nito.AsyncEx;

namespace SQSSPublisher
{
    class Program
    {
        private static AmazonSQSClient _sqs = new AmazonSQSClient();
        private static string _myQueueUrl;
        private static string _queueName = "lamda-sqs-demo-app";

        static void Main(string[] args)
        {
            AsyncContext.Run(() => MainAsync(args));
        }

        static async System.Threading.Tasks.Task MainAsync(string[] args)
        {
            try
            {

                try
                {
                    Console.WriteLine($"Checking for a queue called {_queueName}.\n");
                    var resp = await _sqs.GetQueueUrlAsync(_queueName);
                    _myQueueUrl = resp.QueueUrl;

                }
                catch(QueueDoesNotExistException quex)
                {
                    //Creating a queue
                    Console.WriteLine($"Create a queue called {_queueName}.\n");
                    var sqsRequest = new CreateQueueRequest { QueueName = _queueName };
                    var createQueueResponse = await _sqs.CreateQueueAsync(sqsRequest);
                    _myQueueUrl = createQueueResponse.QueueUrl;
                }

                //Sending a message
                for (int i = 0; i < 2; i++)
                {
                    var message = $"This is my message text-Id-{Guid.NewGuid().ToString("N")}";
                    //var message = $"This is my message text";
                    Console.WriteLine($"Sending a message to MyQueue : {message}");
                    var sendMessageRequest = new SendMessageRequest
                    {
                        QueueUrl = _myQueueUrl, //URL from initial queue creation
                        MessageBody = message
                    };
                    await _sqs.SendMessageAsync(sendMessageRequest);
                }
            }
            catch (AmazonSQSException ex)
            {
                Console.WriteLine("Caught Exception: " + ex.Message);
                Console.WriteLine("Response Status Code: " + ex.StatusCode);
                Console.WriteLine("Error Code: " + ex.ErrorCode);
                Console.WriteLine("Error Type: " + ex.ErrorType);
                Console.WriteLine("Request ID: " + ex.RequestId);
            }

            Console.WriteLine("Press Enter to continue...");
            Console.Read();
        }


        
    }
}

Lambda

Now let’s see the Lambda itself. Remember it will get triggered to run when an SQS event arrives in the queue it’s listening to, and it will write to an S3 bucket.

using System;
using System.Linq;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.SQSEvents;
using Amazon.S3;
using Amazon.S3.Model;


// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace Lambda.SQS.DemoApp
{
    public class Function
    {
        static IAmazonS3 client;
        private static string bucketName = "lamda-sqs-demo-app-out-bucket";

        /// <summary>
        /// Default constructor. This constructor is used by Lambda to construct the instance. When invoked in a Lambda environment
        /// the AWS credentials will come from the IAM role associated with the function and the AWS region will be set to the
        /// region the Lambda function is executed in.
        /// </summary>
        public Function()
        {

        }


        /// <summary>
        /// This method is called for every Lambda invocation. This method takes in an SQS event object and can be used 
        /// to respond to SQS messages.
        /// </summary>
        /// <param name="evnt"></param>
        /// <param name="context"></param>
        /// <returns></returns>
        public async Task FunctionHandler(SQSEvent evnt, ILambdaContext context)
        {
            foreach(var message in evnt.Records)
            {
                await ProcessMessageAsync(message, context);
            }
        }

        private async Task ProcessMessageAsync(SQSEvent.SQSMessage message, ILambdaContext context)
        {
            context.Logger.LogLine($"Processed message {message.Body}");

            using (client = new AmazonS3Client(Amazon.RegionEndpoint.EUWest2))
            {
                Console.WriteLine("Creating a bucket");
                await CreateABucketAsync(bucketName, false);
                Console.WriteLine("Writing message from SQS to bucket");
                await WritingAnObjectAsync(message.Body.ToUpper(), Guid.NewGuid().ToString("N").ToLower());
            }


            // TODO: Do interesting work based on the new message
            await Task.CompletedTask;
        }


        async Task WritingAnObjectAsync(string messageBody, string keyName)
        {
            await CarryOutAWSTask<Unit>(async () =>
            {
                // simple object put
                PutObjectRequest request = new PutObjectRequest()
                {
                    ContentBody = messageBody,
                    BucketName = bucketName,
                    Key = keyName
                };

                PutObjectResponse response = await client.PutObjectAsync(request);
                return Unit.Empty;
            }, "Writing object");
        }


        async Task CreateABucketAsync(string bucketToCreate, bool isPublic = true)
        {
            await CarryOutAWSTask<Unit>(async () =>
            {
                if (await BucketExists(bucketToCreate))
                {
                    Console.WriteLine($"{bucketToCreate} already exists, skipping this step");
                    return Unit.Empty;
                }

                PutBucketRequest putBucketRequest = new PutBucketRequest()
                {
                    BucketName = bucketToCreate,
                    BucketRegion = S3Region.EUW2,
                    CannedACL = isPublic ? S3CannedACL.PublicRead : S3CannedACL.Private
                };
                var response = await client.PutBucketAsync(putBucketRequest);
                return Unit.Empty;
            }, "Create a bucket");
        }


        async Task<bool> BucketExists(string bucketName)
        {
            return await CarryOutAWSTask<bool>(async () =>
            {
                ListBucketsResponse response = await client.ListBucketsAsync();
                return  response.Buckets.Select(x => x.BucketName).Contains(bucketName);
            }, "Listing buckets");
        }

        async Task<T> CarryOutAWSTask<T>(Func<Task<T>> taskToPerform, string op)
        {
            try
            {
                return await taskToPerform();
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                if (amazonS3Exception.ErrorCode != null &&
                    (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
                    amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                {
                    Console.WriteLine("Please check the provided AWS Credentials.");
                    Console.WriteLine("If you haven't signed up for Amazon S3, please visit http://aws.amazon.com/s3");
                }
                else
                {
                    Console.WriteLine($"An Error, number '{amazonS3Exception.ErrorCode}', " +
                                      $"occurred when '{op}' with the message '{amazonS3Exception.Message}'");
                }

                return default(T);
            }
        }


    }



    public class Unit
    {
        public static Unit Empty => new Unit();
    }
}

Tests

And this is what the tests look like, where we test the lambda input, and check that the output is stored in S3

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

using Xunit;
using Amazon.Lambda.TestUtilities;
using Amazon.Lambda.SQSEvents;

using Lambda.SQS.DemoApp;
using Amazon.S3;
using Amazon.S3.Model;

namespace Lambda.SQS.DemoApp.Tests
{
    public class FunctionTest
    {
        private static string bucketName = "lamda-sqs-demo-app-out-bucket";


        [Fact]
        public async Task TestSQSEventLambdaFunction()
        {
            var sqsEvent = new SQSEvent
            {
                Records = new List<SQSEvent.SQSMessage>
                {
                    new SQSEvent.SQSMessage
                    {
                        Body = "foobar"
                    }
                }
            };

            var logger = new TestLambdaLogger();
            var context = new TestLambdaContext
            {
                Logger = logger
            };

            var countBefore = await CountOfItemsInBucketAsync(bucketName);

            var function = new Function();
            await function.FunctionHandler(sqsEvent, context);

            var countAfter = await CountOfItemsInBucketAsync(bucketName);

            Assert.Contains("Processed message foobar", logger.Buffer.ToString());

            Assert.Equal(1, countAfter - countBefore);
        }


        private async Task<int> CountOfItemsInBucketAsync(string bucketName)
        {
            using (var client = new AmazonS3Client(Amazon.RegionEndpoint.EUWest2))
            {
                ListObjectsRequest request = new ListObjectsRequest();
                request.BucketName = bucketName;
                ListObjectsResponse response = await client.ListObjectsAsync(request);
                return response.S3Objects.Count;
            }
        }
    }
}

Deploying the Lambda to AWS

We have a few options available to us here: the dotnet command line, or VS2017.

VS2017

So for now let’s just right click the lambda project, and choose “Publish to AWS Lambda”. Following this wizard will show something like this

image

image

There are several inbuilt roles to choose from to run your Lambda under. I started with AWSLambdaFullAccess; however I still needed to add SQS and S3 permissions to that. We will see how to do that below.

Dot net tool

Run dotnet tool install -g Amazon.Lambda.Tools (you need .NET Core 2.1.3 or above) to grab the Lambda tooling for the .NET Core CLI. Once you have that installed you can create/deploy lambdas straight from the command line.
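
As a sketch of that flow (the function name below is just an assumption based on this demo project; substitute your own):

```shell
# install the global tool once
dotnet tool install -g Amazon.Lambda.Tools

# from the project folder: package and deploy straight from the command line
dotnet lambda deploy-function Lambda-SQS-DemoApp --region eu-west-2
```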

Adjusting the deployed Lambda for extra policy requirements

So we started out using this AWSLambdaFullAccess role for our lambda, but now we need to create the execution policy. This is described here : https://docs.aws.amazon.com/lambda/latest/dg/with-sqs-create-execution-role.html

But it is easier to locate the role for the lambda you just created and give it the extra permissions using the IAM console, where you grab the ARNs for the SQS queue and S3 buckets etc.
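
As a rough sketch of the kind of policy statements you end up adding (the account id and ARN here are placeholders; the SQS actions are the ones the SQS trigger needs, and the S3 actions match what this demo's Lambda does):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:eu-west-2:123456789012:lamda-sqs-demo-app"
    },
    {
      "Effect": "Allow",
      "Action": [ "s3:CreateBucket", "s3:ListAllMyBuckets", "s3:PutObject" ],
      "Resource": "*"
    }
  ]
}
```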

image

Testing it out

In AWS Console using test events

We can use the AWS console with test events (not real SQS events), and we can see that it all looks good; we have a nice green panel for the test execution

image

And we can check out the s3 bucket to ensure we got the expected output.

image

Cool, this looks good.

Use the real SQS source

So how about the real input from SQS? Well, we can go back to the AWS Lambda console and configure the SQS trigger

image

Where we need to fill in the SQS event configuration for the lambda, so it knows what queue to use

image

Once we have done that, and we have made sure the role our Lambda is using is ok, the AWS lambda console should show something like this where we have a nicely configured SQS event trigger

image

Ok, now we just run the SQSPublisher in the solution, check the S3 bucket, and wham, we see 2 new messages. Hooray!

image


image

This corresponds nicely to the code in the SQSPublisher… it’s actually working, YAY

image

See ya later, not goodbye

Ok that’s it for now until the next post

AWS

AWS Deploying ASP .NET Core app to Elastic Beanstalk

What are we talking about this time?

This time we are going to talk about AWS Elastic Beanstalk and how we can use it to deploy a scalable load balanced web site

Initial setup

If you did not read the very first part of this series of posts, I urge you to go and read that one now as it shows you how to get started with AWS, and create an IAM user : https://sachabarbs.wordpress.com/2018/08/30/aws-initial-setup/

Where is the code

The code for this post can be found here in GitHub : https://github.com/sachabarber/AWS/tree/master/Compute/ElasticBeanstalk/WebApiForElasticBeanstalk

What is Elastic Beanstalk?

Elastic Beanstalk is one of the AWS compute services, and it comes with support for several platform languages. For other languages such as Java there is both a web kind of role and a worker kind of role; for .NET, however, there is ONLY an IIS web kind of role. Don’t let that put you off though: the whole planet seems to like web sites of late, and it just so happens Elastic Beanstalk is a perfect fit for these types of apps.

Elastic Beanstalk has the following architecture

Image result for elastic beanstalk

It can be seen that the EC2 instances are part of an Auto Scaling group (scalability), and we also get a load balancer out of the box, with a single URI endpoint for our Elastic Beanstalk app, which will load balance amongst the web apps hosted on the running EC2 instances. This is quite a lot of good stuff to get for free; this sort of thing is quite hard to configure by yourself, so this is quite cool.

Deploying from Visual Studio

So when you create a new .NET Core web app (the demo uses a standard .NET Core WebApi project) you can publish it to Elastic Beanstalk straight from Visual Studio.

image

This is obviously thanks to the AWS Toolkit (which I installed and talked about in the 1st article in this series). This will launch a wizard, which looks like this

image

You can choose to create a new application environment or use one that you previously created

image

We can then pick a name for the application and its URI that will be publicly available

image

You then pick your hardware requirements (EC2 instance types)

image

You then pick your application permissions / Service permissions

image

We then pick our application options

When you Finish the wizard you should see something like this screen

image

The entire deployment takes about 10 minutes to do, so be patient. One thing to note is that the Visual Studio deployment also creates the aws-beanstalk-tools-defaults.json file to aid in the final application deployment to AWS. This is its contents for this demo app

{
    "comment" : "This file is used to help set default values when using the dotnet CLI extension Amazon.ElasticBeanstalk.Tools. For more information run \"dotnet eb --help\" from the project root.",
    "profile" : "default",
    "region"  : "eu-west-2",
    "application" : "WebApiForElasticBeanstalk",
    "environment" : "WebApiForElasticBeanstalk-dev",
    "cname"       : "webapiforelasticbeanstalk-dev",
    "solution-stack" : "64bit Windows Server 2016 v1.2.0 running IIS 10.0",
    "environment-type" : "SingleInstance",
    "instance-profile" : "aws-elasticbeanstalk-ec2-role",
    "service-role"     : "aws-elasticbeanstalk-service-role",
    "health-check-url" : "/",
    "instance-type"    : "t2.micro",
    "key-pair"         : "",
    "iis-website"      : "Default Web Site",
    "app-path"         : "/",
    "enable-xray"      : false
}
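One thing worth noticing in the file above is "environment-type" : "SingleInstance" — if you want the load-balanced setup described earlier you would change that. As a minimal sketch (written to a scratch file here so the real aws-beanstalk-tools-defaults.json is untouched; the LoadBalanced value is my assumption based on the two environment types Elastic Beanstalk offers):

```shell
# Sketch only: a hand-edited variant of the defaults file, switching the
# environment type from SingleInstance to LoadBalanced. Only a subset of
# the fields from the generated file are shown.
cat > defaults-loadbalanced.json <<'EOF'
{
    "profile" : "default",
    "region"  : "eu-west-2",
    "application" : "WebApiForElasticBeanstalk",
    "environment" : "WebApiForElasticBeanstalk-dev",
    "environment-type" : "LoadBalanced",
    "instance-type"    : "t2.micro"
}
EOF

# Sanity-check the JSON before a deploy picks it up
python3 -m json.tool defaults-loadbalanced.json > /dev/null && echo "valid JSON"
```

It is worth validating the JSON whenever you hand-edit this file, as a malformed file will trip up the deploy tooling.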

This is what is happening behind the scenes

Launching an environment creates the following resources:

  • EC2 instance – An Amazon Elastic Compute Cloud (Amazon EC2) virtual machine configured to run web apps on the platform that you choose.

    Each platform runs a specific set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy that sits in front of your web app, forwards requests to it, serves static assets, and generates access and error logs.

  • Instance security group – An Amazon EC2 security group configured to allow ingress on port 80. This resource lets HTTP traffic from the load balancer reach the EC2 instance running your web app. By default, traffic isn’t allowed on other ports.
  • Load balancer – An Elastic Load Balancing load balancer configured to distribute requests to the instances running your application. A load balancer also eliminates the need to expose your instances directly to the internet.
  • Load balancer security group – An Amazon EC2 security group configured to allow ingress on port 80. This resource lets HTTP traffic from the internet reach the load balancer. By default, traffic isn’t allowed on other ports.
  • Auto Scaling group – An Auto Scaling group configured to replace an instance if it is terminated or becomes unavailable.
  • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts that are created when you use Elastic Beanstalk.
  • Amazon CloudWatch alarms – Two CloudWatch alarms that monitor the load on the instances in your environment and are triggered if the load is too high or too low. When an alarm is triggered, your Auto Scaling group scales up or down in response.
  • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to launch the resources in your environment and propagate configuration changes. The resources are defined in a template that you can view in the AWS CloudFormation console.
  • Domain name – A domain name that routes to your web app in the form subdomain.region.elasticbeanstalk.com.

All of these resources are managed by Elastic Beanstalk. When you terminate your environment, Elastic Beanstalk terminates all the resources that it contains.
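If you would rather see these resources without clicking around the console, the regular AWS CLI can list them. Here is a sketch using the demo environment name; the command is only echoed so that nothing runs against your account by accident:

```shell
# Environment name and region taken from the demo deployment above
ENV_NAME="WebApiForElasticBeanstalk-dev"
REGION="eu-west-2"

# describe-environment-resources returns the EC2 instances, Auto Scaling
# group, load balancer and other resources Beanstalk created for this
# environment. Drop the echo to actually run it.
CMD="aws elasticbeanstalk describe-environment-resources --environment-name $ENV_NAME --region $REGION"
echo "$CMD"
```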

 

Taken from https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/dotnet-core-tutorial.html#dotnet-core-tutorial-deploy on 19/09/18

Deploying from the command line

Ok so we can deploy from Visual Studio, which is cool, but it certainly won’t cut the mustard for CI purposes. Luckily AWS exposes some new .NET Core commands that we can use to deploy to Elastic Beanstalk. We need to grab the CLI tools to do this

Get the AWS Dot Net Core CLI tools

You will need .NET Core SDK 2.1.300 or later (the dotnet tool command is only available from that version). Once you have that installed, you should be able to run the command dotnet tool install -g Amazon.ElasticBeanstalk.Tools to install the AWS Elastic Beanstalk dotnet commands. You can read more about this here : https://github.com/aws/aws-extensions-for-dotnet-cli/blob/master/README.md

So once you have these, change to the directory that contains the app you want to deploy, run dotnet eb deploy-environment, and follow the command line prompts. You may see an error message about an S3 bucket not existing; this is easy enough to fix: look at which bucket it was trying to create, and create it yourself as a non-public (private) bucket.
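If you prefer not to answer the prompts interactively, the settings can be passed as flags. A minimal sketch using the values from the demo’s aws-beanstalk-tools-defaults.json — the flag names are my assumption from the Amazon.ElasticBeanstalk.Tools README, so check dotnet eb deploy-environment --help first:

```shell
# Values taken from aws-beanstalk-tools-defaults.json for the demo app
APP="WebApiForElasticBeanstalk"
ENV="WebApiForElasticBeanstalk-dev"
REGION="eu-west-2"

# Build the deploy command; drop the echo to actually run it
DEPLOY_CMD="dotnet eb deploy-environment --application $APP --environment $ENV --region $REGION"
echo "$DEPLOY_CMD"
```

Scripting it this way is what you would want for a CI pipeline, where there is nobody around to answer prompts.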

Checking the deployment

So as we saw above a deployment to Elastic Beanstalk did quite a few things, so we should be able to check out the following

That the binaries are in S3 bucket

When we deploy an app to Elastic Beanstalk the binaries are placed into a S3 bucket, so we should see that has been created for us

image

Where we can drill into the bucket and see these files

image

The Elastic Beanstalk Env

We can use the Elastic Beanstalk console https://console.aws.amazon.com/elasticbeanstalk to see if our environment is deployed ok. We should see something like this

image

One thing that is interesting is if we go into the environment (green box)

image

Then, using the Actions button at the top right, we can save this environment to use as a blueprint for a new environment.

Other cool stuff is that we can look at the Configuration, request logs, look at monitoring, setup alarms etc etc

Checking the deployed application works

Using the URL for our app, we should be able to test out our standard WebApi project. Let’s give it a go
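The URL follows the subdomain.region.elasticbeanstalk.com pattern we saw earlier, so for the demo it can be built up like this (the /api/values path is the stock controller in the default WebApi template):

```shell
# cname and region taken from aws-beanstalk-tools-defaults.json
CNAME="webapiforelasticbeanstalk-dev"
REGION="eu-west-2"
URL="http://${CNAME}.${REGION}.elasticbeanstalk.com"

# The command to hit the stock WebApi template endpoint; the default
# template responds with ["value1","value2"]. Drop the echo to run it.
echo "curl ${URL}/api/values"
```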

image

All looking good. So there you go. That’s all pretty cool I think

See ya later, not goodbye

Ok that’s it for now until the next post