
MADCAP IDEA PART 4 : PROTOTYPING THE SCREENS

 

Last Time

 

Last time we looked at bringing in the Play Framework (a Scala-based MVC web framework) and making the front end work with Play. This time we will be looking at the initial prototypes of the screens.

This is my best guess of what they may look like right now, based on my initial requirements, but as with all things, once you get into the guts of it changes will occur.

 

 

Preamble

Just as a reminder, this is part of my ongoing set of posts which I talk about here:

https://sachabarbs.wordpress.com/2017/05/01/madcap-idea/, where we will be building up to a point where we have a full app using lots of different technologies, such as these:

  • WebPack
  • React.js
  • React Router
  • TypeScript
  • Babel.js
  • Akka
  • Scala
  • Play (Scala Http Stack)
  • MySql
  • SBT
  • Kafka
  • Kafka Streams

 

Mockup tool of choice

I am a big fan of the Balsamiq mockup tool (https://balsamiq.com/). It comes as a stand-alone installed version or as a plugin for JIRA.

Balsamiq provides the following (I am just listing the features I used; there are many more):

 

  • Drag and drop from a wide range of forms, containers, controls
  • Set content for controls (usually using some fancy design time behavior)
  • Set navigation links
  • Set properties like IsSelected, IsEnabled, etc.

 

This is what the Windows-installed Balsamiq desktop version looks like; see how you have many categories of items to choose from:

 

[image: the Balsamiq desktop version, showing the categories of items available]

 

And here is what I mean by the clever design-time support. This is a data grid that I have double-clicked on, where the text in design mode describes the rendered results of the control:

 

[image: editing a data grid's design-time content]

 

It really is a very nice tool. Anyway, on with the initial screen designs.

 

Navbar

[image: navigation bar mockup]

 

This will be a simple react-router / react-bootstrap based navigation bar. There is nothing much more to say about that.

 

 

Login

[image: login form mockup]

 

This will be a login form which will be validated and submitted to a Play Framework controller for further validation. The Play controller would look up the user details from a MySQL database, and if an entry is found the user is considered logged on. Keeping it simple here: no OAuth, no JWT, just a simple lookup.
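To make that a bit more concrete, here is a minimal sketch (all names are hypothetical, and the MySQL lookup is stubbed out with a hard-coded check) of what such a Play (2.5.x, Scala) controller action could look like:

import play.api.libs.json.{JsError, Json}
import play.api.mvc._

// Hypothetical login payload; the field names are illustrative only
case class LoginUser(email: String, password: String)

class LoginController extends Controller {

  implicit val loginReads = Json.reads[LoginUser]

  def login = Action(parse.json) { request =>
    request.body.validate[LoginUser].fold(
      errors => BadRequest(JsError.toJson(errors)),
      user => {
        // Stand-in for the MySQL lookup described above
        val found = user.email == "test@test.com" && user.password == "secret"
        if (found) Ok(Json.obj("status" -> "logged in"))
        else Unauthorized(Json.obj("status" -> "unknown user"))
      }
    )
  }
}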

 

Passenger Register

[image: passenger registration form mockup]

 

If the user is a passenger, the information they will need to enter to register is different from that of a driver. As such there is a specific passenger registration form, which will be validated and sent to a Play controller endpoint for storage in MySQL.

 

Driver Register

[image: driver registration form mockup]

 

If the user is a driver, we need more information about the vehicle. As such there is a specific driver registration form, which will be validated and sent to a Play controller endpoint for storage in MySQL.

 

 

Create Job

[image: create job screen mockup]

 

Only a passenger will be able to create new jobs. Since I am doing all this work on a single laptop which is ALWAYS in a single location, I am having to SIMULATE the geo-coordinates of a job by accepting the current user's input for their current position. The passenger/driver users will provide this geo information by clicking on a Google map. The geo-coordinate updates will either travel through Kafka Streams –> Akka –> Comet, or may just use Akka –> Comet. I have not fully decided on this part yet.
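As a rough sketch of the Comet/SSE end of this (assuming Play 2.5.x; the controller name is made up, and the dummy timer source stands in for the Kafka/Akka feed), pushing geo-coordinate updates to the browser could look something like this:

import akka.stream.scaladsl.Source
import play.api.http.ContentTypes
import play.api.libs.EventSource
import play.api.libs.json.Json
import play.api.mvc._

import scala.concurrent.duration._

// Hypothetical controller that streams geo-coordinate updates as Server Sent Events
class PositionController extends Controller {

  def streamedPosition = Action {
    // Dummy source emitting a fake co-ordinate every second; a stand-in
    // for the Kafka -> Akka flow described above
    val coords: Source[String, _] =
      Source.tick(0.seconds, 1.second, ())
        .map(_ => Json.obj("lat" -> 51.5074, "lon" -> -0.1278).toString)

    Ok.chunked(coords via EventSource.flow).as(ContentTypes.EVENT_STREAM)
  }
}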

 

There may only EVER be one active job per passenger, so if a logged-in passenger tries to create a second job this should cause an error:

 

[image: create job error mockup]

 

View Job

Both passengers and drivers may view an active job. Drivers may “bid for a job” by clicking on the map, providing the job is not already paired with a driver. A driver’s symbol will be a car, and as before the driver will update their geo-coordinates by clicking on the map. As before, the geo-coordinate updates will either travel through Kafka Streams –> Akka –> Comet, or may just use Akka –> Comet.

 

[image: view job screen mockup]

 

A passenger may inspect a driver’s details and choose to accept the driver, at which point the passenger’s job becomes assigned to the chosen driver.

 

[image: driver details / acceptance mockup]

 

Drivers that are not allocated to the job will be removed from the map, and only geo updates from the paired passenger/driver will be reflected on the map. 

 

Passenger Completion

 

[image: passenger completion screen mockup]

Once a job has been completed (by clicking the “Complete” button) the passenger will be able to rank the driver. This will store the ranking for the driver. This could be stored directly in MySQL, but I want to play with Kafka Streams a bit more, so we use a Kafka publisher –> Kafka Streams –> KTable arrangement to store the state, and then use Kafka Streams interactive queries to get the data out again.
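As a rough illustration of that arrangement (the topic, store, and application names are all invented, this assumes the 0.10.x Kafka Streams Java API driven from Scala, and rankings are assumed to arrive keyed by driver id), the streams side might look something like this:

import java.util.Properties

import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.kstream.KStreamBuilder
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}

// Hypothetical streams app: "driver-rankings" and "rankings-store" are made-up names
object RankingStreamsApp extends App {

  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ranking-app")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

  val builder = new KStreamBuilder()

  // KTable holding the latest ranking per driver id, backed by a state
  // store ("rankings-store") that interactive queries can read from
  builder.table(Serdes.String(), Serdes.Long(), "driver-rankings", "rankings-store")

  val streams = new KafkaStreams(builder, props)
  streams.start()

  sys.addShutdownHook(streams.close())
}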

 

 

Driver Completion

[image: driver completion screen mockup]

 

The driver is also able to complete the job from their end (using the “complete” button), and is able to rank the passenger. This will work as described above.

 

View Ranking

[image: view ranking screen mockup]

 

Depending on which way I go with the ranking storage, this will either be a direct MySQL query or a Kafka Streams interactive query over a KTable.
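If I do go the Kafka route, the read side could look something like this sketch (assuming the running streams instance and the invented "rankings-store" name from the earlier sketch, e.g. inside that same app once streams.start() has been called):

import org.apache.kafka.streams.state.QueryableStoreTypes

// Interactive query against the (hypothetical) rankings store
val store = streams.store(
  "rankings-store",
  QueryableStoreTypes.keyValueStore[String, java.lang.Long]())

// Returns null if no ranking has been stored for this (made-up) driver id
val ranking = store.get("driver-42")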

 

Conclusion

This is perhaps the simplest of all the posts in this series, but it is an important one. Next time we will try and statically implement these screens, and the associated routing that goes with them.

MADCAP IDEA

INTRODUCTION

So this is somewhat of a strange post, or should I say what will hopefully become a decent set of posts. Thing is, I have no idea how this will end up really,
as I have not embarked on a mission like this before. So please bear with me.

SO JUST WHAT IS IT THAT I AM TALKING ABOUT?

Well, the way I typically like to run my blog/code project articles/life is that I pick a technology and
just concentrate on it for a while and write about it. This time however I have decided to treat my blogging/articles as a bit more
of a work-like escapade, where I will be assigning mini tasks (think JIRA tickets) to myself, some of which I know nothing about, that should/could in reality be treated
as “spikes” and end up in complete dead ends. It is about the journey after all.

I WILL have a complete list of “tickets” (AKA tasks), which may or may not be completely fleshed out in advance. I will stick to “DOING” those “tickets”,
and there is an end goal in sight, which I will outline in a top-level story. I cannot however commit to any timelines; this is as much my journey as it is yours (in fact I mainly
do this stuff for myself, and would highly recommend it as a way of self-improvement). That said, I hope people get something out of the series of posts that WILL UNDOUBTEDLY
come from this idea.

You can think of the tasks as “technical tasks” which make up the high level “stories” (in JIRA speak).

This may come across a bit weird, but the technologies I plan to cover in the final product amount to pretty much a full app, so it’s a little hard to describe in
one blog post/article. So I am hoping that by breaking it down into small chunks, each story/sub task will be a useful learning experience in
its own right.

SOURCE CONTROL : ORGANISATION

The idea is that each story/sub task will be a folder/subfolder which is completely independent of other stories/sub tasks (up until the final goal, which is of course
a working showcase that demonstrates it all working together).

NOTE TO SELF: I am going to try really hard to stick to this (aren’t we, Sacha), as I think one topic -> one source control repo (more than likely Git) is a good way to
correlate the ideas/words in the post/article with the code.

 

WHAT DO I WANT TO WRITE


In essence I want to write a very (pardon the pun) uber-simple “uber” type app, with the following functional requirements:

  • There should be a web interface that a client can use. Clients may be a “driver” or a “pickup client” requiring a delivery
  • There should be a web interface that a “pickup client” can use, that shows the “pickup client” location on a map, which the “pickup client” chooses.
    The “pickup client” may request a pickup job, in which case “drivers” that are in the area bid for the job.
    The “pickup client” location should be visible to a “driver” on a map
  • A “driver” may bid for a “pickup client” job, and the bidding “driver(s)” location should be visible to the “pickup client”.
  • The acceptance of the bidding “driver” is down to the “pickup client”
  • Once a “pickup client” accepts a “driver” ONLY the assigned “driver(s)” current map position will be shown to the “pickup client”
  • When a “pickup client” is happy that they have been picked up by a “driver”, the “pickup client” may rate the driver from 1-10, and the “driver” may also rate the “pickup client” from 1-10.
  • The rating should only be available once a “pickup client” has marked a job as “completed”
  • A “driver” or a “pickup client” should ALWAYS be able to view their previous ratings. 

Whilst this may sound child’s play to a lot of you (me included if I stuck to using simple CRUD operations), I just want to point out that this app is meant as a learning experience, so I will not be using a simple SignalR hub and a couple of database tables.

I intend to write this project using a completely different set of technologies from the norm. Some of the technology choices could easily scale to hundreds of thousands of requests per second (Kafka has your back here).

POTENTIAL TECHNOLOGIES INVOLVED

  • WebPack
  • React.js
  • React Router
  • TypeScript
  • Babel.js
  • Akka
  • Scala
  • Play (Scala Http Stack)
  • MySql
  • SBT
  • Kafka
  • Kafka Streams

Some of this will undoubtedly be covered in other blogs (such as React/Webpack), however some of it I am hoping will be quite novel/insightful material.

Who knows, though, there may be some of you out there that haven’t heard of Webpack, so some of that may even be new; we shall see. Hopefully there is enough stuff for everyone.

STORIES

I will maintain a list of stories and their sub tasks using Trello here: https://trello.com/b/F4ykCOOM/kafka-play-akka-react-webpack-tasks, which at the time of writing contained the items shown below.

 

TOP LEVEL STORIES

Web Site

Play Back End

  • Create a back end play app
  • Create a test Kafka consumer that is able to read a JSON payload from a Kafka topic
  • Create a test publisher that publishes a JSON payload to a Kafka topic
  • Create Akka Publisher flow to test EventSource JS call
  • Create login API
  • Create check ranking API, which will use Kafka Streams interactive queries over a KTable (or global KTable) in the materialized streams
  • Create publish job API, which will publish out on Kafka publisher where it will send a JSON payload
  • Create receive job update API, which will read a JSON payload from a Kafka consumer, with the intention of updating the map with the driver’s position
  • Create “Accept Job” API which will publish out on Kafka publisher where it will send JSON payload
  • Create “Bid for Job” API which will publish out on Kafka publisher where it will send JSON payload
  • Create Complete job API, which will publish out on Kafka publisher where it will send a JSON payload
  • Create ranking API, which will publish out on Kafka publisher where it will send a JSON payload
  • Create publish driver job co-ordinate update API, which will publish out on Kafka publisher where it will send a JSON payload

Kafka Streams

Create a test app that listens to a single Kafka JSON topic, creates a streams app from it, and pushes out to an output topic

  • Create a windowed Kafka Streams app that will window over all “driver bidding” jobs for a given period, and will output to an output stream, such that all the job bids can be consumed by a Kafka consumer
  • Create a paired stream of accepted jobs (id, client id, driver id) and updated driver positions, which will come in on a different stream
  • Create a ranking streams app which will store a successful ranking in a Kafka Stream KTable
  • Create a way to use interactive queries to allow clients/drivers to query their rankings

 

 

HOW WILL PROGRESS BE TRACKED

I will simply use Trello’s “Label” facility, such that done tasks will be “Green”, and there will obviously be a post/GitHub code repo folder that goes with each.

 

CAVEATS

1. I will not be concerned with connection failures; the aim of the project is to try and create a real-world-like project, but not actually create an end-to-end production-grade application
2. I will be treating every run as if it were the first; I will not be storing ANY permanent state (apart from ratings potentially)
3. I will be doing things at my own pace (I have 2 kids) so it comes when it comes

4. I will try and use varied technology choices, which will in places mean that there could potentially be more work required to make it production quality

 

 

 

A Look At Docker

A while ago I worked on a project that used this tech stack

  • Akka HTTP : (actually we used Spray.IO, but it is practically the same thing for the purposes of this article). For those that don’t know what Akka HTTP is, it is a simple Akka-based framework that is able to expose a REST interface to communicate with the actor system
  • Cassandra database : Apache Cassandra is a free and open-source distributed database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. Cassandra offers robust support for clusters spanning multiple datacenters, with asynchronous masterless replication allowing low latency operations for all clients.

We ran Cassandra as a multi-node cluster.

This was a pain to test, and we were always stepping on each other’s toes; as you can imagine, running up a 5-node cluster of VMs just to satisfy each developer’s own testing needs was a bit much. So we ended up with some dedicated test environments, running 5 Cassandra nodes. These were still a PITA to be honest.

This got me thinking: perhaps I could use Docker to help me out here. Perhaps I could run Cassandra in a Docker container; hell, perhaps I could even run my own code that uses Cassandra in a Docker container, and just point my UI at the Akka HTTP REST server running in Docker. Hmmm.

I started to dig around, and of course this is entirely possible (otherwise I would not be writing this article now would I).

This is certainly not a new thing here for CodeProject, there are numerous Docker articles, but I never found one that talked about Cassandra, so I thought why not write another one.

 

I have just published it here: https://www.codeproject.com/Articles/1175248/A-look-at-Docker

Akka Mailboxes

This post will be a bit smaller than the ones we have just done, but it is nonetheless just as important.

Let’s start by talking about mailboxes and dispatchers, what exactly they are, and how they relate to each other.

What’s A Mailbox?

In Akka the mailbox is some type of queue that holds the messages for an actor. There is usually one mailbox per actor, though in some cases, where routing gets involved, there may be only one mailbox between a number of actors. For now let’s assume a one-to-one relationship between mailboxes and actors.

What’s A Dispatcher

In Akka a Dispatcher is the heart of the actor system, and it is the thing that dispatches the messages.

Akka actors may be configured to use a certain Dispatcher, and the Dispatcher may in turn be configured to use a certain mailbox type.

Here is an example of how you might configure an actor to use a custom Dispatcher

You may have this code for an actor

import akka.actor.Props
val myActor =
  context.actorOf(Props[MyActor].withDispatcher("my-dispatcher"), "myactor1")

Where you may have this custom Dispatcher in your configuration of the system

my-dispatcher {

  # Type of mailbox to use for the Dispatcher
  mailbox-requirement = org.example.MyInterface

  # Dispatcher is the name of the event-based dispatcher
  type = Dispatcher

  # What kind of ExecutionService to use
  executor = "thread-pool-executor"

  # Configuration for the thread pool
  thread-pool-executor {

    # minimum number of threads to cap factor-based core number to
    core-pool-size-min = 2

    # No of core threads ... ceil(available processors * factor)
    core-pool-size-factor = 2.0

    # maximum number of threads to cap factor-based number to
    core-pool-size-max = 10
  }

  # Throughput defines the maximum number of messages to be
  # processed per actor before the thread jumps to the next actor.
  # Set to 1 for as fair as possible.
  throughput = 100
}

It can be seen above that we are able to configure the mailbox type for a Dispatcher in the configuration using the line

# Type of mailbox to use for the Dispatcher
mailbox-requirement = org.example.MyInterface

There are actually several inbuilt Dispatcher types that you may use when creating a custom Dispatcher.

Talking about Dispatcher types and how they all work is kind of out of scope for what I wanted to talk about in this post, so if you want to know more about Akka Dispatchers you should consult the official Akka documentation:

http://doc.akka.io/docs/akka/snapshot/scala/dispatchers.html

OK, so now that we have taken that slight detour and talked about how you can associate a mailbox type with a custom Dispatcher should you want to, let’s get back to the main thrust of this post, which is to talk about mailboxes.

As previously stated, mailboxes represent a storage mechanism for an actor’s messages.

Built In Mailbox Types

Akka comes shipped with a number of mailbox implementations:

UnboundedMailbox – The default mailbox

  • Backed by a java.util.concurrent.ConcurrentLinkedQueue
  • Blocking: No
  • Bounded: No
  • Configuration name: “unbounded” or “akka.dispatch.UnboundedMailbox”

SingleConsumerOnlyUnboundedMailbox

  • Backed by a very efficient Multiple-Producer Single-Consumer queue; cannot be used with a BalancingDispatcher
  • Blocking: No
  • Bounded: No
  • Configuration name: “akka.dispatch.SingleConsumerOnlyUnboundedMailbox”

BoundedMailbox

  • Backed by a java.util.concurrent.LinkedBlockingQueue
  • Blocking: Yes
  • Bounded: Yes
  • Configuration name: “bounded” or “akka.dispatch.BoundedMailbox”

UnboundedPriorityMailbox

  • Backed by a java.util.concurrent.PriorityBlockingQueue
  • Blocking: Yes
  • Bounded: No
  • Configuration name: “akka.dispatch.UnboundedPriorityMailbox”

BoundedPriorityMailbox

  • Backed by a java.util.PriorityQueue wrapped in an akka.util.BoundedBlockingQueue
  • Blocking: Yes
  • Bounded: Yes
  • Configuration name: “akka.dispatch.BoundedPriorityMailbox”
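To make the priority mailboxes a little more concrete, here is a minimal sketch of a custom unbounded priority mailbox (the class name and the priority values are invented, but subclassing UnboundedPriorityMailbox with a PriorityGenerator is the standard Akka approach):

import akka.actor.ActorSystem
import akka.dispatch.{PriorityGenerator, UnboundedPriorityMailbox}
import com.typesafe.config.Config

// Hypothetical priority mailbox: lower numbers are dispatched first
class MyPrioMailbox(settings: ActorSystem.Settings, config: Config)
  extends UnboundedPriorityMailbox(
    PriorityGenerator {
      case "highpriority" => 0 // process these first
      case "lowpriority"  => 2 // process these last
      case _              => 1 // everything else in between
    })

You would then reference it from configuration via something like mailbox-type = "mypackage.MyPrioMailbox" (the package name again being hypothetical).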

 

Default Mailbox

As shown above, the unbounded mailbox is the default. You can however swap out the default using the following configuration, though you will need to ensure that the chosen default mailbox is the correct one for the type of Dispatcher used. For example, a SingleConsumerOnlyUnboundedMailbox cannot be used with a BalancingDispatcher.

Anyway this is how you would change the default mailbox in config

akka.actor.default-mailbox {
  mailbox-type = "akka.dispatch.SingleConsumerOnlyUnboundedMailbox"
}

 

Mailbox Type For An Actor

It is possible to associate a particular type of mailbox with a particular type of actor, which can be done by mixing in the RequiresMessageQueue trait:

import akka.actor.Actor
import akka.dispatch.RequiresMessageQueue
import akka.dispatch.BoundedMessageQueueSemantics

class SomeActor extends Actor
  with RequiresMessageQueue[BoundedMessageQueueSemantics] {
  def receive = { case _ => () } // handle messages as normal
}

Where you would use the following configuration to configure the mailbox

bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 1000
  mailbox-push-timeout-time = 10s
}
 
akka.actor.mailbox.requirements {
  "akka.dispatch.BoundedMessageQueueSemantics" = bounded-mailbox
}

It is worth noting that this setting could be overridden in code or by a dispatcher’s mailbox configuration section.

Where Is The Code?

As previously stated all the code for this series will end up in this GitHub repo:

https://github.com/sachabarber/SachaBarber.AkkaExamples

The Nuances of Loading and Unloading Assemblies with AppDomain

I don’t normally like just pointing out other people’s work, but this time I have no hesitation at all in doing just that. If you have ever worked with AppDomain(s) in .NET you will certainly have had some fun.

Marc Clifton of CodeProject has written a truly great article on AppDomain(s) which you should all read. You can find it here: http://www.codeproject.com/Articles/1091726/The-Nuances-of-Loading-and-Unloading-Assemblies-wi

Nice one Marc

WebApi POST + [ISerializable] + JSON .NET

At work I have taken on the task of building a small utility web site for admin needs. Thing is, I wanted it to be very self-contained, so I have opted for this:

  • Self hosted web API
  • JSON data exchanges
  • Aurelia.IO front end
  • Raven DB database

So I set out to create a nice Web API endpoint like this:

private IDocumentStore _store;

public LoginController(IDocumentStore store)
{
	_store = store;
}

[HttpPost]
public IHttpActionResult Post(LoginUser loginUser)
{
    //
}

Where I then had this data model that I was trying to post via the awesome REST plugin for Chrome:

using System;
 
namespace Model
{
    [Serializable]
    public class LoginUser
    {
        public LoginUser()
        {
 
        }
 
        public LoginUser(string userName, string password)
        {
            UserName = userName;
            Password = password;
        }
 
        public string UserName { get; set; }
        public string Password { get; set; }
 
        public override string ToString()
        {
            return string.Format("UserName: {0}, Password: {1}", UserName, Password);
        }
    }
}

This just would not work. I could see the endpoint being called OK, but no matter what I did, the LoginUser model in the post would always have NULL properties. After a little fiddling I removed the [Serializable] attribute and it all just started to work.

Turns out this is to do with the way JSON.NET works when it sees the [Serializable] attribute.

For example if you had this model

[Serializable]
public class ResortModel
{
    public int ResortKey { get; set; }
    public string ResortName { get; set; }
}

Without the [Serializable] attribute the JSON output is:

{
    "ResortKey": 1,
    "ResortName": "Resort A"
}

With the [Serializable] attribute the JSON output is:

{
    "<ResortKey>k__BackingField": 1,
    "<ResortName>k__BackingField": "Resort A"
}

I told one of my colleagues about this, and he found this article: http://stackoverflow.com/questions/29962044/using-serializable-attribute-on-model-in-webapi which explains it all nicely, including how to fix it.

Hope that helps; it sure bit me in the ass.

Introduction To Apache Spark

I have just started a new job, where we will be using the following technology stack

  • Apache Spark
  • Apache Zookeeper
  • Cassandra
  • Scala

As I get to grips with these I will be writing introductory articles on them, which will hopefully help those who wish to take their 1st steps with these cool bits of tech.

The 1st one is done and is on Apache Spark: http://www.codeproject.com/Articles/1023037/Introduction-to-Apache-Spark

 

This is what the creators of Apache Spark have to say about their own work.

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

So if this sounds of interest to you, I hope you enjoy the article.

Grunt.Js examination

Lately I have been looking at VS2015 / ASP vNext, and it did not take a genius to see that you need to know NPM/Bower and Gulp/Grunt. I have used NPM before and Bower is easy to pick up. I have not used (but have heard of) Gulp and Grunt before.

I looked at both of these over the past couple of weeks, and decided I liked Grunt better. For those that have not heard of Grunt it is a task runner for running repetitive tasks. There are lots of examples/resources available for Grunt, but I kind of wanted to look/try it myself. I have written up my findings in the following article.

http://www.codeproject.com/Articles/995334/Small-Grunt-js-examination

Like I say, this is nothing new, and I expect most web developers would be like "yeah, obviously"; it was however interesting for me as a Grunt newbie, as others may be too.

CQRS Demo

For a while now I have found myself becoming interested in CQRS, and I am fortunate enough to work with a practitioner of CQRS. As such it seemed like a good time to try and learn a bit more about this pattern.

I have created a small demo app that is a fully asynchronous CQRS example.

If this sounds like it may be of interest to you, you can read more about it over at CodeProject: CQRS : A Cross Examination Of How It Works

Git protocol errors when using Bower package manager

I have just got back from a month long holiday (which was great). Anyway back to work now…..sigh

So the other day I was trying to get Yeoman to scaffold a new angular.js app for me, which worked fine. I then wanted to use the Bower package manager to download a package, and whoever created the package hosted it at a git:// URL. Bower can deal with this just fine, but if, like me, your network is locked down, with all sorts of firewall/proxy rules, you may not be able to use the git protocol.

Luckily this is an easy fix; all you need to do is issue this command to have git add a configuration rule that rewrites git:// URLs to https://:

git config --global url."https://".insteadOf git://

What Changes Did This Command Make?

Take a look at your global configuration using:

git config --list

You’ll see the following line in the output:

url.https://.insteadof=git://

You can see how this looks on file, by taking a peek at ~/.gitconfig where you should now see that the following two lines have been added:

[url "https://"]
    insteadOf = git://

And that is all there is to it, everything just worked after that.