MADCAP IDEA PART 5 : ADDING REACT-ROUTER

 

Last Time

 

Last time we looked at using Balsamiq mockups to prototype our screens. This post is a small one, but covers something I found slightly trickier to get working than expected: routing inside a React application.

 

PreAmble

Just as a reminder this is part of my ongoing set of posts which I talk about here :

https://sachabarbs.wordpress.com/2017/05/01/madcap-idea/, where we will be building up to a point where we have a full app using lots of different stuff, such as these

 

  • WebPack
  • React.js
  • React Router
  • TypeScript
  • Babel.js
  • Akka
  • Scala
  • Play (Scala Http Stack)
  • MySql
  • SBT
  • Kafka
  • Kafka Streams

 

What is routing?

Routing is the act of taking a URI (with or without parameters), finding a resource (be that a page, or a React component file) that matches the given URI, and returning that content to be rendered.
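
Conceptually (a purely illustrative sketch, not code from this series), you can think of it as a lookup from URI pattern to renderable content:

// routing, conceptually: a lookup from URI to the content to render
const routes: { [uri: string]: () => string } = {
    '/':        () => '<h2>Home</h2>',
    '/contact': () => '<h2>Contact</h2>',
};
const contentFor = (uri: string) =>
    (routes[uri] || (() => '<h2>Not found</h2>'))();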

 

What choices do we have?

When it comes to React there are not that many choices at all. We could hand roll our own routing mechanism, which might be accomplished using either

window.location.hash
window.location.href

This is however quite a pain, and you would need to hand roll your own support for richer routes that require route parameters, such as /orders/{orderId}. So perhaps there is a better way out there already. There is of course: the https://reacttraining.com/react-router/
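
To see why, here is a minimal sketch of what hand rolling with window.location.hash looks like (everything here is hypothetical demo code):

// a minimal hand rolled hash router (purely illustrative)
const render = (html: string) =>
    document.getElementById('root')!.innerHTML = html;

const onHashChange = () => {
    const path = window.location.hash.slice(1) || '/';   // '#/contact' -> '/contact'
    if (path === '/') render('<h2>Home</h2>');
    else if (path === '/contact') render('<h2>Contact</h2>');
    else render('<h2>Not found</h2>');
    // richer routes like /orders/{orderId} would need regex matching,
    // nesting, redirects... which is where the pain starts
};

window.addEventListener('hashchange', onHashChange);
onHashChange();   // cover the initial page load too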

 

 

 

React-Router

This is the de facto go-to solution for routing in React. react-router was built with React in mind, which means at its heart it is a component based router. It is quite stable and has been around quite a while. The current version is v4, which is quite a different beast from the previous versions (a lot changed), so for this series of posts, and the code that goes with it, I will be using v3 of react-router.

 

Installing react-router

To install react-router you just need NPM (though remember this article targets a specific major version of react-router, v3, so we pin it rather than taking the latest):

npm install react-router@3 --save

 

Creating the components

As I say, at its heart react-router is a component based router, so we need some components to render and route to. For the demo all my routes are simple routes that don't require any further parameters, but rest assured, react-router can deal with route parameters just fine.
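
For example, a parameterized route in v3 looks something like this (Order is a hypothetical component, not part of the demo code):

// hypothetical component; in v3 the matched path values
// arrive on the component as this.props.params
class Order extends React.Component<any, undefined> {
    render() {
        return <h2>Order : {this.props.params.orderId}</h2>
    }
}

// and the route that feeds it
<Route path="/orders/:orderId" component={Order}/>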

 

So here are the demo route components that we would like to route to.

 

import * as React from "react";
// hashHistory is what gives us programmatic navigation (more on this below)
import { hashHistory } from 'react-router';

class ReDirecter extends React.Component<undefined, undefined> {

    handleClick = () => {
        hashHistory.push('/contact');
    };

    render() {
        return (
             <button onClick={this.handleClick} type="button">go to contact</button>

        )
    }
}



const Home = () => (
  <div>
    <h2>Home</h2>
  </div>
)

const Contact = () => (
  <div>
    <h2>Contact</h2>
  </div>
)


const About = () => (
  <div>
    <h2>About</h2>
  </div>
)

 

It can be seen that these are very simple components, but they are good enough to route to. There is also an example above (the ReDirecter) of how to use the router to navigate to a new route programmatically in code, but we will talk more about that later.

 

Types of history

So now that we have some components that can be routed to, we need to understand a bit more about history, and what we get out of the box with react-router. There is a great page describing this here : https://github.com/ReactTraining/react-router/blob/v2.0.0-rc5/docs/guides/basics/Histories.md

 

The real crux of it is that there are several types of history providers that can be used with the router

 

hashHistory

This is a simple hash (#) based history, which DOES NOT need server side route support.

 

browserHistory

This type of history DOES require FULL server side route support. Which means you really need to think about how your components need to be split up and served from the server side code. This history provider does support isomorphic rendering though.

 

For me the hashHistory above was what I wanted, so that is what I have used.
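
To make the difference concrete, here is roughly what the same route looks like under each provider (a sketch, your host/port will differ):

import { hashHistory, browserHistory } from 'react-router'

// hashHistory    -> http://localhost:9000/#/contact
//                   (the part after # never reaches the server)
// browserHistory -> http://localhost:9000/contact
//                   (the server must be able to answer GET /contact itself)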

 

 

Creating the router

Ok, so now that we have talked about the types of history, and about having components to route to, how do we create the router?

 

Well quite simply, we do it like this (remember this is v3 of the router; v4 may be different). Note that I am using ES6 style imports.

 

import * as React from "react";
import * as ReactDOM from "react-dom";

import 'bootstrap/dist/css/bootstrap.css';
import {Nav, Navbar, NavItem, NavDropdown, MenuItem} from "react-bootstrap";

import { Router, Route, hashHistory  } from 'react-router'





ReactDOM.render((
    <Router history={hashHistory}>
        <Route component={App}>
            <Route path="/" component={Home}/>
            <Route path="/contact" component={Contact}/>
            <Route path="/about" component={About}/>
            <Route path="/redirecter" component={ReDirecter}/>
        </Route>
    </Router>
), document.getElementById('root'));

 

It can be seen that the router itself is a React component and gets rendered to a target DOM element. It can also be seen that we pass in the hashHistory which we imported from react-router. Each route is added as a new Route, where we match the expected route path with the component to render.

 

I will also be showing you how to use react-router with a bootstrap-react Navbar, hence the react-bootstrap imports above.

 

 
How does that work with a bootstrap-react navbar?

So I just mentioned I would like to use a bootstrap-react Navbar, so what does that look like? Well, some of the eagle eyed amongst you may have noticed the line component={App} in the router setup we just saw. Let's have a look at that.

 

import * as React from "react";
import * as ReactDOM from "react-dom";

import 'bootstrap/dist/css/bootstrap.css';
import {Nav, Navbar, NavItem, NavDropdown, MenuItem} from "react-bootstrap";

import { Router, Route, hashHistory  } from 'react-router'


class MainNav extends React.Component<undefined, undefined> {
  render() {
    return (
     <Navbar brand='React-Bootstrap'>
         <Nav>
             <NavItem eventKey={1} href='#/'>Home</NavItem>
             <NavItem eventKey={2} href='#/contact'>Contact</NavItem>
             <NavItem eventKey={3} href='#/about'>About</NavItem>
             <NavItem eventKey={4} href='#/redirecter'>Redirect</NavItem>
         </Nav>
     </Navbar>
    )
  }
}



class App extends React.Component<any, undefined> {
  render() {
    return (

        <div>
            <MainNav/>
            {this.props.children}
        </div>
    )
  }
}

 

It can be seen that the App component is another React component that does two things:

  • Renders another component, namely MainNav (the bootstrap-react Navbar component shown above, where the Navbar has links for the required routes)
  • Renders the children of the router, which for this example will be a single component (so for the demo code, one of the ReDirecter / Home / Contact / About components)
 
What about on the fly route changes?

But what about when we are inside a React component and wish to navigate somewhere else, how do we do that? Well, it is done via the router's history provider (hashHistory / browserHistory etc). Here is an example from a simple React component that shows a button which, when clicked, performs a redirect to the Contact component. The trick is to use the history provider's push(..) method to set a new route location.

 

class ReDirecter extends React.Component<undefined, undefined> {

    handleClick = () => {
        hashHistory.push('/contact');
    };

    render() {
        return (
             <button onClick={this.handleClick} type="button">go to contact</button>

        )
    }
}
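
One small extra worth knowing: in v3, push(..) also accepts a location descriptor object rather than a plain string, which is handy once query parameters come into play (a sketch, not used in the demo code):

hashHistory.push({
    pathname: '/contact',
    query: { from: 'redirecter' }    // ends up as #/contact?from=redirecter
});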

 

 

Demo

Ok, so now we have covered all the basics, let's run the code up and see what we get.

 

When we first start we see this, where we can see the Navbar, and the address bar contains a # style address (this is due to using the hashHistory history provider for the Router)

 

image

 

We can click on the various links representing the routes, and the relevant component will get rendered by the Router. The Redirect route is shown below, where clicking the button will perform a redirect to the Contact component

 

image

 

 

Conclusion

As you can see, even something as simple as page navigation in React requires a bit of thought, and depending on what type of routing you choose to use, it may even require a fair amount of server side work, which could change the entire workflow quite a bit.

 

I went for the hashHistory for this demo, but for a real world production ready application you should use the browserHistory, and ensure you have supporting server side code to deal with the expected routing requests.


MADCAP IDEA PART 4 : PROTOTYPING THE SCREENS

 

Last Time

 

Last time we looked at bringing in the Play Framework (a Scala based MVC web framework) and making the front end work with Play. This time we will be looking at the initial prototypes of the screens.

This is my best guess of what they may look like right now, based on my initial requirements, but as with all things, once you get into the guts of it changes will occur.

 

 

PreAmble

Just as a reminder this is part of my ongoing set of posts which I talk about here :

https://sachabarbs.wordpress.com/2017/05/01/madcap-idea/, where we will be building up to a point where we have a full app using lots of different stuff, such as these

  • WebPack
  • React.js
  • React Router
  • TypeScript
  • Babel.js
  • Akka
  • Scala
  • Play (Scala Http Stack)
  • MySql
  • SBT
  • Kafka
  • Kafka Streams

 

Mockup tool of choice

I am a big fan of the Balsamiq mockup tools https://balsamiq.com/. These come as a standalone installed version or as a plugin for JIRA.

Balsamiq provides the following (I am just listing the features I used, there are many more)

 

  • Drag and drop from a wide range of forms, containers, controls
  • Set content for controls (usually using some fancy design time behavior)
  • Set navigation links
  • Set properties like IsSelected, IsEnabled etc etc

 

This is what the Windows installed Balsamiq desktop version looks like; see how you have many categories of items to choose from

 

image

 

And here is what I mean by the clever design time support. This is a data grid that I have double clicked on, where the text in design mode describes the rendered results of the control

 

image

 

It really is a very nice tool. Anyway on with the initial screen designs

 

Navbar

image

 

This will be a simple react-router / react-bootstrap based navigation bar. There is nothing much more to say about that.

 

 

Login

image

 

This will be a login form which will be validated, and submitted to a Play Framework controller for further validation. The Play controller will look up the user details from a MySQL database, and if an entry is found the user is considered logged on. Keeping it simple here: no OAuth, no JWT, just a simple lookup.

 

Passenger Register

image

 

If the user is a passenger, the information they will need to enter to register is different from that of a driver. As such there is a specific passenger registration form, which will be validated and sent to a Play controller endpoint for storage in MySQL.

 

Driver Register

image

 

If the user is a driver, we need more information about the vehicle, as such there is a specific driver registration form, which will be validated and sent to a Play controller endpoint for storage in MySQL.

 

 

Create Job

image

 

Only a passenger will be able to create new jobs. Since I am doing all this work on a single laptop which is ALWAYS in a single location, I am having to SIMULATE the geo-coordinates of a job by accepting the current user's input for their current position. The passenger/driver users will provide this geo information by clicking on a google map. The geo co-ordinate update will either travel through a Kafka stream -> Akka -> Comet, or may just use Akka -> Comet. I have not fully decided on this part yet.

 

There may only EVER be 1 active job, so if a logged in passenger tries to create a 2nd job, this should cause an error:

 

image

 

View Job

Both passengers/drivers may view an active job. Drivers may “bid for a job” by clicking on the map, providing the job is not already paired with a driver. A driver symbol will be a car; as before, the driver will update their geo co-ordinates by clicking on the map. As before, the geo co-ordinate update will either travel through a Kafka stream -> Akka -> Comet, or may just use Akka -> Comet.

 

image

 

A passenger may inspect a driver's details, and choose to accept the driver, at which point the passenger's job becomes assigned to the chosen driver.

 

image

 

Drivers that are not allocated to the job will be removed from the map, and only geo updates from the paired passenger/driver will be reflected on the map. 

 

Passenger Completion

 

image

Once a job has been completed (by clicking the “Complete” button) the passenger will be able to rank the driver. This will store the ranking for the driver. This could be stored directly in MySQL, but I want to play with Kafka Streams a bit more, so we use a Kafka publisher -> Kafka Streams -> KTable arrangement to store the state, and then use Kafka Streams interactive queries to get the data out again.

 

 

Driver Completion

image

 

The driver is also able to complete the job from their end (using the “complete” button), and is able to rank the passenger. This will work as described above.

 

View Ranking

image

 

Depending on which way I go with the ranking storage, this will either be a direct MySQL query or a Kafka Streams interactive query over a KTable.

 

Conclusion

This is perhaps the simplest of all the posts in this series, but it is an important one. Next time we will try and statically implement these screens, and the associated routing that goes with them.

Madcap Idea Part 3 : Bringing Play Back End Into The Fold + Some Basic Streaming

Last Time

Last time we looked at bringing in the Inversify.js IOC container. This time we will bring the Play Framework (a Scala based MVC web framework) into the fold, and shall be using the front end we have been working on so far as the front end for the Play Framework back end.

 

PreAmble

Just as a reminder this is part of my ongoing set of posts which I talk about here :

https://sachabarbs.wordpress.com/2017/05/01/madcap-idea/, where we will be building up to a point where we have a full app using lots of different stuff, such as these

  • WebPack
  • React.js
  • React Router
  • TypeScript
  • Babel.js
  • Akka
  • Scala
  • Play (Scala Http Stack)
  • MySql
  • SBT
  • Kafka
  • Kafka Streams

 

What Is The Play Framework?

The Play Framework is a Scala based MVC (model view controller) web application framework. As such it has built in mechanisms for the things typical of an MVC web framework (certainly if you have done any ASP.NET MVC you will find it very familiar).

So we have the typical MVC concerns covered by the Play Framework

  • Controllers
  • Actions
  • Routing
  • Model binding
  • JSON support
  • View engine

Thing is, I will not be doing any actual HTML in the Play Framework back end code; I want to do all of that using the previously covered webpack/TypeScript/React starter code I have shown so far. Rather, I will be using the Play Framework as an API back end, where we will simply be using various controllers as endpoints to accept/serve JSON and event streamed data. All the actual front end work/routing will be done via webpack and React.

 

There are still some very appealing parts in Play that I did want to make use of, such as:

  • It is Scala, which means when I come to integrate Kafka / Kafka Streams it will be ready to do so
  • It uses Akka which I wanted to use. I also want to use Akka streams, which Play also supports
  • Play controllers lend themselves quite nicely to being able to create a fairly simple REST API
  • It can be used fairly easily to serve static files (think of these as the final artifacts that come out of the webpack generation pipeline). So things like minimized CSS / JS etc etc

 

So hopefully you can see that using the Play Framework still made a lot of sense, even if we will only end up using half of what it has to offer. To be honest, the idea of using controllers for a REST API is something that is done in ASP.NET MVC all the time, either by using actual controllers or by using WebApi.

Ok so now that we know what we will be using Play Framework for, how about we dive into the code for this post.

 

Play Framework Basics

Let's start by looking at the bare bones structure of a Play Framework application, which looks like this (I am using IntelliJ IDEA as my IDE)

image

Let's talk a bit about each of these folders

 

app

This folder holds controllers/views (I have added an Entities folder there, which is not part of the standard Play Framework application layout). Inside the controllers folder you would find controllers, and inside the views folder you would find views. For the final app there will be no views folder; I simply kept it in this screenshot to talk about what a standard Play Framework application looks like.

 

conf

This folder contains the configuration for the Play Framework application. This includes the special routes file, and any other application specific configuration you might have.

Let's just spend a minute having a look at the Play Framework routes file, which you can read more about here : https://www.playframework.com/documentation/2.5.x/ScalaRouting

The routes file has its own DSL, that is responsible for matching a give route with a controller + controller action. The controller action that matches the route is ultimately responsible for servicing the http request. I think the DSL shown in routes file below is pretty self explanatory with perhaps the exception of the assets based routes.

 

All assets based HTTP requests (ie ones that start with /assets, for example http://localhost:9000/assets/images/favicon.png) are actually routed through to a special controller called Assets. You don't see any code for this one; it is part of the Play Framework itself. This built in Assets controller is responsible for serving up static files, which it expects to find in the public folder. So our initial request of http://localhost:9000/assets/images/favicon.png gets translated into this file (relative path from project root): /public/images/favicon.png. As I say, this is handled for you by the built in Assets controller.

 

The only other funky part of the Assets based route is that it uses a *file in its route, which essentially boils down to the Play Framework being able to match a multi-part path. We actually just saw this with the example above, http://localhost:9000/assets/images/favicon.png; see how that contains not only the file name, but also an images directory. The Assets controller + routing is able to deal with that path just fine.

 

# Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~

# Home page

GET        /                                   controllers.HomeController.index()

GET        /scala/comet/liveClock              controllers.ScalaCometController.streamClock()
GET        /scala/comet/kick                   controllers.ScalaCometController.kickRandomTime()

# Map static resources from the /public folder to the /assets URL path
GET        /assets/*file                       controllers.Assets.at(path="/public", file)

 

Ok so moving on to the rest of the standard folders that come with a Play Framework application

 

public

This is where you will need to put any static content that you wish to be served. Obviously views (if you use that part of Play) will be within the app/views folder. Like I say, I am not using the views aspect of Play, so you will not be seeing any views in my views folder; I instead want to let webpack et al generate my routing, web page etc etc. I do however want to serve bundles, so later on I will be showing you how my webpack generated bundles fit in with the Play Framework ecosystem.

 

target

Since this is a Scala based project we get the standard Scala folders, and target is one of them; it holds the generated/compiled code.

 

SBT

It is worth pointing out that my Play Framework application is an SBT based project, as such there is an SBT aspect to it, which largely boils down to these files

 

Project [root-build] / plugins.sbt file

This file adds Play as a plugin for the SBT project

 

// The Lightbend repository
resolvers += Resolver.typesafeRepo("releases")

// Use the Play sbt plugin for Play projects
addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.5.14")

 

build.sbt

This is the main SBT build file for the project. This is where all our external dependencies are brought in etc etc (standard SBT stuff)

 

import play.sbt._
import sbt.Keys._
import sbt._

name := "play-streaming-scala"

version := "1.0-SNAPSHOT"

scalaVersion := "2.11.11"

lazy val root = (project in file(".")).enablePlugins(play.sbt.PlayScala)

javacOptions ++= Seq("-source", "1.8", "-target", "1.8", "-Xlint")

initialize := {
  val _ = initialize.value
  if (sys.props("java.specification.version") != "1.8")
    sys.error("Java 8 is required for this project.")
}

 

So I think that covers the basics of a standard Play Framework application, the remainder of this post will look at the following aspects

  • How do we provide a stream based request, that allows server sent events to be sent back to JavaScript from server side code
  • How do we integrate the webpack based front end we have been playing with in previous posts

 

How do we provide a stream based request (server sent events)

One of the main ingredients of the application we have chosen to build is the ability to stream data back to the JavaScript in response to incoming Kafka data. Ok, we have not got to the Kafka part yet, but we know we will need some way of pushing data from the Play Framework application code to the front end JavaScript code. So how do we do that?

Well there are several parts to this, so let's just go through them in turn

 

Play streaming route

There are 2 routes for the example stream for this post

  • The actual stream endpoint route itself
  • An endpoint that allows us to push a new value into the Akka Stream that is exposed by the stream endpoint route

Here is the full routing code for these 2 routes

 

GET        /scala/comet/liveClock              controllers.ScalaCometController.streamClock()
GET        /scala/comet/kick                   controllers.ScalaCometController.kickRandomTime()

 

Play stream provider

The job of creating the stream is handled by the ScalaCometController where it uses Akka Streams to provide the stream itself. The clever part is what parts of the Akka Streams API we used.

I have chosen to use MergeHub and BroadcastHub, which act as fan-in and fan-out stages respectively. This allows us to have many producers and many consumers that are able to push to and listen to the stream.

 

I have also supplied a route kickRandomTime() which will produce a new random JSON value that will either be a Location or a Resident domain object entity converted to JSON.

Another thing to note is how we handle stream errors. We use ideas borrowed from regular Akka, where we use a supervision strategy to manage the stream.

Here is the full code to the ScalaCometController

 

package controllers

import java.time.ZonedDateTime
import java.time.format.DateTimeFormatter
import javax.inject.{Inject, Singleton}

import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ActorMaterializerSettings, Materializer, Supervision}
import akka.stream.scaladsl.{BroadcastHub, Keep, MergeHub, Source}
import play.api.http.ContentTypes
import play.api.libs.Comet
import play.api.libs.json._
import play.api.mvc.{Controller, _}
import Entities._
import Entities.JsonFormatters._

import scala.concurrent.ExecutionContext


object flipper
{
  var current: Boolean = false
}


@Singleton
class ScalaCometController @Inject()
  (
    implicit actorSystem: ActorSystem,
    ec: ExecutionContext
  )
  extends Controller  {

  //Error handling for streams
  //http://doc.akka.io/docs/akka/2.5.2/scala/stream/stream-error.html
  val decider: Supervision.Decider = {
    case _                      => Supervision.Restart
  }

  implicit val mat = ActorMaterializer(
    ActorMaterializerSettings(actorSystem).withSupervisionStrategy(decider))


  // MergeHub gives us the fan-in (many producers can push into 'sink'),
  // BroadcastHub gives us the fan-out (many consumers can attach to 'source')
  val (sink, source) =
    MergeHub.source[JsValue](perProducerBufferSize = 16)
      .toMat(BroadcastHub.sink(bufferSize = 256))(Keep.both)
      .run()

  def streamClock() = Action {
    Ok.chunked(source via Comet.json("parent.clockChanged")).as(ContentTypes.HTML)
  }

  def kickRandomTime() = Action {

    var finalJsonValue:JsValue=null

    flipper.current = !flipper.current
    if(flipper.current) {
      finalJsonValue = Json.toJson(Location(1.0,1.0))
    } else {
      val rand = DateTimeFormatter.ofPattern("ss mm HH").format(ZonedDateTime.now().minusSeconds(scala.util.Random.nextInt))
      finalJsonValue = Json.toJson(Resident("Hove", 12, Some(rand)))
    }
    Source.single(finalJsonValue).runWith(sink)
    Ok(s"We sent '$finalJsonValue'")
  }

}

Client side Comet frame

The Play Framework has support for web sockets, and we could have used a web socket, but there are times when you don't want the client to initiate comms; you just want notifications of something that happened on the server. Luckily the Play Framework has support for this too using Comet, which you can read more about here : https://www.playframework.com/documentation/2.5.x/ScalaComet

What this really boils down to is having an IFRAME that is permanently part of the rendered HTML (the forever-iframe technique) and is hooked up to a Comet Play URL. For me that URL is

 

/scala/comet/liveClock

 

Where it can be seen that this uses the route

 

GET        /scala/comet/liveClock              controllers.ScalaCometController.streamClock()

 

Which we saw above

 

Client side RX comet listener

So we have now seen how the back end produces a stream of differing JSON objects (ok, it toggles between 2 fixed JSON objects), and how we use the forever-iframe idea with a Comet based Play route. So what about the message coming into the JavaScript, what does that look like?

 

Well here is the relevant code, where we wrap the code we want to run in a self executing function. There are a couple of points of note here:

  • We create a top level window function called clockChanged, which ties up with the back end Comet route code we saw earlier: Ok.chunked(source via Comet.json("parent.clockChanged")).as(ContentTypes.HTML)
  • We create a custom event that we dispatch via the window
  • We use that custom event to create an Rx Observable (I want this separation as long term the Rx code will become a service that is injected into the props of a React component from the IOC container)

 

import Rx from 'rx';  


(function () {

    var evt;

    window['clockChanged'] = function (incomingJsonPayload) {
        evt = new CustomEvent('onClockChanged', { detail: incomingJsonPayload });
        window.dispatchEvent(evt);
    }

    var source = Rx.Observable.fromEvent(window, 'onClockChanged');

    var subscription = source.subscribe(
        function (x) {
            console.log('RX saw onClockChanged');
            console.log('RX x = ', x.detail);
        },
        function (err) {
            console.log('Error: %s', err);
        },
        function () {
            console.log('Completed');
        });

} ());

And that is all there is to that part. As I say, this is demo code and will be moved about a bit over time, but it shows the moving parts ok.

How do we integrate our existing webpack based front end with Play

 

Up until now I have avoided talking about the view in Play, and how it relates to the webpack based front end code we have seen in the previous posts. Thing is, we can do views in Play, but I was keen to do all my front end work in TypeScript/React/webpack, and wanted Play to merely serve up the front end code from those tools. What that really means is serving up the final JS/CSS bundles and the index.html produced by webpack and the awesome HtmlWebpackPlugin (which uses a template and injects the right script bundles). So can this webpack based workflow work ok with Play? I thought about it a bit, and came up with this.

 

Making sure webpack generates bundles to correct place

The first thing to do was to include the front end code in the Play application at some location. For me this is as follows:

image

You want to avoid putting too much stuff under the public folder in Play, as Play treats these files as static files that can be cached etc etc, so 100MB of node modules should definitely not go under public if you can help it.

 

Then from there we need to ensure the bundles get built to the correct location, which for my setup and my Play application meant the public folder

 

Making sure bundles are prefixed

The other thing I needed to do was make sure the webpack output was set up with the correct public path.

To do the 2 things above, I changed my webpack config file to this

image

With these changes in place this is what I now get for my index.html page in the public\dist folder

 

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8" />
        <title>Hello React!</title>
        <link href="/assets/dist/vendor.bundle.994d68871d44c4fc440c.css" rel="stylesheet">
        <link href="/assets/dist/indexCss.bundle.994d68871d44c4fc440c.css" rel="stylesheet">
    </head>
    <body>
        <!-- Main -->
        <script src="/assets/dist/vendor.bundle.994d68871d44c4fc440c.js"></script>
        <script src="/assets/dist/index.bundle.994d68871d44c4fc440c.js"></script>
    </body>
</html>

 

See how the bundles point to the assets folder, which we now know Play will serve using the built in Assets controller.

 

Serving the initial webpack generated HTML page

The last piece of the puzzle is how do we get a standard play route to serve up this index.html file? Well here is how

 

package controllers


import javax.inject.Inject

import play.api.mvc.{Action, Controller}

class HomeController @Inject() (environment: play.api.Environment)
  extends Controller {

  def index() = Action {
    // read the webpack generated index.html from disk (note the Windows style path separators)
    val fullpath = s"${environment.rootPath}\\public\\dist\\index.html"
    val htmlContents = scala.io.Source.fromFile(fullpath).mkString
    Ok(htmlContents).as("text/html")
  }

}

 

Quite cunning, no? We just read the file as text, and serve the string contents as HTML. The route for this is still the standard Play route

 

GET        /                                   controllers.HomeController.index()

 

And bam, we now have a streaming Akka/Play/Webpack app working. Neato

 

A Demo

Now that I have talked about most of the parts, let's see a little demo of it working. First we need to load our site up (which is the GET / route above)

image

Our ugly but functional React page is shown. Not very exciting. Let's look at the console (and clear it), load up the great REST tool Postman, poke the stream simulation endpoint, and watch our Rx JavaScript code write out to the console.
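
If you don't have Postman to hand, poking the endpoint from the browser console works just as well (assuming the default Play port of 9000):

fetch('http://localhost:9000/scala/comet/kick')
    .then(response => response.text())
    .then(text => console.log(text));    // e.g. We sent '...'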

 

Here is the console after a few Send clicks in postman for the http://localhost:9000/scala/comet/kick route

image

 

Conclusion

So that is all I wanted to say this time. I think it has worked out fairly well. Next time we will be looking at adding react-router into the fold, and creating some routes and dummy placeholder pages which we will develop along the way (after we do the post on prototyping the screens, of course).

MADCAP IDEA PART 2 : ADDING DI/IOC TO THE CLIENT SIDE FRONT END WEB SITE

 

Last Time

Last time we built a bare bones react/webpack/babel/typescript/bootstrap web site. You can read that post here should you wish to: https://sachabarbs.wordpress.com/2017/05/15/madcap-idea-part-1-start-of-the-client-side-portion-of-the-web-site/

 

PreAmble

This post will be about adding DI/IOC to the bare bones, no frills client portion of the web site that we built last time. Just as a reminder this is part of my ongoing set of posts which I talk about here :

https://sachabarbs.wordpress.com/2017/05/01/madcap-idea/, where we will be building up to a point where we have a full app using lots of different stuff, such as these

  • WebPack
  • React.js
  • React Router
  • TypeScript
  • Babel.js
  • Akka
  • Scala
  • Play (Scala Http Stack)
  • MySql
  • SBT
  • Kafka
  • Kafka Streams

 

What Is DI/IOC?

In software engineering, dependency injection is a technique whereby one object supplies the dependencies of another object. A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it. The service is made part of the client’s state. Passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern.

This fundamental requirement means that using values (services) produced within the class from new or static methods is prohibited. The class should accept values passed in from outside.

The intent behind dependency injection is to decouple objects to the extent that no client code has to be changed simply because an object it depends on needs to be changed to a different one.

Dependency injection is one form of the broader technique of inversion of control. Rather than low level code calling up to high level code, high level code can receive lower level code that it can call down to. This inverts the typical control pattern seen in procedural programming.

As with other forms of inversion of control, dependency injection supports the dependency inversion principle. The client delegates the responsibility of providing its dependencies to external code (the injector). The client is not allowed to call the injector code. It is the injecting code that constructs the services and calls the client to inject them. This means the client code does not need to know about the injecting code. The client does not need to know how to construct the services. The client does not need to know which actual services it is using. The client only needs to know about the intrinsic interfaces of the services because these define how the client may use the services. This separates the responsibilities of use and construction.

There are three common means for a client to accept a dependency injection: setter-, interface- and constructor-based injection. Setter and constructor injection differ mainly by when they can be used. Interface injection differs in that the dependency is given a chance to control its own injection. All require that separate construction code (the injector) take responsibility for introducing a client and its dependencies to each other

 

From https://en.wikipedia.org/wiki/Dependency_injection
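
Before we bring a container into it, the definition above boils down to code like this (a contrived TypeScript sketch, nothing from the final app):

interface Logger { log(msg: string): void; }

class ConsoleLogger implements Logger {
    log(msg: string) { console.log(msg); }
}

// OrderService does not construct its own Logger, it is handed one
class OrderService {
    constructor(private logger: Logger) { }
    placeOrder() { this.logger.log('order placed'); }
}

// the "injector" (here just us, later the container) wires the two together
new OrderService(new ConsoleLogger()).placeOrder();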

 

 

What Choices Do We Have To Do This When Working With React?

There are several choices we have when working with React, such as

  • Using react context
  • Using the module system
  • Using a 3rd party DI/IOC system (I will be covering this, in this post)

 

There is a VERY good post on these different techniques here : https://github.com/krasimir/react-in-patterns/tree/master/patterns/dependency-injection. It is a great read and I suggest you take a look to gain a better understanding of some of the more obscure areas of React (context, I am looking at you).

 

 

Which 3rd Party DI/IOC Library Did I Chose And Why?

I have decided to use the https://github.com/inversify/InversifyJS (Inversify JS) DI/IOC library. Why did I choose this one? Well, there were quite a few reasons:

 

  • It is written in TypeScript, and I wanted to use TypeScript where possible
  • It looks like a good full featured container, and reminds me of many other containers that I have worked with in .NET/Scala
  • It is quite mature and is in version 2.0
  • It has quite a bit of good press around it
  • The documentation is good
  • It did what I wanted

 

So those were my reasons. What do we have to do to get Inversify JS to work?

 

Installation

 

Node Module Installation

The first step is to make sure we have the correct node packages. To do this we just need to issue the following NPM (node package manager) command :

npm install inversify reflect-metadata --save

 

Once this is done you should have the following entries in your package.json file

 

image

 

 

Changes To The TypeScript “tsconfig.json” File

The other thing that Inversify JS needs is a couple of specific TypeScript settings. These need to go in the tsconfig.json file; the new lines since last time are these

 

image
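
For reference, per the InversifyJS documentation the crucial additions are the two decorator compiler flags (plus the reflect-metadata typings); something along these lines:

{
  "compilerOptions": {
    "types": ["reflect-metadata"],
    "lib": ["es6", "dom"],
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}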

 

 

 

 

That is all the setup we need, so let’s now have a look at using the Inversify JS DI/IOC classes, decorators etc etc

 

 

Creating Some Injectable Thing

Let's start with creating something that can be injected with other values, and can be resolved from the container.

 

In Inversify JS a type has to be marked as @injectable(), which is not something you have to do in other DI/IOC offerings. This is more than likely a requirement because JavaScript is a dynamic language, and these decorators are used to create extra metadata to ease the resolution mechanisms used by ALL IOC containers.

 

To actually get something injected into a class we need to use the @inject decorator.

 

Here is an example of a class that can be resolved from the Inversify JS IOC container, and will also have its constructor dependencies resolved by the Inversify JS IOC container.

 

import { injectable, inject } from "inversify";
import { TYPES } from "../types";

@injectable()
export class Foo {

    private _num: number;

    constructor(@inject(TYPES.SomeNumber) num: number) {
        this._num = num;
    }

    getNum() {
        return this._num * 2;
    }

}

 

You can see that we also use some TYPES. What are these? Let's take a look.

export const TYPES = {
    Foo: Symbol("Foo"),
    SomeNumber: Symbol("SomeNumber")
};

 

It can be seen that this TYPES constant just offers us a way of using Symbols as runtime identifiers for our dependencies.

 

Ok, so now that we have a class that expects to have its dependencies satisfied from the Inversify JS IOC container, and is itself resolvable, let's see how we can configure the container.

 

Creating The Container

I have chosen to create a singleton object for my container, which looks like this

import "reflect-metadata";
import { Container } from "inversify";
import { TYPES } from "../types";
import { Foo } from "../domain/Foo";

export class ContainerOperations {
    private static instance: ContainerOperations;
    private _container:Container = new Container();

    private constructor() {
        
    }

    static getInstance() {
        if (!ContainerOperations.instance) {
            ContainerOperations.instance = new ContainerOperations();
            ContainerOperations.instance.createInversifyContainer();
        }
        return ContainerOperations.instance;
    }

    private createInversifyContainer() {
        this.container.bind<number>(TYPES.SomeNumber).toConstantValue(22);
        this.container.bind<Foo>(TYPES.Foo).to(Foo);
    }

    public get container(): Container {
        return this._container;
    }
}

 

There are a couple of things to note in the above code:

  • We import “reflect-metadata”, “Container” and “TYPES”
  • We have our singleton that simply wraps the Inversify JS IOC container
  • We configure the container registrations (“bindings” in Inversify JS speak)

 

And that is all there is to that part. So all that is left is to actually resolve something from the container. We will look at that next.

 

Resolving Something From The Container

 

As I am using React, I will likely be using the Inversify JS IOC container to assist me with creating the props for the React components. That is not strictly relevant to this discussion, so let's just see how we can resolve an instance of the IOC registered Foo class using Inversify JS.

 

We do this as follows:

import { Foo } from "./domain/Foo";
import { TYPES } from "./types";
import { ContainerOperations } from "./ioc/ContainerOperations"; 

let foo = ContainerOperations.getInstance().container.get<Foo>(TYPES.Foo);

 

It can be seen that we simply make use of the singleton (that wraps the container) to resolve our Foo class. Happy days.
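
And since the plan is to use the container to feed props to React components, the end result will look something like this (FooView is a hypothetical component, purely a sketch):

import * as React from "react";
import * as ReactDOM from "react-dom";
import { Foo } from "./domain/Foo";
import { TYPES } from "./types";
import { ContainerOperations } from "./ioc/ContainerOperations";

interface FooViewProps { foo: Foo; }

class FooView extends React.Component<FooViewProps, undefined> {
    render() {
        return <div>num is : {this.props.foo.getNum()}</div>
    }
}

// resolve the service once, then hand it to the component as a prop
const foo = ContainerOperations.getInstance().container.get<Foo>(TYPES.Foo);
ReactDOM.render(<FooView foo={foo}/>, document.getElementById('root'));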

 

Conclusion

I have to say I did struggle a bit with getting Inversify JS up and running in my project. But I also have to say that I asked a question on the Inversify forum, and the author, Remo Jansen, was absolutely brilliant in helping me get my stuff to run. To the point where I pointed him at my GitHub repo, and he looked at it, got it to work, and sent me a pull request.

 

Remo I tip my hat to you sir, top library, top fella. And as promised I owe you that beer

 

So once it was installed, I found it very easy to work with; it soon felt like many other IOC frameworks I have used (Ninject, Funq, Munq, Castle, Autofac, Unity, MEF, take your pick). I was very happy with the results.

 

As I previously stated I will be continuing to write posts which will be tracked on Trello : https://trello.com/b/F4ykCOOM/kafka-play-akka-react-webpack-tasks

MadCap Idea part 1 : Start of the Client side portion of the web site

 

PreAmble

This post will be about building the bare bones, no frills client portion of the web site that is part of my ongoing (well, this is the first, so ongoing after this) set of posts which I talk about here :

 

https://sachabarbs.wordpress.com/2017/05/01/madcap-idea/, where we will be building up to a point where we have a full app using lots of different stuff, such as these

  • WebPack
  • React.js
  • React Router
  • TypeScript
  • Babel.js
  • Akka
  • Scala
  • Play (Scala Http Stack)
  • MySql
  • SBT
  • Kafka
  • Kafka Streams

 

 

Introduction

So let me just apologize for how long this one has taken to put together; I never envisaged that this post would take me quite as long as it has. That said, it has only taken 5-6 days where I have spent a maximum of 2 hours on it, and when I started I had a VERY rough idea of how webpack worked and what it did, but I had NEVER tried to create a webpack project from scratch. So not so bad in the end; I am fairly happy with the results.

 

Where is the code?

If you prefer to just have a look at the end result you can see that here : https://github.com/sachabarber/MadCapIdea

 

What did I want to get to work VS what is working?

Before I started this post/code I had a set list of requirements in mind, which I will show below, along with whether I managed to get each feature to work or not.

  • I wanted to use webpack to manage the build : Yes
  • I wanted to use TypeScript, transpiled via Babel to regular JavaScript : Yes
  • I wanted 3rd party libraries to be used with their typing information : No
  • I wanted to be able to bundle my eventually transpiled JavaScript code into a single bundle : Yes
  • I wanted to be able to use SCSS/SASS for my CSS needs and have them transpiled to CSS : Yes
  • I wanted to be able to bundle my eventually transpiled CSS code into a single bundle : Yes
  • I wanted to be able to import/export stuff (JS / CSS / files etc etc) using ES6 modules : Yes
  • I wanted to be able to trace back from the minified JavaScript bundle to my original TypeScript via SourceMaps : Yes
  • I wanted to be able to use a fully SourceMap enabled DEVELOP version of my webpack setup : Yes
  • I wanted to be able to run a streamlined (minification, no comments, no console.log, no SourceMaps) PRODUCTION version of my webpack setup : Yes
  • I wanted to be able to use React : Yes
  • I wanted to be able to use Bootstrap-React : Yes
  • I wanted to have the option to use jQuery/Lodash as I would in a simple standard JavaScript project, ie as “$” and “_” respectively : Yes

 

As you can see, I did actually manage to get ALL of this to work, with the one exception of the typings for 3rd party libraries. The code still works at runtime, but there is just something hinky going on outside of runtime.

 

<rant>

I would just like to spend a moment ranting about just how much misinformation is out there in the whole module/TypeScript/webpack space. I must have read about 100 posts, all with different setups, all with different tsconfig.json files, all suggesting different webpack setups. On one hand, my god, webpack/TypeScript are cool, but you have to be VERY careful what you apply. If you get your tsconfig.json into an invalid state, you just may find that editing TypeScript no longer works inside Visual Studio.

 

I have lost track of just how many different approaches I took to try and resolve the unknown module issue in TSX (TypeScript JSX React) files. Even the official walkthrough on the TypeScript.org web site doesn’t work for me. Things I tried and failed at were

  • Including typings in my tsconfig.json
  • Include reference headers in my TypeScript files
  • Including a reference.ts file for 3rd party typings
  • Messing about with moving the @types/xxxxFramework typings packages into full blown NPM dependencies rather than NPM devDependencies
  • Various module settings inside of tsconfig.json

All failed, so if anyone out there is a TypeScript / React / WebPack guru, please let me know what I am doing wrong. The funny thing is that EVERYTHING is 100% fine at runtime.

</rant>

 

Ah, that feels better. Anyway, now that that is out of my system, let's continue shall we….

 

 

Webpack fundamentals

So what exactly is webpack? I think this image from the webpack web site https://webpack.js.org/ does a fairly good job of describing at a glance what webpack is all about

 

image

 

So clear? No? Ok, let's try some words as well

 

  • Webpack at its heart is a bundler which is able to offer module support, and is able to bundle a lot of different things into bundles
  • Webpack is able to bundle lots of things via a technique called “loaders”; loaders can be piped one to the next (just like a command line pipe)
  • Webpack also offers module support for
    • AMD
    • CommonJS
    • ES6 modules
  • Webpack comes with SourceMap support (one of my favorite things ever, maybe even better than ice cream, but no where near as good as BBQ food and beers)
  • Webpack comes with inbuilt minification support (thanks to Uglify.js : https://www.npmjs.com/package/uglifyjs)
  • Webpack works seamlessly with NPM (Node package manager)
  • Webpack is able to watch your files and produce new packages based on the diff of what you edited compared to what was previously built
  • Webpack supports the idea of base/different configs such that you may have different environment configs DEV|PROD (typically you want loads of debugging aids in dev)

 

So in a nutshell, that is what webpack is all about. We will dive into some of the sub areas in a bit more detail below, before we examine the actual use cases that I set out to solve.

 

This may all seem a bit overwhelming, but with webpack it mainly boils down to a config file (typically called webpack.config.js). Here is a minimal example

const { resolve } = require('path');

const webpack = require('webpack');

// plugins
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = (env) => {

  return {
    context: resolve('src'),
    entry: {
      app: './main.ts'
    },
    output: {
      filename: '[name].[hash].js',
      path: resolve('dist'),
      // Include comments with information about the modules.
      pathinfo: true,
    },

    resolve: {
        // webpack 2 no longer wants (or accepts) the old '' entry here
        extensions: ['.js', '.ts', '.tsx']
    },

    devtool: 'cheap-module-source-map',

    module: {
      loaders: [
        { test: /\.tsx?$/, loaders: [ 'awesome-typescript-loader' ], exclude: /node_modules/ }
      ],
    },

    plugins: [

      new HtmlWebpackPlugin({
        template: resolve('src','index.html')
      })

    ]

  }
};

 

We will be diving into this, and a lot more within this post.

 

Node

As stated Node/NPM is a fairly vital part of working with webpack, so you will need to ensure you have done the following as a minimum

  • Installed node
  • Installed NPM
  • Installed webpack globally : npm install webpack -g

 

Most of the stuff I talk about in this post requires installing via NPM, but you have a copy of all the requirements inside the package.json file, which at the time of writing this post looked like this

 

{
  "name": "task1webpackconfig",
  "version": "1.0.0",
  "description": "webpack 2 + TypeScript 2 + Babel example",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/sachabarber/MadCapIdea.git"
  },
  "keywords": [
    "babel",
    "typescript",
    "webpack",
    "bundling",
    "javascript",
    "npm"
  ],
  "author": "sacha barber",
  "homepage": "https://github.com/sachabarber/MadCapIdea#readme",
  "dependencies": {
    "bootstrap": "^3.3.7",
    "jquery": "^3.2.1",
    "lodash": "^4.17.4",
    "react": "^15.5.4",
    "react-bootstrap": "^0.31.0",
    "react-dom": "^15.5.4",
    "webpack": "^2.5.0",
    "webpack-merge": "^4.1.0"
  },
  "devDependencies": {
    "@types/jquery": "^2.0.43",
    "@types/lodash": "^4.14.63",
    "@types/react": "^15.0.24",
    "@types/react-dom": "^15.5.0",
    "awesome-typescript-loader": "^3.1.3",
    "babel-core": "^6.24.1",
    "babel-loader": "^7.0.0",
    "babel-preset-es2015": "^6.24.1",
    "babel-preset-es2015-native-modules": "^6.9.4",
    "babel-preset-react": "^6.24.1",
    "css-loader": "^0.28.1",
    "extract-text-webpack-plugin": "^2.1.0",
    "html-webpack-plugin": "^2.28.0",
    "node-sass": "^4.5.2",
    "on-build-webpack": "^0.1.0",
    "sass-loader": "^6.0.3",
    "source-map-loader": "^0.2.1",
    "typescript": "^2.3.2",
    "webpack": "^2.4.1"
  },
  "scripts": {
    "build-dev": "webpack -d --config webpack.develop.js",
    "build-prod": "webpack --config webpack.production.js"
  }
}
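
The two entries under scripts are how the bundles actually get built; from the project root it is just

npm run build-dev
npm run build-prod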

 

 

 

Loaders

Loaders are probably the MOST important webpack concept to learn. There is practically a loader for EVERYTHING. But what exactly is a loader?

Well, quite simply, a loader is a way to take some source file contents and bundle them up into the final artifact. However, things can get more sophisticated, as some loaders are also able to transpile (the act of converting code written in one language into another, say TypeScript -> JavaScript, or ES6 JavaScript -> ES5 JavaScript).

 

Loaders may also be piped together, where the declared loaders run from right most to left most (or bottom to top, if you have them over multiple lines). This is EXTREMELY powerful, as it enables this sort of workflow in the demo code

  • Write code in TypeScript (using good stuff like classes (ok ES6 has those but you get me), interfaces, async-await etc etc)
  • Have that run through Babel.js (which brings future JS features to you by converting your future JS into JS that runs in browsers now)
  • Finally end up with plain old JS that is compatible with today's browsers (they will all catch up one day; actually they won't, so yeah, Babel.js is here to help)

Loaders are not just for JS; they can be used for CSS/images/fonts, all sorts of things.
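
As a concrete example of piping, this is the shape a chained rule takes in webpack 2 syntax (style-loader is not in this project's package.json, it is just the classic example):

// for a .scss file: sass-loader runs first, then css-loader, then style-loader
{
    test: /\.scss$/,
    use: ['style-loader', 'css-loader', 'sass-loader']
}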

 

We will see examples on this stuff when we get into the guts of things

Code dissection

In this section we will dissect the code contained at the github repo, and talk through all my initial requirements and see how they ended up being implemented

 

Bundles

One of the main reasons to want to use webpack is for its bundling abilities, where I wanted to be able to bundle the following things

  • TypeScript, which is transpiled to JavaScript (thanks to a TypeScript loader)
  • SCSS/SASS/CSS (thanks to a Sass loader)
  • Images (thanks to a URL loader)

So that is what we are trying to bundle, but there are a few things that need to be done to make that happen. So let's start with the loaders (I will be covering images and fonts later; for now let's just talk about JavaScript and CSS bundling).

 

JavaScript Bundling

As I say, I wanted the option to use TypeScript or regular JavaScript, and I also wanted to be able to use SASS or regular CSS. So we start with these loaders, which will traverse the source code, find all the relevant files (see the little regex that's used to find the files), and then bundle those files.

 

let _ = require('lodash');
let webpack = require('webpack');
let path = require('path');
let fs = require("fs");
let WebpackOnBuildPlugin = require('on-build-webpack');
let ExtractTextPlugin = require('extract-text-webpack-plugin');
let HtmlWebpackPlugin = require('html-webpack-plugin');

let babelOptions = {
    "presets": ["es2015", "react"]
};

function isVendor(module) {
    return module.context && module.context.indexOf('node_modules') !== -1;
}

let entries = {
    index: './src/index.tsx'

};

let buildDir = path.resolve(__dirname, 'dist');

module.exports = {

    context: __dirname,

    entry: entries,

    output: {
        filename: '[name].bundle.[hash].js',
        path: buildDir
    },

    
    
    resolve: {
        extensions: [".tsx", ".ts", ".js", ".jsx"],
        modules: [path.resolve(__dirname, "src"), "node_modules"]
    },

    plugins: [

       

        // creates a common vendor js file for libraries in node_modules
        new webpack.optimize.CommonsChunkPlugin({
            names: ['vendor'],
            minChunks: function (module, count) {
                return isVendor(module);
            }
        }),

        // creates a common vendor js file for libraries in node_modules
        new webpack.optimize.CommonsChunkPlugin({
            name: "commons",
            chunks: _.keys(entries),
            minChunks: function (module, count) {
                return !isVendor(module) && count > 1;
            }
        }),


        //scss/sass files extracted to common css bundle
        new ExtractTextPlugin({
            filename: '[name].bundle.css',
            allChunks: true,
        }),

        new HtmlWebpackPlugin({
            filename: 'index.html',
            template: 'template.html',
        })
    ],

    module: {
        rules: [
            // All files with a '.ts' or '.tsx' extension will be handled by 'awesome-typescript-loader' 1st 
            // then 'babel-loader'
            // NOTE : loaders run right to left (think of them as a cmd line pipe)
            {
                test: /\.ts(x?)$/,
                exclude: /node_modules/,
                use: [
                  {
                      loader: 'babel-loader',
                      options: babelOptions
                  },
                  {
                      loader: 'awesome-typescript-loader'
                  }
                ]
            },


            // All files with a .css extension will be handled by 'css-loader'
            {
                test: /\.css$/,
                loader: ExtractTextPlugin.extract(['css-loader?importLoaders=1']),
            },

            // All files with a .scss|.sass extension will be handled by 'sass-loader'
            {
                test: /\.(sass|scss)$/,
                loader: ExtractTextPlugin.extract(['css-loader', 'sass-loader'])
            },


            // All files with a '.js' extension will be handled by 'babel-loader'.
            {
                test: /\.js$/,
                exclude: /node_modules/,
                use: [
                  {
                      loader: 'babel-loader',
                      options: babelOptions
                  }
                ]
            },


            // All output '.js' files will have any sourcemaps re-processed by 'source-map-loader'.
            {
                enforce: "pre",
                test: /\.js$/,
                loader: "source-map-loader"
            }
        ]
    }
};

 

 

The bulk of the code above is made up of loaders. But there are a few things above that deserve special call outs, namely

 

Resolve

This tells webpack which file extensions it should try to resolve, and where to look for modules

 

resolve: {
	extensions: [".tsx", ".ts", ".js", ".jsx"],
	modules: [path.resolve(__dirname, "src"), "node_modules"]
},

 

Entry

These are the main entry points into the code. So for me this is the index.tsx file.

 

let entries = {
    index: './src/index.tsx'

};


entry: entries,

 

 

Output

This is where you tell webpack what the name of your final bundles will be; these bundles will contain all the code files that match the regex tests that were set up in the loaders. It is VERY important to note that ALL the files that match a loader regex will become part of the bundle file.

 

output: {
	filename: '[name].bundle.[hash].js',
	path: buildDir
},

 

 

TypeScript

I wanted the option to be able to use TypeScript IF I WANTED to. So to do this we need a webpack loader; there are a couple of TypeScript loaders for webpack, but I went with awesome-typescript-loader. I also want to run my TypeScript files through Babel. We will get onto what Babel brings to the party in just a second, but for now just understand that TypeScript and Babel both act as transpilers: they take code written using features that are not available in regular JavaScript and transpile it into regular JavaScript that today's browsers understand. Obviously, since the final product of both TypeScript and Babel is regular JavaScript, we also need a loader for that too.

 

Here is my TypeScript/Babel/JavaScript setup.

// All files with a '.ts' or '.tsx' extension will be handled by 'awesome-typescript-loader' 1st 
// then 'babel-loader'
// NOTE : loaders run right to left (think of them as a cmd line pipe)
{
	test: /\.ts(x?)$/,
	exclude: /node_modules/,
	use: [
	  {
		  loader: 'babel-loader',
		  options: babelOptions
	  },
	  {
		  loader: 'awesome-typescript-loader'
	  }
	]
},

// All files with a '.js' extension will be handled by 'babel-loader'.
{
	test: /\.js$/,
	exclude: /node_modules/,
	use: [
	  {
		  loader: 'babel-loader',
		  options: babelOptions
	  }
	]
}

 

The other thing you need when working with TypeScript is a tsconfig.json file. Here is mine; you can see that I have configured it to be React friendly.

 

{
  "compilerOptions": {
    "allowSyntheticDefaultImports": true,
    "moduleResolution": "node",
    "outDir": "./dist/",
    "sourceMap": true,
    "noImplicitAny": false,
    "module": "es2015",
    "target": "es5",
    "jsx": "react",
    "types" : ["jquery", "lodash", "react", "react-dom"]
  },
    "include": [
        "./src/**/*"
    ]
}

 

NOTE: You need to be a bit careful with this file; if you mess it up, you may find yourself in quite a sad position where you can no longer edit TypeScript files in Visual Studio.

 

Babel

I just showed you the Babel loader, so I won't repeat that. But just what is this Babel you speak of? Well, here is the blurb from the Babel.js website:

 

Babel has support for the latest version of JavaScript through syntax transformers. These plugins allow you to use new syntax, right now without waiting for browser support.

 

This is the sort of stuff that Babel allows you to write right now.

 

image
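To give a flavour, here are a few illustrative examples (a made-up sample, not the original screenshot) of the sort of ES2015 syntax Babel lets you use today:

// arrow functions
const doubled = [1, 2, 3].map(n => n * 2);

// template literals
const name = "sacha";
console.log(`hello ${name}, doubled = ${doubled}`);

// destructuring + default parameters
const [first, ...rest] = doubled;
function greet(who = "world") { return `hi ${who}`; }

// classes
class Point {
    constructor(x, y) { this.x = x; this.y = y; }
}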

 

The only other thing you need for Babel is a little config file called .babelrc, which for me just contains this:

 

{ "presets": ["es2015","react"] }

 

And that is pretty much all there is to it; you can now use these features in your JavaScript. Neato.

 

SCSS

I don’t mind CSS, but these days there are better tools out there, namely LESS/SASS. What these tools offer you are things like this:

  • Modular CSS (multiple files in a hierarchy)
  • Nested CSS rules
  • Variables
  • etc etc
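For example, a tiny illustrative SCSS snippet (made up for this post) showing variables and nesting:

$brand-color: #336699;

.toolbar {
    background: $brand-color;

    // nested rule: compiles to ".toolbar .button"
    .button {
        color: darken($brand-color, 20%);
    }
}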

 

So it seems strange NOT to want to work with this. As with most things in webpack, it starts with a loader, where we have support for SASS and also plain CSS. Remember loaders run from right to left, so in the case of the SASS/SCSS file match, the files will 1st run through the sass-loader then the css-loader. However, plain old CSS files just go through the css-loader.

 

// All files with a .css extension will be handled by 'css-loader'
{
	test: /\.css$/,
	loader: ExtractTextPlugin.extract(['css-loader?importLoaders=1']),
},

// All files with a .scss|.sass extension will be handled by 'sass-loader'
{
	test: /\.(sass|scss)$/,
	loader: ExtractTextPlugin.extract(['css-loader', 'sass-loader'])
},

 

The other part of the puzzle to get CSS to work is the ExtractTextPlugin that you can see mentioned in the loader sections just above. What the ExtractTextPlugin does is extract all the text from the individual CSS files (yep, that's right, SASS/SCSS is transpiled to regular CSS) into a single CSS file.

 

//scss/sass files extracted to common css bundle
new ExtractTextPlugin({
    filename: '[name].bundle.[hash].css',
    allChunks: true,
}),

 

Don’t be too scared by the [name] and [hash] stuff just yet, we will get on to that later.

 

 

Bootstrap

So for those of you living under a rock, there is a great library (started by Twitter engineers) to help create responsive, uniform looking sites. This library is called Twitter Bootstrap. It comes with various components and CSS, and uses typography (an icon font) for its icons.

 

Now Bootstrap is great, but I wanted to use React, and React has the concept of a virtual DOM, and generally speaking tries to work with its own virtual DOM rather than the real DOM. This has led to a specialized version of Bootstrap specifically for use with React. Naturally I needed to get that to work. It is called React-Bootstrap.

 

So first we need to install it via NPM (shown below), and then we just need to worry about a few small things.
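The install is the usual NPM affair (assuming the standard package name):

npm install react-bootstrap --save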

 

Images

These are loaded by (surprise, surprise) more webpack loader sections

{ 
	test: /\.png$/, 
	loader: "url-loader?limit=100000" 
},

{ 
	test: /\.jpg$/, 
	loader: "file-loader" 
},

{
	test: /\.svg(\?.*)?$/,
	loader: 'url-loader?prefix=fonts/&name=fonts/[name].[ext]&limit=10000&mimetype=image/svg+xml'
},

 

Fonts

Fonts are also loaded by more webpack loaders

{
	test: /\.woff(\?.*)?$/,
	loader: 'url-loader?prefix=fonts/&name=fonts/[name].[ext]&limit=10000&mimetype=application/font-woff'
},

{
	test: /\.woff2(\?.*)?$/,
	loader: 'url-loader?prefix=fonts/&name=fonts/[name].[ext]&limit=10000&mimetype=application/font-woff2'
},

{
	test: /\.ttf(\?.*)?$/,
	loader: 'url-loader?prefix=fonts/&name=fonts/[name].[ext]&limit=10000&mimetype=application/octet-stream'
},

{
	test: /\.eot(\?.*)?$/, loader: 'file-loader?prefix=fonts/&name=fonts/[name].[ext]'
},

 

 

Css

So once you have all the other stuff done you can proceed to just use react-bootstrap. Here is a small example from one of my TypeScript files

 

import * as React from "react";
import * as ReactDOM from "react-dom";
import { Button } from 'react-bootstrap';

import 'bootstrap/dist/css/bootstrap.css';

export class Hello extends React.Component<HelloProps, undefined> {
    render() {
        // NOTE : the JSX must start on the same line as the return keyword
        // (or be wrapped in parentheses), otherwise JavaScript's automatic
        // semicolon insertion would return undefined
        return (
            <div>
                <Button bsStyle="primary" bsSize="large">Large button</Button>
                <h1 id="helloText">Hello from {this.props.compiler} and {this.props.framework}!</h1>
            </div>
        );
    }
}

 

Which when rendered looks like this:

image

 

 

 

Lodash

Lodash is a kind of new underscore-esque library, which offers many convenience methods on collections. It is like the LINQ to objects of the JavaScript world. To work with Lodash you can simply import it as follows:

 

import * as _ from "lodash";

 

Which we could verify quite simply with something like this, where the image below shows me finding the original line in my TypeScript file within the SourceMap that was sent to the browser, and putting a breakpoint on the line I wanted to debug:

 

console.log(_.VERSION);

image
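And since the whole point of Lodash is its collection helpers, here are a couple of illustrative examples (made up for this post) of the LINQ-esque methods on offer:

import * as _ from "lodash";

const orders = [
    { id: 1, status: "open" },
    { id: 2, status: "done" },
    { id: 3, status: "open" }
];

// group by a property (think LINQ GroupBy)
const byStatus = _.groupBy(orders, o => o.status);

// split into pages of 2 (handy for batching)
const pages = _.chunk(orders, 2);

console.log(byStatus, pages);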

 

 

JQuery

Ah, the blessed jQuery; love it or hate it, there is certainly a lot of it on the web, and at times it is still very convenient, so we should really allow for it too. The thing with jQuery is that it wants to be available as the global variable $, or via a property on window. Is this even possible with webpack? Well yes it is, we simply add the following bit of config within the webpack plugins section:

 

//The ProvidePlugin makes a module available as a variable in every other
//module required by webpack
new webpack.ProvidePlugin({
    $: "jquery",
    jQuery: "jquery",
    "window.jQuery": "jquery"
}),

 

That then allows us to use jQuery like this without having to ever import it anywhere, it's just automatically globally available:

 

console.log("jquery");
console.log($);
console.log($.fn.jquery);

 

Again I am using the emitted SourceMap to find my original TypeScript code

 

image

 

 

Source Map Support

I also wanted to be able to debug my ORIGINAL TypeScript/JavaScript, so using SourceMaps WAS A MUST. By using source maps in webpack I am able to serve the transpiled/bundled code (but not minified, I only do that in production mode), and still view the original code and set breakpoints in it.

 

This is the JavaScript bundle that webpack sent

 

image

 

And here is me inside the SourceMap file, see how I am in the original content here (ie the code I wrote)

 

image

 

 

This is enabled via the webpack setting

devtool: "source-map"

 

ES6 style code and modules

Another feature of using webpack is that you may use AMD/CommonJS modules (or, if you include TypeScript/Babel, ES6 modules). I am using TypeScript and Babel so I went with ES6 style modules, which means I can export/import things like this:

import * as React from "react";
import * as ReactDOM from "react-dom";
import * as _ from "lodash";
import { Button } from 'react-bootstrap';

import 'bootstrap/dist/css/bootstrap.css';

export interface HelloProps { compiler: string; framework: string; }

export class Foo {

    private _num: number;

    constructor(num: number) {
        this._num = num;
    }

    getNum() {
        return this._num * 2;
    }

}
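And a consumer in another file can then pull in just what it needs. This is a hypothetical consumer, assuming the exports above live in a file called foo.tsx:

import { Foo, HelloProps } from "./foo";

const foo = new Foo(21);
console.log(foo.getNum()); // 42

const props: HelloProps = { compiler: "TypeScript", framework: "React" };
console.log(props.compiler);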

 

 

Html Plugin

Ok, I hope you all recall that a while ago I promised to explain what was meant by [name] and [hash] in my webpack config.

  • [name] : simply gets replaced by the current bundle name
  • [hash] : produces a hash of the bundle

I think [name] is self-explanatory, but [hash] is an interesting one. The idea of producing a hash for your bundles is great: if the file contents change, the hash produced is different, so the browser cache is invalidated.

 

That’s cool. But hang on, how do we normally include script/css references in our HTML page? In script/head tags, right? And if the hash is changing all the time, how can we possibly link to files where we don’t know what the hash will be?

 

Luckily we just use the HtmlWebpackPlugin, which does a great job of taking a template for the HTML we want to end up with, inserting the final generated bundle references into a copy of the template, and copying that final HTML file to the desired output directory.

 

So for me I have this webpack config

new HtmlWebpackPlugin({
    filename: 'index.html',
    template: 'template.html',
})

 

Where my template.html file looks like this

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8" />
        <title>Hello React!</title>
    </head>
    <body>
        <div id="example"></div>
        <!-- Main -->
    </body>
</html>

 

And once webpack / HtmlWebpackPlugin have run their magic, the resultant HTML (ie final HTML file) looks like this:

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8" />
        <title>Hello React!</title>
        <link href="vendor.bundle.b8e27b8c09179b83b9b1.css" rel="stylesheet">
        <link href="indexCss.bundle.b8e27b8c09179b83b9b1.css" rel="stylesheet">
    </head>
    <body>
        <div id="example"></div>
        <!-- Main -->
        <script type="text/javascript" src="vendor.bundle.b8e27b8c09179b83b9b1.js"></script>
        <script type="text/javascript" src="index.bundle.b8e27b8c09179b83b9b1.js"></script>
    </body>
</html>

 

See how it just inserts the CSS/JS bundle references for me; the hashing for the bundles now seamlessly happens and I don’t have to worry about it ever again.

 

Separate Configs

The final thing I wanted to cover was how to have different DEV/PROD webpack configs. Up until now I have just been showing you a base config file, but we can use webpack-merge to create bespoke webpack config files for specific environments.

For example here is my Develop webpack file (which is the same as the base config file)

 

let commonConfig = require('./webpack.config.js');
let webpack = require('webpack');
let Merge = require('webpack-merge');

module.exports = function (env) {
    return Merge(commonConfig, {})
}

 

Whilst this is my Production webpack config file where I want

  • No SourceMap files
  • No console.log
  • No comments
  • Minification

 

let commonConfig = require('./webpack.config.js');
let webpack = require('webpack');
let Merge = require('webpack-merge');

module.exports = function (env) {
    return Merge(commonConfig, {
        plugins: [
          new webpack.LoaderOptionsPlugin({
              minimize: true,
              debug: false
          }),
          new webpack.optimize.UglifyJsPlugin({
              // Eliminate comments
              comments: false,
              beautify: false,
              mangle: {
                  screw_ie8: true,
                  keep_fnames: true
              },
              compress: {
                  screw_ie8: true,

                  // remove warnings
                  warnings: false,

                  // Drop console statements
                  drop_console: true
              },
              sourceMap: false
          })
        ]
    })
}

 

 

 

Conclusion

So that is all I wanted to say this time. As I stated in the 1st post, I will be continuing to write posts, which will be tracked on Trello: https://trello.com/b/F4ykCOOM/kafka-play-akka-react-webpack-tasks

MADCAP IDEA

INTRODUCTION

So this is somewhat of a strange post, or should I say what will hopefully become a decent set of posts. The thing is, I have no idea how this will end up really,
as I have not embarked on a mission like this before. So please bear with me.

SO JUST WHAT IS IT THAT I AM TALKING ABOUT?

Well, the way I typically like to run my blog/code project articles/life is that I pick a technology and
just concentrate on it for a while and write about it. This time however I have decided to treat my blogging/articles as a bit more
of a work-like escapade, where I will be assigning mini tasks (think JIRA tickets) to myself, some of which I know nothing about, that should/could in reality be treated
as “spikes” and end up in complete dead ends. It is about the journey after all.

I WILL have a complete list of “tickets” (AKA tasks), which may or may not be completely fleshed out in advance. I will stick to “DOING” those “tickets”,
and there is an end goal in sight, which I will outline in a top level story. I cannot however commit to any timelines; this is as much my journey as it is yours (in fact I mainly
do this stuff for myself, and would highly recommend it as a way of self improvement). That said, I hope people get something out of the series of posts that WILL UNDOUBTEDLY
come from this idea.

You can think of the tasks as “technical tasks” which make up the high level “stories” (in JIRA speak).

This may come across a bit weird, but the technologies I plan to cover in the final product amount to pretty much a full app, so it’s a little hard to describe in
one blog post/article. So I am hoping that by breaking it down into small chunks, each story/sub task will be a useful learning experience in
its own right.

SOURCE CONTROL : ORGANISATION

The idea is that each story/sub task will be a folder/subfolder which is completely independent of other stories/sub tasks (up until the final goal, which is of course
a working showcase that demonstrates it all working together).

NOTE TO SELF : I am going to try really hard to do this (aren’t we Sacha), as I think one topic -> one source control repo (more than likely GIT) is a good way to
correlate ideas/words on the post/article.

 

WHAT DO I WANT TO WRITE


In essence I want to write a very (pardon the pun) uber simple “uber” type app, where there are the following functional requirements

  • There should be a web interface that a client can use. Clients may be a “driver” or a “pickup client” requiring a delivery
  • There should be a web interface that a “pickup client” can use, that shows a “pickup client” location on a map, which the “pickup client” chooses.
    The “pickup client” may request a pickup job, in which case “drivers” that are in the area bid for a job.
    The “pickup client” location should be visible to a “driver” on a map
  • A “driver” may bid for a “pickup client” job, and the bidding “driver(s)” location should be visible to the “pickup client”.
  • The acceptance of the bidding “driver” is down to the “pickup client”
  • Once a “pickup client” accepts a “driver” ONLY the assigned “driver(s)” current map position will be shown to the “pickup client”
  • When a “pickup client” is happy that they have been picked up by a “driver”, the “pickup client” may rate the driver from 1-10, and the “driver” may also rate the “pickup client” from 1-10.
  • The rating should only be available once a “pickup client” has marked a job as “completed”
  • A “driver” or a “pickup client” should ALWAYS be able to view their previous ratings. 

Whilst this may sound like child’s play to a lot of you (me included, if I stuck to using simple CRUD operations), I just want to point out that this app is meant as a learning experience, so I will not be using a simple SignalR Hub and a couple of database tables.

I intend to write this project using a completely different set of technologies from the norm. Some of the technology choices could easily scale to hundreds of thousands of requests per second (Kafka has your back here)

POTENTIAL TECHNOLOGIES INVOLVED

  • WebPack
  • React.js
  • React Router
  • TypeScript
  • Babel.js
  • Akka
  • Scala
  • Play (Scala Http Stack)
  • MySql
  • SBT
  • Kafka
  • Kafka Streams

Some of this will undoubtedly be covered in other blogs (such as React/Webpack), however some of it I am hoping will be quite novel/insightful material.

Who knows, though, there may be some of you out there that haven’t heard of Webpack, so some of that may even be new; we shall see, hopefully there is enough stuff for everyone.

STORIES

I will maintain a list of stories and their sub tasks using Trello here : https://trello.com/b/F4ykCOOM/kafka-play-akka-react-webpack-tasks which at the time of writing this post was the items shown below

 

TOP LEVEL STORIES

Web Site

Play Back End

Kafka Streams

Create test app that tests out listening to any single Kafka publisher JSON topic, and creates streams app from it, and pushes out to an output topic

 

 

HOW WILL PROGRESS BE TRACKED

I will simply use Trello’s “Label” facility, such that done tasks will be “Green”, and there will obviously be a post/GitHub code repo folder that goes with each one.

 

CAVEATS

1. I will not be concerned with connection failures; the aim of the project is to try and create a real world like project, but not actually create an end-to-end production grade application
2. I will be treating every run as if it were the first, I will not be storing ANY permanent state (apart from ratings potentially)
3. I will be doing things at my own pace (I have 2 kids) so it comes when it comes

4. I will try and use varied technology choices, which will in places mean that there could potentially be more work required to make it production quality

 

 

 

Crossbar.io : a quick look at it

A while ago someone posted another SignalR article on www.codeproject.com, and I stated hey, you should take a look at this stuff: crossbar.io. Not liking to be someone that is not willing to die by the sword (after living by it of course), I decided to put my money where my mouth was/is, and write a small article on crossbar.io.

So what is crossbar.io? Well quite simply it is a message broker that has many language bindings, which should all be able to communicate together seamlessly.

Here is what the people behind crossbar.io have to say about their own product:

Crossbar.io is an open source networking platform for distributed and microservice applications. It implements the open Web Application Messaging Protocol (WAMP), is feature rich, scalable, robust and secure. Let Crossbar.io take care of the hard parts of messaging so you can focus on your app’s features.

I decided to take a look at this a bit more which I have written up here : https://www.codeproject.com/Articles/1183744/Crossbar-io-a-quick-look-at-it

A Look At Docker

A while ago I worked on a project that used this tech stack

  • Akka HTTP : (actually we used Spray.IO but it is practically the same thing for the purpose of this article). For those that don’t know what Akka HTTP is, it is a simple Akka based framework that is also able to expose a REST interface to communicate with the actor system
  • Cassandra database : Apache Cassandra is a free and open-source distributed database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. Cassandra offers robust support for clusters spanning multiple datacenters, with asynchronous masterless replication allowing low latency operations for all clients.

We ran Cassandra as a multi node cluster.

This was a pain to test, and we were always stepping on each other's toes; as you can imagine, running up a 5 node cluster of VMs just to satisfy each developer's own testing needs was a bit much. So we ended up with some dedicated test environments, running 5 Cassandra nodes. These were still a PITA to be honest.

This got me thinking: perhaps I could use Docker to help me out here, perhaps I could run Cassandra in a Docker container, hell, perhaps I could even run my own code that uses Cassandra in a Docker container, and just point my UI at the Akka HTTP REST server running in Docker. mmmmm

I started to dig around, and of course this is entirely possible (otherwise I would not be writing this article now would I).

This is certainly not a new thing here for CodeProject, there are numerous Docker articles, but I never found one that talked about Cassandra, so I thought why not write another one.

 

Which I have just published here : https://www.codeproject.com/Articles/1175248/A-look-at-Docker

Update scheduled Quartz.Net job by monitoring App.Config

 

Introduction

So I was back in .NET land the other day at work, where I had to schedule some code to run periodically on some schedule.

The business also needed this schedule to be adjustable, so that they could turn it up when things were busier and wind it down when they were not.

This adjusting of the schedule time would be done via a setting in the App.Config, where the App.Config is monitored for changes. If there is a change, then we would look to use the new schedule value from the App.Config to run the job. Ideally the app must not go down to effect this change of job schedule time.

There are some good job/scheduling libraries out there, but for this I just wanted to use something lightweight, so I went with Quartz.net.

It’s easy to set up and use, has a fairly nice API, and supports IOC and CRON schedules. In short it fits the bill.

In a nutshell this post will simply talk about how you can adjust the schedule of an ALREADY scheduled job. There will also be some special caveats that I personally had to deal with in my requirements, which may or may not be an issue for you.

 

Some Weird Issues That I Needed To Cope With

So let me just talk about some of the issues that I had to deal with

The guts of the job code that I run on my schedule is actually writing to Azure Blob Storage and then to Azure SQL DW tables, and as such it has several writes to several components, one after another.

So the current job run MUST be allowed to complete in FULL (or fail using exception handling, that's ok too). It would not be acceptable to just stop the Quartz job while there is work in flight.

I guess some folk may be thinking of some sort of transaction here, that must either commit or rollback. Unfortunately that doesn’t work with Azure Blob Storage uploads.

So I had to think of another plan.

So here is what I came up with. I would use threading primitives namely an AutoResetEvent that would control when the Quartz.net job could be changed to use a new schedule.

If a change in the App.Config was seen, then we know that we “should” be using a new schedule time; however the scheduled job MAY have work in flight. So we need to wait for that work to complete (or fail) before we can think about swapping the Quartz.net scheduler time.

So that is what I went for. There are a few other things to be aware of, such as needing threading primitives that work with async/await code. Luckily Stephen Toub from the TPL team has done that for us : asyncautoresetevent

There is also the well known fact that the FileSystemWatcher class fires events twice : http://lmgtfy.com/?q=filesystemwatcher+firing+twice

So as we go through the code you will see how I dealt with those

The Code

Ok so now that we have talked about the problem, lets go through the code.

There are several NuGet packages I am using to make my life easier (Topshelf, Autofac, Quartz, Rx and SimpleConfig, as you will see from the code below).

So lets start with the entry point, which for me is the simple Program class shown below

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Security.Principal;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Autofac;
using SachaBarber.QuartzJobUpdate.Services;
using Topshelf;


namespace SachaBarber.QuartzJobUpdate
{
    static class Program
    {
        private static ILogger _log = null;


        [STAThread]
        public static void Main()
        {
            try
            {
                var container = ContainerOperations.Container;
                _log = container.Resolve<ILogger>();
                _log.Log("Starting");

                AppDomain.CurrentDomain.UnhandledException += AppDomainUnhandledException;
                TaskScheduler.UnobservedTaskException += TaskSchedulerUnobservedTaskException;
                Thread.CurrentPrincipal = new WindowsPrincipal(WindowsIdentity.GetCurrent());
                
                HostFactory.Run(c =>                                 
                {
                    c.Service<SomeWindowsService>(s =>                        
                    {
                        s.ConstructUsing(() => container.Resolve<SomeWindowsService>());
                        s.WhenStarted(tc => tc.Start());             
                        s.WhenStopped(tc => tc.Stop());               
                    });
                    c.RunAsLocalSystem();                            

                    c.SetDescription("Uploads Calc Payouts/Summary data into Azure blob storage for RiskStore DW ingestion");       
                    c.SetDisplayName("SachaBarber.QuartzJobUpdate");                       
                    c.SetServiceName("SachaBarber.QuartzJobUpdate");                      
                });
            }
            catch (Exception ex)
            {
                _log.Log(ex.Message);
            }
            finally
            {
                _log.Log("Closing");
            }
        }
      

        private static void AppDomainUnhandledException(object sender, UnhandledExceptionEventArgs e)
        {
            ProcessUnhandledException((Exception)e.ExceptionObject);
        }

        private static void TaskSchedulerUnobservedTaskException(object sender, UnobservedTaskExceptionEventArgs e)
        {
            ProcessUnhandledException(e.Exception);
            e.SetObserved();
        }

        private static void ProcessUnhandledException(Exception ex)
        {
            if (ex is TargetInvocationException)
            {
                ProcessUnhandledException(ex.InnerException);
                return;
            }
            _log.Log("Error");
        }
    }
}

All this does is host the actual windows service class for me using Topshelf, where the actual service class looks like this

using System;
using System.Configuration;
using System.IO;
using System.Reactive.Disposables;
using System.Reactive.Linq;
using System.Xml.Linq;
using Autofac;
using SachaBarber.QuartzJobUpdate.Async;
using SachaBarber.QuartzJobUpdate.Configuration;
using SachaBarber.QuartzJobUpdate.Jobs;
using SachaBarber.QuartzJobUpdate.Services;
//logging
using Quartz;

namespace SachaBarber.QuartzJobUpdate
{
    public class SomeWindowsService
    {
        private readonly ILogger _log;
        private readonly ISchedulingAssistanceService _schedulingAssistanceService;
        private readonly IRxSchedulerService _rxSchedulerService;
        private readonly IObservableFileSystemWatcher _observableFileSystemWatcher;
        private IScheduler _quartzScheduler;
        private readonly AsyncLock _lock = new AsyncLock();
        private readonly SerialDisposable _configWatcherDisposable = new SerialDisposable();
        private static readonly JobKey _someScheduledJobKey = new JobKey("SomeScheduledJobKey");
        private static readonly TriggerKey _someScheduledJobTriggerKey = new TriggerKey("SomeScheduledJobTriggerKey");

        public SomeWindowsService(
            ILogger log,
            ISchedulingAssistanceService schedulingAssistanceService, 
            IRxSchedulerService rxSchedulerService,
            IObservableFileSystemWatcher observableFileSystemWatcher)
        {
            _log = log;
            _schedulingAssistanceService = schedulingAssistanceService;
            _rxSchedulerService = rxSchedulerService;
            _observableFileSystemWatcher = observableFileSystemWatcher;
        }

        public void Start()
        {
            try
            {
                var ass = typeof (SomeWindowsService).Assembly;
                var configFile = $"{ass.Location}.config"; 
                CreateConfigWatcher(new FileInfo(configFile));


                _log.Log("Starting SomeWindowsService");

                _quartzScheduler = ContainerOperations.Container.Resolve<IScheduler>();
                _quartzScheduler.JobFactory = new AutofacJobFactory(ContainerOperations.Container);
                _quartzScheduler.Start();

                //create the Job
                CreateScheduledJob();
            }
            catch (JobExecutionException jeex)
            {
                _log.Log(jeex.Message);
            }
            catch (SchedulerConfigException scex)
            {
                _log.Log(scex.Message);
            }
            catch (SchedulerException sex)
            {
                _log.Log(sex.Message);
            }

        }

        public void Stop()
        {
            _log.Log("Stopping SomeWindowsService");
            _quartzScheduler?.Shutdown();
            _configWatcherDisposable.Dispose();
            _observableFileSystemWatcher.Dispose();
        }


        private void CreateConfigWatcher(FileInfo configFileInfo)
        {
            FileSystemWatcher watcher = new FileSystemWatcher();
            watcher.Path = configFileInfo.DirectoryName;
            watcher.NotifyFilter = 
                NotifyFilters.LastAccess | 
                NotifyFilters.LastWrite | 
                NotifyFilters.FileName | 
                NotifyFilters.DirectoryName;
            watcher.Filter = configFileInfo.Name;
            _observableFileSystemWatcher.SetFile(watcher);
            //FSW is notorious for firing twice see here : 
            //http://stackoverflow.com/questions/1764809/filesystemwatcher-changed-event-is-raised-twice
            //so lets use Rx to Throttle it a bit
            _configWatcherDisposable.Disposable = _observableFileSystemWatcher.Changed.SubscribeOn(
                _rxSchedulerService.TaskPool).Throttle(TimeSpan.FromMilliseconds(500)).Subscribe(
                    async x =>
                    {
                        //at this point the config has changed, start a critical section
                        using (var releaser = await _lock.LockAsync())
                        {
                            //tell current scheduled job that we need to read new config, and wait for it
                            //to signal us that we may continue
                            _log.Log($"Config file {configFileInfo.Name} has changed, attempting to read new config data");
                            _schedulingAssistanceService.RequiresNewSchedulerSetup = true;
                            _schedulingAssistanceService.SchedulerRestartGate.WaitAsync().GetAwaiter().GetResult();
                            //recreate the AzureBlobConfiguration, and recreate the scheduler using new settings
                            ConfigurationManager.RefreshSection("schedulingConfiguration");
                            var newSchedulingConfiguration = SimpleConfig.Configuration.Load<SchedulingConfiguration>();
                            _log.Log($"SchedulingConfiguration section is now : {newSchedulingConfiguration}");
                            ContainerOperations.ReInitialiseSchedulingConfiguration(newSchedulingConfiguration);
                            ReScheduleJob();
                        }
                    },
                    ex =>
                    {
                        _log.Log($"Error encountered attempting to read new config data from config file {configFileInfo.Name}");
                    });
        }

        private void CreateScheduledJob(IJobDetail existingJobDetail = null)
        {
            var azureBlobConfiguration = ContainerOperations.Container.Resolve<SchedulingConfiguration>();
            IJobDetail job = JobBuilder.Create<SomeQuartzJob>()
                    .WithIdentity(_someScheduledJobKey)
                    .Build();

            ITrigger trigger = TriggerBuilder.Create()
                .WithIdentity(_someScheduledJobTriggerKey)
                .WithSimpleSchedule(x => x
                    .RepeatForever()
                    //NOTE : the config value is named "InMins", but it is
                    //passed to WithIntervalInSeconds/AddSeconds here
                    .WithIntervalInSeconds(azureBlobConfiguration.ScheduleTimerInMins)
                )
                .StartAt(DateTimeOffset.Now.AddSeconds(azureBlobConfiguration.ScheduleTimerInMins))
                .Build();

            _quartzScheduler.ScheduleJob(job, trigger);
        }

        private void ReScheduleJob()
        {
            if (_quartzScheduler != null)
            {
                _quartzScheduler.DeleteJob(_someScheduledJobKey);
                CreateScheduledJob();
            }
        }
    }


}

There is a fair bit going on here, so let’s list some of the work this code does

  • It creates the initial Quartz.Net job and schedules it using the values from a custom config section, which are read into an object
  • It watches the config file for changes (we will go through that in a moment) and will wait on the AsyncAutoResetEvent to be signalled, at which point it will recreate the Quartz.net job

So let’s have a look at some of the small helper parts.

This is a simple Rx based file system watcher. The reason Rx is good here is that you can Throttle the events (see this post: FileSystemWatcher raises 2 events)

using System;
using System.IO;
using System.Reactive.Linq;

namespace SachaBarber.QuartzJobUpdate.Services
{
    public class ObservableFileSystemWatcher : IObservableFileSystemWatcher
    {
        private FileSystemWatcher _watcher;

        public void SetFile(FileSystemWatcher watcher)
        {
            _watcher = watcher;

            Changed = Observable
                .FromEventPattern<FileSystemEventHandler, FileSystemEventArgs>
                (h => _watcher.Changed += h, h => _watcher.Changed -= h)
                .Select(x => x.EventArgs);

            Renamed = Observable
                .FromEventPattern<RenamedEventHandler, RenamedEventArgs>
                (h => _watcher.Renamed += h, h => _watcher.Renamed -= h)
                .Select(x => x.EventArgs);

            Deleted = Observable
                .FromEventPattern<FileSystemEventHandler, FileSystemEventArgs>
                (h => _watcher.Deleted += h, h => _watcher.Deleted -= h)
                .Select(x => x.EventArgs);

            Errors = Observable
                .FromEventPattern<ErrorEventHandler, ErrorEventArgs>
                (h => _watcher.Error += h, h => _watcher.Error -= h)
                .Select(x => x.EventArgs);

            Created = Observable
                .FromEventPattern<FileSystemEventHandler, FileSystemEventArgs>
                (h => _watcher.Created += h, h => _watcher.Created -= h)
                .Select(x => x.EventArgs);

            All = Changed.Merge(Renamed).Merge(Deleted).Merge(Created);
            _watcher.EnableRaisingEvents = true;
        }

        public void Dispose()
        {
            _watcher.EnableRaisingEvents = false;
            _watcher.Dispose();
        }

        public IObservable<FileSystemEventArgs> Changed { get; private set; }
        public IObservable<RenamedEventArgs> Renamed { get; private set; }
        public IObservable<FileSystemEventArgs> Deleted { get; private set; }
        public IObservable<ErrorEventArgs> Errors { get; private set; }
        public IObservable<FileSystemEventArgs> Created { get; private set; }
        public IObservable<FileSystemEventArgs> All { get; private set; }
    }
}

And this is a small utility class that will contain the results of the custom config section, which may be read using SimpleConfig

namespace SachaBarber.QuartzJobUpdate.Configuration
{
    public class SchedulingConfiguration
    {
        public int ScheduleTimerInMins { get; set; }

        public override string ToString()
        {
            return $"ScheduleTimerInMins: {ScheduleTimerInMins}";
        }
    }
}

Which you read from the App.Config like this

 var newSchedulingConfiguration = SimpleConfig.Configuration.Load<SchedulingConfiguration>();

And this is the async/await compatible AutoResetEvent that I took from Stephen Toub’s blog

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace SachaBarber.QuartzJobUpdate.Async
{
    public class AsyncAutoResetEvent
    {
        private static readonly Task Completed = Task.FromResult(true);
        private readonly Queue<TaskCompletionSource<bool>> _waits = new Queue<TaskCompletionSource<bool>>();
        private bool _signaled;

        public Task WaitAsync()
        {
            lock (_waits)
            {
                if (_signaled)
                {
                    _signaled = false;
                    return Completed;
                }
                else
                {
                    var tcs = new TaskCompletionSource<bool>();
                    _waits.Enqueue(tcs);
                    return tcs.Task;
                }
            }
        }

        public void Set()
        {
            TaskCompletionSource<bool> toRelease = null;
            lock (_waits)
            {
                if (_waits.Count > 0)
                    toRelease = _waits.Dequeue();
                else if (!_signaled)
                    _signaled = true;
            }
            toRelease?.SetResult(true);
        }
    }
}

So the last part of the puzzle is: how does the AsyncAutoResetEvent get signalled?

Well, as we said above, we need to wait for any in-progress work to complete first. The way I tackled that was that within the job code that gets run on every Quartz.Net scheduler tick, we just check whether we have been requested to swap out the current schedule time; if so, we signal the waiting code via the (shared) AsyncAutoResetEvent, otherwise we just carry on and do the regular job work.

The way that we get the AsyncAutoResetEvent that is used by the waiting code and also the job code (to signal it) is via a singleton registration in an IOC container. I am using Autofac, which I set up like this, but you could have your own singleton, or an IOC container of your choice.

The trick is to make sure that both classes that need to access the AsyncAutoResetEvent use a single instance.

using System;
using System.Reflection;
using Autofac;
using SachaBarber.QuartzJobUpdate.Configuration;
using SachaBarber.QuartzJobUpdate.Services;
using Quartz;
using Quartz.Impl;

namespace SachaBarber.QuartzJobUpdate
{
    public class ContainerOperations
    {
        private static Lazy<IContainer> _containerSingleton = 
            new Lazy<IContainer>(CreateContainer);

        public static IContainer Container => _containerSingleton.Value;

        public static void ReInitialiseSchedulingConfiguration(
            SchedulingConfiguration newSchedulingConfiguration)
        {
            var currentSchedulingConfiguration = 
                Container.Resolve<SchedulingConfiguration>();
            currentSchedulingConfiguration.ScheduleTimerInMins = 
                newSchedulingConfiguration.ScheduleTimerInMins;
        }
        

        private static IContainer CreateContainer()
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<ObservableFileSystemWatcher>()
                .As<IObservableFileSystemWatcher>().ExternallyOwned();
            builder.RegisterType<RxSchedulerService>()
                .As<IRxSchedulerService>().ExternallyOwned();
            builder.RegisterType<Logger>().As<ILogger>().ExternallyOwned();
            builder.RegisterType<SomeWindowsService>();
            builder.RegisterInstance(new SchedulingAssistanceService())
                .As<ISchedulingAssistanceService>();
            builder.RegisterInstance(
                SimpleConfig.Configuration.Load<SchedulingConfiguration>());

            // Quartz/jobs
            builder.Register(c => new StdSchedulerFactory().GetScheduler())
                .As<Quartz.IScheduler>();
            builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
                .Where(x => typeof(IJob).IsAssignableFrom(x));
            return builder.Build();
        }

        
    }
}

Where the shared instance in my case is this class

using SachaBarber.QuartzJobUpdate.Async;

namespace SachaBarber.QuartzJobUpdate.Services
{
    public class SchedulingAssistanceService : ISchedulingAssistanceService
    {
        public SchedulingAssistanceService()
        {
            SchedulerRestartGate = new AsyncAutoResetEvent();
            RequiresNewSchedulerSetup = false;
        }    

        public AsyncAutoResetEvent SchedulerRestartGate { get; }
        public bool RequiresNewSchedulerSetup { get; set; }
    }
}

Here is the actual job code, which checks to see if a change in the App.Config has been detected, in which case it signals the waiting code that it may continue.

using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Quartz;

namespace SachaBarber.QuartzJobUpdate.Services
{
    public class SomeQuartzJob : IJob
    {
        private readonly ILogger _log;
        private readonly ISchedulingAssistanceService _schedulingAssistanceService;

        public SomeQuartzJob(
            ILogger log, 
            ISchedulingAssistanceService schedulingAssistanceService)
        {
            _log = log;
            _schedulingAssistanceService = schedulingAssistanceService;
        }


        public void Execute(IJobExecutionContext context)
        {
            try
            {
                ExecuteAsync(context).GetAwaiter().GetResult();
            }
            catch (JobExecutionException jeex)
            {
                _log.Log(jeex.Message);
                throw;
            }
            catch (SchedulerConfigException scex)
            {
                _log.Log(scex.Message);
                throw;
            }
            catch (SchedulerException sex)
            {
                _log.Log(sex.Message);
                throw;
            }
            catch (ArgumentNullException anex)
            {
                _log.Log(anex.Message);
                throw;
            }
            catch (OperationCanceledException ocex)
            {
                _log.Log(ocex.Message);
                throw;
            }
            catch (IOException ioex)
            {
                _log.Log(ioex.Message);
                throw;
            }
        }


        /// <summary>
        /// This is called every time the Quartz.net scheduler CRON time ticks
        /// </summary>
        public async Task ExecuteAsync(IJobExecutionContext context)
        {
            await Task.Run(async () =>
            {
                if (_schedulingAssistanceService.RequiresNewSchedulerSetup)
                {
                    //signal the waiting scheduler restart code that it can now restart the scheduler
                    _schedulingAssistanceService.RequiresNewSchedulerSetup = false;
                    _log.Log("Job has been asked to stop, to allow job reschedule due to change in config");
                    _schedulingAssistanceService.SchedulerRestartGate.Set();
                }
                else
                {
                    await Task.Delay(1000);
                    _log.Log("Doing the uninterruptible work now");
                }
            });
        }
    }
}

So when the AsyncAutoResetEvent is signalled, the waiting code (inside the subscribe code of the Rx file system watcher in SomeWindowsService.cs) will proceed to swap out the Quartz.Net scheduler time.

It can do this safely, as we know there is NO work in flight: the job only tells the waiting code to proceed when there is no work in flight.

This swapping over of the scheduler time to use the newly read App.Config values is also protected by an AsyncLock class (again taken from Stephen Toub)

using System;
using System.Threading;
using System.Threading.Tasks;

namespace SachaBarber.QuartzJobUpdate.Async
{
    /// <summary>
    /// See http://blogs.msdn.com/b/pfxteam/archive/2012/02/12/10266988.aspx
    /// from the fabulous Stephen Toub
    /// </summary>    
    public class AsyncLock
    {
        private readonly AsyncSemaphore m_semaphore;
        private readonly Task<Releaser> m_releaser;

        public AsyncLock()
        {
            m_semaphore = new AsyncSemaphore(1);
            m_releaser = Task.FromResult(new Releaser(this));
        }

        public Task<Releaser> LockAsync()
        {
            var wait = m_semaphore.WaitAsync();
            return wait.IsCompleted ?
                m_releaser :
                wait.ContinueWith((_, state) => new Releaser((AsyncLock)state),
                    this, CancellationToken.None,
                    TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Default);
        }

        public struct Releaser : IDisposable
        {
            private readonly AsyncLock m_toRelease;

            internal Releaser(AsyncLock toRelease) { m_toRelease = toRelease; }

            public void Dispose()
            {
                if (m_toRelease != null)
                    m_toRelease.m_semaphore.Release();
            }
        }
    }
}

Where this relies on AsyncSemaphore

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace SachaBarber.QuartzJobUpdate.Async
{
    /// <summary>
    /// See http://blogs.msdn.com/b/pfxteam/archive/2012/02/12/10266983.aspx
    /// from the fabulous Stephen Toub
    /// </summary>
    public class AsyncSemaphore
    {
        private static readonly Task s_completed = Task.FromResult(true);
        private readonly Queue<TaskCompletionSource<bool>> _mWaiters = new Queue<TaskCompletionSource<bool>>();
        private int _mCurrentCount;

        public AsyncSemaphore(int initialCount)
        {
            if (initialCount < 0) throw new ArgumentOutOfRangeException("initialCount");
            _mCurrentCount = initialCount;
        }

        public Task WaitAsync()
        {
            lock (_mWaiters)
            {
                if (_mCurrentCount > 0)
                {
                    --_mCurrentCount;
                    return s_completed;
                }
                else
                {
                    var waiter = new TaskCompletionSource<bool>();
                    _mWaiters.Enqueue(waiter);
                    return waiter.Task;
                }
            }
        }


        public void Release()
        {
            TaskCompletionSource<bool> toRelease = null;
            lock (_mWaiters)
            {
                if (_mWaiters.Count > 0)
                    toRelease = _mWaiters.Dequeue();
                else
                    ++_mCurrentCount;
            }
            if (toRelease != null)
                toRelease.SetResult(true);
        }


    }
}

Just for completeness this is how you get an App.Config section to refresh at runtime

 ConfigurationManager.RefreshSection("schedulingConfiguration");

Anyway, this works fine for me; I now have a reactive app that responds to changes in the App.Config without the need to restart the app, and it does so by allowing in-flight work to complete first.

Hope it helps someone out there

 

Where Is The Code?

The code can be found here : https://github.com/sachabarber/SachaBarber.QuartzJobUpdate

scala environment config options

This is going to be a slightly weird post in a way, as it is going to go round the houses a bit and is not going to contain much actual code, but shall talk about possible techniques for how to best manage environment-specific config values for a multi-project Scala setup.

Coming from .NET

So as many of you know, I came from .NET, where we have a simple config model: we have App.Config or Web.Config.

We have tools at our disposal such as the XmlTransformation MSBuild tasks, which allow us to maintain a set of different App.Config values that will be transformed for your different environments.

I wrote a post about this here

https://sachabarbs.wordpress.com/2015/07/07/app-config-transforms-outside-of-web-project/

Here is a small example of what this might look like

 

image
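For those that have not seen one, an illustrative transform fragment (names made up for this post) looks something like this, where the QA transform file patches the matching entry in the base App.Config:

<!-- App.QA.config : illustrative only -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- replaces the value of the matching key in the base config -->
    <add key="ServiceUrl" value="http://qa-server/service"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>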

So then I started doing multi-project Scala solutions using SBT, where I might have the following project requirements

image

In the above diagram the following is assumed

  • There is a Sacha.Common library that we want to use across multiple projects
  • That both Sacha.Rest.Endpoints and Sacha.Play.FrontEnd are apps which will need some environment specific configuration in them
  • That there is going to be more than 1 environment that we wish to use such as
    • PROD
    • QA
    • DEV

Now coming from .NET, my initial instinct was to put a bunch of folders in the 2 apps, so taking the Sacha.Rest.Endpoints app as an example we may have something like this

image

So the idea would be that we would have specific application.conf files for the different environments that we need to target (I am assuming there is some build process within the CI server which takes care of putting the correct file in place for the correct build).

This is very easy to understand, if we want QA we would end up using the QA version of the application.conf file

This is a very common way of thinking about this problem in .NET.

Why Is This Bad?

But hang on, this is only 1 app; what if we had 100 apps that made up our software in total? That means we need to maintain all these separate config files for all the environments in ALL the separate apps.

Wow that doesn’t sound so cool anymore.

Another Approach!

A colleague and I were talking about this for some Scala code that was being written for a new project, and this is kind of what was being discussed.

I should point out that this idea was not in fact mine, but my colleagues Andy Sprague, which is not something I credited him for in the 1st draft of this post. Which is bad, sorry Andy.

Anyway, how about this for another idea: how about the Sacha.Common JAR holds just the specific bits of changing config, in separate config files such as

  • “Qa.conf”
  • “Prod.conf”
  • “Dev.conf”

And then the individual apps that already reference the Sacha.Common JAR just include the environment config they need.

This is entirely possible thanks to the way that the Typesafe config library works, where it is designed to be able to include extra config files. These extra config files in this case are just inside of a JAR that is external: Sacha.Common.

Here is what this might look like for a consumer of the Sacha.Common JAR

image

Where we just include the relevant environment config from Sacha.Common in the application.conf for the current app
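As a minimal sketch (file and setting names made up for this post), the consuming app's application.conf might simply be:

# application.conf inside Sacha.Rest.Endpoints (illustrative)
include "Qa.conf"

# app specific settings/overrides follow
sacha.rest.port = 8080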

And this is what Sacha.Common may look like, where it provides the separate environment config files that consumers may use

image

This diagram may help to illustrate this further

image

Why Is This Cool?

The good thing about this design, over separate config files per environment per application, is that we now ONLY need to maintain one set of environment specific settings: those in the common JAR, Sacha.Common.

I wish we could do this within the .NET configuration system.

Hope this helps; I personally think that this is a fairly nice way to manage your configs for multiple applications and multiple environments.