CASSANDRA + SPARK 1 of 2

A while ago I wrote about using Apache Spark, which is a great tool. I have been using Cassandra for a bit at work now, so I thought it might be nice to revisit that article and talk through how to use Spark with Cassandra.

Here is the 1st part of that: http://www.codeproject.com/Articles/1073158/Apache-Spark-Cassandra-of

Scala : multi project sbt setup

A while ago I wrote a post about how to use SBT (Scala Build Tool):

https://sachabarbs.wordpress.com/2015/10/13/sbt-wheres-my-nuget/

In that post I showed simple usages of SBT. The thing is, that was not really that realistic, so I wanted to have a go at a more real world example: one where we might have multiple projects, say like this:

 

SachasSBTDemo-App which depends on 2 sub projects

  • SachasSBTDemo-Server
  • SachasSBTDemo-Common

So how do we go about doing this with SBT?

There are 5 main steps to do this, which we will look at in turn.

SBT Directory Structure

The first thing that we need to do is create a new project folder (if you are from a Visual Studio / .NET background, think of this as the solution folder) called “project”.

In here we will create 2 files

build.properties, which just lists the version of SBT we will use. It looks like this:

sbt.version=0.13.8

SachaSBTDemo.scala is what I have called the other file, but you can call it what you like. Here are the contents of that file; this is the main SBT build definition that governs how it all hangs together. I will be explaining each of these parts as we go.

import sbt._
import Keys._

object BuildSettings {


  val buildOrganization = "sas"
  val buildVersion      = "1.0"
  val buildScalaVersion = "2.11.5"

  val buildSettings = Defaults.defaultSettings ++ Seq (
    organization := buildOrganization,
    version      := buildVersion,
    scalaVersion := buildScalaVersion
  )
}


object Dependencies {
  val jacksonjson = "org.codehaus.jackson" % "jackson-core-lgpl" % "1.7.2"
  val scalatest = "org.scalatest" % "scalatest_2.9.0" % "1.4.1" % "test"
}


object SachasSBTDemo extends Build {

  import Dependencies._
  import BuildSettings._

  // Sub-project specific dependencies
  val commonDeps = Seq (
     jacksonjson,
     scalatest
  )

  val serverDeps = Seq (
     scalatest
  )


  lazy val demoApp = Project (
    "SachasSBTDemo-App",
    file ("SachasSBTDemo-App"),
    settings = buildSettings
  )
  //build these projects when main App project gets built
  .aggregate(common, server)
  .dependsOn(common, server)

  lazy val common = Project (
    "common",
    file ("SachasSBTDemo-Common"),
    settings = buildSettings ++ Seq (libraryDependencies ++= commonDeps)
  )

  lazy val server = Project (
    "server",
    file ("SachasSBTDemo-Server"),
    settings = buildSettings ++ Seq (libraryDependencies ++= serverDeps)
  ) dependsOn (common)
  
}

 

Projects

In order to have separate projects we need to use the Project item from the SBT library JARs. A minimal Project setup will tell SBT where to create the new project. Here is an example of a Project, where the folder we expect SBT to create will be called “SachasSBTDemo-App”.

lazy val demoApp = Project (
    "SachasSBTDemo-App",
    file ("SachasSBTDemo-App"),
    settings = buildSettings
  )

Project Dependencies

We can also specify Project dependencies using “dependsOn” which takes a Seq of other projects that this Project depends on.

That means that when we apply an action to the Project that is depended on, the Project that has the dependency will also have the action applied.

lazy val demoApp = Project (
    "SachasSBTDemo-App",
    file ("SachasSBTDemo-App"),
    settings = buildSettings
  )
  //build these projects when main App project gets built
  .aggregate(common, server)
  .dependsOn(common, server)

Project Aggregation

We can also specify that a Project aggregates results from other projects, using “aggregate”, which takes a Seq of other projects that this Project aggregates.

What “aggregate” means is that whenever we apply an action on the aggregating Project we should also see the same action applied to the aggregated Projects.

lazy val demoApp = Project (
    "SachasSBTDemo-App",
    file ("SachasSBTDemo-App"),
    settings = buildSettings
  )
  //build these projects when main App project gets built
  .aggregate(common, server)
  .dependsOn(common, server)

Library Dependencies

Just like in the simple post I did before, we still need to bring in our JAR files using SBT. But this time we come up with a nicer way to manage them: we simply wrap them all up in an object, and then use that object to satisfy the various dependencies of the Projects. Much neater.

import sbt._
import Keys._


object Dependencies {
  val jacksonjson = "org.codehaus.jackson" % "jackson-core-lgpl" % "1.7.2"
  val scalatest = "org.scalatest" % "scalatest_2.9.0" % "1.4.1" % "test"
}

  // Sub-project specific dependencies
  val serverDeps = Seq (
     scalatest
  )

  .....
  .....
  lazy val server = Project (
    "server",
    file ("SachasSBTDemo-Server"),
    //bring in the library dependencies
    settings = buildSettings ++ Seq (libraryDependencies ++= serverDeps)
  ) dependsOn (common)

The Finished Product

The final product once run through SBT should be something like this if viewed in IntelliJ IDEA:

image

 

Or like this on the file system:

image

If you want to grab my source files, they are available here at GitHub : https://github.com/sachabarber/SBT_MultiProject_Demo

MVP For 2016

Well I just got the email from the big house. I am an MVP for 2016 for “Visual Studio and Development Technologies”. This will be the 9th time I have been awarded the MVP award. Neato

The interesting thing is I spent half of last year working with the JVM / open source (a few Apache projects), which I started to blog about, whilst I spent the other half on .NET. So I honestly did not think I would be receiving the MVP award this time around, and it was a nice surprise. Awards are always nice.

That said I have never tried to “GET” the MVP award. It is nice when you get recognized for your efforts, but right now I am really enjoying the open source stuff, and I will be continuing to work with that for sure. I will ALWAYS have time for .NET, I love it. My current role has me spending 50% of my time in .NET land, and the other 50% in Scala and open source, so I am a happy camper right now.

I will continue to blog about stuff that I personally find interesting, and if that is .NET / Scala / open source stuff, so be it. Hopefully that will cover .NET and the other stuff of interest that I am digging lately. Only time will tell.

 

Anyway thanks Microsoft, and thanks to all the readers of my blog. Happy new year to you all

 

 

 

 

 

SCALA mocking

 

Last time we looked at writing unit tests for our code, where we looked at using ScalaTest. This time we will be looking at mocking.

In .NET there are several choices available that I like (and a couple that I don’t), such as :

  • Moq
  • FakeItEasy
  • RhinoMocks (this is one I am not keen on)

I personally am most familiar with Moq, so when I started looking at JVM based mocking frameworks I kind of wanted one that used roughly the same syntax as the ones that I had used in .NET land.

There are several choices available that I think are quite nice, namely :

  • ScalaMock
  • EasyMock
  • JMock
  • Mockito

These all play nicely with ScalaTest (which I am sure you are all very pleased to hear).

So with that list, what did I decide upon? I personally opted for Mockito, as I liked its syntax the best. That is not to say the others are not fine and dandy, it is just that I personally liked Mockito, and it seemed to have good documentation and favorable Google search results, so Mockito it is.

So for the rest of this post I will talk about how to use Mockito to write our mocks. I will be using Mockito alongside ScalaTest, which we looked at last time.

SBT Requirements

As with most of the previous posts you will need to grab the libraries using SBT. As such your SBT file will need to use the following:

libraryDependencies ++= Seq(
  "org.mockito" % "mockito-core" % "1.8.5",
  "org.scalatest" %% "scalatest" % "2.2.5" % "test"
)

 

Our First Example

So with all that stated above, let's have a look at a simple example. This trivial example mocks out a java.util.ArrayList[String], and also sets up a few verifications:

import org.scalatest._
import org.scalatest.mock._
import org.mockito.Mockito._


class FlatSpec_Mocking_Tests extends FlatSpec with Matchers with MockitoSugar {


  "Testing using Mockito " should "be easy" in {


    //mock creation
    val mockedList = mock[java.util.ArrayList[String]]

    //using mock object
    mockedList.add("one");
    mockedList.clear

    //verification
    verify(mockedList).add("one")
    verify(mockedList).clear

  }
}

One thing you may notice straight away is: how the F*k am I able to mock an ArrayList[T], which is a class that is not abstract, by the way? This is pretty cool.

 

Stubbing

Using Mockito we can also stub things out, just as you would expect with any half-decent mocking framework. Here is an example where we mock out a simple trait.

import java.util.Date
import org.scalatest._
import org.scalatest.mock._
import org.mockito.Mockito._


trait DumbFormatter {

  def formatWithDataTimePrefix(inputString : String, date : Date) : String = {
    s"date : $date : $inputString"
  }

  def getDate() : String = {
    new Date().toString
  }
}



class FlatSpec_Mocking_Tests extends FlatSpec with Matchers with MockitoSugar {

  "Stubbing using Mockito " should "be easy" in {

    var mockDumbFormatter = mock[DumbFormatter]
    when(mockDumbFormatter.getDate()).thenReturn("01/01/2015")
    assert("01/01/2015" === mockDumbFormatter.getDate())
  }
}

It can be seen above that it is quite easy to mock a trait. You can also see how we stub the mock out using the  Mockito functions

  • when
  • thenReturn

 

Return Values

We just saw an example above of how to use the “thenReturn” Mockito function, which is what you would use to set up your return value. If you want a dynamic return value, the stubbing could quite easily call some other function which deals with creating the return values; kind of a return value factory method.
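For example, here is a small sketch (reusing the DumbFormatter trait and MockitoSugar setup from the Stubbing example above, with a hypothetical buildStamp helper) where the value handed to “thenReturn” comes from an ordinary function acting as that factory:

//hypothetical factory method that builds the canned return value
def buildStamp(prefix : String) : String = s"$prefix : stubbed"

val stubbedFormatter = mock[DumbFormatter]
when(stubbedFormatter.getDate()).thenReturn(buildStamp("01/01/2015"))
assert("01/01/2015 : stubbed" === stubbedFormatter.getDate())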

 

Argument Matching

Mockito comes with something that allows you to match against any argument value. It also comes with regex matchers, and allows you to write custom matchers if the ones out of the box don’t quite fit your needs.

Here is an example of writing a mock where we use the standard argument matchers:

import java.util.Date
import org.scalatest._
import org.scalatest.mock._
import org.mockito.Mockito._
import org.mockito.Matchers._


trait DumbFormatter {

  def formatWithDataTimePrefix(inputString : String, date : Date) : String = {
    s"date : $date : $inputString"
  }

  def getDate() : String = {
    new Date().toString
  }
}



class FlatSpec_Mocking_Tests extends FlatSpec with Matchers with MockitoSugar {

  "Stubbing using Mockito " should "be easy" in {

    var mockDumbFormatter = mock[DumbFormatter]
    when(mockDumbFormatter.formatWithDataTimePrefix(anyString(),any[Date]())).thenReturn("01/01/2015 Something")
    assert("01/01/2015 Something" === mockDumbFormatter.formatWithDataTimePrefix("blah blah blah", new Date()))
  }
}
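The example above only uses the out-of-the-box anyString() / any[T]() matchers. As a rough sketch of the regex matching mentioned earlier, here is the same stubbing done with Mockito’s built-in matches(regex) matcher (this drops into the same test class, which already imports org.mockito.Matchers._):

//the stub below only fires when the string argument matches the regular expression
val regexMockedFormatter = mock[DumbFormatter]
when(regexMockedFormatter.formatWithDataTimePrefix(matches(".*blah.*"), any[Date]()))
  .thenReturn("matched via regex")
assert("matched via regex" === regexMockedFormatter.formatWithDataTimePrefix("blah blah blah", new Date()))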

Exceptions

To throw exceptions with Mockito we simply need to use the “thenThrow(….)” function. Here is how:

import java.util.Date
import org.scalatest._
import org.scalatest.mock._
import org.mockito.Mockito._
import org.mockito.Matchers._


trait DumbFormatter {

  def formatWithDataTimePrefix(inputString : String, date : Date) : String = {
    s"date : $date : $inputString"
  }

  def getDate() : String = {
    new Date().toString
  }
}



class FlatSpec_Mocking_Tests extends FlatSpec with Matchers with MockitoSugar {

  "Stubbing using Mockito " should "be easy" in {

    var mockDumbFormatter = mock[DumbFormatter]
    when(mockDumbFormatter.formatWithDataTimePrefix(anyString(),any[Date]()))
	.thenThrow(new RuntimeException())

    //use the ScalaTest intercept to test for exceptions
    intercept[RuntimeException] {
      mockDumbFormatter.formatWithDataTimePrefix("blah blah blah", new Date())
    }
  }
}

See how we also have to use the ScalaTest “intercept” function for the actual testing.

 

CallBacks

Callbacks are useful when you want to see what a method was called with and then you can make informed decisions about what you could possibly return.

Here is how you do callbacks in Mockito; note the use of the “thenAnswer” function, and how we use an anonymous Answer object.

import java.util.Date
import org.mockito.invocation.InvocationOnMock
import org.mockito.stubbing.Answer
import org.scalatest._
import org.scalatest.mock._
import org.mockito.Mockito._
import org.mockito.Matchers._


trait DumbFormatter {

  def formatWithDataTimePrefix(inputString : String, date : Date) : String = {
    s"date : $date : $inputString"
  }

  def getDate() : String = {
    new Date().toString
  }
}



class FlatSpec_Mocking_Tests extends FlatSpec with Matchers with MockitoSugar {

  "Stubbing using Mockito " should "be easy" in {

    var mockDumbFormatter = mock[DumbFormatter]
    when(mockDumbFormatter.formatWithDataTimePrefix(anyString(),any[Date]()))
      .thenAnswer(new Answer[String] {
        override def answer(invocation: InvocationOnMock): String = {
          val result = "called back nicely sir"
          println(result)
          result
        }
      })

    assert("called back nicely sir" === mockDumbFormatter.formatWithDataTimePrefix("blah blah blah", new Date()))



  }
}

 

Verification

The last thing I wanted to talk about was verification, which may include verifying that functions got called, and were called the right number of times.

Here is a simple example of this:

import java.util.Date
import org.mockito.invocation.InvocationOnMock
import org.mockito.stubbing.Answer
import org.scalatest._
import org.scalatest.mock._
import org.mockito.Mockito._
import org.mockito.Matchers._


trait DumbFormatter {

  def formatWithDataTimePrefix(inputString : String, date : Date) : String = {
    s"date : $date : $inputString"
  }

  def getDate() : String = {
    new Date().toString
  }
}



class FlatSpec_Mocking_Tests extends FlatSpec with Matchers with MockitoSugar {

  "Stubbing using Mockito " should "be easy" in {

    var mockDumbFormatter = mock[DumbFormatter]
    when(mockDumbFormatter.formatWithDataTimePrefix(anyString(),any[Date]()))
      .thenReturn("someString")

    val theDate = new Date()
    val theResult = mockDumbFormatter.formatWithDataTimePrefix("blah blah blah", theDate)
    val theResult2 = mockDumbFormatter.formatWithDataTimePrefix("no no no", theDate)

    verify(mockDumbFormatter, atLeastOnce()).formatWithDataTimePrefix("blah blah blah", theDate)
    verify(mockDumbFormatter, times(1)).formatWithDataTimePrefix("no no no", theDate)


  }
}
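Mockito can also verify the opposite: that something was never called, and that there were no interactions beyond the ones you verified. A small sketch (these lines would simply be appended to the test above, and use never() and verifyNoMoreInteractions which come from the org.mockito.Mockito._ import):

//never() asserts that the call did not happen with those arguments
verify(mockDumbFormatter, never()).formatWithDataTimePrefix("something else", theDate)

//fails if any interaction with the mock has not been verified
verifyNoMoreInteractions(mockDumbFormatter)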

 

 

Further Reading

You can read more about how to use Mockito from the docs : https://docs.google.com/document/d/15mJ2Qrldx-J14ubTEnBj7nYN2FB8ap7xOn8GRAi24_A/edit

 

 

End Of The Line

Personally my quest goes on; I am going to keep going until I consider myself good at Scala (which probably means I know nothing).

Anyway behind the scenes I will be studying more and more stuff about how to get myself to that point. As such I guess it is only natural that I may post some more stuff about Scala in the future.

But for now this is it, this is the end of the line for this brief series of posts on Scala. I hope you have all enjoyed the posts, and if you have, please feel free to leave a comment; they are always appreciated.

 

SCALA : TESTING OUR CODE

 

So last time we looked at how to use Slick to connect to a SQL server database.

This time we look at how to use one of the 2 popular Scala testing frameworks.

The 2 big names when it comes to Scala testing are

  • ScalaTest
  • Specs2

I have chosen to use ScalaTest as it seems slightly more popular when you do a Google search, and I quite liked the syntax. That said, Specs2 is also very good, so if you fancy having a look at that you should.

SBT for ScalaTest

So what do we need to get started with ScalaTest? As always we need to grab the JAR, which we do using SBT.

At time of writing this was accomplished using this SBT entry:

name := "ClassesDemo"

version := "1.0"

scalaVersion := "2.11.7"

libraryDependencies ++= Seq(
  "org.scalatest" %% "scalatest" % "2.2.5" % "test"
)

With that in place, SBT should pull down the JAR from Maven Central for you. So once you are happy that you have the ScalaTest JAR installed, we can now proceed to write some tests.

 

Writing Some Tests

I come from a .NET background, and as such I am used to working with tools such as

  • NUnit
    • TestFixture
    • Setup : To setup the test
    • TearDown : To teardown the test
  • Moq / FakeItEasy : Mocking frameworks

As such I wanted to make sure I could do everything that I was used to in .NET using ScalaTest.

This article will concentrate on the testing side of things, while the next post will be on the mocking side of things.

So let’s carry on for now shall we.

Choosing Your Test Style

ScalaTest allows you to use 2 different styles of writing tests.

  • FunSuite : This is more in line with what you get with NUnit, say. We would write something like test(“testing should be easy”)
  • FlatSpec : This is more of a BDD style test declaration, where we would write something like this: “Testing” should “be easy”

We will see examples of both of these styles in just a minute, but before that let's carry on and look at some of the common things you may want to do with your tests.

Setup / TearDown

You may want to run some startup/teardown code around your tests. Typically startup would be used to set up mocks for your test cases, and that sort of thing.

In things like NUnit this would simply be done by creating a method and attributing it to say it is the Setup/TearDown methods.

In ScalaTest things are slightly different, in that we need to mix in the “BeforeAndAfter” trait to do this. Let's see an example:

import org.scalatest.{FunSuite, BeforeAndAfter}
import scala.collection.mutable.ListBuffer

class FunSuite_Example_Tests extends FunSuite with BeforeAndAfter {

  val builder = new StringBuilder
  val buffer = new ListBuffer[String]

  before {
    builder.append("ScalaTest is ")
  }

  after {
    builder.clear()
    buffer.clear()
  }
}

It can be seen in this example that the BeforeAndAfter trait gives you 2 additional functions:

  • before
  • after

You can use these to perform your startup/teardown logic.

This example uses the FunSuite style, but the “BeforeAndAfter” trait mixin is done exactly the same way for the FlatSpec style of testing.

 

Writing A Test Using FunSuite

I think if you have come from a NUnit / XUnit type of background you will probably identify more with the FunSuite style of testing.

Here is an example of a set of FunSuite tests.

import org.scalatest.{FunSuite, BeforeAndAfter}

import scala.collection.mutable.ListBuffer

class FunSuite_Example_Tests extends FunSuite with BeforeAndAfter {

  val builder = new StringBuilder
  val buffer = new ListBuffer[String]

  before {
    builder.append("ScalaTest is ")
  }

  after {
    builder.clear()
    buffer.clear()
  }

  test("Testing should be easy") {
    builder.append("easy!")
    assert(builder.toString === "ScalaTest is easy!")
    assert(buffer.isEmpty)
    buffer += "sweet"
  }

  test("Testing should be fun") {
    builder.append("fun!")
    assert(builder.toString === "ScalaTest is fun!")
    assert(buffer.isEmpty)
  }
}

It can be seen that they follow the very tried and tested approach of tools like NUnit, where you have a test(…) function, and “…” is the text that describes your test case.

Nothing much more to say there, apart from making sure you mix in the FunSuite trait.

 

Writing A Test Using FlatSpec

ScalaTest also supports another way of writing your tests, which is to use the FlatSpec trait, which you would mixin instead of the FunSuite trait.

When you use FlatSpec you would be writing your tests more like this:

  • “Testing” should “be easy” in {…}
  • it should “be fun” in {…}

It's more of a BDD style way of creating your test cases.

Here is the exact same test suite that we saw above but this time written using the FlatSpec instead of FunSuite.

import org.scalatest._
import scala.collection.mutable.ListBuffer
 
class FlatSpec_Example_Tests extends FlatSpec with BeforeAndAfter {
 
    val builder = new StringBuilder
    val buffer = new ListBuffer[String]
 
     before {
         builder.append("ScalaTest is ")
       }
 
     after {
         builder.clear()
         buffer.clear()
       }
 
    "Testing" should "be easy" in {
         builder.append("easy!")
         assert(builder.toString === "ScalaTest is easy!")
         assert(buffer.isEmpty)
         buffer += "sweet"
       }
 
     it should "be fun" in {
         builder.append("fun!")
         assert(builder.toString === "ScalaTest is fun!")
         assert(buffer.isEmpty)
       }
}

I don’t mind either, I guess it’s down to personal choice/taste at the end of the day.

Using Matchers

Matchers are ScalaTest’s way of providing additional constraints to assert against. In some testing frameworks we would just use the Assert class for that, along with things like

  • Assert.AreEqual(..)
  • Assert.IsNotNull(..)

In ScalaTest you can still use the assert(..) function, but matchers are also a good way of expressing your test conditional.

So what exactly are matchers?

In the words of the ScalaTest creators:

ScalaTest provides a domain specific language (DSL) for expressing assertions in tests using the word should.

So what do we need to do to use these ScalaTest matchers? Well quite simply we need to just mix in Matchers, like this:

import org.scalatest._

class ExampleSpec extends FlatSpec with Matchers { ...}

You can alternatively import the members of the trait, a technique particularly useful when you want to try out matcher expressions in the Scala interpreter. Here’s an example where the members of Matchers are imported:

import org.scalatest._
import Matchers._

class ExampleSpec extends FlatSpec { // Can use matchers here ...

So that gives us the ability to use the ScalaTest matchers DSL. So what do these things look like? Let's see a couple of examples:

import org.scalatest._


class FlatSpec_Example_Tests extends FlatSpec with Matchers {

    "Testing" should "probably use some matchers" in {

          //equality examples
          Array(1, 2) should equal (Array(1, 2))
          val resultInt = 3
          resultInt should equal (3) // can customize equality
          resultInt should === (3)   // can customize equality and enforce type constraints
          resultInt should be (3)    // cannot customize equality, so fastest to compile
          resultInt shouldEqual 3    // can customize equality, no parentheses required
          resultInt shouldBe 3       // cannot customize equality, so fastest to compile, no parentheses required

          //length examples
          List(1,2) should have length 2
          "cat" should have length 3

          //string examples
          val helloWorld = "Hello worlld"
          helloWorld should startWith ("Hello")
          helloWorld should endWith ("world")

          val sevenString ="six seven eight"
          sevenString should include ("seven")

          //greater than / less than
          val one = 1
          val zero = 0
          val seven = 7
          one should be < seven
          one should be > zero
          one should be <= seven
          one should be >= zero

          //emptiness
          List() shouldBe empty
          List(1,2) should not be empty
       }

}

 

 

For more information on using matchers, you should consult this documentation, which you can find here:

http://www.scalatest.org/user_guide/using_matchers

 

 

SCALA : Connecting to a database

 

This time we will proceed to look at using Scala to connect to SQL server.

In .NET we have quite a few ORM choices available, as well as standard ADO.NET. For example we could use any of the following quite easily

  • Linq to SQL
  • Entity Framework
  • Dapper
  • NHibernate
  • ADO .NET

In Scala things are a bit more tame on the ORM front. We basically only have one player, which is called “Slick”. The rest of this post will be about how to use Slick.

 

Slick

The good thing about Slick is that it works with a wide range of SQL dialects. For this post I will be using what I know, which is MS SQL Server. As such I will be using an MS SQL Server driver, and there may be differences between the driver I use and other Slick drivers, but hopefully you will get the idea.
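This post does not show the SBT entry for Slick itself, so here is a rough sketch of what it would look like. The artifact names and versions below are assumptions (check the GitHub project mentioned later for the real build file); at the time, the MS SQL Server driver lived in the closed source “slick-extensions” package from Typesafe:

resolvers += "Typesafe Releases" at "http://repo.typesafe.com/typesafe/maven-releases/"

libraryDependencies ++= Seq(
  "com.typesafe.slick" %% "slick" % "3.0.0",
  "com.typesafe.slick" %% "slick-extensions" % "3.0.0"
)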

 

Notes on MS SQL Server

The following notes assume you have an MS SQL Server installation available.

I found that I had to do the following to get Slick to work with MS SQL Server:

  • Turn on TCP/IP
  • Ensure that the full set of SQL Server services were running, for the Slick Extensions SQL Server driver to work.

Demo IntelliJ IDEA Project

As this one is quite a lot bigger than the previous Scala posts, I have decided to upload this one to GitHub.

You can grab the project from here :

https://github.com/sachabarber/ScalaSlickTest

But before you try and run it you should make sure you have done the following :

  • Created a MS SQL Server DB
  • Run  the schema creation scripts included in the IntelliJ IDEA project
  • Changed the “application.conf” file to point to YOUR SQL Server installation

 

The rest of this post will deal with how to do various things using Slick such as:

  • Use direct SQL commands (sql strings)
  • Use the slick ORM for CRUD
  • Use a stored procedure with Slick

But before we get on to any of that, let's just outline the schema we will be working with. The one and only table we will be using is this one :

image

So now that we know what the single (I know, lame, we should have had more, but meh) table looks like, let's crack on.

NOTE : In the examples shown in this post I am using the Scala Async Library that I have talked about before.
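As a reminder, the snippets below will not compile on their own; they assume imports along the following lines, plus a Database instance built from the application.conf mentioned earlier. This is only a sketch, and the "sqlserver" config path is a hypothetical name (use whatever key your application.conf actually defines):

import scala.async.Async.{async, await}
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import com.typesafe.slick.driver.ms.SQLServerDriver.api._

//builds a Database from the connection details held in application.conf
val db = Database.forConfig("sqlserver")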

 

Using Direct SQL Commands

In this section we will see how we can use Slick to run arbitrary SQL commands. Let's see some examples.

Return a Scalar value

Say we only want 1 value back, perhaps a count of the rows. We can just do this:

def selectScalarObject(db:Database) : Unit = {

  val action = sql"""Select count(*) as 'sysobjectsCount'  from sysobjects""".as[Int]
  val futureDB : Future[Vector[Int]] = db.run(action)

  async {
    val sqlData = await(futureDB)
    val count = sqlData.head
    println(s"PlainSQLHelper.selectScalarObject() sysobjectsCount: $count")
  } onFailure {
    case e => {
      println(s"ERROR : $e")
    }
  }
}

Return more than 1 value

We may of course want a couple of values, but we are not quite ready to return a brand new entity. So we can use a Tuple.

Here is an example:

def selectTupleObject(db: Database) : Unit = {

  val action = sql"""Select count(*)  as 'sysobjectsCount', count(*)/10  as 'sysobjectsCountDiv10' from sysobjects""".as[(Int,Int)]
  val futureDB : Future[Vector[(Int,Int)]] = db.run(action)

  async {
    val sqlData = await(futureDB)
    val (x,y) = sqlData.head
    println(s"PlainSQLHelper.selectTupleObject() sysobjectsCount: $x, sysobjectsCountDiv10: $y")
  } onFailure {
    case e => {
      println(s"ERROR : $e")
    }
  }
}

Return a case class

We can obviously make things more formal, and return a nice case class. Here is an example of that:

def selectRawTableObject(db: Database) : Unit = {

  val action = sql"""Select * from Items""".as[(Int,String, Double, Int)]
  val futureDB : Future[Vector[(Int,String, Double, Int)]] = db.run(action)

  async {
    val sqlData = await(futureDB)
    val (id,desc, cost, location) = sqlData.head
    val item = RawSQLItem(id,desc, cost, location)
    println(s"PlainSQLHelper.selectRawTableObject() Id: ${item.id}, Description: ${item.description}, Cost: ${item.cost}, WarehouseLocation: ${item.warehouseLocationId}")
  } onFailure {
    case e => {
      println(s"ERROR : $e")
    }
  }
}


case class RawSQLItem(id: Int, description: String, cost: Double,  warehouseLocationId: Int)

 

 

Using The Slick ORM For CRUD

These examples show how you can do the basic CRUD operations with Slick.

However, before we start to look at the CRUD operations, let's just see a bit of basic Slick code. Slick uses a trait called Table which you MUST mix in. It is also common practice to use a companion object to create a TableQuery[T]. Here is the one for the CRUD operations we will be looking at next:

package org.com.barbers.slicktest

import com.typesafe.slick.driver.ms.SQLServerDriver.api._

object Items {
  val items = TableQuery[Items]
}

case class DBItem(id: Int, description: String, cost: Double,  warehouseLocationId: Int)

class Items(tag: Tag) extends Table[DBItem](tag, "Products") {
  def id = column[Int]("Id", O.PrimaryKey, O.AutoInc)
  def description = column[String]("Description")
  def cost = column[Double]("Cost")
  def warehouseLocationId = column[Int]("WarehouseLocationId")
  def * = (id, description, cost, warehouseLocationId) <> (DBItem.tupled, DBItem.unapply)
}

Create

Ok so now we have seen that Slick uses a Table mixin, and that there is a TableQuery[T] at play. Let’s move on to see how we can create some data.

This is quite weird to do. Normally what we want back from an INSERT is an Id. How Slick does that is a bit strange: we need to use the Slick DSL to say what we would like returned (the “Id”), which we do using “returning” followed by a map over the Items table. This may sound weird, but the example below may help to illustrate it a bit. Here is how we do it:

def saveItem(db: Database, item: DBItem) = {

  val action =(Items.items returning Items.items.map(_.id)) +=
    DBItem(-1, item.description, item.cost, item.warehouseLocationId)
  val futureDB : Future[Int] = db.run(action)

  async {
    val savedItemId = await(futureDB)
    println(s"TableResultRunner.saveItem() savedItem.Id ${savedItemId}")
  } onFailure {
    case e => {
      println(s"ERROR : $e")
    }
  }
}

And here is how we store several items. For a bulk insert, we can’t really get the inserted Ids back, but we can add all the Items in one go using the standard Scala collection operator ++=, which appends a new collection to the current collection.

Again an example will make this clearer

def insertSeveralItems(db: Database, items : List[DBItem]) : Unit = {

  implicit val session: Session = db.createSession()
  val insertActions = DBIO.seq(
    (Items.items ++= items.toSeq).transactionally
  )
  val sql = Items.items.insertStatement
  val futureDB : Future[Unit] = db.run(insertActions)

  async {
    await(futureDB)
    println(s"TableResultRunner.insertSeveralItems() DONE")
  } onFailure {
    case e => {
      println(s"ERROR : $e")
    }
  }
}

 

Retrieve

So we now have some Items, so how do we get them back from the DB?

There are many ways to do this with Slick. Let’s use a simple take(2) operation to start with:

def selectTwoItems(db: Database) : Unit = {

  implicit val session: Session = db.createSession()
  val q =  Items.items.take(2)
  val futureDB : Future[Seq[DBItem]] = db.run(q.result)

  async {
    val sqlData = await(futureDB)
    val item = sqlData.head
    println(s"TableResultRunner.selectTwoItems()[0] " +
      s"Id: ${item.id}, Description: ${item.description}, " +
      s"Cost: ${item.cost}, WarehouseLocationId: ${item.warehouseLocationId}")
  } onFailure {
    case e => {
      println(s"ERROR : $e")
    }
  }
}

We can also use Queries to filter out what we want from the DB. Here is an example of using a Query, where we use a filter to get all Items that have an Id that matches a given Id:

def findItemById(db: Database,id : Int) = {

  async {
    val q = for { p <- Items.items if p.id === id } yield p
    val futureDBQuery : Future[Option[DBItem]] = db.run(q.result.headOption)
    val item : Option[DBItem] = await(futureDBQuery)
    println(s"OPTION ${item}")
    item match {
      case Some(x) =>  println(s"TableResultRunner.findItemById The item is $x")
      case _ => ()
    }
  } onFailure {
    case e => {
      println(s"ERROR : $e")
    }
  }
}

 

Update

Update is a stranger one, where we select only the attributes we want from the DB using a query, and then use Slick's inbuilt update(..) function to perform the update on the columns we want. This is clearer with an example.

In this example we want to update ONLY the “cost” column of an Item.

def updateItemCost(db: Database, description : String, cost : Double) = {

  async {
    val q = Items.items
      .filter(_.description === description)
      .map(_.cost)
      .update(cost)

    val futureDB = db.run(q)
    val done = await(futureDB)
    println(s"Update cost of ${description}, to ${cost}")

    val q2 = for { p <- Items.items if p.description === description } yield p
    val futureDBQuery : Future[Seq[DBItem]] = db.run(q2.result)
    val items = await(futureDBQuery)
    items.map(item => println(s"TableResultRunner.updateItemCost The item is now $item") )
  } onFailure {
    case e => {
      println(s"ERROR : $e")
    }
  }
}

Delete

Lastly we would like to delete an Item, so let's see how we can do that. Again we use some Slick magic for this, where we use the .delete function. Here is an example where I delete a random Item from the DB.

def deleteRandomItem(db: Database) = {

  async {
    val q =  Items.items.take(1)
    val futureDB : Future[Seq[DBItem]] = db.run(q.result)
    val sqlData = await(futureDB)
    val item = sqlData.head
    val deleteFuture : Future[Unit] = db.run(
      Items.items.filter(_.id === item.id).delete).map(_ => ())
    await(deleteFuture)
    println(s"TableResultRunner.deleteRandomItem() deleted item.Id ${item.id}")
  } onFailure {
    case e => {
      println(s"ERROR : $e")
    }
  }
}

 

Calling A Stored Procedure

To call a stored procedure is as simple as using the db connection, and building out the call to the right stored procedure.

Say we have this stored procedure:

USE [SLICKTEST]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[sp_SelectItemsByDescription]
    (
      @description NVARCHAR(MAX)
    )
AS
BEGIN
	SET NOCOUNT ON;

	select * from Items i where i.[Description] LIKE '%' + @description + '%'

END

GO


This is how we would call it using Slick:

def selectItems(db: Database, description: String): Unit = {

  val sqlStatement = db.source.createConnection().prepareCall(
    "{ call [dbo].[sp_SelectItemsByDescription](?) }",
    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

  sqlStatement.setFetchDirection(ResultSet.FETCH_FORWARD);
  sqlStatement.setString("@desc", description)

  val rs = sqlStatement.executeQuery()

  while (rs.next()) {
    val item = new DBItem(
      rs.getInt("Id"),
      rs.getString("Description"),
      rs.getDouble("Cost"),
      rs.getInt("WarehouseLocationId"))

    println(s"StoredProcedureHelper.selectProducts " +
      "using description set to ${desc} got this result : " +
      s"Id: ${item.id}, Description: ${item.description}, " +
      s"Cost: ${item.cost}, WarehouseLocationId: ${item.warehouseLocationId}")
  }

  rs.close()
  sqlStatement.close()
}

 

 

scala : dependency injection / ioc

In software engineering, dependency injection is a software design pattern that implements inversion of control for resolving dependencies. Dependency injection means giving an object its instance variables. Really. That’s it.
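As a tiny illustration of that definition (just a sketch, the EmailProcessor name is hypothetical), the OrderService below is handed its Processor from the outside rather than constructing one itself:

trait Processor {
  def Process() : Unit
}

class EmailProcessor extends Processor {
  override def Process(): Unit = println("processing via email")
}

// the dependency is injected via the constructor
class OrderService(processor : Processor) {
  def processOrder(): Unit = processor.Process()
}

object ManualDIDemo extends App {
  // the caller decides which concrete Processor to hand in
  new OrderService(new EmailProcessor()).processOrder()
}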

However there are several ways of doing this, and as such it is a fairly big topic; I will not be able to go into the very specific details of DI / IoC in one post.

Instead I shall attempt to outline some of the ways you could do DI / IOC in Scala (and like I say there are a few).

I will play nice though, and will try and point out good resources along the way, that you can follow for more information

 

Factories

One way of doing simple poor man's DI is to use factories, which decouple the client from the actual implementation class that it may need to fulfill its role.

Here is an example of a factory and a class that needs a service inside it. We simply use the factory to get the service we need.





trait Processor {
  def Process() : Unit
}

class ActualProcessor() extends Processor {
  override def Process(): Unit = {
      println("ActualProcessor")
  }
}


object ProcessorFactory {

  var _processor: Processor = new ActualProcessor()

  // Getter
  def processor = _processor

  // Setter
  def processor_=(newProcessor: Processor): Unit = _processor = newProcessor
}


class OrderService {

  def processOrder(): Unit = {
    val processor = ProcessorFactory.processor
    processor.Process
  }
  
}



object ClassesDemo {

  def main(args: Array[String]) : Unit =
  {
    new OrderService().processOrder();
    System.in.read()
  }
}


Factories typically use static methods, such that they act as singletons and can be used from anywhere, and have a new instance set from anywhere (which is typically at the start of the app, or a test case)

Here is how we might change the factory to use a mock/test double before the testing starts. I am using ScalaTest in this example

package org.scalatest.examples.flatspec.beforeandafter

import org.scalatest._


class ExampleSpec extends FlatSpec with BeforeAndAfter {

  before {
    //for the tests we could use a Mock, or a Test double
    ProcessorFactory._processor = new MockProcessor()
  }
}

Google Guice

Google Guice is a DI library primarily for Java. However since Scala is a JVM language we may use it from Scala.

You can read more about Google Guice here : https://github.com/google/guice/wiki up on date 17/11/15

You will need the following SBT libraryDependencies:

 "com.google.inject" % "guice" % "3.0"

Typical usage can be thought of as 4 separate things

  • Defining an abstraction, that our client code will depend on
  • Stating that the client code wants a dependency injected. This is done with annotations in Java/Scala using the @Inject annotation
  • Providing the wire up code to wire the abstraction that the client code wanted satisfied with the actual implementation instance that the client code will get at runtime
  • Get the item from the Google Guice DI framework

Let’s see an example of these 4 points

import com.google.inject.{ Inject, Module, Binder, Guice }

//The abstraction
trait Processor {
  def Process() : Unit
}

class ActualProcessor() extends Processor {
  override def Process(): Unit = {
      println("ActualProcessor")
  }
}


// OrderService needs a Processor abstraction
class OrderService @Inject()(processor : Processor) {

  def processOrder(): Unit = {
    processor.Process
  }

}


//Declare a Google guice module that provides the wire up code
class DependencyModule extends Module {
  def configure(binder: Binder) = {
    binder.bind(classOf[Processor]).to(classOf[ActualProcessor])
  }
}


object ClassesDemo {

  def main(args: Array[String]) : Unit =
  {
    //get the item from the DI framework
    val injector = Guice.createInjector(new DependencyModule)
    val orderService = injector.getInstance(classOf[OrderService])
    orderService.processOrder()
    System.in.read()
  }
}


 

This is a very very quick introduction to DI using Google Guice, but as you can see it is quite similar to other DI frameworks such as Spring (or Castle, Autofac, Unity in the .NET world). You should certainly read the wiki a bit more on this one.

 

 

MacWire

We will now spend a bit more time looking at another framework called “macwire” which you can read more about at this GitHub project :

https://github.com/adamw/macwire up on date 17/11/15

So how do we use this MacWire framework? Well, to be honest it is not that different from Google Guice in the code you write, but it uses the idea of Scala macros under the hood, though you don't really need to get involved with that to use it.

We need to include the following SBT libraryDependencies before we start

libraryDependencies ++= Seq(
  "com.softwaremill.macwire" %% "macros" % "2.1.0" % "provided",
  "com.softwaremill.macwire" %% "util" % "2.1.0",
  "com.softwaremill.macwire" %% "proxy" % "2.1.0"
)

So let's see an example usage, shall we:

package com.barbersDemo

import com.softwaremill.macwire._

//The abstraction
trait Processor {
  def Process() : Unit
}

class ActualProcessor() extends Processor {
  override def Process(): Unit = {
      println("ActualProcessor")
  }
}


class MyApp {
  val processor = new ActualProcessor()
}


// OrderService needs a Processor abstraction
class OrderService(processor : Processor) {

  def processOrder(): Unit = {
    processor.Process
  }

}

object ClassesDemo {

  def main(args: Array[String]) : Unit =
  {

    // we would substitute this line for a line that loads a Test
    // module with a set of test services instead, if we
    // were interested in testing/mocking
    val wired = wiredInModule(new MyApp)

    val orderService = wired.wireClassInstance[OrderService](classOf[OrderService])
    orderService.processOrder()
    System.in.read()
  }
}


 

As you can see, from a usability point of view it is not that different from using Google Guice. What is different is that we DO NOT have to use the @Inject annotation.
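For completeness, the headline feature of MacWire is actually the wire[] macro, which generates the constructor call for you from the vals that are in scope. A minimal sketch (reusing the Processor / ActualProcessor / OrderService classes from the example above; the module and object names are my own) looks something like this:

import com.softwaremill.macwire._

trait AppModule {
  // wire[OrderService] expands at compile time to: new OrderService(processor)
  lazy val processor : Processor = new ActualProcessor()
  lazy val orderService : OrderService = wire[OrderService]
}

object WiredApp extends AppModule with App {
  orderService.processOrder()
}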

 

Cake Pattern

The cake pattern is for me the hardest one of the lot to get, but it seems to be the de facto way of doing DI in Scala.

You do get used to it. I managed to do this without the internet to refer to with a colleague today, so it is something that comes with time.

So here is the example:

package com.barbersDemo


// This trait is how you would express a dependency
// Any class that needs a Processor would mix in this trait
// along with using a self type to allow us to mixin either
// a mock / test double
trait ProcessorComponent {

  //abstract implementation, inheritors provide implementation
  val processor : Processor

  trait Processor {
    def Process() : Unit
  }
}


// An actual Processor
trait ActualProcessorComponent extends ProcessorComponent {

  val processor = new ActualProcessor()

  class ActualProcessor() extends Processor {
    def Process(): Unit = {
      println("ActualProcessor")
    }
  }
}


// A test double Processor
trait TestProcessorComponent extends ProcessorComponent {

  val processor = new TestProcessor()

  class TestProcessor() extends Processor {
    def Process(): Unit = {
      println("TestProcessor")
    }
  }
}



// The service that needs the Processor dependency
// satisfied. This happens via the use of mixins
// and the use of a self type
class OrderService {

  // NOTE : The self type that allows us to
  // mix in and use a ProcessorComponent
  this: ProcessorComponent =>

  def ProcessOrder() {
    processor.Process()
  }

}


object ClassesDemo {

  def main(args: Array[String]) : Unit =
  {
    //val defaultOrderServiceComponent = new OrderService with ActualProcessorComponent

    // To use the test double or mock we would use a line similar to this
    val defaultOrderServiceComponent = new OrderService with TestProcessorComponent

    defaultOrderServiceComponent.ProcessOrder()
    System.in.read()
  }
}


 

There are a couple of things to note there:

  • We want to make use of a trait (think of it like an abstract interface) called “Processor”, which others may extend to do something, or provide a mock/test implementation for
  • We wrap the trait we want to inject in a xxxComponent (this appears to be some sort of convention), and we also have an abstract val that the inheritor of the trait will provide an implementation for. You can see this in the ProcessorComponent trait (which is abstract)
  • We then have an ActualProcessorComponent / TestProcessorComponent which implement the trait ProcessorComponent
  • The place where we want to make use of the service, we make use of the self type within the OrderService, which is this part: “this: ProcessorComponent =>”. What this really means is that the OrderService NEEDS a ProcessorComponent mixed in to work correctly. But since we know we will have a ProcessorComponent mixed in (either a real implementation or a mock / test double), we can make use of it in the OrderService class.
  • All that is left is to wire up the OrderService with either a real implementation or mock / test double. This is done in the ClassesDemo.main(..) method shown above

 

Some further “Cake Pattern” blogs

 

 

Structural Typing

The last example I wanted to look at was using structural typing. To my mind this is kind of like duck typing: if you are expecting something that has a Print method, and you get something that has a Print method, you should be able to use it.

NOTE : this approach USES reflection so will have a performance impact if used a lot

Here is an example of using structural typing

package com.barbersDemo


//The abstraction
trait Processor {
  def Process() : Unit
}

class ActualProcessor() extends Processor {
  override def Process(): Unit = {
      println("ActualProcessor")
  }
}


class TestProcessor() extends Processor {
  override def Process(): Unit = {
    println("TestProcessor")
  }
}



// OrderService needs a Processor abstraction
// but this time we use structural typing: if it looks like
// a duck and quacks like a duck, it's a duck, kind of thing
class OrderService(env: { val processor: Processor }) {

  def processOrder(): Unit = {
    //this time we use the env parameter to obtain the dependency
    env.processor.Process
  }

}



object Config {
  lazy val processor = new ActualProcessor() // this is where injection happens
}

object TestConfig {
  lazy val processor = new TestProcessor() // this is where injection happens
}

object ClassesDemo {

  def main(args: Array[String]) : Unit =
  {
    new OrderService(Config).processOrder()
    new OrderService(TestConfig).processOrder()
    System.in.read()
  }
}


As this one is a bit stranger, I have included a call which uses the actual implementation and also a call that uses a test implementation.

The good thing about this is that there are no extra libraries, it is all standard Scala, and it is immutable and type safe.

A nice way to go about things if you ask me

 

 

SCALA : CONFIG

So we progress with the series of posts for .NET devs that may want to try their luck with Scala. This time we will be talking about configuration.

As I have stated quite a lot already I am a seasoned .NET developer that is getting into Scala.

Those of you that have worked in the .NET space will know that you can use the ConfigurationManager class to help you read an App.Config or Web.Config file, where the XXX.Config files are stored as XML; a typical example being something like the one shown below.

 

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  <log4net>
    <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%4.4thread] %-5level %20.20logger{1} - %message%newline" />
      </layout>
    </appender>
    <appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
      <file value="Client.log" />
      <appendToFile value="true" />
      <rollingStyle value="Composite" />
      <datePattern value="yyyyMMdd" />
      <maxSizeRollBackups value="10" />
      <maximumFileSize value="1MB" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%4.4thread] %-5level %20.20logger{1} - %message%newline" />
      </layout>
    </appender>
    <root>
      <level value="INFO" />
      <appender-ref ref="ConsoleAppender" />
      <appender-ref ref="RollingLogFileAppender" />
    </root>
  </log4net>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
    </startup>
  <runtime>
    <loadFromRemoteSources enabled="true" />
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.Owin" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.Owin.Security" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-7.0.0.0" newVersion="7.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Cors" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-5.2.2.0" newVersion="5.2.2.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

The .NET configuration system is also quite flexible in that it allows you to create custom sections, but this requires a lot of code.

Here is an example : https://msdn.microsoft.com/en-us/library/2tw134k3.aspx up on date 12/11/15

We want to add a custom section which contains 2 properties

  • font
  • color

So we would have to add code similar to this to the actual XXXX.config file

 

<configuration>
<!-- Configuration section-handler declaration area. -->
  <configSections>
    <sectionGroup name="pageAppearanceGroup">
      <section 
        name="pageAppearance" 
        type="Samples.AspNet.PageAppearanceSection" 
        allowLocation="true" 
        allowDefinition="Everywhere"
      />
    </sectionGroup>
      <!-- Other <section> and <sectionGroup> elements. -->
  </configSections>

 <!-- Configuration section settings area. -->
  <pageAppearanceGroup>
    <pageAppearance remoteOnly="true">
      <font name="TimesNewRoman" size="18"/>
      <color background="000000" foreground="FFFFFF"/>
    </pageAppearance>
  </pageAppearanceGroup>


</configuration>

 

And we also have to create the following C# code to handle this custom set of XML tags:

using System;
using System.Collections;
using System.Text;
using System.Configuration;
using System.Xml;

namespace Samples.AspNet
{
    public class PageAppearanceSection : ConfigurationSection
    {
        // Create a "remoteOnly" attribute.
        [ConfigurationProperty("remoteOnly", DefaultValue = "false", IsRequired = false)]
        public Boolean RemoteOnly
        {
            get
            { 
                return (Boolean)this["remoteOnly"]; 
            }
            set
            { 
                this["remoteOnly"] = value; 
            }
        }

        // Create a "font" element.
        [ConfigurationProperty("font")]
        public FontElement Font
        {
            get
            { 
                return (FontElement)this["font"]; }
            set
            { this["font"] = value; }
        }

        // Create a "color element."
        [ConfigurationProperty("color")]
        public ColorElement Color
        {
            get
            {
                return (ColorElement)this["color"];
            }
            set
            { this["color"] = value; }
        }
    }

    // Define the "font" element
    // with "name" and "size" attributes.
    public class FontElement : ConfigurationElement
    {
        [ConfigurationProperty("name", DefaultValue="Arial", IsRequired = true)]
        [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;'\"|\\", MinLength = 1, MaxLength = 60)]
        public String Name
        {
            get
            {
                return (String)this["name"];
            }
            set
            {
                this["name"] = value;
            }
        }

        [ConfigurationProperty("size", DefaultValue = "12", IsRequired = false)]
        [IntegerValidator(ExcludeRange = false, MaxValue = 24, MinValue = 6)]
        public int Size
        {
            get
            { return (int)this["size"]; }
            set
            { this["size"] = value; }
        }
    }

    // Define the "color" element 
    // with "background" and "foreground" attributes.
    public class ColorElement : ConfigurationElement
    {
        [ConfigurationProperty("background", DefaultValue = "FFFFFF", IsRequired = true)]
        [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;'\"|\\GHIJKLMNOPQRSTUVWXYZ", MinLength = 6, MaxLength = 6)]
        public String Background
        {
            get
            {
                return (String)this["background"];
            }
            set
            {
                this["background"] = value;
            }
        }

        [ConfigurationProperty("foreground", DefaultValue = "000000", IsRequired = true)]
        [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;'\"|\\GHIJKLMNOPQRSTUVWXYZ", MinLength = 6, MaxLength = 6)]
        public String Foreground
        {
            get
            {
                return (String)this["foreground"];
            }
            set
            {
                this["foreground"] = value;
            }
        }

    }

}

Which we can then access from code like this

Samples.AspNet.PageAppearanceSection config =
        (Samples.AspNet.PageAppearanceSection)System.Configuration.ConfigurationManager.GetSection(
        "pageAppearanceGroup/pageAppearance");
var cfgFont = config.Font.Name

Phew, that’s a lot of work.

There are other ways to do this in C#. I am thinking of the awesome SimpleConfig GitHub repo, which in my opinion is well underrated and something that we should all use in our .NET projects.

https://github.com/mikeobrien/SimpleConfig up on date 12/11/15

Using this we can now write code like this (instead of the above, which is a BIG improvement if you ask me)

First create your configuration types:

public class MyApplication
{
    public Build Build { get; set; }
}

public enum Target { Dev, CI }

public class Build
{
    public string Version { get; set; }
    public DateTime Date { get; set; }
    public Target DeployTarget { get; set; }
}

 

Next you need to register the SimpleConfig section handler in your web/app.config and create your configuration section as shown below. The default convention for the section name is the camel cased name of the root configuration type (Although you can override this as we’ll see later). The section name under configSections must match the section element name. All other element and attribute names in the configuration section are case insensitive but must otherwise match the property names of your configuration types (You can override this as well).

<configuration>
  <configSections>
    <section name="myApplication" type="SimpleConfig.Section, SimpleConfig"/>
  </configSections>
  <myApplication>
    <build version="0.0.0.0" date="10/25/1985" deployTarget="Dev"/>
  </myApplication>
</configuration>

Now you can load the section either by calling the convenience static method or newing up a new instance:

var config = Configuration.Load<MyApplication>();
// or
var config = new Configuration().LoadSection<MyApplication>();

config.Build.Date.ShouldEqual(DateTime.Parse("10/25/1985"));
config.Build.DeployTarget.ShouldEqual(Target.Dev);
config.Build.Version.ShouldEqual("0.0.0.0");

 

This is cool, however Scala does it even better. The rest of this post will be about the awesome Typesafe Config library (Typesafe are the people behind Akka, and I like Akka).

 

Typesafe Config Library

The guys from Typesafe have an awesome config library (https://github.com/typesafehub/config) that you may use with either Java or Scala, and it supports the following 3 formats

  • JSON
  • Java properties
  • HOCON (Human-Optimized Config Object Notation)

For everything I demonstrate here I will be using the following HOCON file

sachas.business {
  owner {
    name = "sacha barber"
    description = ${sachas.business.owner.name} "is the owner"
  }
  team {
    members = [
      "sacha barber"
      "chris baker"
      "paul freeman"
      "ryan the mad one"
    ]
  }
}
sachas.business.team.avgAge = 35

If you want to try this out in your own Scala project you will need to add it as an SBT library dependency, using this (the version shown here was right at the time of this post being published)

libraryDependencies ++= Seq(
  "com.typesafe" % "config" % "0.4.0"
)

So what can this Typesafe library do?

Well it essentially reads configuration information from file(s). This would typically be done using an application.conf file, which would be placed in your resources folder.

image

Once you have a file in place, you can load it using ConfigFactory, like this:

import com.typesafe.config.ConfigFactory


object ClassesDemo {

  def main(args: Array[String]) : Unit =
  {

    val config = ConfigFactory.load("application.conf")
    .....
    .....
    .....
    System.in.read()
  }
}


Well let’s start simple by using the HOCON file we outlined above:



import com.typesafe.config.ConfigFactory


object ClassesDemo {


  def main(args: Array[String]) : Unit =
  {

    val config = ConfigFactory.load("application.conf")
    val ownerName = config.getString("sachas.business.owner.name")  // => sacha barber
    val desc = config.getString("sachas.business.owner.description") // => sacha barber is the owner
    val age = config.getInt("sachas.business.team.avgAge") // => 35
    val members = config.getStringList("sachas.business.team.members") // => [sacha barber, chris baker, paul freeman, ryan the mad one]


    System.in.read()
  }
}


It can be seen that we can easily drill into the tree of properties, and use the getXXX methods to grab strings, lists and all sorts of other goodness.

The above code gives this result

image

Pretty simple huh

The Config object has these helper methods to enable you to read configuration values:

  • getAnyRef
  • getAnyRefList
  • getBoolean
  • getBooleanList
  • getByte
  • getByteList
  • getConfig
  • getConfigList
  • getDouble
  • getDoubleList
  • getInt
  • getIntList
  • getList
  • getLong
  • getLongList
  • getMilliSeconds
  • getMilliSecondsList
  • getNanoSeconds
  • getNanoSecondsList
  • getNumber
  • getNumberList
  • getObject
  • getObjectList
  • getString
  • getStringList
  • getValue
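
As a quick sketch of my own (reusing the application.conf shown above), here is getConfig in action; it is worth calling out because it returns a Config scoped to a sub-tree, so the remaining paths become relative:

import com.typesafe.config.ConfigFactory
import scala.collection.JavaConverters._

object GettersDemo {

  def main(args: Array[String]) : Unit =
  {
    val config = ConfigFactory.load("application.conf")

    // getConfig returns a Config scoped to the "sachas.business.owner" sub-tree
    val owner = config.getConfig("sachas.business.owner")
    println(owner.getString("name"))        // => sacha barber
    println(owner.getString("description")) // => sacha barber is the owner

    // getStringList returns a java.util.List[String]; asScala converts it
    val members = config.getStringList("sachas.business.team.members").asScala
    members.foreach(println)
  }
}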

I think most of these are quite obvious; perhaps the only ones that I personally feel need a bit more explanation are the getObject/getObjectList methods. So let’s have a look at a specific example for this.

Say we have this HOCON file

decoders = [ { a : "socket://1.2.3.4:9000" },
  { b : "socket://1.2.3.4:8080" },
  { c : "socket://9.9.9.9:9001" },
  { d : "socket://9.9.8.8:9000" },
  { e : "socket://4.3.2.1:8081" } ]

Which we then read in like this



import com.typesafe.config.{ConfigObject, ConfigValue, ConfigFactory, Config}
import scala.collection.JavaConverters._
import java.net.URI
import java.util.Map.Entry



case class Settings(config: Config) {
  lazy val decoders : Map[String, URI] = {
    val list : Iterable[ConfigObject] = config.getObjectList("decoders").asScala
    (for {
      item : ConfigObject <- list
      entry : Entry[String, ConfigValue] <- item.entrySet().asScala
      key = entry.getKey
      uri = new URI(entry.getValue.unwrapped().toString)
    } yield (key, uri)).toMap
  }
}


object ClassesDemo {


  def main(args: Array[String]) : Unit =
  {

    val config = ConfigFactory.load("smallerList.conf")
    val decoders = new Settings(config).decoders

    System.in.read()
  }
}


Which gives us the following results:

 image

 

 

I have shamelessly stolen this example from this blog, which is a very nice example in my opinion

http://deploymentzone.com/2013/07/25/typesafe-config-and-maps-in-scala/ up on date 16/11/15

 

 

 

Further Readings

I came across a couple of good blogs on this whilst writing my own post. These are outlined here:

 

 

 

 

Scala : Futures / Promises and more

 

I have been a .NET developer for a long time now, and am very used to dealing with the .NET framework Task library. Obviously here I mean the TPL and now async/await.

So now that I am doing more and more Scala I wanted to see what the equivalent code would be in Scala, as I do like my Task(s) in .NET.

Let’s say I had this .NET code, which is non-blocking thanks to the use of callbacks:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            var task = Task.Run(() =>
            {
                return 40;
            });


            task.ContinueWith(ant =>
            {
                Console.WriteLine(ant.Result);
            }, TaskContinuationOptions.OnlyOnRanToCompletion);

            task.ContinueWith(ant =>
            {
                Console.WriteLine("BAD NEWS");
            }, TaskContinuationOptions.OnlyOnFaulted);


            Console.ReadLine();
        }
    }
}

Roughly speaking we could break this down into the following equivalents in Scala:

  • A Task in .NET is roughly equivalent to a Scala Future
  • task.ContinueWith callbacks in .NET are Future callbacks in Scala
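
To make that mapping concrete, here is a minimal Scala sketch of my own that mirrors the shape of the .NET example above (the callback mechanics are covered properly in the Callbacks section below):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

object TaskLikeDemo {

  def main(args: Array[String]) : Unit =
  {
    // roughly Task.Run(() => 40)
    val intFuture: Future[Int] = Future { 40 }

    // roughly task.ContinueWith(...), split into the success / failure cases
    intFuture onComplete {
      case Success(result) => println(result)      // OnlyOnRanToCompletion
      case Failure(_)      => println("BAD NEWS")  // OnlyOnFaulted
    }

    System.in.read() // keep the JVM alive so the callback can run
  }
}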

We could take this comparison a bit further. So let’s change the .NET code to this, which is now blocking since we no longer use any callbacks, and instead use the Task.Result property, which causes the Task to be “observed”.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            var task = Task.Run(() =>
            {
                return 40;
            });


            var x = task.Result;


            Console.ReadLine();
        }
    }
}

In Scala this would be done by using scala.concurrent.Await.ready / scala.concurrent.Await.result, which we will see more of later. First we will just spend a bit of time looking at some of the plumbing of how to create and work with Futures (Scala’s Task equivalent).
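
As a quick taster (just a sketch of mine; Await is covered in more detail in the Awaiting Futures section below), the blocking equivalent of reading Task.Result might look something like this:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val task: Future[Int] = Future { 40 }
val x: Int = Await.result(task, 5.seconds) // blocks the calling thread, like Task.Result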

 

Futures

A Future is an object holding a value which may become available at some point. This value is usually the result of some other computation:

  • If the computation has not yet completed, we say that the Future is not completed.
  • If the computation has completed with a value or with an exception, we say that the Future is completed.

Completion can take one of two forms:

  • When a Future is completed with a value, we say that the future was successfully completed with that value.
  • When a Future is completed with an exception thrown by the computation, we say that the Future was failed with that exception.

A Future has an important property that it may only be assigned once. Once a Future object is given a value or an exception, it becomes in effect immutable – it can never be overwritten.

The simplest way to create a future object is to invoke the future method which starts an asynchronous computation and returns a future holding the result of that computation. The result becomes available once the future completes.

Note that Future[T] is a type which denotes future objects, whereas future is a method which creates and schedules an asynchronous computation, and then returns a future object which will be completed with the result of that computation.

http://docs.scala-lang.org/overviews/core/futures.html up on date 10/11/15

Let’s see an example. This trivial example creates a Future[Int].

import scala.concurrent._
import ExecutionContext.Implicits.global

object ClassesDemo {

  def main(args: Array[String]) =
  {
    //Creating a Future
    val intFuture: Future[Int] = Future { 23 }
  }
}


You may be wondering how the Future.apply() method is able to come up with a computation that may be completed at some point in the future.

Well the answer to that lies in the use of Promises, which we will look at later.

 

Callbacks

So carrying on from the .NET example that I showed in the introduction paragraph, where I showed how to use Task.ContinueWith(..), which runs a continuation.

Well in Scala we can do the same thing, but it is simply called a “callback”. Like .NET continuations, Scala callbacks are NON blocking.

Callbacks are easy to use; here is an example:


import scala.concurrent.{ExecutionContext, duration, Future, Await}
import scala.reflect.runtime.universe._
import scala.reflect._
import scala.reflect.runtime._
import scala.util
import scala.util.{Failure, Success, Try}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

object ClassesDemo {

  def main(args: Array[String]) =
  {

    val intFuture: Future[Int] = Future { 23 }

    //use a "callback" which is non blocking
    intFuture onComplete {
      case Success(t) =>
      {
        println(t)
      }
      case Failure(e) =>
      {
        println(s"An error has occured: $e.getMessage")
      }
    }
  }
}


 

Awaiting Futures

We are also able to await futures. We can do this using 2 methods of the scala.concurrent.Await object, which are discussed below. One important note is that both methods shown below ARE blocking, so they should be used with caution.

Await.ready 

//Await the "completed" state of an Awaitable.
def ready[T](awaitable: Awaitable[T], atMost: Duration): awaitable.type

Await.result

//Await and return the result (of type T) of an Awaitable.
def result[T](awaitable: Awaitable[T], atMost: Duration): T

Let’s see an example of both of these:

import scala.concurrent.{ExecutionContext, duration, Future, Await}
import scala.reflect.runtime.universe._
import scala.reflect._
import scala.reflect.runtime._
import scala.util
import scala.util.{Failure, Success, Try}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

object ClassesDemo {

  def main(args: Array[String]) =
  {
    //Await.ready
    lazy val intFuture: Future[Int] = Future { 23 }
    val result: Try[Int] = Await.ready(intFuture, 10 seconds).value.get
    val resultEither = result match {
      case Success(t) => Right(t)
      case Failure(e) => Left(e)
    }
    resultEither match {
      case Right(t) => println(t)
      case Left(e) => println(e)
    }

    //Await.result
    lazy val stringFuture = Future { "hello" }
    val theString :String = Await.result(stringFuture, 1 second)
    println(theString)
  }
}

Which when run will give the following output

image

Here are some other links that are good for some background reading on this 

 

Functional Composition

The callback mechanism we have shown is sufficient to chain future results with subsequent computations. However, it is sometimes inconvenient and results in bulky code. Luckily the Scala Future[T] class is quite powerful, and comes with a number of combinators to help you write cleaner, more succinct code.

If only the .NET Task had some of these methods (oh hang on, Rx (Reactive Extensions) does) we would be laughing.

Anyway, for now just be aware that Future[T] does come equipped with some nice combinators that you may use. I will go through a few of them here, but you should do some more research yourself.

Map Example

In this example we use Future[T].map to transform the result of a Future[T] into a Future of some new type, say R:



import scala.concurrent.{ExecutionContext, duration, Future, Await}
import scala.reflect.runtime.universe._
import scala.reflect._
import scala.reflect.runtime._
import scala.util
import scala.util.{Failure, Success, Try}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

object ClassesDemo {


  def main(args: Array[String]) =
  {

    val rateQuoteFuture : Future[Double] = Future {
      1.5
    }

    val formattedRateFuture = rateQuoteFuture map { quote =>
      println(quote)
      s"Rate was : $quote"
    }
    formattedRateFuture onComplete  {
      case Success(formatted) => println(formatted)
      case Failure(x) => {
        println(x)
      }
    }


    System.in.read()
  }
}


For

We can also use a for comprehension with Future[T] (here is one that I shamelessly stole from the Scala docs):

val usdQuote = Future { connection.getCurrentValue(USD) }
val chfQuote = Future { connection.getCurrentValue(CHF) }
val purchase = for {
  usd <- usdQuote
  chf <- chfQuote
  if isProfitable(usd, chf)
} yield connection.buy(amount, chf)
purchase onSuccess {
  case _ => println("Purchased " + amount + " CHF")
}

WithFilter

Or how about providing a filter? This can be done using the withFilter method:

val purchase = usdQuote flatMap {
  usd =>
  chfQuote
    .withFilter(chf => isProfitable(usd, chf))
    .map(chf => connection.buy(amount, chf))
}

 

Promises

So far we have only considered Future objects created by asynchronous computations started using the future method. However, futures can also be created using promises.

While futures are defined as a type of read-only placeholder object created for a result which doesn’t yet exist, a promise can be thought of as a writable, single-assignment container, which completes a future. That is, a promise can be used to successfully complete a future with a value (by “completing” the promise) using the success method. Conversely, a promise can also be used to complete a future with an exception, by failing the promise, using the failure method.

http://docs.scala-lang.org/overviews/core/futures.html up on date 10/11/15

The way I like to think about Promises (coming from .NET as I have) is that they are pretty much the same as a TaskCompletionSource.
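
Before digging into how Future.apply itself uses a Promise internally, here is a minimal sketch of my own showing a Promise being used directly:

import scala.concurrent.{Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

object PromiseDemo {

  def main(args: Array[String]) : Unit =
  {
    // the Promise is the writable side, its Future is the read-only side
    val promise: Promise[Int] = Promise[Int]()
    val future: Future[Int] = promise.future

    future onComplete {
      case Success(value) => println(s"Got $value")
      case Failure(e) => println(s"Failed: ${e.getMessage}")
    }

    // complete the promise (this may only ever be done once)
    promise.success(42)
    // promise.failure(new Exception("boom")) would fail the future instead

    System.in.read() // keep the JVM alive so the callback can run
  }
}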

To understand the association between a Promise and a Future let’s look at the signature of the Future.apply() method, which looks like this:

 def apply[T](body: =>T)(implicit @deprecatedName('execctx) executor: ExecutionContext): Future[T] = impl.Future(body)

Which, if we examine a bit further, we can see has this implementation code, where a Promise is actually used to complete or fail the Future’s computation:

private[concurrent] object Future {
  class PromiseCompletingRunnable[T](body: => T) extends Runnable {
    val promise = new Promise.DefaultPromise[T]()

    override def run() = {
      promise complete {
        try Success(body) catch { case NonFatal(e) => Failure(e) }
      }
    }
  }

  def apply[T](body: =>T)(implicit executor: ExecutionContext): scala.concurrent.Future[T] = {
    val runnable = new PromiseCompletingRunnable(body)
    executor.prepare.execute(runnable)
    runnable.promise.future
  }
}

 

Scala Async Library

Much of the stuff I talk about in this section is covered in a great post:

http://engineering.roundupapp.co/the-future-is-not-good-enough-coding-with-async-await/

Here is a small example of using several Future(s) together 




import scala.concurrent.{ExecutionContext, duration, Future, Await}
import scala.reflect.runtime.universe._
import scala.reflect._
import scala.reflect.runtime._
import scala.util
import scala.util.{Failure, Success, Try}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

object ClassesDemo {


  def main(args: Array[String]) =
  {

    val future1 : Future[Double] = Future { 1 }
    val future2 : Future[Double] = Future { 2 }
    val future3 : Future[Double] = Future { 3 }


    import scala.concurrent.ExecutionContext.Implicits.global

    val (f1,f2,f3) = (future1, future2, future3)
    f1 onSuccess { case r1 =>
      f2 onSuccess { case r2 =>
        f3 onSuccess { case r3 =>
          println(s"Sum:  ${r1 + r2 + r3}")
        }
      }
    }


    System.in.read()
  }
}


This has a few issues, namely:

  • There is a new nesting for each new Future to use
  • It doesn’t handle the unhappy path (failures)
  • It’s pretty sequential

We can fix some of this by using a for comprehension:



import scala.concurrent.{ExecutionContext, duration, Future, Await}
import scala.reflect.runtime.universe._
import scala.reflect._
import scala.reflect.runtime._
import scala.util
import scala.util.{Failure, Success, Try}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

object ClassesDemo {


  def main(args: Array[String]) =
  {

    val future1 : Future[Double] = Future { 1 }
    val future2 : Future[Double] = Future { 2 }
    val future3 : Future[Double] = Future { 3 }


    import scala.concurrent.ExecutionContext.Implicits.global

    val (f1,f2,f3) = (future1, future2, future3)
    val f = for {
      r1 <- f1
      r2 <- f2
      r3 <- f3
    } yield r1 + r2 + r3
    f onComplete {
      case Success(s) => println(s"Sum: $s")
      case Failure(e) => // Handle failure
    }


    System.in.read()
  }
}


This fixes points 1 and 2, but it still executes sequentially. We could take this further and do this:



import scala.concurrent.{ExecutionContext, duration, Future, Await}
import scala.reflect.runtime.universe._
import scala.reflect._
import scala.reflect.runtime._
import scala.util
import scala.util.{Failure, Success, Try}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

object ClassesDemo {


  def main(args: Array[String]) =
  {

    val future1 : Future[Double] = Future { 1 }
    val future2 : Future[Double] = Future { 2 }
    val future3 : Future[Double] = Future { 3 }


    import scala.concurrent.ExecutionContext.Implicits.global

    val f = Future.sequence(Seq(future1,future2,future3))
    f onComplete {
      case Success(r) => println(s"Sum: ${r.sum}")
      case Failure(e) => // Handle failure
    }


    System.in.read()
  }
}


 

But there is a better way, which I am happy to say borrows from .NET async/await (which in turn borrowed from F#, but hey ho). We can rewrite the above code using the Scala Async library like this.

The Scala async library can be found here :

https://github.com/scala/async up on date 10/11/15



import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

import scala.async.Async._ // the 'async' / 'await' macros

object ClassesDemo {


  def main(args: Array[String]) =
  {
    val future1 : Future[Double] = Future { 1 }
    val future2 : Future[Double] = Future { 2 }
    val future3 : Future[Double] = Future { 3 }

    //use Scala Async Library here, note the Async-Await
    async {
      val s = await {future1} + await {future2} + await {future3}
      println(s"Sum:  $s")
    } onFailure { case e => /* Handle failure */ }


    System.in.read()
  }
}


async marks a block of asynchronous code. Such a block usually contains one or more await calls, which marks a point at which the computation will be suspended until the awaited Future is complete.

By default, async blocks operate on scala.concurrent.{Future, Promise}. The system can be adapted to alternative implementations of the Future pattern.

https://github.com/scala/async up on date 10/11/15

This, for me as a .NET guy making his way into the Scala world, makes a lot of sense.

 

 

Further Reading

The Scala docs are actually very good for Futures/Promises. You can read more about this here :

http://docs.scala-lang.org/overviews/core/futures.html

SCALA GENERICS

 

Continuing on from the Scala for .NET series of posts, this time we will look at using generics in Scala.

The basic usage of generics is not that far removed from their usage in .NET: in Scala you may have generic methods and generic classes.

Generic Methods

Here is a simple example of a generic method

object ClassesDemo {

  def genericPrinter[A](theStuff : A) : Unit = {
    System.out.println(s"theStuff =$theStuff")
  }


  def main(args: Array[String]) =
  {
    genericPrinter("A String")
    genericPrinter(12)
    genericPrinter(Some(12L))
    System.in.read()
    ()
  }

}

Which when run will give the following results:

image

 

Generic Classes

It is also possible to create generic classes. Here is an example of creating a generic class, and its usage

class printer[A](theStuff : A) {
  def printIt() : Unit = {
    System.out.println(s"theStuff =$theStuff")
  }
}


object ClassesDemo {

  def main(args: Array[String]) =
  {
    new printer[String]("A String").printIt()
    new printer[Int](12).printIt()
    new printer[(String, Int)](("A String", 12)).printIt()
    System.in.read()
    ()
  }
}


Which when run will give the following results:

image

 

View Bounds

In .NET we have generic constraints that we can apply such as this

public class MyGenericClass<T> where T : IComparable
{

}

In Scala this is accomplished by using “View Bounds”.

A view bound was a mechanism introduced in Scala to enable the use of some type A as if it were some type B. The typical syntax is this:

def f[A <% B](a: A) = a.bMethod

In other words, A should have an implicit conversion to B available, so that one can call B methods on an object of type A. The most common usage of view bounds in the standard library (before Scala 2.8.0, anyway), is with Ordered, like this:

def f[A <% Ordered[A]](a: A, b: A) = if (a < b) a else b

Because one can convert A into an Ordered[A], and because Ordered[A] defines the method <(other: A): Boolean, I can use the expression a < b.

Taken from http://docs.scala-lang.org/tutorials/FAQ/context-and-view-bounds.html up on date 05/11/15
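
To see a view bound in a complete, runnable form, here is a small sketch of my own built on the Ordered example above:

object ViewBoundDemo {

  // A must be viewable as Ordered[A] (Int is, via the implicit conversion to RichInt)
  def smaller[A <% Ordered[A]](a: A, b: A): A = if (a < b) a else b

  def main(args: Array[String]) : Unit =
  {
    println(smaller(3, 7))            // => 3
    println(smaller("beta", "alpha")) // => alpha (String is viewable as Ordered[String])
  }
}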

 

Context Bounds

Context bounds were introduced in Scala 2.8.0, and are typically used with the so-called type class pattern, a pattern of code that emulates the functionality provided by Haskell type classes, though in a more verbose manner.

While a view bound can be used with simple types (for example, A <% String), a context bound requires a parameterized type, such as Ordered[A] above, but unlike String.

A context bound describes an implicit value, instead of view bound’s implicit conversion. It is used to declare that for some type A, there is an implicit value of type B[A] available. The syntax goes like this:

def f[A : B](a: A) = g(a) // where g requires an implicit value of type B[A]

This is more confusing than the view bound because it is not immediately clear how to use it. The common example of usage in Scala is this:

def f[A : ClassManifest](n: Int) = new Array[A](n)

An Array initialization on a parameterized type requires a ClassManifest to be available, for arcane reasons related to type erasure and the non-erasure nature of arrays.

Another very common example in the library is a bit more complex:

def f[A : Ordering](a: A, b: A) = implicitly[Ordering[A]].compare(a, b)

Here, implicitly is used to retrieve the implicit value we want, one of type Ordering[A], which class defines the method compare(a: A, b: A): Int.

Taken from http://docs.scala-lang.org/tutorials/FAQ/context-and-view-bounds.html up on date 05/11/15
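
Again, here is a small runnable sketch of my own, this time using a context bound with Ordering:

object ContextBoundDemo {

  // A : Ordering means "an implicit Ordering[A] must be available"
  def larger[A : Ordering](a: A, b: A): A =
    if (implicitly[Ordering[A]].compare(a, b) < 0) b else a

  def main(args: Array[String]) : Unit =
  {
    println(larger(3, 7))            // => 7
    println(larger("alpha", "beta")) // => beta
  }
}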

 

 

 

Type Erasure

Unlike in .NET, generics are not baked into the JVM; in .NET they are actually part of the CLR.

Scala’s types are erased at compile time. This means that if you were to inspect the runtime type of some instance, you might not have access to all type information that the Scala compiler has available at compile time.

Like scala.reflect.Manifest, TypeTags can be thought of as objects which carry along all type information available at compile time, to runtime. For example, TypeTag[T] encapsulates the runtime type representation of some compile-time type T. Note however, that TypeTags should be considered to be a richer replacement of the pre-2.10 notion of a Manifest, that are additionally fully integrated with Scala reflection.
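
As a quick illustration of that idea (again a sketch of my own), a TypeTag lets you recover the full compile-time type at runtime, where plain getClass only sees the erased class:

import scala.reflect.runtime.universe._

object TypeTagDemo {

  def describe[A : TypeTag](value: A): String = typeTag[A].tpe.toString

  def main(args: Array[String]) : Unit =
  {
    println(describe(List(1, 2, 3))) // => List[Int] (the full type survives)
    println(List(1, 2, 3).getClass)  // => something like class scala.collection.immutable.$colon$colon (erased, no element type)
  }
}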

ClassTag / TypeTag / Manifest

These 3 classes are the most useful ones to use to maintain type information.

Let’s consider this bit of code:

import MyExtensions._
import scala.reflect.runtime.universe._
import scala.reflect._

object ClassesDemo {


  def genericMeth[A](xs: List[A]) = xs match {
    case _: List[String] => "list of strings"
    case _: List[Foo] => "list of foos"
  }

  def main(args: Array[String]) =
  {
    System.out.print(genericMeth(List("string")))


    System.in.read()
    ()
  }
}


 

Which when compiled will give the following errors:

image

To solve this problem Manifests were introduced to Scala. But they have the problem of not being able to represent a lot of useful types.

TypeTag(s)/ClassTag(s) are the preferred mechanism to use. Here is the above code written again using a TypeTag; this time the code compiles fine:

import MyExtensions._
import scala.reflect.runtime.universe._
import scala.reflect._

object ClassesDemo {


  def genericMeth[A : TypeTag](xs: List[A]) = typeOf[A] match {
    case t if t =:= typeOf[String] => "list of strings"
    case t if t <:< typeOf[Foo] => "list of foos"
  }

  def main(args: Array[String]) =
  {
    System.out.print(genericMeth(List("string")))


    System.in.read()
    ()
  }
}


Another thing I have personally found useful is using a TypeTag/ClassTag to help me create an instance of the correct type.

For example:

 

import scala.reflect.runtime.universe._
import scala.reflect._

trait Logger {
  def log() : Unit
}

class LoggerA() extends Logger {
  override def log(): Unit = {
    println("LoggerA.log() called")
  }
}



object ClassesDemo {

  def doTheLoggingClassTag[T <: Logger]()(implicit tag: scala.reflect.ClassTag[T]) = {

      val theObject = tag.runtimeClass.newInstance.asInstanceOf[T]
      theObject.log()
      println(s"theObject $theObject")
      ()
    }

  def main(args: Array[String]) =
  {
    doTheLoggingClassTag[LoggerA]()

    System.in.read()
    ()
  }
}


Which will give this output:

image

There are some great sources of information on this, here are a couple of links:
