So this is somewhat of a strange post, or should I say what will hopefully become a decent set of posts. The thing is, I have no idea how this will end up,
as I have not embarked on a mission like this before. So please bear with me.
SO JUST WHAT IS IT THAT I AM TALKING ABOUT?
Well, the way I typically like to run my blog/code project articles (and life) is that I pick a technology,
concentrate on it for a while, and write about it. This time, however, I have decided to treat my blogging/articles as a bit more
of a work-like escapade, where I will be assigning mini tasks (think JIRA tickets) to myself. Some of these cover things I know nothing about, so they should/could in reality be treated
as “spikes” and may end up in complete dead ends. It is about the journey, after all.
I WILL have a complete list of “tickets” (AKA tasks), which may or may not be completely fleshed out in advance. I will stick to “DOING” those “tickets”,
and there is an end goal in sight, which I will outline in a top-level story. I cannot, however, commit to any timelines; this is as much my journey as it is yours (in fact I mainly
do this stuff for myself, and would highly recommend it as a way of self improvement). That said, I hope people get something out of the series of posts that WILL UNDOUBTEDLY
come from this idea.
You can think of the tasks as “technical tasks” which make up the high level “stories” (in JIRA speak).
This may come across as a bit weird, but the technologies I plan to cover add up to pretty much a full app in the final product, so it’s a little hard to describe in
one blog post/article. I am hoping that by breaking it down into small chunks, each story/sub task will be a useful learning experience in
its own right.
SOURCE CONTROL : ORGANISATION
The idea is that each story/sub task will be a folder/subfolder which is completely independent of other stories/sub tasks (up until the final goal, which is of course
a working showcase that demonstrates it all working together).
NOTE TO SELF : I am going to try really hard to do this (aren’t we Sacha), as I think one topic -> one source control repo (more than likely Git) is a good way to
correlate the ideas/words in the post/article with the code.
WHAT DO I WANT TO WRITE
In essence I want to write an uber-simple (pardon the pun) “Uber” type app, with the following functional requirements:
- There should be a web interface that a client can use. Clients may be a “driver” or a “pickup client” requiring a delivery
- There should be a web interface that a “pickup client” can use, which shows the “pickup client” location on a map, which the “pickup client” chooses.
The “pickup client” may request a pickup job, in which case “drivers” that are in the area bid for the job.
The “pickup client” location should be visible to a “driver” on a map
- A “driver” may bid for a “pickup client” job, and the bidding “drivers’” locations should be visible to the “pickup client”.
- The acceptance of the bidding “driver” is down to the “pickup client”
- Once a “pickup client” accepts a “driver”, ONLY the assigned “driver’s” current map position will be shown to the “pickup client”
- When a “pickup client” is happy that they have been picked up by a “driver”, the “pickup client” may rate the “driver” from 1-10, and the “driver” may also rate the “pickup client” from 1-10.
- The rating should only be available once a “pickup client” has marked a job as “completed”
- A “driver” or a “pickup client” should ALWAYS be able to view their previous ratings.
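The requirements above imply a small job lifecycle. Here is a minimal sketch of it in plain JavaScript; all the names (`Job`, the state strings, and so on) are placeholders of my own for illustration, not the final design:

```javascript
// A sketch of the job lifecycle implied by the requirements above:
// REQUESTED (drivers bid) -> ACCEPTED (one driver assigned) -> COMPLETED (both may rate).
class Job {
  constructor(clientId) {
    this.clientId = clientId;
    this.state = "REQUESTED";
    this.bids = new Map();      // driverId -> last known position, visible to the client
    this.assignedDriver = null;
    this.ratings = {};          // e.g. { client: 9, driver: 8 }, each 1-10
  }
  bid(driverId, position) {
    if (this.state !== "REQUESTED") throw new Error("bidding closed");
    this.bids.set(driverId, position);
  }
  accept(driverId) {
    if (!this.bids.has(driverId)) throw new Error("driver did not bid");
    this.assignedDriver = driverId; // from now on only this driver's position is shown
    this.state = "ACCEPTED";
  }
  complete() {
    if (this.state !== "ACCEPTED") throw new Error("no driver assigned");
    this.state = "COMPLETED";
  }
  rate(who, score) { // who: "client" or "driver"; rating only after completion
    if (this.state !== "COMPLETED") throw new Error("rating only after completion");
    if (score < 1 || score > 10) throw new Error("rating must be 1-10");
    this.ratings[who] = score;
  }
}
```

In the real app this state will of course live in Kafka topics rather than an in-memory object, but the transitions should be the same.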
Whilst this may sound like child’s play to a lot of you (me included, if I stuck to using simple CRUD operations), I just want to point out that this app is meant as a learning experience, so I will not be using a simple SignalR Hub and a couple of database tables.
I intend to write this project using a completely different set of technologies from the norm. Some of the technology choices could easily scale to hundreds of thousands of requests per second (Kafka has your back here).
POTENTIAL TECHNOLOGIES INVOLVED
- React Router
- Play (Scala Http Stack)
- Kafka Streams
Some of this will undoubtedly be covered in other blogs (such as React/Webpack), however some of it I am hoping will be quite novel/insightful material.
Who knows though, there may be some of you out there who haven’t heard of Webpack, so some of that may even be new. We shall see; hopefully there is enough stuff for everyone.
I will maintain a list of stories and their sub tasks using Trello here : https://trello.com/b/F4ykCOOM/kafka-play-akka-react-webpack-tasks which at the time of writing this post contained the items shown below.
TOP LEVEL STORIES
- Create standalone web app structure
- Create webpack config
- Setup react router
- Create static screens
- Create react googlemap component
- Create Driver / User login component
- Test EventSource from Play back end
- Dependency Injection Into React components
- Prototype screens
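To give a flavour of the “Create webpack config” story, something like the following is roughly where I expect to end up. This is a hedged sketch only; the entry point, loader choice (babel-loader) and folder names are assumptions, not the final project layout:

```javascript
// webpack.config.js - minimal sketch, not the final config.
const path = require("path");

module.exports = {
  entry: "./src/index.js",
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "bundle.js",
    publicPath: "/", // needed so react-router routes resolve on refresh
  },
  module: {
    rules: [
      // assumption: ES2015+/JSX compiled via babel-loader
      { test: /\.jsx?$/, exclude: /node_modules/, use: "babel-loader" },
    ],
  },
  resolve: { extensions: [".js", ".jsx"] },
  // serve index.html for every route so react-router can take over client side
  devServer: { historyApiFallback: true },
};
```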
Play Back End
- Create a back end play app
- Create test Kafka consumer that is able to read from JSON payload from a Kafka topic
- Create test publisher that publishes JSON payload to a Kafka topic
- Create Akka Publisher flow to test EventSource JS call
- Create login API
- Create check ranking API, which will use Kafka Streams interactive queries over a KTable (or GlobalKTable) materialized by the streams app
- Create publish job API, which will publish out on Kafka publisher where it will send a JSON payload
- Create receive job update API, will read JSON from Kafka Consumer where it will read in JSON payload, with the intention of updating the map of the drivers position
- Create “Accept Job” API which will publish out on Kafka publisher where it will send JSON payload
- Create “Bid for Job” API which will publish out on Kafka publisher where it will send JSON payload
- Create Complete job API, which will publish out on Kafka publisher where it will send a JSON payload
- Create ranking API, which will publish out on Kafka publisher where it will send a JSON payload
- Create publish driver job co-ordinate update API, which will publish out on Kafka publisher where it will send a JSON payload
- Create test app that tests out listening to any single Kafka publisher JSON topic, creates a streams app from it, and pushes out to an output topic
- Create a windowed Kafka Streams app that will window over all “driver bidding” jobs for a given period, and will output to an output stream, such that all the job bids can be consumed by a Kafka Consumer
- Create a paired stream of accepted jobs (job id, client id, driver id) and updated driver positions, which will come in on a different stream
- Create a ranking streams app which will store a successful ranking in a Kafka Stream KTable
- Create a way to use interactive queries to allow clients/drivers to query their rankings
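The windowed “driver bidding” task above boils down to one computation: group bids by job id within a fixed time window, so all bids for a job in that window can be emitted together. Here is that logic in plain JavaScript (in the real thing this would be a Kafka Streams `groupByKey` + `windowedBy` + aggregate; the record shape and window size here are assumptions for illustration):

```javascript
// Group bids into fixed (tumbling) time windows keyed by job id.
// bids: [{ jobId, driverId, timestamp }], windowMs: window size in ms.
// Returns a Map of "jobId@windowStart" -> array of bids in that window.
function windowBids(bids, windowMs) {
  const windows = new Map();
  for (const bid of bids) {
    // align the bid's timestamp to the start of its window
    const windowStart = Math.floor(bid.timestamp / windowMs) * windowMs;
    const key = `${bid.jobId}@${windowStart}`;
    if (!windows.has(key)) windows.set(key, []);
    windows.get(key).push(bid);
  }
  return windows;
}
```

Each entry of the resulting map corresponds to what the streams app would push to the output topic for the “pickup client” to choose from.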
HOW WILL PROGRESS BE TRACKED
I will simply use Trello’s “Label” facility, such that done tasks will be “Green”, and there will obviously be a post/GitHub code repo folder that goes with each one. A few caveats before I start:
1. I will not be concerned with connection failures; the aim of the project is to try and create a real-world-like project, but not actually create an end-to-end production grade application
2. I will be treating every run as if it were the first; I will not be storing ANY permanent state (apart from ratings potentially)
3. I will be doing things at my own pace (I have 2 kids), so it comes when it comes
4. I will try and use varied technology choices, which will in places mean that more work could potentially be required to make things production quality