Repository: boldradius/akka-dddd-template
Branch: master
Commit: ac6c0fb2c9b7
Files: 24
Total size: 80.2 KB

Directory structure:
gitextract_tggwff63/

├── LICENSE
├── README.md
├── activator.properties
├── build.sbt
├── project/
│   ├── ResolverSettings.scala
│   ├── build.properties
│   └── plugins.sbt
├── src/
│   ├── main/
│   │   ├── resources/
│   │   │   └── application.conf
│   │   └── scala/
│   │       └── com/
│   │           └── boldradius/
│   │               ├── cqrs/
│   │               │   ├── AuctionCommandQueryProtocol.scala
│   │               │   ├── BidProcessor.scala
│   │               │   ├── BidView.scala
│   │               │   ├── ClusterBoot.scala
│   │               │   ├── ClusterNodeApp.scala
│   │               │   ├── DomainModel.scala
│   │               │   ├── HttpApp.scala
│   │               │   ├── HttpAuctionServiceRoute.scala
│   │               │   └── Passivation.scala
│   │               └── util/
│   │                   ├── Logging.scala
│   │                   └── MarshallingSupport.scala
│   ├── multi-jvm/
│   │   └── scala/
│   │       └── com/
│   │           └── boldradius/
│   │               └── auction/
│   │                   └── cqrs/
│   │                       ├── AuctionServiceSpec.scala
│   │                       └── StMultiNodeSpec.scala
│   └── test/
│       ├── resources/
│       │   └── testcreatetables.cql
│       └── scala/
│           └── com/
│               └── boldradius/
│                   └── dddd/
│                       └── HttpAuctionServiceRouteSpec.scala
└── tutorial/
    └── index.html

================================================
FILE CONTENTS
================================================

================================================
FILE: LICENSE
================================================
Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.



================================================
FILE: README.md
================================================
# akka-dddd-template
Akka DDDD template using CQRS/ES with a Distributed Domain

Scala Version = 2.11.6

Akka Version = 2.3.9

Spray Version = 1.3.1


## Background

### Distributed Domain Driven Design

Distributed Domain Driven Design takes the existing DDD concept and applies it to an application in which each domain instance is represented by an actor, as opposed to a class instance that must be synchronized via a backing persistent store. In this pattern, each domain instance is a cluster singleton, meaning updates can be made to its state without fear of conflict.

### CQRS/ES  Command Query Responsibility Segregation / Event Sourcing

This is a pattern that uses Command and Query objects to apply the [CQS](http://en.wikipedia.org/wiki/Command%E2%80%93query_separation) principle for modifying and retrieving data.

Event Sourcing is an architectural pattern in which state is tracked as an immutable log of events, rather than through destructive (mutable) updates.
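Concretely, event-sourced state is just a left fold over the journal. The sketch below is a simplified, self-contained illustration of that idea; the `AuctionEvent` hierarchy and `applyEvent` function here are illustrative stand-ins, not the template's actual types:

```scala
// Minimal event-sourcing sketch: state is never mutated in place;
// it is recomputed by folding over the immutable event log.
sealed trait AuctionEvent
final case class AuctionStarted(initialPrice: Double) extends AuctionEvent
final case class BidAccepted(price: Double, buyer: String) extends AuctionEvent

final case class AuctionState(currentPrice: Double, bids: List[String])

object EventSourcingSketch {
  // Applying one event to the current state yields the next state.
  def applyEvent(state: AuctionState, evt: AuctionEvent): AuctionState = evt match {
    case AuctionStarted(p)   => AuctionState(p, Nil)
    case BidAccepted(p, who) => state.copy(currentPrice = p, bids = who :: state.bids)
  }

  // Recovery: replay the journal from an empty state.
  def replay(journal: List[AuctionEvent]): AuctionState =
    journal.foldLeft(AuctionState(0.0, Nil))(applyEvent)

  def main(args: Array[String]): Unit = {
    val journal = List(AuctionStarted(2.0), BidAccepted(6.0, "dave"))
    println(replay(journal)) // AuctionState(6.0,List(dave))
  }
}
```

This is exactly what a `PersistentActor` does on recovery: it replays the journal through its event handler to rebuild state.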
        
## Getting Started

To get this application going, you will need to:

*   Set up the datastore
*   Boot the cluster nodes
*   Boot the HTTP microservice node

### DataStore

This application requires a distributed journal. Storage backends for journals and snapshot stores are pluggable in Akka persistence. In this case we are using [Cassandra](http://cassandra.apache.org/download/).
You can find other journal plugins [here](http://akka.io/community/?_ga=1.264939791.1443869017.1408561680).

The datastore is specified in **application.conf**

    cassandra-journal.contact-points = ["127.0.0.1"]

    cassandra-snapshot-store.contact-points = ["127.0.0.1"]

As you can see, the default is localhost. In a cloud deployment, you could list the addresses of several Cassandra nodes.
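For a multi-node Cassandra cluster, the same keys accept several contact points; the addresses below are placeholders:

    cassandra-journal.contact-points = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

    cassandra-snapshot-store.contact-points = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]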

This application uses a simple domain to demonstrate CQRS and event sourcing with Akka Persistence. This domain is an online auction:

        final case class Bid(price:Double, buyer:String, timeStamp:Long)

        final case class Auction(auctionId:String,
                             startTime:Long,
                             endTime:Long,
                             initialPrice:Double,
                             acceptedBids:List[Bid],
                             refusedBids:List[Bid],
                             ended:Boolean)

This is a distributed application, leveraging **Akka Cluster**.

The **Command** path of this application is illustrated by the creation of an auction, and placing bids.

The **Query** path of this application is illustrated by the querying of winning bid and bid history.

In order to distribute and segregate these paths, we leverage  **Akka Cluster**, as well as **Cluster Sharding**.

Cluster Sharding distributes the command and query actors across several nodes in the cluster, and lets clients interact with them via their logical identifier, without having to care about their physical location in the cluster.

### Cluster Nodes

You must first boot some cluster nodes (as many as you want). When running locally, these are distinguished by port, e.g. [2551, 2552, ...].

This cluster must specify one or more **seed nodes** in **application.conf**

    akka.cluster {
      seed-nodes = [
        "akka.tcp://ClusterSystem@127.0.0.1:2551",
        "akka.tcp://ClusterSystem@127.0.0.1:2552"]
    }

The cluster nodes are bootstrapped in **ClusterNodeApp.scala**.

To boot each cluster node locally:

    sbt 'runMain com.boldradius.cqrs.ClusterNodeApp nodeIpAddress port'

for example:

    sbt 'runMain com.boldradius.cqrs.ClusterNodeApp 127.0.0.1 2551'

### Http Microservice Node

The HTTP front end is implemented as a **Spray** microservice and is bootstrapped in **HttpApp.scala**. It participates in the cluster, but only as a proxy.

To run the microservice locally:

    sbt 'runMain com.boldradius.cqrs.HttpApp httpIpAddress httpPort akkaIpAddress akkaPort'

for example:

    sbt 'runMain com.boldradius.cqrs.HttpApp 127.0.0.1 9000 127.0.0.1 0'


The HTTP API enables the user to:

*   Create an Auction
*   Place a bid
*   Query for the current winning bid
*   Query for the bid history

#### Create Auction

        POST http://127.0.0.1:9000/startAuction

        {
          "auctionId": "123",
          "start": "2015-01-20-16:25",
          "end": "2015-07-20-16:35",
          "initialPrice": 2,
          "prodId": "3"
        }

#### Place Bid

    POST http://127.0.0.1:9000/bid

    {
      "auctionId": "123",
      "buyer": "dave",
      "bidPrice": 6
    }

#### Query for the current winning bid

    GET http://127.0.0.1:9000/winningBid/123

#### Query for the bid history

    GET http://127.0.0.1:9000/bidHistory/123

### Spray service forwards to the cluster

The trait **HttpAuctionServiceRoute.scala** implements a route that takes two ActorRefs (one for commands, one for queries) as input.
Upon receiving an Http request, it either sends a command message to the **command** actor, or a query message to the **query** actor.

    def route(command: ActorRef, query: ActorRef) = {
      post {
        path("startAuction") {
          extract(_.request) { e =>
            entity(as[StartAuctionDto]) { auction =>
              onComplete(
                (command ? StartAuctionCmd(auction.auctionId, ....

## Exploring the Command path in the Cluster

The command path is implemented in **BidProcessor.scala**. This is a **PersistentActor** that receives commands:

    def initial: Receive = {
        case a@StartAuctionCmd(id, start, end, initialPrice, prodId) => ...
    }

    def takingBids(state: Auction): Receive = {
        case a@PlaceBidCmd(id, buyer, bidPrice) => ...
    }

and produces events, writing them to the event journal, and notifying the **Query** Path of the updated journal:

    def handleProcessedCommand(sendr: ActorRef, processedCommand: ProcessedCommand): Unit = {
      // ack whether there is an event or not
      processedCommand.event.fold(sender() ! processedCommand.ack) { evt =>
        persist(evt) { persistedEvt =>
          readRegion ! Update(await = true)
          sendr ! processedCommand.ack
          processedCommand.newReceive.fold({})(context.become)
        }
      }
    }
   
This actor is cluster sharded on auctionId as follows:

    val idExtractor: ShardRegion.IdExtractor = {
        case m: AuctionCmd => (m.auctionId, m)
    }

    val shardResolver: ShardRegion.ShardResolver = msg => msg match {
        case m: AuctionCmd => (math.abs(m.auctionId.hashCode) % 100).toString
    }

    val shardName: String = "BidProcessor"

This means there is only one instance of this actor in the cluster, and all commands with the same **auctionId** will be routed to it.
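The shard id above is just a stable hash of the entity id. The standalone sketch below (using a hypothetical `shardFor` helper that mirrors the `shardResolver` formula) shows why every command for a given **auctionId** lands on the same shard:

```scala
object ShardResolverSketch {
  // Same formula as the shardResolver above: a stable bucket in [0, 99].
  def shardFor(auctionId: String): String =
    (math.abs(auctionId.hashCode) % 100).toString

  def main(args: Array[String]): Unit = {
    // The hash is deterministic, so every message carrying auctionId "123"
    // resolves to the same shard and is routed to the same BidProcessor.
    println(shardFor("123") == shardFor("123")) // true
    println(shardFor("123"))
  }
}
```

Because the mapping is deterministic, no coordination is needed to decide which node owns an auction; the cluster only has to agree on which node hosts each of the 100 shards.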

If this actor receives no commands for 1 minute, it will **passivate** (a pattern that lets the parent stop the actor to reduce memory consumption, without losing any commands currently being processed):

    /** passivate the entity when no activity */
    context.setReceiveTimeout(1 minute)     // this will send a ReceiveTimeout message after one minute, if no other messages come in

The timeout is handled in the **Passivation.scala** trait:

    protected def withPassivation(receive: Receive): Receive = receive.orElse{
        // tell the parent actor to send us a PoisonPill
        case ReceiveTimeout => context.parent ! Passivate(stopMessage = PoisonPill)

        // stop
        case PoisonPill => context.stop(self)
    }


If this actor fails, or is passivated, and is then required again (to handle a command), the cluster will spin it up and it will replay the event journal. In this case we make use of a var, `auctionRecoverStateMaybe`, to capture the state while we replay. When the replay is finished, the actor is notified with the `RecoveryCompleted` message, and we can then `become` the appropriate behaviour to reflect this state.

    def receiveRecover: Receive = {
      case evt: AuctionStartedEvt =>
        auctionRecoverStateMaybe =
          Some(Auction(evt.auctionId, evt.started, evt.end, evt.initialPrice, Nil, Nil, false))

      case evt: AuctionEvt =>
        auctionRecoverStateMaybe = auctionRecoverStateMaybe.map(state =>
          updateState(evt.logInfo("receiveRecover" + _.toString), state))

      // Once recovery is complete, check the state to become the appropriate behaviour
      case RecoveryCompleted =>
        auctionRecoverStateMaybe.fold[Unit]({}) { auctionState =>
          if (auctionState.logInfo("receiveRecover RecoveryCompleted state: " + _.toString).ended)
            context.become(passivate(auctionClosed(auctionState)).orElse(unknownCommand))
          else {
            launchLifetime(auctionState.endTime)
            context.become(passivate(takingBids(auctionState)).orElse(unknownCommand))
          }
        }
    }

## Exploring the Query path in the Cluster

The queries are handled in a different actor: **BidView.scala**. This is a **PersistentView** that handles query messages, as well as prompts from its companion **PersistentActor** to update itself.

**BidView.scala** is linked to the **BidProcessor.scala** event journal via its **persistenceId**:

    override val persistenceId: String = "BidProcessor" + "-" + self.path.name

This means it has access to that event journal, and can maintain and recover state from it.

It is possible for a PersistentView to save its own snapshots but, in our case, it isn't required.

This PersistentView is sharded in the same way the PersistentActor is:

    val idExtractor: ShardRegion.IdExtractor = {
        case m : AuctionEvt => (m.auctionId,m)
        case m : BidQuery => (m.auctionId,m)
    }

    val shardResolver: ShardRegion.ShardResolver = {
        case m: AuctionEvt => (math.abs(m.auctionId.hashCode) % 100).toString
        case m: BidQuery => (math.abs(m.auctionId.hashCode) % 100).toString
    }

One could have used a different shard strategy here, but a consequence of the above strategy is that the query path will reside in the same shard region as the command path, reducing the latency of the `Update()` message from the command side to the query side.

The PersistentView maintains the following model in memory:

    final case class BidState(auctionId:String,
                             start:Long,
                             end:Long,
                             product:Double,
                             acceptedBids:List[Bid],
                             rejectedBids:List[Bid],
                             closed:Boolean)

This model is sufficient to satisfy both queries: Winning Bid and Bid History:

      def auctionInProgress(currentState: BidState, prodId: String): Receive = {

        case GetBidHistoryQuery(auctionId) =>
          sender ! BidHistoryResponse(auctionId, currentState.acceptedBids)

        case WinningBidPriceQuery(auctionId) =>
          currentState.acceptedBids.headOption.fold(
            sender ! WinningBidPriceResponse(auctionId, currentState.product))(b =>
            sender ! WinningBidPriceResponse(auctionId, b.price))
          ....
      }
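
The `headOption.fold` used above is the standard Option idiom: the first argument supplies the fallback (the initial price) when no bid has been accepted yet, and the second maps the head (winning) bid. A simplified standalone sketch, where the `winningPrice` helper is illustrative and not part of the template:

```scala
final case class Bid(price: Double, buyer: String, timeStamp: Long)

object WinningBidSketch {
  // Returns the winning bid's price, falling back to the initial price
  // when no bid has been accepted yet. acceptedBids is kept with the
  // most recent (winning) bid at the head, so headOption is the winner.
  def winningPrice(acceptedBids: List[Bid], initialPrice: Double): Double =
    acceptedBids.headOption.fold(initialPrice)(_.price)

  def main(args: Array[String]): Unit = {
    println(winningPrice(Nil, 2.0))                        // 2.0
    println(winningPrice(List(Bid(6.0, "dave", 0L)), 2.0)) // 6.0
  }
}
```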


================================================
FILE: activator.properties
================================================
name=akka-dddd-cqrs
title=Akka Distributed Domain Driven Design with CQRS
description=A starter distributed application with Akka and Spray that demonstrates Command Query Responsibility Segregation using Akka Persistence, Cluster Sharding and a distributed journal for Event Sourcing enabled with Cassandra.
tags=akka,akka-persistence,cluster-sharding,cluster,cqrs,event-sourced
authorName=BoldRadius Solutions
authorLink=http://boldradius.com/
authorTwitter=@boldradius
authorBio=We are a custom software development, training and consulting firm specializing in the Typesafe Reactive Platform of Scala, Akka and Play Framework. Our mission is to enable our clients to adopt new technologies. We are a committed group of software developers with a mandate of solving problems with innovative, rapid solutions.
authorLogo=http://i59.tinypic.com/m9rc6p.png

================================================
FILE: build.sbt
================================================
import net.virtualvoid.sbt.graph.Plugin._
import sbt._
import sbt.Keys._
import com.typesafe.sbt.SbtMultiJvm
import com.typesafe.sbt.SbtMultiJvm.MultiJvmKeys.MultiJvm
import com.github.retronym.SbtOneJar



parallelExecution in Test := false

val baseSettings: Seq[Def.Setting[_]] =
  graphSettings ++
  Seq(
    name := "akka-dddd-template",
    version := "1.0.0",
    organization := "boldradius",
    scalaVersion := "2.11.6",
    ivyScala := ivyScala.value map { _.copy(overrideScalaVersion = true) },
    org.scalastyle.sbt.PluginKeys.config := file("project/scalastyle-config.xml"),
    scalacOptions in Compile ++= Seq("-encoding", "UTF-8", "-target:jvm-1.7", "-deprecation", "-unchecked", "-Ywarn-dead-code", "-Xfatal-warnings", "-feature", "-language:postfixOps"),
    scalacOptions in (Compile, doc) <++= (name in (Compile, doc), version in (Compile, doc)) map DefaultOptions.scaladoc,
    javacOptions in (Compile, compile) ++= Seq("-source", "1.7", "-target", "1.7", "-Xlint:unchecked", "-Xlint:deprecation", "-Xlint:-options"),
    javacOptions in doc := Seq(),
    javaOptions += "-Xmx2G",
    outputStrategy := Some(StdoutOutput),
    exportJars := true,
    fork := true,
    resolvers := ResolverSettings.resolvers,
    Keys.fork in run := true,
    // make sure that MultiJvm test are compiled by the default test compilation
    compile in MultiJvm <<= (compile in MultiJvm) triggeredBy (compile in Test),
    // disable parallel tests
    parallelExecution in Test := false,
    // make sure that MultiJvm tests are executed by the default test target,
    // and combine the results from ordinary test and multi-jvm tests
    artifact in oneJar <<= moduleName(Artifact(_)),
    executeTests in Test <<= (executeTests in Test, executeTests in MultiJvm) map {
      case (testResults, multiNodeResults)  =>
        val overall =
          if (testResults.overall.id < multiNodeResults.overall.id)
            multiNodeResults.overall
          else
            testResults.overall
        Tests.Output(overall,
          testResults.events ++ multiNodeResults.events,
          testResults.summaries ++ multiNodeResults.summaries)
    }
  )


val akka = "2.3.9"
val Spray = "1.3.1"

lazy val root =  project.in( file(".") )
  .settings( baseSettings ++ SbtMultiJvm.multiJvmSettings  ++ SbtOneJar.oneJarSettings ++ Defaults.itSettings :_*)
  .settings( libraryDependencies ++= {
        Seq(
          "io.spray"                   %% "spray-routing"   % Spray    % "compile",
          "io.spray"                   %% "spray-can"       % Spray    % "compile",
          "io.spray"                   %%  "spray-json"     % Spray    % "compile",
          "io.spray"                   %% "spray-testkit"   % Spray    % "test",
          "org.json4s"                 %% "json4s-native"   % "3.2.11",
          "com.typesafe.akka"          %%  "akka-actor"                            % akka,
          "com.typesafe.akka"          %% "akka-cluster"                           % akka,
          "com.typesafe.akka"          %% "akka-remote"                            % akka,
          "com.typesafe.akka"          %% "akka-contrib"                           % akka,
          "com.typesafe.akka"          %%  "akka-slf4j"                            % akka,
          "com.typesafe.akka"          %%  "akka-multi-node-testkit"               % "2.3.8",
          "com.typesafe.akka"          %%  "akka-testkit"                          % akka     % "test",
          "org.slf4j"                  %   "slf4j-api"                             % "1.7.7",
          "com.typesafe.scala-logging" %% "scala-logging"                          % "3.0.0",
          "ch.qos.logback"             %   "logback-core"                          % "1.1.2",
          "ch.qos.logback"             %   "logback-classic"                       % "1.1.2",
          "com.github.krasserm"        %% "akka-persistence-cassandra"             % "0.3.5",
          "org.scala-lang.modules"     %  "scala-xml_2.11"                         % "1.0.3",
          "org.scala-lang.modules"     %  "scala-xml_2.11"                         % "1.0.3",
          "org.scalatest"              %%  "scalatest"                             % "2.2.1"  % "test",
          "org.iq80.leveldb"           %   "leveldb"                               % "0.7",
          "org.json4s"                 %% "json4s-native"                          % "3.2.11",
          "joda-time" 				         % "joda-time" 						                   % "2.7",
          "org.joda" % "joda-convert" % "1.2",
          "com.datastax.cassandra"     % "cassandra-driver-core" 				           % "2.1.1"  exclude("org.xerial.snappy", "snappy-java"),
          "commons-io"                 %  "commons-io"                             % "2.4"    % "test",
          "org.xerial.snappy"          % "snappy-java"           				           % "1.1.1.3"
        )
      }
  ).configs (MultiJvm)







================================================
FILE: project/ResolverSettings.scala
================================================
import sbt._

object ResolverSettings {

  lazy val resolvers = Seq(
    Resolver.mavenLocal,
    Resolver.sonatypeRepo("releases"),
    Resolver.typesafeRepo("releases"),
    Resolver.typesafeRepo("snapshots"),
    Resolver.sonatypeRepo("snapshots"),
    "Linter" at "http://hairyfotr.github.io/linteRepo/releases",
    "krasserm" at "http://dl.bintray.com/krasserm/maven"
  )
}

================================================
FILE: project/build.properties
================================================
sbt.version = 0.13.7

================================================
FILE: project/plugins.sbt
================================================

// project/plugins.sbt
dependencyOverrides += "org.scala-sbt" % "sbt" % "0.13.7"

resolvers += "sonatype-releases" at "https://oss.sonatype.org/content/repositories/releases/"

addSbtPlugin("org.scalastyle" %% "scalastyle-sbt-plugin" % "0.5.0")

// Dependency graph plugin: https://github.com/jrudolph/sbt-dependency-graph
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.7.4")

//addSbtPlugin("org.brianmckenna" % "sbt-wartremover" % "0.11")

addSbtPlugin("io.gatling" % "gatling-sbt" % "2.1.0")

addSbtPlugin("com.typesafe.sbt" % "sbt-multi-jvm" % "0.3.8")

addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.0.1")

addSbtPlugin("org.scala-sbt.plugins" % "sbt-onejar" % "0.8")


================================================
FILE: src/main/resources/application.conf
================================================
akka {
  loglevel = INFO

  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }

  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }

  remote.watch-failure-detector.threshold = 20

  cluster {
    seed-nodes = [
      "akka.tcp://ClusterSystem@127.0.0.1:2551",
      "akka.tcp://ClusterSystem@127.0.0.1:2552"]

    auto-down-unreachable-after = 10s
  }

  persistence {


    journal {
      max-message-batch-size = 200
      max-confirmation-batch-size = 10000
      max-deletion-batch-size = 10000
      plugin = "cassandra-journal"
    }
    snapshot-store {
      plugin = "cassandra-snapshot-store"
    }


    #journal.plugin = "akka.persistence.journal.leveldb-shared"
    #journal.leveldb-shared.store {
    # DO NOT USE 'native = off' IN PRODUCTION !!!
    #native = off
    #dir = "target/shared-journal"
    #}
    #snapshot-store.local.dir = "target/snapshots"
    #view.auto-update-interval = 2s
  }

  contrib.cluster.sharding {
    # The extension creates a top level actor with this name in top level user scope,
    # e.g. '/user/sharding'
    guardian-name = sharding
    # If the coordinator can't store state changes it will be stopped
    # and started again after this duration.
    coordinator-failure-backoff = 1 s
    # Start the coordinator singleton manager on members tagged with this role.
    # All members are used if undefined or empty.
    # ShardRegion actor is started in proxy only mode on nodes that are not tagged
    # with this role.
    role = ""
    # The ShardRegion retries registration and shard location requests to the
    # ShardCoordinator with this interval if it does not reply.
    retry-interval = 1 s
    # Maximum number of messages that are buffered by a ShardRegion actor.
    buffer-size = 100000
    # Timeout of the shard rebalancing process.
    handoff-timeout = 60 s
    # Time given to a region to acknowledge it's hosting a shard.
    shard-start-timeout = 10 s
    # If the shard can't store state changes it will retry the action
    # again after this duration. Any messages sent to an affected entry
    # will be buffered until the state change is processed
    shard-failure-backoff = 10 s
    # If the shard is remembering entries and an entry stops itself without
    # using passivate, the entry will be restarted after this duration or when
    # the next message for it is received, whichever occurs first.
    entry-restart-backoff = 10 s
    # Rebalance check is performed periodically with this interval.
    rebalance-interval = 10 s
    # How often the coordinator saves persistent snapshots, which are
    # used to reduce recovery times
    snapshot-interval = 3600 s
    # Setting for the default shard allocation strategy
    least-shard-allocation-strategy {
      # Threshold of how large the difference between most and least number of
      # allocated shards must be to begin the rebalancing.
      rebalance-threshold = 10
      # The number of ongoing rebalancing processes is limited to this number.
      max-simultaneous-rebalance = 3
    }
  }


}


cassandra-journal {
  # FQCN of the cassandra journal plugin
  class = "akka.persistence.cassandra.journal.CassandraJournal"

  # Comma-separated list of contact points in the cluster
  contact-points = ["127.0.0.1"]

  # Port of contact points in the cluster
  port = 9042

  # Name of the keyspace to be created/used by the journal
  keyspace = "akka_dddd_template_journal"

  # Name of the table to be created/used by the journal
  table = "akka_dddd_template_journal"

  # Replication factor to use when creating a keyspace
  replication-factor = 1

  # Write consistency level
  write-consistency = "QUORUM"

  # Read consistency level
  read-consistency = "QUORUM"

  # Maximum number of entries per partition (= columns per row).
  # Must not be changed after table creation (currently not checked).
  max-partition-size = 5000000

  # Maximum size of result set
  max-result-size = 50001

  # Dispatcher for the plugin actor.
  plugin-dispatcher = "akka.actor.default-dispatcher"

  # Dispatcher for fetching and replaying messages
  replay-dispatcher = "akka.persistence.dispatchers.default-replay-dispatcher"
}

cassandra-snapshot-store {

  # FQCN of the cassandra snapshot store plugin
  class = "akka.persistence.cassandra.snapshot.CassandraSnapshotStore"

  # Comma-separated list of contact points in the cluster
  contact-points = ["127.0.0.1"]

  # Port of contact points in the cluster
  port = 9042

  # Name of the keyspace to be created/used by the snapshot store
  keyspace = "akka_dddd_template_snapshot"

  # Name of the table to be created/used by the snapshot store
  table = "akka_dddd_template_snapshot"

  # Replication factor to use when creating a keyspace
  replication-factor = 1

  # Write consistency level
  write-consistency = "ONE"

  # Read consistency level
  read-consistency = "ONE"

  # Maximum number of snapshot metadata to load per recursion (when trying to
  # find a snapshot that matches specified selection criteria). Only increase
  # this value when selection criteria frequently select snapshots that are
  # much older than the most recent snapshot i.e. if there are much more than
  # 10 snapshots between the most recent one and selected one. This setting is
  # only for increasing load efficiency of snapshots.
  max-metadata-result-size = 10

  # Dispatcher for the plugin actor.
  plugin-dispatcher = "cassandra-snapshot-store.default-dispatcher"

  # Default dispatcher for plugin actor.
  default-dispatcher {
    type = Dispatcher
    executor = "fork-join-executor"
    fork-join-executor {
      parallelism-min = 2
      parallelism-max = 8
    }
  }
}


================================================
FILE: src/main/scala/com/boldradius/cqrs/AuctionCommandQueryProtocol.scala
================================================
package com.boldradius.cqrs

object AuctionCommandQueryProtocol {

  sealed  trait AuctionMsg {
    val auctionId: String
  }

  sealed trait AuctionCmd extends AuctionMsg

//  case class BootInitCmd(auctionId: String) extends AuctionCmd
  case class StartAuctionCmd(auctionId: String, start: Long, end: Long, initialPrice: Double, prodId: String) extends AuctionCmd
  case class PlaceBidCmd(auctionId: String, buyer: String, bidPrice: Double) extends AuctionCmd

  sealed trait AuctionAck extends AuctionMsg

  case class StartedAuctionAck(auctionId: String) extends AuctionAck
  case class InvalidAuctionAck(auctionId: String, msg: String) extends AuctionAck
  case class PlacedBidAck(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long) extends AuctionAck
  case class RefusedBidAck(auctionId: String, buyer: String, bidPrice: Double, winningBid: Double) extends AuctionAck
  case class FailedBidAck(auctionId: String, buyer: String, bidPrice: Double, message: String) extends AuctionAck
  case class AuctionEndedAck(auctionId: String) extends AuctionAck
  case class AuctionNotYetStartedAck(auctionId: String) extends AuctionAck

  sealed trait BidQuery extends AuctionMsg

  case class WinningBidPriceQuery(auctionId: String) extends BidQuery
  case class GetBidHistoryQuery(auctionId: String) extends BidQuery
  case class GetAuctionStartEnd(auctionId: String) extends BidQuery
  case class GetProdIdQuery(auctionId: String) extends BidQuery

  sealed trait BidQueryResponse  extends AuctionMsg

  case class InvalidBidQueryReponse(auctionId: String, message: String) extends BidQueryResponse
  case class AuctionNotStarted(auctionId: String) extends BidQueryResponse
  case class WinningBidPriceResponse(auctionId: String, price: Double) extends BidQueryResponse
  case class BidHistoryResponse(auctionId: String, bids: List[Bid]) extends BidQueryResponse
  case class AuctionStartEndResponse(auctionId: String, start: Long, end: Long) extends BidQueryResponse
  case class ProdIdResponse(auctionId: String, prodId: String) extends BidQueryResponse

}
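The messages above are plain case classes, so a client drives the system with the ask pattern. A minimal sketch, not part of the repo (`ProtocolUsageSketch` is hypothetical, and the `command` ActorRef is assumed to be the BidProcessor shard region returned by `ClusterBoot.boot`):

```scala
import akka.actor.ActorRef
import akka.pattern.ask
import akka.util.Timeout

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.concurrent.duration._

import com.boldradius.cqrs.AuctionCommandQueryProtocol._

object ProtocolUsageSketch {
  implicit val timeout: Timeout = Timeout(5.seconds)

  /** Start an auction, then place one bid; every reply is an AuctionAck. */
  def startAndBid(command: ActorRef): Future[AuctionAck] = {
    val now = System.currentTimeMillis()
    for {
      _   <- (command ? StartAuctionCmd("auction-1", now, now + 3600000L, 10.0, "prod-42")).mapTo[AuctionAck]
      ack <- (command ? PlaceBidCmd("auction-1", "alice", 12.5)).mapTo[AuctionAck]
    } yield ack
  }
}
```

A StartedAuctionAck followed by a PlacedBidAck is the happy path; InvalidAuctionAck, RefusedBidAck, and the other acks arrive through the same channel.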



================================================
FILE: src/main/scala/com/boldradius/cqrs/BidProcessor.scala
================================================
package com.boldradius.cqrs

import akka.actor._
import akka.contrib.pattern.ShardRegion

import akka.persistence.{RecoveryCompleted, PersistentActor, SnapshotOffer, Update}
import AuctionCommandQueryProtocol._
import com.boldradius.util.ALogging

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

/**
 * This is the command side of CQRS. This actor receives only commands: StartAuctionCmd and PlaceBidCmd.
 *
 * These commands are transformed into events and persisted to a Cassandra journal.
 * Once the events are persisted, the corresponding view is prompted to update itself from this journal
 * with Update().
 *
 * For recovery, the state of the auction is encoded in the var auctionRecoverStateMaybe: Option[Auction].
 *
 * A Tick message is scheduled to signal the end of the auction.
 *
 * This actor will passivate after 1 minute if no messages are received.
 */
object BidProcessor {

  case object Tick

  def props(readRegion: ActorRef): Props = Props(new BidProcessor(readRegion))

  sealed trait AuctionEvt {
    val auctionId: String
  }

  case class AuctionStartedEvt(auctionId: String, started: Long, end: Long, initialPrice: Double, prodId: String) extends AuctionEvt

  case class AuctionEndedEvt(auctionId: String, timeStamp: Long) extends AuctionEvt

  case class BidPlacedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long) extends AuctionEvt

  case class BidRefusedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long) extends AuctionEvt

  case class BidFailedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long, error: String) extends AuctionEvt

  val idExtractor: ShardRegion.IdExtractor = {
    case m: AuctionCmd => (m.auctionId, m)
  }

  val shardResolver: ShardRegion.ShardResolver = {
    case m: AuctionCmd => (math.abs(m.auctionId.hashCode) % 100).toString
  }

  val shardName: String = "BidProcessor"
}

class BidProcessor(readRegion: ActorRef) extends PersistentActor with Passivation with ALogging {

  import BidProcessor._

  override def persistenceId: String = self.path.parent.name + "-" + self.path.name

  /** passivate the entity when no activity for 1 minute */
  context.setReceiveTimeout(1 minute)


  /**
   * Formalizes the effects of this processor.
   * Each command results in:
   *   - optionally an AuctionEvt,
   *   - an AuctionAck,
   *   - optionally a new Receive behaviour
   */
  private final case class ProcessedCommand(event: Option[AuctionEvt], ack: AuctionAck, newReceive: Option[Receive])


  /**
   * Updates Auction state
   */
  private def updateState(evt: AuctionEvt, state: Auction): Auction = {

    evt match {
      case AuctionEndedEvt(auctionId: String, timeStamp) =>
        state.copy(ended = true)

      case BidPlacedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long) =>
        state.copy(acceptedBids = Bid(bidPrice, buyer, timeStamp) :: state.acceptedBids)

      case BidRefusedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long) =>
        state.copy(refusedBids = Bid(bidPrice, buyer, timeStamp) :: state.refusedBids)

      case BidFailedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long, error: String) =>
        state.copy(refusedBids = Bid(bidPrice, buyer, timeStamp) :: state.refusedBids)

      case _ => state
    }
  }

  private def getCurrentBid(state: Auction): Double =
    state.acceptedBids match {
      case Bid(p, _, _) :: tail => p
      case _ => state.initialPrice
    }


  /**
   * To isolate the effects (write to journal, update state, change receive behaviour),
   * each case of the Receive (PartialFunction[Any, Unit]) functions initial and takingBids
   * calls handleProcessedCommand(sender, processedCommand) by convention.
   */
  def handleProcessedCommand(sendr: ActorRef, processedCommand: ProcessedCommand): Unit = {

    // ack whether there is an event or not (use the captured sendr, not sender(), for consistency)
    processedCommand.event.fold(sendr ! processedCommand.ack) { evt =>
      persist(evt) { persistedEvt =>
        readRegion ! Update(await = true) // update read path
        sendr ! processedCommand.ack
        processedCommand.newReceive.fold()(context.become) // maybe change state
      }
    }
  }

  override def receiveCommand: Receive = passivate(initial).orElse(unknownCommand)

  def initial: Receive = {

    case StartAuctionCmd(id, start, end, initialPrice, prodId) =>
      val currentTime = System.currentTimeMillis()

      if (currentTime >= end) {
        handleProcessedCommand(sender(),
          ProcessedCommand(None, InvalidAuctionAck(id, "This auction is already over"), None)
        )
      } else {
        // Starting the auction, schedule a message to signal auction end
        launchLifetime(end)

        handleProcessedCommand(
          sender(),
          ProcessedCommand(
            Some(AuctionStartedEvt(id, start, end, initialPrice, prodId)),
            StartedAuctionAck(id),
            Some(passivate(takingBids(Auction(id, start, end, initialPrice, Nil, Nil, false))).orElse(unknownCommand))
          )
        )
      }
  }

  def takingBids(state: Auction): Receive = {

    case Tick => // end of auction
      val currentTime = System.currentTimeMillis()
      persist(AuctionEndedEvt(state.auctionId, currentTime)) { evt =>
        readRegion ! Update(await = true)
        context.become(passivate(auctionClosed(updateState(evt, state))).orElse(unknownCommand))
      }


    case PlaceBidCmd(id, buyer, bidPrice) => {
      val timestamp = System.currentTimeMillis()

      handleProcessedCommand(sender(),
        if (timestamp < state.endTime && timestamp >= state.startTime) {
          val currentPrice = getCurrentBid(state)
          if (bidPrice > currentPrice) {
            // Successful bid
            val evt = BidPlacedEvt(id, buyer, bidPrice, timestamp)
            ProcessedCommand(
              Some(evt),
              PlacedBidAck(id, buyer, bidPrice, timestamp),
              // update state
              Some(passivate(takingBids(updateState(evt, state))).orElse(unknownCommand))
            )
          } else {
            //Unsuccessful bid
            val evt = BidRefusedEvt(id, buyer, bidPrice, timestamp)
            ProcessedCommand(
              Some(evt),
              RefusedBidAck(id, buyer, bidPrice, currentPrice),
              Some(passivate(takingBids(updateState(evt, state))).orElse(unknownCommand))
            )
          }
        } else if (timestamp > state.endTime) {
          // auction expired
          ProcessedCommand(None, AuctionEndedAck(id), None)
        } else {
          ProcessedCommand(None, AuctionNotYetStartedAck(id), None)
        }
      )
    }
  }

  def auctionClosed(state: Auction): Receive = {
    case a: PlaceBidCmd => sender() ! AuctionEndedAck(state.auctionId)
    case a: StartAuctionCmd => sender() ! AuctionEndedAck(state.auctionId)
  }

  /** Used only for recovery */
  private var auctionRecoverStateMaybe: Option[Auction] = None

  def receiveRecover: Receive = {
    case evt: AuctionStartedEvt =>
      auctionRecoverStateMaybe =
        Some(Auction(evt.logInfo("receiveRecover evt:" + _.toString).auctionId, evt.started, evt.end, evt.initialPrice, Nil, Nil, false))

    case evt: AuctionEvt => {
      auctionRecoverStateMaybe = auctionRecoverStateMaybe.map(state =>
        updateState(evt.logInfo("receiveRecover evt:" + _.toString), state))
    }

    case RecoveryCompleted => postRecoveryBecome(auctionRecoverStateMaybe)

    // for when snapshots are implemented; currently they aren't.
    case SnapshotOffer(_, snapshot) =>
      postRecoveryBecome(snapshot.asInstanceOf[Option[Auction]].logInfo("recovery from snapshot state:" + _.toString))
  }


  /**
   * Once recovery is complete, inspect the recovered state and become the appropriate behaviour
   */
  def postRecoveryBecome(auctionRecoverStateMaybe: Option[Auction]): Unit =
    auctionRecoverStateMaybe.fold[Unit]({}) { auctionState =>
      log.info("postRecoveryBecome")
      if (auctionState.ended)
        context.become(passivate(auctionClosed(auctionState)).orElse(unknownCommand))
      else {
        launchLifetime(auctionState.endTime)
        context.become(passivate(takingBids(auctionState)).orElse(unknownCommand))
      }
    }


  def unknownCommand: Receive = {
    case other => {
      other.logInfo("unknownCommand: " + _.toString)
      sender() ! InvalidAuctionAck("", "InvalidAuctionAck")
    }
  }

  /** auction lifetime tick will send message when auction is over */
  def launchLifetime(time: Long) = {
    val auctionEnd = (time - System.currentTimeMillis()).logInfo("launchLifetime over in:" + _.toString + "ms")
    if (auctionEnd > 0) {
      context.system.scheduler.scheduleOnce(auctionEnd.milliseconds, self, Tick)
    }
  }
}
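The `idExtractor`/`shardResolver` pair keeps every command for a given `auctionId` on the same entity, distributed over 100 shards. The shard assignment itself is plain arithmetic and can be checked in isolation (a sketch mirroring `BidProcessor.shardResolver` above; `ShardMathSketch` is not part of the repo):

```scala
object ShardMathSketch extends App {
  // Same formula as BidProcessor.shardResolver: abs(hashCode) modulo 100 shards
  def shardFor(auctionId: String): String =
    (math.abs(auctionId.hashCode) % 100).toString

  // The same auctionId always resolves to the same shard,
  // so all commands for one auction reach one entity actor.
  assert(shardFor("auction-1") == shardFor("auction-1"))
  assert((0 to 99).map(_.toString).contains(shardFor("any-id")))
}
```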


================================================
FILE: src/main/scala/com/boldradius/cqrs/BidView.scala
================================================
package com.boldradius.cqrs

import akka.actor._
import akka.contrib.pattern.ShardRegion
import akka.persistence.PersistentView
import AuctionCommandQueryProtocol._
import com.boldradius.cqrs.BidProcessor._
import com.boldradius.util.ALogging
import scala.concurrent.duration._


/**
 * This actor is the Query side of CQRS.
 *
 * Each possible query result is represented as a case class (BidQueryResponse)
 *
 * This actor will initialize itself automatically upon startup from the event journal
 * stored by the corresponding PersistentActor (BidProcessor).
 *
 * There are many strategies for keeping this Actor consistent with the Write side;
 * this example uses the Update() method called from the Write side, which causes
 * unread journal events to be sent to this actor, which, in turn, can update its internal state.
 *
 */

/** State required to satisfy queries */
final case class BidState(auctionId:String,
                    start:Long,
                    end:Long,
                    product:Double,
                    acceptedBids:List[Bid],
                    rejectedBids:List[Bid],
                    closed:Boolean)
object BidState{
  def apply(auctionId:String,start:Long,end:Long,price:Double):BidState =
    BidState(auctionId,start,end,price,Nil,Nil,false)
}

object BidView {

  def props():Props = Props(classOf[BidView])

  val idExtractor: ShardRegion.IdExtractor = {
    case m : AuctionEvt => (m.auctionId,m)
    case m : BidQuery => (m.auctionId,m)
  }

  val shardResolver: ShardRegion.ShardResolver = {
    case m: AuctionEvt => (math.abs(m.auctionId.hashCode) % 100).toString
    case m: BidQuery => (math.abs(m.auctionId.hashCode) % 100).toString
  }

  val shardName: String = "BidView"

}

/**
 * The Query Actor
 */
class BidView extends PersistentView with ALogging with Passivation {

  override val viewId: String = self.path.parent.name + "-" + self.path.name

  /** It is through this persistenceId that this actor is linked to the PersistentActor's event journal */
  override val persistenceId: String = "BidProcessor" + "-" + self.path.name

  /** passivate the entity when no activity */
  context.setReceiveTimeout(1 minute)

  /**
   * This is the initial receive method
   *
   * It will only process the AuctionStartedEvt or reply to the WinningBidPriceQuery
   *
   */
  def receive: Receive = passivate(initial).orElse(unknownCommand)

  def initial: Receive = {

    case e @ AuctionStartedEvt(auctionId, started, end, initialPrice, prodId) if isPersistent =>
      val newState = BidState(auctionId, started, end, initialPrice)
      context.become(passivate(auctionInProgress(newState,prodId)).orElse(unknownCommand))

    case  WinningBidPriceQuery(auctionId) =>
      sender ! AuctionNotStarted(auctionId)
  }

  /**
   * Also responds to updates to the event journal (AuctionEndedEvt,BidPlacedEvt,BidRefusedEvt), and
   * updates internal state as well as responding to queries
   */
  def auctionInProgress(currentState:BidState, prodId:String):Receive = {

    case  GetProdIdQuery(auctionId) =>
      sender ! ProdIdResponse(auctionId,prodId)


    case  GetBidHistoryQuery(auctionId) =>
      sender ! BidHistoryResponse(auctionId,currentState.acceptedBids)

    case  WinningBidPriceQuery(auctionId) =>
        currentState.acceptedBids.headOption.fold(
          sender ! WinningBidPriceResponse(auctionId,currentState.product))(b =>
          sender ! WinningBidPriceResponse(auctionId,b.price))

    case e:  AuctionEndedEvt  =>
      val newState =  currentState.copy(closed = true)
      context.become(passivate(auctionEnded(newState)))

    case BidPlacedEvt(auctionId,buyer,bidPrice,timeStamp) if isPersistent =>
        val newState =  currentState.copy(acceptedBids = Bid(bidPrice,buyer, timeStamp) :: currentState.acceptedBids)
        context.become(passivate(auctionInProgress(newState,prodId)))


    case BidRefusedEvt(auctionId,buyer,bidPrice,timeStamp) if isPersistent =>
      val newState =  currentState.copy(rejectedBids = Bid(bidPrice,buyer, timeStamp) :: currentState.rejectedBids)
      context.become(passivate(auctionInProgress(newState,prodId)))

  }

  def auctionEnded(currentState:BidState):Receive = {
    case _ => {}
  }

  def unknownCommand:Receive = {
    case other  => {
      sender() ! InvalidAuctionAck("","InvalidAuctionAck")
    }
  }
}


================================================
FILE: src/main/scala/com/boldradius/cqrs/ClusterBoot.scala
================================================
package com.boldradius.cqrs

import akka.actor.{Props, ActorRef, ActorSystem}
import akka.contrib.pattern.ClusterSharding


object ClusterBoot {

  def boot(proxyOnly:Boolean = false)(clusterSystem: ActorSystem):(ActorRef,ActorRef) = {
    val view = ClusterSharding(clusterSystem).start(
      typeName = BidView.shardName,
      entryProps = if(!proxyOnly) Some(BidView.props()) else None,
      idExtractor = BidView.idExtractor,
      shardResolver = BidView.shardResolver)
    val processor = ClusterSharding(clusterSystem).start(
      typeName = BidProcessor.shardName,
      entryProps = if(!proxyOnly) Some(BidProcessor.props(view)) else None,
      idExtractor = BidProcessor.idExtractor,
      shardResolver = BidProcessor.shardResolver)
    (processor,view)
  }

}


================================================
FILE: src/main/scala/com/boldradius/cqrs/ClusterNodeApp.scala
================================================
package com.boldradius.cqrs

import akka.actor.ActorSystem
import com.typesafe.config._

/**
 * Start an akka cluster node
 * Usage:  sbt 'runMain com.boldradius.cqrs.ClusterNodeApp 127.0.0.1 2551'
 */
object ClusterNodeApp extends App {


    val conf =
      """akka.remote.netty.tcp.hostname="%hostname%"
        |akka.remote.netty.tcp.port=%port%
      """.stripMargin

    val argumentsError = """
   Please run the service with the required arguments: <hostIpAddress> <port> """

    assert(args.length == 2, argumentsError)

    val hostname = args(0)
    val port = args(1).toInt
    val config =
      ConfigFactory.parseString( conf.replaceAll("%hostname%",hostname)
        .replaceAll("%port%",port.toString)).withFallback(ConfigFactory.load())

    // Create an Akka system
    implicit val clusterSystem = ActorSystem("ClusterSystem", config)
    ClusterBoot.boot()(clusterSystem)
}


================================================
FILE: src/main/scala/com/boldradius/cqrs/DomainModel.scala
================================================
package com.boldradius.cqrs

final case class Bid(price:Double, buyer:String, timeStamp:Long)

final case class Auction(auctionId:String,
                         startTime:Long,
                         endTime:Long,
                         initialPrice:Double,
                         acceptedBids:List[Bid],
                         refusedBids:List[Bid],
                         ended:Boolean)
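`Auction` prepends each accepted `Bid`, so the current winning price is either the head of `acceptedBids` or the initial price. A sketch of that derivation (mirrors `getCurrentBid` in `BidProcessor`; `AuctionStateSketch` is hypothetical and assumes the import below resolves):

```scala
import com.boldradius.cqrs.{Auction, Bid}

object AuctionStateSketch extends App {
  def currentPrice(a: Auction): Double =
    a.acceptedBids.headOption.map(_.price).getOrElse(a.initialPrice)

  val empty   = Auction("a1", startTime = 0L, endTime = 100L, initialPrice = 10.0,
                        acceptedBids = Nil, refusedBids = Nil, ended = false)
  val withBid = empty.copy(acceptedBids = Bid(12.5, "alice", 50L) :: Nil)

  assert(currentPrice(empty) == 10.0)   // no bids yet: initial price
  assert(currentPrice(withBid) == 12.5) // most recent accepted bid wins
}
```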






================================================
FILE: src/main/scala/com/boldradius/cqrs/HttpApp.scala
================================================
package com.boldradius.cqrs

import akka.actor._
import spray.routing._

import com.typesafe.config.ConfigFactory
import spray.can.Http
import akka.io.IO
import akka.pattern.ask
import akka.util.Timeout

import scala.concurrent.duration._


class AuctionHttpActor( command:ActorRef, query:ActorRef )
  extends HttpServiceActor
  with HttpAuctionServiceRoute {
  implicit val ec = context.dispatcher
  def receive = runRoute(route(command,query))
}

/**
 * This spins up the Http server, after connecting to the akka cluster.
 * Usage:  sbt 'runMain com.boldradius.cqrs.HttpApp <httpIpAddress> <httpPort> <akkaHostIpAddress> <akkaPort>'
 */
object HttpApp extends App{


  private val argumentsError = """
   Please run the service with the required arguments: <httpIpAddress> <httpPort> <akkaHostIpAddress> <akkaPort> """


  val conf =
    """akka.remote.netty.tcp.hostname="%hostname%"
       akka.remote.netty.tcp.port=%port%
    """.stripMargin


  assert(args.length == 4, argumentsError)

  val httpHost = args(0)
  val httpPort = args(1).toInt

  val akkaHostname = args(2)
  val akkaPort = args(3).toInt

  val config =
    ConfigFactory.parseString( conf.replaceAll("%hostname%",akkaHostname)
      .replaceAll("%port%",akkaPort.toString)).withFallback(ConfigFactory.load())

  implicit val system = ActorSystem("ClusterSystem",config)

  val (processor,view) = ClusterBoot.boot(true)(system)

  val service = system.actorOf( Props( classOf[AuctionHttpActor],processor,view), "cqrs-http-actor")

  implicit val timeout = Timeout(5.seconds)

  IO(Http) ? Http.Bind(service, interface = httpHost, port = httpPort)
}



================================================
FILE: src/main/scala/com/boldradius/cqrs/HttpAuctionServiceRoute.scala
================================================
package com.boldradius.cqrs

import akka.actor.ActorRef
import akka.pattern.ask
import akka.util.Timeout
import com.boldradius.cqrs.AuctionCommandQueryProtocol._
import com.boldradius.util.LLogging
import org.joda.time.format.DateTimeFormat
import spray.routing._

import scala.concurrent.ExecutionContext
import scala.concurrent.duration._
import scala.util.{Failure, Success}


final case class PlaceBidDto(auctionId:String, buyer:String, bidPrice:Double )
final case class StartAuctionDto(auctionId:String, start:String, end:String, initialPrice: Double, prodId: String)
final case class BidDto(price:Double, buyer:String, timeStamp:String)

final case class AuctionError(auctionId:String,msg:String,response:String = "AuctionError")
final case class AuctionStartedDto(auctionId:String,response:String = "AuctionStartedDto")
final case class AuctionNotStartedDto(auctionId:String,response:String = "AuctionNotStartedDto")
final case class SuccessfulBidDto(auctionId:String, bidPrice: Double, timeStamp:String,response:String = "SuccessfulBidDto")
final case class RejectedBidDto(auctionId:String, bidPrice: Double, currentBid:Double,response:String = "RejectedBidDto")
final case class FailedBidDto(auctionId:String, bidPrice: Double, currentBid:Double,response:String = "FailedBidDto")
final case class WinningBidDto(auctionId:String,bidPrice: Double,response:String = "WinningBidDto")
final case class BidHistoryDto(auctionId:String,bids: List[BidDto],response:String = "BidHistoryDto")




trait HttpAuctionServiceRoute extends HttpService with LLogging{



  implicit val ec: ExecutionContext

  import com.boldradius.util.MarshallingSupport._


  implicit val timeout = Timeout(30 seconds)
  lazy val fmt = DateTimeFormat.forPattern("yyyy-MM-dd-HH:mm")

  def route(command: ActorRef, query:ActorRef) = {
    post {
      path("startAuction") {
          extract(_.request) { e =>
            entity(as[StartAuctionDto]) {
              auction => onComplete(
                (command ? StartAuctionCmd(auction.auctionId,
                  fmt.parseDateTime(auction.start).getMillis,
                  fmt.parseDateTime(auction.end).getMillis,
                  auction.initialPrice, auction.prodId)).mapTo[AuctionAck]) {
                case Success(ack) => ack match {
                  case StartedAuctionAck(id) =>
                    complete(AuctionStartedDto(id))
                  case InvalidAuctionAck(id, msg) =>
                    complete(AuctionError(id, msg))
                  case other =>
                    complete(AuctionError(ack.auctionId, ack.toString))
                }
                case Failure(t) =>
                  t.printStackTrace()
                  complete(AuctionError(auction.auctionId, t.getMessage))
              }
            }
          }
      } ~
        path("bid") {
          detach(ec) {
            extract(_.request) { e =>
              entity(as[PlaceBidDto]) {
                bid => onComplete(
                  (command ? PlaceBidCmd(bid.auctionId, bid.buyer, bid.bidPrice)).mapTo[AuctionAck]) {
                  case Success(ack) => ack.logInfo(s"PlaceBidCmd bid.bidPrice ${bid.bidPrice} id:" + _.auctionId.toString) match {
                    case PlacedBidAck(id, buyer, bidPrice, timeStamp) =>
                      complete(SuccessfulBidDto(id, bidPrice, fmt.print(timeStamp)))
                    case RefusedBidAck(id, buyer, bidPrice, winningBid) =>
                      complete(RejectedBidDto(id, bidPrice, winningBid))
                    case other =>
                      complete(AuctionError(bid.auctionId, other.toString))
                  }
                  case Failure(t) =>
                    complete(AuctionError(bid.auctionId, t.getMessage))
                }
              }
            }
          }
        }
    } ~
      get {
        path("winningBid" / Rest) { auctionId =>
          detach(ec) {
            onComplete((query ? WinningBidPriceQuery(auctionId)).mapTo[BidQueryResponse]) {
              case Success(s) => s match {
                case WinningBidPriceResponse(id, price) =>
                  complete(WinningBidDto(id, price))
                case AuctionNotStarted(id) =>
                  complete(AuctionNotStartedDto(id))
                case _ =>
                  complete(AuctionError(auctionId, ""))
              }
              case Failure(t) =>
                t.getMessage.logError("WinningBidPriceQuery error: " + _)
                complete(AuctionError(auctionId, t.getMessage))
            }
          }
        } ~
          path("bidHistory" / Rest) { auctionId =>
            onComplete((query ? GetBidHistoryQuery(auctionId)).mapTo[BidQueryResponse]) {
              case Success(s) => s match {
                case BidHistoryResponse(id, bids) =>
                  complete(BidHistoryDto(id, bids.map(b =>
                    BidDto(b.price, b.buyer, fmt.print(b.timeStamp)))))
                case AuctionNotStarted(id) =>
                  complete(AuctionNotStartedDto(id))
                case _ =>
                  complete(AuctionError(auctionId, ""))
              }
              case Failure(t) =>
                complete(AuctionError(auctionId, t.getMessage))
            }
          }
      }
  }
}



================================================
FILE: src/main/scala/com/boldradius/cqrs/Passivation.scala
================================================
package com.boldradius.cqrs

import akka.actor.{PoisonPill, Actor, ReceiveTimeout}
import com.boldradius.util.ALogging
import akka.contrib.pattern.ShardRegion.Passivate

trait Passivation extends ALogging {
  this: Actor =>

  protected def passivate(receive: Receive): Receive = receive.orElse{
    // tell the parent actor to send us a PoisonPill
    case ReceiveTimeout =>
      self.logInfo( s => s" $s ReceiveTimeout: passivating. ")
      context.parent ! Passivate(stopMessage = PoisonPill)

    // stop
    case PoisonPill => context.stop(self.logInfo( s => s" $s PoisonPill"))
  }
}

================================================
FILE: src/main/scala/com/boldradius/util/Logging.scala
================================================
package com.boldradius.util

import akka.actor.{Actor, ActorLogging}
import com.typesafe.scalalogging.LazyLogging
import scala.language.implicitConversions

trait ALogging extends ActorLogging{  this: Actor =>

  implicit def toLogging[V](v: V) : FLog[V] = FLog(v)

  case class FLog[V](v : V)  {
    def logInfo(f: V => String): V = {log.info(f(v)); v}
    def logDebug(f: V => String): V = {log.debug(f(v)); v}
    def logError(f: V => String): V = {log.error(f(v)); v}
    def logWarn(f: V => String): V = {log.warning(f(v)); v}
    def logTest(f: V => String): V = {println(f(v)); v}
  }
}
trait LLogging extends LazyLogging{

  implicit def toLogging[V](v: V) : FLog[V] = FLog(v)

  case class FLog[V](v : V)  {
    def logInfo(f: V => String): V = {logger.info(f(v)); v}
    def logDebug(f: V => String): V = {logger.debug(f(v)); v}
    def logError(f: V => String): V = {logger.error(f(v)); v}
    def logWarn(f: V => String): V = {logger.warn(f(v)); v}
    def logTest(f: V => String): V = {println(f(v)); v}
  }
}






================================================
FILE: src/main/scala/com/boldradius/util/MarshallingSupport.scala
================================================
package com.boldradius.util

import org.json4s.{DefaultFormats, Formats}
import spray.httpx.Json4sSupport

/**
 * Json marshalling for spray.
 */
object MarshallingSupport extends Json4sSupport {
  implicit def json4sFormats: Formats = DefaultFormats
}


================================================
FILE: src/multi-jvm/scala/com/boldradius/auction/cqrs/AuctionServiceSpec.scala
================================================
package com.boldradius.cqrs

import java.io.File
import java.util.UUID
import com.boldradius.cqrs.AuctionCommandQueryProtocol._
import scala.concurrent.duration._
import org.apache.commons.io.FileUtils
import com.typesafe.config.ConfigFactory
import akka.actor.ActorIdentity
import akka.actor.Identify
import akka.actor.Props
import akka.cluster.Cluster
import akka.contrib.pattern.ClusterSharding
import akka.persistence.Persistence
import akka.persistence.journal.leveldb.SharedLeveldbJournal
import akka.persistence.journal.leveldb.SharedLeveldbStore
import akka.remote.testconductor.RoleName
import akka.remote.testkit.MultiNodeConfig
import akka.remote.testkit.MultiNodeSpec
import akka.testkit.ImplicitSender

object AuctionServiceSpec extends MultiNodeConfig {
  val controller = role("controller")
  val node1 = role("node1")
  val node2 = role("node2")

  commonConfig(ConfigFactory.parseString("""
    akka.actor.provider = "akka.cluster.ClusterActorRefProvider"
    akka.persistence.journal.plugin = "akka.persistence.journal.leveldb-shared"
    akka.persistence.journal.leveldb-shared.store {
      native = off
      dir = "target/test-shared-journal"
    }
    akka.persistence.snapshot-store.local.dir = "target/test-snapshots"
    """))
}

class AuctionServiceSpecMultiJvmNode1 extends AuctionServiceSpec
class AuctionServiceSpecMultiJvmNode2 extends AuctionServiceSpec
class AuctionServiceSpecMultiJvmNode3 extends AuctionServiceSpec

class AuctionServiceSpec extends MultiNodeSpec(AuctionServiceSpec)
  with STMultiNodeSpec with ImplicitSender {

  import AuctionServiceSpec._

  def initialParticipants = roles.size

  val storageLocations = List(
    "akka.persistence.journal.leveldb.dir",
    "akka.persistence.journal.leveldb-shared.store.dir",
    "akka.persistence.snapshot-store.local.dir").map(s => new File(system.settings.config.getString(s)))

  override protected def atStartup() {
    runOn(controller) {
      storageLocations.foreach(dir => FileUtils.deleteDirectory(dir))
    }
  }

  override protected def afterTermination() {
    runOn(controller) {
      storageLocations.foreach(dir => FileUtils.deleteDirectory(dir))
    }
  }

  def join(from: RoleName, to: RoleName): Unit = {
    runOn(from) {
      Cluster(system) join node(to).address
      startSharding()
    }
    enterBarrier(from.name + "-joined")
  }

  def startSharding(): Unit = {

    val view = ClusterSharding(system).start(
      typeName = BidView.shardName,
      entryProps = Some(BidView.props),
      idExtractor = BidView.idExtractor,
      shardResolver = BidView.shardResolver)
    ClusterSharding(system).start(
      typeName = BidProcessor.shardName,
      entryProps = Some(BidProcessor.props(view)),
      idExtractor = BidProcessor.idExtractor,
      shardResolver = BidProcessor.shardResolver)
  }

  "Sharded auction service" must {

    "create Auction" in {
      // start the Persistence extension
      Persistence(system)
      runOn(controller) {
        system.actorOf(Props[SharedLeveldbStore], "store")
      }
      enterBarrier("persistence-started")

      runOn(node1, node2) {
        system.actorSelection(node(controller) / "user" / "store") ! Identify(None)
        val sharedStore = expectMsgType[ActorIdentity].ref.get
        SharedLeveldbJournal.setStore(sharedStore, system)
      }

      enterBarrier("after-1")
    }

    "join cluster" in within(15.seconds) {
      join(node1, node1)
      join(node2, node1)
      enterBarrier("after-2")
    }

    val auctionId = UUID.randomUUID().toString

    "start auction" in within(15.seconds) {

      val now = System.currentTimeMillis()

      runOn(node1,node2) {
        val auctionRegion = ClusterSharding(system).shardRegion(BidProcessor.shardName)
        awaitAssert {
          within(5.second) {
            auctionRegion ! StartAuctionCmd(auctionId,now + 1000,now + 1000000,1,"1")
            expectMsg( StartedAuctionAck(auctionId))
          }
        }

      }

      runOn(node1,node2) {
        val auctionViewRegion = ClusterSharding(system).shardRegion(BidView.shardName)
        awaitAssert {
          within(5.second) {
            auctionViewRegion ! WinningBidPriceQuery(auctionId)
            expectMsg( WinningBidPriceResponse(auctionId,1))
          }
        }
      }
      enterBarrier("after-2")
    }

    "bid on auction" in within(15.seconds) {

      runOn(node1,node2) {
        val auctionRegion = ClusterSharding(system).shardRegion(BidProcessor.shardName)
        auctionRegion ! PlaceBidCmd(auctionId,"dave",3)
      }

      runOn(node2) {
        val auctionViewRegion = ClusterSharding(system).shardRegion(BidView.shardName)
        awaitAssert {
          within(5.second) {
            auctionViewRegion ! WinningBidPriceQuery(auctionId)
            expectMsg( WinningBidPriceResponse(auctionId,3))
          }
        }
      }
      enterBarrier("after-3")
    }

  }
}


================================================
FILE: src/multi-jvm/scala/com/boldradius/auction/cqrs/StMultiNodeSpec.scala
================================================
package com.boldradius.cqrs

import org.scalatest.{ BeforeAndAfterAll, WordSpecLike }
import org.scalatest.Matchers
import akka.remote.testkit.MultiNodeSpecCallbacks
//#imports

//#trait
/**
 * Hooks up MultiNodeSpec with ScalaTest
 */
trait STMultiNodeSpec extends MultiNodeSpecCallbacks
with WordSpecLike with Matchers with BeforeAndAfterAll {

  override def beforeAll() = multiNodeSpecBeforeAll()

  override def afterAll() = multiNodeSpecAfterAll()
}

================================================
FILE: src/test/resources/testcreatetables.cql
================================================
CREATE TABLE IF NOT EXISTS products (
  id bigint PRIMARY KEY,
  name text, 
  description text
) WITH comment='auction products';

CREATE TABLE IF NOT EXISTS auctions (
  id text,
  prodid text,
  start timestamp, 
  end timestamp,
  initialprice double,
  PRIMARY KEY (id)
) WITH comment='auctions';

CREATE TABLE IF NOT EXISTS auctionbids (
  aid text,
  bprice double,
  btime timestamp,
  bbuyer text,
  PRIMARY KEY ((aid), bprice, bbuyer, btime)
) WITH comment='auctions with bids'
	AND CLUSTERING ORDER BY (bprice DESC)
	AND caching = '{"keys":"ALL", "rows_per_partition":"1"}';


================================================
FILE: src/test/scala/com/boldradius/dddd/HttpAuctionServiceRouteSpec.scala
================================================
package com.boldradius.auction

import akka.actor.{ActorSystem, Actor, ActorRef, Props}
import com.boldradius.cqrs.AuctionCommandQueryProtocol._
import com.boldradius.cqrs._
import com.typesafe.config.ConfigFactory
import org.scalatest._
import spray.http.Uri
import spray.routing._

import scala.concurrent.duration._
import spray.json._
import spray.json.DefaultJsonProtocol
import spray.testkit.ScalatestRouteTest
import com.boldradius.util.MarshallingSupport._

object HttpAuctionServiceRouteSpec{
  import spray.util.Utils

  val (_, akkaPort) = Utils temporaryServerHostnameAndPort()

  val config = ConfigFactory.parseString( s"""
     akka.remote.netty.tcp.port = $akkaPort
     akka.log-dead-letters = off
     akka.log-dead-letters-during-shutdown = off
                                           """)

  val testSystem = ActorSystem("offers-route-spec", config)
}


class HttpAuctionServiceRouteSpec extends FeatureSpecLike
with GivenWhenThen
with ScalatestRouteTest
with MustMatchers
with BeforeAndAfterAll
with HttpAuctionServiceRoute {

  import  HttpAuctionServiceRouteSpec._

  implicit val ec = system.dispatcher

  override protected def createActorSystem(): ActorSystem = testSystem

  def actorRefFactory = testSystem



  val cmdActor:ActorRef = system.actorOf( Props( new Actor {
    def receive: Receive = {
      case StartAuctionCmd(id, start, end, initialPrice,prodId) =>
        sender() ! StartedAuctionAck(id)

      case PlaceBidCmd(id,buyer,bidPrice)=>
        sender ! PlacedBidAck(id,buyer,bidPrice,1)
    }
  }))

  val queryActor:ActorRef = system.actorOf( Props( new Actor {
    def receive: Receive = {
      case WinningBidPriceQuery(id) =>
        sender() ! WinningBidPriceResponse(id,1)
      case GetBidHistoryQuery(id) =>
        sender() ! BidHistoryResponse(id,List(Bid(1,"buyer",1)))
      case GetProdIdQuery(id) =>
        sender() ! ProdIdResponse(id,"1")
    }
  }))


  feature("Good Requests") {
    scenario("post is made to create auction") {
      Given("route is properly formed")
      When("/startAuction is called with POST")

      Post(Uri("/startAuction"),StartAuctionDto("123", "2015-01-20-15:53", "2015-01-20-15:53", 1, "1")) ~> route(cmdActor,queryActor) ~> check {
        //responseAs[Any] must be(Map("action" -> "AuctionStarted", "details" -> Map("auctionId" -> "123")))
        responseAs[AuctionStartedDto] must be(AuctionStartedDto("123","AuctionStartedDto"))
      }
      Then(s"Received POST response: ${AuctionStartedDto("123","AuctionStartedDto")}")
    }

    scenario("post is made to bid") {
      Given("route is properly formed")
      When("/bid is called with POST")
      Post(Uri("/bid"),PlaceBidDto("123", "buyer", 1)) ~> route(cmdActor,queryActor) ~> check {
        responseAs[SuccessfulBidDto] must be(SuccessfulBidDto("123",1 , "1969-12-31-19:00","SuccessfulBidDto"))
      }
      Then(s"Received POST response: ${SuccessfulBidDto("123",1 , "1969-12-31-19:00","SuccessfulBidDto")}")
    }
  }
}


================================================
FILE: tutorial/index.html
================================================
<html>

    <body>

    <div>
        <h2>Background</h2>
        <h3>Distributed Domain Driven Design</h3>
        <h3>CQRS/ES: Command Query Responsibility Segregation / Event Sourcing</h3>
        <p>CQRS is a pattern that uses separate Command and Query objects to apply the <a href="http://en.wikipedia.org/wiki/Command%E2%80%93query_separation">CQS</a> principle
        at the architectural level: commands modify data, queries retrieve it.
        </p>

        <p>
            Event Sourcing is an architectural pattern in which state is derived from an immutable log of events,
            rather than from destructive in-place updates of mutable state.
        </p>
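        <p>As a minimal sketch (the event types here are hypothetical, not taken from this template), current state in an event-sourced system is just a fold over the log:</p>

```scala
// Hypothetical events for a bank-account style log (illustration only)
sealed trait AccountEvent
final case class Deposited(amount: Double) extends AccountEvent
final case class Withdrawn(amount: Double) extends AccountEvent

// State is never updated destructively: it is derived by replaying the log
def replay(events: List[AccountEvent]): Double =
  events.foldLeft(0.0) {
    case (balance, Deposited(a)) => balance + a
    case (balance, Withdrawn(a)) => balance - a
  }
```

        <p>Appending a new event and re-running the fold yields the new state; the history itself is never lost.</p>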
    </div>

    <div>
        <h2>Getting Started</h2>

        <p>To get this application going, you will need to:

            <ul>
                <li>Set up the datastore</li>
                <li>Boot the cluster nodes</li>
                <li>Boot the Http microservice node</li>
            </ul>

        </p>


        <h3>DataStore</h3>
        <p>This application requires a distributed journal. Storage backends for journals and snapshot stores are pluggable in Akka persistence. In this case we are using <a href="http://cassandra.apache.org/download/">Cassandra</a>.
            You can find other journal plugins <a href="http://akka.io/community/?_ga=1.264939791.1443869017.1408561680">here</a>.
        </p>
        <p>
            The datastore is specified in <b>application.conf</b>
        <pre><code>cassandra-journal.contact-points = ["127.0.0.1"]</code></pre>
        <pre><code>cassandra-snapshot-store.contact-points = ["127.0.0.1"]</code></pre>
        The default is localhost. In a cloud deployment, you would list the addresses of the nodes in your Cassandra cluster.
        </p>
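        <p>For example, pointing both the journal and the snapshot store at a three-node Cassandra cluster (the addresses below are placeholders):</p>

```
cassandra-journal.contact-points = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
cassandra-snapshot-store.contact-points = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
```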


        <p>This application uses a simple domain to demonstrate CQRS and event sourcing with Akka Persistence. This domain is an online auction:</p>
        <pre><code>
final case class Bid(price:Double, buyer:String, timeStamp:Long)

final case class Auction(auctionId:String,
                         startTime:Long,
                         endTime:Long,
                         initialPrice:Double,
                         acceptedBids:List[Bid],
                         refusedBids:List[Bid],
                         ended:Boolean)
        </code></pre>
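        <p>To make the model concrete, here is a small, self-contained sketch of how a bid might be folded into an <code>Auction</code>. The <code>placeBid</code> helper and its acceptance rule are illustrative assumptions; the template's actual rules live in <b>BidProcessor.scala</b>:</p>

```scala
final case class Bid(price: Double, buyer: String, timeStamp: Long)

final case class Auction(auctionId: String, startTime: Long, endTime: Long,
                         initialPrice: Double, acceptedBids: List[Bid],
                         refusedBids: List[Bid], ended: Boolean)

// Hypothetical pure helper: a bid is accepted only if it arrives while the
// auction is open and beats the current best (or initial) price.
def placeBid(a: Auction, bid: Bid): Auction = {
  val currentBest = a.acceptedBids.headOption.map(_.price).getOrElse(a.initialPrice)
  val open = !a.ended && bid.timeStamp >= a.startTime && bid.timeStamp < a.endTime
  if (open && bid.price > currentBest) a.copy(acceptedBids = bid :: a.acceptedBids)
  else a.copy(refusedBids = bid :: a.refusedBids)
}
```

        <p>Because the case classes are immutable, every bid produces a new <code>Auction</code> value rather than mutating the old one.</p>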
        <p>This is a distributed application, leveraging <b>Akka Cluster</b>.</p>

        <p>The <b>Command</b> path of this application is illustrated by the creation of an auction, and placing bids.</p>
        <p>The <b>Query</b> path of this application is illustrated by the querying of winning bid and bid history.</p>
        <p>In order to distribute and segregate these paths, we leverage  <b>Akka Cluster</b>, as well as <b>Cluster Sharding</b>.</p>
          <p>Cluster Sharding distributes the command and query actors across several nodes in the cluster,
            and lets callers interact with them by logical identifier, without having to know their physical location in the cluster.
        </p>

        <h3>Cluster Nodes</h3>
        <p></p>
        <p>You must first boot some cluster nodes (as many as you want). Running locally, these are distinguished by port, e.g. 2551, 2552, ....<br>
            This cluster must specify one or more <b>seed nodes</b> in
        <b>application.conf</b>
<pre><code>
akka.cluster {
  seed-nodes = [
    "akka.tcp://ClusterSystem@127.0.0.1:2551",
    "akka.tcp://ClusterSystem@127.0.0.1:2552"]

  auto-down-unreachable-after = 10s
}
</code></pre>
        </p>

        <p>
            The Cluster Nodes are bootstrapped in <b>ClusterNodeApp.scala</b>.
        </p>

        <p>
            To boot each cluster node locally:
<pre><code>
sbt 'runMain com.boldradius.cqrs.ClusterNodeApp nodeIpAddress port'
</code></pre>
for example:
<pre><code>
sbt 'runMain com.boldradius.cqrs.ClusterNodeApp 127.0.0.1 2551'
</code></pre>
        </p>


        <h3>Http Microservice Node</h3>

        <p> The HTTP front end is implemented as a <b>Spray</b> microservice and is bootstrapped in <b>HttpApp.scala</b>. It joins the cluster, but only as a proxy.</p>
          <p>  To run the microservice locally:
<pre><code>
sbt 'runMain com.boldradius.cqrs.HttpApp httpIpAddress httpPort akkaIpAddress akkaPort'
</code></pre>
for example:
<pre><code>
sbt 'runMain com.boldradius.cqrs.HttpApp 127.0.0.1 9000 127.0.0.1 0'
</code></pre>



        </p>
        <p>The HTTP API enables the user to:
            <ul>
                <li>Create an Auction</li>
                <li>Place a bid</li>
                <li>Query for the current winning bid</li>
                <li>Query for the bid history</li>
            </ul>

            <h4>Create Auction</h4>
             <pre><code>
 POST http://127.0.0.1:9000/startAuction

 {"auctionId":"123",
 "start":"2015-01-20-16:25",
 "end":"2015-07-20-16:35",
 "initialPrice" : 2,
 "prodId" : "3"}
             </code></pre>

        <h4>Place Bid</h4>
             <pre><code>
POST http://127.0.0.1:9000/bid

{"auctionId":"123",
"buyer":"dave",
"bidPrice":6}
             </code></pre>

        <h4>Query for the current winning bid</h4>
             <pre><code>
GET http://127.0.0.1:9000/winningBid/123
             </code></pre>


        <h4>Query for the bid history</h4>
             <pre><code>
GET http://127.0.0.1:9000/bidHistory/123
             </code></pre>
        </p>


        <h3>Spray service forwards to the cluster</h3>
        The trait <b>HttpAuctionServiceRoute.scala</b> implements a route that takes two ActorRefs (one for commands, one for queries) as input.
        Upon receiving an HTTP request, it sends either a command message to the <b>command</b> actor or a query message to the <b>query</b> actor.


         <pre><code>
 def route(command: ActorRef, query:ActorRef) = {
     post {
        path("startAuction") {
            extract(_.request) { e =>
                entity(as[StartAuctionDto]) {
                    auction => onComplete(
                        (command ? StartAuctionCmd(auction.auctionId,....

         </code></pre>

    </div>

        <div>
            <h2>Exploring the Command path in the Cluster</h2>
            <p>The command path is implemented in <b>BidProcessor.scala</b>. This is a <b>PersistentActor</b> that receives commands:

<pre><code>
def initial: Receive = {
    case a@StartAuctionCmd(id, start, end, initialPrice, prodId) => ...
}

def takingBids(auctionId: String, startTime: Long, closeTime: Long): Receive = {
    case a@PlaceBidCmd(id, buyer, bidPrice) => ...
}
</code></pre>

and produces events, writing them to the event journal, and notifying the <b>Query</b> Path of the updated journal:

  <pre><code>
val event = AuctionStartedEvt(id, start, end, initialPrice, prodId) // the event to be persisted
persistAsync(event) { evt =>                                        // runs once the event has been written to the journal
  readRegion ! Update(await = true)                                 // update the Query path
  auctionStateMaybe = startMaybeState(id, start, end, initialPrice) // update internal state
  ...
}
  </code></pre>

            </p>
               This actor is cluster sharded on auctionId as follows:
            <pre><code>
val idExtractor: ShardRegion.IdExtractor = {
    case m: AuctionCmd => (m.auctionId, m)
}

val shardResolver: ShardRegion.ShardResolver = msg => msg match {
    case m: AuctionCmd => (math.abs(m.auctionId.hashCode) % 100).toString
}

val shardName: String = "BidProcessor"
            </code></pre>
            This means there is at most one live instance of this actor per <b>auctionId</b> in the cluster, and all commands carrying the same <b>auctionId</b> will
            be routed to that instance.
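            The shard id computation is plain Scala and can be checked in isolation; here is a standalone sketch of the resolver's expression:
            <pre><code>
```scala
// Same expression as the shardResolver above: a stable hash of the
// auctionId, taken modulo the number of shards (100 in this template).
def shardId(auctionId: String, numShards: Int = 100): String =
  (math.abs(auctionId.hashCode) % numShards).toString

// Every command for a given auctionId therefore lands in the same shard.
val a = shardId("auction-123")
val b = shardId("auction-123")
```
            </code></pre>
            Note that <code>math.abs(Int.MinValue)</code> is still negative, so an id whose hash is exactly <code>Int.MinValue</code> would yield a negative shard id; a more robust resolver would use <code>Math.floorMod</code>.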
            <p>
            <p>
             If this actor receives no commands for 1 minute, it will <b>passivate</b> (a pattern that lets the parent stop the actor to reduce memory consumption, without losing any messages that are still in flight):
              <pre><code>
/** passivate the entity when no activity */
context.setReceiveTimeout(1 minute)     // this will send a ReceiveTimeout message after one minute, if no other messages come in
        </code></pre>
            The timeout is handled in the <b>Passivation.scala</b> trait:
             <pre><code>
protected def passivate(receive: Receive): Receive = receive.orElse{
    // tell parent actor to send us a PoisonPill
    case ReceiveTimeout => context.parent ! Passivate(stopMessage = PoisonPill)

    // stop
    case PoisonPill => context.stop(self)
}
             </code></pre>
            </p>
               If this actor fails, or is passivated, and is later required again (to handle a command), the cluster will spin it up again, and it will replay the
            event journal, rebuilding its internal state:
              <pre><code>
def receiveRecover: Receive = {
    case evt: AuctionEvt => updateState(evt)

    case RecoveryCompleted =>
        auctionStateMaybe.fold[Unit]({}) { auctionState =>
            if (auctionState.ended)
                context.become(passivate(auctionClosed(auctionState.auctionId, auctionState.endTime)).orElse(unknownCommand))
            else
                context.become(passivate(takingBids(auctionState.auctionId, auctionState.startTime, auctionState.endTime)).orElse(unknownCommand))
        }
}
              </code></pre>

            </p>

        </div>
    <div>
        <h2>Exploring the Query path in the Cluster</h2>
         Queries are handled in a different actor: <b>BidView.scala</b>. This is a <b>PersistentView</b> that handles query messages, as well as prompts from
        its companion <b>PersistentActor</b> to update itself.
        <p>
            <b>BidView.scala</b> is linked to the <b>BidProcessor.scala</b> event journal via its <b>persistenceId</b>
            <pre><code>
override val persistenceId: String = "BidProcessor" + "-" + self.path.name
    </code></pre>
        This means it has access to that event journal, and can maintain and recover its state from it.
        </p>
        <p>
            It is possible for a PersistentView to save its own snapshots, but in our case it isn't required.
        </p>

        <p>
            This PersistentView is sharded in the same way the PersistentActor is:
             <pre><code>
val idExtractor: ShardRegion.IdExtractor = {
    case m : AuctionEvt => (m.auctionId,m)
    case m : BidQuery => (m.auctionId,m)
}

val shardResolver: ShardRegion.ShardResolver = {
    case m: AuctionEvt => (math.abs(m.auctionId.hashCode) % 100).toString
    case m: BidQuery => (math.abs(m.auctionId.hashCode) % 100).toString
}
        </code></pre>
        A different shard strategy could have been used here, but a consequence of the above strategy is that the query path will
        reside in the same shard region as the command path, reducing the latency of the Update message from command to query.
        </p>

        <p>
The PersistentView maintains the following model in memory:
<pre><code>
final case class BidState(auctionId:String,
                          start:Long,
                          end:Long,
                          product:Double,
                          acceptedBids:List[Bid],
                          rejectedBids:List[Bid],
                          closed:Boolean)
</code></pre>

        This model is sufficient to satisfy both queries, winning bid and bid history:

      <pre><code>

  def auctionInProgress(currentState:BidState, prodId:String):Receive = {

    case GetBidHistoryQuery(auctionId) => sender ! BidHistoryResponse(auctionId, currentState.acceptedBids)

    case WinningBidPriceQuery(auctionId) =>
      currentState.acceptedBids.headOption.fold(
        sender ! WinningBidPriceResponse(auctionId, currentState.product))(b =>
        sender ! WinningBidPriceResponse(auctionId, b.price))

    ....
  }
      </code></pre>
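
        <p>The <code>headOption.fold</code> in the winning-bid case can be distilled into a pure function. This is a standalone sketch, assuming accepted bids are kept highest-first (as the CQL clustering order suggests), with the <code>product</code> field serving as the fallback initial price:</p>

```scala
final case class Bid(price: Double, buyer: String, timeStamp: Long)

// No accepted bids yet: fall back to the initial price; otherwise the
// head of the (highest-first) list is the winning bid.
def winningPrice(initialPrice: Double, acceptedBids: List[Bid]): Double =
  acceptedBids.headOption.fold(initialPrice)(_.price)
```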

        </p>
    </div>


    </body>

</html>