[
  {
    "path": "LICENSE",
    "content": "Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived 
from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright {yyyy} {name of copyright owner}\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n\n"
  },
  {
    "path": "README.md",
    "content": "# akka-dddd-template\nAkka DDDD template using CQRS/ES with a Distributed Domain\n\nScala Version = 2.11.6\n\nAkka Version = 2.3.9\n\nSpray Version = 1.3.1\n\n\n## Background\n\n### Distributed Domain Driven Design\n\nDistributed Domain Driven Design takes the existing DDD concept and applies it ot an application intended to have each domain instance represented by an actor, as opposed to a class instance that must be synchronized via a backing persistent store. In this pattern, each domain instance is a cluster singleton, meaning updates can be made to it's state without fear of conflict.\n\n### CQRS/ES  Command Query Responsibility Segregation / Event Sourcing\n\nThis is a pattern that uses Command and Query objects to apply the [CQS](http://en.wikipedia.org/wiki/Command%E2%80%93query_separation) principle\n        for modifying and retrieving data.\n\nEvent Sourcing is an architectural pattern in which state is tracked with an immutable event log instead of\ndestructive updates (mutable).\n        \n## Getting Started\n\nTo get this application going, you will need to:\n\n*   Set up the datastore\n*   Boot the cluster nodes\n*   Boot the Http microservice node\n\n### DataStore\n\nThis application requires a distributed journal. Storage backends for journals and snapshot stores are pluggable in Akka persistence. In this case we are using [Cassandra](http://cassandra.apache.org/download/).\nYou can find other journal plugins [here](http://akka.io/community/?_ga=1.264939791.1443869017.1408561680).\n\nThe datastore is specified in **application.conf**\n\ncassandra-journal.contact-points = [\"127.0.0.1\"]\n\ncassandra-snapshot-store.contact-points = [\"127.0.0.1\"]\n\nAs you can see, the default is localhost. In a cloud deployment, you could add several addresses to a cassandra cluster.\n\nThis application uses a simple domain to demonstrate CQRS and event sourcing with Akka Persistence. 
This domain is an online auction:\n\n        final case class Bid(price:Double, buyer:String, timeStamp:Long)\n\n        final case class Auction(auctionId:String,\n                             startTime:Long,\n                             endTime:Long,\n                             initialPrice:Double,\n                             acceptedBids:List[Bid],\n                             refusedBids:List[Bid],\n                             ended:Boolean)\n\nThis is a distributed application, leveraging **Akka Cluster**.\n\nThe **Command** path of this application is illustrated by the creation of an auction and the placing of bids.\n\nThe **Query** path of this application is illustrated by queries for the winning bid and the bid history.\n\nIn order to distribute and segregate these paths, we leverage **Akka Cluster** as well as **Cluster Sharding**.\n\nCluster Sharding enables the distribution of the command and query actors across several nodes in the cluster, supporting interaction via their logical identifiers, without having to care about their physical location in the cluster.\n\n### Cluster Nodes\n\nYou must first boot some cluster nodes (as many as you want). 
Running locally, these are distinguished by port, e.g. [2551, 2552, ...].\n\nThe cluster must specify one or more **seed nodes** in **application.conf**:\n\n    akka.cluster {\n      seed-nodes = [\n        \"akka.tcp://ClusterSystem@127.0.0.1:2551\",\n        \"akka.tcp://ClusterSystem@127.0.0.1:2552\"]\n    }\n\nThe Cluster Nodes are bootstrapped in **ClusterNode.scala**.\n\nTo boot each cluster node locally:\n\n    sbt 'runMain com.boldradius.cqrs.ClusterNodeApp nodeIpAddress port'\n\nfor example:\n\n    sbt 'runMain com.boldradius.cqrs.ClusterNodeApp 127.0.0.1 2551'\n\n### Http Microservice Node\n\nThe HTTP front end is implemented as a **Spray** microservice and is bootstrapped in **HttpApp.scala**. It participates in the cluster, but only as a proxy.\n\nTo run the microservice locally:\n\n    sbt 'runMain com.boldradius.cqrs.HttpApp httpIpAddress httpPort akkaIpAddress akkaPort'\n\nfor example:\n\n    sbt 'runMain com.boldradius.cqrs.HttpApp 127.0.0.1 9000 127.0.0.1 0'\n\n\nThe HTTP API enables the user to:\n\n*   Create an Auction\n*   Place a bid\n*   Query for the current winning bid\n*   Query for the bid history\n\n#### Create Auction\n\n    POST http://127.0.0.1:9000/startAuction\n\n    {\"auctionId\":\"123\",\n     \"start\":\"2015-01-20-16:25\",\n     \"end\":\"2015-07-20-16:35\",\n     \"initialPrice\" : 2,\n     \"prodId\" : \"3\"}\n\n#### Place Bid\n\n    POST http://127.0.0.1:9000/bid\n\n    {\"auctionId\":\"123\",\n     \"buyer\":\"dave\",\n     \"bidPrice\":6}\n\n#### Query for the current winning bid\n\n    GET http://127.0.0.1:9000/winningBid/123\n\n#### Query for the bid history\n\n    GET http://127.0.0.1:9000/bidHistory/123\n\n### Spray service forwards to the cluster\n\nThe trait **HttpAuctionServiceRoute.scala** implements a route that takes two ActorRefs (one for the command path, one for the query path) as input.\nUpon receiving an HTTP request, it sends either a command message to the **command** actor or a query message to the **query** actor.\n\n     def 
route(command: ActorRef, query:ActorRef) = {\n         post {\n            path(\"startAuction\") {\n                extract(_.request) { e =>\n                    entity(as[StartAuctionDto]) {\n                        auction => onComplete(\n                            (command ? StartAuctionCmd(auction.auctionId,....\n\n## Exploring the Command path in the Cluster\n\nThe command path is implemented in **BidProcessor.scala**. This is a **PersistentActor** that receives commands:\n\n    def initial: Receive = {\n        case a@StartAuctionCmd(id, start, end, initialPrice, prodId) => ...\n    }\n\n    def takingBids(state: Auction): Receive = {\n        case a@PlaceBidCmd(id, buyer, bidPrice) => ...\n    }\n\nand produces events, writing them to the event journal and notifying the **Query** path of the updated journal:\n\n    def handleProcessedCommand(sendr: ActorRef, processedCommand: ProcessedCommand): Unit = {\n\n            // ack whether there is an event or not\n            processedCommand.event.fold(sender() ! processedCommand.ack) { evt =>\n              persist(evt) { persistedEvt =>\n                readRegion ! Update(await = true)\n                sendr ! 
processedCommand.ack\n                processedCommand.newReceive.fold({})(context.become)\n               }\n             }\n        }\n\nThis actor is cluster sharded on auctionId as follows:\n\n    val idExtractor: ShardRegion.IdExtractor = {\n        case m: AuctionCmd => (m.auctionId, m)\n    }\n\n    val shardResolver: ShardRegion.ShardResolver = msg => msg match {\n        case m: AuctionCmd => (math.abs(m.auctionId.hashCode) % 100).toString\n    }\n\n    val shardName: String = \"BidProcessor\"\n\nThis means there is only one instance of this actor in the cluster, and all commands with the same **auctionId** will be routed to the same actor.\n\nIf this actor receives no commands for 1 minute, it will **passivate** (a pattern enabling the parent to stop the actor, in order to reduce memory consumption without losing any commands it is currently processing):\n\n    /** passivate the entity when no activity */\n    context.setReceiveTimeout(1 minute)     // this will send a ReceiveTimeout message after one minute, if no other messages come in\n\nThe timeout is handled in the **Passivation.scala** trait:\n\n    protected def withPassivation(receive: Receive): Receive = receive.orElse{\n        // tell parent actor to send us a PoisonPill\n        case ReceiveTimeout => context.parent ! Passivate(stopMessage = PoisonPill)\n\n        // stop\n        case PoisonPill => context.stop(self)\n    }\n\n\nIf this actor fails, or is passivated, and is then required again (to handle a command), the cluster will spin it up, and it will replay the event journal. In this case we make use of a var, auctionRecoverStateMaybe, to capture the state while we replay. When the replay is finished, the actor is notified with the RecoveryCompleted message and we can then \"become\" appropriately to reflect this state. 
\n\n    def receiveRecover: Receive = {\n        case evt: AuctionStartedEvt =>\n            auctionRecoverStateMaybe = Some(Auction(evt.auctionId,evt.started,evt.end,evt.initialPrice,Nil,Nil,false))\n\n        case evt: AuctionEvt =>\n            auctionRecoverStateMaybe = auctionRecoverStateMaybe.map(state =>\n                updateState(evt.logInfo(\"receiveRecover\" + _.toString),state))\n\n        // Once recovery is complete, check the state to become the appropriate behaviour\n        case RecoveryCompleted =>\n            auctionRecoverStateMaybe.fold[Unit]({}) { auctionState =>\n                if (auctionState.logInfo(\"receiveRecover RecoveryCompleted state: \" + _.toString).ended)\n                    context.become(passivate(auctionClosed(auctionState)).orElse(unknownCommand))\n                else {\n                    launchLifetime(auctionState.endTime)\n                    context.become(passivate(takingBids(auctionState)).orElse(unknownCommand))\n                }\n            }\n    }\n\n## Exploring the Query path in the Cluster\n\nQueries are handled in a different actor: **BidView.scala**. 
This is a **PersistentView** that handles query messages, or prompts from its companion **PersistentActor** to update itself.\n\n**BidView.scala** is linked to the **BidProcessor.scala** event journal via its **persistenceId**:\n\n    override val persistenceId: String = \"BidProcessor\" + \"-\" + self.path.name\n\nThis means it has access to that event journal and can maintain and recover state from it.\n\nIt is possible for a PersistentView to save its own snapshots, but in our case it isn't required.\n\nThis PersistentView is sharded in the same way the PersistentActor is:\n\n    val idExtractor: ShardRegion.IdExtractor = {\n        case m : AuctionEvt => (m.auctionId,m)\n        case m : BidQuery => (m.auctionId,m)\n    }\n\n    val shardResolver: ShardRegion.ShardResolver = {\n        case m: AuctionEvt => (math.abs(m.auctionId.hashCode) % 100).toString\n        case m: BidQuery => (math.abs(m.auctionId.hashCode) % 100).toString\n    }\n\nOne could have used a different shard strategy here, but a consequence of the above strategy is that the Query path will reside in the same Shard Region as the Command path, reducing the latency of the Update() message from Command to Query.\n\nThe PersistentView maintains the following model in memory:\n\n    final case class BidState(auctionId:String,\n                             start:Long,\n                             end:Long,\n                             product:Double,\n                             acceptedBids:List[Bid],\n                             rejectedBids:List[Bid],\n                             closed:Boolean)\n\nThis model is sufficient to satisfy both queries: Winning Bid and Bid History:\n\n      def auctionInProgress(currentState:BidState, prodId:String):Receive = {\n\n        case GetBidHistoryQuery(auctionId) => sender ! 
BidHistoryResponse(auctionId,currentState.acceptedBids)\n\n        case WinningBidPriceQuery(auctionId) =>\n            currentState.acceptedBids.headOption.fold(\n                sender ! WinningBidPriceResponse(auctionId,currentState.product))(b =>\n                sender ! WinningBidPriceResponse(auctionId,b.price))\n\n        ....\n      }\n"
  },
  {
    "path": "activator.properties",
    "content": "name=akka-dddd-cqrs\ntitle=Akka Distributed Domain Driven Design with CQRS\ndescription=A starter distributed application with Akka and Spray that demonstrates Command Query Responsibility Segregation using Akka Persistence, Cluster Sharding and a distributed journal for Event Sourcing enabled with Cassandra.\ntags=akka,akka-persistence,cluster-sharding,cluster,cqrs,event-sourced\nauthorName=BoldRadius Solutions\nauthorLink=http://boldradius.com/\nauthorTwitter=@boldradius\nauthorBio=We are a custom software development, training and consulting firm specializing in the Typesafe Reactive Platform of Scala, Akka and Play Framework. Our mission is to enable our clients to adopt new technologies. We are a committed group of software developers with a mandate of solving problems with innovative, rapid solutions.\nauthorLogo=http://i59.tinypic.com/m9rc6p.png"
  },
  {
    "path": "build.sbt",
    "content": "import net.virtualvoid.sbt.graph.Plugin._\nimport sbt._\nimport sbt.Keys._\nimport com.typesafe.sbt.SbtMultiJvm\nimport com.typesafe.sbt.SbtMultiJvm.MultiJvmKeys.MultiJvm\nimport com.github.retronym.SbtOneJar\n\n\n\nparallelExecution in Test := false\n\nval baseSettings: Seq[Def.Setting[_]] =\n  graphSettings ++\n  Seq(\n    name := \"akka-dddd-template\",\n    version := \"1.0.0\",\n    organization := \"boldradius\",\n    scalaVersion := \"2.11.6\",\n    ivyScala := ivyScala.value map { _.copy(overrideScalaVersion = true) },\n    org.scalastyle.sbt.PluginKeys.config := file(\"project/scalastyle-config.xml\"),\n    scalacOptions in Compile ++= Seq(\"-encoding\", \"UTF-8\", \"-target:jvm-1.7\", \"-deprecation\", \"-unchecked\", \"-Ywarn-dead-code\", \"-Xfatal-warnings\", \"-feature\", \"-language:postfixOps\"),\n    scalacOptions in (Compile, doc) <++= (name in (Compile, doc), version in (Compile, doc)) map DefaultOptions.scaladoc,\n    javacOptions in (Compile, compile) ++= Seq(\"-source\", \"1.7\", \"-target\", \"1.7\", \"-Xlint:unchecked\", \"-Xlint:deprecation\", \"-Xlint:-options\"),\n    javacOptions in doc := Seq(),\n    javaOptions += \"-Xmx2G\",\n    outputStrategy := Some(StdoutOutput),\n    exportJars := true,\n    fork := true,\n    resolvers := ResolverSettings.resolvers,\n    Keys.fork in run := true,\n    // make sure that MultiJvm test are compiled by the default test compilation\n    compile in MultiJvm <<= (compile in MultiJvm) triggeredBy (compile in Test),\n    // disable parallel tests\n    parallelExecution in Test := false,\n    // make sure that MultiJvm tests are executed by the default test target,\n    // and combine the results from ordinary test and multi-jvm tests\n    artifact in oneJar <<= moduleName(Artifact(_)),\n    executeTests in Test <<= (executeTests in Test, executeTests in MultiJvm) map {\n      case (testResults, multiNodeResults)  =>\n        val overall =\n          if (testResults.overall.id < 
multiNodeResults.overall.id)\n            multiNodeResults.overall\n          else\n            testResults.overall\n        Tests.Output(overall,\n          testResults.events ++ multiNodeResults.events,\n          testResults.summaries ++ multiNodeResults.summaries)\n    }\n  )\n\n\nval akka = \"2.3.9\"\nval Spray = \"1.3.1\"\n\nlazy val root =  project.in( file(\".\") )\n  .settings( baseSettings ++ SbtMultiJvm.multiJvmSettings  ++ SbtOneJar.oneJarSettings ++ Defaults.itSettings :_*)\n  .settings( libraryDependencies ++= {\n        Seq(\n          \"io.spray\"                   %% \"spray-routing\"   % Spray    % \"compile\",\n          \"io.spray\"                   %% \"spray-can\"       % Spray    % \"compile\",\n          \"io.spray\"                   %%  \"spray-json\"     % Spray    % \"compile\",\n          \"io.spray\"                   %% \"spray-testkit\"   % Spray    % \"test\",\n          \"org.json4s\"                 %% \"json4s-native\"   % \"3.2.11\",\n          \"com.typesafe.akka\"          %%  \"akka-actor\"                            % akka,\n          \"com.typesafe.akka\"          %% \"akka-cluster\"                           % akka,\n          \"com.typesafe.akka\"          %% \"akka-remote\"                            % akka,\n          \"com.typesafe.akka\"          %% \"akka-contrib\"                           % akka,\n          \"com.typesafe.akka\"          %%  \"akka-slf4j\"                            % akka,\n          \"com.typesafe.akka\"          %%  \"akka-multi-node-testkit\"               % \"2.3.8\",\n          \"com.typesafe.akka\"          %%  \"akka-testkit\"                          % akka     % \"test\",\n          \"org.slf4j\"                  %   \"slf4j-api\"                             % \"1.7.7\",\n          \"com.typesafe.scala-logging\" %% \"scala-logging\"                          % \"3.0.0\",\n          \"ch.qos.logback\"             %   \"logback-core\"                          % \"1.1.2\",\n          
\"ch.qos.logback\"             %   \"logback-classic\"                       % \"1.1.2\",\n          \"com.github.krasserm\"        %% \"akka-persistence-cassandra\"             % \"0.3.5\",\n          \"org.scala-lang.modules\"     %  \"scala-xml_2.11\"                         % \"1.0.3\",\n          \"org.scalatest\"              %%  \"scalatest\"                             % \"2.2.1\"  % \"test\",\n          \"org.iq80.leveldb\"           %   \"leveldb\"                               % \"0.7\",\n          \"joda-time\"                  %  \"joda-time\"                              % \"2.7\",\n          \"org.joda\"                   %  \"joda-convert\"                           % \"1.2\",\n          \"com.datastax.cassandra\"     %  \"cassandra-driver-core\"                  % \"2.1.1\"  exclude(\"org.xerial.snappy\", \"snappy-java\"),\n          \"commons-io\"                 %  \"commons-io\"                             % \"2.4\"    % \"test\",\n          \"org.xerial.snappy\"          %  \"snappy-java\"                            % \"1.1.1.3\"\n        )\n      }\n  ).configs (MultiJvm)\n"
  },
  {
    "path": "project/ResolverSettings.scala",
    "content": "import sbt._\n\nobject ResolverSettings {\n\n  lazy val resolvers = Seq(\n    Resolver.mavenLocal,\n    Resolver.sonatypeRepo(\"releases\"),\n    Resolver.typesafeRepo(\"releases\"),\n    Resolver.typesafeRepo(\"snapshots\"),\n    Resolver.sonatypeRepo(\"snapshots\"),\n    \"Linter\" at \"http://hairyfotr.github.io/linteRepo/releases\",\n    \"krasserm\" at \"http://dl.bintray.com/krasserm/maven\"\n  )\n}"
  },
  {
    "path": "project/build.properties",
    "content": "sbt.version = 0.13.7"
  },
  {
    "path": "project/plugins.sbt",
    "content": "\n// project/plugins.sbt\ndependencyOverrides += \"org.scala-sbt\" % \"sbt\" % \"0.13.7\"\n\nresolvers += \"sonatype-releases\" at \"https://oss.sonatype.org/content/repositories/releases/\"\n\naddSbtPlugin(\"org.scalastyle\" %% \"scalastyle-sbt-plugin\" % \"0.5.0\")\n\n// Dependency graph plugin: https://github.com/jrudolph/sbt-dependency-graph\naddSbtPlugin(\"net.virtual-void\" % \"sbt-dependency-graph\" % \"0.7.4\")\n\n//addSbtPlugin(\"org.brianmckenna\" % \"sbt-wartremover\" % \"0.11\")\n\naddSbtPlugin(\"io.gatling\" % \"gatling-sbt\" % \"2.1.0\")\n\naddSbtPlugin(\"com.typesafe.sbt\" % \"sbt-multi-jvm\" % \"0.3.8\")\n\naddSbtPlugin(\"org.scoverage\" % \"sbt-scoverage\" % \"1.0.1\")\n\naddSbtPlugin(\"org.scala-sbt.plugins\" % \"sbt-onejar\" % \"0.8\")\n"
  },
  {
    "path": "src/main/resources/application.conf",
    "content": "akka {\n  loglevel = INFO\n\n  actor {\n    provider = \"akka.cluster.ClusterActorRefProvider\"\n  }\n\n  remote {\n    log-remote-lifecycle-events = off\n    netty.tcp {\n      hostname = \"127.0.0.1\"\n      port = 0\n    }\n  }\n\n  remote.watch-failure-detector.threshold = 20\n\n  cluster {\n    seed-nodes = [\n      \"akka.tcp://ClusterSystem@127.0.0.1:2551\",\n      \"akka.tcp://ClusterSystem@127.0.0.1:2552\"]\n\n    auto-down-unreachable-after = 10s\n  }\n\n  persistence {\n\n\n    journal {\n      max-message-batch-size = 200\n      max-confirmation-batch-size = 10000\n      max-deletion-batch-size = 10000\n      plugin = \"cassandra-journal\"\n    }\n    snapshot-store {\n      plugin = \"cassandra-snapshot-store\"\n    }\n\n\n    #journal.plugin = \"akka.persistence.journal.leveldb-shared\"\n    #journal.leveldb-shared.store {\n    # DO NOT USE 'native = off' IN PRODUCTION !!!\n    #native = off\n    #dir = \"target/shared-journal\"\n    #}\n    #snapshot-store.local.dir = \"target/snapshots\"\n    #view.auto-update-interval = 2s\n  }\n\n  contrib.cluster.sharding {\n    # The extension creates a top level actor with this name in top level user scope,\n    # e.g. 
'/user/sharding'\n    guardian-name = sharding\n    # If the coordinator can't store state changes it will be stopped\n    # and started again after this duration.\n    coordinator-failure-backoff = 1 s\n    # Start the coordinator singleton manager on members tagged with this role.\n    # All members are used if undefined or empty.\n    # ShardRegion actor is started in proxy only mode on nodes that are not tagged\n    # with this role.\n    role = \"\"\n    # The ShardRegion retries registration and shard location requests to the\n    # ShardCoordinator with this interval if it does not reply.\n    retry-interval = 1 s\n    # Maximum number of messages that are buffered by a ShardRegion actor.\n    buffer-size = 100000\n    # Timeout of the shard rebalancing process.\n    handoff-timeout = 60 s\n    # Time given to a region to acknowledge it is hosting a shard.\n    shard-start-timeout = 10 s\n    # If the shard can't store state changes it will retry the action\n    # again after this duration. Any messages sent to an affected entry\n    # will be buffered until the state change is processed.\n    shard-failure-backoff = 10 s\n    # If the shard is remembering entries and an entry stops itself without\n    # using passivate, 
the entry will be restarted after this duration or when\n    # the next message for it is received, whichever occurs first.\n    entry-restart-backoff = 10 s\n    # Rebalance check is performed periodically with this interval.\n    rebalance-interval = 10 s\n    # How often the coordinator saves persistent snapshots, which are\n    # used to reduce recovery times.\n    snapshot-interval = 3600 s\n    # Setting for the default shard allocation strategy\n    least-shard-allocation-strategy {\n      # Threshold of how large the difference between most and least number of\n      # allocated shards must be to begin the rebalancing.\n      rebalance-threshold = 10\n      # The number of ongoing rebalancing processes is limited to this number.\n      max-simultaneous-rebalance = 3\n    }\n  }\n\n\n}\n\n\ncassandra-journal {\n  # FQCN of the cassandra journal plugin\n  class = \"akka.persistence.cassandra.journal.CassandraJournal\"\n\n  # Comma-separated list of contact points in the cluster\n  contact-points = [\"127.0.0.1\"]\n\n  # Port of contact points in the cluster\n  port = 9042\n\n  # Name of the keyspace to be created/used by the journal\n  keyspace = \"akka_dddd_template_journal\"\n\n  # Name of the table to be created/used by the journal\n  table = \"akka_dddd_template_journal\"\n\n  # Replication factor to use when creating a keyspace\n  replication-factor = 1\n\n  # Write consistency level\n  write-consistency = \"QUORUM\"\n\n  # Read consistency level\n  read-consistency = \"QUORUM\"\n\n  # Maximum number of entries per partition (= columns per row).\n  # Must not be changed after table creation (currently not checked).\n  max-partition-size = 5000000\n\n  # Maximum size of result set\n  max-result-size = 50001\n\n  # Dispatcher for the plugin actor.\n  plugin-dispatcher = \"akka.actor.default-dispatcher\"\n\n  # Dispatcher for fetching and replaying messages\n  replay-dispatcher = 
\"akka.persistence.dispatchers.default-replay-dispatcher\"\n}\n\ncassandra-snapshot-store {\n\n  # FQCN of the cassandra snapshot store plugin\n  class = \"akka.persistence.cassandra.snapshot.CassandraSnapshotStore\"\n\n  # Comma-separated list of contact points in the cluster\n  contact-points = [\"127.0.0.1\"]\n\n  # Port of contact points in the cluster\n  port = 9042\n\n  # Name of the keyspace to be created/used by the snapshot store\n  keyspace = \"akka_dddd_template_snapshot\"\n\n  # Name of the table to be created/used by the snapshot store\n  table = \"akka_dddd_template_snapshot\"\n\n  # Replication factor to use when creating a keyspace\n  replication-factor = 1\n\n  # Write consistency level\n  write-consistency = \"ONE\"\n\n  # Read consistency level\n  read-consistency = \"ONE\"\n\n  # Maximum number of snapshot metadata to load per recursion (when trying to\n  # find a snapshot that matches specified selection criteria). Only increase\n  # this value when selection criteria frequently select snapshots that are\n  # much older than the most recent snapshot i.e. if there are much more than\n  # 10 snapshots between the most recent one and selected one. This setting is\n  # only for increasing load efficiency of snapshots.\n  max-metadata-result-size = 10\n\n  # Dispatcher for the plugin actor.\n  plugin-dispatcher = \"cassandra-snapshot-store.default-dispatcher\"\n\n  # Default dispatcher for plugin actor.\n  default-dispatcher {\n    type = Dispatcher\n    executor = \"fork-join-executor\"\n    fork-join-executor {\n      parallelism-min = 2\n      parallelism-max = 8\n    }\n  }\n}\n"
  },
  {
    "path": "src/main/scala/com/boldradius/cqrs/AuctionCommandQueryProtocol.scala",
    "content": "package com.boldradius.cqrs\n\nobject AuctionCommandQueryProtocol {\n\n  sealed  trait AuctionMsg {\n    val auctionId: String\n  }\n\n  sealed trait AuctionCmd extends AuctionMsg\n\n//  case class BootInitCmd(auctionId: String) extends AuctionCmd\n  case class StartAuctionCmd(auctionId: String, start: Long, end: Long, initialPrice: Double, prodId: String) extends AuctionCmd\n  case class PlaceBidCmd(auctionId: String, buyer: String, bidPrice: Double) extends AuctionCmd\n\n  sealed trait AuctionAck extends AuctionMsg\n\n  case class StartedAuctionAck(auctionId: String) extends AuctionAck\n  case class InvalidAuctionAck(auctionId: String, msg: String) extends AuctionAck\n  case class PlacedBidAck(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long) extends AuctionAck\n  case class RefusedBidAck(auctionId: String, buyer: String, bidPrice: Double, winningBid: Double) extends AuctionAck\n  case class FailedBidAck(auctionId: String, buyer: String, bidPrice: Double, message: String) extends AuctionAck\n  case class AuctionEndedAck(auctionId: String) extends AuctionAck\n  case class AuctionNotYetStartedAck(auctionId: String) extends AuctionAck\n\n  sealed trait BidQuery extends AuctionMsg\n\n  case class WinningBidPriceQuery(auctionId: String) extends BidQuery\n  case class GetBidHistoryQuery(auctionId: String) extends BidQuery\n  case class GetAuctionStartEnd(auctionId: String) extends BidQuery\n  case class GetProdIdQuery(auctionId: String) extends BidQuery\n\n  sealed trait BidQueryResponse  extends AuctionMsg\n\n  case class InvalidBidQueryReponse(auctionId: String, message: String) extends BidQueryResponse\n  case class AuctionNotStarted(auctionId: String) extends BidQueryResponse\n  case class WinningBidPriceResponse(auctionId: String, price: Double) extends BidQueryResponse\n  case class BidHistoryResponse(auctionId: String, bids: List[Bid]) extends BidQueryResponse\n  case class AuctionStartEndResponse(auctionId: String, start: 
Long, end: Long) extends BidQueryResponse\n  case class ProdIdResponse(auctionId: String, prodId: String) extends BidQueryResponse\n\n}\n\n"
  },
  {
    "path": "src/main/scala/com/boldradius/cqrs/BidProcessor.scala",
    "content": "package com.boldradius.cqrs\n\nimport akka.actor._\nimport akka.contrib.pattern.ShardRegion\n\nimport akka.persistence.{RecoveryCompleted, PersistentActor, SnapshotOffer, Update}\nimport AuctionCommandQueryProtocol._\nimport com.boldradius.util.ALogging\n\nimport scala.concurrent.ExecutionContext.Implicits.global\nimport scala.concurrent.duration._\n\n/**\n *\n * This is the Command side of CQRS. This actor receives commands only: AuctionStart and BidPlaced Cmds.\n *\n * These commands are transformed into events and persisted to a cassandra journal.\n * Once the events are persisted, the corresponding view is prompted to update itself from this journal\n * with Update()\n *\n * For recovery, the state of the auction is encoded in the var auctionStateMaybe: Option[AuctionBidState]\n *\n * A tick message is scheduled to signal the end of the auction\n *\n * This actor will passivate after 1 minute if no messages are received\n *\n */\nobject BidProcessor {\n\n  case object Tick\n\n  def props(readRegion: ActorRef): Props = Props(new BidProcessor(readRegion))\n\n  sealed trait AuctionEvt {\n    val auctionId: String\n  }\n\n  case class AuctionStartedEvt(auctionId: String, started: Long, end: Long, initialPrice: Double, prodId: String) extends AuctionEvt\n\n  case class AuctionEndedEvt(auctionId: String, timeStamp: Long) extends AuctionEvt\n\n  case class BidPlacedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long) extends AuctionEvt\n\n  case class BidRefusedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long) extends AuctionEvt\n\n  case class BidFailedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long, error: String) extends AuctionEvt\n\n  val idExtractor: ShardRegion.IdExtractor = {\n    case m: AuctionCmd => (m.auctionId, m)\n  }\n\n  val shardResolver: ShardRegion.ShardResolver = {\n    case m: AuctionCmd => (math.abs(m.auctionId.hashCode) % 100).toString\n  }\n\n  val shardName: 
String = \"BidProcessor\"\n}\n\nclass BidProcessor(readRegion: ActorRef) extends PersistentActor with Passivation with ALogging {\n\n  import BidProcessor._\n\n  override def persistenceId: String = self.path.parent.name + \"-\" + self.path.name\n\n  /** passivate the entity when no activity for 1 minute */\n  context.setReceiveTimeout(1 minute)\n\n\n  /**\n   * This formalizes the effects of this processor\n   * Each command results in:\n   * maybe AuctionEvt,\n   * an AuctionAck,\n   * maybe newReceive\n   */\n  private final case class ProcessedCommand(event: Option[AuctionEvt], ack: AuctionAck, newReceive: Option[Receive])\n\n\n  /**\n   * Updates Auction state\n   */\n  private def updateState(evt: AuctionEvt, state: Auction): Auction = {\n\n    evt match {\n      case AuctionEndedEvt(auctionId: String, timeStamp) =>\n        state.copy(ended = true)\n\n      case BidPlacedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long) =>\n        state.copy(acceptedBids = Bid(bidPrice, buyer, timeStamp) :: state.acceptedBids)\n\n      case BidRefusedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long) =>\n        state.copy(refusedBids = Bid(bidPrice, buyer, timeStamp) :: state.refusedBids)\n\n      case BidFailedEvt(auctionId: String, buyer: String, bidPrice: Double, timeStamp: Long, error: String) =>\n        state.copy(refusedBids = Bid(bidPrice, buyer, timeStamp) :: state.refusedBids)\n\n      case _ => state\n    }\n  }\n\n  private def getCurrentBid(state: Auction): Double =\n    state.acceptedBids match {\n      case Bid(p, _, _) :: tail => p\n      case _ => state.initialPrice\n    }\n\n\n  /**\n   * In an attempt to isolate the effects (write to journal, update state, change receive behaviour),\n   * each case of the PartialFunction[Any,Unit]  Receive functions: initial, takingBids call\n   * handleProcessedCommand ( sender, processedCommand) by convention\n   *\n   */\n  def handleProcessedCommand(sendr: ActorRef, 
processedCommand: ProcessedCommand): Unit = {\n\n    // ack whether there is an event or not\n    processedCommand.event.fold(sendr ! processedCommand.ack) { evt =>\n      persist(evt) { persistedEvt =>\n        readRegion ! Update(await = true) // update read path\n        sendr ! processedCommand.ack\n        processedCommand.newReceive.fold()(context.become) // maybe change state\n      }\n    }\n  }\n\n  override def receiveCommand: Receive = passivate(initial).orElse(unknownCommand)\n\n  def initial: Receive = {\n\n    case StartAuctionCmd(id, start, end, initialPrice, prodId) =>\n      val currentTime = System.currentTimeMillis()\n\n      if (currentTime >= end) {\n        handleProcessedCommand(sender(),\n          ProcessedCommand(None, InvalidAuctionAck(id, \"This auction is already over\"), None)\n        )\n      } else {\n        // Starting the auction, schedule a message to signal auction end\n        launchLifetime(end)\n\n        handleProcessedCommand(\n          sender(),\n          ProcessedCommand(\n            Some(AuctionStartedEvt(id, start, end, initialPrice, prodId)),\n            StartedAuctionAck(id),\n            Some(passivate(takingBids(Auction(id, start, end, initialPrice, Nil, Nil, false))).orElse(unknownCommand))\n          )\n        )\n      }\n  }\n\n  def takingBids(state: Auction): Receive = {\n\n    case Tick => // end of auction\n      val currentTime = System.currentTimeMillis()\n      persist(AuctionEndedEvt(state.auctionId, currentTime)) { evt =>\n        readRegion ! 
Update(await = true)\n        context.become(passivate(auctionClosed(updateState(evt, state))).orElse(unknownCommand))\n      }\n\n\n    case PlaceBidCmd(id, buyer, bidPrice) => {\n      val timestamp = System.currentTimeMillis()\n\n      handleProcessedCommand(sender(),\n        if (timestamp < state.endTime && timestamp >= state.startTime) {\n          val currentPrice = getCurrentBid(state)\n          if (bidPrice > currentPrice) {\n            // Successful bid\n            val evt = BidPlacedEvt(id, buyer, bidPrice, timestamp)\n            ProcessedCommand(\n              Some(evt),\n              PlacedBidAck(id, buyer, bidPrice, timestamp),\n              // update state\n              Some(passivate(takingBids(updateState(evt, state))).orElse(unknownCommand))\n            )\n          } else {\n            //Unsuccessful bid\n            val evt = BidRefusedEvt(id, buyer, bidPrice, timestamp)\n            ProcessedCommand(\n              Some(evt),\n              RefusedBidAck(id, buyer, bidPrice, currentPrice),\n              Some(passivate(takingBids(updateState(evt, state))).orElse(unknownCommand))\n            )\n          }\n        } else if (timestamp > state.endTime) {\n          // auction expired\n          ProcessedCommand(None, AuctionEndedAck(id), None)\n        } else {\n          ProcessedCommand(None, AuctionNotYetStartedAck(id), None)\n        }\n      )\n    }\n  }\n\n  def auctionClosed(state: Auction): Receive = {\n    case a: PlaceBidCmd => sender() ! AuctionEndedAck(state.auctionId)\n    case a: StartAuctionCmd => sender() ! 
AuctionEndedAck(state.auctionId)\n  }\n\n  /** Used only for recovery */\n  private var auctionRecoverStateMaybe: Option[Auction] = None\n\n  def receiveRecover: Receive = {\n    case evt: AuctionStartedEvt =>\n      auctionRecoverStateMaybe =\n        Some(Auction(evt.logInfo(\"receiveRecover evt:\" + _.toString).auctionId, evt.started, evt.end, evt.initialPrice, Nil, Nil, false))\n\n    case evt: AuctionEvt => {\n      auctionRecoverStateMaybe = auctionRecoverStateMaybe.map(state =>\n        updateState(evt.logInfo(\"receiveRecover evt:\" + _.toString), state))\n    }\n\n    case RecoveryCompleted => postRecoveryBecome(auctionRecoverStateMaybe)\n\n    // in case snapshots are implemented; currently they aren't.\n    case SnapshotOffer(_, snapshot) =>\n      postRecoveryBecome(snapshot.asInstanceOf[Option[Auction]].logInfo(\"recovery from snapshot state:\" + _.toString))\n  }\n\n\n  /**\n   * Once recovery is complete, inspect the state and become the appropriate behaviour\n   */\n  def postRecoveryBecome(auctionRecoverStateMaybe: Option[Auction]): Unit =\n    auctionRecoverStateMaybe.fold[Unit]({}) { auctionState =>\n      log.info(\"postRecoveryBecome\")\n      if (auctionState.ended)\n        context.become(passivate(auctionClosed(auctionState)).orElse(unknownCommand))\n      else {\n        launchLifetime(auctionState.endTime)\n        context.become(passivate(takingBids(auctionState)).orElse(unknownCommand))\n      }\n    }\n\n\n  def unknownCommand: Receive = {\n    case other => {\n      other.logInfo(\"unknownCommand: \" + _.toString)\n      sender() ! InvalidAuctionAck(\"\", \"InvalidAuctionAck\")\n    }\n  }\n\n  /** auction lifetime tick will send a message when the auction is over */\n  def launchLifetime(time: Long) = {\n    val auctionEnd = (time - System.currentTimeMillis()).logInfo(\"launchLifetime over in:\" + _.toString + \"ms\")\n    if (auctionEnd > 0) {\n      context.system.scheduler.scheduleOnce(auctionEnd.milliseconds, self, Tick)\n    }\n  }\n}\n"
  },
  {
    "path": "src/main/scala/com/boldradius/cqrs/BidView.scala",
    "content": "package com.boldradius.cqrs\n\nimport akka.actor._\nimport akka.contrib.pattern.ShardRegion\nimport akka.persistence.PersistentView\nimport AuctionCommandQueryProtocol._\nimport com.boldradius.cqrs.BidProcessor._\nimport com.boldradius.util.ALogging\nimport scala.concurrent.duration._\n\n\n/**\n * This actor is the Query side of CQRS.\n *\n * Each possible query result is represented as a case class (BidQueryResponse)\n *\n * This actor will initialize itself automatically upon startup from the event journal\n * stored by the corresponding PersistentActor (BidProcessor).\n *\n * There are many strategies for keeping this Actor consistent with the Write side,\n * this example uses the Update() method called from the Write side, which will cause\n * unread journal events to be sent to this actor, which, in turn, can update it's internal state.\n *\n */\n\n/**  state requred to satisfy queries  */\nfinal case class BidState(auctionId:String,\n                    start:Long,\n                    end:Long,\n                    product:Double,\n                    acceptedBids:List[Bid],\n                    rejectedBids:List[Bid],\n                    closed:Boolean)\nobject BidState{\n  def apply(auctionId:String,start:Long,end:Long,price:Double):BidState =\n    BidState(auctionId,start,end,price,Nil,Nil,false)\n}\n\nobject BidView {\n\n  def props():Props = Props(classOf[BidView])\n\n  val idExtractor: ShardRegion.IdExtractor = {\n    case m : AuctionEvt => (m.auctionId,m)\n    case m : BidQuery => (m.auctionId,m)\n  }\n\n  val shardResolver: ShardRegion.ShardResolver = {\n    case m: AuctionEvt => (math.abs(m.auctionId.hashCode) % 100).toString\n    case m: BidQuery => (math.abs(m.auctionId.hashCode) % 100).toString\n  }\n\n  val shardName: String = \"BidView\"\n\n}\n\n/**\n * The Query Actor\n */\nclass BidView extends PersistentView with ALogging with Passivation {\n\n  override val viewId: String = self.path.parent.name + \"-\" + self.path.name\n\n 
 /** It is through this persistenceId that this actor is linked to the PersistentActor's event journal */\n  override val persistenceId: String = \"BidProcessor\" + \"-\" + self.path.name\n\n  /** passivate the entity when no activity */\n  context.setReceiveTimeout(1 minute)\n\n  /**\n   * This is the initial receive method\n   *\n   * It will only process the AuctionStartedEvt or reply to the WinningBidPriceQuery\n   *\n   */\n  def receive: Receive = passivate(initial).orElse(unknownCommand)\n\n  def initial: Receive = {\n\n    case e @ AuctionStartedEvt(auctionId, started, end, initialPrice, prodId) if isPersistent =>\n      val newState = BidState(auctionId,started,end,initialPrice)\n      context.become(passivate(auctionInProgress(newState,prodId)).orElse(unknownCommand))\n\n    case WinningBidPriceQuery(auctionId) =>\n      sender ! AuctionNotStarted(auctionId)\n  }\n\n  /**\n   * Also responds to updates to the event journal (AuctionEndedEvt,BidPlacedEvt,BidRefusedEvt), and\n   * updates internal state as well as responding to queries\n   */\n  def auctionInProgress(currentState:BidState, prodId:String):Receive = {\n\n    case GetProdIdQuery(auctionId) =>\n      sender ! ProdIdResponse(auctionId,prodId)\n\n\n    case GetBidHistoryQuery(auctionId) =>\n      sender ! BidHistoryResponse(auctionId,currentState.acceptedBids)\n\n    case WinningBidPriceQuery(auctionId) =>\n        currentState.acceptedBids.headOption.fold(\n          sender ! WinningBidPriceResponse(auctionId,currentState.product))(b =>\n          sender ! 
WinningBidPriceResponse(auctionId,b.price))\n\n    case e:  AuctionEndedEvt  =>\n      val newState =  currentState.copy(closed = true)\n      context.become(passivate(auctionEnded(newState)))\n\n    case BidPlacedEvt(auctionId,buyer,bidPrice,timeStamp) if isPersistent =>\n        val newState =  currentState.copy(acceptedBids = Bid(bidPrice,buyer, timeStamp) :: currentState.acceptedBids)\n        context.become(passivate(auctionInProgress(newState,prodId)))\n\n\n    case BidRefusedEvt(auctionId,buyer,bidPrice,timeStamp) if isPersistent =>\n      val newState =  currentState.copy(rejectedBids = Bid(bidPrice,buyer, timeStamp) :: currentState.rejectedBids)\n      context.become(passivate(auctionInProgress(newState,prodId)))\n\n  }\n\n  def auctionEnded(currentState:BidState):Receive = {\n    case _ => {}\n  }\n\n  def unknownCommand:Receive = {\n    case other  => {\n      sender() ! InvalidAuctionAck(\"\",\"InvalidAuctionAck\")\n    }\n  }\n}\n"
  },
  {
    "path": "src/main/scala/com/boldradius/cqrs/ClusterBoot.scala",
    "content": "package com.boldradius.cqrs\n\nimport akka.actor.{Props, ActorRef, ActorSystem}\nimport akka.contrib.pattern.ClusterSharding\n\n\nobject ClusterBoot {\n\n  def boot(proxyOnly:Boolean = false)(clusterSystem: ActorSystem):(ActorRef,ActorRef) = {\n    val view = ClusterSharding(clusterSystem).start(\n      typeName = BidView.shardName,\n      entryProps = if(!proxyOnly) Some(BidView.props()) else None,\n      idExtractor = BidView.idExtractor,\n      shardResolver = BidView.shardResolver)\n    val processor = ClusterSharding(clusterSystem).start(\n      typeName = BidProcessor.shardName,\n      entryProps = if(!proxyOnly) Some(BidProcessor.props(view)) else None,\n      idExtractor = BidProcessor.idExtractor,\n      shardResolver = BidProcessor.shardResolver)\n    (processor,view)\n  }\n\n}\n"
  },
  {
    "path": "src/main/scala/com/boldradius/cqrs/ClusterNodeApp.scala",
    "content": "package com.boldradius.cqrs\n\nimport akka.actor.ActorSystem\nimport com.typesafe.config._\n\n/**\n * Start an akka cluster node\n * Usage:  sbt 'runMain com.boldradius.cqrs.ClusterNodeApp 127.0.0.1 2551'\n */\nobject ClusterNodeApp extends App {\n\n\n    val conf =\n      \"\"\"akka.remote.netty.tcp.hostname=\"%hostname%\"\n        |akka.remote.netty.tcp.port=%port%\n      \"\"\".stripMargin\n\n    val argumentsError = \"\"\"\n   Please run the service with the required arguments: <hostIpAddress> <port> \"\"\"\n\n    assert(args.length == 2, argumentsError)\n\n    val hostname = args(0)\n    val port = args(1).toInt\n    val config =\n      ConfigFactory.parseString( conf.replaceAll(\"%hostname%\",hostname)\n        .replaceAll(\"%port%\",port.toString)).withFallback(ConfigFactory.load())\n\n    // Create an Akka system\n    implicit val clusterSystem = ActorSystem(\"ClusterSystem\", config)\n    ClusterBoot.boot()(clusterSystem)\n}\n"
  },
  {
    "path": "src/main/scala/com/boldradius/cqrs/DomainModel.scala",
    "content": "package com.boldradius.cqrs\n\nfinal case class Bid(price:Double, buyer:String, timeStamp:Long)\n\nfinal case class Auction(auctionId:String,\n                         startTime:Long,\n                         endTime:Long,\n                         initialPrice:Double,\n                         acceptedBids:List[Bid],\n                         refusedBids:List[Bid],\n                         ended:Boolean)\n\n\n\n\n"
  },
  {
    "path": "src/main/scala/com/boldradius/cqrs/HttpApp.scala",
    "content": "package com.boldradius.cqrs\n\nimport akka.actor._\nimport spray.routing._\n\nimport com.typesafe.config.ConfigFactory\nimport spray.can.Http\nimport akka.io.IO\nimport akka.pattern.ask\nimport akka.util.Timeout\n\nimport scala.concurrent.duration._\n\n\nclass AuctionHttpActor( command:ActorRef, query:ActorRef )\n  extends HttpServiceActor\n  with HttpAuctionServiceRoute {\n  implicit val ec = context.dispatcher\n  def receive = runRoute(route(command,query))\n}\n\n/**\n * This spins up the Http server, after connecting to akka cluster.\n   * Usage:  sbt 'runMain com.boldradius.auction.cqrs.HttpApp <httpIpAddress>\" <httpPort> \"<akkaHostIpAddress>\" <akkaport>'\n *\n */\nobject HttpApp extends App{\n\n\n  private val argumentsError = \"\"\"\n   Please run the service with the required arguments: \" <httpIpAddress>\" <httpPort> \"<akkaHostIpAddress>\" <akkaport> \"\"\"\n\n\n  val conf =\n    \"\"\"akka.remote.netty.tcp.hostname=\"%hostname%\"\n       akka.remote.netty.tcp.port=%port%\n    \"\"\".stripMargin\n\n\n  assert(args.length == 4, argumentsError)\n\n  val httpHost = args(0)\n  val httpPort = args(1).toInt\n\n  val akkaHostname = args(2)\n  val akkaPort = args(3).toInt\n\n  val config =\n    ConfigFactory.parseString( conf.replaceAll(\"%hostname%\",akkaHostname)\n      .replaceAll(\"%port%\",akkaPort.toString)).withFallback(ConfigFactory.load())\n\n  implicit val system = ActorSystem(\"ClusterSystem\",config)\n\n  val (processor,view) = ClusterBoot.boot(true)(system)\n\n  val service = system.actorOf( Props( classOf[AuctionHttpActor],processor,view), \"cqrs-http-actor\")\n\n  implicit val timeout = Timeout(5.seconds)\n\n  IO(Http) ? Http.Bind(service, interface = httpHost, port = httpPort)\n}\n\n"
  },
  {
    "path": "src/main/scala/com/boldradius/cqrs/HttpAuctionServiceRoute.scala",
    "content": "package com.boldradius.cqrs\n\nimport akka.actor.ActorRef\nimport akka.pattern.ask\nimport akka.util.Timeout\nimport com.boldradius.cqrs.AuctionCommandQueryProtocol._\nimport com.boldradius.util.LLogging\nimport org.joda.time.format.DateTimeFormat\nimport spray.routing._\n\nimport scala.concurrent.ExecutionContext\nimport scala.concurrent.duration._\nimport scala.util.{Failure, Success}\n\n\nfinal case class PlaceBidDto(auctionId:String, buyer:String, bidPrice:Double )\nfinal case class StartAuctionDto(auctionId:String, start:String, end:String, initialPrice: Double, prodId: String)\nfinal case class BidDto(price:Double, buyer:String, timeStamp:String)\n\nfinal case class AuctionError(auctionId:String,msg:String,response:String = \"AuctionError\")\nfinal case class AuctionStartedDto(auctionId:String,response:String = \"AuctionStartedDto\")\nfinal case class AuctionNotStartedDto(auctionId:String,response:String = \"AuctionNotStartedDto\")\nfinal case class SuccessfulBidDto(auctionId:String, bidPrice: Double, timeStamp:String,response:String = \"SuccessfulBidDto\")\nfinal case class RejectedBidDto(auctionId:String, bidPrice: Double, currentBid:Double,response:String = \"RejectedBidDto\")\nfinal case class FailedBidDto(auctionId:String, bidPrice: Double, currentBid:Double,response:String = \"FailedBidDto\")\nfinal case class WinningBidDto(auctionId:String,bidPrice: Double,response:String = \"WinningBidDto\")\nfinal case class BidHistoryDto(auctionId:String,bids: List[BidDto],response:String = \"BidHistoryDto\")\n\n\n\n\ntrait HttpAuctionServiceRoute extends HttpService with LLogging{\n\n\n\n  implicit val ec: ExecutionContext\n\n  import com.boldradius.util.MarshallingSupport._\n\n\n  implicit val timeout = Timeout(30 seconds)\n  lazy val fmt = DateTimeFormat.forPattern(\"yyyy-MM-dd-HH:mm\")\n\n  def route(command: ActorRef, query:ActorRef) = {\n    post {\n      path(\"startAuction\") {\n          extract(_.request) { e =>\n            
entity(as[StartAuctionDto]) {\n              auction => onComplete(\n                (command ? StartAuctionCmd(auction.auctionId,\n                  fmt.parseDateTime(auction.start).getMillis,\n                  fmt.parseDateTime(auction.end).getMillis,\n                  auction.initialPrice, auction.prodId)).mapTo[AuctionAck]) {\n                case Success(ack) => ack match {\n                  case StartedAuctionAck(id) =>\n                    complete(AuctionStartedDto(id))\n                  case InvalidAuctionAck(id, msg) =>\n                    complete(AuctionError(id, msg))\n                  case other =>\n                    complete(AuctionError(ack.auctionId, ack.toString))\n                }\n                case Failure(t) =>\n                  t.printStackTrace()\n                  complete(AuctionError(auction.auctionId, t.getMessage))\n              }\n            }\n          }\n      } ~\n        path(\"bid\") {\n          detach(ec) {\n            extract(_.request) { e =>\n              entity(as[PlaceBidDto]) {\n                bid => onComplete(\n                  (command ? 
PlaceBidCmd(bid.auctionId, bid.buyer, bid.bidPrice)).mapTo[AuctionAck]) {\n                  case Success(ack) => ack.logInfo(s\"PlaceBidCmd bid.bidPrice ${bid.bidPrice} id:\" + _.auctionId.toString) match {\n                    case PlacedBidAck(id, buyer, bidPrice, timeStamp) =>\n                      complete(SuccessfulBidDto(id, bidPrice, fmt.print(timeStamp)))\n                    case RefusedBidAck(id, buyer, bidPrice, winningBid) =>\n                      complete(RejectedBidDto(id, bidPrice, winningBid))\n                    case other =>\n                      complete(AuctionError(bid.auctionId, other.toString))\n                  }\n                  case Failure(t) =>\n                    complete(AuctionError(bid.auctionId, t.getMessage))\n                }\n              }\n            }\n          }\n        }\n    } ~\n      get {\n        path(\"winningBid\" / Rest) { auctionId =>\n          detach(ec) {\n            onComplete((query ? WinningBidPriceQuery(auctionId)).mapTo[BidQueryResponse]) {\n              case Success(s) => s match {\n                case WinningBidPriceResponse(id, price) =>\n                  complete(WinningBidDto(id, price))\n                case AuctionNotStarted(id) =>\n                  complete(AuctionNotStartedDto(id))\n                case _ =>\n                  complete(AuctionError(auctionId, s.toString))\n              }\n              case Failure(t) =>\n                t.getMessage.logError(\"WinningBidPriceQuery error: \" + _)\n                complete(AuctionError(auctionId, t.getMessage))\n            }\n          }\n        } ~\n          path(\"bidHistory\" / Rest) { auctionId =>\n            onComplete((query ? 
GetBidHistoryQuery(auctionId)).mapTo[BidQueryResponse]) {\n              case Success(s) => s match {\n                case BidHistoryResponse(id, bids) =>\n                  complete(BidHistoryDto(id, bids.map(b =>\n                    BidDto(b.price, b.buyer, fmt.print(b.timeStamp)))))\n                case AuctionNotStarted(id) =>\n                  complete(AuctionNotStartedDto(id))\n                case _ =>\n                  complete(AuctionError(auctionId, s.toString))\n              }\n              case Failure(t) =>\n                complete(AuctionError(auctionId, t.getMessage))\n            }\n          }\n      }\n  }\n}\n\n"
  },
  {
    "path": "src/main/scala/com/boldradius/cqrs/Passivation.scala",
    "content": "package com.boldradius.cqrs\n\nimport akka.actor.{PoisonPill, Actor, ReceiveTimeout}\nimport com.boldradius.util.ALogging\nimport akka.contrib.pattern.ShardRegion.Passivate\n\ntrait Passivation extends ALogging {\n  this: Actor =>\n\n  protected def passivate(receive: Receive): Receive = receive.orElse{\n    // tell parent actor to send us a poisinpill\n    case ReceiveTimeout =>\n      self.logInfo( s => s\" $s ReceiveTimeout: passivating. \")\n      context.parent ! Passivate(stopMessage = PoisonPill)\n\n    // stop\n    case PoisonPill => context.stop(self.logInfo( s => s\" $s PoisonPill\"))\n  }\n}"
  },
  {
    "path": "src/main/scala/com/boldradius/util/Logging.scala",
    "content": "package com.boldradius.util\n\nimport akka.actor.{Actor, ActorLogging}\nimport com.typesafe.scalalogging.LazyLogging\nimport scala.language.implicitConversions\n\ntrait ALogging extends ActorLogging{  this: Actor =>\n\n  implicit def toLogging[V](v: V) : FLog[V] = FLog(v)\n\n  case class FLog[V](v : V)  {\n    def logInfo(f: V => String): V = {log.info(f(v)); v}\n    def logDebug(f: V => String): V = {log.debug(f(v)); v}\n    def logError(f: V => String): V = {log.error(f(v)); v}\n    def logWarn(f: V => String): V = {log.warning(f(v)); v}\n    def logTest(f: V => String): V = {println(f(v)); v}\n  }\n}\ntrait LLogging extends LazyLogging{\n\n  implicit def toLogging[V](v: V) : FLog[V] = FLog(v)\n\n  case class FLog[V](v : V)  {\n    def logInfo(f: V => String): V = {logger.info(f(v)); v}\n    def logDebug(f: V => String): V = {logger.debug(f(v)); v}\n    def logError(f: V => String): V = {logger.error(f(v)); v}\n    def logWarn(f: V => String): V = {logger.warn(f(v)); v}\n    def logTest(f: V => String): V = {println(f(v)); v}\n  }\n}\n\n\n\n\n"
  },
  {
    "path": "src/main/scala/com/boldradius/util/MarshallingSupport.scala",
    "content": "package com.boldradius.util\n\nimport org.json4s.{DefaultFormats, Formats}\nimport spray.httpx.Json4sSupport\n\n/**\n * Json marshalling for spray.\n */\nobject MarshallingSupport extends Json4sSupport {\n  implicit def json4sFormats: Formats = DefaultFormats\n}\n"
  },
  {
    "path": "src/multi-jvm/scala/com/boldradius/auction/cqrs/AuctionServiceSpec.scala",
    "content": "package com.boldradius.cqrs\n\nimport java.io.File\nimport java.util.UUID\nimport com.boldradius.cqrs.AuctionCommandQueryProtocol._\nimport scala.concurrent.duration._\nimport org.apache.commons.io.FileUtils\nimport com.typesafe.config.ConfigFactory\nimport akka.actor.ActorIdentity\nimport akka.actor.Identify\nimport akka.actor.Props\nimport akka.cluster.Cluster\nimport akka.contrib.pattern.ClusterSharding\nimport akka.persistence.Persistence\nimport akka.persistence.journal.leveldb.SharedLeveldbJournal\nimport akka.persistence.journal.leveldb.SharedLeveldbStore\nimport akka.remote.testconductor.RoleName\nimport akka.remote.testkit.MultiNodeConfig\nimport akka.remote.testkit.MultiNodeSpec\nimport akka.testkit.ImplicitSender\n\nobject AuctionServiceSpec extends MultiNodeConfig {\n  val controller = role(\"controller\")\n  val node1 = role(\"node1\")\n  val node2 = role(\"node2\")\n\n  commonConfig(ConfigFactory.parseString(\"\"\"\n    akka.actor.provider = \"akka.cluster.ClusterActorRefProvider\"\n    akka.persistence.journal.plugin = \"akka.persistence.journal.leveldb-shared\"\n    akka.persistence.journal.leveldb-shared.store {\n      native = off\n      dir = \"target/test-shared-journal\"\n    }\n    akka.persistence.snapshot-store.local.dir = \"target/test-snapshots\"\n    \"\"\"))\n}\n\nclass AuctionServiceSpecMultiJvmNode1 extends AuctionServiceSpec\nclass AuctionServiceSpecMultiJvmNode2 extends AuctionServiceSpec\nclass AuctionServiceSpecMultiJvmNode3 extends AuctionServiceSpec\n\nclass AuctionServiceSpec extends MultiNodeSpec(AuctionServiceSpec)\n  with STMultiNodeSpec with ImplicitSender {\n\n  import AuctionServiceSpec._\n\n  def initialParticipants = roles.size\n\n  val storageLocations = List(\n    \"akka.persistence.journal.leveldb.dir\",\n    \"akka.persistence.journal.leveldb-shared.store.dir\",\n    \"akka.persistence.snapshot-store.local.dir\").map(s => new File(system.settings.config.getString(s)))\n\n  override protected def 
atStartup() {\n    runOn(controller) {\n      storageLocations.foreach(dir => FileUtils.deleteDirectory(dir))\n    }\n  }\n\n  override protected def afterTermination() {\n    runOn(controller) {\n      storageLocations.foreach(dir => FileUtils.deleteDirectory(dir))\n    }\n  }\n\n  def join(from: RoleName, to: RoleName): Unit = {\n    runOn(from) {\n      Cluster(system) join node(to).address\n      startSharding()\n    }\n    enterBarrier(from.name + \"-joined\")\n  }\n\n  def startSharding(): Unit = {\n\n    val view = ClusterSharding(system).start(\n      typeName = BidView.shardName,\n      entryProps = Some(BidView.props),\n      idExtractor = BidView.idExtractor,\n      shardResolver = BidView.shardResolver)\n    ClusterSharding(system).start(\n      typeName = BidProcessor.shardName,\n      entryProps = Some(BidProcessor.props(view)),\n      idExtractor = BidProcessor.idExtractor,\n      shardResolver = BidProcessor.shardResolver)\n  }\n\n  \"Sharded auction service\" must {\n\n    \"create Auction\" in {\n      // start the Persistence extension\n      Persistence(system)\n      runOn(controller) {\n        system.actorOf(Props[SharedLeveldbStore], \"store\")\n      }\n      enterBarrier(\"peristence-started\")\n\n      runOn(node1, node2) {\n        system.actorSelection(node(controller) / \"user\" / \"store\") ! 
Identify(None)\n        val sharedStore = expectMsgType[ActorIdentity].ref.get\n        SharedLeveldbJournal.setStore(sharedStore, system)\n      }\n\n      enterBarrier(\"after-1\")\n    }\n\n    \"join cluster\" in within(15.seconds) {\n      join(node1, node1)\n      join(node2, node1)\n      enterBarrier(\"after-2\")\n    }\n\n    val auctionId = UUID.randomUUID().toString\n\n    \"start auction\" in within(15.seconds) {\n\n      val now = System.currentTimeMillis()\n\n      runOn(node1,node2) {\n        val auctionRegion = ClusterSharding(system).shardRegion(BidProcessor.shardName)\n        awaitAssert {\n          within(5.second) {\n            auctionRegion ! StartAuctionCmd(auctionId,now + 1000,now + 1000000,1,\"1\")\n            expectMsg( StartedAuctionAck(auctionId))\n          }\n        }\n\n      }\n\n      runOn(node1,node2) {\n        val auctionViewRegion = ClusterSharding(system).shardRegion(BidView.shardName)\n        awaitAssert {\n          within(5.second) {\n            auctionViewRegion ! WinningBidPriceQuery(auctionId)\n            expectMsg( WinningBidPriceResponse(auctionId,1))\n          }\n        }\n      }\n      enterBarrier(\"after-2\")\n    }\n\n    \"bid on auction\" in within(15.seconds) {\n\n      runOn(node1,node2) {\n        val auctionRegion = ClusterSharding(system).shardRegion(BidProcessor.shardName)\n        auctionRegion ! PlaceBidCmd(auctionId,\"dave\",3)\n      }\n\n      runOn(node2,node2) {\n        val auctionViewRegion = ClusterSharding(system).shardRegion(BidView.shardName)\n        awaitAssert {\n          within(5.second) {\n            auctionViewRegion ! WinningBidPriceQuery(auctionId)\n            expectMsg( WinningBidPriceResponse(auctionId,3))\n          }\n        }\n      }\n      enterBarrier(\"after-3\")\n    }\n\n  }\n}\n"
  },
  {
    "path": "src/multi-jvm/scala/com/boldradius/auction/cqrs/StMultiNodeSpec.scala",
    "content": "package com.boldradius.cqrs\n\nimport org.scalatest.{ BeforeAndAfterAll, WordSpecLike }\nimport org.scalatest.Matchers\nimport akka.remote.testkit.MultiNodeSpecCallbacks\n//#imports\n\n//#trait\n/**\n * Hooks up MultiNodeSpec with ScalaTest\n */\ntrait STMultiNodeSpec extends MultiNodeSpecCallbacks\nwith WordSpecLike with Matchers with BeforeAndAfterAll {\n\n  override def beforeAll() = multiNodeSpecBeforeAll()\n\n  override def afterAll() = multiNodeSpecAfterAll()\n}"
  },
  {
    "path": "src/test/resources/testcreatetables.cql",
    "content": "CREATE TABLE IF NOT EXISTS products (\n  id bigint PRIMARY KEY,\n  name text, \n  description text\n) WITH comment='auction products';\n\nCREATE TABLE IF NOT EXISTS auctions (\n  id text,\n  prodid text,\n  start timestamp, \n  end timestamp,\n  initialprice double,\n  PRIMARY KEY (id)\n) WITH comment='auctions';\n\nCREATE TABLE IF NOT EXISTS auctionbids (\n  aid text,\n  bprice double,\n  btime timestamp,\n  bbuyer text,\n  PRIMARY KEY ((aid), bprice, bbuyer, btime)\n) WITH comment='auctions with bids'\n\tAND CLUSTERING ORDER BY (bprice DESC)\n\tAND caching = '{\"keys\":\"ALL\", \"rows_per_partition\":\"1\"}';\n"
  },
  {
    "path": "src/test/scala/com/boldradius/dddd/HttpAuctionServiceRouteSpec.scala",
    "content": "package com.boldradius.auction\n\nimport akka.actor.{ActorSystem, Actor, ActorRef, Props}\nimport com.boldradius.cqrs.AuctionCommandQueryProtocol._\nimport com.boldradius.cqrs._\nimport com.typesafe.config.ConfigFactory\nimport org.scalatest._\nimport spray.http.Uri\nimport spray.routing._\n\nimport scala.concurrent.duration._\nimport spray.json._\nimport spray.json.DefaultJsonProtocol\nimport spray.testkit.ScalatestRouteTest\nimport com.boldradius.util.MarshallingSupport._\n\nobject HttpAuctionServiceRouteSpec{\n  import spray.util.Utils\n\n  val (_, akkaPort) = Utils temporaryServerHostnameAndPort()\n\n  val config = ConfigFactory.parseString( s\"\"\"\n     akka.remote.netty.tcp.port = $akkaPort\n     akka.log-dead-letters = off\n     akka.log-dead-letters-during-shutdown = off\n                                           \"\"\")\n\n  val testSystem = ActorSystem(\"offers-route-spec\", config)\n}\n\n\nclass HttpAuctionServiceRouteSpec extends FeatureSpecLike\nwith GivenWhenThen\nwith ScalatestRouteTest\nwith MustMatchers\nwith BeforeAndAfterAll\nwith HttpAuctionServiceRoute {\n\n  import  HttpAuctionServiceRouteSpec._\n\n  implicit val ec = system.dispatcher\n\n  override protected def createActorSystem(): ActorSystem = testSystem\n\n  def actorRefFactory = testSystem\n\n\n\n  val cmdActor:ActorRef = system.actorOf( Props( new Actor {\n    def receive: Receive = {\n      case StartAuctionCmd(id, start, end, initialPrice,prodId) =>\n        sender() ! StartedAuctionAck(id)\n\n      case PlaceBidCmd(id,buyer,bidPrice)=>\n        sender ! PlacedBidAck(id,buyer,bidPrice,1)\n    }\n  }))\n\n  val queryActor:ActorRef = system.actorOf( Props( new Actor {\n    def receive: Receive = {\n      case WinningBidPriceQuery(id) =>\n        sender() ! WinningBidPriceResponse(id,1)\n      case GetBidHistoryQuery(id) =>\n        sender() ! BidHistoryResponse(id,List(Bid(1,\"buyer\",1)))\n      case GetProdIdQuery(id) =>\n        sender() ! 
ProdIdResponse(id,\"1\")\n    }\n  }))\n\n\n  feature(\"Good Requests\") {\n    scenario(\"post is made to create auction\") {\n      Given(\"route is properly formed\")\n      When(\"/startAuction is called with POST\")\n\n      Post(Uri(\"/startAuction\"),StartAuctionDto(\"123\", \"2015-01-20-15:53\", \"2015-01-20-15:53\", 1, \"1\")) ~> route(cmdActor,queryActor) ~> check {\n        //responseAs[Any] must be(Map(\"action\" -> \"AuctionStarted\", \"details\" -> Map(\"auctionId\" -> \"123\")))\n        responseAs[AuctionStartedDto] must be(AuctionStartedDto(\"123\",\"AuctionStartedDto\"))\n      }\n      Then(s\"Received POST response: ${AuctionStartedDto(\"123\",\"AuctionStartedDto\")}\")\n    }\n\n    scenario(\"post is made to bid\") {\n      Given(\"route is properly formed\")\n      When(\"/bid is called with POST\")\n      Post(Uri(\"/bid\"),PlaceBidDto(\"123\", \"buyer\", 1)) ~> route(cmdActor,queryActor) ~> check {\n        responseAs[SuccessfulBidDto] must be(SuccessfulBidDto(\"123\",1 , \"1969-12-31-19:00\",\"SuccessfulBidDto\"))\n      }\n      Then(s\"Received POST response: ${SuccessfulBidDto(\"123\",1 , \"1969-12-31-19:00\",\"SuccessfulBidDto\")}\")\n    }\n  }\n}\n"
  },
  {
    "path": "tutorial/index.html",
    "content": "<html>\n\n    <body>\n\n    <div>\n        <h2>Background</h2>\n        <h3>Distributed Domain Driven Design</h3>\n        <h3>CQRS/ES  Command Query Responsibility Segregation / Event Sourcing</h3>\n        <p>This is a pattern that uses Command and Query objects to apply the <a href=\"http://en.wikipedia.org/wiki/Command%E2%80%93query_separation\">CQS</a> principle\n        for modifying and retrieving data.\n        </p>\n\n        <p>\n            Event Sourcing is an architectural pattern in which state is tracked with an immutable event log instead of\n            destructive updates (mutable).\n        </p>\n    </div>\n\n    <div>\n        <h2>Getting Started</h2>\n\n        <p>To get this application going, you will need to:\n\n            <ul>\n                <li>Set up the datastore</li>\n                <li>Boot the cluster nodes</li>\n                <li>Boot the Http microservice node</li>\n            </ul>\n\n        </p>\n\n\n        <h3>DataStore</h3>\n        <p>This application requires a distributed journal. Storage backends for journals and snapshot stores are pluggable in Akka persistence. In this case we are using <a href=\"http://cassandra.apache.org/download/\">Cassandra</a>.\n            You can find other journal plugins <a href=\"http://akka.io/community/?_ga=1.264939791.1443869017.1408561680\">here</a>.\n        </p>\n        <p>\n            The datastore is specified in <b>application.conf</b>\n        <pre><code>cassandra-journal.contact-points = [\"127.0.0.1\"]</code></pre>\n        <pre><code>cassandra-snapshot-store.contact-points = [\"127.0.0.1\"]</code></pre>\n        As you can see, the default is localhost. In a cloud deployment, you could add several addresses to a cassandra cluster.\n        </p>\n\n\n        <p>This application uses a simple domain to demonstrate CQRS and event sourcing with Akka Persistence. 
This domain is an online auction:</p>\n        <pre><code>\nfinal case class Bid(price:Double, buyer:String, timeStamp:Long)\n\nfinal case class Auction(auctionId:String,\n                         startTime:Long,\n                         endTime:Long,\n                         initialPrice:Double,\n                         acceptedBids:List[Bid],\n                         refusedBids:List[Bid],\n                         ended:Boolean)\n        </code></pre>\n        <p>This is a distributed application, leveraging <b>Akka Cluster</b>.</p>\n\n        <p>The <b>Command</b> path of this application is illustrated by the creation of an auction, and placing bids.</p>\n        <p>The <b>Query</b> path of this application is illustrated by the querying of winning bid and bid history.</p>\n        <p>In order to distribute and segregate these paths, we leverage  <b>Akka Cluster</b>, as well as <b>Cluster Sharding</b>.</p>\n          <p>Cluster Sharding enables the  distribution of the command and query actors across several nodes in the cluster,\n            supporting interaction using their logical identifier, without having to care about their physical location in the cluster.\n        </p>\n\n        <h3>Cluster Nodes</h3>\n        <p></p>\n        <p>You must first boot some cluster nodes (as many as you want). 
Running locally, these are distinguished by port  eg:[2551,2552,...].<br>\n            This cluster must specify one or more <b>seed nodes</b> in\n        <b>application.conf</b>\n<pre><code>\nakka.cluster {\nseed-nodes = [\n\"akka.tcp://ClusterSystem@127.0.0.1:2551\",\n\"akka.tcp://ClusterSystem@127.0.0.1:2552\"]\n\nauto-down-unreachable-after = 10s\n}\n</code></pre>\n        </p>\n\n        <p>\n            The Cluster Nodes are bootstrapped in <b>ClusterNode.scala</b>.\n        </p>\n\n        <p>\n            To boot each cluster node locally:\n<pre><code>\nsbt 'runMain com.boldradius.cqrs.ClusterNodeApp nodeIpAddress port'\n</code></pre>\nfor example:\n<pre><code>\nsbt 'runMain com.boldradius.cqrs.ClusterNodeApp 127.0.0.1 2551'\n</code></pre>\n        </p>\n\n\n        <h3>Http Microservice Node</h3>\n\n        <p> The HTTP front end is implemented as a <b>Spray</b> microservice and is bootstrapped in <b>HttpApp.scala</b>.It participates in the Cluster, but as a proxy.</p>\n          <p>  To run the microservice locally:\n<pre><code>\nsbt 'runMain com.boldradius.cqrs.HttpApp httpIpAddress httpPort akkaIpAddres akkaPort'\n</code></pre>\nfor example:\n<pre><code>\nsbt 'runMain com.boldradius.cqrs.HttpApp 127.0.0.1 9000 127.0.0.1 0'\n</code></pre>\n\n\n\n        </p>\n        <p>The HTTP API enables the user to:\n            <ul>\n                <li>Create an Auction</li>\n                <li>Place a did</li>\n                <li>Query for the current winning bid</li>\n                <li>Query for the bid history</li>\n            </ul>\n\n            <h4>Create Auction</h4>\n             <pre><code>\n POST http://127.0.0.1:9000/startAuction\n\n {\"auctionId\":\"123\",\n \"start\":\"2015-01-20-16:25\",\n \"end\":\"2015-07-20-16:35\",\n \"initialPrice\" : 2,\n \"prodId\" : \"3\"}\n             </code></pre>\n\n        <h4>Place Bid</h4>\n             <pre><code>\nPOST http://127.0.0.1:9000/bid\n\n{\"auctionId\":\"123\",\n\"buyer\":\"dave\",\n\"bidPrice\":6}\n    
         </code></pre>\n\n        <h4>Query for the current winning bid</h4>\n             <pre><code>\nGET http://127.0.0.1:9000/winningBid/123\n             </code></pre>\n\n\n        <h4>Query for the bid history</h4>\n             <pre><code>\nhttp://127.0.0.1:9000/bidHistory/123\n             </code></pre>\n        </p>\n\n\n        <h3>Spray service fowards to the cluster</h3>\n        The trait <b>HttpAuctionServiceRoute.scala</b> implements a route that takes ActorRefs (one for command and query) as input.\n        Upon receiving an Http request, it either sends a command message to the <b>command</b> actor, or a query message to the <b>query</b> actor.\n\n\n         <pre><code>\n def route(command: ActorRef, query:ActorRef) = {\n     post {\n        path(\"startAuction\") {\n            extract(_.request) { e =>\n                entity(as[StartAuctionDto]) {\n                    auction => onComplete(\n                        (command ? StartAuctionCmd(auction.auctionId,....\n\n         </code></pre>\n\n    </div>\n\n        <div>\n            <h2>Exploring the Command path in the Cluster</h2>\n            <p>The command path is implemented in <b>BidProcessor.scala</b>. This is a <b>PersistentActor</b> that receives commands:\n\n<pre><code>\ndef initial: Receive = {\n    case a@StartAuctionCmd(id, start, end, initialPrice, prodId) => ...\n}\n\ndef takingBids(auctionId: String, startTime: Long, closeTime: Long): Receive = {\n            case a@PlaceBidCmd(id, buyer, bidPrice) => ...\n}\n</code></pre>\n\nand produces events, writing them to the event journal, and notifying the <b>Query</b> Path of the updated journal:\n\n  <pre><code>\nval event = AuctionStartedEvt(id, start, end, initialPrice, prodId)   // the event to be persisted\npersistAsync(event) { evt =>                                          // block that will run once event has been written to journal\nreadRegion ! 
Update(await = true)                                   // update the Query path\nauctionStateMaybe = startMaybeState(id, start, end, initialPrice)   // update internal state\n...\n}\n  </code></pre>\n\n            </p>\n               This actor is cluster sharded on auctionId as follows:\n            <pre><code>\nval idExtractor: ShardRegion.IdExtractor = {\n    case m: AuctionCmd => (m.auctionId, m)\n}\n\nval shardResolver: ShardRegion.ShardResolver = msg => msg match {\n    case m: AuctionCmd => (math.abs(m.auctionId.hashCode) % 100).toString\n}\n\nval shardName: String = \"BidProcessor\"\n            </code></pre>\n            This means, there is only one instance of this actor in the cluster, and all commands with the same <b>auctionId</b> will\n            be routed to the same actor.\n            <p>\n            <p>\n             If this actor receives no commands for 1 minute, it will <b>passivate</b> ( a pattern enabling the parent to stop the actor, in order to reduce memory consumption without losing any commands it is currently processing):\n              <pre><code>\n/** passivate the entity when no activity */\ncontext.setReceiveTimeout(1 minute)     // this will send a ReceiveTimeout message after one minute, if no other messages come in\n        </code></pre>\n            The timeout is handled in the <b>Passivation.scala</b> trait:\n             <pre><code>\nprotected def withPassivation(receive: Receive): Receive = receive.orElse{\n    // tell parent actor to send us a poisinpill\n    case ReceiveTimeout => context.parent ! 
Passivate(stopMessage = PoisonPill)\n\n    // stop\n    case PoisonPill => context.stop(self)\n}\n             </code></pre>\n            </p>\n               If this actor fails, or is passivated, and then is required again (to handle a command), the cluster will spin it up, and it will replay the\n            event journal, updating it's internal state:\n              <pre><code>\ndef receiveRecover: Receive = {\n    case evt: AuctionEvt => updateState(evt)\n\n    case RecoveryCompleted => {\n        auctionStateMaybe.fold[Unit]({}) { auctionState =>\n            if (auctionState.ended)\n                context.become(passivate(auctionClosed(auctionState.auctionId, auctionState.endTime)).orElse(unknownCommand))\n            else{\n                context.become(passivate(takingBids(auctionState.auctionId, auctionState.startTime, auctionState.endTime)).orElse(unknownCommand))\n                }\n            }\n        }\n}\n              </code></pre>\n\n            </p>\n\n        </div>\n    <div>\n        <h2>Exploring the Query path in the Cluster</h2>\n         The Queries are handled in a different Actor: <b>BidView.scala</b>. 
This is a <b>PersistentView</b> that handles query messages, or prompts from\n        it's companion <b>PersistentActor</b> to update itself.\n        <p>\n            <b>BidView.scala</b> is linked to the <b>BidProcessor.scala</b> event journal via it's <b>persistenceId</b>\n            <pre><code>\noverride val persistenceId: String = \"BidProcessor\" + \"-\" + self.path.name\n    </code></pre>\n        This means it has access to this event journal, and can maintain, and recover state from this journal.\n        </p>\n        <p>\n            It is possible for a PersistentView to save it's own snapshots, but, in our case, it isn't required.\n        </p>\n\n        <p>\n            This PersistentView is sharded in the same way the PersistentActor is:\n             <pre><code>\nval idExtractor: ShardRegion.IdExtractor = {\n    case m : AuctionEvt => (m.auctionId,m)\n    case m : BidQuery => (m.auctionId,m)\n}\n\nval shardResolver: ShardRegion.ShardResolver = {\n    case m: AuctionEvt => (math.abs(m.auctionId.hashCode) % 100).toString\n    case m: BidQuery => (math.abs(m.auctionId.hashCode) % 100).toString\n}\n        </code></pre>\n        One could have used a different shard strategy here, but a consequence of the above strategy is that the Query Path will\n        reside in the same Shard Region as the command path, reducing latency of the Update() message from Command to Query.\n        </p>\n\n        <p>\nThe PersistentView maintains the following model in memory:\n<pre><code>\nfinal case class BidState(auctionId:String,\n                         start:Long,\n                         end:Long,\n                         product:Double,\n                         acceptedBids:List[Bid],\n                         rejectedBids:List[Bid],\n                         closed:Boolean)\n</code></pre>\n\n        This model is sufficient to satisfy both queries: Winning Bid, and  Bid History:\n\n      <pre><code>\n\n  def auctionInProgress(currentState:BidState, 
prodId:String):Receive = {\n\n    case  GetBidHistoryQuery(auctionId) =>  sender ! BidHistoryResponse(auctionId,currentState.acceptedBids)\n\n    case  WinningBidPriceQuery(auctionId) =>\n        currentState.acceptedBids.headOption.fold(\n        sender ! WinningBidPriceResponse(auctionId,currentState.product))(b =>\n        sender ! WinningBidPriceResponse(auctionId,b.price))\n\n          ....\n\n  }\n      </code></pre>\n\n        </p>\n    </div>\n\n\n    </body>\n\n</html>\n"
  }
]