[
  {
    "path": ".editorconfig",
    "content": "# For more info, see: http://EditorConfig.org\nroot = true\n\n[*.java]\nindent_style = space\nindent_size = 2\ncontinuation_indent_size = 4\n\n[*.md]\nindent_style = space\nindent_size = 2\ncontinuation_indent_size = 4\n\n[*.xml]\nindent_style = space\nindent_size = 2\ncontinuation_indent_size = 4\n"
  },
  {
    "path": ".gitignore",
    "content": "# ignore compiled byte code\ntarget\n\n# ignore output files from testing\noutput*\n\n# ignore standard Eclipse files\n.project\n.classpath\n.settings\n.checkstyle\n\n# ignore standard IntelliJ files\n.idea/\n*.iml\n*.ipr\n*.iws\n\n# ignore standard Vim and Emacs temp files\n*.swp\n*~\n\n# ignore standard Mac OS X files/dirs\n.DS_Store\n"
  },
  {
    "path": ".travis.yml",
    "content": "# Copyright (c) 2010 Yahoo! Inc., 2012 - 2015 YCSB contributors. \n# All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# more info here about TravisCI and Java projects\n# http://docs.travis-ci.com/user/languages/java/\n\nlanguage: java\n\njdk:\n  - oraclejdk9\n  - oraclejdk8\n  - openjdk7\n\naddons:\n  hosts:\n    - myshorthost\n  hostname: myshorthost\n\ninstall: mvn install -q -DskipTests=true\n\nscript: mvn test -q\n\n# Services to start for tests.\nservices:\n  - mongodb\n# temporarily disable riak. failing, docs offline.\n#  - riak\n\n\n# Can't use container based infra because of hosts/hostname\nsudo: true\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "<!--\nCopyright (c) 2017 YCSB contributors.\nAll rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n## How To Contribute\n\nAs more and more databases are created to handle distributed or \"cloud\" workloads, YCSB needs contributors to write clients to test them. And of course we always need bug fixes, updates for existing databases, and new features to keep YCSB going. Here are some guidelines to follow when digging into the code.\n\n## Project Source\n\nYCSB is located in a Git repository hosted on GitHub at [https://github.com/brianfrankcooper/YCSB](https://github.com/brianfrankcooper/YCSB). To modify the code, fork the main repo into your own GitHub account or organization and commit changes there.\n\nYCSB is written in Java (as most of the new cloud data stores at the beginning of the project were written in Java) and is laid out as a multi-module Maven project. You should be able to import the project into your favorite IDE or environment easily. For more details about the Maven layout, see the [Guide to Working with Multiple Modules](https://maven.apache.org/guides/mini/guide-multiple-modules.html).\n\n## Licensing\n\nYCSB is licensed under the Apache License, Version 2.0 (APL2). Every file included in the project must include the APL header. For example, each Java source file must have a header similar to the following:\n\n```java\n/**\n * Copyright (c) 2015-2017 YCSB contributors. 
All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */ \n```\n\nWhen modifying files that already have a license header, please update the year when you made your edits. E.g. change ``Copyright (c) 2010 Yahoo! Inc., 2012 - 2016 YCSB contributors.`` to ``Copyright (c) 2010 Yahoo! Inc., 2012 - 2017 YCSB contributors.`` If the file only has ``Copyright (c) 2010 Yahoo! Inc.``, append the current year as in ``Copyright (c) 2010 Yahoo! Inc., 2017 YCSB contributors.``.\n\n**WARNING**: It should go without saying, but don't copy and paste code from outside authors or sources. If you are a database author and want to copy some example code, it must be APL2 compatible.\n\nClient bindings to non-APL databases are perfectly acceptable, as data stores are meant to be used from all kinds of projects. Just make sure not to copy any code or commit libraries or binaries into the YCSB code base. Link to them in the Maven pom file.\n\n## Issues and Support\n\nTo track bugs, feature requests and releases we use GitHub's integrated [Issues](https://github.com/brianfrankcooper/YCSB/issues). If you find a bug or problem, open an issue with a descriptive title and as many details as you can give us in the body (stack traces, log files, etc). 
Then if you can create a fix, follow the PR guidelines below.\n\n**Note**: Before embarking on a code change or DB, search through the existing issues and pull requests to see if anyone is already working on it. Reach out to them if so.\n\nFor general support, please use the mailing list hosted (of course) with Yahoo groups at [http://groups.yahoo.com/group/ycsb-users](http://groups.yahoo.com/group/ycsb-users).\n\n## Code Style\n\nA Java coding style guide is enforced via the Maven CheckStyle plugin. We try not to be too draconian with enforcement but the biggies include:\n\n* Spaces instead of tabs.\n* Proper Javadocs for methods and classes.\n* Camel case member names.\n* Upper camel case classes and method names.\n* Line length.\n\nCheckStyle will run for pull requests or if you create a package locally, so if you just compile and push a commit, you may be surprised when the build fails with a style issue. Just execute ``mvn checkstyle:checkstyle`` before you open a PR and you should avoid any surprises.\n\n## Platforms\n\nSince most databases aim to support multiple platforms, YCSB aims to run on as many as possible as well. Besides **Linux** and **macOS**, YCSB must compile and run on **Windows**. While not all DBs will run under every platform, the YCSB tool itself must be able to execute on all of these systems and hopefully be able to communicate with remote data stores.\n\nAdditionally, YCSB is targeting Java 7 (1.7.0) as its build version as some users are glacially slow moving to Java 8. So please avoid those Lambdas and Streams for now.\n\n## Pull Requests\n\nYou've written some amazing code and are excited to share it with the community! It's time to open a PR! Here's what you should do.\n\n* Check out YCSB's ``master`` branch in your own fork and create a new branch based off of it with a name that is reflective of your work. E.g. 
``i123`` for fixing an issue or ``db_xyz`` when working on a binding.\n* Add your changes to the branch.\n* Commit the code and start the commit message with the component you are working on in square brackets. E.g. ``[core] Add another format for exporting histograms.`` or ``[hbase12] Fix interrupted exception bug.``.\n* Push to your fork and click the ``Create Pull Request`` button.\n* Wait for the build to complete in the CI pipeline. If it fails with a red X, click through to the logs for details, fix any issues, and commit your changes.\n* If you have made changes, please flatten the commits so that the commit logs are nice and clean. Just run a ``git rebase -i <hash before your first commit>``. \n\nAfter you have opened your PR, a YCSB maintainer will review it and offer constructive feedback via the GitHub review feature. If no one has responded to your PR, please bump the thread by adding comments.\n\n**NOTE**: For maintainers, please get another maintainer to sign off on your changes before merging a PR. And if you're writing code, please do create a PR from your fork, don't just push code directly to the master branch.\n\n## Core, Bindings and Workloads\n\nThe main components of the code base include the core library and benchmarking utility, various database client bindings, and workload classes and definitions.\n\n### Core\nWhen working on the core classes, keep in mind the following:\n\n* Do not change the core behavior or operation of the main benchmarking classes (particularly the Client and Workload classes). YCSB is used all over the place because it's a consistent standard that allows different users to compare results with the same workloads. If you find a way to drastically improve throughput, that's great! But please check with the rest of the maintainers to see if we can add the tweaks without invalidating years of benchmarks.\n* Do not remove or modify measurements. 
Users may have tooling to parse the outputs so if you take something out, they'll be a wee bit unhappy. Extending or adding measurements is fine (so if you do have tooling, expect additions.)\n* Do not modify existing generators. Again we don't want to invalidate years of benchmarks. Instead, create a new generator or option that can be enabled explicitly (not implicitly!) for users to try out.\n* Utility classes and methods are welcome. But if they're only ever used by a specific database binding, co-locate the code with that binding.\n* Don't change the DB interface if at all possible. Implementations can squeeze all kinds of workloads through the existing interface and while it may be easy to change the bindings included with the source code, some users may have private clients they can't share with the community. \n\n### Bindings and Clients\n\nWhen a new database is released a *binding* can be created that implements a client communicating with the given data store that will execute YCSB workloads. Details about writing a DB binding can be found on our [GitHub Wiki page](https://github.com/brianfrankcooper/YCSB/wiki/Adding-a-Database). Some development guidelines to follow include:\n\n* Create a new Maven module for your binding. Follow the existing bindings as examples.\n* The module *must* include a README.md file with details such as:\n  * Database setup with links to documentation so that the YCSB benchmarks will execute properly.\n  * Example command line executions (workload selection, etc).\n  * Required and optional properties (e.g. connection strings, behavior settings, etc) along with the default values.\n  * Versions of the database the binding supports.\n* Javadoc the binding and all of the methods. Tell us what it does and how it works.\n\nBecause YCSB is a utility to compare multiple data stores, we need each binding to behave similarly by default. 
That means each data store should enforce the strictest consistency guarantees available and avoid client side buffering or optimizations. This allows users to evaluate different DBs with a common baseline and tough standards.\n\nHowever you *should* include parameters to tune and improve performance as much as possible to reach those flashy marketing numbers. Just be honest and document what the settings do and what trade-offs are made. (e.g. client side buffering reduces I/O but a crash can lead to data loss).\n\n### Workloads\n\nYCSB began comparing various key/value data stores with simple CRUD operations. However as DBs have become more specialized we've added more workloads for various tasks and would love to have more in the future. Keep the following in mind:\n\n* Make sure more than one publicly available database can handle your workload. It's no fun if only one player is in the game.\n* Use the existing DB interface to pass your data around. If you really need another API, discuss with the maintainers to see if there isn't a workaround.\n* Provide real-world use cases for the workload, not just theoretical idealizations."
  },
  {
    "path": "LICENSE.txt",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "NOTICE.txt",
    "content": "=========================================================================\nNOTICE file for use with, and corresponding to Section 4 of,            \nthe Apache License, Version 2.0,                                  \nin this case for the YCSB project.                     \n=========================================================================\n\n   This product includes software developed by\n   Yahoo! Inc. (www.yahoo.com)\n   Copyright (c) 2010 Yahoo! Inc.  All rights reserved.\n\n   This product includes software developed by\n   Google Inc. (www.google.com)\n   Copyright (c) 2015 Google Inc.  All rights reserved.\n"
  },
  {
    "path": "README.md",
    "content": "\n# ByteIterator\n\nWhen reading data from the database, I use the ByteIterator interface; an example and the reasons for it follow.\n\nCode example:\n\n\n```Java\n  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    try {\n      byte[] value = db.get(key.getBytes());\n      Map<String, ByteIterator> deserialized = deserialize(value);\n      result.putAll(deserialized);\n    } catch (RocksDBException e) {\n      System.out.format(\"[ERROR] caught the unexpected exception -- %s\\n\", e);\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n```\n  \nWhy use ByteIterator here?\n\n  a. For performance: it avoids the allocation, copying, and transcoding costs of strings, and the value may be a binary blob such as an image.\n  \n  b. Bytes hide encoding details such as UTF-8 and GBK. Text comes off disk as raw bytes and must be decoded into characters; ByteIterator shields us from servers that use different encodings.\n  \n  \nWhat are the advantages and disadvantages of storing data as bytes?\n\n(omitted)\n\nWhat are the advantages and disadvantages of using an Iterator? Can it hide details?\n\n(omitted)\n\nWhat are the advantages and disadvantages of using ByteIterator?\n\n(omitted)\n\n\n\n<!--\nCopyright (c) 2010 Yahoo! Inc., 2012 - 2016 YCSB contributors.\nAll rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\nLevelDB and RocksDB modules of YCSB\n====================================\n[![Build Status](https://travis-ci.org/brianfrankcooper/YCSB.png?branch=master)](https://travis-ci.org/brianfrankcooper/YCSB)\n\nLinks\n-----\nhttp://wiki.github.com/brianfrankcooper/YCSB/  \nhttps://labs.yahoo.com/news/yahoo-cloud-serving-benchmark/  \nycsb-users@yahoogroups.com  \n\nGetting Started\n---------------\n\n1. 
Download the [latest release of YCSB](https://github.com/brianfrankcooper/YCSB/releases/latest):\n\n    ```sh\n    curl -O --location https://github.com/brianfrankcooper/YCSB/releases/download/0.12.0/ycsb-0.12.0.tar.gz\n    tar xfvz ycsb-0.12.0.tar.gz\n    cd ycsb-0.12.0\n    ```\n    \n2. Set up a database to benchmark. There is a README file under each binding \n   directory.\n\n3. Run the YCSB command.\n\n    On Linux:\n    ```sh\n    bin/ycsb.sh load basic -P workloads/workloada\n    bin/ycsb.sh run basic -P workloads/workloada\n    ```\n\n    On Windows:\n    ```bat\n    bin/ycsb.bat load basic -P workloads\\workloada\n    bin/ycsb.bat run basic -P workloads\\workloada\n    ```\n\n  Running the `ycsb` command without any argument will print the usage.\n   \n  See https://github.com/brianfrankcooper/YCSB/wiki/Running-a-Workload\n  for detailed documentation on how to run a workload.\n\n  See https://github.com/brianfrankcooper/YCSB/wiki/Core-Properties for \n  the list of available workload properties.\n\nBuilding from source\n--------------------\n\nYCSB requires the use of Maven 3; if you use Maven 2, you may see [errors\nsuch as these](https://github.com/brianfrankcooper/YCSB/issues/406).\n\nTo build the full distribution, with all database bindings:\n\n    mvn clean package\n\nTo build a single database binding:\n\n    mvn -pl com.yahoo.ycsb:mongodb-binding -am clean package\n"
  },
  {
    "path": "Todo.md",
    "content": "  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    try {\n      byte[] value = db.get(key.getBytes());\n      Map<String, ByteIterator> deserialized = deserialize(value);\n      result.putAll(deserialized);\n    } catch (RocksDBException e) {\n      System.out.format(\"[ERROR] caught the unexpected exception -- %s\\n\", e);\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n  \n  /**\n  \n  1. Why use ByteIterator here?\n  a. For performance: it avoids the allocation, copying, and transcoding costs of strings, and the value may be a binary blob such as an image.\n  b. Bytes hide encoding details such as UTF-8 and GBK. Text comes off disk as raw bytes and must be decoded into characters;\n    ByteIterator shields us from servers that use different encodings.\n  \n  \n  **/\n"
  },
  {
    "path": "accumulo1.6/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on [Accumulo](https://accumulo.apache.org/). \n\n### 1. Start Accumulo\n\nSee the [Accumulo Documentation](https://accumulo.apache.org/1.6/accumulo_user_manual.html#_installation)\nfor details on installing and running Accumulo.\n\nBefore running the YCSB test you must create the Accumulo table. Again see the \n[Accumulo Documentation](https://accumulo.apache.org/1.6/accumulo_user_manual.html#_basic_administration)\nfor details. The default table name is `ycsb`.\n\n### 2. Set Up YCSB\n\nClone YCSB with Git and compile:\n\n    git clone http://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:accumulo1.6-binding -am clean package\n\n### 3. Create the Accumulo table\n\nBy default, YCSB uses a table with the name \"usertable\". Users must create this table before loading\ndata into Accumulo. For maximum Accumulo performance, the Accumulo table must be pre-split. A simple\nRuby script, based on the HBase README, can generate adequate split points. Tens of tablets per\nTabletServer are a good starting point. 
Unless otherwise specified, the following commands should run\non any version of Accumulo.\n\n    $ echo 'num_splits = 20; puts (1..num_splits).map {|i| \"user#{1000+i*(9999-1000)/num_splits}\"}' | ruby > /tmp/splits.txt\n    $ accumulo shell -u <user> -p <password> -e \"createtable usertable\"\n    $ accumulo shell -u <user> -p <password> -e \"addsplits -t usertable -sf /tmp/splits.txt\"\n    $ accumulo shell -u <user> -p <password> -e \"config -t usertable -s table.cache.block.enable=true\"\n\nAdditionally, there are some other configuration properties which can increase performance. These\ncan be set on the Accumulo table via the shell after it is created. Setting the table durability\nto `flush` relaxes the constraints on data durability during hard power outages (avoids calls\nto fsync). Accumulo defaults table compression to `gzip`, which is not particularly fast; `snappy`\nis a faster and similarly-efficient option. The mutation queue property controls how many writes\nAccumulo will buffer in memory before performing a flush; this property should be set relative\nto the amount of JVM heap the TabletServers are given.\n\nPlease note that the `table.durability` and `tserver.total.mutation.queue.max` properties only\nexist in Accumulo 1.7 and later. There are no concise replacements for these properties in earlier versions.\n\n    accumulo> config -s table.durability=flush\n    accumulo> config -s tserver.total.mutation.queue.max=256M\n    accumulo> config -t usertable -s table.file.compress.type=snappy\n\nOn repeated data loads, the following commands may be helpful to reset the state of the table quickly.\n\n    accumulo> createtable tmp --copy-splits usertable --copy-config usertable\n    accumulo> deletetable --force usertable\n    accumulo> renametable tmp usertable\n    accumulo> compact --wait -t accumulo.metadata\n\n### 4. 
Load Data and Run Tests\n\nLoad the data:\n\n    ./bin/ycsb load accumulo1.6 -s -P workloads/workloada \\\n         -p accumulo.zooKeepers=localhost \\\n         -p accumulo.columnFamily=ycsb \\\n         -p accumulo.instanceName=ycsb \\\n         -p accumulo.username=user \\\n         -p accumulo.password=supersecret \\\n         > outputLoad.txt\n\nRun the workload test:\n\n    ./bin/ycsb run accumulo1.6 -s -P workloads/workloada \\\n         -p accumulo.zooKeepers=localhost \\\n         -p accumulo.columnFamily=ycsb \\\n         -p accumulo.instanceName=ycsb \\\n         -p accumulo.username=user \\\n         -p accumulo.password=supersecret \\\n         > outputRun.txt\n\n## Accumulo Configuration Parameters\n\n- `accumulo.zooKeepers`\n  - The Accumulo cluster's [ZooKeeper servers](https://accumulo.apache.org/1.6/accumulo_user_manual.html#_connecting).\n  - Should contain a comma-separated list of hostname or hostname:port values.\n  - No default value.\n\n- `accumulo.columnFamily`\n  - The name of the column family in which to store the data within the table.\n  - No default value.\n\n- `accumulo.instanceName`\n  - Name of the Accumulo [instance](https://accumulo.apache.org/1.6/accumulo_user_manual.html#_connecting).\n  - No default value.\n\n- `accumulo.username`\n  - The username to use when connecting to Accumulo.\n  - No default value.\n\n- `accumulo.password`\n  - The password for the user connecting to Accumulo.\n  - No default value.\n\n"
  },
  {
    "path": "accumulo1.6/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2011 YCSB++ project, 2014 - 2016 YCSB contributors.\nAll rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n  <artifactId>accumulo1.6-binding</artifactId>\n  <name>Accumulo 1.6 DB Binding</name>\n  <properties>\n    <!-- This should match up to the one from your Accumulo version -->\n    <hadoop.version>2.2.0</hadoop.version>\n    <!-- Tests do not run on jdk9 -->\n    <skipJDK9Tests>true</skipJDK9Tests>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>org.apache.accumulo</groupId>\n      <artifactId>accumulo-core</artifactId>\n      <version>${accumulo.1.6.version}</version>\n    </dependency>\n    <!-- Needed for hadoop.io.Text :( -->\n    <dependency>\n      <groupId>org.apache.hadoop</groupId>\n      <artifactId>hadoop-common</artifactId>\n      <version>${hadoop.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>jdk.tools</groupId>\n          
<artifactId>jdk.tools</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.accumulo</groupId>\n      <artifactId>accumulo-minicluster</artifactId>\n      <version>${accumulo.1.6.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <!-- needed directly only in test, but transitive\n         at runtime for accumulo, hadoop, and thrift. -->\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n      <version>1.7.13</version>\n    </dependency>\n  </dependencies>\n  <build>\n    <testResources>\n      <testResource>\n        <directory>../workloads</directory>\n        <targetPath>workloads</targetPath>\n      </testResource>\n      <testResource>\n        <directory>src/test/resources</directory>\n      </testResource>\n    </testResources>\n  </build>\n</project>\n"
  },
  {
    "path": "accumulo1.6/src/main/conf/accumulo.properties",
    "content": "# Copyright 2014 Cloudera, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n# Sample Accumulo configuration properties\n#\n# You may either set properties here or via the command line.\n#\n\n# This will influence the keys we write\naccumulo.columnFamily=YCSB\n\n# This should be set based on your Accumulo cluster\n#accumulo.instanceName=ExampleInstance\n\n# Comma separated list of host:port tuples for the ZooKeeper quorum used\n# by your Accumulo cluster\n#accumulo.zooKeepers=zoo1.example.com:2181,zoo2.example.com:2181,zoo3.example.com:2181\n\n# This user will need permissions on the table YCSB works against\n#accumulo.username=ycsb\n#accumulo.password=protectyaneck\n\n# Controls how long our client writer will wait to buffer more data\n# measured in milliseconds\naccumulo.batchWriterMaxLatency=30000\n\n# Controls how much data our client will attempt to buffer before sending\n# measured in bytes\naccumulo.batchWriterSize=100000\n\n# Controls how many worker threads our client will use to parallelize writes\naccumulo.batchWriterThreads=1\n"
  },
  {
    "path": "accumulo1.6/src/main/java/com/yahoo/ycsb/db/accumulo/AccumuloClient.java",
    "content": "/**\n * Copyright (c) 2011 YCSB++ project, 2014-2016 YCSB contributors.\n * All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.accumulo;\n\nimport static java.nio.charset.StandardCharsets.UTF_8;\n\nimport java.io.IOException;\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Set;\nimport java.util.SortedMap;\nimport java.util.Vector;\nimport java.util.concurrent.ConcurrentHashMap;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.accumulo.core.client.AccumuloException;\nimport org.apache.accumulo.core.client.AccumuloSecurityException;\nimport org.apache.accumulo.core.client.BatchWriter;\nimport org.apache.accumulo.core.client.BatchWriterConfig;\nimport org.apache.accumulo.core.client.Connector;\nimport org.apache.accumulo.core.client.IteratorSetting;\nimport org.apache.accumulo.core.client.MutationsRejectedException;\nimport org.apache.accumulo.core.client.Scanner;\nimport org.apache.accumulo.core.client.TableNotFoundException;\nimport org.apache.accumulo.core.client.ZooKeeperInstance;\nimport org.apache.accumulo.core.client.security.tokens.AuthenticationToken;\nimport org.apache.accumulo.core.client.security.tokens.PasswordToken;\nimport org.apache.accumulo.core.data.Key;\nimport org.apache.accumulo.core.data.Mutation;\nimport 
org.apache.accumulo.core.data.Range;\nimport org.apache.accumulo.core.data.Value;\nimport org.apache.accumulo.core.iterators.user.WholeRowIterator;\nimport org.apache.accumulo.core.security.Authorizations;\nimport org.apache.accumulo.core.util.CleanUp;\nimport org.apache.hadoop.io.Text;\n\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\n/**\n * <a href=\"https://accumulo.apache.org/\">Accumulo</a> binding for YCSB.\n */\npublic class AccumuloClient extends DB {\n\n  private ZooKeeperInstance inst;\n  private Connector connector;\n  private Text colFam = new Text(\"\");\n  private byte[] colFamBytes = new byte[0];\n  private final ConcurrentHashMap<String, BatchWriter> writers = new ConcurrentHashMap<>();\n\n  static {\n    Runtime.getRuntime().addShutdownHook(new Thread() {\n      @Override\n      public void run() {\n        CleanUp.shutdownNow();\n      }\n    });\n  }\n\n  @Override\n  public void init() throws DBException {\n    colFam = new Text(getProperties().getProperty(\"accumulo.columnFamily\"));\n    colFamBytes = colFam.toString().getBytes(UTF_8);\n\n    inst = new ZooKeeperInstance(\n        getProperties().getProperty(\"accumulo.instanceName\"),\n        getProperties().getProperty(\"accumulo.zooKeepers\"));\n    try {\n      String principal = getProperties().getProperty(\"accumulo.username\");\n      AuthenticationToken token =\n          new PasswordToken(getProperties().getProperty(\"accumulo.password\"));\n      connector = inst.getConnector(principal, token);\n    } catch (AccumuloException | AccumuloSecurityException e) {\n      throw new DBException(e);\n    }\n\n    if (!(getProperties().getProperty(\"accumulo.pcFlag\", \"none\").equals(\"none\"))) {\n      System.err.println(\"Sorry, the ZK based producer/consumer implementation has been removed. 
\" +\n          \"Please see YCSB issue #416 for work on adding a general solution to coordinated work.\");\n    }\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    try {\n      Iterator<BatchWriter> iterator = writers.values().iterator();\n      while (iterator.hasNext()) {\n        BatchWriter writer = iterator.next();\n        writer.close();\n        iterator.remove();\n      }\n    } catch (MutationsRejectedException e) {\n      throw new DBException(e);\n    }\n  }\n\n  /**\n   * Gets the {@link BatchWriter} for the given table, creating and caching\n   * a new one if necessary.\n   *\n   * @param table\n   *          The table to write to.\n   */\n  public BatchWriter getWriter(String table) throws TableNotFoundException {\n    // tl;dr We're paying a cost for the ConcurrentHashMap here to deal with the DB API.\n    // We know that YCSB is really only ever going to send us data for one table, so using\n    // a concurrent data structure is overkill (especially in such a hot code path).\n    // However, the impact seems to be relatively negligible in trivial local tests and it's\n    // \"more correct\" with respect to the API.\n    BatchWriter writer = writers.get(table);\n    if (null == writer) {\n      BatchWriter newWriter = createBatchWriter(table);\n      BatchWriter oldWriter = writers.putIfAbsent(table, newWriter);\n      // Someone beat us to creating a BatchWriter for this table; use theirs.\n      if (null != oldWriter) {\n        try {\n          // Make sure to clean up our new BatchWriter!\n          newWriter.close();\n        } catch (MutationsRejectedException e) {\n          throw new RuntimeException(e);\n        }\n        writer = oldWriter;\n      } else {\n        writer = newWriter;\n      }\n    }\n    return writer;\n  }\n\n  /**\n   * Creates a BatchWriter with the expected configuration.\n   *\n   * @param table The table to write to\n   */\n  private BatchWriter 
createBatchWriter(String table) throws TableNotFoundException {\n    BatchWriterConfig bwc = new BatchWriterConfig();\n    bwc.setMaxLatency(\n        Long.parseLong(getProperties()\n            .getProperty(\"accumulo.batchWriterMaxLatency\", \"30000\")),\n        TimeUnit.MILLISECONDS);\n    bwc.setMaxMemory(Long.parseLong(\n        getProperties().getProperty(\"accumulo.batchWriterSize\", \"100000\")));\n    final String numThreadsValue = getProperties().getProperty(\"accumulo.batchWriterThreads\");\n    // Try to saturate the client machine.\n    int numThreads = Math.max(1, Runtime.getRuntime().availableProcessors() / 2);\n    if (null != numThreadsValue) {\n      numThreads = Integer.parseInt(numThreadsValue);\n    }\n    System.err.println(\"Using \" + numThreads + \" threads to write data\");\n    bwc.setMaxWriteThreads(numThreads);\n    return connector.createBatchWriter(table, bwc);\n  }\n\n  /**\n   * Gets a scanner from Accumulo over one row.\n   *\n   * @param row the row to scan\n   * @param fields the set of columns to scan\n   * @return an Accumulo {@link Scanner} bound to the given row and columns\n   */\n  private Scanner getRow(String table, Text row, Set<String> fields) throws TableNotFoundException {\n    Scanner scanner = connector.createScanner(table, Authorizations.EMPTY);\n    scanner.setRange(new Range(row));\n    if (fields != null) {\n      for (String field : fields) {\n        scanner.fetchColumn(colFam, new Text(field));\n      }\n    }\n    return scanner;\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n                     Map<String, ByteIterator> result) {\n\n    Scanner scanner = null;\n    try {\n      scanner = getRow(table, new Text(key), null);\n      // Pick out the results we care about.\n      final Text cq = new Text();\n      for (Entry<Key, Value> entry : scanner) {\n        entry.getKey().getColumnQualifier(cq);\n        Value v = entry.getValue();\n        byte[] buf = 
v.get();\n        result.put(cq.toString(),\n            new ByteArrayByteIterator(buf));\n      }\n    } catch (Exception e) {\n      System.err.println(\"Error trying to read Accumulo table \" + table + \" \" + key);\n      e.printStackTrace();\n      return Status.ERROR;\n    } finally {\n      if (null != scanner) {\n        scanner.close();\n      }\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    // Just make the end 'infinity' and only read as much as we need.\n    Scanner scanner = null;\n    try {\n      scanner = connector.createScanner(table, Authorizations.EMPTY);\n      scanner.setRange(new Range(new Text(startkey), null));\n\n      // Have Accumulo send us complete rows, serialized in a single Key-Value pair\n      IteratorSetting cfg = new IteratorSetting(100, WholeRowIterator.class);\n      scanner.addScanIterator(cfg);\n\n      // If fields were requested, restrict the scan to those columns;\n      // otherwise the scanner returns every column in the row.\n      if (fields != null) {\n        for (String field : fields) {\n          scanner.fetchColumn(colFam, new Text(field));\n        }\n      }\n\n      int count = 0;\n      for (Entry<Key, Value> entry : scanner) {\n        // Deserialize the row\n        SortedMap<Key, Value> row = WholeRowIterator.decodeRow(entry.getKey(), entry.getValue());\n        HashMap<String, ByteIterator> rowData;\n        if (null != fields) {\n          rowData = new HashMap<>(fields.size());\n        } else {\n          rowData = new HashMap<>();\n        }\n        result.add(rowData);\n        // Parse the data in the row, avoid unnecessary Text object creation\n        final Text cq = new Text();\n        for (Entry<Key, Value> rowEntry : row.entrySet()) {\n          rowEntry.getKey().getColumnQualifier(cq);\n          rowData.put(cq.toString(), new 
ByteArrayByteIterator(rowEntry.getValue().get()));\n        }\n        if (++count == recordcount) { // We have read the requested number of rows.\n          break;\n        }\n      }\n    } catch (TableNotFoundException e) {\n      System.err.println(\"Error trying to connect to Accumulo table.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    } catch (IOException e) {\n      System.err.println(\"Error deserializing data from Accumulo.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    } finally {\n      if (null != scanner) {\n        scanner.close();\n      }\n    }\n\n    return Status.OK;\n  }\n\n  @Override\n  public Status update(String table, String key,\n                       Map<String, ByteIterator> values) {\n    BatchWriter bw = null;\n    try {\n      bw = getWriter(table);\n    } catch (TableNotFoundException e) {\n      System.err.println(\"Error opening batch writer to Accumulo table \" + table);\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    Mutation mutInsert = new Mutation(key.getBytes(UTF_8));\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      mutInsert.put(colFamBytes, entry.getKey().getBytes(UTF_8), entry.getValue().toArray());\n    }\n\n    try {\n      bw.addMutation(mutInsert);\n    } catch (MutationsRejectedException e) {\n      System.err.println(\"Error performing update.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    return Status.BATCHED_OK;\n  }\n\n  @Override\n  public Status insert(String t, String key,\n                       Map<String, ByteIterator> values) {\n    return update(t, key, values);\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    BatchWriter bw;\n    try {\n      bw = getWriter(table);\n    } catch (TableNotFoundException e) {\n      System.err.println(\"Error trying to connect to Accumulo table.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    try {\n      deleteRow(table, 
new Text(key), bw);\n    } catch (TableNotFoundException | MutationsRejectedException | RuntimeException e) {\n      System.err.println(\"Error performing delete.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  // These functions are adapted from RowOperations.java:\n  private void deleteRow(String table, Text row, BatchWriter bw) throws MutationsRejectedException,\n          TableNotFoundException {\n    // TODO Use a batchDeleter instead\n    deleteRow(getRow(table, row, null), bw);\n  }\n\n  /**\n   * Deletes a row, given a Scanner of JUST that row.\n   */\n  private void deleteRow(Scanner scanner, BatchWriter bw) throws MutationsRejectedException {\n    Mutation deleter = null;\n    // iterate through the keys\n    final Text row = new Text();\n    final Text cf = new Text();\n    final Text cq = new Text();\n    for (Entry<Key, Value> entry : scanner) {\n      // create a mutation for the row\n      if (deleter == null) {\n        entry.getKey().getRow(row);\n        deleter = new Mutation(row);\n      }\n      entry.getKey().getColumnFamily(cf);\n      entry.getKey().getColumnQualifier(cq);\n      // putDelete adds the key with the delete flag set to true\n      deleter.putDelete(cf, cq);\n    }\n\n    // The row may not exist, in which case there is nothing to delete.\n    if (null != deleter) {\n      bw.addMutation(deleter);\n    }\n  }\n}\n"
  },
  {
    "path": "accumulo1.6/src/main/java/com/yahoo/ycsb/db/accumulo/package-info.java",
    "content": "/**\n * Copyright (c) 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * YCSB binding for <a href=\"https://accumulo.apache.org/\">Apache Accumulo</a>.\n */\npackage com.yahoo.ycsb.db.accumulo;\n\n"
  },
  {
    "path": "accumulo1.6/src/test/java/com/yahoo/ycsb/db/accumulo/AccumuloTest.java",
    "content": "/*\n * Copyright (c) 2016 YCSB contributors.\n * All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.accumulo;\n\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assume.assumeTrue;\n\nimport java.util.Map.Entry;\nimport java.util.Properties;\n\nimport com.yahoo.ycsb.Workload;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.measurements.Measurements;\nimport com.yahoo.ycsb.workloads.CoreWorkload;\n\nimport org.apache.accumulo.core.client.Connector;\nimport org.apache.accumulo.core.client.Scanner;\nimport org.apache.accumulo.core.client.security.tokens.PasswordToken;\nimport org.apache.accumulo.core.data.Key;\nimport org.apache.accumulo.core.data.Value;\nimport org.apache.accumulo.core.security.Authorizations;\nimport org.apache.accumulo.core.security.TablePermission;\nimport org.apache.accumulo.minicluster.MiniAccumuloCluster;\nimport org.junit.After;\nimport org.junit.AfterClass;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.ClassRule;\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.rules.TemporaryFolder;\nimport org.junit.rules.TestName;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\n/**\n * Use an Accumulo MiniCluster to test out basic workload operations with\n * the Accumulo binding.\n */\npublic class AccumuloTest {\n  
private static final Logger LOG = LoggerFactory.getLogger(AccumuloTest.class);\n  private static final int INSERT_COUNT = 2000;\n  private static final int TRANSACTION_COUNT = 2000;\n\n  @ClassRule\n  public static TemporaryFolder workingDir = new TemporaryFolder();\n  @Rule\n  public TestName test = new TestName();\n\n  private static MiniAccumuloCluster cluster;\n  private static Properties properties;\n  private Workload workload;\n  private DB client;\n  private Properties workloadProps;\n\n  private static boolean isWindows() {\n    final String os = System.getProperty(\"os.name\");\n    return os.startsWith(\"Windows\");\n  }\n\n  @BeforeClass\n  public static void setup() throws Exception {\n    // Minicluster setup fails on Windows with an UnsatisfiedLinkError.\n    // Skip if windows.\n    assumeTrue(!isWindows());\n    cluster = new MiniAccumuloCluster(workingDir.newFolder(\"accumulo\").getAbsoluteFile(), \"protectyaneck\");\n    LOG.debug(\"starting minicluster\");\n    cluster.start();\n    LOG.debug(\"creating connection for admin operations.\");\n    // set up the table and user\n    final Connector admin = cluster.getConnector(\"root\", \"protectyaneck\");\n    admin.tableOperations().create(CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n    admin.securityOperations().createLocalUser(\"ycsb\", new PasswordToken(\"protectyaneck\"));\n    admin.securityOperations().grantTablePermission(\"ycsb\", CoreWorkload.TABLENAME_PROPERTY_DEFAULT, TablePermission.READ);\n    admin.securityOperations().grantTablePermission(\"ycsb\", CoreWorkload.TABLENAME_PROPERTY_DEFAULT, TablePermission.WRITE);\n\n    // set properties the binding will read\n    properties = new Properties();\n    properties.setProperty(\"accumulo.zooKeepers\", cluster.getZooKeepers());\n    properties.setProperty(\"accumulo.instanceName\", cluster.getInstanceName());\n    properties.setProperty(\"accumulo.columnFamily\", \"family\");\n    properties.setProperty(\"accumulo.username\", \"ycsb\");\n    
properties.setProperty(\"accumulo.password\", \"protectyaneck\");\n    // cut down the batch writer timeout so that writes will push through.\n    properties.setProperty(\"accumulo.batchWriterMaxLatency\", \"4\");\n    // set these explicitly to the defaults at the time we're compiled, since they'll be inlined in our class.\n    properties.setProperty(CoreWorkload.TABLENAME_PROPERTY, CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n    properties.setProperty(CoreWorkload.FIELD_COUNT_PROPERTY, CoreWorkload.FIELD_COUNT_PROPERTY_DEFAULT);\n    properties.setProperty(CoreWorkload.INSERT_ORDER_PROPERTY, \"ordered\");\n  }\n\n  @AfterClass\n  public static void clusterCleanup() throws Exception {\n    if (cluster != null) {\n      LOG.debug(\"shutting down minicluster\");\n      cluster.stop();\n      cluster = null;\n    }\n  }\n\n  @Before\n  public void client() throws Exception {\n\n    LOG.debug(\"Loading workload properties for {}\", test.getMethodName());\n    workloadProps = new Properties();\n    workloadProps.load(getClass().getResourceAsStream(\"/workloads/\" + test.getMethodName()));\n\n    for (String prop : properties.stringPropertyNames()) {\n      workloadProps.setProperty(prop, properties.getProperty(prop));\n    }\n\n    // TODO we need a better test rig for 'run this ycsb workload'\n    LOG.debug(\"initializing measurements and workload\");\n    Measurements.setProperties(workloadProps);\n    workload = new CoreWorkload();\n    workload.init(workloadProps);\n\n    LOG.debug(\"initializing client\");\n    client = new AccumuloClient();\n    client.setProperties(workloadProps);\n    client.init();\n  }\n\n  @After\n  public void cleanup() throws Exception {\n    if (client != null) {\n      LOG.debug(\"cleaning up client\");\n      client.cleanup();\n      client = null;\n    }\n    if (workload != null) {\n      LOG.debug(\"cleaning up workload\");\n      workload.cleanup();\n    }\n  }\n\n  @After\n  public void truncateTable() throws Exception {\n    if 
(cluster != null) {\n      LOG.debug(\"truncating table {}\", CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n      final Connector admin = cluster.getConnector(\"root\", \"protectyaneck\");\n      admin.tableOperations().deleteRows(CoreWorkload.TABLENAME_PROPERTY_DEFAULT, null, null);\n    }\n  }\n\n  @Test\n  public void workloada() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloadb() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloadc() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloadd() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloade() throws Exception {\n    runWorkload();\n  }\n\n  /**\n   * go through a workload cycle.\n   * <ol>\n   *   <li>initialize thread-specific state\n   *   <li>load the workload dataset\n   *   <li>run workload transactions\n   * </ol>\n   */\n  private void runWorkload() throws Exception {\n    final Object state = workload.initThread(workloadProps,0,0);\n    LOG.debug(\"load\");\n    for (int i = 0; i < INSERT_COUNT; i++) {\n      assertTrue(\"insert failed.\", workload.doInsert(client, state));\n    }\n    // Ensure we wait long enough for the batch writer to flush\n    // TODO accumulo client should be flushing per insert by default.\n    Thread.sleep(2000);\n    LOG.debug(\"verify number of cells\");\n    final Scanner scanner = cluster.getConnector(\"root\", \"protectyaneck\").createScanner(CoreWorkload.TABLENAME_PROPERTY_DEFAULT, Authorizations.EMPTY);\n    int count = 0;\n    for (Entry<Key, Value> entry : scanner) {\n      count++;\n    }\n    assertEquals(\"Didn't get enough total cells.\", (Integer.valueOf(CoreWorkload.FIELD_COUNT_PROPERTY_DEFAULT) * INSERT_COUNT), count);\n    LOG.debug(\"run\");\n    for (int i = 0; i < TRANSACTION_COUNT; i++) {\n      assertTrue(\"transaction failed.\", workload.doTransaction(client, state));\n    }\n  }\n}\n"
  },
  {
    "path": "accumulo1.6/src/test/resources/log4j.properties",
    "content": "#\n# Copyright (c) 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n\n# Root logger option\nlog4j.rootLogger=INFO, stderr\n\nlog4j.appender.stderr=org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.target=System.err\nlog4j.appender.stderr.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.conversionPattern=%d{yyyy/MM/dd HH:mm:ss} %-5p %c %x - %m%n\n\n# Suppress noisy messages from ZooKeeper and Accumulo\nlog4j.logger.com.yahoo.ycsb.db.accumulo=INFO\nlog4j.logger.org.apache.zookeeper=ERROR\nlog4j.logger.org.apache.accumulo=WARN\n"
  },
  {
    "path": "accumulo1.7/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on [Accumulo](https://accumulo.apache.org/).\n\n### 1. Start Accumulo\n\nSee the [Accumulo Documentation](https://accumulo.apache.org/1.7/accumulo_user_manual.html#_installation)\nfor details on installing and running Accumulo.\n\nBefore running the YCSB test you must create the Accumulo table. Again, see the\n[Accumulo Documentation](https://accumulo.apache.org/1.7/accumulo_user_manual.html#_basic_administration)\nfor details. The default table name is `ycsb`.\n\n### 2. Set Up YCSB\n\nClone YCSB and compile:\n\n    git clone http://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:accumulo1.7-binding -am clean package\n\n### 3. Create the Accumulo table\n\nBy default, YCSB uses a table with the name \"usertable\". Users must create this table before loading\ndata into Accumulo. For maximum Accumulo performance, the Accumulo table must be pre-split. A simple\nRuby script, based on the HBase README, can generate adequate split points. Tens of tablets per\nTabletServer is a good starting point. 
Unless otherwise specified, the following commands should run\non any version of Accumulo.\n\n    $ echo 'num_splits = 20; puts (1..num_splits).map {|i| \"user#{1000+i*(9999-1000)/num_splits}\"}' | ruby > /tmp/splits.txt\n    $ accumulo shell -u <user> -p <password> -e \"createtable usertable\"\n    $ accumulo shell -u <user> -p <password> -e \"addsplits -t usertable -sf /tmp/splits.txt\"\n    $ accumulo shell -u <user> -p <password> -e \"config -t usertable -s table.cache.block.enable=true\"\n\nAdditionally, there are other configuration properties that can increase performance. These\ncan be set on the Accumulo table via the shell after it is created. Setting the table durability\nto `flush` relaxes the constraints on data durability during hard power outages (it avoids calls\nto fsync). Accumulo defaults table compression to `gzip`, which is not particularly fast; `snappy`\nis a faster and similarly efficient option. The mutation queue property controls how many writes\nAccumulo will buffer in memory before performing a flush; this property should be set relative\nto the amount of JVM heap the TabletServers are given.\n\nPlease note that the `table.durability` and `tserver.total.mutation.queue.max` properties only\nexist in Accumulo 1.7 and later. There are no concise replacements for these properties in earlier versions.\n\n    accumulo> config -s table.durability=flush\n    accumulo> config -s tserver.total.mutation.queue.max=256M\n    accumulo> config -t usertable -s table.file.compress.type=snappy\n\nOn repeated data loads, the following commands may be helpful to reset the state of the table quickly.\n\n    accumulo> createtable tmp --copy-splits usertable --copy-config usertable\n    accumulo> deletetable --force usertable\n    accumulo> renametable tmp usertable\n    accumulo> compact --wait -t accumulo.metadata\n\n### 4. 
Load Data and Run Tests\n\nLoad the data:\n\n    ./bin/ycsb load accumulo1.7 -s -P workloads/workloada \\\n         -p accumulo.zooKeepers=localhost \\\n         -p accumulo.columnFamily=ycsb \\\n         -p accumulo.instanceName=ycsb \\\n         -p accumulo.username=user \\\n         -p accumulo.password=supersecret \\\n         > outputLoad.txt\n\nRun the workload test:\n\n    ./bin/ycsb run accumulo1.7 -s -P workloads/workloada \\\n         -p accumulo.zooKeepers=localhost \\\n         -p accumulo.columnFamily=ycsb \\\n         -p accumulo.instanceName=ycsb \\\n         -p accumulo.username=user \\\n         -p accumulo.password=supersecret \\\n         > outputRun.txt\n\n## Accumulo Configuration Parameters\n\n- `accumulo.zooKeepers`\n  - The Accumulo cluster's [ZooKeeper servers](https://accumulo.apache.org/1.7/accumulo_user_manual.html#_connecting).\n  - Should contain a comma-separated list of hostname or hostname:port values.\n  - No default value.\n\n- `accumulo.columnFamily`\n  - The name of the column family used to store the data within the table.\n  - No default value.\n\n- `accumulo.instanceName`\n  - Name of the Accumulo [instance](https://accumulo.apache.org/1.7/accumulo_user_manual.html#_connecting).\n  - No default value.\n\n- `accumulo.username`\n  - The username to use when connecting to Accumulo.\n  - No default value.\n\n- `accumulo.password`\n  - The password for the user connecting to Accumulo.\n  - No default value.\n\n"
  },
  {
    "path": "accumulo1.7/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2011 YCSB++ project, 2014 - 2016 YCSB contributors.\nAll rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n  <artifactId>accumulo1.7-binding</artifactId>\n  <name>Accumulo 1.7 DB Binding</name>\n  <properties>\n    <!-- This should match up to the one from your Accumulo version -->\n    <hadoop.version>2.2.0</hadoop.version>\n    <!-- Tests do not run on jdk9 -->\n    <skipJDK9Tests>true</skipJDK9Tests>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>org.apache.accumulo</groupId>\n      <artifactId>accumulo-core</artifactId>\n      <version>${accumulo.1.7.version}</version>\n    </dependency>\n    <!-- Needed for hadoop.io.Text :( -->\n    <dependency>\n      <groupId>org.apache.hadoop</groupId>\n      <artifactId>hadoop-common</artifactId>\n      <version>${hadoop.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>jdk.tools</groupId>\n          
<artifactId>jdk.tools</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.accumulo</groupId>\n      <artifactId>accumulo-minicluster</artifactId>\n      <version>${accumulo.1.7.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <!-- needed directly only in test, but transitive\n         at runtime for accumulo, hadoop, and thrift. -->\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n      <version>1.7.13</version>\n    </dependency>\n  </dependencies>\n  <build>\n    <testResources>\n      <testResource>\n        <directory>../workloads</directory>\n        <targetPath>workloads</targetPath>\n      </testResource>\n      <testResource>\n        <directory>src/test/resources</directory>\n      </testResource>\n    </testResources>\n  </build>\n</project>\n"
  },
  {
    "path": "accumulo1.7/src/main/conf/accumulo.properties",
    "content": "# Copyright 2014 Cloudera, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n# Sample Accumulo configuration properties\n#\n# You may either set properties here or via the command line.\n#\n\n# This will influence the keys we write\naccumulo.columnFamily=YCSB\n\n# This should be set based on your Accumulo cluster\n#accumulo.instanceName=ExampleInstance\n\n# Comma separated list of host:port tuples for the ZooKeeper quorum used\n# by your Accumulo cluster\n#accumulo.zooKeepers=zoo1.example.com:2181,zoo2.example.com:2181,zoo3.example.com:2181\n\n# This user will need permissions on the table YCSB works against\n#accumulo.username=ycsb\n#accumulo.password=protectyaneck\n\n# Controls how long our client writer will wait to buffer more data\n# measured in milliseconds\naccumulo.batchWriterMaxLatency=30000\n\n# Controls how much data our client will attempt to buffer before sending\n# measured in bytes\naccumulo.batchWriterSize=100000\n\n# Controls how many worker threads our client will use to parallelize writes\naccumulo.batchWriterThreads=1\n"
  },
  {
    "path": "accumulo1.7/src/main/java/com/yahoo/ycsb/db/accumulo/AccumuloClient.java",
    "content": "/**\n * Copyright (c) 2011 YCSB++ project, 2014-2016 YCSB contributors.\n * All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.accumulo;\n\nimport static java.nio.charset.StandardCharsets.UTF_8;\n\nimport java.io.IOException;\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Set;\nimport java.util.SortedMap;\nimport java.util.Vector;\nimport java.util.concurrent.ConcurrentHashMap;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.accumulo.core.client.AccumuloException;\nimport org.apache.accumulo.core.client.AccumuloSecurityException;\nimport org.apache.accumulo.core.client.BatchWriter;\nimport org.apache.accumulo.core.client.BatchWriterConfig;\nimport org.apache.accumulo.core.client.ClientConfiguration;\nimport org.apache.accumulo.core.client.Connector;\nimport org.apache.accumulo.core.client.IteratorSetting;\nimport org.apache.accumulo.core.client.MutationsRejectedException;\nimport org.apache.accumulo.core.client.Scanner;\nimport org.apache.accumulo.core.client.TableNotFoundException;\nimport org.apache.accumulo.core.client.ZooKeeperInstance;\nimport org.apache.accumulo.core.client.security.tokens.AuthenticationToken;\nimport org.apache.accumulo.core.client.security.tokens.PasswordToken;\nimport org.apache.accumulo.core.data.Key;\nimport 
org.apache.accumulo.core.data.Mutation;\nimport org.apache.accumulo.core.data.Range;\nimport org.apache.accumulo.core.data.Value;\nimport org.apache.accumulo.core.iterators.user.WholeRowIterator;\nimport org.apache.accumulo.core.security.Authorizations;\nimport org.apache.accumulo.core.util.CleanUp;\nimport org.apache.hadoop.io.Text;\n\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\n/**\n * <a href=\"https://accumulo.apache.org/\">Accumulo</a> binding for YCSB.\n */\npublic class AccumuloClient extends DB {\n\n  private ZooKeeperInstance inst;\n  private Connector connector;\n  private Text colFam = new Text(\"\");\n  private byte[] colFamBytes = new byte[0];\n  private final ConcurrentHashMap<String, BatchWriter> writers = new ConcurrentHashMap<>();\n\n  static {\n    Runtime.getRuntime().addShutdownHook(new Thread() {\n      @Override\n      public void run() {\n        CleanUp.shutdownNow();\n      }\n    });\n  }\n\n  @Override\n  public void init() throws DBException {\n    colFam = new Text(getProperties().getProperty(\"accumulo.columnFamily\"));\n    colFamBytes = colFam.toString().getBytes(UTF_8);\n\n    inst = new ZooKeeperInstance(new ClientConfiguration()\n        .withInstance(getProperties().getProperty(\"accumulo.instanceName\"))\n        .withZkHosts(getProperties().getProperty(\"accumulo.zooKeepers\")));\n    try {\n      String principal = getProperties().getProperty(\"accumulo.username\");\n      AuthenticationToken token =\n          new PasswordToken(getProperties().getProperty(\"accumulo.password\"));\n      connector = inst.getConnector(principal, token);\n    } catch (AccumuloException | AccumuloSecurityException e) {\n      throw new DBException(e);\n    }\n\n    if (!(getProperties().getProperty(\"accumulo.pcFlag\", \"none\").equals(\"none\"))) {\n      System.err.println(\"Sorry, the ZK based producer/consumer 
implementation has been removed. \" +\n          \"Please see YCSB issue #416 for work on adding a general solution to coordinated work.\");\n    }\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    try {\n      Iterator<BatchWriter> iterator = writers.values().iterator();\n      while (iterator.hasNext()) {\n        BatchWriter writer = iterator.next();\n        writer.close();\n        iterator.remove();\n      }\n    } catch (MutationsRejectedException e) {\n      throw new DBException(e);\n    }\n  }\n\n  /**\n   * Returns the cached {@link BatchWriter} for the given table, creating and\n   * caching a new one if this is the first request for that table.\n   *\n   * @param table\n   *          The table to write to.\n   */\n  public BatchWriter getWriter(String table) throws TableNotFoundException {\n    // tl;dr We're paying a cost for the ConcurrentHashMap here to deal with the DB api.\n    // We know that YCSB is really only ever going to send us data for one table, so using\n    // a concurrent data structure is overkill (especially in such a hot code path).\n    // However, the impact seems to be relatively negligible in trivial local tests and it's\n    // \"more correct\" WRT the API.\n    BatchWriter writer = writers.get(table);\n    if (null == writer) {\n      BatchWriter newWriter = createBatchWriter(table);\n      BatchWriter oldWriter = writers.putIfAbsent(table, newWriter);\n      // Someone beat us to creating a BatchWriter for this table; use their BatchWriter.\n      if (null != oldWriter) {\n        try {\n          // Make sure to clean up our new BatchWriter!\n          newWriter.close();\n        } catch (MutationsRejectedException e) {\n          throw new RuntimeException(e);\n        }\n        writer = oldWriter;\n      } else {\n        writer = newWriter;\n      }\n    }\n    return writer;\n  }\n\n  /**\n   * Creates a BatchWriter with the expected configuration.\n   *\n   * @param table The table to write 
to\n   */\n  private BatchWriter createBatchWriter(String table) throws TableNotFoundException {\n    BatchWriterConfig bwc = new BatchWriterConfig();\n    bwc.setMaxLatency(\n        Long.parseLong(getProperties()\n            .getProperty(\"accumulo.batchWriterMaxLatency\", \"30000\")),\n        TimeUnit.MILLISECONDS);\n    bwc.setMaxMemory(Long.parseLong(\n        getProperties().getProperty(\"accumulo.batchWriterSize\", \"100000\")));\n    final String numThreadsValue = getProperties().getProperty(\"accumulo.batchWriterThreads\");\n    // Try to saturate the client machine.\n    int numThreads = Math.max(1, Runtime.getRuntime().availableProcessors() / 2);\n    if (null != numThreadsValue) {\n      numThreads = Integer.parseInt(numThreadsValue);\n    }\n    System.err.println(\"Using \" + numThreads + \" threads to write data\");\n    bwc.setMaxWriteThreads(numThreads);\n    return connector.createBatchWriter(table, bwc);\n  }\n\n  /**\n   * Gets a scanner from Accumulo over one row.\n   *\n   * @param row the row to scan\n   * @param fields the set of columns to scan\n   * @return an Accumulo {@link Scanner} bound to the given row and columns\n   */\n  private Scanner getRow(String table, Text row, Set<String> fields) throws TableNotFoundException {\n    Scanner scanner = connector.createScanner(table, Authorizations.EMPTY);\n    scanner.setRange(new Range(row));\n    if (fields != null) {\n      for (String field : fields) {\n        scanner.fetchColumn(colFam, new Text(field));\n      }\n    }\n    return scanner;\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n                     Map<String, ByteIterator> result) {\n\n    Scanner scanner = null;\n    try {\n      scanner = getRow(table, new Text(key), null);\n      // Pick out the results we care about.\n      final Text cq = new Text();\n      for (Entry<Key, Value> entry : scanner) {\n        entry.getKey().getColumnQualifier(cq);\n        Value v = 
entry.getValue();\n        byte[] buf = v.get();\n        result.put(cq.toString(),\n            new ByteArrayByteIterator(buf));\n      }\n    } catch (Exception e) {\n      System.err.println(\"Error trying to read Accumulo table \" + table + \" \" + key);\n      e.printStackTrace();\n      return Status.ERROR;\n    } finally {\n      if (null != scanner) {\n        scanner.close();\n      }\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    // Just make the end 'infinity' and only read as much as we need.\n    Scanner scanner = null;\n    try {\n      scanner = connector.createScanner(table, Authorizations.EMPTY);\n      scanner.setRange(new Range(new Text(startkey), null));\n\n      // Have Accumulo send us complete rows, serialized in a single Key-Value pair\n      IteratorSetting cfg = new IteratorSetting(100, WholeRowIterator.class);\n      scanner.addScanIterator(cfg);\n\n      // If no fields are provided, fetch the whole row; otherwise restrict the\n      // scan to the requested columns.\n      if (fields != null) {\n        for (String field : fields) {\n          scanner.fetchColumn(colFam, new Text(field));\n        }\n      }\n\n      int count = 0;\n      for (Entry<Key, Value> entry : scanner) {\n        // Deserialize the row\n        SortedMap<Key, Value> row = WholeRowIterator.decodeRow(entry.getKey(), entry.getValue());\n        HashMap<String, ByteIterator> rowData;\n        if (null != fields) {\n          rowData = new HashMap<>(fields.size());\n        } else {\n          rowData = new HashMap<>();\n        }\n        result.add(rowData);\n        // Parse the data in the row, avoid unnecessary Text object creation\n        final Text cq = new Text();\n        for (Entry<Key, Value> rowEntry : row.entrySet()) {\n          rowEntry.getKey().getColumnQualifier(cq);\n          
rowData.put(cq.toString(), new ByteArrayByteIterator(rowEntry.getValue().get()));\n        }\n        if (++count >= recordcount) { // Done reading the requested number of rows.\n          break;\n        }\n      }\n    } catch (TableNotFoundException e) {\n      System.err.println(\"Error trying to connect to Accumulo table.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    } catch (IOException e) {\n      System.err.println(\"Error deserializing data from Accumulo.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    } finally {\n      if (null != scanner) {\n        scanner.close();\n      }\n    }\n\n    return Status.OK;\n  }\n\n  @Override\n  public Status update(String table, String key,\n                       Map<String, ByteIterator> values) {\n    BatchWriter bw = null;\n    try {\n      bw = getWriter(table);\n    } catch (TableNotFoundException e) {\n      System.err.println(\"Error opening batch writer to Accumulo table \" + table);\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    Mutation mutInsert = new Mutation(key.getBytes(UTF_8));\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      mutInsert.put(colFamBytes, entry.getKey().getBytes(UTF_8), entry.getValue().toArray());\n    }\n\n    try {\n      bw.addMutation(mutInsert);\n    } catch (MutationsRejectedException e) {\n      System.err.println(\"Error performing update.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    return Status.BATCHED_OK;\n  }\n\n  @Override\n  public Status insert(String t, String key,\n                       Map<String, ByteIterator> values) {\n    return update(t, key, values);\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    BatchWriter bw;\n    try {\n      bw = getWriter(table);\n    } catch (TableNotFoundException e) {\n      System.err.println(\"Error trying to connect to Accumulo table.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n   
 try {\n      deleteRow(table, new Text(key), bw);\n    } catch (TableNotFoundException | MutationsRejectedException e) {\n      System.err.println(\"Error performing delete.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    } catch (RuntimeException e) {\n      System.err.println(\"Error performing delete.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  // These functions are adapted from RowOperations.java:\n  private void deleteRow(String table, Text row, BatchWriter bw) throws MutationsRejectedException,\n          TableNotFoundException {\n    // TODO Use a BatchDeleter instead\n    deleteRow(getRow(table, row, null), bw);\n  }\n\n  /**\n   * Deletes a row, given a Scanner of JUST that row.\n   */\n  private void deleteRow(Scanner scanner, BatchWriter bw) throws MutationsRejectedException {\n    Mutation deleter = null;\n    // iterate through the keys\n    final Text row = new Text();\n    final Text cf = new Text();\n    final Text cq = new Text();\n    for (Entry<Key, Value> entry : scanner) {\n      // create a mutation for the row\n      if (deleter == null) {\n        entry.getKey().getRow(row);\n        deleter = new Mutation(row);\n      }\n      entry.getKey().getColumnFamily(cf);\n      entry.getKey().getColumnQualifier(cq);\n      // putDelete adds the key with the delete flag set to true\n      deleter.putDelete(cf, cq);\n    }\n\n    // The row may not exist; only add a mutation if the scan saw any entries.\n    if (deleter != null) {\n      bw.addMutation(deleter);\n    }\n  }\n}\n"
  },
  {
    "path": "accumulo1.7/src/main/java/com/yahoo/ycsb/db/accumulo/package-info.java",
    "content": "/**\n * Copyright (c) 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * YCSB binding for <a href=\"https://accumulo.apache.org/\">Apache Accumulo</a>.\n */\npackage com.yahoo.ycsb.db.accumulo;\n\n"
  },
  {
    "path": "accumulo1.7/src/test/java/com/yahoo/ycsb/db/accumulo/AccumuloTest.java",
    "content": "/*\n * Copyright (c) 2016 YCSB contributors.\n * All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.accumulo;\n\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assume.assumeTrue;\n\nimport java.util.Map.Entry;\nimport java.util.Properties;\n\nimport com.yahoo.ycsb.Workload;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.measurements.Measurements;\nimport com.yahoo.ycsb.workloads.CoreWorkload;\n\nimport org.apache.accumulo.core.client.Connector;\nimport org.apache.accumulo.core.client.Scanner;\nimport org.apache.accumulo.core.client.security.tokens.PasswordToken;\nimport org.apache.accumulo.core.data.Key;\nimport org.apache.accumulo.core.data.Value;\nimport org.apache.accumulo.core.security.Authorizations;\nimport org.apache.accumulo.core.security.TablePermission;\nimport org.apache.accumulo.minicluster.MiniAccumuloCluster;\nimport org.junit.After;\nimport org.junit.AfterClass;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.ClassRule;\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.rules.TemporaryFolder;\nimport org.junit.rules.TestName;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\n/**\n * Use an Accumulo MiniCluster to test out basic workload operations with\n * the Accumulo binding.\n */\npublic class AccumuloTest {\n  
private static final Logger LOG = LoggerFactory.getLogger(AccumuloTest.class);\n  private static final int INSERT_COUNT = 2000;\n  private static final int TRANSACTION_COUNT = 2000;\n\n  @ClassRule\n  public static TemporaryFolder workingDir = new TemporaryFolder();\n  @Rule\n  public TestName test = new TestName();\n\n  private static MiniAccumuloCluster cluster;\n  private static Properties properties;\n  private Workload workload;\n  private DB client;\n  private Properties workloadProps;\n\n  private static boolean isWindows() {\n    final String os = System.getProperty(\"os.name\");\n    return os.startsWith(\"Windows\");\n  }\n\n  @BeforeClass\n  public static void setup() throws Exception {\n    // Minicluster setup fails on Windows with an UnsatisfiedLinkError.\n    // Skip if windows.\n    assumeTrue(!isWindows());\n    cluster = new MiniAccumuloCluster(workingDir.newFolder(\"accumulo\").getAbsoluteFile(), \"protectyaneck\");\n    LOG.debug(\"starting minicluster\");\n    cluster.start();\n    LOG.debug(\"creating connection for admin operations.\");\n    // set up the table and user\n    final Connector admin = cluster.getConnector(\"root\", \"protectyaneck\");\n    admin.tableOperations().create(CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n    admin.securityOperations().createLocalUser(\"ycsb\", new PasswordToken(\"protectyaneck\"));\n    admin.securityOperations().grantTablePermission(\"ycsb\", CoreWorkload.TABLENAME_PROPERTY_DEFAULT, TablePermission.READ);\n    admin.securityOperations().grantTablePermission(\"ycsb\", CoreWorkload.TABLENAME_PROPERTY_DEFAULT, TablePermission.WRITE);\n\n    // set properties the binding will read\n    properties = new Properties();\n    properties.setProperty(\"accumulo.zooKeepers\", cluster.getZooKeepers());\n    properties.setProperty(\"accumulo.instanceName\", cluster.getInstanceName());\n    properties.setProperty(\"accumulo.columnFamily\", \"family\");\n    properties.setProperty(\"accumulo.username\", \"ycsb\");\n    
properties.setProperty(\"accumulo.password\", \"protectyaneck\");\n    // cut down the batch writer timeout so that writes will push through.\n    properties.setProperty(\"accumulo.batchWriterMaxLatency\", \"4\");\n    // set these explicitly to the defaults at the time we're compiled, since they'll be inlined in our class.\n    properties.setProperty(CoreWorkload.TABLENAME_PROPERTY, CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n    properties.setProperty(CoreWorkload.FIELD_COUNT_PROPERTY, CoreWorkload.FIELD_COUNT_PROPERTY_DEFAULT);\n    properties.setProperty(CoreWorkload.INSERT_ORDER_PROPERTY, \"ordered\");\n  }\n\n  @AfterClass\n  public static void clusterCleanup() throws Exception {\n    if (cluster != null) {\n      LOG.debug(\"shutting down minicluster\");\n      cluster.stop();\n      cluster = null;\n    }\n  }\n\n  @Before\n  public void client() throws Exception {\n\n    LOG.debug(\"Loading workload properties for {}\", test.getMethodName());\n    workloadProps = new Properties();\n    workloadProps.load(getClass().getResourceAsStream(\"/workloads/\" + test.getMethodName()));\n\n    for (String prop : properties.stringPropertyNames()) {\n      workloadProps.setProperty(prop, properties.getProperty(prop));\n    }\n\n    // TODO we need a better test rig for 'run this ycsb workload'\n    LOG.debug(\"initializing measurements and workload\");\n    Measurements.setProperties(workloadProps);\n    workload = new CoreWorkload();\n    workload.init(workloadProps);\n\n    LOG.debug(\"initializing client\");\n    client = new AccumuloClient();\n    client.setProperties(workloadProps);\n    client.init();\n  }\n\n  @After\n  public void cleanup() throws Exception {\n    if (client != null) {\n      LOG.debug(\"cleaning up client\");\n      client.cleanup();\n      client = null;\n    }\n    if (workload != null) {\n      LOG.debug(\"cleaning up workload\");\n      workload.cleanup();\n    }\n  }\n\n  @After\n  public void truncateTable() throws Exception {\n    if 
(cluster != null) {\n      LOG.debug(\"truncating table {}\", CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n      final Connector admin = cluster.getConnector(\"root\", \"protectyaneck\");\n      admin.tableOperations().deleteRows(CoreWorkload.TABLENAME_PROPERTY_DEFAULT, null, null);\n    }\n  }\n\n  @Test\n  public void workloada() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloadb() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloadc() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloadd() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloade() throws Exception {\n    runWorkload();\n  }\n\n  /**\n   * go through a workload cycle.\n   * <ol>\n   *   <li>initialize thread-specific state\n   *   <li>load the workload dataset\n   *   <li>run workload transactions\n   * </ol>\n   */\n  private void runWorkload() throws Exception {\n    final Object state = workload.initThread(workloadProps,0,0);\n    LOG.debug(\"load\");\n    for (int i = 0; i < INSERT_COUNT; i++) {\n      assertTrue(\"insert failed.\", workload.doInsert(client, state));\n    }\n    // Ensure we wait long enough for the batch writer to flush\n    // TODO accumulo client should be flushing per insert by default.\n    Thread.sleep(2000);\n    LOG.debug(\"verify number of cells\");\n    final Scanner scanner = cluster.getConnector(\"root\", \"protectyaneck\").createScanner(CoreWorkload.TABLENAME_PROPERTY_DEFAULT, Authorizations.EMPTY);\n    int count = 0;\n    for (Entry<Key, Value> entry : scanner) {\n      count++;\n    }\n    assertEquals(\"Didn't get enough total cells.\", (Integer.valueOf(CoreWorkload.FIELD_COUNT_PROPERTY_DEFAULT) * INSERT_COUNT), count);\n    LOG.debug(\"run\");\n    for (int i = 0; i < TRANSACTION_COUNT; i++) {\n      assertTrue(\"transaction failed.\", workload.doTransaction(client, state));\n    }\n  }\n}\n"
  },
  {
    "path": "accumulo1.7/src/test/resources/log4j.properties",
    "content": "#\n# Copyright (c) 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n\n# Root logger option\nlog4j.rootLogger=INFO, stderr\n\nlog4j.appender.stderr=org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.target=System.err\nlog4j.appender.stderr.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.conversionPattern=%d{yyyy/MM/dd HH:mm:ss} %-5p %c %x - %m%n\n\n# Suppress messages from ZooKeeper\nlog4j.logger.com.yahoo.ycsb.db.accumulo=DEBUG\nlog4j.logger.org.apache.zookeeper=ERROR\nlog4j.logger.org.apache.accumulo=WARN\n"
  },
  {
    "path": "accumulo1.8/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on [Accumulo](https://accumulo.apache.org/).\n\n### 1. Start Accumulo\n\nSee the [Accumulo Documentation](https://accumulo.apache.org/1.8/accumulo_user_manual.html#_installation)\nfor details on installing and running Accumulo.\n\nBefore running the YCSB test you must create the Accumulo table. Again, see the\n[Accumulo Documentation](https://accumulo.apache.org/1.8/accumulo_user_manual.html#_basic_administration)\nfor details. The default table name is `usertable` (see step 3 below).\n\n### 2. Set Up YCSB\n\nGit clone YCSB and compile:\n\n    git clone http://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:accumulo1.8-binding -am clean package\n\n### 3. Create the Accumulo table\n\nBy default, YCSB uses a table with the name \"usertable\". Users must create this table before loading\ndata into Accumulo. For maximum Accumulo performance, the Accumulo table must be pre-split. A simple\nRuby script, based on the HBase README, can generate adequate split points. Tens of tablets per\nTabletServer is a good starting point. 
Unless otherwise specified, the following commands should run\non any version of Accumulo.\n\n    $ echo 'num_splits = 20; puts (1..num_splits).map {|i| \"user#{1000+i*(9999-1000)/num_splits}\"}' | ruby > /tmp/splits.txt\n    $ accumulo shell -u <user> -p <password> -e \"createtable usertable\"\n    $ accumulo shell -u <user> -p <password> -e \"addsplits -t usertable -sf /tmp/splits.txt\"\n    $ accumulo shell -u <user> -p <password> -e \"config -t usertable -s table.cache.block.enable=true\"\n\nAdditionally, there are some other configuration properties that can increase performance. These\ncan be set on the Accumulo table via the shell after it is created. Setting the table durability\nto `flush` relaxes the constraints on data durability during hard power outages (it avoids calls\nto fsync). Accumulo defaults table compression to `gzip`, which is not particularly fast; `snappy`\nis a faster and similarly efficient option. The mutation queue property controls how many writes\nAccumulo will buffer in memory before performing a flush; it should be set relative\nto the amount of JVM heap the TabletServers are given.\n\nPlease note that the `table.durability` and `tserver.total.mutation.queue.max` properties only\nexist in Accumulo 1.7 and later. There are no concise replacements for these properties in earlier versions.\n\n    accumulo> config -s table.durability=flush\n    accumulo> config -s tserver.total.mutation.queue.max=256M\n    accumulo> config -t usertable -s table.file.compress.type=snappy\n\nOn repeated data loads, the following commands may be helpful to quickly reset the state of the table.\n\n    accumulo> createtable tmp --copy-splits usertable --copy-config usertable\n    accumulo> deletetable --force usertable\n    accumulo> renametable tmp usertable\n    accumulo> compact --wait -t accumulo.metadata\n\n### 4. 
Load Data and Run Tests\n\nLoad the data:\n\n    ./bin/ycsb load accumulo1.8 -s -P workloads/workloada \\\n         -p accumulo.zooKeepers=localhost \\\n         -p accumulo.columnFamily=ycsb \\\n         -p accumulo.instanceName=ycsb \\\n         -p accumulo.username=user \\\n         -p accumulo.password=supersecret \\\n         > outputLoad.txt\n\nRun the workload test:\n\n    ./bin/ycsb run accumulo1.8 -s -P workloads/workloada \\\n         -p accumulo.zooKeepers=localhost \\\n         -p accumulo.columnFamily=ycsb \\\n         -p accumulo.instanceName=ycsb \\\n         -p accumulo.username=user \\\n         -p accumulo.password=supersecret \\\n         > outputRun.txt\n\n## Accumulo Configuration Parameters\n\n- `accumulo.zooKeepers`\n  - The Accumulo cluster's [ZooKeeper servers](https://accumulo.apache.org/1.8/accumulo_user_manual.html#_connecting).\n  - Should contain a comma-separated list of hostname or hostname:port values.\n  - No default value.\n\n- `accumulo.columnFamily`\n  - The name of the column family in which to store the data within the table.\n  - No default value.\n\n- `accumulo.instanceName`\n  - Name of the Accumulo [instance](https://accumulo.apache.org/1.8/accumulo_user_manual.html#_connecting).\n  - No default value.\n\n- `accumulo.username`\n  - The username to use when connecting to Accumulo.\n  - No default value.\n\n- `accumulo.password`\n  - The password for the user connecting to Accumulo.\n  - No default value.\n\n"
  },
  {
    "path": "accumulo1.8/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2011 YCSB++ project, 2014 - 2016 YCSB contributors.\nAll rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n  <artifactId>accumulo1.8-binding</artifactId>\n  <name>Accumulo 1.8 DB Binding</name>\n  <properties>\n    <!-- This should match up to the one from your Accumulo version -->\n    <hadoop.version>2.6.4</hadoop.version>\n    <!-- Tests do not run on jdk9 -->\n    <skipJDK9Tests>true</skipJDK9Tests>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>org.apache.accumulo</groupId>\n      <artifactId>accumulo-core</artifactId>\n      <version>${accumulo.1.8.version}</version>\n    </dependency>\n    <!-- Needed for hadoop.io.Text :( -->\n    <dependency>\n      <groupId>org.apache.hadoop</groupId>\n      <artifactId>hadoop-common</artifactId>\n      <version>${hadoop.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>jdk.tools</groupId>\n          
<artifactId>jdk.tools</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.accumulo</groupId>\n      <artifactId>accumulo-minicluster</artifactId>\n      <version>${accumulo.1.8.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <!-- needed directly only in test, but transitive\n         at runtime for accumulo, hadoop, and thrift. -->\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n      <version>1.7.13</version>\n    </dependency>\n  </dependencies>\n  <build>\n    <testResources>\n      <testResource>\n        <directory>../workloads</directory>\n        <targetPath>workloads</targetPath>\n      </testResource>\n      <testResource>\n        <directory>src/test/resources</directory>\n      </testResource>\n    </testResources>\n  </build>\n</project>\n"
  },
  {
    "path": "accumulo1.8/src/main/conf/accumulo.properties",
    "content": "# Copyright 2014 Cloudera, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n# Sample Accumulo configuration properties\n#\n# You may either set properties here or via the command line.\n#\n\n# This will influence the keys we write\naccumulo.columnFamily=YCSB\n\n# This should be set based on your Accumulo cluster\n#accumulo.instanceName=ExampleInstance\n\n# Comma separated list of host:port tuples for the ZooKeeper quorum used\n# by your Accumulo cluster\n#accumulo.zooKeepers=zoo1.example.com:2181,zoo2.example.com:2181,zoo3.example.com:2181\n\n# This user will need permissions on the table YCSB works against\n#accumulo.username=ycsb\n#accumulo.password=protectyaneck\n\n# Controls how long our client writer will wait to buffer more data\n# measured in milliseconds\naccumulo.batchWriterMaxLatency=30000\n\n# Controls how much data our client will attempt to buffer before sending\n# measured in bytes\naccumulo.batchWriterSize=100000\n\n# Controls how many worker threads our client will use to parallelize writes\naccumulo.batchWriterThreads=1\n"
  },
  {
    "path": "accumulo1.8/src/main/java/com/yahoo/ycsb/db/accumulo/AccumuloClient.java",
    "content": "/**\n * Copyright (c) 2011 YCSB++ project, 2014-2016 YCSB contributors.\n * All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.accumulo;\n\nimport static java.nio.charset.StandardCharsets.UTF_8;\n\nimport java.io.IOException;\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Set;\nimport java.util.SortedMap;\nimport java.util.Vector;\nimport java.util.concurrent.ConcurrentHashMap;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.accumulo.core.client.AccumuloException;\nimport org.apache.accumulo.core.client.AccumuloSecurityException;\nimport org.apache.accumulo.core.client.BatchWriter;\nimport org.apache.accumulo.core.client.BatchWriterConfig;\nimport org.apache.accumulo.core.client.ClientConfiguration;\nimport org.apache.accumulo.core.client.Connector;\nimport org.apache.accumulo.core.client.IteratorSetting;\nimport org.apache.accumulo.core.client.MutationsRejectedException;\nimport org.apache.accumulo.core.client.Scanner;\nimport org.apache.accumulo.core.client.TableNotFoundException;\nimport org.apache.accumulo.core.client.ZooKeeperInstance;\nimport org.apache.accumulo.core.client.security.tokens.AuthenticationToken;\nimport org.apache.accumulo.core.client.security.tokens.PasswordToken;\nimport org.apache.accumulo.core.data.Key;\nimport 
org.apache.accumulo.core.data.Mutation;\nimport org.apache.accumulo.core.data.Range;\nimport org.apache.accumulo.core.data.Value;\nimport org.apache.accumulo.core.iterators.user.WholeRowIterator;\nimport org.apache.accumulo.core.security.Authorizations;\nimport org.apache.accumulo.core.util.CleanUp;\nimport org.apache.hadoop.io.Text;\n\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\n/**\n * <a href=\"https://accumulo.apache.org/\">Accumulo</a> binding for YCSB.\n */\npublic class AccumuloClient extends DB {\n\n  private ZooKeeperInstance inst;\n  private Connector connector;\n  private Text colFam = new Text(\"\");\n  private byte[] colFamBytes = new byte[0];\n  private final ConcurrentHashMap<String, BatchWriter> writers = new ConcurrentHashMap<>();\n\n  static {\n    Runtime.getRuntime().addShutdownHook(new Thread() {\n      @Override\n      public void run() {\n        CleanUp.shutdownNow();\n      }\n    });\n  }\n\n  @Override\n  public void init() throws DBException {\n    colFam = new Text(getProperties().getProperty(\"accumulo.columnFamily\"));\n    colFamBytes = colFam.toString().getBytes(UTF_8);\n\n    inst = new ZooKeeperInstance(new ClientConfiguration()\n        .withInstance(getProperties().getProperty(\"accumulo.instanceName\"))\n        .withZkHosts(getProperties().getProperty(\"accumulo.zooKeepers\")));\n    try {\n      String principal = getProperties().getProperty(\"accumulo.username\");\n      AuthenticationToken token =\n          new PasswordToken(getProperties().getProperty(\"accumulo.password\"));\n      connector = inst.getConnector(principal, token);\n    } catch (AccumuloException | AccumuloSecurityException e) {\n      throw new DBException(e);\n    }\n\n    if (!(getProperties().getProperty(\"accumulo.pcFlag\", \"none\").equals(\"none\"))) {\n      System.err.println(\"Sorry, the ZK based producer/consumer 
 implementation has been removed. \" +\n          \"Please see YCSB issue #416 for work on adding a general solution to coordinated work.\");\n    }\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    try {\n      Iterator<BatchWriter> iterator = writers.values().iterator();\n      while (iterator.hasNext()) {\n        BatchWriter writer = iterator.next();\n        writer.close();\n        iterator.remove();\n      }\n    } catch (MutationsRejectedException e) {\n      throw new DBException(e);\n    }\n  }\n\n  /**\n   * Returns the BatchWriter for the given table, creating and caching a new\n   * one if necessary.\n   * \n   * @param table\n   *          The table to open.\n   */\n  public BatchWriter getWriter(String table) throws TableNotFoundException {\n    // tl;dr We're paying a cost for the ConcurrentHashMap here to deal with the DB api.\n    // We know that YCSB is really only ever going to send us data for one table, so using\n    // a concurrent data structure is overkill (especially in such a hot code path).\n    // However, the impact seems to be relatively negligible in trivial local tests and it's\n    // \"more correct\" WRT the API.\n    BatchWriter writer = writers.get(table);\n    if (null == writer) {\n      BatchWriter newWriter = createBatchWriter(table);\n      BatchWriter oldWriter = writers.putIfAbsent(table, newWriter);\n      // Someone beat us to creating a BatchWriter for this table, use theirs\n      if (null != oldWriter) {\n        try {\n          // Make sure to clean up our new batchwriter!\n          newWriter.close();\n        } catch (MutationsRejectedException e) {\n          throw new RuntimeException(e);\n        }\n        writer = oldWriter;\n      } else {\n        writer = newWriter;\n      }\n    }\n    return writer;\n  }\n\n  /**\n   * Creates a BatchWriter with the expected configuration.\n   *\n   * @param table The table to write 
to\n   */\n  private BatchWriter createBatchWriter(String table) throws TableNotFoundException {\n    BatchWriterConfig bwc = new BatchWriterConfig();\n    bwc.setMaxLatency(\n        Long.parseLong(getProperties()\n            .getProperty(\"accumulo.batchWriterMaxLatency\", \"30000\")),\n        TimeUnit.MILLISECONDS);\n    bwc.setMaxMemory(Long.parseLong(\n        getProperties().getProperty(\"accumulo.batchWriterSize\", \"100000\")));\n    final String numThreadsValue = getProperties().getProperty(\"accumulo.batchWriterThreads\");\n    // Try to saturate the client machine.\n    int numThreads = Math.max(1, Runtime.getRuntime().availableProcessors() / 2);\n    if (null != numThreadsValue) {\n      numThreads = Integer.parseInt(numThreadsValue);\n    }\n    System.err.println(\"Using \" + numThreads + \" threads to write data\");\n    bwc.setMaxWriteThreads(numThreads);\n    return connector.createBatchWriter(table, bwc);\n  }\n\n  /**\n   * Gets a scanner from Accumulo over one row.\n   *\n   * @param row the row to scan\n   * @param fields the set of columns to scan\n   * @return an Accumulo {@link Scanner} bound to the given row and columns\n   */\n  private Scanner getRow(String table, Text row, Set<String> fields) throws TableNotFoundException {\n    Scanner scanner = connector.createScanner(table, Authorizations.EMPTY);\n    scanner.setRange(new Range(row));\n    if (fields != null) {\n      for (String field : fields) {\n        scanner.fetchColumn(colFam, new Text(field));\n      }\n    }\n    return scanner;\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n                     Map<String, ByteIterator> result) {\n\n    Scanner scanner = null;\n    try {\n      scanner = getRow(table, new Text(key), null);\n      // Pick out the results we care about.\n      final Text cq = new Text();\n      for (Entry<Key, Value> entry : scanner) {\n        entry.getKey().getColumnQualifier(cq);\n        Value v = 
entry.getValue();\n        byte[] buf = v.get();\n        result.put(cq.toString(),\n            new ByteArrayByteIterator(buf));\n      }\n    } catch (Exception e) {\n      System.err.println(\"Error trying to read Accumulo table \" + table + \" \" + key);\n      e.printStackTrace();\n      return Status.ERROR;\n    } finally {\n      if (null != scanner) {\n        scanner.close();\n      }\n    }\n    return Status.OK;\n\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    // Just make the end 'infinity' and only read as much as we need.\n    Scanner scanner = null;\n    try {\n      scanner = connector.createScanner(table, Authorizations.EMPTY);\n      scanner.setRange(new Range(new Text(startkey), null));\n\n      // Have Accumulo send us complete rows, serialized in a single Key-Value pair\n      IteratorSetting cfg = new IteratorSetting(100, WholeRowIterator.class);\n      scanner.addScanIterator(cfg);\n\n      // If no fields are provided, we assume one column/row.\n      if (fields != null) {\n        // And add each of them as fields we want.\n        for (String field : fields) {\n          scanner.fetchColumn(colFam, new Text(field));\n        }\n      }\n\n      int count = 0;\n      for (Entry<Key, Value> entry : scanner) {\n        // Deserialize the row\n        SortedMap<Key, Value> row = WholeRowIterator.decodeRow(entry.getKey(), entry.getValue());\n        HashMap<String, ByteIterator> rowData;\n        if (null != fields) {\n          rowData = new HashMap<>(fields.size());\n        } else {\n          rowData = new HashMap<>();\n        }\n        result.add(rowData);\n        // Parse the data in the row, avoid unnecessary Text object creation\n        final Text cq = new Text();\n        for (Entry<Key, Value> rowEntry : row.entrySet()) {\n          rowEntry.getKey().getColumnQualifier(cq);\n          
rowData.put(cq.toString(), new ByteArrayByteIterator(rowEntry.getValue().get()));\n        }\n        if (++count == recordcount) { // Done reading the last row.\n          break;\n        }\n      }\n    } catch (TableNotFoundException e) {\n      System.err.println(\"Error trying to connect to Accumulo table.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    } catch (IOException e) {\n      System.err.println(\"Error deserializing data from Accumulo.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    } finally {\n      if (null != scanner) {\n        scanner.close();\n      }\n    }\n\n    return Status.OK;\n  }\n\n  @Override\n  public Status update(String table, String key,\n                       Map<String, ByteIterator> values) {\n    BatchWriter bw = null;\n    try {\n      bw = getWriter(table);\n    } catch (TableNotFoundException e) {\n      System.err.println(\"Error opening batch writer to Accumulo table \" + table);\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    Mutation mutInsert = new Mutation(key.getBytes(UTF_8));\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      mutInsert.put(colFamBytes, entry.getKey().getBytes(UTF_8), entry.getValue().toArray());\n    }\n\n    try {\n      bw.addMutation(mutInsert);\n    } catch (MutationsRejectedException e) {\n      System.err.println(\"Error performing update.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    return Status.BATCHED_OK;\n  }\n\n  @Override\n  public Status insert(String t, String key,\n                       Map<String, ByteIterator> values) {\n    return update(t, key, values);\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    BatchWriter bw;\n    try {\n      bw = getWriter(table);\n    } catch (TableNotFoundException e) {\n      System.err.println(\"Error trying to connect to Accumulo table.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n   
 try {\n      deleteRow(table, new Text(key), bw);\n    } catch (TableNotFoundException | MutationsRejectedException e) {\n      System.err.println(\"Error performing delete.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    } catch (RuntimeException e) {\n      System.err.println(\"Error performing delete.\");\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  // These functions are adapted from RowOperations.java:\n  private void deleteRow(String table, Text row, BatchWriter bw) throws MutationsRejectedException,\n          TableNotFoundException {\n    // TODO Use a batchDeleter instead\n    deleteRow(getRow(table, row, null), bw);\n  }\n\n  /**\n   * Deletes a row, given a Scanner of JUST that row.\n   */\n  private void deleteRow(Scanner scanner, BatchWriter bw) throws MutationsRejectedException {\n    Mutation deleter = null;\n    // iterate through the keys\n    final Text row = new Text();\n    final Text cf = new Text();\n    final Text cq = new Text();\n    for (Entry<Key, Value> entry : scanner) {\n      // create a mutation for the row\n      if (deleter == null) {\n        entry.getKey().getRow(row);\n        deleter = new Mutation(row);\n      }\n      entry.getKey().getColumnFamily(cf);\n      entry.getKey().getColumnQualifier(cq);\n      // the remove function adds the key with the delete flag set to true\n      deleter.putDelete(cf, cq);\n    }\n\n    // Nothing to submit if the row was already empty.\n    if (deleter != null) {\n      bw.addMutation(deleter);\n    }\n  }\n}\n"
  },
  {
    "path": "accumulo1.8/src/main/java/com/yahoo/ycsb/db/accumulo/package-info.java",
    "content": "/**\n * Copyright (c) 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * YCSB binding for <a href=\"https://accumulo.apache.org/\">Apache Accumulo</a>.\n */\npackage com.yahoo.ycsb.db.accumulo;\n\n"
  },
  {
    "path": "accumulo1.8/src/test/java/com/yahoo/ycsb/db/accumulo/AccumuloTest.java",
    "content": "/*\n * Copyright (c) 2016 YCSB contributors.\n * All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.accumulo;\n\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assume.assumeTrue;\n\nimport java.util.Map.Entry;\nimport java.util.Properties;\n\nimport com.yahoo.ycsb.Workload;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.measurements.Measurements;\nimport com.yahoo.ycsb.workloads.CoreWorkload;\n\nimport org.apache.accumulo.core.client.Connector;\nimport org.apache.accumulo.core.client.Scanner;\nimport org.apache.accumulo.core.client.security.tokens.PasswordToken;\nimport org.apache.accumulo.core.data.Key;\nimport org.apache.accumulo.core.data.Value;\nimport org.apache.accumulo.core.security.Authorizations;\nimport org.apache.accumulo.core.security.TablePermission;\nimport org.apache.accumulo.minicluster.MiniAccumuloCluster;\nimport org.junit.After;\nimport org.junit.AfterClass;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.ClassRule;\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.rules.TemporaryFolder;\nimport org.junit.rules.TestName;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\n/**\n * Use an Accumulo MiniCluster to test out basic workload operations with\n * the Accumulo binding.\n */\npublic class AccumuloTest {\n  
private static final Logger LOG = LoggerFactory.getLogger(AccumuloTest.class);\n  private static final int INSERT_COUNT = 2000;\n  private static final int TRANSACTION_COUNT = 2000;\n\n  @ClassRule\n  public static TemporaryFolder workingDir = new TemporaryFolder();\n  @Rule\n  public TestName test = new TestName();\n\n  private static MiniAccumuloCluster cluster;\n  private static Properties properties;\n  private Workload workload;\n  private DB client;\n  private Properties workloadProps;\n\n  private static boolean isWindows() {\n    final String os = System.getProperty(\"os.name\");\n    return os.startsWith(\"Windows\");\n  }\n\n  @BeforeClass\n  public static void setup() throws Exception {\n    // Minicluster setup fails on Windows with an UnsatisfiedLinkError.\n    // Skip if windows.\n    assumeTrue(!isWindows());\n    cluster = new MiniAccumuloCluster(workingDir.newFolder(\"accumulo\").getAbsoluteFile(), \"protectyaneck\");\n    LOG.debug(\"starting minicluster\");\n    cluster.start();\n    LOG.debug(\"creating connection for admin operations.\");\n    // set up the table and user\n    final Connector admin = cluster.getConnector(\"root\", \"protectyaneck\");\n    admin.tableOperations().create(CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n    admin.securityOperations().createLocalUser(\"ycsb\", new PasswordToken(\"protectyaneck\"));\n    admin.securityOperations().grantTablePermission(\"ycsb\", CoreWorkload.TABLENAME_PROPERTY_DEFAULT, TablePermission.READ);\n    admin.securityOperations().grantTablePermission(\"ycsb\", CoreWorkload.TABLENAME_PROPERTY_DEFAULT, TablePermission.WRITE);\n\n    // set properties the binding will read\n    properties = new Properties();\n    properties.setProperty(\"accumulo.zooKeepers\", cluster.getZooKeepers());\n    properties.setProperty(\"accumulo.instanceName\", cluster.getInstanceName());\n    properties.setProperty(\"accumulo.columnFamily\", \"family\");\n    properties.setProperty(\"accumulo.username\", \"ycsb\");\n    
properties.setProperty(\"accumulo.password\", \"protectyaneck\");\n    // cut down the batch writer timeout so that writes will push through.\n    properties.setProperty(\"accumulo.batchWriterMaxLatency\", \"4\");\n    // set these explicitly to the defaults at the time we're compiled, since they'll be inlined in our class.\n    properties.setProperty(CoreWorkload.TABLENAME_PROPERTY, CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n    properties.setProperty(CoreWorkload.FIELD_COUNT_PROPERTY, CoreWorkload.FIELD_COUNT_PROPERTY_DEFAULT);\n    properties.setProperty(CoreWorkload.INSERT_ORDER_PROPERTY, \"ordered\");\n  }\n\n  @AfterClass\n  public static void clusterCleanup() throws Exception {\n    if (cluster != null) {\n      LOG.debug(\"shutting down minicluster\");\n      cluster.stop();\n      cluster = null;\n    }\n  }\n\n  @Before\n  public void client() throws Exception {\n\n    LOG.debug(\"Loading workload properties for {}\", test.getMethodName());\n    workloadProps = new Properties();\n    workloadProps.load(getClass().getResourceAsStream(\"/workloads/\" + test.getMethodName()));\n\n    for (String prop : properties.stringPropertyNames()) {\n      workloadProps.setProperty(prop, properties.getProperty(prop));\n    }\n\n    // TODO we need a better test rig for 'run this ycsb workload'\n    LOG.debug(\"initializing measurements and workload\");\n    Measurements.setProperties(workloadProps);\n    workload = new CoreWorkload();\n    workload.init(workloadProps);\n\n    LOG.debug(\"initializing client\");\n    client = new AccumuloClient();\n    client.setProperties(workloadProps);\n    client.init();\n  }\n\n  @After\n  public void cleanup() throws Exception {\n    if (client != null) {\n      LOG.debug(\"cleaning up client\");\n      client.cleanup();\n      client = null;\n    }\n    if (workload != null) {\n      LOG.debug(\"cleaning up workload\");\n      workload.cleanup();\n    }\n  }\n\n  @After\n  public void truncateTable() throws Exception {\n    if 
(cluster != null) {\n      LOG.debug(\"truncating table {}\", CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n      final Connector admin = cluster.getConnector(\"root\", \"protectyaneck\");\n      admin.tableOperations().deleteRows(CoreWorkload.TABLENAME_PROPERTY_DEFAULT, null, null);\n    }\n  }\n\n  @Test\n  public void workloada() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloadb() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloadc() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloadd() throws Exception {\n    runWorkload();\n  }\n\n  @Test\n  public void workloade() throws Exception {\n    runWorkload();\n  }\n\n  /**\n   * go through a workload cycle.\n   * <ol>\n   *   <li>initialize thread-specific state\n   *   <li>load the workload dataset\n   *   <li>run workload transactions\n   * </ol>\n   */\n  private void runWorkload() throws Exception {\n    final Object state = workload.initThread(workloadProps,0,0);\n    LOG.debug(\"load\");\n    for (int i = 0; i < INSERT_COUNT; i++) {\n      assertTrue(\"insert failed.\", workload.doInsert(client, state));\n    }\n    // Ensure we wait long enough for the batch writer to flush\n    // TODO accumulo client should be flushing per insert by default.\n    Thread.sleep(2000);\n    LOG.debug(\"verify number of cells\");\n    final Scanner scanner = cluster.getConnector(\"root\", \"protectyaneck\").createScanner(CoreWorkload.TABLENAME_PROPERTY_DEFAULT, Authorizations.EMPTY);\n    int count = 0;\n    for (Entry<Key, Value> entry : scanner) {\n      count++;\n    }\n    assertEquals(\"Didn't get enough total cells.\", (Integer.valueOf(CoreWorkload.FIELD_COUNT_PROPERTY_DEFAULT) * INSERT_COUNT), count);\n    LOG.debug(\"run\");\n    for (int i = 0; i < TRANSACTION_COUNT; i++) {\n      assertTrue(\"transaction failed.\", workload.doTransaction(client, state));\n    }\n  }\n}\n"
  },
  {
    "path": "accumulo1.8/src/test/resources/log4j.properties",
    "content": "#\n# Copyright (c) 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n\n# Root logger option\nlog4j.rootLogger=INFO, stderr\n\nlog4j.appender.stderr=org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.target=System.err\nlog4j.appender.stderr.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.conversionPattern=%d{yyyy/MM/dd HH:mm:ss} %-5p %c %x - %m%n\n\n# Suppress messages from ZooKeeper\nlog4j.logger.com.yahoo.ycsb.db.accumulo=DEBUG\nlog4j.logger.org.apache.zookeeper=ERROR\nlog4j.logger.org.apache.accumulo=WARN\n"
  },
  {
    "path": "aerospike/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on Aerospike. \n\n### 1. Start Aerospike\n\n### 2. Install Java and Maven\n\n### 3. Set Up YCSB\n\nGit clone YCSB and compile:\n\n    git clone http://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:aerospike-binding -am clean package\n\n### 4. Provide Aerospike Connection Parameters\n\nThe following connection parameters are available.\n\n  * `as.host` - The Aerospike cluster to connect to (default: `localhost`)\n  * `as.port` - The port to connect to (default: `3000`)\n  * `as.user` - The user to connect as (no default)\n  * `as.password` - The password for the user (no default)\n  * `as.timeout` - The transaction and connection timeout (in ms, default: `10000`)\n  * `as.namespace` - The namespace to be used for the benchmark (default: `ycsb`)\n\nAdd them to the workload or set them with the shell command, as in:\n\n    ./bin/ycsb load aerospike -s -P workloads/workloada -p as.timeout=5000 >outputLoad.txt\n\n### 5. Load Data and Run Tests\n\nLoad the data:\n\n    ./bin/ycsb load aerospike -s -P workloads/workloada >outputLoad.txt\n\nRun the workload test:\n\n    ./bin/ycsb run aerospike -s -P workloads/workloada >outputRun.txt\n\n"
  },
  {
    "path": "aerospike/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2015-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>aerospike-binding</artifactId>\n  <name>Aerospike DB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.aerospike</groupId>\n      <artifactId>aerospike-client</artifactId>\n      <version>${aerospike.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "aerospike/src/main/java/com/yahoo/ycsb/db/AerospikeClient.java",
    "content": "/**\n * Copyright (c) 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.aerospike.client.AerospikeException;\nimport com.aerospike.client.Bin;\nimport com.aerospike.client.Key;\nimport com.aerospike.client.Record;\nimport com.aerospike.client.policy.ClientPolicy;\nimport com.aerospike.client.policy.Policy;\nimport com.aerospike.client.policy.RecordExistsAction;\nimport com.aerospike.client.policy.WritePolicy;\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\n/**\n * YCSB binding for <a href=\"http://www.aerospike.com/\">Areospike</a>.\n */\npublic class AerospikeClient extends com.yahoo.ycsb.DB {\n  private static final String DEFAULT_HOST = \"localhost\";\n  private static final String DEFAULT_PORT = \"3000\";\n  private static final String DEFAULT_TIMEOUT = \"10000\";\n  private static final String DEFAULT_NAMESPACE = \"ycsb\";\n\n  private String namespace = null;\n\n  private com.aerospike.client.AerospikeClient client = null;\n\n  private Policy readPolicy = new Policy();\n  private WritePolicy insertPolicy = new WritePolicy();\n  private WritePolicy updatePolicy = new 
WritePolicy();\n  private WritePolicy deletePolicy = new WritePolicy();\n\n  @Override\n  public void init() throws DBException {\n    insertPolicy.recordExistsAction = RecordExistsAction.CREATE_ONLY;\n    updatePolicy.recordExistsAction = RecordExistsAction.REPLACE_ONLY;\n\n    Properties props = getProperties();\n\n    namespace = props.getProperty(\"as.namespace\", DEFAULT_NAMESPACE);\n\n    String host = props.getProperty(\"as.host\", DEFAULT_HOST);\n    String user = props.getProperty(\"as.user\");\n    String password = props.getProperty(\"as.password\");\n    int port = Integer.parseInt(props.getProperty(\"as.port\", DEFAULT_PORT));\n    int timeout = Integer.parseInt(props.getProperty(\"as.timeout\",\n        DEFAULT_TIMEOUT));\n\n    readPolicy.timeout = timeout;\n    insertPolicy.timeout = timeout;\n    updatePolicy.timeout = timeout;\n    deletePolicy.timeout = timeout;\n\n    ClientPolicy clientPolicy = new ClientPolicy();\n\n    if (user != null && password != null) {\n      clientPolicy.user = user;\n      clientPolicy.password = password;\n    }\n\n    try {\n      client =\n          new com.aerospike.client.AerospikeClient(clientPolicy, host, port);\n    } catch (AerospikeException e) {\n      throw new DBException(String.format(\"Error while creating Aerospike \" +\n          \"client for %s:%d.\", host, port), e);\n    }\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    client.close();\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n      Map<String, ByteIterator> result) {\n    try {\n      Record record;\n\n      if (fields != null) {\n        record = client.get(readPolicy, new Key(namespace, table, key),\n            fields.toArray(new String[fields.size()]));\n      } else {\n        record = client.get(readPolicy, new Key(namespace, table, key));\n      }\n\n      if (record == null) {\n        System.err.println(\"Record key \" + key + \" not found (read)\");\n        return 
Status.ERROR;\n      }\n\n      for (Map.Entry<String, Object> entry: record.bins.entrySet()) {\n        result.put(entry.getKey(),\n            new ByteArrayByteIterator((byte[])entry.getValue()));\n      }\n\n      return Status.OK;\n    } catch (AerospikeException e) {\n      System.err.println(\"Error while reading key \" + key + \": \" + e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status scan(String table, String start, int count, Set<String> fields,\n      Vector<HashMap<String, ByteIterator>> result) {\n    System.err.println(\"Scan not implemented\");\n    return Status.ERROR;\n  }\n\n  private Status write(String table, String key, WritePolicy writePolicy,\n      Map<String, ByteIterator> values) {\n    Bin[] bins = new Bin[values.size()];\n    int index = 0;\n\n    for (Map.Entry<String, ByteIterator> entry: values.entrySet()) {\n      bins[index] = new Bin(entry.getKey(), entry.getValue().toArray());\n      ++index;\n    }\n\n    Key keyObj = new Key(namespace, table, key);\n\n    try {\n      client.put(writePolicy, keyObj, bins);\n      return Status.OK;\n    } catch (AerospikeException e) {\n      System.err.println(\"Error while writing key \" + key + \": \" + e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status update(String table, String key,\n                       Map<String, ByteIterator> values) {\n    return write(table, key, updatePolicy, values);\n  }\n\n  @Override\n  public Status insert(String table, String key,\n                       Map<String, ByteIterator> values) {\n    return write(table, key, insertPolicy, values);\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      if (!client.delete(deletePolicy, new Key(namespace, table, key))) {\n        System.err.println(\"Record key \" + key + \" not found (delete)\");\n        return Status.ERROR;\n      }\n\n      return Status.OK;\n    } catch (AerospikeException e) {\n      System.err.println(\"Error 
while deleting key \" + key + \": \" + e);\n      return Status.ERROR;\n    }\n  }\n}\n"
  },
  {
    "path": "aerospike/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/**\n * Copyright (c) 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * YCSB binding for <a href=\"http://www.aerospike.com/\">Areospike</a>.\n */\npackage com.yahoo.ycsb.db;\n"
  },
  {
    "path": "arangodb/.gitignore",
    "content": "/bin/\n"
  },
  {
    "path": "arangodb/README.md",
    "content": "<!--\nCopyright (c) 2012 - 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on ArangoDB. \n\n### 1. Start ArangoDB\nSee https://docs.arangodb.com/Installing/index.html\n\n### 2. Install Java and Maven\n\nGo to http://www.oracle.com/technetwork/java/javase/downloads/index.html\n\nand get the url to download the rpm into your server. For example:\n\n    wget http://download.oracle.com/otn-pub/java/jdk/7u40-b43/jdk-7u40-linux-x64.rpm?AuthParam=11232426132 -o jdk-7u40-linux-x64.rpm\n    rpm -Uvh jdk-7u40-linux-x64.rpm\n    \nOr install via yum/apt-get\n\n    sudo yum install java-devel\n\nDownload MVN from http://maven.apache.org/download.cgi\n\n    wget http://ftp.heanet.ie/mirrors/www.apache.org/dist/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.tar.gz\n    sudo tar xzf apache-maven-*-bin.tar.gz -C /usr/local\n    cd /usr/local\n    sudo ln -s apache-maven-* maven\n    sudo vi /etc/profile.d/maven.sh\n\nAdd the following to `maven.sh`\n\n    export M2_HOME=/usr/local/maven\n    export PATH=${M2_HOME}/bin:${PATH}\n\nReload bash and test mvn\n\n    bash\n    mvn -version\n\n### 3. Set Up YCSB\n\nClone this YCSB source code:\n\n    git clone https://github.com/brianfrankcooper/YCSB.git\n\n### 4. Run YCSB\n\nNow you are ready to run! 
First, drop the existing collection: \"usertable\" under database \"ycsb\":\n\t\n\tdb._collection(\"usertable\").drop()\n\nThen, load the data:\n\n    ./bin/ycsb load arangodb -s -P workloads/workloada -p arangodb.ip=xxx -p arangodb.port=xxx\n\nThen, run the workload:\n\n    ./bin/ycsb run arangodb -s -P workloads/workloada -p arangodb.ip=xxx -p arangodb.port=xxx\n\nSee the next section for the list of configuration parameters for ArangoDB.\n\n## ArangoDB Configuration Parameters\n\n- `arangodb.ip`\n  - Default value is `localhost`\n\n- `arangodb.port`\n  - Default value is `8529`.\n  \n- `arangodb.waitForSync`\n  - Default value is `true`.\n  \n- `arangodb.transactionUpdate`\n  - Default value is `false`.\n\n- `arangodb.dropDBBeforeRun`\n  - Default value is `false`.\n"
  },
  {
    "path": "arangodb/conf/logback.xml",
    "content": "<!-- \nCopyright (c) 2012 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<configuration>\n\n  <appender name=\"STDOUT\" class=\"ch.qos.logback.core.ConsoleAppender\">\n    <!-- encoders are assigned the type\n         ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->\n    <encoder>\n      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>\n    </encoder>\n  </appender>\n\n  <root level=\"info\">\n    <appender-ref ref=\"STDOUT\" />\n  </root>\n</configuration>\n"
  },
  {
    "path": "arangodb/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>arangodb-binding</artifactId>\n  <name>ArangoDB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.arangodb</groupId>\n      <artifactId>arangodb-java-driver</artifactId>\n      <version>${arangodb.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    
</dependency>\n\t<dependency>\n\t\t<groupId>org.slf4j</groupId>\n\t\t<artifactId>slf4j-api</artifactId>\n\t\t<version>1.7.13</version>\n\t\t<type>jar</type>\n\t\t<scope>compile</scope>\n\t</dependency>\n\t<dependency>\n\t\t<groupId>ch.qos.logback</groupId>\n\t\t<artifactId>logback-classic</artifactId>\n\t\t<version>1.1.3</version>\n\t\t<type>jar</type>\n\t\t<scope>provided</scope>\n\t</dependency>\n\t<dependency>\n\t\t<groupId>ch.qos.logback</groupId>\n\t\t<artifactId>logback-core</artifactId>\n\t\t<version>1.1.3</version>\n\t\t<type>jar</type>\n\t\t<scope>provided</scope>\n\t</dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "arangodb/src/main/java/com/yahoo/ycsb/db/ArangoDBClient.java",
    "content": "/**\n * Copyright (c) 2012 - 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport com.arangodb.ArangoConfigure;\nimport com.arangodb.ArangoDriver;\nimport com.arangodb.ArangoException;\nimport com.arangodb.ArangoHost;\nimport com.arangodb.DocumentCursor;\nimport com.arangodb.ErrorNums;\nimport com.arangodb.entity.BaseDocument;\nimport com.arangodb.entity.DocumentEntity;\nimport com.arangodb.entity.EntityFactory;\nimport com.arangodb.entity.TransactionEntity;\nimport com.arangodb.util.MapBuilder;\n\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.concurrent.atomic.AtomicInteger;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\n/**\n * ArangoDB binding for YCSB framework using the ArangoDB Inc. 
<a\n * href=\"https://github.com/arangodb/arangodb-java-driver\">driver</a>\n * <p>\n * See the <code>README.md</code> for configuration information.\n * </p>\n * \n * @see <a href=\"https://github.com/arangodb/arangodb-java-driver\">ArangoDB Inc.\n *      driver</a>\n */\npublic class ArangoDBClient extends DB {\n\n  private static Logger logger = LoggerFactory.getLogger(ArangoDBClient.class);\n  \n  /**\n   * The database name to access.\n   */\n  private static String databaseName = \"ycsb\";\n\n  /**\n   * Count the number of times initialized to teardown on the last\n   * {@link #cleanup()}.\n   */\n  private static final AtomicInteger INIT_COUNT = new AtomicInteger(0);\n\n  /** ArangoDB Driver related, Singleton. */\n  private static ArangoDriver arangoDriver;\n  private static Boolean dropDBBeforeRun;\n  private static Boolean waitForSync = true;\n  private static Boolean transactionUpdate = false;\n\n  /**\n   * Initialize any state for this DB. Called once per DB instance; there is\n   * one DB instance per client thread.\n   * \n   * Actually, one client process will share one DB instance here.(Coincide to\n   * mongoDB driver)\n   */\n  @Override\n  public void init() throws DBException {\n    INIT_COUNT.incrementAndGet();\n    synchronized (ArangoDBClient.class) {\n      if (arangoDriver != null) {\n        return;\n      }\n\n      Properties props = getProperties();\n\n      // Set the DB address\n      String ip = props.getProperty(\"arangodb.ip\", \"localhost\");\n      String portStr = props.getProperty(\"arangodb.port\", \"8529\");\n      int port = Integer.parseInt(portStr);\n\n      // If clear db before run\n      String dropDBBeforeRunStr = props.getProperty(\"arangodb.dropDBBeforeRun\", \"false\");\n      dropDBBeforeRun = Boolean.parseBoolean(dropDBBeforeRunStr);\n      \n      // Set the sync mode\n      String waitForSyncStr = props.getProperty(\"arangodb.waitForSync\", \"false\");\n      waitForSync = 
Boolean.parseBoolean(waitForSyncStr);\n      \n      // Set if transaction for update\n      String transactionUpdateStr = props.getProperty(\"arangodb.transactionUpdate\", \"false\");\n      transactionUpdate = Boolean.parseBoolean(transactionUpdateStr);\n      \n      // Init ArangoDB connection\n      try {\n        ArangoConfigure arangoConfigure = new ArangoConfigure();\n        arangoConfigure.setArangoHost(new ArangoHost(ip, port));\n        arangoConfigure.init();\n        arangoDriver = new ArangoDriver(arangoConfigure);\n      } catch (Exception e) {\n        logger.error(\"Failed to initialize ArangoDB\", e);\n        System.exit(-1);\n      }\n\n      // Init the database\n      if (dropDBBeforeRun) {\n        // Try delete first\n        try {\n          arangoDriver.deleteDatabase(databaseName);\n        } catch (ArangoException e) {\n          if (e.getErrorNumber() != ErrorNums.ERROR_ARANGO_DATABASE_NOT_FOUND) {\n            logger.error(\"Failed to delete database: {} with ex: {}\", databaseName, e.toString());\n            System.exit(-1);\n          } else {\n            logger.info(\"Fail to delete DB, already deleted: {}\", databaseName);\n          }\n        }\n      }\n      try {\n        arangoDriver.createDatabase(databaseName);\n        logger.info(\"Database created: \" + databaseName);\n      } catch (ArangoException e) {\n        if (e.getErrorNumber() != ErrorNums.ERROR_ARANGO_DUPLICATE_NAME) {\n          logger.error(\"Failed to create database: {} with ex: {}\", databaseName, e.toString());\n          System.exit(-1);\n        } else {\n          logger.info(\"DB already exists: {}\", databaseName);\n        }\n      }\n      // Always set the default db\n      arangoDriver.setDefaultDatabase(databaseName);\n      logger.info(\"ArangoDB client connection created to {}:{}\", ip, port);\n      \n      // Log the configuration\n      logger.info(\"Arango Configuration: dropDBBeforeRun: {}; address: {}:{}; databaseName: {};\"\n         
         + \" waitForSync: {}; transactionUpdate: {};\",\n                  dropDBBeforeRun, ip, port, databaseName, waitForSync, transactionUpdate);\n    }\n  }\n\n  /**\n   * Cleanup any state for this DB. Called once per DB instance; there is one\n   * DB instance per client thread.\n   * \n   * Actually, one client process will share one DB instance here.(Coincide to\n   * mongoDB driver)\n   */\n  @Override\n  public void cleanup() throws DBException {\n    if (INIT_COUNT.decrementAndGet() == 0) {\n      arangoDriver = null;\n      logger.info(\"Local cleaned up.\");\n    }\n  }\n\n  /**\n   * Insert a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key.\n   * \n   * @param table\n   *      The name of the table\n   * @param key\n   *      The record key of the record to insert.\n   * @param values\n   *      A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error. 
See the\n   *     {@link DB} class's description for a discussion of error codes.\n   */\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      BaseDocument toInsert = new BaseDocument(key);\n      for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n        toInsert.addAttribute(entry.getKey(), byteIteratorToString(entry.getValue()));\n      }\n      arangoDriver.createDocument(table, toInsert, true/*create collection if not exist*/,\n                                  waitForSync);\n      return Status.OK;\n    } catch (ArangoException e) {\n      if (e.getErrorNumber() != ErrorNums.ERROR_ARANGO_UNIQUE_CONSTRAINT_VIOLATED) {\n        logger.error(\"Fail to insert: {} {} with ex {}\", table, key, e.toString());\n      } else {\n        logger.debug(\"Trying to create document with duplicate key: {} {}\", table, key);\n        return Status.BAD_REQUEST;\n      }\n    }  catch (RuntimeException e) {\n      logger.error(\"Exception while trying insert {} {} with ex {}\", table, key, e.toString());\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Read a record from the database. 
Each field/value pair from the result\n   * will be stored in a HashMap.\n   * \n   * @param table\n   *      The name of the table\n   * @param key\n   *      The record key of the record to read.\n   * @param fields\n   *      The list of fields to read, or null for all of them\n   * @param result\n   *      A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error or \"not found\".\n   */\n  @SuppressWarnings(\"unchecked\")\n  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    try {\n      DocumentEntity<BaseDocument> targetDoc = arangoDriver.getDocument(table, key, BaseDocument.class);\n      BaseDocument aDocument = targetDoc.getEntity();\n      if (!this.fillMap(result, aDocument.getProperties(), fields)) {\n        return Status.ERROR;\n      }\n      return Status.OK;\n    } catch (ArangoException e) {\n      if (e.getErrorNumber() != ErrorNums.ERROR_ARANGO_DOCUMENT_NOT_FOUND) {\n        logger.error(\"Fail to read: {} {} with ex {}\", table, key, e.toString());\n      } else {\n        logger.debug(\"Trying to read document not exist: {} {}\", table, key);\n        return Status.NOT_FOUND;\n      }\n    } catch (RuntimeException e) {\n      logger.error(\"Exception while trying read {} {} with ex {}\", table, key, e.toString());\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Update a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key, overwriting any existing values with the same field name.\n   * \n   * @param table\n   *      The name of the table\n   * @param key\n   *      The record key of the record to write.\n   * @param values\n   *      A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error. 
See this class's\n   *     description for a discussion of error codes.\n   */\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      \n      if (!transactionUpdate) {\n        BaseDocument updateDoc = new BaseDocument();\n        for (String field : values.keySet()) {\n          updateDoc.addAttribute(field, byteIteratorToString(values.get(field)));\n        }\n        arangoDriver.updateDocument(table, key, updateDoc);\n        return Status.OK;\n      } else {\n        // id for documentHandle\n        String transactionAction = \"function (id) {\"\n               // use internal database functions\n            + \"var db = require('internal').db;\"\n              // collection.update(document, data, overwrite, keepNull, waitForSync)\n            + String.format(\"db._update(id, %s, true, false, %s);}\",\n                mapToJson(values), Boolean.toString(waitForSync).toLowerCase());\n        TransactionEntity transaction = arangoDriver.createTransaction(transactionAction);\n        transaction.addWriteCollection(table);\n        transaction.setParams(createDocumentHandle(table, key));\n        arangoDriver.executeTransaction(transaction);\n        return Status.OK;\n      }\n    } catch (ArangoException e) {\n      if (e.getErrorNumber() != ErrorNums.ERROR_ARANGO_DOCUMENT_NOT_FOUND) {\n        logger.error(\"Fail to update: {} {} with ex {}\", table, key, e.toString());\n      } else {\n        logger.debug(\"Trying to update document not exist: {} {}\", table, key);\n        return Status.NOT_FOUND;\n      }\n    } catch (RuntimeException e) {\n      logger.error(\"Exception while trying update {} {} with ex {}\", table, key, e.toString());\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Delete a record from the database.\n   * \n   * @param table\n   *      The name of the table\n   * @param key\n   *      The record key of the record to delete.\n   * @return Zero on success, a non-zero error 
code on error. See the\n   *     {@link DB} class's description for a discussion of error codes.\n   */\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      arangoDriver.deleteDocument(table, key);\n      return Status.OK;\n    } catch (ArangoException e) {\n      if (e.getErrorNumber() != ErrorNums.ERROR_ARANGO_DOCUMENT_NOT_FOUND) {\n        logger.error(\"Fail to delete: {} {} with ex {}\", table, key, e.toString());\n      } else {\n        logger.debug(\"Trying to delete document not exist: {} {}\", table, key);\n        return Status.NOT_FOUND;\n      }\n    } catch (RuntimeException e) {\n      logger.error(\"Exception while trying delete {} {} with ex {}\", table, key, e.toString());\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. Each\n   * field/value pair from the result will be stored in a HashMap.\n   * \n   * @param table\n   *      The name of the table\n   * @param startkey\n   *      The record key of the first record to read.\n   * @param recordcount\n   *      The number of records to read\n   * @param fields\n   *      The list of fields to read, or null for all of them\n   * @param result\n   *      A Vector of HashMaps, where each HashMap is a set field/value\n   *      pairs for one record\n   * @return Zero on success, a non-zero error code on error. 
See the\n   *     {@link DB} class's description for a discussion of error codes.\n   */\n  @Override\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\n      Vector<HashMap<String, ByteIterator>> result) {\n    DocumentCursor<BaseDocument> cursor = null;\n    try {\n      String aqlQuery = String.format(\n          \"FOR target IN %s FILTER target._key >= @key SORT target._key ASC LIMIT %d RETURN %s \", table,\n          recordcount, constructReturnForAQL(fields, \"target\"));\n\n      Map<String, Object> bindVars = new MapBuilder().put(\"key\", startkey).get();\n      cursor = arangoDriver.executeDocumentQuery(aqlQuery, bindVars, null, BaseDocument.class);\n      Iterator<BaseDocument> iterator = cursor.entityIterator();\n      while (iterator.hasNext()) {\n        BaseDocument aDocument = iterator.next();\n        HashMap<String, ByteIterator> aMap = new HashMap<String, ByteIterator>(aDocument.getProperties().size());\n        if (!this.fillMap(aMap, aDocument.getProperties())) {\n          return Status.ERROR;\n        }\n        result.add(aMap);\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      logger.error(\"Exception while trying scan {} {} {} with ex {}\", table, startkey, recordcount, e.toString());\n    } finally {\n      if (cursor != null) {\n        try {\n          cursor.close();\n        } catch (ArangoException e) {\n          logger.error(\"Fail to close cursor\", e);\n        }\n      }\n    }\n    return Status.ERROR;\n  }\n\n  private String createDocumentHandle(String collectionName, String documentKey) throws ArangoException {\n    validateCollectionName(collectionName);\n    return collectionName + \"/\" + documentKey;\n  }\n\n  private void validateCollectionName(String name) throws ArangoException {\n    if (name.indexOf('/') != -1) {\n      throw new ArangoException(\"does not allow '/' in name.\");\n    }\n  }\n\n  \n  private String constructReturnForAQL(Set<String> fields, 
String targetName) {\n    // Construct the AQL query string.\n    String resultDes = targetName;\n    if (fields != null && fields.size() != 0) {\n      StringBuilder builder = new StringBuilder(\"{\");\n      for (String field : fields) {\n        builder.append(String.format(\"\\n\\\"%s\\\" : %s.%s,\", field, targetName, field));\n      }\n      // Replace the last ',' with a newline.\n      builder.setCharAt(builder.length() - 1, '\\n');\n      builder.append(\"}\");\n      resultDes = builder.toString();\n    }\n    return resultDes;\n  }\n  \n  private boolean fillMap(Map<String, ByteIterator> resultMap, Map<String, Object> properties) {\n    return fillMap(resultMap, properties, null);\n  }\n  \n  /**\n   * Fills the map with the properties from the BaseDocument.\n   * \n   * @param resultMap\n   *      The map to fill.\n   * @param properties\n   *      The properties to copy values from.\n   * @param fields\n   *      The set of fields to copy, or null for all of them.\n   * @return true on success\n   */\n  @SuppressWarnings(\"unchecked\")\n  private boolean fillMap(Map<String, ByteIterator> resultMap, Map<String, Object> properties, Set<String> fields) {\n    if (fields == null || fields.size() == 0) {\n      for (Map.Entry<String, Object> entry : properties.entrySet()) {\n        if (entry.getValue() instanceof String) {\n          resultMap.put(entry.getKey(),\n              stringToByteIterator((String)(entry.getValue())));\n        } else {\n          logger.error(\"Error! Not the format expected! Actually is {}\",\n              entry.getValue().getClass().getName());\n          return false;\n        }\n      }\n    } else {\n      for (String field : fields) {\n        if (properties.get(field) instanceof String) {\n          resultMap.put(field, stringToByteIterator((String)(properties.get(field))));\n        } else {\n          logger.error(\"Error! Not the format expected! 
Actually is {}\",\n              properties.get(field).getClass().getName());\n          return false;\n        }\n      }\n    }\n    return true;\n  }\n  \n  private String byteIteratorToString(ByteIterator byteIter) {\n    return new String(byteIter.toArray());\n  }\n\n  private ByteIterator stringToByteIterator(String content) {\n    return new StringByteIterator(content);\n  }\n  \n  private String mapToJson(Map<String, ByteIterator> values) {\n    Map<String, String> intervalRst = new HashMap<String, String>();\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      intervalRst.put(entry.getKey(), byteIteratorToString(entry.getValue()));\n    }\n    return EntityFactory.toJsonString(intervalRst);\n  }\n  \n}\n"
  },
  {
    "path": "arangodb/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/**\n * Copyright (c) 2012 - 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"https://www.arangodb.com/\">ArangoDB</a>.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "arangodb3/.gitignore",
    "content": ""
  },
  {
    "path": "arangodb3/README.md",
    "content": "<!--\nCopyright (c) 2017 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on ArangoDB. \n\n### 1. Start ArangoDB\nSee https://docs.arangodb.com/Installing/index.html\n\n### 2. Install Java and Maven\n\nGo to http://www.oracle.com/technetwork/java/javase/downloads/index.html\n\nand get the url to download the rpm into your server. For example:\n\n    wget http://download.oracle.com/otn-pub/java/jdk/7u40-b43/jdk-7u40-linux-x64.rpm?AuthParam=11232426132 -o jdk-7u40-linux-x64.rpm\n    rpm -Uvh jdk-7u40-linux-x64.rpm\n    \nOr install via yum/apt-get\n\n    sudo yum install java-devel\n\nDownload MVN from http://maven.apache.org/download.cgi\n\n    wget http://ftp.heanet.ie/mirrors/www.apache.org/dist/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.tar.gz\n    sudo tar xzf apache-maven-*-bin.tar.gz -C /usr/local\n    cd /usr/local\n    sudo ln -s apache-maven-* maven\n    sudo vi /etc/profile.d/maven.sh\n\nAdd the following to `maven.sh`\n\n    export M2_HOME=/usr/local/maven\n    export PATH=${M2_HOME}/bin:${PATH}\n\nReload bash and test mvn\n\n    bash\n    mvn -version\n\n### 3. Set Up YCSB\n\nClone this YCSB source code:\n\n    git clone https://github.com/brianfrankcooper/YCSB.git\n\n### 4. Run YCSB\n\nNow you are ready to run! 
First, drop the existing collection: \"usertable\" under database \"ycsb\":\n\t\n\tdb._collection(\"usertable\").drop()\n\nThen, load the data:\n\n    ./bin/ycsb load arangodb3 -s -P workloads/workloada -p arangodb.ip=xxx -p arangodb.port=xxx\n\nThen, run the workload:\n\n    ./bin/ycsb run arangodb3 -s -P workloads/workloada -p arangodb.ip=xxx -p arangodb.port=xxx\n\nSee the next section for the list of configuration parameters for ArangoDB.\n\n## ArangoDB Configuration Parameters\n\n- `arangodb.ip`\n  - Default value is `localhost`\n\n- `arangodb.port`\n  - Default value is `8529`.\n  \n- `arangodb.waitForSync`\n  - Default value is `true`.\n  \n- `arangodb.transactionUpdate`\n  - Default value is `false`.\n\n- `arangodb.dropDBBeforeRun`\n  - Default value is `false`.\n"
  },
  {
    "path": "arangodb3/conf/logback.xml",
    "content": "<!-- \nCopyright (c) 2017 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<configuration>\n\n  <appender name=\"STDERR\" class=\"ch.qos.logback.core.ConsoleAppender\">\n    <!-- encoders are assigned the type\n         ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->\n    <encoder>\n      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>\n    </encoder>\n  </appender>\n\n  <root level=\"info\">\n    <appender-ref ref=\"STDERR\" />\n  </root>\n</configuration>\n"
  },
  {
    "path": "arangodb3/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2017 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>arangodb3-binding</artifactId>\n  <name>ArangoDB3 Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.arangodb</groupId>\n      <artifactId>arangodb-java-driver</artifactId>\n      <version>${arangodb3.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    
</dependency>\n\t<dependency>\n\t\t<groupId>org.slf4j</groupId>\n\t\t<artifactId>slf4j-api</artifactId>\n\t\t<version>1.7.13</version>\n\t\t<type>jar</type>\n\t\t<scope>compile</scope>\n\t</dependency>\n\t<dependency>\n\t\t<groupId>ch.qos.logback</groupId>\n\t\t<artifactId>logback-classic</artifactId>\n\t\t<version>1.1.3</version>\n\t\t<type>jar</type>\n\t\t<scope>provided</scope>\n\t</dependency>\n\t<dependency>\n\t\t<groupId>ch.qos.logback</groupId>\n\t\t<artifactId>logback-core</artifactId>\n\t\t<version>1.1.3</version>\n\t\t<type>jar</type>\n\t\t<scope>provided</scope>\n\t</dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "arangodb3/src/main/java/com/yahoo/ycsb/db/arangodb/ArangoDB3Client.java",
    "content": "/**\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db.arangodb;\n\nimport java.io.IOException;\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.Map.Entry;\nimport java.util.concurrent.atomic.AtomicInteger;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport com.arangodb.ArangoCursor;\nimport com.arangodb.ArangoDB;\nimport com.arangodb.ArangoDBException;\nimport com.arangodb.entity.BaseDocument;\nimport com.arangodb.model.DocumentCreateOptions;\nimport com.arangodb.model.TransactionOptions;\nimport com.arangodb.util.MapBuilder;\nimport com.arangodb.velocypack.VPackBuilder;\nimport com.arangodb.velocypack.VPackSlice;\nimport com.arangodb.velocypack.ValueType;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\n/**\n * ArangoDB binding for YCSB framework using the ArangoDB Inc. 
<a\n * href=\"https://github.com/arangodb/arangodb-java-driver\">driver</a>\n * <p>\n * See the <code>README.md</code> for configuration information.\n * </p>\n * \n * @see <a href=\"https://github.com/arangodb/arangodb-java-driver\">ArangoDB Inc.\n *      driver</a>\n */\npublic class ArangoDB3Client extends DB {\n\n  private static Logger logger = LoggerFactory.getLogger(ArangoDB3Client.class);\n  \n  /**\n   * Count the number of times initialized to teardown on the last\n   * {@link #cleanup()}.\n   */\n  private static final AtomicInteger INIT_COUNT = new AtomicInteger(0);\n\n  /** ArangoDB Driver related, Singleton. */\n  private ArangoDB arangoDB;\n  private String databaseName = \"ycsb\";\n  private String collectionName;\n  private Boolean dropDBBeforeRun;\n  private Boolean waitForSync = false;\n  private Boolean transactionUpdate = false;\n\n  /**\n   * Initialize any state for this DB. Called once per DB instance; there is\n   * one DB instance per client thread.\n   * \n   * In practice, one client process will share one DB instance here\n   * (consistent with the MongoDB driver).\n   */\n  @Override\n  public void init() throws DBException {\n    synchronized (ArangoDB3Client.class) {\n      Properties props = getProperties();\n\n      collectionName = props.getProperty(\"table\", \"usertable\");\n\n      // Set the DB address\n      String ip = props.getProperty(\"arangodb.ip\", \"localhost\");\n      String portStr = props.getProperty(\"arangodb.port\", \"8529\");\n      int port = Integer.parseInt(portStr);\n\n      // If clear db before run\n      String dropDBBeforeRunStr = props.getProperty(\"arangodb.dropDBBeforeRun\", \"false\");\n      dropDBBeforeRun = Boolean.parseBoolean(dropDBBeforeRunStr);\n      \n      // Set the sync mode\n      String waitForSyncStr = props.getProperty(\"arangodb.waitForSync\", \"false\");\n      waitForSync = Boolean.parseBoolean(waitForSyncStr);\n      \n      // Set if transaction for update\n      String transactionUpdateStr 
= props.getProperty(\"arangodb.transactionUpdate\", \"false\");\n      transactionUpdate = Boolean.parseBoolean(transactionUpdateStr);\n      \n      // Init ArangoDB connection\n      try {\n        arangoDB = new ArangoDB.Builder().host(ip).port(port).build();\n      } catch (Exception e) {\n        logger.error(\"Failed to initialize ArangoDB\", e);\n        System.exit(-1);\n      }\n\n      if(INIT_COUNT.getAndIncrement() == 0) {\n        // Init the database\n        if (dropDBBeforeRun) {\n          // Try delete first\n          try {\n            arangoDB.db(databaseName).drop();\n          } catch (ArangoDBException e) {\n            logger.info(\"Fail to delete DB: {}\", databaseName);\n          }\n        }\n        try {\n          arangoDB.createDatabase(databaseName);\n          logger.info(\"Database created: \" + databaseName);\n        } catch (ArangoDBException e) {\n          logger.error(\"Failed to create database: {} with ex: {}\", databaseName, e.toString());\n        }\n        try {\n          arangoDB.db(databaseName).createCollection(collectionName);\n          logger.info(\"Collection created: \" + collectionName);\n        } catch (ArangoDBException e) {\n          logger.error(\"Failed to create collection: {} with ex: {}\", collectionName, e.toString());\n        }\n        logger.info(\"ArangoDB client connection created to {}:{}\", ip, port);\n\n        // Log the configuration\n        logger.info(\"Arango Configuration: dropDBBeforeRun: {}; address: {}:{}; databaseName: {};\"\n                    + \" waitForSync: {}; transactionUpdate: {};\",\n                    dropDBBeforeRun, ip, port, databaseName, waitForSync, transactionUpdate);\n      }\n    }\n  }\n\n  /**\n   * Cleanup any state for this DB. 
Called once per DB instance; there is one\n   * DB instance per client thread.\n   * \n   * In practice, one client process will share one DB instance here\n   * (consistent with the MongoDB driver).\n   */\n  @Override\n  public void cleanup() throws DBException {\n    if (INIT_COUNT.decrementAndGet() == 0) {\n      arangoDB.shutdown();\n      arangoDB = null;\n      logger.info(\"Local cleaned up.\");\n    }\n  }\n\n  /**\n   * Insert a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key.\n   * \n   * @param table\n   *      The name of the table\n   * @param key\n   *      The record key of the record to insert.\n   * @param values\n   *      A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error. See the\n   *     {@link DB} class's description for a discussion of error codes.\n   */\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      BaseDocument toInsert = new BaseDocument(key);\n      for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n        toInsert.addAttribute(entry.getKey(), byteIteratorToString(entry.getValue()));\n      }\n      DocumentCreateOptions options = new DocumentCreateOptions().waitForSync(waitForSync);\n      arangoDB.db(databaseName).collection(table).insertDocument(toInsert, options);\n      return Status.OK;\n    } catch (ArangoDBException e) {\n      logger.error(\"Exception while trying insert {} {} with ex {}\", table, key, e.toString());\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Read a record from the database. 
Each field/value pair from the result\n   * will be stored in a HashMap.\n   * \n   * @param table\n   *      The name of the table\n   * @param key\n   *      The record key of the record to read.\n   * @param fields\n   *      The list of fields to read, or null for all of them\n   * @param result\n   *      A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error or \"not found\".\n   */\n  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    try {\n      VPackSlice document = arangoDB.db(databaseName).collection(table).getDocument(key, VPackSlice.class, null);\n      if (!this.fillMap(result, document, fields)) {\n        return Status.ERROR;\n      }\n      return Status.OK;\n    } catch (ArangoDBException e) {\n      logger.error(\"Exception while trying read {} {} with ex {}\", table, key, e.toString());\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Update a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key, overwriting any existing values with the same field name.\n   * \n   * @param table\n   *      The name of the table\n   * @param key\n   *      The record key of the record to write.\n   * @param values\n   *      A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error. 
See this class's\n   *     description for a discussion of error codes.\n   */\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      if (!transactionUpdate) {\n        BaseDocument updateDoc = new BaseDocument();\n        for (Entry<String, ByteIterator> field : values.entrySet()) {\n          updateDoc.addAttribute(field.getKey(), byteIteratorToString(field.getValue()));\n        }\n        arangoDB.db(databaseName).collection(table).updateDocument(key, updateDoc);\n        return Status.OK;\n      } else {\n        // id for documentHandle\n        String transactionAction = \"function (id) {\"\n               // use internal database functions\n            + \"var db = require('internal').db;\"\n              // collection.update(document, data, overwrite, keepNull, waitForSync)\n            + String.format(\"db._update(id, %s, true, false, %s);}\",\n                mapToJson(values), Boolean.toString(waitForSync).toLowerCase());\n        TransactionOptions options = new TransactionOptions();\n        options.writeCollections(table);\n        options.params(createDocumentHandle(table, key));\n        arangoDB.db(databaseName).transaction(transactionAction, Void.class, options);\n        return Status.OK;\n      }\n    } catch (ArangoDBException e) {\n      logger.error(\"Exception while trying update {} {} with ex {}\", table, key, e.toString());\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Delete a record from the database.\n   * \n   * @param table\n   *      The name of the table\n   * @param key\n   *      The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error. 
See the\n   *     {@link DB} class's description for a discussion of error codes.\n   */\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      arangoDB.db(databaseName).collection(table).deleteDocument(key);\n      return Status.OK;\n    } catch (ArangoDBException e) {\n      logger.error(\"Exception while trying delete {} {} with ex {}\", table, key, e.toString());\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. Each\n   * field/value pair from the result will be stored in a HashMap.\n   * \n   * @param table\n   *      The name of the table\n   * @param startkey\n   *      The record key of the first record to read.\n   * @param recordcount\n   *      The number of records to read\n   * @param fields\n   *      The list of fields to read, or null for all of them\n   * @param result\n   *      A Vector of HashMaps, where each HashMap is a set field/value\n   *      pairs for one record\n   * @return Zero on success, a non-zero error code on error. 
See the\n   *     {@link DB} class's description for a discussion of error codes.\n   */\n  @Override\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\n      Vector<HashMap<String, ByteIterator>> result) {\n    ArangoCursor<VPackSlice> cursor = null;\n    try {\n      String aqlQuery = String.format(\n          \"FOR target IN %s FILTER target._key >= @key SORT target._key ASC LIMIT %d RETURN %s \", table,\n          recordcount, constructReturnForAQL(fields, \"target\"));\n\n      Map<String, Object> bindVars = new MapBuilder().put(\"key\", startkey).get();\n      cursor = arangoDB.db(databaseName).query(aqlQuery, bindVars, null, VPackSlice.class);\n      while (cursor.hasNext()) {\n        VPackSlice aDocument = cursor.next();\n        HashMap<String, ByteIterator> aMap = new HashMap<String, ByteIterator>(aDocument.size());\n        if (!this.fillMap(aMap, aDocument)) {\n          return Status.ERROR;\n        }\n        result.add(aMap);\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      logger.error(\"Exception while trying scan {} {} {} with ex {}\", table, startkey, recordcount, e.toString());\n    } finally {\n      if (cursor != null) {\n        try {\n          cursor.close();\n        } catch (IOException e) {\n          logger.error(\"Fail to close cursor\", e);\n        }\n      }\n    }\n    return Status.ERROR;\n  }\n\n  private String createDocumentHandle(String collection, String documentKey) throws ArangoDBException {\n    validateCollectionName(collection);\n    return collection + \"/\" + documentKey;\n  }\n\n  private void validateCollectionName(String name) throws ArangoDBException {\n    if (name.indexOf('/') != -1) {\n      throw new ArangoDBException(\"does not allow '/' in name.\");\n    }\n  }\n\n  \n  private String constructReturnForAQL(Set<String> fields, String targetName) {\n    // Construct the AQL query string.\n    String resultDes = targetName;\n    if (fields != null && 
fields.size() != 0) {\n      StringBuilder builder = new StringBuilder(\"{\");\n      for (String field : fields) {\n        builder.append(String.format(\"\\n\\\"%s\\\" : %s.%s,\", field, targetName, field));\n      }\n      // Replace the last ',' with a newline.\n      builder.setCharAt(builder.length() - 1, '\\n');\n      builder.append(\"}\");\n      resultDes = builder.toString();\n    }\n    return resultDes;\n  }\n  \n  private boolean fillMap(Map<String, ByteIterator> resultMap, VPackSlice document) {\n    return fillMap(resultMap, document, null);\n  }\n  \n  /**\n   * Fills the map with the properties from the document.\n   * \n   * @param resultMap\n   *      The map to fill.\n   * @param document\n   *      The record to read from\n   * @param fields\n   *      The list of fields to read, or null for all of them\n   * @return true on success, false otherwise.\n   */\n  private boolean fillMap(Map<String, ByteIterator> resultMap, VPackSlice document, Set<String> fields) {\n    if (fields == null || fields.size() == 0) {\n      for (Iterator<Entry<String, VPackSlice>> iterator = document.objectIterator(); iterator.hasNext();) {\n        Entry<String, VPackSlice> next = iterator.next();\n        VPackSlice value = next.getValue();\n        if (value.isString()) {\n          resultMap.put(next.getKey(), stringToByteIterator(value.getAsString()));\n        } else if (!value.isCustom()) {\n          logger.error(\"Error! Not the format expected! Actually is {}\",\n              value.getClass().getName());\n          return false;\n        }\n      }\n    } else {\n      for (String field : fields) {\n        VPackSlice value = document.get(field);\n        if (value.isString()) {\n          resultMap.put(field, stringToByteIterator(value.getAsString()));\n        } else if (!value.isCustom()) {\n          logger.error(\"Error! Not the format expected! 
Actually is {}\",\n              value.getClass().getName());\n          return false;\n        }\n      }\n    }\n    return true;\n  }\n  \n  private String byteIteratorToString(ByteIterator byteIter) {\n    return new String(byteIter.toArray());\n  }\n\n  private ByteIterator stringToByteIterator(String content) {\n    return new StringByteIterator(content);\n  }\n  \n  private String mapToJson(Map<String, ByteIterator> values) {\n    VPackBuilder builder = new VPackBuilder().add(ValueType.OBJECT);\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      builder.add(entry.getKey(), byteIteratorToString(entry.getValue()));\n    }\n    builder.close();\n    return arangoDB.util().deserialize(builder.slice(), String.class);\n  }\n  \n}\n"
  },
  {
    "path": "arangodb3/src/main/java/com/yahoo/ycsb/db/arangodb/package-info.java",
    "content": "/**\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"https://www.arangodb.com/\">ArangoDB</a>.\n */\npackage com.yahoo.ycsb.db.arangodb;\n\n"
  },
  {
    "path": "asynchbase/README.md",
    "content": "<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# AsyncHBase Driver for YCSB\n\nThis driver provides a YCSB workload binding for Apache HBase using an alternative to the included HBase client. AsyncHBase is completely asynchronous for all operations and is particularly useful for write heavy workloads. Note that it supports a subset of the HBase client APIs but supports all public released versions of HBase.\n\n## Quickstart\n\n### 1. Setup Hbase\n\nFollow directions 1 to 3 from ``hbase098``'s readme.\n\n### 2. Load a Workload\n\nSwitch to the root of the YCSB repo and choose the workload you want to run and `load` it first. With the CLI you must provide the column family at a minimum if HBase is running on localhost. Otherwise you must provide connection properties via CLI or the path to a config file. Additional configuration parameters are available below.\n\n```\nbin/ycsb load asynchbase -p columnfamily=cf -P workloads/workloada\n\n```\n\nThe `load` step only executes inserts into the datastore. After loading data, run the same workload to mix reads with writes.\n\n```\nbin/ycsb run asynchbase -p columnfamily=cf -P workloads/workloada\n\n```\n\n## Configuration Options\n\nThe following options can be configured using CLI (using the `-p` parameter) or via a JAVA style properties configuration file.. 
Check the [AsyncHBase Configuration](http://opentsdb.github.io/asynchbase/docs/build/html/configuration.html) project for additional tuning parameters.\n\n* `columnfamily`: (Required) The column family to target.\n* `config`: Optional full path to a configuration file with AsyncHBase options.\n* `hbase.zookeeper.quorum`: Zookeeper quorum list.\n* `hbase.zookeeper.znode.parent`: Path used by HBase in Zookeeper. Default is \"/hbase\".\n* `debug`: If true, prints debug information to standard out. The default is false.\n* `clientbuffering`: Whether or not to use client side buffering and batching of write operations. This can significantly improve performance and defaults to true.\n* `durable`: When set to false, writes and deletes bypass the WAL for quicker responses. Default is true.\n* `jointimeout`: A timeout value, in milliseconds, for waiting on operations synchronously before an error is thrown.\n* `prefetchmeta`: Whether or not to read meta for all regions in the table and connect to the proper region servers before starting operations. Defaults to false.\n\n\nNote: This module includes some Google Guava source files from version 12 that were later removed but are still required by HBase's test modules for setting up the mini cluster during integration testing."
  },
  {
    "path": "asynchbase/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent/</relativePath>\n  </parent>\n\n  <artifactId>asynchbase-binding</artifactId>\n  <name>AsyncHBase Client Binding for Apache HBase</name>\n\n  <properties>\n    <!-- Tests do not run on jdk9 -->\n    <skipJDK9Tests>true</skipJDK9Tests>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.hbase</groupId>\n      <artifactId>asynchbase</artifactId>\n      <version>${asynchbase.version}</version>\n    </dependency>\n    \n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    \n    <dependency>\n      <groupId>org.apache.zookeeper</groupId>\n      <artifactId>zookeeper</artifactId>\n      <version>3.4.5</version>\n      <exclusions>\n        <exclusion>\n          <groupId>log4j</groupId>\n          
<artifactId>log4j</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>org.slf4j</groupId>\n          <artifactId>slf4j-log4j12</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>jline</groupId>\n          <artifactId>jline</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>junit</groupId>\n          <artifactId>junit</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>org.jboss.netty</groupId>\n          <artifactId>netty</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    \n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n    \n    <dependency>\n      <groupId>org.apache.hbase</groupId>\n      <artifactId>hbase-testing-util</artifactId>\n      <version>${hbase10.version}</version>\n      <scope>test</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>jdk.tools</groupId>\n          <artifactId>jdk.tools</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    \n    <dependency>\n      <groupId>org.apache.hbase</groupId>\n      <artifactId>hbase-client</artifactId>\n      <version>${hbase10.version}</version>\n      <scope>test</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>jdk.tools</groupId>\n          <artifactId>jdk.tools</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    \n    <dependency>\n      <groupId>log4j</groupId>\n      <artifactId>log4j</artifactId>\n      <version>1.2.17</version>\n      <scope>test</scope>\n    </dependency>\n    \n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>log4j-over-slf4j</artifactId>\n      <version>1.7.7</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n 
       <artifactId>maven-surefire-plugin</artifactId>\n        <version>2.20</version>\n        <configuration>\n          <argLine>-Xms4096m -Xmx4096m</argLine>\n        </configuration>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "asynchbase/src/main/java/com/yahoo/ycsb/db/AsyncHBaseClient.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n * \n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport java.io.IOException;\nimport java.nio.charset.Charset;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport org.hbase.async.Bytes;\nimport org.hbase.async.Config;\nimport org.hbase.async.DeleteRequest;\nimport org.hbase.async.GetRequest;\nimport org.hbase.async.HBaseClient;\nimport org.hbase.async.KeyValue;\nimport org.hbase.async.PutRequest;\nimport org.hbase.async.Scanner;\n\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY;\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY_DEFAULT;\n\n/**\n * Alternative Java client for Apache HBase.\n * \n * This client provides a subset of the main HBase client and uses a completely\n * asynchronous pipeline for all calls. It is particularly useful for write heavy\n * workloads. It is also compatible with all production versions of HBase. 
\n */\npublic class AsyncHBaseClient extends com.yahoo.ycsb.DB {\n  public static final Charset UTF8_CHARSET = Charset.forName(\"UTF8\");\n  private static final String CLIENT_SIDE_BUFFERING_PROPERTY = \"clientbuffering\";\n  private static final String DURABILITY_PROPERTY = \"durability\";\n  private static final String PREFETCH_META_PROPERTY = \"prefetchmeta\";\n  private static final String CONFIG_PROPERTY = \"config\";\n  private static final String COLUMN_FAMILY_PROPERTY = \"columnfamily\";\n  private static final String JOIN_TIMEOUT_PROPERTY = \"jointimeout\";\n  private static final String JOIN_TIMEOUT_PROPERTY_DEFAULT = \"30000\";\n  \n  /** Mutex for instantiating a single instance of the client. */\n  private static final Object MUTEX = new Object();\n  \n  /** Used for tracking running thread counts so we know when to shut down the client. */\n  private static int threadCount = 0;\n  \n  /** The client that's used for all threads. */\n  private static HBaseClient client;\n  \n  /** Print debug information to standard out. */\n  private boolean debug = false;\n  \n  /** The column family used for the workload. */\n  private byte[] columnFamilyBytes;\n  \n  /** Cache for the last table name/ID to avoid byte conversions. */\n  private String lastTable = \"\";\n  private byte[] lastTableBytes;\n  \n  private long joinTimeout;\n  \n  /** Whether or not to bypass the WAL for puts and deletes. */\n  private boolean durability = true;\n  \n  /**\n   * If true, buffer mutations on the client. This is the default behavior for\n   * AsyncHBase. 
For measuring insert/update/delete latencies, client side\n   * buffering should be disabled.\n   */\n  private boolean clientSideBuffering = false;\n  \n  @Override\n  public void init() throws DBException {\n    if (getProperties().getProperty(CLIENT_SIDE_BUFFERING_PROPERTY, \"false\")\n        .toLowerCase().equals(\"true\")) {\n      clientSideBuffering = true;\n    }\n    if (getProperties().getProperty(DURABILITY_PROPERTY, \"true\")\n        .toLowerCase().equals(\"false\")) {\n      durability = false;\n    }\n    final String columnFamily = getProperties().getProperty(COLUMN_FAMILY_PROPERTY);\n    if (columnFamily == null || columnFamily.isEmpty()) {\n      System.err.println(\"Error, must specify a columnfamily for HBase table\");\n      throw new DBException(\"No columnfamily specified\");\n    }\n    columnFamilyBytes = columnFamily.getBytes();\n    \n    if ((getProperties().getProperty(\"debug\") != null)\n        && (getProperties().getProperty(\"debug\").compareTo(\"true\") == 0)) {\n      debug = true;\n    }\n    \n    joinTimeout = Integer.parseInt(getProperties().getProperty(\n        JOIN_TIMEOUT_PROPERTY, JOIN_TIMEOUT_PROPERTY_DEFAULT));\n    \n    final boolean prefetchMeta = getProperties()\n        .getProperty(PREFETCH_META_PROPERTY, \"false\")\n        .toLowerCase().equals(\"true\") ? 
true : false;\n    try {\n      synchronized (MUTEX) {\n        ++threadCount;\n        if (client == null) {\n          final String configPath = getProperties().getProperty(CONFIG_PROPERTY);\n          final Config config;\n          if (configPath == null || configPath.isEmpty()) {\n            config = new Config();\n            final Iterator<Entry<Object, Object>> iterator = getProperties()\n                 .entrySet().iterator();\n            while (iterator.hasNext()) {\n              final Entry<Object, Object> property = iterator.next();\n              config.overrideConfig((String)property.getKey(), \n                  (String)property.getValue());\n            }\n          } else {\n            config = new Config(configPath);\n          }\n          client = new HBaseClient(config);\n          \n          // Terminate right now if table does not exist, since the client\n          // will not propagate this error upstream once the workload\n          // starts.\n          String table = getProperties().getProperty(TABLENAME_PROPERTY, TABLENAME_PROPERTY_DEFAULT);\n          try {\n            client.ensureTableExists(table).join(joinTimeout);\n          } catch (InterruptedException e1) {\n            Thread.currentThread().interrupt();\n          } catch (Exception e) {\n            throw new DBException(e);\n          }\n          \n          if (prefetchMeta) {\n            try {\n              if (debug) {\n                System.out.println(\"Starting meta prefetch for table \" + table);\n              }\n              client.prefetchMeta(table).join(joinTimeout);\n              if (debug) {\n                System.out.println(\"Completed meta prefetch for table \" + table);\n              }\n            } catch (InterruptedException e) {\n              System.err.println(\"Interrupted during prefetch\");\n              Thread.currentThread().interrupt();\n            } catch (Exception e) {\n              throw new DBException(\"Failed prefetch\", 
e);\n            }\n          }\n        }\n      }\n    } catch (IOException e) {\n      throw new DBException(\"Failed instantiation of client\", e);\n    }\n  }\n  \n  @Override\n  public void cleanup() throws DBException {\n    synchronized (MUTEX) {\n      --threadCount;\n      if (client != null && threadCount < 1) {\n        try {\n          if (debug) {\n            System.out.println(\"Shutting down client\");\n          }\n          client.shutdown().joinUninterruptibly(joinTimeout);\n        } catch (Exception e) {\n          System.err.println(\"Failed to shutdown the AsyncHBase client \"\n              + \"properly: \" + e.getMessage());\n        }\n        client = null;\n      }\n    }\n  }\n  \n  @Override\n  public Status read(String table, String key, Set<String> fields,\n                     Map<String, ByteIterator> result) {\n    setTable(table);\n    \n    final GetRequest get = new GetRequest(\n        lastTableBytes, key.getBytes(), columnFamilyBytes);\n    if (fields != null) {\n      get.qualifiers(getQualifierList(fields));\n    }\n    \n    try {\n      if (debug) {\n        System.out.println(\"Doing read from HBase columnfamily \" + \n            Bytes.pretty(columnFamilyBytes));\n        System.out.println(\"Doing read for key: \" + key);\n      }\n      \n      final ArrayList<KeyValue> row = client.get(get).join(joinTimeout);\n      if (row == null || row.isEmpty()) {\n        return Status.NOT_FOUND;\n      }\n      \n      // got something so populate the results\n      for (final KeyValue column : row) {\n        result.put(new String(column.qualifier()), \n            // TODO - do we need to clone this array? 
YCSB may keep it in memory\n            // for a while which would mean the entire KV would hang out and won't\n            // be GC'd.\n            new ByteArrayByteIterator(column.value()));\n        \n        if (debug) {\n          System.out.println(\n              \"Result for field: \" + Bytes.pretty(column.qualifier())\n                  + \" is: \" + Bytes.pretty(column.value()));\n        }\n      }\n      return Status.OK;\n    } catch (InterruptedException e) {\n      System.err.println(\"Thread interrupted\");\n      Thread.currentThread().interrupt();\n    } catch (Exception e) {\n      System.err.println(\"Failure reading from row with key \" + key + \n          \": \" + e.getMessage());\n      return Status.ERROR;\n    }\n    return Status.ERROR;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    setTable(table);\n    \n    final Scanner scanner = client.newScanner(lastTableBytes);\n    scanner.setFamily(columnFamilyBytes);\n    scanner.setStartKey(startkey.getBytes(UTF8_CHARSET));\n    // No end key... *sniff*\n    if (fields != null) {\n      scanner.setQualifiers(getQualifierList(fields));\n    }\n    \n    // no filters? *sniff*\n    ArrayList<ArrayList<KeyValue>> rows = null;\n    try {\n      int numResults = 0;\n      while ((rows = scanner.nextRows().join(joinTimeout)) != null) {\n        for (final ArrayList<KeyValue> row : rows) {\n          final HashMap<String, ByteIterator> rowResult =\n              new HashMap<String, ByteIterator>(row.size());\n          for (final KeyValue column : row) {\n            rowResult.put(new String(column.qualifier()), \n                // TODO - do we need to clone this array? 
YCSB may keep it in memory\n                // for a while which would mean the entire KV would hang out and won't\n                // be GC'd.\n                new ByteArrayByteIterator(column.value()));\n            if (debug) {\n              System.out.println(\"Got scan result for key: \" + \n                  Bytes.pretty(column.key()));\n            }\n          }\n          result.add(rowResult);\n          numResults++;\n\n          if (numResults >= recordcount) {// if hit recordcount, bail out\n            break;\n          }\n        }\n      }\n      scanner.close().join(joinTimeout);\n      return Status.OK;\n    } catch (InterruptedException e) {\n      System.err.println(\"Thread interrupted\");\n      Thread.currentThread().interrupt();\n    } catch (Exception e) {\n      System.err.println(\"Failure reading from row with key \" + startkey + \n          \": \" + e.getMessage());\n      return Status.ERROR;\n    }\n    \n    return Status.ERROR;\n  }\n\n  @Override\n  public Status update(String table, String key,\n                       Map<String, ByteIterator> values) {\n    setTable(table);\n    \n    if (debug) {\n      System.out.println(\"Setting up put for key: \" + key);\n    }\n    \n    final byte[][] qualifiers = new byte[values.size()][];\n    final byte[][] byteValues = new byte[values.size()][];\n    \n    int idx = 0;\n    for (final Entry<String, ByteIterator> entry : values.entrySet()) {\n      qualifiers[idx] = entry.getKey().getBytes();\n      byteValues[idx++] = entry.getValue().toArray();\n      if (debug) {\n        System.out.println(\"Adding field/value \" + entry.getKey() + \"/\"\n            + Bytes.pretty(entry.getValue().toArray()) + \" to put request\");\n      }\n    }\n    \n    final PutRequest put = new PutRequest(lastTableBytes, key.getBytes(), \n        columnFamilyBytes, qualifiers, byteValues);\n    if (!durability) {\n      put.setDurable(false);\n    }\n    if (!clientSideBuffering) {\n      
put.setBufferable(false);\n      try {\n        client.put(put).join(joinTimeout);\n      } catch (InterruptedException e) {\n        System.err.println(\"Thread interrupted\");\n        Thread.currentThread().interrupt();\n      } catch (Exception e) {\n        System.err.println(\"Failure writing to row with key \" + key + \n            \": \" + e.getMessage());\n        return Status.ERROR;\n      }\n    } else {\n      // hooray! Asynchronous write. But without a callback and an async\n      // YCSB call we don't know whether it succeeded or not\n      client.put(put);\n    }\n    \n    return Status.OK;\n  }\n\n  @Override\n  public Status insert(String table, String key,\n                       Map<String, ByteIterator> values) {\n    return update(table, key, values);\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    setTable(table);\n    \n    if (debug) {\n      System.out.println(\"Doing delete for key: \" + key);\n    }\n    \n    final DeleteRequest delete = new DeleteRequest(\n        lastTableBytes, key.getBytes(), columnFamilyBytes);\n    if (!durability) {\n      delete.setDurable(false);\n    }\n    if (!clientSideBuffering) {\n      delete.setBufferable(false);\n      try {\n        client.delete(delete).join(joinTimeout);\n      } catch (InterruptedException e) {\n        System.err.println(\"Thread interrupted\");\n        Thread.currentThread().interrupt();\n      } catch (Exception e) {\n        System.err.println(\"Failure deleting row with key \" + key + \n            \": \" + e.getMessage());\n        return Status.ERROR;\n      }\n    } else {\n      // hooray! Asynchronous write. But without a callback and an async\n      // YCSB call we don't know whether it succeeded or not\n      client.delete(delete);\n    }\n    return Status.OK;\n  }\n\n  /**\n   * Little helper to set the table byte array. If it's different than the last\n   * table we reset the byte array. 
Otherwise we just use the existing array.\n   * @param table The table we're operating against\n   */\n  private void setTable(final String table) {\n    if (!lastTable.equals(table)) {\n      lastTable = table;\n      lastTableBytes = table.getBytes();\n    }\n  }\n  \n  /**\n   * Little helper to build a qualifier byte array from a field set.\n   * @param fields The fields to fetch.\n   * @return The column qualifier byte arrays.\n   */\n  private byte[][] getQualifierList(final Set<String> fields) {\n    final byte[][] qualifiers = new byte[fields.size()][];\n    int idx = 0;\n    for (final String field : fields) {\n      qualifiers[idx++] = field.getBytes();\n    }\n    return qualifiers;\n  }\n}"
  },
  {
    "path": "asynchbase/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n * \n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for HBase using the AsyncHBase client.\n */\npackage com.yahoo.ycsb.db;\n"
  },
  {
    "path": "asynchbase/src/test/java/com/google/common/base/Stopwatch.java",
    "content": "/*\n * Copyright (C) 2008 The Guava Authors\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage com.google.common.base;\n\nimport static com.google.common.base.Preconditions.checkNotNull;\nimport static com.google.common.base.Preconditions.checkState;\nimport static java.util.concurrent.TimeUnit.MICROSECONDS;\nimport static java.util.concurrent.TimeUnit.MILLISECONDS;\nimport static java.util.concurrent.TimeUnit.NANOSECONDS;\nimport static java.util.concurrent.TimeUnit.SECONDS;\n\nimport com.google.common.annotations.Beta;\nimport com.google.common.annotations.GwtCompatible;\nimport com.google.common.annotations.GwtIncompatible;\n\nimport java.util.concurrent.TimeUnit;\n\n/**\n * An object that measures elapsed time in nanoseconds. It is useful to measure\n * elapsed time using this class instead of direct calls to {@link\n * System#nanoTime} for a few reasons:\n *\n * <ul>\n * <li>An alternate time source can be substituted, for testing or performance\n *     reasons.\n * <li>As documented by {@code nanoTime}, the value returned has no absolute\n *     meaning, and can only be interpreted as relative to another timestamp\n *     returned by {@code nanoTime} at a different time. 
{@code Stopwatch} is a\n *     more effective abstraction because it exposes only these relative values,\n *     not the absolute ones.\n * </ul>\n *\n * <p>Basic usage:\n * <pre>\n *   Stopwatch stopwatch = Stopwatch.{@link #createStarted createStarted}();\n *   doSomething();\n *   stopwatch.{@link #stop stop}(); // optional\n *\n *   long millis = stopwatch.elapsed(MILLISECONDS);\n *\n *   log.info(\"that took: \" + stopwatch); // formatted string like \"12.3 ms\"\n * </pre>\n *\n * <p>Stopwatch methods are not idempotent; it is an error to start or stop a\n * stopwatch that is already in the desired state.\n *\n * <p>When testing code that uses this class, use the {@linkplain\n * #Stopwatch(Ticker) alternate constructor} to supply a fake or mock ticker.\n * <!-- TODO(kevinb): restore the \"such as\" --> This allows you to\n * simulate any valid behavior of the stopwatch.\n *\n * <p><b>Note:</b> This class is not thread-safe.\n *\n * @author Kevin Bourrillion\n * @since 10.0\n */\n@Beta\n@GwtCompatible(emulated = true)\npublic final class Stopwatch {\n  private final Ticker ticker;\n  private boolean isRunning;\n  private long elapsedNanos;\n  private long startTick;\n\n  /**\n   * Creates (but does not start) a new stopwatch using {@link System#nanoTime}\n   * as its time source.\n   *\n   * @since 15.0\n   */\n  public static Stopwatch createUnstarted() {\n    return new Stopwatch();\n  }\n\n  /**\n   * Creates (but does not start) a new stopwatch, using the specified time\n   * source.\n   *\n   * @since 15.0\n   */\n  public static Stopwatch createUnstarted(Ticker ticker) {\n    return new Stopwatch(ticker);\n  }\n\n  /**\n   * Creates (and starts) a new stopwatch using {@link System#nanoTime}\n   * as its time source.\n   *\n   * @since 15.0\n   */\n  public static Stopwatch createStarted() {\n    return new Stopwatch().start();\n  }\n\n  /**\n   * Creates (and starts) a new stopwatch, using the specified time\n   * source.\n   *\n   * @since 15.0\n   */\n  
public static Stopwatch createStarted(Ticker ticker) {\n    return new Stopwatch(ticker).start();\n  }\n\n  /**\n   * Creates (but does not start) a new stopwatch using {@link System#nanoTime}\n   * as its time source.\n   *\n   * @deprecated Use {@link Stopwatch#createUnstarted()} instead.\n   */\n  @Deprecated\n  public Stopwatch() {\n    this(Ticker.systemTicker());\n  }\n\n  /**\n   * Creates (but does not start) a new stopwatch, using the specified time\n   * source.\n   *\n   * @deprecated Use {@link Stopwatch#createUnstarted(Ticker)} instead.\n   */\n  @Deprecated\n  public Stopwatch(Ticker ticker) {\n    this.ticker = checkNotNull(ticker, \"ticker\");\n  }\n\n  /**\n   * Returns {@code true} if {@link #start()} has been called on this stopwatch,\n   * and {@link #stop()} has not been called since the last call to {@code\n   * start()}.\n   */\n  public boolean isRunning() {\n    return isRunning;\n  }\n\n  /**\n   * Starts the stopwatch.\n   *\n   * @return this {@code Stopwatch} instance\n   * @throws IllegalStateException if the stopwatch is already running.\n   */\n  public Stopwatch start() {\n    checkState(!isRunning, \"This stopwatch is already running.\");\n    isRunning = true;\n    startTick = ticker.read();\n    return this;\n  }\n\n  /**\n   * Stops the stopwatch. 
Future reads will return the fixed duration that had\n   * elapsed up to this point.\n   *\n   * @return this {@code Stopwatch} instance\n   * @throws IllegalStateException if the stopwatch is already stopped.\n   */\n  public Stopwatch stop() {\n    long tick = ticker.read();\n    checkState(isRunning, \"This stopwatch is already stopped.\");\n    isRunning = false;\n    elapsedNanos += tick - startTick;\n    return this;\n  }\n\n  /**\n   * Sets the elapsed time for this stopwatch to zero,\n   * and places it in a stopped state.\n   *\n   * @return this {@code Stopwatch} instance\n   */\n  public Stopwatch reset() {\n    elapsedNanos = 0;\n    isRunning = false;\n    return this;\n  }\n\n  private long elapsedNanos() {\n    return isRunning ? ticker.read() - startTick + elapsedNanos : elapsedNanos;\n  }\n\n  /**\n   * Returns the current elapsed time shown on this stopwatch, expressed\n   * in the desired time unit, with any fraction rounded down.\n   *\n   * <p>Note that the overhead of measurement can be more than a microsecond, so\n   * it is generally not useful to specify {@link TimeUnit#NANOSECONDS}\n   * precision here.\n   *\n   * @since 14.0 (since 10.0 as {@code elapsedTime()})\n   */\n  public long elapsed(TimeUnit desiredUnit) {\n    return desiredUnit.convert(elapsedNanos(), NANOSECONDS);\n  }\n\n  /**\n   * Returns the current elapsed time shown on this stopwatch, expressed\n   * in the desired time unit, with any fraction rounded down.\n   *\n   * <p>Note that the overhead of measurement can be more than a microsecond, so\n   * it is generally not useful to specify {@link TimeUnit#NANOSECONDS}\n   * precision here.\n   *\n   * @deprecated Use {@link Stopwatch#elapsed(TimeUnit)} instead. 
This method is\n   *     scheduled to be removed in Guava release 16.0.\n   */\n  @Deprecated\n  public long elapsedTime(TimeUnit desiredUnit) {\n    return elapsed(desiredUnit);\n  }\n\n  /**\n   * Returns the current elapsed time shown on this stopwatch, expressed\n   * in milliseconds, with any fraction rounded down. This is identical to\n   * {@code elapsed(TimeUnit.MILLISECONDS)}.\n   *\n   * @deprecated Use {@code stopwatch.elapsed(MILLISECONDS)} instead. This\n   *     method is scheduled to be removed in Guava release 16.0.\n   */\n  @Deprecated\n  public long elapsedMillis() {\n    return elapsed(MILLISECONDS);\n  }\n\n  /**\n   * Returns a string representation of the current elapsed time.\n   */\n  @GwtIncompatible(\"String.format()\")\n  @Override public String toString() {\n    long nanos = elapsedNanos();\n\n    TimeUnit unit = chooseUnit(nanos);\n    double value = (double) nanos / NANOSECONDS.convert(1, unit);\n\n    // Too bad this functionality is not exposed as a regular method call\n    return String.format(\"%.4g %s\", value, abbreviate(unit));\n  }\n\n  private static TimeUnit chooseUnit(long nanos) {\n    if (SECONDS.convert(nanos, NANOSECONDS) > 0) {\n      return SECONDS;\n    }\n    if (MILLISECONDS.convert(nanos, NANOSECONDS) > 0) {\n      return MILLISECONDS;\n    }\n    if (MICROSECONDS.convert(nanos, NANOSECONDS) > 0) {\n      return MICROSECONDS;\n    }\n    return NANOSECONDS;\n  }\n\n  private static String abbreviate(TimeUnit unit) {\n    switch (unit) {\n      case NANOSECONDS:\n        return \"ns\";\n      case MICROSECONDS:\n        return \"\\u03bcs\"; // μs\n      case MILLISECONDS:\n        return \"ms\";\n      case SECONDS:\n        return \"s\";\n      default:\n        throw new AssertionError();\n    }\n  }\n}"
  },
  {
    "path": "asynchbase/src/test/java/com/google/common/io/Closeables.java",
    "content": "/*\n * Copyright (C) 2007 The Guava Authors\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage com.google.common.io;\n\nimport com.google.common.annotations.Beta;\nimport com.google.common.annotations.VisibleForTesting;\n\nimport java.io.Closeable;\nimport java.io.IOException;\nimport java.util.logging.Level;\nimport java.util.logging.Logger;\n\nimport javax.annotation.Nullable;\n\n/**\n * Utility methods for working with {@link Closeable} objects.\n *\n * @author Michael Lancaster\n * @since 1.0\n */\n@Beta\npublic final class Closeables {\n  @VisibleForTesting static final Logger logger\n      = Logger.getLogger(Closeables.class.getName());\n\n  private Closeables() {}\n\n  /**\n   * Closes a {@link Closeable}, with control over whether an\n   * {@code IOException} may be thrown. This is primarily useful in a\n   * finally block, where a thrown exception needs to be logged but not\n   * propagated (otherwise the original exception will be lost).\n   *\n   * <p>If {@code swallowIOException} is true then we never throw\n   * {@code IOException} but merely log it.\n   *\n   * <p>Example:\n   *\n   * <p><pre>public void useStreamNicely() throws IOException {\n   * SomeStream stream = new SomeStream(\"foo\");\n   * boolean threw = true;\n   * try {\n   *   // Some code which does something with the Stream. 
May throw a\n   *   // Throwable.\n   *   threw = false; // No throwable thrown.\n   * } finally {\n   *   // Close the stream.\n   *   // If an exception occurs, only rethrow it if (threw==false).\n   *   Closeables.close(stream, threw);\n   * }\n   * </pre>\n   *\n   * @param closeable the {@code Closeable} object to be closed, or null,\n   *     in which case this method does nothing\n   * @param swallowIOException if true, don't propagate IO exceptions\n   *     thrown by the {@code close} methods\n   * @throws IOException if {@code swallowIOException} is false and\n   *     {@code close} throws an {@code IOException}.\n   */\n  public static void close(@Nullable Closeable closeable,\n      boolean swallowIOException) throws IOException {\n    if (closeable == null) {\n      return;\n    }\n    try {\n      closeable.close();\n    } catch (IOException e) {\n      if (swallowIOException) {\n        logger.log(Level.WARNING,\n            \"IOException thrown while closing Closeable.\", e);\n      } else {\n        throw e;\n      }\n    }\n  }\n\n  /**\n   * Equivalent to calling {@code close(closeable, true)}, but with no\n   * IOException in the signature.\n   * @param closeable the {@code Closeable} object to be closed, or null, in\n   *      which case this method does nothing\n   */\n  public static void closeQuietly(@Nullable Closeable closeable) {\n    try {\n      close(closeable, true);\n    } catch (IOException e) {\n      logger.log(Level.SEVERE, \"IOException should not have been thrown.\", e);\n    }\n  }\n}"
  },
  {
    "path": "asynchbase/src/test/java/com/google/common/io/LimitInputStream.java",
    "content": "/*\n * Copyright (C) 2007 The Guava Authors\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage com.google.common.io;\n\nimport com.google.common.annotations.Beta;\nimport com.google.common.base.Preconditions;\n\nimport java.io.FilterInputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\n\n/**\n * An InputStream that limits the number of bytes which can be read.\n *\n * @author Charles Fry\n * @since 1.0\n */\n@Beta\npublic final class LimitInputStream extends FilterInputStream {\n\n  private long left;\n  private long mark = -1;\n\n  /**\n   * Wraps another input stream, limiting the number of bytes which can be read.\n   *\n   * @param in the input stream to be wrapped\n   * @param limit the maximum number of bytes to be read\n   */\n  public LimitInputStream(InputStream in, long limit) {\n    super(in);\n    Preconditions.checkNotNull(in);\n    Preconditions.checkArgument(limit >= 0, \"limit must be non-negative\");\n    left = limit;\n  }\n\n  @Override public int available() throws IOException {\n    return (int) Math.min(in.available(), left);\n  }\n\n  @Override public synchronized void mark(int readlimit) {\n    in.mark(readlimit);\n    mark = left;\n    // it's okay to mark even if mark isn't supported, as reset won't work\n  }\n\n  @Override public int read() throws IOException {\n    if (left == 0) {\n      return -1;\n    }\n\n    int result = in.read();\n    if (result != -1) {\n      --left;\n   
 }\n    return result;\n  }\n\n  @Override public int read(byte[] b, int off, int len) throws IOException {\n    if (left == 0) {\n      return -1;\n    }\n\n    len = (int) Math.min(len, left);\n    int result = in.read(b, off, len);\n    if (result != -1) {\n      left -= result;\n    }\n    return result;\n  }\n\n  @Override public synchronized void reset() throws IOException {\n    if (!in.markSupported()) {\n      throw new IOException(\"Mark not supported\");\n    }\n    if (mark == -1) {\n      throw new IOException(\"Mark not set\");\n    }\n\n    in.reset();\n    left = mark;\n  }\n\n  @Override public long skip(long n) throws IOException {\n    n = Math.min(n, left);\n    long skipped = in.skip(n);\n    left -= skipped;\n    return skipped;\n  }\n}"
  },
  {
    "path": "asynchbase/src/test/java/com/yahoo/ycsb/db/AsyncHBaseTest.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n * \n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY;\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY_DEFAULT;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\nimport static org.junit.Assume.assumeTrue;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport com.yahoo.ycsb.measurements.Measurements;\nimport com.yahoo.ycsb.workloads.CoreWorkload;\n\nimport org.apache.hadoop.hbase.HBaseTestingUtility;\nimport org.apache.hadoop.hbase.TableName;\nimport org.apache.hadoop.hbase.client.Get;\nimport org.apache.hadoop.hbase.client.Put;\nimport org.apache.hadoop.hbase.client.Result;\nimport org.apache.hadoop.hbase.client.Table;\nimport org.apache.hadoop.hbase.util.Bytes;\nimport org.junit.After;\nimport org.junit.AfterClass;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Ignore;\nimport org.junit.Test;\n\nimport java.nio.ByteBuffer;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Properties;\nimport java.util.Vector;\n\n/**\n * Integration tests for the YCSB 
AsyncHBase client, using an HBase minicluster.\n * These are the same as those for the hbase10 client.\n */\npublic class AsyncHBaseTest {\n\n  private final static String COLUMN_FAMILY = \"cf\";\n\n  private static HBaseTestingUtility testingUtil;\n  private AsyncHBaseClient client;\n  private Table table = null;\n  private String tableName;\n\n  private static boolean isWindows() {\n    final String os = System.getProperty(\"os.name\");\n    return os.startsWith(\"Windows\");\n  }\n\n  /**\n   * Creates a mini-cluster for use in these tests.\n   *\n   * This is a heavy-weight operation, so invoked only once for the test class.\n   */\n  @BeforeClass\n  public static void setUpClass() throws Exception {\n    // Minicluster setup fails on Windows with an UnsatisfiedLinkError.\n    // Skip if windows.\n    assumeTrue(!isWindows());\n    testingUtil = HBaseTestingUtility.createLocalHTU();\n    testingUtil.startMiniCluster();\n  }\n\n  /**\n   * Tears down mini-cluster.\n   */\n  @AfterClass\n  public static void tearDownClass() throws Exception {\n    if (testingUtil != null) {\n      testingUtil.shutdownMiniCluster();\n    }\n  }\n\n  /**\n   * Sets up the mini-cluster for testing.\n   *\n   * We re-create the table for each test.\n   */\n  @Before\n  public void setUp() throws Exception {\n    Properties p = new Properties();\n    p.setProperty(\"columnfamily\", COLUMN_FAMILY);\n\n    Measurements.setProperties(p);\n    final CoreWorkload workload = new CoreWorkload();\n    workload.init(p);\n\n    tableName = p.getProperty(TABLENAME_PROPERTY, TABLENAME_PROPERTY_DEFAULT);\n    table = testingUtil.createTable(TableName.valueOf(tableName), Bytes.toBytes(COLUMN_FAMILY));\n\n    final String zkQuorum = \"127.0.0.1:\" + testingUtil.getZkCluster().getClientPort();\n    p.setProperty(\"hbase.zookeeper.quorum\", zkQuorum);\n    client = new AsyncHBaseClient();\n    client.setProperties(p);\n    client.init();\n  }\n\n  @After\n  public void tearDown() throws Exception {\n  
  table.close();\n    testingUtil.deleteTable(tableName);\n  }\n\n  @Test\n  public void testRead() throws Exception {\n    final String rowKey = \"row1\";\n    final Put p = new Put(Bytes.toBytes(rowKey));\n    p.addColumn(Bytes.toBytes(COLUMN_FAMILY),\n        Bytes.toBytes(\"column1\"), Bytes.toBytes(\"value1\"));\n    p.addColumn(Bytes.toBytes(COLUMN_FAMILY),\n        Bytes.toBytes(\"column2\"), Bytes.toBytes(\"value2\"));\n    table.put(p);\n\n    final HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\n    final Status status = client.read(tableName, rowKey, null, result);\n    assertEquals(Status.OK, status);\n    assertEquals(2, result.size());\n    assertEquals(\"value1\", result.get(\"column1\").toString());\n    assertEquals(\"value2\", result.get(\"column2\").toString());\n  }\n\n  @Test\n  public void testReadMissingRow() throws Exception {\n    final HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\n    final Status status = client.read(tableName, \"Missing row\", null, result);\n    assertEquals(Status.NOT_FOUND, status);\n    assertEquals(0, result.size());\n  }\n\n  @Test\n  public void testScan() throws Exception {\n    // Fill with data\n    final String colStr = \"row_number\";\n    final byte[] col = Bytes.toBytes(colStr);\n    final int n = 10;\n    final List<Put> puts = new ArrayList<Put>(n);\n    for(int i = 0; i < n; i++) {\n      final byte[] key = Bytes.toBytes(String.format(\"%05d\", i));\n      final byte[] value = java.nio.ByteBuffer.allocate(4).putInt(i).array();\n      final Put p = new Put(key);\n      p.addColumn(Bytes.toBytes(COLUMN_FAMILY), col, value);\n      puts.add(p);\n    }\n    table.put(puts);\n\n    // Test\n    final Vector<HashMap<String, ByteIterator>> result =\n        new Vector<HashMap<String, ByteIterator>>();\n\n    // Scan 5 records, skipping the first\n    client.scan(tableName, \"00001\", 5, null, result);\n\n    assertEquals(5, result.size());\n    for(int 
i = 0; i < 5; i++) {\n      final HashMap<String, ByteIterator> row = result.get(i);\n      assertEquals(1, row.size());\n      assertTrue(row.containsKey(colStr));\n      final byte[] bytes = row.get(colStr).toArray();\n      final ByteBuffer buf = ByteBuffer.wrap(bytes);\n      final int rowNum = buf.getInt();\n      assertEquals(i + 1, rowNum);\n    }\n  }\n\n  @Test\n  public void testUpdate() throws Exception{\n    final String key = \"key\";\n    final HashMap<String, String> input = new HashMap<String, String>();\n    input.put(\"column1\", \"value1\");\n    input.put(\"column2\", \"value2\");\n    final Status status = client.insert(tableName, key, StringByteIterator.getByteIteratorMap(input));\n    assertEquals(Status.OK, status);\n\n    // Verify result\n    final Get get = new Get(Bytes.toBytes(key));\n    final Result result = this.table.get(get);\n    assertFalse(result.isEmpty());\n    assertEquals(2, result.size());\n    for(final java.util.Map.Entry<String, String> entry : input.entrySet()) {\n      assertEquals(entry.getValue(),\n          new String(result.getValue(Bytes.toBytes(COLUMN_FAMILY),\n            Bytes.toBytes(entry.getKey()))));\n    }\n  }\n\n  @Test\n  @Ignore(\"Not yet implemented\")\n  public void testDelete() {\n    fail(\"Not yet implemented\");\n  }\n}\n\n"
  },
  {
    "path": "asynchbase/src/test/resources/hbase-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<configuration>\n  <property>\n    <name>hbase.master.info.port</name>\n    <value>-1</value>\n    <description>The port for the hbase master web UI\n    Set to -1 if you do not want the info server to run.\n    </description>\n  </property>\n  <property>\n    <name>hbase.regionserver.info.port</name>\n    <value>-1</value>\n    <description>The port for the hbase regionserver web UI\n    Set to -1 if you do not want the info server to run.\n    </description>\n  </property>\n</configuration>\n"
  },
  {
    "path": "asynchbase/src/test/resources/log4j.properties",
    "content": "#\n# Copyright (c) 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n\n# Root logger option\nlog4j.rootLogger=WARN, stderr\n\nlog4j.appender.stderr=org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.target=System.err\nlog4j.appender.stderr.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.conversionPattern=%d{yyyy/MM/dd HH:mm:ss} %-5p %c %x - %m%n\n\n# Suppress messages from ZKTableStateManager: Creates a large number of table\n# state change messages.\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZKTableStateManager=ERROR\n"
  },
  {
    "path": "azuredocumentdb/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent/</relativePath>\n  </parent>\n\n  <artifactId>azuredocumentdb-binding</artifactId>\n  <name>Azure DocumentDB Binding</name>\n  <dependencies>\n    <dependency>\n      <groupId>com.microsoft.azure</groupId>\n      <artifactId>azure-documentdb</artifactId>\n      <version>${azuredocumentdb.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "azuredocumentdb/src/main/java/com/yahoo/ycsb/db/azuredocumentdb/AzureDocumentDBClient.java",
    "content": "/*\n * Copyright 2016 YCSB Contributors. All Rights Reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\").\n * You may not use this file except in compliance with the License.\n *\n * This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n * CONDITIONS OF ANY KIND, either express or implied. See the License\n * for the specific language governing permissions and limitations under\n * the License.\n */\npackage com.yahoo.ycsb.db.azuredocumentdb;\n\nimport com.yahoo.ycsb.*;\n\nimport com.microsoft.azure.documentdb.ConnectionPolicy;\nimport com.microsoft.azure.documentdb.ConsistencyLevel;\nimport com.microsoft.azure.documentdb.Database;\nimport com.microsoft.azure.documentdb.Document;\nimport com.microsoft.azure.documentdb.DocumentClient;\nimport com.microsoft.azure.documentdb.DocumentClientException;\nimport com.microsoft.azure.documentdb.DocumentCollection;\nimport com.microsoft.azure.documentdb.FeedOptions;\n\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.List;\n\n/**\n * Azure DocumentDB client binding.\n */\npublic class AzureDocumentDBClient extends DB {\n  private static String host;\n  private static String masterKey;\n  private static String databaseId;\n  private static Database database;\n  private static DocumentClient documentClient;\n  private static DocumentCollection collection;\n  private static FeedOptions feedOptions;\n\n  @Override\n  public void init() throws DBException {\n    host = getProperties().getProperty(\"documentdb.host\", null);\n    masterKey = getProperties().getProperty(\"documentdb.masterKey\", null);\n\n    if (host == null) {\n      System.err.println(\"ERROR: 'documentdb.host' must be set!\");\n      System.exit(1);\n    }\n\n    if (masterKey == null) {\n      System.err.println(\"ERROR: 'documentdb.masterKey' must be set!\");\n      System.exit(1);\n    }\n\n    
databaseId = getProperties().getProperty(\"documentdb.databaseId\", \"ycsb\");\n    String collectionId =\n        getProperties().getProperty(\"documentdb.collectionId\", \"usertable\");\n    documentClient =\n        new DocumentClient(host, masterKey, ConnectionPolicy.GetDefault(),\n                           ConsistencyLevel.Session);\n    try {\n      // Initialize test database and collection.\n      collection = getCollection(collectionId);\n    } catch (DocumentClientException e) {\n      throw new DBException(\"Initialize collection failed\", e);\n    }\n\n    feedOptions = new FeedOptions();\n    feedOptions.setEmitVerboseTracesInQuery(false);\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n                     Map<String, ByteIterator> result) {\n    Document record = getDocumentById(table, key);\n\n    if (record != null) {\n      Set<String> fieldsToReturn =\n          (fields == null ? record.getHashMap().keySet() : fields);\n\n      for (String field : fieldsToReturn) {\n        if (field.startsWith(\"_\")) {\n          continue;\n        }\n        result.put(field, new StringByteIterator(record.getString(field)));\n      }\n      return Status.OK;\n    }\n    // Unable to find the specified document.\n    return Status.ERROR;\n  }\n\n  @Override\n  public Status update(String table, String key,\n                       Map<String, ByteIterator> values) {\n    Document record = getDocumentById(table, key);\n\n    if (record == null) {\n      return Status.ERROR;\n    }\n\n    // Update each field.\n    for (Entry<String, ByteIterator> val : values.entrySet()) {\n      record.set(val.getKey(), val.getValue().toString());\n    }\n\n    // Replace the document.\n    try {\n      documentClient.replaceDocument(record, null);\n    } catch (DocumentClientException e) {\n      e.printStackTrace(System.err);\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  @Override\n  public Status insert(String 
table, String key,\n                       Map<String, ByteIterator> values) {\n    Document record = new Document();\n\n    record.set(\"id\", key);\n\n    for (Entry<String, ByteIterator> val : values.entrySet()) {\n      record.set(val.getKey(), val.getValue().toString());\n    }\n\n    try {\n      documentClient.createDocument(collection.getSelfLink(), record, null,\n                                    false);\n    } catch (DocumentClientException e) {\n      e.printStackTrace(System.err);\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    Document record = getDocumentById(table, key);\n\n    if (record == null) {\n      // Nothing to delete: the document could not be found.\n      return Status.ERROR;\n    }\n\n    try {\n      // Delete the document by self link.\n      documentClient.deleteDocument(record.getSelfLink(), null);\n    } catch (DocumentClientException e) {\n      e.printStackTrace(System.err);\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n                     Set<String> fields,\n                     Vector<HashMap<String, ByteIterator>> result) {\n    // TODO: Implement Scan as query on primary key.\n    return Status.NOT_IMPLEMENTED;\n  }\n\n  private Database getDatabase() {\n    if (database == null) {\n      // Get the database if it exists\n      List<Database> databaseList =\n          documentClient\n              .queryDatabases(\n                  \"SELECT * FROM root r WHERE r.id='\" + databaseId + \"'\", null)\n              .getQueryIterable()\n              .toList();\n\n      if (databaseList.size() > 0) {\n        // Cache the database object so we won't have to query for it\n        // later to retrieve the selfLink.\n        database = databaseList.get(0);\n      } else {\n        // Create the database if it doesn't exist.\n        try {\n          Database databaseDefinition = new Database();\n          databaseDefinition.setId(databaseId);\n\n          database = 
documentClient.createDatabase(databaseDefinition, null)\n                         .getResource();\n        } catch (DocumentClientException e) {\n          // TODO: Something has gone terribly wrong - the app wasn't\n          // able to query or create the database.\n          // Verify your connection, endpoint, and key.\n          e.printStackTrace(System.err);\n        }\n      }\n    }\n\n    return database;\n  }\n\n  private DocumentCollection getCollection(String collectionId)\n      throws DocumentClientException {\n    if (collection == null) {\n      // Get the collection if it exists.\n      List<DocumentCollection> collectionList =\n          documentClient\n              .queryCollections(getDatabase().getSelfLink(),\n                                \"SELECT * FROM root r WHERE r.id='\" +\n                                    collectionId + \"'\",\n                                null)\n              .getQueryIterable()\n              .toList();\n\n      if (collectionList.size() > 0) {\n        // Cache the collection object so we won't have to query for it\n        // later to retrieve the selfLink.\n        collection = collectionList.get(0);\n      } else {\n        // Create the collection if it doesn't exist.\n        try {\n          DocumentCollection collectionDefinition = new DocumentCollection();\n          collectionDefinition.setId(collectionId);\n\n          collection = documentClient\n                           .createCollection(getDatabase().getSelfLink(),\n                                             collectionDefinition, null)\n                           .getResource();\n        } catch (DocumentClientException e) {\n          // TODO: Something has gone terribly wrong - the app wasn't\n          // able to query or create the collection.\n          // Verify your connection, endpoint, and key.\n          e.printStackTrace(System.err);\n          throw e;\n        }\n      }\n    }\n\n    return collection;\n  }\n\n  private 
Document getDocumentById(String collectionId, String id) {\n    if (collection == null) {\n      return null;\n    }\n    // Retrieve the document using the DocumentClient.\n    List<Document> documentList =\n        documentClient\n            .queryDocuments(collection.getSelfLink(),\n                            \"SELECT * FROM root r WHERE r.id='\" + id + \"'\",\n                            feedOptions)\n            .getQueryIterable()\n            .toList();\n\n    if (documentList.size() > 0) {\n      return documentList.get(0);\n    }\n    return null;\n  }\n}\n"
  },
  {
    "path": "azuredocumentdb/src/main/java/com/yahoo/ycsb/db/azuredocumentdb/package-info.java",
    "content": "/*\n * Copyright 2016 YCSB Contributors. All Rights Reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for Azure DocumentDB.\n */\npackage com.yahoo.ycsb.db.azuredocumentdb;\n\n"
  },
  {
    "path": "azuretablestorage/README.md",
    "content": "<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on Azure table storage.\n\n### 1. Create an Azure Storage account\n\nSee https://azure.microsoft.com/en-us/documentation/articles/storage-create-storage-account/#create-a-storage-account\n\n### 2. Install Java and Maven\n\n### 3. Set Up YCSB\n\nGit clone YCSB and compile:\n\n    git clone http://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:azuretablestorage-binding -am clean package\n\n### 4. Provide Azure Storage parameters\n\nSet the account name and access key:\n\n- `azure.account`\n- `azure.key`\n\nAlternatively, you can pass them on the command line, e.g.:\n\n    ./bin/ycsb load azuretablestorage -s -P workloads/workloada -p azure.account=YourAccountName -p azure.key=YourAccessKey > outputLoad.txt\n\n### 5. Load data and run tests\n\nLoad the data:\n\n    ./bin/ycsb load azuretablestorage -s -P workloads/workloada -p azure.account=YourAccountName -p azure.key=YourAccessKey > outputLoad.txt\n\nRun the workload test:\n\n    ./bin/ycsb run azuretablestorage -s -P workloads/workloada -p azure.account=YourAccountName -p azure.key=YourAccessKey > outputRun.txt\n\n### 6. Optional Azure Storage parameters\n\n- `azure.batchsize`\n\tMust be between 1 and 100. Records are inserted in batches when batchsize > 1.\n- `azure.protocol`\n\thttps (default) or http.\n- `azure.table`\n\tThe name of the table ('usertable' by default).\n- `azure.partitionkey`\n\tThe partition key ('Test' by default).\n- `azure.endpoint`\n\tFor Azure Stack WOSS.\n\nE.g.:\n\n    ./bin/ycsb load azuretablestorage -s -P workloads/workloada -p azure.account=YourAccountName -p azure.key=YourAccessKey -p azure.batchsize=100 -p azure.protocol=http\n"
  },
  {
    "path": "azuretablestorage/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" \n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" \n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n    <modelVersion>4.0.0</modelVersion>\n    <parent>\n        <groupId>com.yahoo.ycsb</groupId>\n        <artifactId>binding-parent</artifactId>\n        <version>0.14.0-SNAPSHOT</version>\n        <relativePath>../binding-parent</relativePath>\n    </parent>\n\n    <artifactId>azuretablestorage-binding</artifactId>\n    <name>Azure table storage Binding</name>\n    <packaging>jar</packaging>\n\n    <dependencies>\n        <dependency>\n            <groupId>com.yahoo.ycsb</groupId>\n            <artifactId>core</artifactId>\n            <version>${project.version}</version>\n            <scope>provided</scope>\n        </dependency>\n        <dependency>\n    \t\t<groupId>com.microsoft.azure</groupId>\n    \t\t<artifactId>azure-storage</artifactId>\n    \t\t<version>${azurestorage.version}</version>\n\t\t</dependency>\n    </dependencies>\n</project>\n"
  },
  {
    "path": "azuretablestorage/src/main/java/com/yahoo/ycsb/db/azuretablestorage/AzureClient.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.azuretablestorage;\n\nimport com.microsoft.azure.storage.CloudStorageAccount;\nimport com.microsoft.azure.storage.table.CloudTable;\nimport com.microsoft.azure.storage.table.CloudTableClient;\nimport com.microsoft.azure.storage.table.DynamicTableEntity;\nimport com.microsoft.azure.storage.table.EntityProperty;\nimport com.microsoft.azure.storage.table.EntityResolver;\nimport com.microsoft.azure.storage.table.TableBatchOperation;\nimport com.microsoft.azure.storage.table.TableOperation;\nimport com.microsoft.azure.storage.table.TableQuery;\nimport com.microsoft.azure.storage.table.TableServiceEntity;\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\nimport java.util.Date;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\n\n/**\n * YCSB binding for <a href=\"https://azure.microsoft.com/en-us/services/storage/\">Azure</a>.\n * See {@code azure/README.md} for details.\n */\npublic class AzureClient extends DB {\n\n  public static final String PROTOCOL = \"azure.protocol\";\n  public static final String PROTOCOL_DEFAULT = 
\"https\";\n  public static final String TABLE_ENDPOINT = \"azure.endpoint\";\n  public static final String ACCOUNT = \"azure.account\";\n  public static final String KEY = \"azure.key\";\n  public static final String TABLE = \"azure.table\";\n  public static final String TABLE_DEFAULT = \"usertable\";\n  public static final String PARTITIONKEY = \"azure.partitionkey\";\n  public static final String PARTITIONKEY_DEFAULT = \"Test\";\n  public static final String BATCHSIZE = \"azure.batchsize\";\n  public static final String BATCHSIZE_DEFAULT = \"1\";\n  private static final int BATCHSIZE_UPPERBOUND = 100;\n  private static final TableBatchOperation BATCH_OPERATION = new TableBatchOperation();\n  private static String partitionKey;\n  private CloudStorageAccount storageAccount = null;\n  private CloudTableClient tableClient = null;\n  private CloudTable cloudTable = null;\n  private static int batchSize;\n  private static int curIdx = 0;\n\n  @Override\n  public void init() throws DBException {\n    Properties props = getProperties();\n    String protocol = props.getProperty(PROTOCOL, PROTOCOL_DEFAULT);\n    if (!\"https\".equals(protocol) && !\"http\".equals(protocol)) {\n      throw new DBException(\"Protocol must be 'http' or 'https'!\\n\");\n    }\n    String table = props.getProperty(TABLE, TABLE_DEFAULT);\n    partitionKey = props.getProperty(PARTITIONKEY, PARTITIONKEY_DEFAULT);\n    batchSize = Integer.parseInt(props.getProperty(BATCHSIZE, BATCHSIZE_DEFAULT));\n    if (batchSize < 1 || batchSize > BATCHSIZE_UPPERBOUND) {\n      throw new DBException(String.format(\"Batchsize must be between 1 and %d!\\n\", \n          BATCHSIZE_UPPERBOUND));\n    }\n    String account = props.getProperty(ACCOUNT);\n    String key = props.getProperty(KEY);\n    String tableEndPoint = props.getProperty(TABLE_ENDPOINT);\n    String storageConnectionString = getStorageConnectionString(protocol, account, key, tableEndPoint);\n    try {\n      storageAccount = 
CloudStorageAccount.parse(storageConnectionString);\n    } catch (Exception e)  {\n      throw new DBException(\"Could not connect to the account.\\n\", e);\n    }\n    tableClient = storageAccount.createCloudTableClient();\n    try {\n      cloudTable = tableClient.getTableReference(table);\n      cloudTable.createIfNotExists();\n    } catch (Exception e)  {\n      throw new DBException(\"Could not connect to the table.\\n\", e);\n    }\n  }\n\n  @Override\n  public void cleanup() {\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n                     Map<String, ByteIterator> result) {\n    if (fields != null) {\n      return readSubset(key, fields, result);\n    } else {\n      return readEntity(key, result);\n    }\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount, \n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    try {\n      String whereStr = String.format(\"(PartitionKey eq '%s') and (RowKey ge '%s')\", \n          partitionKey, startkey);\n      TableQuery<DynamicTableEntity> scanQuery = \n          new TableQuery<DynamicTableEntity>(DynamicTableEntity.class)\n          .where(whereStr).take(recordcount);\n      int cnt = 0;\n      for (DynamicTableEntity entity : cloudTable.execute(scanQuery)) {\n        HashMap<String, EntityProperty> properties = entity.getProperties();\n        HashMap<String, ByteIterator> cur = new HashMap<String, ByteIterator>();\n        for (Entry<String, EntityProperty> entry : properties.entrySet()) {\n          String fieldName = entry.getKey();\n          ByteIterator fieldVal = new ByteArrayByteIterator(entry.getValue().getValueAsByteArray());\n          if (fields == null || fields.contains(fieldName)) {\n            cur.put(fieldName, fieldVal);\n          }\n        }\n        result.add(cur);\n        if (++cnt == recordcount) {\n          break;\n        }\n      }\n      return Status.OK;\n    } catch (Exception 
e) {\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    return insertOrUpdate(key, values);\n  }\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    if (batchSize == 1) {\n      return insertOrUpdate(key, values);\n    } else {\n      return insertBatch(key, values);\n    }\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      // firstly, retrieve the entity to be deleted\n      TableOperation retrieveOp = \n          TableOperation.retrieve(partitionKey, key, TableServiceEntity.class);\n      TableServiceEntity entity = cloudTable.execute(retrieveOp).getResultAsType();\n      // secondly, delete the entity\n      TableOperation deleteOp = TableOperation.delete(entity);\n      cloudTable.execute(deleteOp);\n      return Status.OK;\n    } catch (Exception e) {\n      return Status.ERROR;\n    }\n  }\n\n  private String getStorageConnectionString(String protocol, String account, String key, String tableEndPoint) {\n    String res = \n        String.format(\"DefaultEndpointsProtocol=%s;AccountName=%s;AccountKey=%s\", \n        protocol, account, key);\n    if (tableEndPoint != null) {\n      res = String.format(\"%s;TableEndpoint=%s\", res, tableEndPoint);\n    }\n    return res;\n  }\n\n  /*\n   * Read subset of properties instead of full fields with projection.\n   */\n  public Status readSubset(String key, Set<String> fields, Map<String, ByteIterator> result) {\n    String whereStr = String.format(\"RowKey eq '%s'\", key);\n\n    TableQuery<TableServiceEntity> projectionQuery = TableQuery.from(\n        TableServiceEntity.class).where(whereStr).select(fields.toArray(new String[0]));\n\n    EntityResolver<HashMap<String, ByteIterator>> resolver = \n        new EntityResolver<HashMap<String, ByteIterator>>() {\n          public HashMap<String, ByteIterator> resolve(String partitionkey, 
String rowKey, \n              Date timeStamp, HashMap<String, EntityProperty> properties, String etag) {\n            HashMap<String, ByteIterator> tmp = new HashMap<String, ByteIterator>();\n            for (Entry<String, EntityProperty> entry : properties.entrySet()) {\n              String key = entry.getKey();\n              ByteIterator val = new ByteArrayByteIterator(entry.getValue().getValueAsByteArray());\n              tmp.put(key, val);\n            }\n            return tmp;\n      }\n    };\n    try {\n      for (HashMap<String, ByteIterator> tmp : cloudTable.execute(projectionQuery, resolver)) {\n        for (Entry<String, ByteIterator> entry : tmp.entrySet()) {\n          String fieldName = entry.getKey();\n          ByteIterator fieldVal = entry.getValue();\n          result.put(fieldName, fieldVal);\n        }\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      return Status.ERROR;\n    }\n  }\n\n  private Status readEntity(String key, Map<String, ByteIterator> result) {\n    try {\n      // retrieve the entity to be read\n      TableOperation retrieveOp = \n          TableOperation.retrieve(partitionKey, key, DynamicTableEntity.class);\n      DynamicTableEntity entity = cloudTable.execute(retrieveOp).getResultAsType();\n      HashMap<String, EntityProperty> properties = entity.getProperties();\n      for (Entry<String, EntityProperty> entry: properties.entrySet()) {\n        String fieldName = entry.getKey();\n        ByteIterator fieldVal = new ByteArrayByteIterator(entry.getValue().getValueAsByteArray());\n        result.put(fieldName, fieldVal);\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      return Status.ERROR;\n    }\n  }\n\n  private Status insertBatch(String key, Map<String, ByteIterator> values) {\n    HashMap<String, EntityProperty> properties = new HashMap<String, EntityProperty>();\n    for (Entry<String, ByteIterator> entry : values.entrySet()) {\n      String fieldName = 
entry.getKey();\n      byte[] fieldVal = entry.getValue().toArray();\n      properties.put(fieldName, new EntityProperty(fieldVal));\n    }\n    DynamicTableEntity entity = new DynamicTableEntity(partitionKey, key, properties);\n    BATCH_OPERATION.insertOrReplace(entity);    \n    if (++curIdx == batchSize) {\n      try {\n        cloudTable.execute(BATCH_OPERATION);\n        BATCH_OPERATION.clear();\n        curIdx = 0;\n      } catch (Exception e) {\n        return Status.ERROR;\n      }\n    }\n    return Status.OK;\n  }\n  \n  private Status insertOrUpdate(String key, Map<String, ByteIterator> values) {\n    HashMap<String, EntityProperty> properties = new HashMap<String, EntityProperty>();\n    for (Entry<String, ByteIterator> entry : values.entrySet()) {\n      String fieldName = entry.getKey();\n      byte[] fieldVal = entry.getValue().toArray();\n      properties.put(fieldName, new EntityProperty(fieldVal));\n    }\n    DynamicTableEntity entity = new DynamicTableEntity(partitionKey, key, properties);\n    TableOperation insertOrReplace = TableOperation.insertOrReplace(entity);\n    try {\n      cloudTable.execute(insertOrReplace);\n      return Status.OK;\n    } catch (Exception e) {\n      return Status.ERROR;\n    }\n  }\n  \n}\n"
  },
  {
    "path": "azuretablestorage/src/main/java/com/yahoo/ycsb/db/azuretablestorage/package-info.java",
    "content": "/*\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"https://azure.microsoft.com/en-us/services/storage/\">Azure table Storage</a>.\n */\npackage com.yahoo.ycsb.db.azuretablestorage;\n\n"
  },
  {
    "path": "bin/bindings.properties",
    "content": "#\n# Copyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n\n#DATABASE BINDINGS\n#\n# Available bindings should be listed here in the form of\n#      name:class\n#\n# - the name must start in column 0.\n# - the name is also the directory where the class can be found.\n# - if the directory contains multiple versions with different classes,\n#   use a dash with the version.  (e.g. 
cassandra-7, cassandra-cql)\n#\naccumulo:com.yahoo.ycsb.db.accumulo.AccumuloClient\naccumulo1.6:com.yahoo.ycsb.db.accumulo.AccumuloClient\naccumulo1.7:com.yahoo.ycsb.db.accumulo.AccumuloClient\naccumulo1.8:com.yahoo.ycsb.db.accumulo.AccumuloClient\naerospike:com.yahoo.ycsb.db.AerospikeClient\nasynchbase:com.yahoo.ycsb.db.AsyncHBaseClient\narangodb:com.yahoo.ycsb.db.ArangoDBClient\narangodb3:com.yahoo.ycsb.db.arangodb.ArangoDB3Client\nazuredocumentdb:com.yahoo.ycsb.db.azuredocumentdb.AzureDocumentDBClient\nazuretablestorage:com.yahoo.ycsb.db.azuretablestorage.AzureClient\nbasic:com.yahoo.ycsb.BasicDB\nbasicts:com.yahoo.ycsb.BasicTSDB\ncassandra-cql:com.yahoo.ycsb.db.CassandraCQLClient\ncassandra2-cql:com.yahoo.ycsb.db.CassandraCQLClient\ncloudspanner:com.yahoo.ycsb.db.cloudspanner.CloudSpannerClient\ncouchbase:com.yahoo.ycsb.db.CouchbaseClient\ncouchbase2:com.yahoo.ycsb.db.couchbase2.Couchbase2Client\ndynamodb:com.yahoo.ycsb.db.DynamoDBClient\nelasticsearch:com.yahoo.ycsb.db.ElasticsearchClient\nelasticsearch5:com.yahoo.ycsb.db.elasticsearch5.ElasticsearchClient\nelasticsearch5-rest:com.yahoo.ycsb.db.elasticsearch5.ElasticsearchRestClient\ngeode:com.yahoo.ycsb.db.GeodeClient\ngooglebigtable:com.yahoo.ycsb.db.GoogleBigtableClient\ngoogledatastore:com.yahoo.ycsb.db.GoogleDatastoreClient\nhbase098:com.yahoo.ycsb.db.HBaseClient\nhbase10:com.yahoo.ycsb.db.HBaseClient10\nhbase12:com.yahoo.ycsb.db.hbase12.HBaseClient12\nhypertable:com.yahoo.ycsb.db.HypertableClient\ninfinispan-cs:com.yahoo.ycsb.db.InfinispanRemoteClient\ninfinispan:com.yahoo.ycsb.db.InfinispanClient\njdbc:com.yahoo.ycsb.db.JdbcDBClient\nkudu:com.yahoo.ycsb.db.KuduYCSBClient\nmemcached:com.yahoo.ycsb.db.MemcachedClient\nmongodb:com.yahoo.ycsb.db.MongoDbClient\nmongodb-async:com.yahoo.ycsb.db.AsyncMongoDbClient\nnosqldb:com.yahoo.ycsb.db.NoSqlDbClient\norientdb:com.yahoo.ycsb.db.OrientDBClient\nrados:com.yahoo.ycsb.db.RadosClient\nredis:com.yahoo.ycsb.db.RedisClient\nrest:com.yahoo.ycsb.webservice.rest.RestCl
ient\nriak:com.yahoo.ycsb.db.riak.RiakKVClient\ns3:com.yahoo.ycsb.db.S3Client\nsolr:com.yahoo.ycsb.db.solr.SolrClient\nsolr6:com.yahoo.ycsb.db.solr6.SolrClient\ntarantool:com.yahoo.ycsb.db.TarantoolClient\n\n"
  },
  {
    "path": "bin/ycsb",
    "content": "#!/usr/bin/env python\n#\n# Copyright (c) 2012 - 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n\nimport errno\nimport fnmatch\nimport io\nimport os\nimport shlex\nimport sys\nimport subprocess\n\ntry:\n    mod = __import__('argparse')\n    import argparse\nexcept ImportError:\n    print >> sys.stderr, '[ERROR] argparse not found. Try installing it via \"pip\".'\n    exit(1)\n\nBASE_URL = \"https://github.com/brianfrankcooper/YCSB/tree/master/\"\nCOMMANDS = {\n    \"shell\" : {\n        \"command\"     : \"\",\n        \"description\" : \"Interactive mode\",\n        \"main\"        : \"com.yahoo.ycsb.CommandLine\",\n    },\n    \"load\" : {\n        \"command\"     : \"-load\",\n        \"description\" : \"Execute the load phase\",\n        \"main\"        : \"com.yahoo.ycsb.Client\",\n    },\n    \"run\" : {\n        \"command\"     : \"-t\",\n        \"description\" : \"Execute the transaction phase\",\n        \"main\"        : \"com.yahoo.ycsb.Client\",\n    },\n}\n\nDATABASES = {\n    \"accumulo\"     : \"com.yahoo.ycsb.db.accumulo.AccumuloClient\",\n    \"accumulo1.6\"     : \"com.yahoo.ycsb.db.accumulo.AccumuloClient\",\n    \"accumulo1.7\"     : \"com.yahoo.ycsb.db.accumulo.AccumuloClient\",\n    \"accumulo1.8\"     : \"com.yahoo.ycsb.db.accumulo.AccumuloClient\",\n    \"aerospike\"    : \"com.yahoo.ycsb.db.AerospikeClient\",\n    \"arangodb\"     : 
\"com.yahoo.ycsb.db.ArangoDBClient\",\n    \"arangodb3\"    : \"com.yahoo.ycsb.db.arangodb.ArangoDB3Client\",\n    \"asynchbase\"   : \"com.yahoo.ycsb.db.AsyncHBaseClient\",\n    \"azuredocumentdb\"   : \"com.yahoo.ycsb.db.azuredocumentdb.AzureDocumentDBClient\",\n    \"azuretablestorage\" : \"com.yahoo.ycsb.db.azuretablestorage.AzureClient\",\n    \"basic\"        : \"com.yahoo.ycsb.BasicDB\",\n    \"basicts\"      : \"com.yahoo.ycsb.BasicTSDB\",\n    \"cassandra-cql\": \"com.yahoo.ycsb.db.CassandraCQLClient\",\n    \"cassandra2-cql\": \"com.yahoo.ycsb.db.CassandraCQLClient\",\n    \"cloudspanner\" : \"com.yahoo.ycsb.db.cloudspanner.CloudSpannerClient\",\n    \"couchbase\"    : \"com.yahoo.ycsb.db.CouchbaseClient\",\n    \"couchbase2\"   : \"com.yahoo.ycsb.db.couchbase2.Couchbase2Client\",\n    \"dynamodb\"     : \"com.yahoo.ycsb.db.DynamoDBClient\",\n    \"elasticsearch\": \"com.yahoo.ycsb.db.ElasticsearchClient\",\n    \"elasticsearch5\": \"com.yahoo.ycsb.db.elasticsearch5.ElasticsearchClient\",\n    \"geode\"        : \"com.yahoo.ycsb.db.GeodeClient\",\n    \"googlebigtable\"  : \"com.yahoo.ycsb.db.GoogleBigtableClient\",\n    \"googledatastore\" : \"com.yahoo.ycsb.db.GoogleDatastoreClient\",\n    \"hbase098\"     : \"com.yahoo.ycsb.db.HBaseClient\",\n    \"hbase10\"      : \"com.yahoo.ycsb.db.HBaseClient10\",\n    \"hbase12\"      : \"com.yahoo.ycsb.db.hbase12.HBaseClient12\",\n    \"hypertable\"   : \"com.yahoo.ycsb.db.HypertableClient\",\n    \"infinispan-cs\": \"com.yahoo.ycsb.db.InfinispanRemoteClient\",\n    \"infinispan\"   : \"com.yahoo.ycsb.db.InfinispanClient\",\n    \"jdbc\"         : \"com.yahoo.ycsb.db.JdbcDBClient\",\n    \"kudu\"         : \"com.yahoo.ycsb.db.KuduYCSBClient\",\n    \"memcached\"    : \"com.yahoo.ycsb.db.MemcachedClient\",\n    \"mongodb\"      : \"com.yahoo.ycsb.db.MongoDbClient\",\n    \"mongodb-async\": \"com.yahoo.ycsb.db.AsyncMongoDbClient\",\n    \"nosqldb\"      : \"com.yahoo.ycsb.db.NoSqlDbClient\",\n    \"orientdb\"     : 
\"com.yahoo.ycsb.db.OrientDBClient\",\n    \"rados\"        : \"com.yahoo.ycsb.db.RadosClient\",\n    \"redis\"        : \"com.yahoo.ycsb.db.RedisClient\",\n    \"rest\"         : \"com.yahoo.ycsb.webservice.rest.RestClient\",\n    \"riak\"         : \"com.yahoo.ycsb.db.riak.RiakKVClient\",\n    \"s3\"           : \"com.yahoo.ycsb.db.S3Client\",\n    \"solr\"         : \"com.yahoo.ycsb.db.solr.SolrClient\",\n    \"solr6\"        : \"com.yahoo.ycsb.db.solr6.SolrClient\",\n    \"tarantool\"    : \"com.yahoo.ycsb.db.TarantoolClient\",\n}\n\nOPTIONS = {\n    \"-P file\"        : \"Specify workload file\",\n    \"-p key=value\"   : \"Override workload property\",\n    \"-s\"             : \"Print status to stderr\",\n    \"-target n\"      : \"Target ops/sec (default: unthrottled)\",\n    \"-threads n\"     : \"Number of client threads (default: 1)\",\n    \"-cp path\"       : \"Additional Java classpath entries\",\n    \"-jvm-args args\" : \"Additional arguments to the JVM\",\n}\n\ndef usage():\n    output = io.BytesIO()\n    print >> output, \"%s command database [options]\" % sys.argv[0]\n\n    print >> output, \"\\nCommands:\"\n    for command in sorted(COMMANDS.keys()):\n        print >> output, \"    %s %s\" % (command.ljust(14),\n                                        COMMANDS[command][\"description\"])\n\n    print >> output, \"\\nDatabases:\"\n    for db in sorted(DATABASES.keys()):\n        print >> output, \"    %s %s\" % (db.ljust(14), BASE_URL +\n                                        db.split(\"-\")[0])\n\n    print >> output, \"\\nOptions:\"\n    for option in sorted(OPTIONS.keys()):\n        print >> output, \"    %s %s\" % (option.ljust(14), OPTIONS[option])\n\n    print >> output, \"\"\"\\nWorkload Files:\n    There are various predefined workloads under workloads/ directory.\n    See https://github.com/brianfrankcooper/YCSB/wiki/Core-Properties\n    for the list of workload properties.\"\"\"\n\n    return output.getvalue()\n\n# Python 2.6 doesn't 
have check_output. Add the method as it is in Python 2.7\n# Based on https://github.com/python/cpython/blob/2.7/Lib/subprocess.py#L545\ndef check_output(*popenargs, **kwargs):\n    r\"\"\"Run command with arguments and return its output as a byte string.\n\n    If the exit code was non-zero it raises a CalledProcessError.  The\n    CalledProcessError object will have the return code in the returncode\n    attribute and output in the output attribute.\n\n    The arguments are the same as for the Popen constructor.  Example:\n\n    >>> check_output([\"ls\", \"-l\", \"/dev/null\"])\n    'crw-rw-rw- 1 root root 1, 3 Oct 18  2007 /dev/null\\n'\n\n    The stdout argument is not allowed as it is used internally.\n    To capture standard error in the result, use stderr=STDOUT.\n\n    >>> check_output([\"/bin/sh\", \"-c\",\n    ...               \"ls -l non_existent_file ; exit 0\"],\n    ...              stderr=STDOUT)\n    'ls: non_existent_file: No such file or directory\\n'\n    \"\"\"\n    if 'stdout' in kwargs:\n        raise ValueError('stdout argument not allowed, it will be overridden.')\n    process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)\n    output, unused_err = process.communicate()\n    retcode = process.poll()\n    if retcode:\n        cmd = kwargs.get(\"args\")\n        if cmd is None:\n            cmd = popenargs[0]\n        error = subprocess.CalledProcessError(retcode, cmd)\n        error.output = output\n        raise error\n    return output\n\ndef debug(message):\n    print >> sys.stderr, \"[DEBUG] \", message\n\ndef warn(message):\n    print >> sys.stderr, \"[WARN] \", message\n\ndef error(message):\n    print >> sys.stderr, \"[ERROR] \", message\n\ndef find_jars(dir, glob='*.jar'):\n    jars = []\n    for (dirpath, dirnames, filenames) in os.walk(dir):\n        for filename in fnmatch.filter(filenames, glob):\n            jars.append(os.path.join(dirpath, filename))\n    return jars\n\ndef get_ycsb_home():\n    dir = 
os.path.abspath(os.path.dirname(sys.argv[0]))\n    while \"LICENSE.txt\" not in os.listdir(dir):\n        dir = os.path.join(dir, os.path.pardir)\n    return os.path.abspath(dir)\n\ndef is_distribution():\n    # If there's a top level pom, we're a source checkout. otherwise a dist artifact\n    return \"pom.xml\" not in os.listdir(get_ycsb_home())\n\n# Run the maven dependency plugin to get the local jar paths.\n# presumes maven can run, so should only be run on source checkouts\n# will invoke the 'package' goal for the given binding in order to resolve intra-project deps\n# presumes maven properly handles system-specific path separators\n# Given module is full module name eg. 'core' or 'couchbase-binding'\ndef get_classpath_from_maven(module):\n    try:\n        debug(\"Running 'mvn -pl com.yahoo.ycsb:\" + module + \" -am package -DskipTests \"\n              \"dependency:build-classpath -DincludeScope=compile -Dmdep.outputFilterFile=true'\")\n        mvn_output = check_output([\"mvn\", \"-pl\", \"com.yahoo.ycsb:\" + module,\n                                   \"-am\", \"package\", \"-DskipTests\",\n                                   \"dependency:build-classpath\",\n                                   \"-DincludeScope=compile\",\n                                   \"-Dmdep.outputFilterFile=true\"])\n        # the above outputs a \"classpath=/path/tojar:/path/to/other/jar\" for each module\n        # the last module will be the datastore binding\n        line = [x for x in mvn_output.splitlines() if x.startswith(\"classpath=\")][-1:]\n        return line[0][len(\"classpath=\"):]\n    except subprocess.CalledProcessError, err:\n        error(\"Attempting to generate a classpath from Maven failed \"\n              \"with return code '\" + str(err.returncode) + \"'. 
The output from \"\n              \"Maven follows, try running \"\n              \"'mvn -DskipTests package dependency:build=classpath' on your \"\n              \"own and correct errors.\" + os.linesep + os.linesep + \"mvn output:\" + os.linesep\n              + err.output)\n        sys.exit(err.returncode)\n\ndef main():\n    p = argparse.ArgumentParser(\n            usage=usage(),\n            formatter_class=argparse.RawDescriptionHelpFormatter)\n    p.add_argument('-cp', dest='classpath', help=\"\"\"Additional classpath\n                   entries, e.g.  '-cp /tmp/hbase-1.0.1.1/conf'. Will be\n                   prepended to the YCSB classpath.\"\"\")\n    p.add_argument(\"-jvm-args\", default=[], type=shlex.split,\n                   help=\"\"\"Additional arguments to pass to 'java', e.g.\n                   '-Xmx4g'\"\"\")\n    p.add_argument(\"command\", choices=sorted(COMMANDS),\n                   help=\"\"\"Command to run.\"\"\")\n    p.add_argument(\"database\", choices=sorted(DATABASES),\n                   help=\"\"\"Database to test.\"\"\")\n    args, remaining = p.parse_known_args()\n    ycsb_home = get_ycsb_home()\n\n    # Use JAVA_HOME to find java binary if set, otherwise just use PATH.\n    java = \"java\"\n    java_home = os.getenv(\"JAVA_HOME\")\n    if java_home:\n        java = os.path.join(java_home, \"bin\", \"java\")\n    db_classname = DATABASES[args.database]\n    command = COMMANDS[args.command][\"command\"]\n    main_classname = COMMANDS[args.command][\"main\"]\n\n    # Classpath set up\n    binding = args.database.split(\"-\")[0]\n\n    if binding == \"accumulo\":\n        warn(\"The 'accumulo' client has been deprecated in favor of version \"\n             \"specific bindings. This name still maps to the binding for \"\n             \"Accumulo 1.6, which is named 'accumulo-1.6'. 
This alias will \"\n             \"be removed in a future YCSB release.\")\n        binding = \"accumulo1.6\"\n\n    if binding == \"cassandra2\":\n        warn(\"The 'cassandra2-cql' client has been deprecated. It has been \"\n             \"renamed to simply 'cassandra-cql'. This alias will be removed\"\n             \" in the next YCSB release.\")\n        binding = \"cassandra\"\n\n    if binding == \"couchbase\":\n        warn(\"The 'couchbase' client has been deprecated. If you are using \"\n             \"Couchbase 4.0+ try using the 'couchbase2' client instead.\")\n\n    if is_distribution():\n        db_dir = os.path.join(ycsb_home, binding + \"-binding\")\n        # include top-level conf for when we're a binding-specific artifact.\n        # If we add top-level conf to the general artifact, starting here\n        # will allow binding-specific conf to override (because it's prepended)\n        cp = [os.path.join(ycsb_home, \"conf\")]\n        cp.extend(find_jars(os.path.join(ycsb_home, \"lib\")))\n        cp.extend(find_jars(os.path.join(db_dir, \"lib\")))\n    else:\n        warn(\"Running against a source checkout. In order to get our runtime \"\n             \"dependencies we'll have to invoke Maven. 
Depending on the state \"\n             \"of your system, this may take ~30-45 seconds\")\n        db_location = \"core\" if (binding == \"basic\" or binding == \"basicts\") else binding\n        project = \"core\" if (binding == \"basic\" or binding == \"basicts\") else binding + \"-binding\"\n        db_dir = os.path.join(ycsb_home, db_location)\n        # goes first so we can rely on side-effect of package\n        maven_says = get_classpath_from_maven(project)\n        # TODO when we have a version property, skip the glob\n        cp = find_jars(os.path.join(db_dir, \"target\"),\n                       project + \"*.jar\")\n        # alredy in jar:jar:jar form\n        cp.append(maven_says)\n    cp.insert(0, os.path.join(db_dir, \"conf\"))\n    classpath = os.pathsep.join(cp)\n    if args.classpath:\n        classpath = os.pathsep.join([args.classpath, classpath])\n\n    ycsb_command = ([java] + args.jvm_args +\n                    [\"-cp\", classpath,\n                     main_classname, \"-db\", db_classname] + remaining)\n    if command:\n        ycsb_command.append(command)\n    print >> sys.stderr, \" \".join(ycsb_command)\n    try:\n        return subprocess.call(ycsb_command)\n    except OSError as e:\n        if e.errno == errno.ENOENT:\n            error('Command failed. Is java installed and on your PATH?')\n            return 1\n        else:\n            raise\n\nif __name__ == '__main__':\n    sys.exit(main())\n"
  },
  {
    "path": "bin/ycsb.bat",
    "content": "@REM\r\n@REM Copyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\r\n@REM\r\n@REM Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n@REM may not use this file except in compliance with the License. You\r\n@REM may obtain a copy of the License at\r\n@REM\r\n@REM http://www.apache.org/licenses/LICENSE-2.0\r\n@REM\r\n@REM Unless required by applicable law or agreed to in writing, software\r\n@REM distributed under the License is distributed on an \"AS IS\" BASIS,\r\n@REM WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n@REM implied. See the License for the specific language governing\r\n@REM permissions and limitations under the License. See accompanying\r\n@REM LICENSE file.\r\n@REM\r\n@REM -----------------------------------------------------------------------\r\n@REM Control Script for YCSB\r\n@REM\r\n@REM Environment Variable Prerequisites\r\n@REM\r\n@REM   Do not set the variables in this script. Instead put them into a script\r\n@REM   setenv.sh in YCSB_HOME/bin to keep your customizations separate.\r\n@REM\r\n@REM   YCSB_HOME       (Optional) YCSB installation directory.  If not set\r\n@REM                   this script will use the parent directory of where this\r\n@REM                   script is run from.\r\n@REM\r\n@REM   JAVA_HOME       (Required) Must point at your Java Development Kit\r\n@REM                   or Java Runtime Environment installation.\r\n@REM\r\n@REM   JAVA_OPTS       (Optional) Java runtime options used when any command\r\n@REM                   is executed.\r\n@REM\r\n@REM   WARNING!!! 
YCSB home must be located in a directory path that doesn't\r\n@REM   contain spaces.\r\n@REM\r\n\r\n@ECHO OFF\r\nSETLOCAL ENABLEDELAYEDEXPANSION\r\n\r\n@REM Only set YCSB_HOME if not already set\r\nPUSHD %~dp0..\r\nIF NOT DEFINED YCSB_HOME SET YCSB_HOME=%CD%\r\nPOPD\r\n\r\n@REM Ensure that any extra CLASSPATH variables are set via setenv.bat\r\nSET CLASSPATH=\r\n\r\n@REM Pull in customization options\r\nif exist \"%YCSB_HOME%\\bin\\setenv.bat\" call \"%YCSB_HOME%\\bin\\setenv.bat\"\r\n\r\n@REM Check if we have a usable JDK\r\nIF \"%JAVA_HOME%.\" == \".\" GOTO noJavaHome\r\nIF NOT EXIST \"%JAVA_HOME%\\bin\\java.exe\" GOTO noJavaHome\r\nGOTO okJava\r\n:noJavaHome\r\nECHO The JAVA_HOME environment variable is not defined correctly.\r\nGOTO exit\r\n:okJava\r\n\r\n@REM Determine YCSB command argument\r\nIF NOT \"load\" == \"%1\" GOTO noload\r\nSET YCSB_COMMAND=-load\r\nSET YCSB_CLASS=com.yahoo.ycsb.Client\r\nGOTO gotCommand\r\n:noload\r\nIF NOT \"run\" == \"%1\" GOTO noRun\r\nSET YCSB_COMMAND=-t\r\nSET YCSB_CLASS=com.yahoo.ycsb.Client\r\nGOTO gotCommand\r\n:noRun\r\nIF NOT \"shell\" == \"%1\" GOTO noShell\r\nSET YCSB_COMMAND=\r\nSET YCSB_CLASS=com.yahoo.ycsb.CommandLine\r\nGOTO gotCommand\r\n:noShell\r\nECHO [ERROR] Found unknown command '%1'\r\nECHO [ERROR] Expected one of 'load', 'run', or 'shell'. Exiting.\r\nGOTO exit\r\n:gotCommand\r\n\r\n@REM Find binding information\r\nFOR /F \"delims=\" %%G in (\r\n  'FINDSTR /B \"%2:\" %YCSB_HOME%\\bin\\bindings.properties'\r\n) DO SET \"BINDING_LINE=%%G\"\r\n\r\nIF NOT \"%BINDING_LINE%.\" == \".\" GOTO gotBindingLine\r\nECHO [ERROR] The specified binding '%2' was not found.  
Exiting.\r\nGOTO exit\r\n:gotBindingLine\r\n\r\n@REM Pull out binding name and class\r\nFOR /F \"tokens=1-2 delims=:\" %%G IN (\"%BINDING_LINE%\") DO (\r\n  SET BINDING_NAME=%%G\r\n  SET BINDING_CLASS=%%H\r\n)\r\n\r\n@REM Some bindings have multiple versions that are managed in the same\r\n@REM directory.\r\n@REM   They are noted with a '-' after the binding name.\r\n@REM   (e.g. cassandra-7 & cassandra-8)\r\nFOR /F \"tokens=1 delims=-\" %%G IN (\"%BINDING_NAME%\") DO (\r\n  SET BINDING_DIR=%%G\r\n)\r\n\r\n@REM The 'basic' binding is core functionality\r\nIF NOT \"%BINDING_NAME%\" == \"basic\" GOTO noBasic\r\nSET BINDING_DIR=core\r\n:noBasic\r\n\r\n@REM Add Top level conf to classpath\r\nIF \"%CLASSPATH%.\" == \".\" GOTO emptyClasspath\r\nSET CLASSPATH=%CLASSPATH%;%YCSB_HOME%\\conf\r\nGOTO confAdded\r\n:emptyClasspath\r\nSET CLASSPATH=%YCSB_HOME%\\conf\r\n:confAdded\r\n\r\n@REM Accumulo deprecation message\r\nIF NOT \"%BINDING_DIR%\" == \"accumulo\" GOTO notAliasAccumulo\r\necho [WARN] The 'accumulo' client has been deprecated in favor of version specific bindings. This name still maps to the binding for Accumulo 1.6, which is named 'accumulo-1.6'. This alias will be removed in a future YCSB release.\r\nSET BINDING_DIR=accumulo1.6\r\n:notAliasAccumulo\r\n\r\n@REM Cassandra2 deprecation message\r\nIF NOT \"%BINDING_DIR%\" == \"cassandra2\" GOTO notAliasCassandra\r\necho [WARN] The 'cassandra2-cql' client has been deprecated. It has been renamed to simply 'cassandra-cql'. 
This alias will be removed in the next YCSB release.\r\nSET BINDING_DIR=cassandra\r\n:notAliasCassandra\r\n\r\n@REM Build classpath according to source checkout or release distribution\r\nIF EXIST \"%YCSB_HOME%\\pom.xml\" GOTO gotSource\r\n\r\n@REM Core libraries\r\nFOR %%F IN (%YCSB_HOME%\\lib\\*.jar) DO (\r\n  SET CLASSPATH=!CLASSPATH!;%%F%\r\n)\r\n\r\n@REM Database conf dir\r\nIF NOT EXIST \"%YCSB_HOME%\\%BINDING_DIR%-binding\\conf\" GOTO noBindingConf\r\nset CLASSPATH=%CLASSPATH%;%YCSB_HOME%\\%BINDING_DIR%-binding\\conf\r\n:noBindingConf\r\n\r\n@REM Database libraries\r\nFOR %%F IN (%YCSB_HOME%\\%BINDING_DIR%-binding\\lib\\*.jar) DO (\r\n  SET CLASSPATH=!CLASSPATH!;%%F%\r\n)\r\nGOTO classpathComplete\r\n\r\n:gotSource\r\n@REM Check for some basic libraries to see if the source has been built.\r\nIF EXIST \"%YCSB_HOME%\\%BINDING_DIR%\\target\\*.jar\" GOTO gotJars\r\n\r\n@REM Call mvn to build source checkout.\r\nIF \"%BINDING_NAME%\" == \"basic\" GOTO buildCore\r\nSET MVN_PROJECT=%BINDING_DIR%-binding\r\ngoto gotMvnProject\r\n:buildCore\r\nSET MVN_PROJECT=core\r\n:gotMvnProject\r\n\r\nECHO [WARN] YCSB libraries not found.  Attempting to build...\r\nCALL mvn -pl com.yahoo.ycsb:%MVN_PROJECT% -am package -DskipTests\r\nIF %ERRORLEVEL% NEQ 0 (\r\n  ECHO [ERROR] Error trying to build project. 
Exiting.\r\n  GOTO exit\r\n)\r\n\r\n:gotJars\r\n@REM Core libraries\r\nFOR %%F IN (%YCSB_HOME%\\core\\target\\*.jar) DO (\r\n  SET CLASSPATH=!CLASSPATH!;%%F%\r\n)\r\n\r\n@REM Database conf (need to find because location is not consistent)\r\nFOR /D /R %YCSB_HOME%\\%BINDING_DIR% %%F IN (*) DO (\r\n  IF \"%%~nxF\" == \"conf\" SET CLASSPATH=!CLASSPATH!;%%F%\r\n)\r\n\r\n@REM Database libraries\r\nFOR %%F IN (%YCSB_HOME%\\%BINDING_DIR%\\target\\*.jar) DO (\r\n  SET CLASSPATH=!CLASSPATH!;%%F%\r\n)\r\n\r\n@REM Database dependency libraries\r\nFOR %%F IN (%YCSB_HOME%\\%BINDING_DIR%\\target\\dependency\\*.jar) DO (\r\n  SET CLASSPATH=!CLASSPATH!;%%F%\r\n)\r\n\r\n:classpathComplete\r\n\r\n@REM Couchbase deprecation message\r\nIF NOT \"%BINDING_DIR%\" == \"couchbase\" GOTO notOldCouchbase\r\necho [WARN] The 'couchbase' client is deprecated. If you are using Couchbase 4.0+ try using the 'couchbase2' client instead.\r\n:notOldCouchbase\r\n\r\n@REM Get the rest of the arguments, skipping the first 2\r\nFOR /F \"tokens=2*\" %%G IN (\"%*\") DO (\r\n  SET YCSB_ARGS=%%H\r\n)\r\n\r\n@REM Run YCSB\r\n@ECHO ON\r\n\"%JAVA_HOME%\\bin\\java.exe\" %JAVA_OPTS% -classpath \"%CLASSPATH%\" %YCSB_CLASS% %YCSB_COMMAND% -db %BINDING_CLASS% %YCSB_ARGS%\r\n@ECHO OFF\r\n\r\nGOTO end\r\n\r\n:exit\r\nEXIT /B 1;\r\n\r\n:end\r\n\r\n"
  },
  {
    "path": "bin/ycsb.sh",
    "content": "#!/bin/sh\n#\n# Copyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n# -----------------------------------------------------------------------------\n# Control Script for YCSB\n#\n# Environment Variable Prerequisites\n#\n#   Do not set the variables in this script. Instead put them into a script\n#   setenv.sh in YCSB_HOME/bin to keep your customizations separate.\n#\n#   YCSB_HOME       (Optional) YCSB installation directory.  If not set\n#                   this script will use the parent directory of where this\n#                   script is run from.\n#\n#   JAVA_HOME       (Optional) Must point at your Java Development Kit\n#                   installation.  If empty, this script tries use the\n#                   available java executable.\n#\n#   JAVA_OPTS       (Optional) Java runtime options used when any command\n#                   is executed.\n#\n#   WARNING!!! 
YCSB home must be located in a directory path that doesn't\n#   contain spaces.\n#\n#        www.shellcheck.net was used to validate this script\n\n# Cygwin support\nCYGWIN=false\ncase \"$(uname)\" in\nCYGWIN*) CYGWIN=true;;\nesac\n\n# Get script path\nSCRIPT_DIR=$(dirname \"$0\" 2>/dev/null)\n\n# Only set YCSB_HOME if not already set\n[ -z \"$YCSB_HOME\" ] && YCSB_HOME=$(cd \"$SCRIPT_DIR/..\" || exit; pwd)\n\n# Ensure that any extra CLASSPATH variables are set via setenv.sh\nCLASSPATH=\n\n# Pull in customization options\nif [ -r \"$YCSB_HOME/bin/setenv.sh\" ]; then\n  # Shellcheck wants a source, but this directive only runs if available\n  #   So, tell shellcheck to ignore\n  # shellcheck source=/dev/null\n  . \"$YCSB_HOME/bin/setenv.sh\"\nfi\n\n# Attempt to find the available JAVA, if JAVA_HOME not set\nif [ -z \"$JAVA_HOME\" ]; then\n  JAVA_PATH=$(which java 2>/dev/null)\n  if [ \"x$JAVA_PATH\" != \"x\" ]; then\n    JAVA_HOME=$(dirname \"$(dirname \"$JAVA_PATH\" 2>/dev/null)\")\n  fi\nfi\n\n# If JAVA_HOME still not set, error\nif [ -z \"$JAVA_HOME\" ]; then\n  echo \"[ERROR] Java executable not found. Exiting.\"\n  exit 1;\nfi\n\n# Determine YCSB command argument\nif [ \"load\" = \"$1\" ] ; then\n  YCSB_COMMAND=-load\n  YCSB_CLASS=com.yahoo.ycsb.Client\nelif [ \"run\" = \"$1\" ] ; then\n  YCSB_COMMAND=-t\n  YCSB_CLASS=com.yahoo.ycsb.Client\nelif [ \"shell\" = \"$1\" ] ; then\n  YCSB_COMMAND=\n  YCSB_CLASS=com.yahoo.ycsb.CommandLine\nelse\n  echo \"[ERROR] Found unknown command '$1'\"\n  echo \"[ERROR] Expected one of 'load', 'run', or 'shell'. Exiting.\"\n  exit 1;\nfi\n\n# Find binding information\nBINDING_LINE=$(grep \"^$2:\" \"$YCSB_HOME/bin/bindings.properties\" -m 1)\n\nif [ -z \"$BINDING_LINE\" ] ; then\n  echo \"[ERROR] The specified binding '$2' was not found.  
Exiting.\"\n  exit 1;\nfi\n\n# Get binding name and class\nBINDING_NAME=$(echo \"$BINDING_LINE\" | cut -d':' -f1)\nBINDING_CLASS=$(echo \"$BINDING_LINE\" | cut -d':' -f2)\n\n# Some bindings have multiple versions that are managed in the same directory.\n#   They are noted with a '-' after the binding name.\n#   (e.g. cassandra-7 & cassandra-8)\nBINDING_DIR=$(echo \"$BINDING_NAME\" | cut -d'-' -f1)\n\n# The 'basic' binding is core functionality\nif [ \"$BINDING_NAME\" = \"basic\" ] ; then\n  BINDING_DIR=core\nfi\n\n# For Cygwin, ensure paths are in UNIX format before anything is touched\nif $CYGWIN; then\n  [ -n \"$JAVA_HOME\" ] && JAVA_HOME=$(cygpath --unix \"$JAVA_HOME\")\n  [ -n \"$CLASSPATH\" ] && CLASSPATH=$(cygpath --path --unix \"$CLASSPATH\")\nfi\n\n# Check if source checkout, or release distribution\nDISTRIBUTION=true\nif [ -r \"$YCSB_HOME/pom.xml\" ]; then\n  DISTRIBUTION=false;\nfi\n\n# Add Top level conf to classpath\nif [ -z \"$CLASSPATH\" ] ; then\n  CLASSPATH=\"$YCSB_HOME/conf\"\nelse\n  CLASSPATH=\"$CLASSPATH:$YCSB_HOME/conf\"\nfi\n\n# Accumulo deprecation message\nif [ \"${BINDING_DIR}\" = \"accumulo\" ] ; then\n  echo \"[WARN] The 'accumulo' client has been deprecated in favor of version \\\nspecific bindings. This name still maps to the binding for \\\nAccumulo 1.6, which is named 'accumulo-1.6'. This alias will \\\nbe removed in a future YCSB release.\"\n  BINDING_DIR=\"accumulo1.6\"\nfi\n\n# Cassandra2 deprecation message\nif [ \"${BINDING_DIR}\" = \"cassandra2\" ] ; then\n  echo \"[WARN] The 'cassandra2-cql' client has been deprecated. It has been \\\nrenamed to simply 'cassandra-cql'. This alias will be removed  in the next \\\nYCSB release.\"\n  BINDING_DIR=\"cassandra\"\nfi\n\n# Build classpath\n#   The \"if\" check after the \"for\" is because glob may just return the pattern\n#   when no files are found.  
The \"if\" makes sure the file is really there.\nif $DISTRIBUTION; then\n  # Core libraries\n  for f in \"$YCSB_HOME\"/lib/*.jar ; do\n    if [ -r \"$f\" ] ; then\n      CLASSPATH=\"$CLASSPATH:$f\"\n    fi\n  done\n\n  # Database conf dir\n  if [ -r \"$YCSB_HOME\"/\"$BINDING_DIR\"-binding/conf ] ; then\n    CLASSPATH=\"$CLASSPATH:$YCSB_HOME/$BINDING_DIR-binding/conf\"\n  fi\n\n  # Database libraries\n  for f in \"$YCSB_HOME\"/\"$BINDING_DIR\"-binding/lib/*.jar ; do\n    if [ -r \"$f\" ] ; then\n      CLASSPATH=\"$CLASSPATH:$f\"\n    fi\n  done\n\n# Source checkout\nelse\n  # Check for some basic libraries to see if the source has been built.\n  for f in \"$YCSB_HOME\"/\"$BINDING_DIR\"/target/*.jar ; do\n\n    # Call mvn to build source checkout.\n    if [ ! -e \"$f\" ] ; then\n      if [ \"$BINDING_NAME\" = \"basic\" ] ; then\n        MVN_PROJECT=core\n      else\n        MVN_PROJECT=\"$BINDING_DIR\"-binding\n      fi\n\n      echo \"[WARN] YCSB libraries not found.  Attempting to build...\"\n      mvn -pl com.yahoo.ycsb:\"$MVN_PROJECT\" -am package -DskipTests\n      if [ \"$?\" -ne 0 ] ; then\n        echo \"[ERROR] Error trying to build project. 
Exiting.\"\n        exit 1;\n      fi\n    fi\n\n  done\n\n  # Core libraries\n  for f in \"$YCSB_HOME\"/core/target/*.jar ; do\n    if [ -r \"$f\" ] ; then\n      CLASSPATH=\"$CLASSPATH:$f\"\n    fi\n  done\n\n  # Database conf (need to find because location is not consistent)\n  CLASSPATH_CONF=$(find \"$YCSB_HOME\"/$BINDING_DIR -name \"conf\" | while IFS=\"\" read -r file; do echo \":$file\"; done)\n  if [ \"x$CLASSPATH_CONF\" != \"x\" ]; then\n    CLASSPATH=\"$CLASSPATH$CLASSPATH_CONF\"\n  fi\n\n\n  # Database libraries\n  for f in \"$YCSB_HOME\"/\"$BINDING_DIR\"/target/*.jar ; do\n    if [ -r \"$f\" ] ; then\n      CLASSPATH=\"$CLASSPATH:$f\"\n    fi\n  done\n\n  # Database dependency libraries\n  for f in \"$YCSB_HOME\"/\"$BINDING_DIR\"/target/dependency/*.jar ; do\n    if [ -r \"$f\" ] ; then\n      CLASSPATH=\"$CLASSPATH:$f\"\n    fi\n  done\nfi\n\n# Couchbase deprecation message\nif [ \"${BINDING_DIR}\" = \"couchbase\" ] ; then\n  echo \"[WARN] The 'couchbase' client is deprecated. If you are using \\\nCouchbase 4.0+ try using the 'couchbase2' client instead.\"\nfi\n\n# For Cygwin, switch paths to Windows format before running java\nif $CYGWIN; then\n  [ -n \"$JAVA_HOME\" ] && JAVA_HOME=$(cygpath --unix \"$JAVA_HOME\")\n  [ -n \"$CLASSPATH\" ] && CLASSPATH=$(cygpath --path --windows \"$CLASSPATH\")\nfi\n\n# Get the rest of the arguments\nYCSB_ARGS=$(echo \"$@\" | cut -d' ' -f3-)\n\n# About to run YCSB\necho \"$JAVA_HOME/bin/java $JAVA_OPTS -classpath $CLASSPATH $YCSB_CLASS $YCSB_COMMAND -db $BINDING_CLASS $YCSB_ARGS\"\n\n# Run YCSB\n# Shellcheck reports the following line as needing double quotes to prevent\n# globbing and word splitting.  However, word splitting is the desired effect\n# here.  So, the shellcheck error is disabled for this line.\n# shellcheck disable=SC2086\n\"$JAVA_HOME/bin/java\" $JAVA_OPTS -classpath \"$CLASSPATH\" $YCSB_CLASS $YCSB_COMMAND -db $BINDING_CLASS $YCSB_ARGS\n\n"
  },
  {
    "path": "binding-parent/datastore-specific-descriptor/pom.xml",
    "content": "<!-- \nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>root</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../../</relativePath>\n  </parent>\n\n  <artifactId>datastore-specific-descriptor</artifactId>\n  <name>Per Datastore Binding descriptor</name>\n  <packaging>jar</packaging>\n\n  <description>\n    This module contains the assembly descriptor used by the individual components\n    to build binding-specific distributions.\n  </description>\n  <dependencies>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n\n"
  },
  {
    "path": "binding-parent/datastore-specific-descriptor/src/main/resources/assemblies/datastore-specific-assembly.xml",
    "content": "<!-- \nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd\">\n  <id>dist</id>\n  <includeBaseDirectory>true</includeBaseDirectory>\n  <baseDirectory>ycsb-${artifactId}-${version}</baseDirectory>\n  <files>\n    <file>\n      <source>README.md</source>\n      <outputDirectory></outputDirectory>\n    </file>\n  </files>\n  <fileSets>\n    <fileSet>\n      <directory>..</directory>\n      <outputDirectory></outputDirectory>\n      <fileMode>0644</fileMode>\n      <includes>\n        <include>LICENSE.txt</include>\n        <include>NOTICE.txt</include>\n      </includes>\n    </fileSet>\n    <fileSet>\n      <directory>../bin</directory>\n      <outputDirectory>bin</outputDirectory>\n      <fileMode>0755</fileMode>\n      <includes>\n        <include>ycsb*</include>\n      </includes>\n    </fileSet>\n    <fileSet>\n      <directory>../bin</directory>\n      <outputDirectory>bin</outputDirectory>\n      <fileMode>0644</fileMode>\n      <includes>\n        <include>bindings.properties</include>\n      </includes>\n    </fileSet>\n    <fileSet>\n      <directory>../workloads</directory>\n      
<outputDirectory>workloads</outputDirectory>\n      <fileMode>0644</fileMode>\n    </fileSet>\n    <fileSet>\n      <directory>src/main/conf</directory>\n      <outputDirectory>conf</outputDirectory>\n      <fileMode>0644</fileMode>\n    </fileSet>\n  </fileSets>\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>lib</outputDirectory>\n      <includes>\n        <include>com.yahoo.ycsb:core</include>\n      </includes>\n      <scope>provided</scope>\n      <useTransitiveFiltering>true</useTransitiveFiltering>\n    </dependencySet>\n    <dependencySet>\n      <outputDirectory>lib</outputDirectory>\n      <includes>\n        <include>*:jar:*</include>\n      </includes>\n      <excludes>\n        <exclude>*:sources</exclude>\n      </excludes>\n    </dependencySet>\n  </dependencySets>\n</assembly>\n"
  },
  {
    "path": "binding-parent/pom.xml",
    "content": "<!-- \nCopyright (c) 2015-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>root</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>binding-parent</artifactId>\n  <name>YCSB Datastore Binding Parent</name>\n  <packaging>pom</packaging>\n\n  <description>\n    This module acts as the parent for new datastore bindings.\n    It creates a datastore specific binary artifact.\n  </description>\n\n  <modules>\n    <module>datastore-specific-descriptor</module>\n  </modules>\n\n  <properties>\n    <!-- See the test-on-jdk9 profile below. 
Default to 'jdk9 works' -->\n    <skipJDK9Tests>false</skipJDK9Tests>\n  </properties>\n\n  <build>\n    <pluginManagement>\n      <plugins>\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-assembly-plugin</artifactId>\n          <version>${maven.assembly.version}</version>\n          <dependencies>\n            <dependency>\n              <groupId>com.yahoo.ycsb</groupId>\n              <artifactId>datastore-specific-descriptor</artifactId>\n              <version>${project.version}</version>\n            </dependency>\n          </dependencies>\n          <configuration>\n            <descriptorRefs>\n              <descriptorRef>datastore-specific-assembly</descriptorRef>\n            </descriptorRefs>\n            <finalName>ycsb-${project.artifactId}-${project.version}</finalName>\n            <formats>\n              <format>tar.gz</format>\n            </formats>\n            <appendAssemblyId>false</appendAssemblyId>\n            <tarLongFileMode>posix</tarLongFileMode>\n          </configuration>\n          <executions>\n            <execution>\n              <phase>package</phase>\n              <goals>\n                <goal>single</goal>\n              </goals>\n            </execution>\n          </executions>\n        </plugin>\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-checkstyle-plugin</artifactId>\n          <executions>\n            <execution>\n              <id>validate</id>\n              <configuration>\n                <configLocation>../checkstyle.xml</configLocation>\n              </configuration>\n            </execution>\n          </executions>\n        </plugin>\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-dependency-plugin</artifactId>\n          <version>${maven.dependency.version}</version>\n        </plugin>\n      </plugins>\n    </pluginManagement>\n    
<plugins>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-dependency-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>stage-dependencies</id>\n            <phase>package</phase>\n            <goals>\n              <goal>copy-dependencies</goal>\n            </goals>\n            <configuration>\n              <includeScope>runtime</includeScope>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n  <profiles>\n    <!-- If the binding defines a README, presume we should make an assembly. -->\n    <profile>\n      <id>datastore-binding</id>\n      <activation>\n        <file>\n          <exists>README.md</exists>\n        </file>\n      </activation>\n      <build>\n        <plugins>\n          <plugin>\n            <groupId>org.apache.maven.plugins</groupId>\n            <artifactId>maven-assembly-plugin</artifactId>\n          </plugin>\n        </plugins>\n      </build>\n    </profile>\n    <!-- If the binding doesn't work with jdk9, it should redefine the\n         skipJDK9Tests property.\n      -->\n    <profile>\n      <id>tests-on-jdk9</id>\n      <activation>\n        <jdk>9</jdk>\n      </activation>\n      <properties>\n        <skipTests>${skipJDK9Tests}</skipTests>\n      </properties>\n    </profile>\n  </profiles>\n</project>\n\n"
  },
  {
    "path": "cassandra/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# Apache Cassandra 2.x CQL binding\n\nBinding for [Apache Cassandra](http://cassandra.apache.org), using the CQL API\nvia the [DataStax\ndriver](http://docs.datastax.com/en/developer/java-driver/2.1/java-driver/whatsNew2.html).\n\nTo run against the (deprecated) Cassandra Thrift API, use the `cassandra-10` binding.\n\n## Creating a table for use with YCSB\n\nFor keyspace `ycsb`, table `usertable`:\n\n    cqlsh> create keyspace ycsb\n        WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor': 3 };\n    cqlsh> USE ycsb;\n    cqlsh> create table usertable (\n        y_id varchar primary key,\n        field0 varchar,\n        field1 varchar,\n        field2 varchar,\n        field3 varchar,\n        field4 varchar,\n        field5 varchar,\n        field6 varchar,\n        field7 varchar,\n        field8 varchar,\n        field9 varchar);\n\n**Note that `replication_factor` and consistency levels (below) will affect performance.**\n\n## Cassandra Configuration Parameters\n\n- `hosts` (**required**)\n  - Cassandra nodes to connect to.\n  - No default.\n\n- `port`\n  - CQL port for communicating with the Cassandra cluster.\n  - Default is `9042`.\n\n- `cassandra.keyspace`\n  - Keyspace name; must match the keyspace for the table created (see above).\n    See 
http://docs.datastax.com/en/cql/3.1/cql/cql_reference/create_keyspace_r.html for details.\n  - Default value is `ycsb`.\n\n- `cassandra.username`\n- `cassandra.password`\n  - Optional user name and password for authentication. See http://docs.datastax.com/en/cassandra/2.0/cassandra/security/security_config_native_authenticate_t.html for details.\n\n- `cassandra.readconsistencylevel`\n- `cassandra.writeconsistencylevel`\n  - Default value is `ONE`.\n  - Consistency level for reads and writes, respectively. See the [DataStax documentation](http://docs.datastax.com/en/cassandra/2.0/cassandra/dml/dml_config_consistency_c.html) for details.\n  - *Note that the default setting does not provide durability in the face of node failure. Changing this setting will affect observed performance.* See also `replication_factor`, above.\n\n- `cassandra.maxconnections`\n- `cassandra.coreconnections`\n  - Defaults for max and core connections can be found here: https://datastax.github.io/java-driver/2.1.8/features/pooling/#pool-size. Cassandra 2.0.x falls under protocol V2; Cassandra 2.1+ falls under protocol V3.\n\n- `cassandra.connecttimeoutmillis`\n- `cassandra.readtimeoutmillis`\n  - Defaults for connect and read timeouts can be found here: https://docs.datastax.com/en/drivers/java/2.0/com/datastax/driver/core/SocketOptions.html.\n\n- `cassandra.tracing`\n  - Default is `false`.\n  - See https://docs.datastax.com/en/cql/3.3/cql/cql_reference/tracing_r.html"
  },
  {
    "path": "cassandra/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--\nCopyright (c) 2012-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>cassandra-binding</artifactId>\n  <name>Cassandra 2.1+ DB Binding</name>\n  <packaging>jar</packaging>\n\n  <properties>\n    <!-- Skip tests by default. 
They are enabled by the jdk8-tests profile below. -->\n    <skipTests>true</skipTests>\n  </properties>\n\n  <dependencies>\n    <!-- CQL driver -->\n    <dependency>\n      <groupId>com.datastax.cassandra</groupId>\n      <artifactId>cassandra-driver-core</artifactId>\n      <version>${cassandra.cql.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.cassandraunit</groupId>\n      <artifactId>cassandra-unit</artifactId>\n      <version>3.0.0.1</version>\n      <classifier>shaded</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-simple</artifactId>\n      <version>1.7.21</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n    <!-- only for Cassandra tests (Cassandra 2.2+ uses Sigar for collecting system information, and Sigar requires some native lib files) -->\n    <dependency>\n      <groupId>org.hyperic</groupId>\n      <artifactId>sigar-dist</artifactId>\n      <version>1.6.4.129</version>\n      <type>zip</type>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n\n  <profiles>\n    <!-- Cassandra 2.2+ requires JDK8 to run, so none of our tests\n         will work unless we're using jdk8.\n      -->\n    <profile>\n      <id>jdk8-tests</id>\n      <activation>\n        <jdk>1.8</jdk>\n      </activation>\n      <properties>\n        <skipTests>false</skipTests>\n      </properties>\n    </profile>\n  </profiles>\n\n  <!-- sigar-dist can be downloaded from the JBoss repository -->\n  <repositories>\n    <repository>\n      <id>central2</id>\n      <name>sigar Repository</name>\n      <url>http://repository.jboss.org/nexus/content/groups/public-jboss/</url>\n      <layout>default</layout>\n      <snapshots>\n        <enabled>false</enabled>\n      </snapshots>\n    </repository>\n  </repositories>\n\n  <!-- unzip sigar-dist/lib files.\n       References:\n       http://stackoverflow.com/questions/5388661/unzip-dependency-in-maven\n       https://arviarya.wordpress.com/2013/09/22/sigar-access-operating-system-and-hardware-level-information/\n    -->\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-dependency-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>unpack-sigar</id>\n            <phase>process-test-resources</phase><!-- or any other valid maven phase -->\n            <goals>\n              <goal>unpack-dependencies</goal>\n            </goals>\n            <configuration>\n              <includeGroupIds>org.hyperic</includeGroupIds>\n              <includeArtifactIds>sigar-dist</includeArtifactIds>\n              <includes>**/sigar-bin/lib/*</includes>\n              <excludes>**/sigar-bin/lib/*jar</excludes>\n              <outputDirectory>${project.build.directory}/cassandra-dependency</outputDirectory>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-surefire-plugin</artifactId>\n        <version>2.8</version>\n        <configuration>\n          <argLine>-Djava.library.path=${project.build.directory}/cassandra-dependency/hyperic-sigar-1.6.4/sigar-bin/lib</argLine>\n        </configuration>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "cassandra/src/main/java/com/yahoo/ycsb/db/CassandraCQLClient.java",
    "content": "/**\n * Copyright (c) 2013-2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n * use this file except in compliance with the License. You may obtain a copy of\n * the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n * License for the specific language governing permissions and limitations under\n * the License. See accompanying LICENSE file.\n *\n * Submitted by Chrisjan Matser on 10/11/2010.\n */\npackage com.yahoo.ycsb.db;\n\nimport com.datastax.driver.core.Cluster;\nimport com.datastax.driver.core.ColumnDefinitions;\nimport com.datastax.driver.core.ConsistencyLevel;\nimport com.datastax.driver.core.Host;\nimport com.datastax.driver.core.HostDistance;\nimport com.datastax.driver.core.Metadata;\nimport com.datastax.driver.core.ResultSet;\nimport com.datastax.driver.core.Row;\nimport com.datastax.driver.core.Session;\nimport com.datastax.driver.core.SimpleStatement;\nimport com.datastax.driver.core.Statement;\nimport com.datastax.driver.core.querybuilder.Insert;\nimport com.datastax.driver.core.querybuilder.QueryBuilder;\nimport com.datastax.driver.core.querybuilder.Select;\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\nimport java.nio.ByteBuffer;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.concurrent.atomic.AtomicInteger;\n\n/**\n * Cassandra 2.x CQL client.\n *\n * See {@code cassandra2/README.md} for details.\n *\n * @author cmatser\n */\npublic class CassandraCQLClient extends DB {\n\n  private static Cluster cluster = 
null;\n  private static Session session = null;\n\n  private static ConsistencyLevel readConsistencyLevel = ConsistencyLevel.ONE;\n  private static ConsistencyLevel writeConsistencyLevel = ConsistencyLevel.ONE;\n\n  public static final String YCSB_KEY = \"y_id\";\n  public static final String KEYSPACE_PROPERTY = \"cassandra.keyspace\";\n  public static final String KEYSPACE_PROPERTY_DEFAULT = \"ycsb\";\n  public static final String USERNAME_PROPERTY = \"cassandra.username\";\n  public static final String PASSWORD_PROPERTY = \"cassandra.password\";\n\n  public static final String HOSTS_PROPERTY = \"hosts\";\n  public static final String PORT_PROPERTY = \"port\";\n  public static final String PORT_PROPERTY_DEFAULT = \"9042\";\n\n  public static final String READ_CONSISTENCY_LEVEL_PROPERTY =\n      \"cassandra.readconsistencylevel\";\n  public static final String READ_CONSISTENCY_LEVEL_PROPERTY_DEFAULT = \"ONE\";\n  public static final String WRITE_CONSISTENCY_LEVEL_PROPERTY =\n      \"cassandra.writeconsistencylevel\";\n  public static final String WRITE_CONSISTENCY_LEVEL_PROPERTY_DEFAULT = \"ONE\";\n\n  public static final String MAX_CONNECTIONS_PROPERTY =\n      \"cassandra.maxconnections\";\n  public static final String CORE_CONNECTIONS_PROPERTY =\n      \"cassandra.coreconnections\";\n  public static final String CONNECT_TIMEOUT_MILLIS_PROPERTY =\n      \"cassandra.connecttimeoutmillis\";\n  public static final String READ_TIMEOUT_MILLIS_PROPERTY =\n      \"cassandra.readtimeoutmillis\";\n\n  public static final String TRACING_PROPERTY = \"cassandra.tracing\";\n  public static final String TRACING_PROPERTY_DEFAULT = \"false\";\n  \n  /**\n   * Count the number of times initialized to teardown on the last\n   * {@link #cleanup()}.\n   */\n  private static final AtomicInteger INIT_COUNT = new AtomicInteger(0);\n\n  private static boolean debug = false;\n\n  private static boolean trace = false;\n  \n  /**\n   * Initialize any state for this DB. 
Called once per DB instance; there is one\n   * DB instance per client thread.\n   */\n  @Override\n  public void init() throws DBException {\n\n    // Keep track of number of calls to init (for later cleanup)\n    INIT_COUNT.incrementAndGet();\n\n    // Synchronized so that we only have a single\n    // cluster/session instance for all the threads.\n    synchronized (INIT_COUNT) {\n\n      // Check if the cluster has already been initialized\n      if (cluster != null) {\n        return;\n      }\n\n      try {\n\n        debug =\n            Boolean.parseBoolean(getProperties().getProperty(\"debug\", \"false\"));\n        trace = Boolean.valueOf(getProperties().getProperty(TRACING_PROPERTY, TRACING_PROPERTY_DEFAULT));\n        \n        String host = getProperties().getProperty(HOSTS_PROPERTY);\n        if (host == null) {\n          throw new DBException(String.format(\n              \"Required property \\\"%s\\\" missing for CassandraCQLClient\",\n              HOSTS_PROPERTY));\n        }\n        String[] hosts = host.split(\",\");\n        String port = getProperties().getProperty(PORT_PROPERTY, PORT_PROPERTY_DEFAULT);\n\n        String username = getProperties().getProperty(USERNAME_PROPERTY);\n        String password = getProperties().getProperty(PASSWORD_PROPERTY);\n\n        String keyspace = getProperties().getProperty(KEYSPACE_PROPERTY,\n            KEYSPACE_PROPERTY_DEFAULT);\n\n        readConsistencyLevel = ConsistencyLevel.valueOf(\n            getProperties().getProperty(READ_CONSISTENCY_LEVEL_PROPERTY,\n                READ_CONSISTENCY_LEVEL_PROPERTY_DEFAULT));\n        writeConsistencyLevel = ConsistencyLevel.valueOf(\n            getProperties().getProperty(WRITE_CONSISTENCY_LEVEL_PROPERTY,\n                WRITE_CONSISTENCY_LEVEL_PROPERTY_DEFAULT));\n\n        if ((username != null) && !username.isEmpty()) {\n          cluster = Cluster.builder().withCredentials(username, password)\n              
.withPort(Integer.valueOf(port)).addContactPoints(hosts).build();\n        } else {\n          cluster = Cluster.builder().withPort(Integer.valueOf(port))\n              .addContactPoints(hosts).build();\n        }\n\n        String maxConnections = getProperties().getProperty(\n            MAX_CONNECTIONS_PROPERTY);\n        if (maxConnections != null) {\n          cluster.getConfiguration().getPoolingOptions()\n              .setMaxConnectionsPerHost(HostDistance.LOCAL,\n              Integer.valueOf(maxConnections));\n        }\n\n        String coreConnections = getProperties().getProperty(\n            CORE_CONNECTIONS_PROPERTY);\n        if (coreConnections != null) {\n          cluster.getConfiguration().getPoolingOptions()\n              .setCoreConnectionsPerHost(HostDistance.LOCAL,\n              Integer.valueOf(coreConnections));\n        }\n\n        String connectTimoutMillis = getProperties().getProperty(\n            CONNECT_TIMEOUT_MILLIS_PROPERTY);\n        if (connectTimoutMillis != null) {\n          cluster.getConfiguration().getSocketOptions()\n              .setConnectTimeoutMillis(Integer.valueOf(connectTimoutMillis));\n        }\n\n        String readTimoutMillis = getProperties().getProperty(\n            READ_TIMEOUT_MILLIS_PROPERTY);\n        if (readTimoutMillis != null) {\n          cluster.getConfiguration().getSocketOptions()\n              .setReadTimeoutMillis(Integer.valueOf(readTimoutMillis));\n        }\n\n        Metadata metadata = cluster.getMetadata();\n        System.err.printf(\"Connected to cluster: %s\\n\",\n            metadata.getClusterName());\n\n        for (Host discoveredHost : metadata.getAllHosts()) {\n          System.out.printf(\"Datacenter: %s; Host: %s; Rack: %s\\n\",\n              discoveredHost.getDatacenter(), discoveredHost.getAddress(),\n              discoveredHost.getRack());\n        }\n\n        session = cluster.connect(keyspace);\n\n      } catch (Exception e) {\n        throw new 
DBException(e);\n      }\n    } // synchronized\n  }\n\n  /**\n   * Cleanup any state for this DB. Called once per DB instance; there is one DB\n   * instance per client thread.\n   */\n  @Override\n  public void cleanup() throws DBException {\n    synchronized (INIT_COUNT) {\n      final int curInitCount = INIT_COUNT.decrementAndGet();\n      if (curInitCount <= 0) {\n        session.close();\n        cluster.close();\n        cluster = null;\n        session = null;\n      }\n      if (curInitCount < 0) {\n        // This should never happen.\n        throw new DBException(\n            String.format(\"initCount is negative: %d\", curInitCount));\n      }\n    }\n  }\n\n  /**\n   * Read a record from the database. Each field/value pair from the result will\n   * be stored in a HashMap.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to read.\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n      Map<String, ByteIterator> result) {\n    try {\n      Statement stmt;\n      Select.Builder selectBuilder;\n\n      if (fields == null) {\n        selectBuilder = QueryBuilder.select().all();\n      } else {\n        selectBuilder = QueryBuilder.select();\n        for (String col : fields) {\n          ((Select.Selection) selectBuilder).column(col);\n        }\n      }\n\n      stmt = selectBuilder.from(table).where(QueryBuilder.eq(YCSB_KEY, key))\n          .limit(1);\n      stmt.setConsistencyLevel(readConsistencyLevel);\n\n      if (debug) {\n        System.out.println(stmt.toString());\n      }\n      if (trace) {\n        stmt.enableTracing();\n      }\n      \n      ResultSet rs = session.execute(stmt);\n\n      if 
(rs.isExhausted()) {\n        return Status.NOT_FOUND;\n      }\n\n      // Should be only 1 row\n      Row row = rs.one();\n      ColumnDefinitions cd = row.getColumnDefinitions();\n\n      for (ColumnDefinitions.Definition def : cd) {\n        ByteBuffer val = row.getBytesUnsafe(def.getName());\n        if (val != null) {\n          result.put(def.getName(), new ByteArrayByteIterator(val.array()));\n        } else {\n          result.put(def.getName(), null);\n        }\n      }\n\n      return Status.OK;\n\n    } catch (Exception e) {\n      e.printStackTrace();\n      System.out.println(\"Error reading key: \" + key);\n      return Status.ERROR;\n    }\n\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. Each field/value\n   * pair from the result will be stored in a HashMap.\n   *\n   * Cassandra CQL uses \"token\" method for range scan which doesn't always yield\n   * intuitive results.\n   *\n   * @param table\n   *          The name of the table\n   * @param startkey\n   *          The record key of the first record to read.\n   * @param recordcount\n   *          The number of records to read\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A Vector of HashMaps, where each HashMap is a set field/value\n   *          pairs for one record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n\n    try {\n      Statement stmt;\n      Select.Builder selectBuilder;\n\n      if (fields == null) {\n        selectBuilder = QueryBuilder.select().all();\n      } else {\n        selectBuilder = QueryBuilder.select();\n        for (String col : fields) {\n          ((Select.Selection) selectBuilder).column(col);\n        }\n      }\n\n      stmt = selectBuilder.from(table);\n\n      // The 
statement builder is not setup right for tokens.\n      // So, we need to build it manually.\n      String initialStmt = stmt.toString();\n      StringBuilder scanStmt = new StringBuilder();\n      scanStmt.append(initialStmt.substring(0, initialStmt.length() - 1));\n      scanStmt.append(\" WHERE \");\n      scanStmt.append(QueryBuilder.token(YCSB_KEY));\n      scanStmt.append(\" >= \");\n      scanStmt.append(\"token('\");\n      scanStmt.append(startkey);\n      scanStmt.append(\"')\");\n      scanStmt.append(\" LIMIT \");\n      scanStmt.append(recordcount);\n\n      stmt = new SimpleStatement(scanStmt.toString());\n      stmt.setConsistencyLevel(readConsistencyLevel);\n\n      if (debug) {\n        System.out.println(stmt.toString());\n      }\n      if (trace) {\n        stmt.enableTracing();\n      }\n      \n      ResultSet rs = session.execute(stmt);\n\n      HashMap<String, ByteIterator> tuple;\n      while (!rs.isExhausted()) {\n        Row row = rs.one();\n        tuple = new HashMap<String, ByteIterator>();\n\n        ColumnDefinitions cd = row.getColumnDefinitions();\n\n        for (ColumnDefinitions.Definition def : cd) {\n          ByteBuffer val = row.getBytesUnsafe(def.getName());\n          if (val != null) {\n            tuple.put(def.getName(), new ByteArrayByteIterator(val.array()));\n          } else {\n            tuple.put(def.getName(), null);\n          }\n        }\n\n        result.add(tuple);\n      }\n\n      return Status.OK;\n\n    } catch (Exception e) {\n      e.printStackTrace();\n      System.out.println(\"Error scanning with startkey: \" + startkey);\n      return Status.ERROR;\n    }\n\n  }\n\n  /**\n   * Update a record in the database. 
Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key, overwriting any existing values with the same field name.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to write.\n   * @param values\n   *          A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status update(String table, String key,\n                       Map<String, ByteIterator> values) {\n    // Insert and updates provide the same functionality\n    return insert(table, key, values);\n  }\n\n  /**\n   * Insert a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to insert.\n   * @param values\n   *          A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status insert(String table, String key,\n      Map<String, ByteIterator> values) {\n\n    try {\n      Insert insertStmt = QueryBuilder.insertInto(table);\n\n      // Add key\n      insertStmt.value(YCSB_KEY, key);\n\n      // Add fields\n      for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n        Object value;\n        ByteIterator byteIterator = entry.getValue();\n        value = byteIterator.toString();\n\n        insertStmt.value(entry.getKey(), value);\n      }\n\n      insertStmt.setConsistencyLevel(writeConsistencyLevel);\n\n      if (debug) {\n        System.out.println(insertStmt.toString());\n      }\n      if (trace) {\n        insertStmt.enableTracing();\n      }\n      \n      session.execute(insertStmt);\n\n      return Status.OK;\n    } catch (Exception 
e) {\n      e.printStackTrace();\n    }\n\n    return Status.ERROR;\n  }\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status delete(String table, String key) {\n\n    try {\n      Statement stmt;\n\n      stmt = QueryBuilder.delete().from(table)\n          .where(QueryBuilder.eq(YCSB_KEY, key));\n      stmt.setConsistencyLevel(writeConsistencyLevel);\n\n      if (debug) {\n        System.out.println(stmt.toString());\n      }\n      if (trace) {\n        stmt.enableTracing();\n      }\n      \n      session.execute(stmt);\n\n      return Status.OK;\n    } catch (Exception e) {\n      e.printStackTrace();\n      System.out.println(\"Error deleting key: \" + key);\n    }\n\n    return Status.ERROR;\n  }\n\n}\n"
  },
  {
    "path": "cassandra/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"http://cassandra.apache.org/\">Cassandra</a> \n * 2.1+ via CQL.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "cassandra/src/test/java/com/yahoo/ycsb/db/CassandraCQLClientTest.java",
    "content": "/**\n * Copyright (c) 2015 YCSB contributors All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport static org.hamcrest.MatcherAssert.assertThat;\nimport static org.hamcrest.Matchers.hasEntry;\nimport static org.hamcrest.Matchers.hasSize;\nimport static org.hamcrest.Matchers.is;\nimport static org.hamcrest.Matchers.notNullValue;\n\nimport com.google.common.collect.Sets;\n\nimport com.datastax.driver.core.ResultSet;\nimport com.datastax.driver.core.Row;\nimport com.datastax.driver.core.Session;\nimport com.datastax.driver.core.Statement;\nimport com.datastax.driver.core.querybuilder.Insert;\nimport com.datastax.driver.core.querybuilder.QueryBuilder;\nimport com.datastax.driver.core.querybuilder.Select;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport com.yahoo.ycsb.measurements.Measurements;\nimport com.yahoo.ycsb.workloads.CoreWorkload;\n\nimport org.cassandraunit.CassandraCQLUnit;\nimport org.cassandraunit.dataset.cql.ClassPathCQLDataSet;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.ClassRule;\nimport org.junit.Test;\n\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\n\n/**\n * Integration tests for the Cassandra client\n */\npublic class CassandraCQLClientTest {\n  // Change the default Cassandra 
timeout from 10s to 120s for slow CI machines\n  private final static long timeout = 120000L;\n\n  private final static String TABLE = \"usertable\";\n  private final static String HOST = \"localhost\";\n  private final static int PORT = 9142;\n  private final static String DEFAULT_ROW_KEY = \"user1\";\n\n  private CassandraCQLClient client;\n  private Session session;\n\n  @ClassRule\n  public static CassandraCQLUnit cassandraUnit = new CassandraCQLUnit(\n    new ClassPathCQLDataSet(\"ycsb.cql\", \"ycsb\"), null, timeout);\n\n  @Before\n  public void setUp() throws Exception {\n    session = cassandraUnit.getSession();\n\n    Properties p = new Properties();\n    p.setProperty(\"hosts\", HOST);\n    p.setProperty(\"port\", Integer.toString(PORT));\n    p.setProperty(\"table\", TABLE);\n\n    Measurements.setProperties(p);\n    final CoreWorkload workload = new CoreWorkload();\n    workload.init(p);\n    client = new CassandraCQLClient();\n    client.setProperties(p);\n    client.init();\n  }\n\n  @After\n  public void tearDownClient() throws Exception {\n    if (client != null) {\n      client.cleanup();\n    }\n    client = null;\n  }\n\n  @After\n  public void clearTable() throws Exception {\n    // Clear the table so that each test starts fresh.\n    final Statement truncate = QueryBuilder.truncate(TABLE);\n    if (cassandraUnit != null) {\n      cassandraUnit.getSession().execute(truncate);\n    }\n  }\n\n  @Test\n  public void testReadMissingRow() throws Exception {\n    final HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\n    final Status status = client.read(TABLE, \"Missing row\", null, result);\n    assertThat(result.size(), is(0));\n    assertThat(status, is(Status.NOT_FOUND));\n  }\n\n  private void insertRow() {\n    final String rowKey = DEFAULT_ROW_KEY;\n    Insert insertStmt = QueryBuilder.insertInto(TABLE);\n    insertStmt.value(CassandraCQLClient.YCSB_KEY, rowKey);\n\n    insertStmt.value(\"field0\", \"value1\");\n    
insertStmt.value(\"field1\", \"value2\");\n    session.execute(insertStmt);\n  }\n\n  @Test\n  public void testRead() throws Exception {\n    insertRow();\n\n    final HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\n    final Status status = client.read(TABLE, DEFAULT_ROW_KEY, null, result);\n    assertThat(status, is(Status.OK));\n    assertThat(result.entrySet(), hasSize(11));\n    assertThat(result, hasEntry(\"field2\", null));\n\n    final HashMap<String, String> strResult = new HashMap<String, String>();\n    for (final Map.Entry<String, ByteIterator> e : result.entrySet()) {\n      if (e.getValue() != null) {\n        strResult.put(e.getKey(), e.getValue().toString());\n      }\n    }\n    assertThat(strResult, hasEntry(CassandraCQLClient.YCSB_KEY, DEFAULT_ROW_KEY));\n    assertThat(strResult, hasEntry(\"field0\", \"value1\"));\n    assertThat(strResult, hasEntry(\"field1\", \"value2\"));\n  }\n\n  @Test\n  public void testReadSingleColumn() throws Exception {\n    insertRow();\n    final HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\n    final Set<String> fields = Sets.newHashSet(\"field1\");\n    final Status status = client.read(TABLE, DEFAULT_ROW_KEY, fields, result);\n    assertThat(status, is(Status.OK));\n    assertThat(result.entrySet(), hasSize(1));\n    final Map<String, String> strResult = StringByteIterator.getStringMap(result);\n    assertThat(strResult, hasEntry(\"field1\", \"value2\"));\n  }\n\n  @Test\n  public void testUpdate() throws Exception {\n    final String key = \"key\";\n    final Map<String, String> input = new HashMap<String, String>();\n    input.put(\"field0\", \"value1\");\n    input.put(\"field1\", \"value2\");\n\n    final Status status = client.insert(TABLE, key, StringByteIterator.getByteIteratorMap(input));\n    assertThat(status, is(Status.OK));\n\n    // Verify result\n    final Select selectStmt =\n        QueryBuilder.select(\"field0\", \"field1\")\n            
.from(TABLE)\n            .where(QueryBuilder.eq(CassandraCQLClient.YCSB_KEY, key))\n            .limit(1);\n\n    final ResultSet rs = session.execute(selectStmt);\n    final Row row = rs.one();\n    assertThat(row, notNullValue());\n    assertThat(rs.isExhausted(), is(true));\n    assertThat(row.getString(\"field0\"), is(\"value1\"));\n    assertThat(row.getString(\"field1\"), is(\"value2\"));\n  }\n}\n"
  },
  {
    "path": "cassandra/src/test/resources/ycsb.cql",
    "content": "/**\n * Copyright (c) 2015 YCSB Contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\nCREATE TABLE usertable (\n  y_id varchar primary key,\n  field0 varchar,\n  field1 varchar,\n  field2 varchar,\n  field3 varchar,\n  field4 varchar,\n  field5 varchar,\n  field6 varchar,\n  field7 varchar,\n  field8 varchar,\n  field9 varchar);\n"
  },
  {
    "path": "checkstyle.xml",
    "content": "<?xml version=\"1.0\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<!DOCTYPE module PUBLIC\n    \"-//Puppy Crawl//DTD Check Configuration 1.2//EN\"\n    \"http://www.puppycrawl.com/dtds/configuration_1_2.dtd\">\n\n<!--\n\n  Checkstyle configuration for Hadoop that is based on the sun_checks.xml file\n  that is bundled with Checkstyle and includes checks for:\n\n    - the Java Language Specification at\n      http://java.sun.com/docs/books/jls/second_edition/html/index.html\n\n    - the Sun Code Conventions at http://java.sun.com/docs/codeconv/\n\n    - the Javadoc guidelines at\n      http://java.sun.com/j2se/javadoc/writingdoccomments/index.html\n\n    - the JDK Api documentation http://java.sun.com/j2se/docs/api/index.html\n\n    - some best practices\n\n  Checkstyle is very configurable. Be sure to read the documentation at\n  http://checkstyle.sf.net (or in your downloaded distribution).\n\n  Most Checks are configurable, be sure to consult the documentation.\n\n  To completely disable a check, just comment it out or delete it from the file.\n\n  Finally, it is worth reading the documentation.\n\n-->\n\n<module name=\"Checker\">\n\n    <!-- Checks that a package.html file exists for each package.     
-->\n    <!-- See http://checkstyle.sf.net/config_javadoc.html#PackageHtml -->\n    <module name=\"JavadocPackage\"/>\n\n    <!-- Checks whether files end with a new line.                        -->\n    <!-- See http://checkstyle.sf.net/config_misc.html#NewlineAtEndOfFile -->\n    <!-- module name=\"NewlineAtEndOfFile\"/-->\n\n    <!-- Checks that property files contain the same keys.         -->\n    <!-- See http://checkstyle.sf.net/config_misc.html#Translation -->\n    <module name=\"Translation\"/>\n\n    <module name=\"FileLength\"/>\n    <module name=\"FileTabCharacter\"/>\n\n    <module name=\"TreeWalker\">\n\n        <!-- Checks for Javadoc comments.                     -->\n        <!-- See http://checkstyle.sf.net/config_javadoc.html -->\n        <module name=\"JavadocType\">\n          <property name=\"scope\" value=\"public\"/>\n          <property name=\"allowMissingParamTags\" value=\"true\"/>\n        </module>\n        <module name=\"JavadocStyle\"/>\n\n        <!-- Checks for Naming Conventions.                  -->\n        <!-- See http://checkstyle.sf.net/config_naming.html -->\n        <module name=\"ConstantName\"/>\n        <module name=\"LocalFinalVariableName\"/>\n        <module name=\"LocalVariableName\"/>\n        <module name=\"MemberName\"/>\n        <module name=\"MethodName\"/>\n        <module name=\"PackageName\"/>\n        <module name=\"ParameterName\"/>\n        <module name=\"StaticVariableName\"/>\n        <module name=\"TypeName\"/>\n\n\n        <!-- Checks for Headers                                -->\n        <!-- See http://checkstyle.sf.net/config_header.html   -->\n        <!-- <module name=\"Header\">                            -->\n            <!-- The follow property value demonstrates the ability     -->\n            <!-- to have access to ANT properties. In this case it uses -->\n            <!-- the ${basedir} property to allow Checkstyle to be run  -->\n            <!-- from any directory within a project. 
See property      -->\n            <!-- expansion,                                             -->\n            <!-- http://checkstyle.sf.net/config.html#properties        -->\n            <!-- <property                                              -->\n            <!--     name=\"headerFile\"                                  -->\n            <!--     value=\"${basedir}/java.header\"/>                   -->\n        <!-- </module> -->\n\n        <!-- Following interprets the header file as regular expressions. -->\n        <!-- <module name=\"RegexpHeader\"/>                                -->\n\n\n        <!-- Checks for imports                              -->\n        <!-- See http://checkstyle.sf.net/config_import.html -->\n        <module name=\"IllegalImport\"/> <!-- defaults to sun.* packages -->\n        <module name=\"RedundantImport\"/>\n        <module name=\"UnusedImports\"/>\n\n\n        <!-- Checks for Size Violations.                    -->\n        <!-- See http://checkstyle.sf.net/config_sizes.html -->\n        <module name=\"LineLength\">\n            <property name=\"max\" value=\"120\"/>\n        </module>\n        <module name=\"MethodLength\"/>\n        <module name=\"ParameterNumber\"/>\n\n\n        <!-- Checks for whitespace                               -->\n        <!-- See http://checkstyle.sf.net/config_whitespace.html -->\n        <module name=\"EmptyForIteratorPad\"/>\n        <module name=\"MethodParamPad\"/>\n        <module name=\"NoWhitespaceAfter\"/>\n        <module name=\"NoWhitespaceBefore\"/>\n        <module name=\"ParenPad\"/>\n        <module name=\"TypecastParenPad\"/>\n        <module name=\"WhitespaceAfter\">\n\t    \t<property name=\"tokens\" value=\"COMMA, SEMI\"/>\n\t\t</module>\n\n\n        <!-- Modifier Checks                                    -->\n        <!-- See http://checkstyle.sf.net/config_modifiers.html -->\n        <module name=\"ModifierOrder\"/>\n        <module name=\"RedundantModifier\"/>\n\n\n        
<!-- Checks for blocks. You know, those {}'s         -->\n        <!-- See http://checkstyle.sf.net/config_blocks.html -->\n        <module name=\"AvoidNestedBlocks\"/>\n        <module name=\"EmptyBlock\">\n          <property name=\"option\" value=\"text\"/>\n        </module>\n        <module name=\"LeftCurly\"/>\n        <module name=\"NeedBraces\"/>\n        <module name=\"RightCurly\"/>\n\n\n        <!-- Checks for common coding problems               -->\n        <!-- See http://checkstyle.sf.net/config_coding.html -->\n        <!-- module name=\"AvoidInlineConditionals\"/-->\n        <module name=\"EmptyStatement\"/>\n        <module name=\"EqualsHashCode\"/>\n        <module name=\"HiddenField\">\n          <property name=\"ignoreConstructorParameter\" value=\"true\"/>\n        </module>\n        <module name=\"IllegalInstantiation\"/>\n        <module name=\"InnerAssignment\"/>\n        <module name=\"MissingSwitchDefault\"/>\n        <module name=\"SimplifyBooleanExpression\"/>\n        <module name=\"SimplifyBooleanReturn\"/>\n\n        <!-- Checks for class design                         -->\n        <!-- See http://checkstyle.sf.net/config_design.html -->\n        <module name=\"FinalClass\"/>\n        <module name=\"HideUtilityClassConstructor\"/>\n        <module name=\"InterfaceIsType\"/>\n        <module name=\"VisibilityModifier\">\n          <property name=\"protectedAllowed\" value=\"true\"/>\n        </module>\n\n\n        <!-- Miscellaneous other checks.                   -->\n        <!-- See http://checkstyle.sf.net/config_misc.html -->\n        <module name=\"ArrayTypeStyle\"/>\n        <module name=\"Indentation\">\n            <property name=\"basicOffset\" value=\"2\" />\n            <property name=\"caseIndent\" value=\"0\" />\n        </module> \n        <!-- <module name=\"TodoComment\"/> -->\n        <module name=\"UpperEll\"/>\n\n    </module>\n\n</module>\n"
  },
  {
    "path": "cloudspanner/README.md",
    "content": "<!--\nCopyright (c) 2017 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# Cloud Spanner Driver for YCSB\n\nThis driver provides a YCSB workload binding for Google's Cloud Spanner database, the first relational database service that is both strongly consistent and horizontally scalable. This binding is implemented using the official Java client library for Cloud Spanner which uses GRPC for making calls.\n\nFor best results, we strongly recommend running the benchmark from a Google Compute Engine (GCE) VM.\n\n## Running a Workload\n\nWe recommend reading the [general guidelines](https://github.com/brianfrankcooper/YCSB/wiki/Running-a-Workload) in the YCSB documentation, and following the Cloud Spanner specific steps below.\n\n### 1. Set up Cloud Spanner with the Expected Schema\n\nFollow the [Quickstart instructions](https://cloud.google.com/spanner/docs/quickstart-console) in the Cloud Spanner documentation to set up a Cloud Spanner instance, and create a database with the following schema:\n\n```\nCREATE TABLE usertable (\n  id STRING(MAX),\n  field0 STRING(MAX),\n  field1 STRING(MAX),\n  field2 STRING(MAX),\n  field3 STRING(MAX),\n  field4 STRING(MAX),\n  field5 STRING(MAX),\n  field6 STRING(MAX),\n  field7 STRING(MAX),\n  field8 STRING(MAX),\n  field9 STRING(MAX),\n) PRIMARY KEY(id);\n```\nMake note of your project ID, instance ID, and database name.\n\n### 2. 
Set Up Your Environment and Auth\n\nFollow the [set up instructions](https://cloud.google.com/spanner/docs/getting-started/set-up) in the Cloud Spanner documentation to set up your environment and authentication. When not running on a GCE VM, make sure you run `gcloud auth application-default login`.\n\n### 3. Edit Properties\n\nIn your YCSB root directory, edit `cloudspanner/conf/cloudspanner.properties` and specify your project ID, instance ID, and database name.\n\n### 4. Run the YCSB Shell\n\nStart the YCSB shell connected to Cloud Spanner using the following command:\n\n```\n./bin/ycsb shell cloudspanner -P cloudspanner/conf/cloudspanner.properties\n```\n\nYou can use the `insert`, `read`, `update`, `scan`, and `delete` commands in the shell to experiment with your database and make sure the connection works. For example, try the following:\n\n```\ninsert name field0=adam\nread name field0\ndelete name\n```\n\n### 5. Load the Data\n\nYou can load, say, 10 GB of data into your YCSB database using the following command:\n\n```\n./bin/ycsb load cloudspanner -P cloudspanner/conf/cloudspanner.properties -P workloads/workloada -p recordcount=10000000 -p cloudspanner.batchinserts=1000 -threads 10 -s\n```\n\nWe recommend batching insertions so as to reach ~1 MB of data per commit request; this is controlled via the `cloudspanner.batchinserts` parameter which we recommend setting to `1000` during data load.\n\nIf you wish to load a large database, you can run YCSB on multiple client VMs in parallel and use the `insertstart` and `insertcount` parameters to distribute the load as described [here](https://github.com/brianfrankcooper/YCSB/wiki/Running-a-Workload-in-Parallel). 
In this case, we recommend the following:\n\n* Use ordered inserts by specifying the YCSB parameter `insertorder=ordered`;\n* Use zero-padding so that ordered inserts are actually lexicographically ordered; the option `zeropadding = 12` is set in the default `cloudspanner.properties` file;\n* Split the key range evenly between client VMs;\n* Use few threads on each client VM, so that each individual commit request contains keys which are (close to) consecutive, and would thus likely address a single split; this also helps avoid overloading the servers.\n\nThe idea is that we have a number of 'write heads' which are all writing to different parts of the database (and thus talking to different servers), but each individual head is writing its own data (more or less) in order. See the [best practices page](https://cloud.google.com/spanner/docs/best-practices#loading_data) for further details.\n\n### 6. Run a Workload\n\nAfter data load, you can run a workload, say, workload B, using the following command:\n\n```\n./bin/ycsb run cloudspanner -P cloudspanner/conf/cloudspanner.properties -P workloads/workloadb -p recordcount=10000000 -p operationcount=1000000 -threads 10 -s \n```\n\nMake sure that you use the same `insertorder` (i.e. `ordered` or `hashed`) and `zeropadding` as specified during the data load. Further details about running workloads are given in the [YCSB wiki pages](https://github.com/brianfrankcooper/YCSB/wiki/Running-a-Workload).\n\n## Configuration Options\n\nIn addition to the standard YCSB parameters, the following Cloud Spanner specific options can be configured using the `-p` parameter or in `cloudspanner/conf/cloudspanner.properties`.\n\n* `cloudspanner.database`: (Required) The name of the database created in the instance, e.g. `ycsb-database`.\n* `cloudspanner.instance`: (Required) The ID of the Cloud Spanner instance, e.g. `ycsb-instance`.\n* `cloudspanner.project`: The ID of the project containing the Cloud Spanner instance, e.g. 
`myproject`. This is not strictly required and can often be automatically inferred from the environment.\n* `cloudspanner.readmode`: Allows choosing between the `read` and `query` interface of Cloud Spanner. The default is `query`.\n* `cloudspanner.batchinserts`: The number of inserts to batch into a single commit request. The default value is 1 which means no batching is done. Recommended value during data load is 1000.\n* `cloudspanner.boundedstaleness`: Number of seconds we allow reads to be stale for. Set to 0 for strong reads (default). For performance gains, this should be set to 10 seconds.\n"
  },
  {
    "path": "cloudspanner/conf/cloudspanner.properties",
    "content": "# Copyright (c) 2017 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# Core YCSB properties.\ntable = usertable\nzeropadding = 12\n\n# Cloud Spanner properties\ncloudspanner.instance = ycsb-instance\ncloudspanner.database = ycsb-database\n\ncloudspanner.readmode = query\ncloudspanner.boundedstaleness = 0\ncloudspanner.batchinserts = 1\n"
  },
  {
    "path": "cloudspanner/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2017 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent/</relativePath>\n  </parent>\n\n  <artifactId>cloudspanner-binding</artifactId>\n  <name>Cloud Spanner DB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.google.cloud</groupId>\n      <artifactId>google-cloud-spanner</artifactId>\n      <version>${cloudspanner.version}</version>\n      <exclusions>\n        <exclusion> <!-- exclude an old version of Guava -->\n          <groupId>com.google.guava</groupId>\n          <artifactId>guava-jdk5</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    \n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    \n  </dependencies>\n</project>\n"
  },
  {
    "path": "cloudspanner/src/main/java/com/yahoo/ycsb/db/cloudspanner/CloudSpannerClient.java",
    "content": "/**\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db.cloudspanner;\n\nimport com.google.common.base.Joiner;\nimport com.google.cloud.spanner.DatabaseId;\nimport com.google.cloud.spanner.DatabaseClient;\nimport com.google.cloud.spanner.Key;\nimport com.google.cloud.spanner.KeySet;\nimport com.google.cloud.spanner.KeyRange;\nimport com.google.cloud.spanner.Mutation;\nimport com.google.cloud.spanner.Options;\nimport com.google.cloud.spanner.ResultSet;\nimport com.google.cloud.spanner.SessionPoolOptions;\nimport com.google.cloud.spanner.Spanner;\nimport com.google.cloud.spanner.SpannerOptions;\nimport com.google.cloud.spanner.Statement;\nimport com.google.cloud.spanner.Struct;\nimport com.google.cloud.spanner.StructReader;\nimport com.google.cloud.spanner.TimestampBound;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.Client;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport com.yahoo.ycsb.workloads.CoreWorkload;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.logging.Level;\nimport java.util.logging.Logger;\nimport java.util.concurrent.TimeUnit;\n\n/**\n * 
YCSB Client for Google's Cloud Spanner.\n */\npublic class CloudSpannerClient extends DB {\n\n  /**\n   * The names of properties which can be specified in the config files and flags.\n   */\n  public static final class CloudSpannerProperties {\n    private CloudSpannerProperties() {}\n\n    /**\n     * The Cloud Spanner database name to use when running the YCSB benchmark, e.g. 'ycsb-database'.\n     */\n    static final String DATABASE = \"cloudspanner.database\";\n    /**\n     * The Cloud Spanner instance ID to use when running the YCSB benchmark, e.g. 'ycsb-instance'.\n     */\n    static final String INSTANCE = \"cloudspanner.instance\";\n    /**\n     * Choose between 'read' and 'query'. Affects both read() and scan() operations.\n     */\n    static final String READ_MODE = \"cloudspanner.readmode\";\n    /**\n     * The number of inserts to batch during the bulk loading phase. The default value is 1, which means no batching\n     * is done. Recommended value during data load is 1000.\n     */\n    static final String BATCH_INSERTS = \"cloudspanner.batchinserts\";\n    /**\n     * Number of seconds we allow reads to be stale for. Set to 0 for strong reads (default).\n     * For performance gains, this should be set to 10 seconds.\n     */\n    static final String BOUNDED_STALENESS = \"cloudspanner.boundedstaleness\";\n\n    // The properties below usually do not need to be set explicitly.\n\n    /**\n     * The Cloud Spanner project ID to use when running the YCSB benchmark, e.g. 'myproject'. This is not strictly\n     * necessary and can often be inferred from the environment.\n     */\n    static final String PROJECT = \"cloudspanner.project\";\n    /**\n     * The Cloud Spanner host name to use in the YCSB run.\n     */\n    static final String HOST = \"cloudspanner.host\";\n    /**\n     * Number of Cloud Spanner client channels to use. 
It's recommended to leave this to be the default value.\n     */\n    static final String NUM_CHANNELS = \"cloudspanner.channels\";\n  }\n\n  private static int fieldCount;\n\n  private static boolean queriesForReads;\n\n  private static int batchInserts;\n\n  private static TimestampBound timestampBound;\n\n  private static String standardQuery;\n\n  private static String standardScan;\n\n  private static final ArrayList<String> STANDARD_FIELDS = new ArrayList<>();\n\n  private static final String PRIMARY_KEY_COLUMN = \"id\";\n\n  private static final Logger LOGGER = Logger.getLogger(CloudSpannerClient.class.getName());\n\n  // Static lock for the class.\n  private static final Object CLASS_LOCK = new Object();\n\n  // Single Spanner client per process.\n  private static Spanner spanner = null;\n\n  // Single database client per process.\n  private static DatabaseClient dbClient = null;\n\n  // Buffered mutations on a per object/thread basis for batch inserts.\n  // Note that we have a separate CloudSpannerClient object per thread.\n  private final ArrayList<Mutation> bufferedMutations = new ArrayList<>();\n\n  private static void constructStandardQueriesAndFields(Properties properties) {\n    String table = properties.getProperty(CoreWorkload.TABLENAME_PROPERTY, CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n    standardQuery = new StringBuilder()\n        .append(\"SELECT * FROM \").append(table).append(\" WHERE id=@key\").toString();\n    standardScan = new StringBuilder()\n        .append(\"SELECT * FROM \").append(table).append(\" WHERE id>=@startKey LIMIT @count\").toString();\n    for (int i = 0; i < fieldCount; i++) {\n      STANDARD_FIELDS.add(\"field\" + i);\n    }\n  }\n\n  private static Spanner getSpanner(Properties properties, String host, String project) {\n    if (spanner != null) {\n      return spanner;\n    }\n    String numChannels = properties.getProperty(CloudSpannerProperties.NUM_CHANNELS);\n    int numThreads = 
Integer.parseInt(properties.getProperty(Client.THREAD_COUNT_PROPERTY, \"1\"));\n    SpannerOptions.Builder optionsBuilder = SpannerOptions.newBuilder()\n        .setSessionPoolOption(SessionPoolOptions.newBuilder()\n            .setMinSessions(numThreads)\n            // Since we have no read-write transactions, we can set the write session fraction to 0.\n            .setWriteSessionsFraction(0)\n            .build());\n    if (host != null) {\n      optionsBuilder.setHost(host);\n    }\n    if (project != null) {\n      optionsBuilder.setProjectId(project);\n    }\n    if (numChannels != null) {\n      optionsBuilder.setNumChannels(Integer.parseInt(numChannels));\n    }\n    spanner = optionsBuilder.build().getService();\n    Runtime.getRuntime().addShutdownHook(new Thread(\"spannerShutdown\") {\n        @Override\n        public void run() {\n          spanner.close();\n        }\n      });\n    return spanner;\n  }\n\n  @Override\n  public void init() throws DBException {\n    synchronized (CLASS_LOCK) {\n      if (dbClient != null) {\n        return;\n      }\n      Properties properties = getProperties();\n      String host = properties.getProperty(CloudSpannerProperties.HOST);\n      String project = properties.getProperty(CloudSpannerProperties.PROJECT);\n      String instance = properties.getProperty(CloudSpannerProperties.INSTANCE, \"ycsb-instance\");\n      String database = properties.getProperty(CloudSpannerProperties.DATABASE, \"ycsb-database\");\n\n      fieldCount = Integer.parseInt(properties.getProperty(\n          CoreWorkload.FIELD_COUNT_PROPERTY, CoreWorkload.FIELD_COUNT_PROPERTY_DEFAULT));\n      queriesForReads = properties.getProperty(CloudSpannerProperties.READ_MODE, \"query\").equals(\"query\");\n      batchInserts = Integer.parseInt(properties.getProperty(CloudSpannerProperties.BATCH_INSERTS, \"1\"));\n      constructStandardQueriesAndFields(properties);\n\n      int boundedStalenessSeconds = Integer.parseInt(properties.getProperty(\n     
     CloudSpannerProperties.BOUNDED_STALENESS, \"0\"));\n      timestampBound = (boundedStalenessSeconds <= 0) ?\n          TimestampBound.strong() : TimestampBound.ofMaxStaleness(boundedStalenessSeconds, TimeUnit.SECONDS);\n\n      try {\n        spanner = getSpanner(properties, host, project);\n        if (project == null) {\n          project = spanner.getOptions().getProjectId();\n        }\n        dbClient = spanner.getDatabaseClient(DatabaseId.of(project, instance, database));\n      } catch (Exception e) {\n        LOGGER.log(Level.SEVERE, \"init()\", e);\n        throw new DBException(e);\n      }\n\n      LOGGER.log(Level.INFO, new StringBuilder()\n          .append(\"\\nHost: \").append(spanner.getOptions().getHost())\n          .append(\"\\nProject: \").append(project)\n          .append(\"\\nInstance: \").append(instance)\n          .append(\"\\nDatabase: \").append(database)\n          .append(\"\\nUsing queries for reads: \").append(queriesForReads)\n          .append(\"\\nBatching inserts: \").append(batchInserts)\n          .append(\"\\nBounded staleness seconds: \").append(boundedStalenessSeconds)\n          .toString());\n    }\n  }\n\n  private Status readUsingQuery(\n      String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    Statement query;\n    Iterable<String> columns = fields == null ? 
STANDARD_FIELDS : fields;\n    if (fields == null || fields.size() == fieldCount) {\n      query = Statement.newBuilder(standardQuery).bind(\"key\").to(key).build();\n    } else {\n      Joiner joiner = Joiner.on(',');\n      query = Statement.newBuilder(\"SELECT \")\n          .append(joiner.join(fields))\n          .append(\" FROM \")\n          .append(table)\n          .append(\" WHERE id=@key\")\n          .bind(\"key\").to(key)\n          .build();\n    }\n    try (ResultSet resultSet = dbClient.singleUse(timestampBound).executeQuery(query)) {\n      resultSet.next();\n      decodeStruct(columns, resultSet, result);\n      if (resultSet.next()) {\n        throw new Exception(\"Expected exactly one row for each read.\");\n      }\n\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.log(Level.INFO, \"readUsingQuery()\", e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status read(\n      String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    if (queriesForReads) {\n      return readUsingQuery(table, key, fields, result);\n    }\n    Iterable<String> columns = fields == null ? STANDARD_FIELDS : fields;\n    try {\n      Struct row = dbClient.singleUse(timestampBound).readRow(table, Key.of(key), columns);\n      decodeStruct(columns, row, result);\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.log(Level.INFO, \"read()\", e);\n      return Status.ERROR;\n    }\n  }\n\n  private Status scanUsingQuery(\n      String table, String startKey, int recordCount, Set<String> fields,\n      Vector<HashMap<String, ByteIterator>> result) {\n    Iterable<String> columns = fields == null ? 
STANDARD_FIELDS : fields;\n    Statement query;\n    if (fields == null || fields.size() == fieldCount) {\n      query = Statement.newBuilder(standardScan).bind(\"startKey\").to(startKey).bind(\"count\").to(recordCount).build();\n    } else {\n      Joiner joiner = Joiner.on(',');\n      query = Statement.newBuilder(\"SELECT \")\n          .append(joiner.join(fields))\n          .append(\" FROM \")\n          .append(table)\n          .append(\" WHERE id>=@startKey LIMIT @count\")\n          .bind(\"startKey\").to(startKey)\n          .bind(\"count\").to(recordCount)\n          .build();\n    }\n    try (ResultSet resultSet = dbClient.singleUse(timestampBound).executeQuery(query)) {\n      while (resultSet.next()) {\n        HashMap<String, ByteIterator> row = new HashMap<>();\n        decodeStruct(columns, resultSet, row);\n        result.add(row);\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.log(Level.INFO, \"scanUsingQuery()\", e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status scan(\n      String table, String startKey, int recordCount, Set<String> fields,\n      Vector<HashMap<String, ByteIterator>> result) {\n    if (queriesForReads) {\n      return scanUsingQuery(table, startKey, recordCount, fields, result);\n    }\n    Iterable<String> columns = fields == null ? 
STANDARD_FIELDS : fields;\n    KeySet keySet =\n        KeySet.newBuilder().addRange(KeyRange.closedClosed(Key.of(startKey), Key.of())).build();\n    try (ResultSet resultSet = dbClient.singleUse(timestampBound)\n                                       .read(table, keySet, columns, Options.limit(recordCount))) {\n      while (resultSet.next()) {\n        HashMap<String, ByteIterator> row = new HashMap<>();\n        decodeStruct(columns, resultSet, row);\n        result.add(row);\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.log(Level.INFO, \"scan()\", e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    Mutation.WriteBuilder m = Mutation.newInsertOrUpdateBuilder(table);\n    m.set(PRIMARY_KEY_COLUMN).to(key);\n    for (Map.Entry<String, ByteIterator> e : values.entrySet()) {\n      m.set(e.getKey()).to(e.getValue().toString());\n    }\n    try {\n      dbClient.writeAtLeastOnce(Arrays.asList(m.build()));\n    } catch (Exception e) {\n      LOGGER.log(Level.INFO, \"update()\", e);\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    if (bufferedMutations.size() < batchInserts) {\n      Mutation.WriteBuilder m = Mutation.newInsertOrUpdateBuilder(table);\n      m.set(PRIMARY_KEY_COLUMN).to(key);\n      for (Map.Entry<String, ByteIterator> e : values.entrySet()) {\n        m.set(e.getKey()).to(e.getValue().toString());\n      }\n      bufferedMutations.add(m.build());\n    } else {\n      LOGGER.log(Level.INFO, \"Limit of cached mutations reached. The given mutation with key \" + key +\n          \" is ignored. 
Is this a retry?\");\n    }\n    if (bufferedMutations.size() < batchInserts) {\n      return Status.BATCHED_OK;\n    }\n    try {\n      dbClient.writeAtLeastOnce(bufferedMutations);\n      bufferedMutations.clear();\n    } catch (Exception e) {\n      LOGGER.log(Level.INFO, \"insert()\", e);\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public void cleanup() {\n    try {\n      if (bufferedMutations.size() > 0) {\n        dbClient.writeAtLeastOnce(bufferedMutations);\n        bufferedMutations.clear();\n      }\n    } catch (Exception e) {\n      LOGGER.log(Level.INFO, \"cleanup()\", e);\n    }\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      dbClient.writeAtLeastOnce(Arrays.asList(Mutation.delete(table, Key.of(key))));\n    } catch (Exception e) {\n      LOGGER.log(Level.INFO, \"delete()\", e);\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n\n  private static void decodeStruct(\n      Iterable<String> columns, StructReader structReader, Map<String, ByteIterator> result) {\n    for (String col : columns) {\n      result.put(col, new StringByteIterator(structReader.getString(col)));\n    }\n  }\n}\n"
  },
  {
    "path": "cloudspanner/src/main/java/com/yahoo/ycsb/db/cloudspanner/package-info.java",
    "content": "/*\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for Google's <a href=\"https://cloud.google.com/spanner/\">\n * Cloud Spanner</a>.\n */\npackage com.yahoo.ycsb.db.cloudspanner;\n"
  },
  {
    "path": "core/CHANGES.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\nWhen used as a latency under load benchmark YCSB in it's original form suffers from\nCoordinated Omission[1] and related measurement issue:\n\n* Load is controlled by response time\n* Measurement does not account for missing time\n* Measurement starts at beginning of request rather than at intended beginning\n* Measurement is limited in scope as the histogram does not provide data on overflow values\n\nTo provide a minimal correction patch the following were implemented:\n\n1. Replace internal histogram implementation with HdrHistogram[2]:\nHdrHistogram offers a dynamic range of measurement at a given precision and will\nimprove the fidelity of reporting. It allows capturing a much wider range of latencies.\nHdrHistogram also supports compressed loss-less serialization which enable capturing\nsnapshot histograms from which lower resolution histograms can be constructed for plotting\nlatency over time. Snapshot interval histograms are serialized on status reporting which\nmust be enabled using the '-s' option.\n \n2. 
Track intended operation start and report latencies from that point in time:\nAssuming the benchmark sets a target schedule of execution in which every operation\nis supposed to happen at a given time the benchmark should measure the latency between\nintended start time and operation completion.\nThis required the introduction of a new measurement point and inevitably\nincludes measuring some of the internal preparation steps of the load generator.\nThese overhead should be negligible in the context of a network hop, but could\nbe corrected for by estimating the load-generator overheads (e.g. by measuring a\nno-op DB or by measuring the setup time for an operation and deducting that from total).\nThis intended measurement point is only used when there is a target load (specified by\nthe -target paramaeter)\n\nThis branch supports the following new options:\n\n* -p measurementtype=[histogram|hdrhistogram|hdrhistogram+histogram|timeseries] (default=histogram)\nThe new measurement types are hdrhistogram and hdrhistogram+histogram. Default is still\nhistogram, which is the old histogram. Ultimately we would remove the old measurement types\nand use only HdrHistogram but the old measurement is left in there for comparison sake.\n\n* -p measurement.interval=[op|intended|both] (default=op)\nThis new option deferentiates between measured intervals and adds the intended interval(as described)\nabove, and the option to record both the op and intended for comparison.\n\n* -p hdrhistogram.fileoutput=[true|false] (default=false)\nThis new option will enable periodical writes of the interval histogram into an output file. 
The path can be set using '-p hdrhistogram.output.path=<PATH>'.\n\nExample parameters:\n-target 1000 -s -p workload=com.yahoo.ycsb.workloads.CoreWorkload -p basicdb.verbose=false -p basicdb.simulatedelay=4 -p measurement.interval=both -p measurementtype=hdrhistogram -p hdrhistogram.fileoutput=true -p maxexecutiontime=60\n\nFurther changes made:\n\n* -p status.interval=<number of seconds> (default=10)\nControls the number of seconds between status reports and therefore between HdrHistogram snapshots reported.\n\n* -p basicdb.randomizedelay=[true|false] (default=true)\nControls weather the delay simulated by the mock DB is uniformly random or not.\n\nFurther suggestions:\n\n1. Correction load control: currently after a pause the load generator will do\noperations back to back to catchup, this leads to a flat out throughput mode\nof testing as opposed to controlled load.\n\n2. Move to async model: Scenarios where Ops have no dependency could delegate the\nOp execution to a threadpool and thus separate the request rate control from the\nsynchronous execution of Ops. Measurement would start on queuing for execution.\n\n1. https://groups.google.com/forum/#!msg/mechanical-sympathy/icNZJejUHfE/BfDekfBEs_sJ\n2. https://github.com/HdrHistogram/HdrHistogram"
  },
  {
    "path": "core/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>root</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n  </parent>\n  \n  <artifactId>core</artifactId>\n  <name>Core YCSB</name>\n  <packaging>jar</packaging>\n\n  <properties>\n     <jackson.api.version>1.9.4</jackson.api.version>\n  </properties>\n\n  <dependencies>\t\n    <dependency>\n      <groupId>org.apache.htrace</groupId>\n      <artifactId>htrace-core4</artifactId>\n      <version>4.1.0-incubating</version>\n    </dependency>\n    <dependency>\n      <groupId>org.codehaus.jackson</groupId>\n      <artifactId>jackson-mapper-asl</artifactId>\n      <version>${jackson.api.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.codehaus.jackson</groupId>\n      <artifactId>jackson-core-asl</artifactId>\n      <version>${jackson.api.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.testng</groupId>\n      <artifactId>testng</artifactId>\n      <version>6.1.1</version>\n      <scope>test</scope>\n    
</dependency>\n    <dependency>\n      <groupId>org.hdrhistogram</groupId>\n      <artifactId>HdrHistogram</artifactId>\n      <version>2.1.4</version>\n    </dependency>\n  </dependencies>\n  <build>\n    <resources>\n      <resource>\n        <directory>src/main/resources</directory>\n        <filtering>true</filtering>\n      </resource>\n    </resources>\n  </build>\n</project>\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/BasicDB.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport java.util.*;\nimport java.util.Map.Entry;\nimport java.util.Map;\nimport java.util.concurrent.TimeUnit;\nimport java.util.concurrent.locks.LockSupport;\n\n/**\n * Basic DB that just prints out the requested operations, instead of doing them against a database.\n */\npublic class BasicDB extends DB {\n  public static final String COUNT = \"basicdb.count\";\n  public static final String COUNT_DEFAULT = \"false\";\n  \n  public static final String VERBOSE = \"basicdb.verbose\";\n  public static final String VERBOSE_DEFAULT = \"true\";\n\n  public static final String SIMULATE_DELAY = \"basicdb.simulatedelay\";\n  public static final String SIMULATE_DELAY_DEFAULT = \"0\";\n\n  public static final String RANDOMIZE_DELAY = \"basicdb.randomizedelay\";\n  public static final String RANDOMIZE_DELAY_DEFAULT = \"true\";\n\n  protected static final Object MUTEX = new Object();\n  protected static int counter = 0;\n  protected static Map<Integer, Integer> reads;\n  protected static Map<Integer, Integer> scans;\n  protected static Map<Integer, Integer> updates;\n  protected static Map<Integer, Integer> inserts;\n  protected static Map<Integer, Integer> deletes;\n  \n  protected boolean verbose;\n  protected boolean 
randomizedelay;\n  protected int todelay;\n  protected boolean count;\n\n  public BasicDB() {\n    todelay = 0;\n  }\n\n  protected void delay() {\n    if (todelay > 0) {\n      long delayNs;\n      if (randomizedelay) {\n        delayNs = TimeUnit.MILLISECONDS.toNanos(Utils.random().nextInt(todelay));\n        if (delayNs == 0) {\n          return;\n        }\n      } else {\n        delayNs = TimeUnit.MILLISECONDS.toNanos(todelay);\n      }\n\n      final long deadline = System.nanoTime() + delayNs;\n      do {\n        LockSupport.parkNanos(deadline - System.nanoTime());\n      } while (System.nanoTime() < deadline && !Thread.interrupted());\n    }\n  }\n\n  /**\n   * Initialize any state for this DB.\n   * Called once per DB instance; there is one DB instance per client thread.\n   */\n  public void init() {\n    verbose = Boolean.parseBoolean(getProperties().getProperty(VERBOSE, VERBOSE_DEFAULT));\n    todelay = Integer.parseInt(getProperties().getProperty(SIMULATE_DELAY, SIMULATE_DELAY_DEFAULT));\n    randomizedelay = Boolean.parseBoolean(getProperties().getProperty(RANDOMIZE_DELAY, RANDOMIZE_DELAY_DEFAULT));\n    count = Boolean.parseBoolean(getProperties().getProperty(COUNT, COUNT_DEFAULT));\n    if (verbose) {\n      synchronized (System.out) {\n        System.out.println(\"***************** properties *****************\");\n        Properties p = getProperties();\n        if (p != null) {\n          for (Enumeration e = p.propertyNames(); e.hasMoreElements();) {\n            String k = (String) e.nextElement();\n            System.out.println(\"\\\"\" + k + \"\\\"=\\\"\" + p.getProperty(k) + \"\\\"\");\n          }\n        }\n        System.out.println(\"**********************************************\");\n      }\n    }\n    \n    synchronized (MUTEX) {\n      if (counter == 0 && count) {\n        reads = new HashMap<Integer, Integer>();\n        scans = new HashMap<Integer, Integer>();\n        updates = new HashMap<Integer, Integer>();\n        inserts 
= new HashMap<Integer, Integer>();\n        deletes = new HashMap<Integer, Integer>();\n      }\n      counter++;\n    }\n  }\n\n  protected static final ThreadLocal<StringBuilder> TL_STRING_BUILDER = new ThreadLocal<StringBuilder>() {\n    @Override\n    protected StringBuilder initialValue() {\n      return new StringBuilder();\n    }\n  };\n\n  protected static StringBuilder getStringBuilder() {\n    StringBuilder sb = TL_STRING_BUILDER.get();\n    sb.setLength(0);\n    return sb;\n  }\n\n  /**\n   * Read a record from the database. Each field/value pair from the result will be stored in a HashMap.\n   *\n   * @param table  The name of the table\n   * @param key    The record key of the record to read.\n   * @param fields The list of fields to read, or null for all of them\n   * @param result A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    delay();\n\n    if (verbose) {\n      StringBuilder sb = getStringBuilder();\n      sb.append(\"READ \").append(table).append(\" \").append(key).append(\" [ \");\n      if (fields != null) {\n        for (String f : fields) {\n          sb.append(f).append(\" \");\n        }\n      } else {\n        sb.append(\"<all fields>\");\n      }\n\n      sb.append(\"]\");\n      System.out.println(sb);\n    }\n\n    if (count) {\n      incCounter(reads, hash(table, key, fields));\n    }\n    \n    return Status.OK;\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. 
Each field/value pair from the result will be stored\n   * in a HashMap.\n   *\n   * @param table       The name of the table\n   * @param startkey    The record key of the first record to read.\n   * @param recordcount The number of records to read\n   * @param fields      The list of fields to read, or null for all of them\n   * @param result      A Vector of HashMaps, where each HashMap is a set field/value pairs for one record\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\n                     Vector<HashMap<String, ByteIterator>> result) {\n    delay();\n\n    if (verbose) {\n      StringBuilder sb = getStringBuilder();\n      sb.append(\"SCAN \").append(table).append(\" \").append(startkey).append(\" \").append(recordcount).append(\" [ \");\n      if (fields != null) {\n        for (String f : fields) {\n          sb.append(f).append(\" \");\n        }\n      } else {\n        sb.append(\"<all fields>\");\n      }\n\n      sb.append(\"]\");\n      System.out.println(sb);\n    }\n    \n    if (count) {\n      incCounter(scans, hash(table, startkey, fields));\n    }\n\n    return Status.OK;\n  }\n\n  /**\n   * Update a record in the database. 
Any field/value pairs in the specified values HashMap will be written into the\n   * record with the specified record key, overwriting any existing values with the same field name.\n   *\n   * @param table  The name of the table\n   * @param key    The record key of the record to write.\n   * @param values A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    delay();\n\n    if (verbose) {\n      StringBuilder sb = getStringBuilder();\n      sb.append(\"UPDATE \").append(table).append(\" \").append(key).append(\" [ \");\n      if (values != null) {\n        for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n          sb.append(entry.getKey()).append(\"=\").append(entry.getValue()).append(\" \");\n        }\n      }\n      sb.append(\"]\");\n      System.out.println(sb);\n    }\n\n    if (count) {\n      incCounter(updates, hash(table, key, values));\n    }\n    \n    return Status.OK;\n  }\n\n  /**\n   * Insert a record in the database. 
Any field/value pairs in the specified values HashMap will be written into the\n   * record with the specified record key.\n   *\n   * @param table  The name of the table\n   * @param key    The record key of the record to insert.\n   * @param values A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    delay();\n\n    if (verbose) {\n      StringBuilder sb = getStringBuilder();\n      sb.append(\"INSERT \").append(table).append(\" \").append(key).append(\" [ \");\n      if (values != null) {\n        for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n          sb.append(entry.getKey()).append(\"=\").append(entry.getValue()).append(\" \");\n        }\n      }\n\n      sb.append(\"]\");\n      System.out.println(sb);\n    }\n\n    if (count) {\n      incCounter(inserts, hash(table, key, values));\n    }\n    \n    return Status.OK;\n  }\n\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table The name of the table\n   * @param key   The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status delete(String table, String key) {\n    delay();\n\n    if (verbose) {\n      StringBuilder sb = getStringBuilder();\n      sb.append(\"DELETE \").append(table).append(\" \").append(key);\n      System.out.println(sb);\n    }\n\n    if (count) {\n      incCounter(deletes, (table + key).hashCode());\n    }\n    \n    return Status.OK;\n  }\n\n  @Override\n  public void cleanup() {\n    synchronized (MUTEX) {\n      int countDown = --counter;\n      if (count && countDown < 1) {\n        // TODO - would be nice to call something like: \n        // Measurements.getMeasurements().oneOffMeasurement(\"READS\", \"Uniques\", reads.size());\n        System.out.println(\"[READS], Uniques, \" + reads.size());\n        
System.out.println(\"[SCANS], Uniques, \" + scans.size());\n        System.out.println(\"[UPDATES], Uniques, \" + updates.size());\n        System.out.println(\"[INSERTS], Uniques, \" + inserts.size());\n        System.out.println(\"[DELETES], Uniques, \" + deletes.size());\n      }\n    }\n  }\n  \n  /**\n   * Increments the count on the hash in the map.\n   * @param map A non-null map to sync and use for incrementing.\n   * @param hash A hash code to increment.\n   */\n  protected void incCounter(final Map<Integer, Integer> map, final int hash) {\n    synchronized (map) {\n      Integer ctr = map.get(hash);\n      if (ctr == null) {\n        map.put(hash, 1);\n      } else {\n        map.put(hash, ctr + 1);\n      }\n    }\n  }\n  \n  /**\n   * Hashes the table, key and fields, sorting the fields first for a consistent\n   * hash.\n   * Note that this is expensive as we generate a copy of the fields and a string\n   * buffer to hash on. Hashing on the objects is problematic.\n   * @param table The user table.\n   * @param key The key read or scanned.\n   * @param fields The fields read or scanned.\n   * @return The hash code.\n   */\n  protected int hash(final String table, final String key, final Set<String> fields) {\n    if (fields == null) {\n      return (table + key).hashCode();\n    }\n    StringBuilder buf = getStringBuilder().append(table).append(key);\n    List<String> sorted = new ArrayList<String>(fields);\n    Collections.sort(sorted);\n    for (final String field : sorted) {\n      buf.append(field);\n    }\n    return buf.toString().hashCode();\n  }\n  \n  /**\n   * Hashes the table, key and fields, sorting the fields first for a consistent\n   * hash.\n   * Note that this is expensive as we generate a copy of the fields and a string\n   * buffer to hash on. 
Hashing on the objects is problematic.\n   * @param table The user table.\n   * @param key The key read or scanned.\n   * @param values The values to hash on.\n   * @return The hash code.\n   */\n  protected int hash(final String table, final String key, final Map<String, ByteIterator> values) {\n    if (values == null) {\n      return (table + key).hashCode();\n    }\n    final TreeMap<String, ByteIterator> sorted = \n        new TreeMap<String, ByteIterator>(values);\n    \n    StringBuilder buf = getStringBuilder().append(table).append(key);\n    for (final Entry<String, ByteIterator> entry : sorted.entrySet()) {\n      entry.getValue().reset();\n      buf.append(entry.getKey())\n         .append(entry.getValue().toString());\n    }\n    return buf.toString().hashCode();\n  }\n  \n  /**\n   * Short test of BasicDB\n   */\n  /*\n  public static void main(String[] args) {\n    BasicDB bdb = new BasicDB();\n\n    Properties p = new Properties();\n    p.setProperty(\"Sky\", \"Blue\");\n    p.setProperty(\"Ocean\", \"Wet\");\n\n    bdb.setProperties(p);\n\n    bdb.init();\n\n    HashMap<String, String> fields = new HashMap<String, String>();\n    fields.put(\"A\", \"X\");\n    fields.put(\"B\", \"Y\");\n\n    bdb.read(\"table\", \"key\", null, null);\n    bdb.insert(\"table\", \"key\", fields);\n\n    fields = new HashMap<String, String>();\n    fields.put(\"C\", \"Z\");\n\n    bdb.update(\"table\", \"key\", fields);\n\n    bdb.delete(\"table\", \"key\");\n  }\n  */\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/BasicTSDB.java",
    "content": "/**\n * Copyright (c) 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb;\n\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Set;\nimport java.util.TreeMap;\n\nimport com.yahoo.ycsb.workloads.TimeSeriesWorkload;\n\n/**\n * Basic DB for printing out time series workloads and/or tracking the distribution\n * of keys and fields.\n */\npublic class BasicTSDB extends BasicDB {\n\n  /** Time series workload specific counters. 
*/\n  protected static Map<Long, Integer> timestamps;\n  protected static Map<Integer, Integer> floats;\n  protected static Map<Integer, Integer> integers;\n  \n  private String timestampKey;\n  private String valueKey;\n  private String tagPairDelimiter;\n  private String queryTimeSpanDelimiter;\n  private long lastTimestamp;\n  \n  @Override\n  public void init() {\n    super.init();\n    \n    synchronized (MUTEX) {\n      if (timestamps == null) {\n        timestamps = new HashMap<Long, Integer>();\n        floats = new HashMap<Integer, Integer>();\n        integers = new HashMap<Integer, Integer>();\n      }\n    }\n    \n    timestampKey = getProperties().getProperty(\n        TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY, \n        TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT);\n    valueKey = getProperties().getProperty(\n        TimeSeriesWorkload.VALUE_KEY_PROPERTY, \n        TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT);\n    tagPairDelimiter = getProperties().getProperty(\n        TimeSeriesWorkload.PAIR_DELIMITER_PROPERTY, \n        TimeSeriesWorkload.PAIR_DELIMITER_PROPERTY_DEFAULT);\n    queryTimeSpanDelimiter = getProperties().getProperty(\n        TimeSeriesWorkload.QUERY_TIMESPAN_DELIMITER_PROPERTY,\n        TimeSeriesWorkload.QUERY_TIMESPAN_DELIMITER_PROPERTY_DEFAULT);\n  }\n  \n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    delay();\n    \n    if (verbose) {\n      StringBuilder sb = getStringBuilder();\n      sb.append(\"READ \").append(table).append(\" \").append(key).append(\" [ \");\n      if (fields != null) {\n        for (String f : fields) {\n          sb.append(f).append(\" \");\n        }\n      } else {\n        sb.append(\"<all fields>\");\n      }\n\n      sb.append(\"]\");\n      System.out.println(sb);\n    }\n\n    if (count) {\n      Set<String> filtered = null;\n      if (fields != null) {\n        filtered = new HashSet<String>();\n        for (final String field : 
fields) {\n          if (field.startsWith(timestampKey)) {\n            String[] parts = field.split(tagPairDelimiter);\n            if (parts[1].contains(queryTimeSpanDelimiter)) {\n              parts = parts[1].split(queryTimeSpanDelimiter);\n              lastTimestamp = Long.parseLong(parts[0]);\n            } else {\n              lastTimestamp = Long.parseLong(parts[1]);\n            }\n            synchronized(timestamps) {\n              Integer ctr = timestamps.get(lastTimestamp);\n              if (ctr == null) {\n                timestamps.put(lastTimestamp, 1);\n              } else {\n                timestamps.put(lastTimestamp, ctr + 1);\n              }\n            }\n          } else {\n            filtered.add(field);\n          }\n        }\n      }\n      incCounter(reads, hash(table, key, filtered));\n    }\n    return Status.OK;\n  }\n  \n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    delay();\n\n    boolean isFloat = false;\n    \n    if (verbose) {\n      StringBuilder sb = getStringBuilder();\n      sb.append(\"UPDATE \").append(table).append(\" \").append(key).append(\" [ \");\n      if (values != null) {\n        final TreeMap<String, ByteIterator> tree = new TreeMap<String, ByteIterator>(values);\n        for (Map.Entry<String, ByteIterator> entry : tree.entrySet()) {\n          if (entry.getKey().equals(timestampKey)) {\n            sb.append(entry.getKey()).append(\"=\")\n              .append(Utils.bytesToLong(entry.getValue().toArray())).append(\" \");\n          } else if (entry.getKey().equals(valueKey)) {\n            final NumericByteIterator it = (NumericByteIterator) entry.getValue();\n            isFloat = it.isFloatingPoint();\n            sb.append(entry.getKey()).append(\"=\")\n               .append(isFloat ? 
it.getDouble() : it.getLong()).append(\" \");\n          } else {\n            sb.append(entry.getKey()).append(\"=\").append(entry.getValue()).append(\" \");\n          }\n        }\n      }\n      sb.append(\"]\");\n      System.out.println(sb);\n    }\n\n    if (count) {\n      if (!verbose) {\n        isFloat = ((NumericByteIterator) values.get(valueKey)).isFloatingPoint();\n      }\n      int hash = hash(table, key, values);\n      incCounter(updates, hash);\n      synchronized(timestamps) {\n        Integer ctr = timestamps.get(lastTimestamp);\n        if (ctr == null) {\n          timestamps.put(lastTimestamp, 1);\n        } else {\n          timestamps.put(lastTimestamp, ctr + 1);\n        }\n      }\n      if (isFloat) {\n        incCounter(floats, hash);\n      } else {\n        incCounter(integers, hash);\n      }\n    }\n    \n    return Status.OK;\n  }\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    delay();\n    \n    boolean isFloat = false;\n    \n    if (verbose) {\n      StringBuilder sb = getStringBuilder();\n      sb.append(\"INSERT \").append(table).append(\" \").append(key).append(\" [ \");\n      if (values != null) {\n        final TreeMap<String, ByteIterator> tree = new TreeMap<String, ByteIterator>(values);\n        for (Map.Entry<String, ByteIterator> entry : tree.entrySet()) {\n          if (entry.getKey().equals(timestampKey)) {\n            sb.append(entry.getKey()).append(\"=\")\n              .append(Utils.bytesToLong(entry.getValue().toArray())).append(\" \");\n          } else if (entry.getKey().equals(valueKey)) {\n            final NumericByteIterator it = (NumericByteIterator) entry.getValue();\n            isFloat = it.isFloatingPoint();\n            sb.append(entry.getKey()).append(\"=\")\n              .append(isFloat ? 
it.getDouble() : it.getLong()).append(\" \");\n          } else {\n            sb.append(entry.getKey()).append(\"=\").append(entry.getValue()).append(\" \");\n          }\n        }\n      }\n      sb.append(\"]\");\n      System.out.println(sb);\n    }\n\n    if (count) {\n      if (!verbose) {\n        isFloat = ((NumericByteIterator) values.get(valueKey)).isFloatingPoint();\n      }\n      int hash = hash(table, key, values);\n      incCounter(inserts, hash);\n      synchronized(timestamps) {\n        Integer ctr = timestamps.get(lastTimestamp);\n        if (ctr == null) {\n          timestamps.put(lastTimestamp, 1);\n        } else {\n          timestamps.put(lastTimestamp, ctr + 1);\n        }\n      }\n      if (isFloat) {\n        incCounter(floats, hash);\n      } else {\n        incCounter(integers, hash);\n      }\n    }\n\n    return Status.OK;\n  }\n\n  @Override\n  public void cleanup() {\n    super.cleanup();\n    if (count && counter < 1) {\n      System.out.println(\"[TIMESTAMPS], Unique, \" + timestamps.size());\n      System.out.println(\"[FLOATS], Unique series, \" + floats.size());\n      System.out.println(\"[INTEGERS], Unique series, \" + integers.size());\n      \n      long minTs = Long.MAX_VALUE;\n      long maxTs = Long.MIN_VALUE;\n      for (final long ts : timestamps.keySet()) {\n        if (ts > maxTs) {\n          maxTs = ts;\n        }\n        if (ts < minTs) {\n          minTs = ts;\n        }\n      }\n      System.out.println(\"[TIMESTAMPS], Min, \" + minTs);\n      System.out.println(\"[TIMESTAMPS], Max, \" + maxTs);\n    }\n  }\n  \n  @Override\n  protected int hash(final String table, final String key, final Map<String, ByteIterator> values) {\n    final TreeMap<String, ByteIterator> sorted = new TreeMap<String, ByteIterator>();\n    for (final Entry<String, ByteIterator> entry : values.entrySet()) {\n      if (entry.getKey().equals(valueKey)) {\n        continue;\n      } else if (entry.getKey().equals(timestampKey)) {\n      
  lastTimestamp = ((NumericByteIterator) entry.getValue()).getLong();\n        entry.getValue().reset();\n        continue;\n      }\n      sorted.put(entry.getKey(), entry.getValue());\n    }\n    // yeah it's ugly but gives us a unique hash without having to add hashers\n    // to all of the ByteIterators.\n    StringBuilder buf = new StringBuilder().append(table).append(key);\n    for (final Entry<String, ByteIterator> entry : sorted.entrySet()) {\n      entry.getValue().reset();\n      buf.append(entry.getKey())\n         .append(entry.getValue().toString());\n    }\n    return buf.toString().hashCode();\n  }\n  \n}"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/ByteArrayByteIterator.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb;\n\n/**\n *  A ByteIterator that iterates through a byte array.\n */\npublic class ByteArrayByteIterator extends ByteIterator {\n  private final int originalOffset;\n  private byte[] str;\n  private int off;\n  private final int len;\n\n  public ByteArrayByteIterator(byte[] s) {\n    this.str = s;\n    this.off = 0;\n    this.len = s.length;\n    originalOffset = 0;\n  }\n\n  public ByteArrayByteIterator(byte[] s, int off, int len) {\n    this.str = s;\n    this.off = off;\n    this.len = off + len;\n    originalOffset = off;\n  }\n\n  @Override\n  public boolean hasNext() {\n    return off < len;\n  }\n\n  @Override\n  public byte nextByte() {\n    byte ret = str[off];\n    off++;\n    return ret;\n  }\n\n  @Override\n  public long bytesLeft() {\n    return len - off;\n  }\n\n  @Override\n  public void reset() {\n    off = originalOffset;\n  }\n  \n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/ByteIterator.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb;\n\nimport java.nio.ByteBuffer;\nimport java.nio.CharBuffer;\nimport java.nio.charset.Charset;\nimport java.util.Iterator;\n\n/**\n * YCSB-specific buffer class.  ByteIterators are designed to support\n * efficient field generation, and to allow backend drivers that can stream\n * fields (instead of materializing them in RAM) to do so.\n * <p>\n * YCSB originally used String objects to represent field values.  This led to\n * two performance issues.\n * </p><p>\n * First, it leads to unnecessary conversions between UTF-16 and UTF-8, both\n * during field generation, and when passing data to byte-based backend\n * drivers.\n * </p><p>\n * Second, Java strings are represented internally using UTF-16, and are\n * built by appending to a growable array type (StringBuilder or\n * StringBuffer), then calling a toString() method.  
This leads to a 4x memory\n * overhead as field values are being built, which prevented YCSB from\n * driving large object stores.\n * </p>\n * The StringByteIterator class contains a number of convenience methods for\n * backend drivers that convert between Map&lt;String,String&gt; and\n * Map&lt;String,ByteBuffer&gt;.\n *\n */\npublic abstract class ByteIterator implements Iterator<Byte> {\n\n  @Override\n  public abstract boolean hasNext();\n\n  @Override\n  public Byte next() {\n    throw new UnsupportedOperationException();\n  }\n\n  public abstract byte nextByte();\n\n  /** @return byte offset immediately after the last valid byte */\n  public int nextBuf(byte[] buf, int bufOff) {\n    int sz = bufOff;\n    while (sz < buf.length && hasNext()) {\n      buf[sz] = nextByte();\n      sz++;\n    }\n    return sz;\n  }\n\n  public abstract long bytesLeft();\n\n  @Override\n  public void remove() {\n    throw new UnsupportedOperationException();\n  }\n\n  /** Resets the iterator so that it can be consumed again. Not all\n   * implementations support this call.\n   * @throws UnsupportedOperationException if the implementation hasn't implemented\n   * the method.\n   */\n  public void reset() {\n    throw new UnsupportedOperationException();\n  }\n  \n  /** Consumes remaining contents of this object, and returns them as a string. */\n  public String toString() {\n    Charset cset = Charset.forName(\"UTF-8\");\n    CharBuffer cb = cset.decode(ByteBuffer.wrap(this.toArray()));\n    return cb.toString();\n  }\n\n  /** Consumes remaining contents of this object, and returns them as a byte array. */\n  public byte[] toArray() {\n    long left = bytesLeft();\n    if (left != (int) left) {\n      throw new ArrayIndexOutOfBoundsException(\"Too much data to fit in one array!\");\n    }\n    byte[] ret = new byte[(int) left];\n    int off = 0;\n    while (off < ret.length) {\n      off = nextBuf(ret, off);\n    }\n    return ret;\n  }\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/Client.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport com.yahoo.ycsb.measurements.Measurements;\nimport com.yahoo.ycsb.measurements.exporter.MeasurementsExporter;\nimport com.yahoo.ycsb.measurements.exporter.TextMeasurementsExporter;\nimport org.apache.htrace.core.HTraceConfiguration;\nimport org.apache.htrace.core.TraceScope;\nimport org.apache.htrace.core.Tracer;\n\nimport java.io.FileInputStream;\nimport java.io.FileOutputStream;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.text.DecimalFormat;\nimport java.text.SimpleDateFormat;\nimport java.util.*;\nimport java.util.Map.Entry;\nimport java.util.concurrent.CountDownLatch;\nimport java.util.concurrent.TimeUnit;\nimport java.util.concurrent.locks.LockSupport;\n\n/**\n * A thread to periodically show the status of the experiment to reassure you that progress is being made.\n */\nclass StatusThread extends Thread {\n  // Counts down each of the clients completing\n  private final CountDownLatch completeLatch;\n\n  // Stores the measurements for the run\n  private final Measurements measurements;\n\n  // Whether or not to track the JVM stats per run\n  private final boolean trackJVMStats;\n\n  // The clients that are running.\n  private final List<ClientThread> clients;\n\n  private final 
String label;\n  private final boolean standardstatus;\n\n  // The interval for reporting status.\n  private long sleeptimeNs;\n\n  // JVM max/mins\n  private int maxThreads;\n  private int minThreads = Integer.MAX_VALUE;\n  private long maxUsedMem;\n  private long minUsedMem = Long.MAX_VALUE;\n  private double maxLoadAvg;\n  private double minLoadAvg = Double.MAX_VALUE;\n  private long lastGCCount = 0;\n  private long lastGCTime = 0;\n\n  /**\n   * Creates a new StatusThread without JVM stat tracking.\n   *\n   * @param completeLatch         The latch that each client thread will {@link CountDownLatch#countDown()}\n   *                              as they complete.\n   * @param clients               The clients to collect metrics from.\n   * @param label                 The label for the status.\n   * @param standardstatus        If true the status is printed to stdout in addition to stderr.\n   * @param statusIntervalSeconds The number of seconds between status updates.\n   */\n  public StatusThread(CountDownLatch completeLatch, List<ClientThread> clients,\n                      String label, boolean standardstatus, int statusIntervalSeconds) {\n    this(completeLatch, clients, label, standardstatus, statusIntervalSeconds, false);\n  }\n\n  /**\n   * Creates a new StatusThread.\n   *\n   * @param completeLatch         The latch that each client thread will {@link CountDownLatch#countDown()}\n   *                              as they complete.\n   * @param clients               The clients to collect metrics from.\n   * @param label                 The label for the status.\n   * @param standardstatus        If true the status is printed to stdout in addition to stderr.\n   * @param statusIntervalSeconds The number of seconds between status updates.\n   * @param trackJVMStats         Whether or not to track JVM stats.\n   */\n  public StatusThread(CountDownLatch completeLatch, List<ClientThread> clients,\n                      String label, boolean 
standardstatus, int statusIntervalSeconds,\n                      boolean trackJVMStats) {\n    this.completeLatch = completeLatch;\n    this.clients = clients;\n    this.label = label;\n    this.standardstatus = standardstatus;\n    sleeptimeNs = TimeUnit.SECONDS.toNanos(statusIntervalSeconds);\n    measurements = Measurements.getMeasurements();\n    this.trackJVMStats = trackJVMStats;\n  }\n\n  /**\n   * Run and periodically report status.\n   */\n  @Override\n  public void run() {\n    final long startTimeMs = System.currentTimeMillis();\n    final long startTimeNanos = System.nanoTime();\n    long deadline = startTimeNanos + sleeptimeNs;\n    long startIntervalMs = startTimeMs;\n    long lastTotalOps = 0;\n\n    boolean alldone;\n\n    do {\n      long nowMs = System.currentTimeMillis();\n\n      lastTotalOps = computeStats(startTimeMs, startIntervalMs, nowMs, lastTotalOps);\n\n      if (trackJVMStats) {\n        measureJVM();\n      }\n\n      alldone = waitForClientsUntil(deadline);\n\n      startIntervalMs = nowMs;\n      deadline += sleeptimeNs;\n    }\n    while (!alldone);\n\n    if (trackJVMStats) {\n      measureJVM();\n    }\n    // Print the final stats.\n    computeStats(startTimeMs, startIntervalMs, System.currentTimeMillis(), lastTotalOps);\n  }\n\n  /**\n   * Computes and prints the stats.\n   *\n   * @param startTimeMs     The start time of the test.\n   * @param startIntervalMs The start time of this interval.\n   * @param endIntervalMs   The end time (now) for the interval.\n   * @param lastTotalOps    The last total operations count.\n   * @return The current operation count.\n   */\n  private long computeStats(final long startTimeMs, long startIntervalMs, long endIntervalMs,\n                            long lastTotalOps) {\n    SimpleDateFormat format = new SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss:SSS\");\n\n    long totalops = 0;\n    long todoops = 0;\n\n    // Calculate the total number of operations completed.\n    for (ClientThread t : 
clients) {\n      totalops += t.getOpsDone();\n      todoops += t.getOpsTodo();\n    }\n\n\n    long interval = endIntervalMs - startTimeMs;\n    double throughput = 1000.0 * (((double) totalops) / (double) interval);\n    double curthroughput = 1000.0 * (((double) (totalops - lastTotalOps)) /\n        ((double) (endIntervalMs - startIntervalMs)));\n    long estremaining = (long) Math.ceil(todoops / throughput);\n\n\n    DecimalFormat d = new DecimalFormat(\"#.##\");\n    String labelString = this.label + format.format(new Date());\n\n    StringBuilder msg = new StringBuilder(labelString).append(\" \").append(interval / 1000).append(\" sec: \");\n    msg.append(totalops).append(\" operations; \");\n\n    if (totalops != 0) {\n      msg.append(d.format(curthroughput)).append(\" current ops/sec; \");\n    }\n    if (todoops != 0) {\n      msg.append(\"est completion in \").append(RemainingFormatter.format(estremaining));\n    }\n\n    msg.append(Measurements.getMeasurements().getSummary());\n\n    System.err.println(msg);\n\n    if (standardstatus) {\n      System.out.println(msg);\n    }\n    return totalops;\n  }\n\n  /**\n   * Waits for all of the clients to finish or the deadline to expire.\n   *\n   * @param deadline The current deadline.\n   * @return True if all of the clients completed.\n   */\n  private boolean waitForClientsUntil(long deadline) {\n    boolean alldone = false;\n    long now = System.nanoTime();\n\n    while (!alldone && now < deadline) {\n      try {\n        alldone = completeLatch.await(deadline - now, TimeUnit.NANOSECONDS);\n      } catch (InterruptedException ie) {\n        // If we are interrupted the thread is being asked to shutdown.\n        // Return true to indicate that and reset the interrupt state\n        // of the thread.\n        Thread.currentThread().interrupt();\n        alldone = true;\n      }\n      now = System.nanoTime();\n    }\n\n    return alldone;\n  }\n\n  /**\n   * Executes the JVM measurements.\n   */\n  private 
void measureJVM() {\n    final int threads = Utils.getActiveThreadCount();\n    if (threads < minThreads) {\n      minThreads = threads;\n    }\n    if (threads > maxThreads) {\n      maxThreads = threads;\n    }\n    measurements.measure(\"THREAD_COUNT\", threads);\n\n    // TODO - once measurements allow for other number types, switch to using\n    // the raw bytes. Otherwise we can track in MB to avoid negative values \n    // when faced with huge heaps.\n    final int usedMem = Utils.getUsedMemoryMegaBytes();\n    if (usedMem < minUsedMem) {\n      minUsedMem = usedMem;\n    }\n    if (usedMem > maxUsedMem) {\n      maxUsedMem = usedMem;\n    }\n    measurements.measure(\"USED_MEM_MB\", usedMem);\n\n    // Some JVMs may not implement this feature so if the value is less than\n    // zero, just omit it.\n    final double systemLoad = Utils.getSystemLoadAverage();\n    if (systemLoad >= 0) {\n      // TODO - store the double if measurements allows for them\n      measurements.measure(\"SYS_LOAD_AVG\", (int) systemLoad);\n      if (systemLoad > maxLoadAvg) {\n        maxLoadAvg = systemLoad;\n      }\n      if (systemLoad < minLoadAvg) {\n        minLoadAvg = systemLoad;\n      }\n    }\n\n    final long gcs = Utils.getGCTotalCollectionCount();\n    measurements.measure(\"GCS\", (int) (gcs - lastGCCount));\n    final long gcTime = Utils.getGCTotalTime();\n    measurements.measure(\"GCS_TIME\", (int) (gcTime - lastGCTime));\n    lastGCCount = gcs;\n    lastGCTime = gcTime;\n  }\n\n  /**\n   * @return The maximum threads running during the test.\n   */\n  public int getMaxThreads() {\n    return maxThreads;\n  }\n\n  /**\n   * @return The minimum threads running during the test.\n   */\n  public int getMinThreads() {\n    return minThreads;\n  }\n\n  /**\n   * @return The maximum memory used during the test.\n   */\n  public long getMaxUsedMem() {\n    return maxUsedMem;\n  }\n\n  /**\n   * @return The minimum memory used during the test.\n   */\n  public long 
getMinUsedMem() {\n    return minUsedMem;\n  }\n\n  /**\n   * @return The maximum load average during the test.\n   */\n  public double getMaxLoadAvg() {\n    return maxLoadAvg;\n  }\n\n  /**\n   * @return The minimum load average during the test.\n   */\n  public double getMinLoadAvg() {\n    return minLoadAvg;\n  }\n\n  /**\n   * @return Whether or not the thread is tracking JVM stats.\n   */\n  public boolean trackJVMStats() {\n    return trackJVMStats;\n  }\n}\n\n/**\n * Turn seconds remaining into more useful units.\n * i.e. if there are hours or days worth of seconds, use them.\n */\nfinal class RemainingFormatter {\n  private RemainingFormatter() {\n    // not used\n  }\n\n  public static StringBuilder format(long seconds) {\n    StringBuilder time = new StringBuilder();\n    long days = TimeUnit.SECONDS.toDays(seconds);\n    if (days > 0) {\n      time.append(days).append(days == 1 ? \" day \" : \" days \");\n      seconds -= TimeUnit.DAYS.toSeconds(days);\n    }\n    long hours = TimeUnit.SECONDS.toHours(seconds);\n    if (hours > 0) {\n      time.append(hours).append(hours == 1 ? \" hour \" : \" hours \");\n      seconds -= TimeUnit.HOURS.toSeconds(hours);\n    }\n    /* Only include minute granularity if we're < 1 day. */\n    if (days < 1) {\n      long minutes = TimeUnit.SECONDS.toMinutes(seconds);\n      if (minutes > 0) {\n        time.append(minutes).append(minutes == 1 ? \" minute \" : \" minutes \");\n        seconds -= TimeUnit.MINUTES.toSeconds(minutes);\n      }\n    }\n    /* Only bother to include seconds if we're < 1 minute */\n    if (time.length() == 0) {\n      time.append(seconds).append(seconds == 1 ? 
\" second \" : \" seconds \");\n    }\n    return time;\n  }\n}\n\n/**\n * A thread for executing transactions or data inserts to the database.\n */\nclass ClientThread implements Runnable {\n  // Counts down each of the clients completing.\n  private final CountDownLatch completeLatch;\n\n  private static boolean spinSleep;\n  private DB db;\n  private boolean dotransactions;\n  private Workload workload;\n  private int opcount;\n  private double targetOpsPerMs;\n\n  private int opsdone;\n  private int threadid;\n  private int threadcount;\n  private Object workloadstate;\n  private Properties props;\n  private long targetOpsTickNs;\n  private final Measurements measurements;\n\n  /**\n   * Constructor.\n   *\n   * @param db                   the DB implementation to use\n   * @param dotransactions       true to do transactions, false to insert data\n   * @param workload             the workload to use\n   * @param props                the properties defining the experiment\n   * @param opcount              the number of operations (transactions or inserts) to do\n   * @param targetperthreadperms target number of operations per thread per ms\n   * @param completeLatch        The latch tracking the completion of all clients.\n   */\n  public ClientThread(DB db, boolean dotransactions, Workload workload, Properties props, int opcount,\n                      double targetperthreadperms, CountDownLatch completeLatch) {\n    this.db = db;\n    this.dotransactions = dotransactions;\n    this.workload = workload;\n    this.opcount = opcount;\n    opsdone = 0;\n    if (targetperthreadperms > 0) {\n      targetOpsPerMs = targetperthreadperms;\n      targetOpsTickNs = (long) (1000000 / targetOpsPerMs);\n    }\n    this.props = props;\n    measurements = Measurements.getMeasurements();\n    spinSleep = Boolean.valueOf(this.props.getProperty(\"spin.sleep\", \"false\"));\n    this.completeLatch = completeLatch;\n  }\n\n  public void setThreadId(final int threadId) {\n    
threadid = threadId;\n  }\n  \n  public void setThreadCount(final int threadCount) {\n    threadcount = threadCount;\n  }\n  \n  public int getOpsDone() {\n    return opsdone;\n  }\n\n  @Override\n  public void run() {\n    try {\n      db.init();\n    } catch (DBException e) {\n      e.printStackTrace();\n      e.printStackTrace(System.out);\n      return;\n    }\n\n    try {\n      workloadstate = workload.initThread(props, threadid, threadcount);\n    } catch (WorkloadException e) {\n      e.printStackTrace();\n      e.printStackTrace(System.out);\n      return;\n    }\n\n    //NOTE: Switching to using nanoTime and parkNanos for time management here such that the measurements\n    // and the client thread have the same view on time.\n\n    //spread the thread operations out so they don't all hit the DB at the same time\n    // GH issue 4 - throws exception if _target>1 because random.nextInt argument must be >0\n    // and the sleep() doesn't make sense for granularities < 1 ms anyway\n    if ((targetOpsPerMs > 0) && (targetOpsPerMs <= 1.0)) {\n      long randomMinorDelay = Utils.random().nextInt((int) targetOpsTickNs);\n      sleepUntil(System.nanoTime() + randomMinorDelay);\n    }\n    try {\n      if (dotransactions) {\n        long startTimeNanos = System.nanoTime();\n\n        while (((opcount == 0) || (opsdone < opcount)) && !workload.isStopRequested()) {\n\n          if (!workload.doTransaction(db, workloadstate)) {\n            break;\n          }\n\n          opsdone++;\n\n          throttleNanos(startTimeNanos);\n        }\n      } else {\n        long startTimeNanos = System.nanoTime();\n\n        while (((opcount == 0) || (opsdone < opcount)) && !workload.isStopRequested()) {\n\n          if (!workload.doInsert(db, workloadstate)) {\n            break;\n          }\n\n          opsdone++;\n\n          throttleNanos(startTimeNanos);\n        }\n      }\n    } catch (Exception e) {\n      e.printStackTrace();\n      e.printStackTrace(System.out);\n     
 System.exit(0);\n    }\n\n    try {\n      measurements.setIntendedStartTimeNs(0);\n      db.cleanup();\n    } catch (DBException e) {\n      e.printStackTrace();\n      e.printStackTrace(System.out);\n    } finally {\n      completeLatch.countDown();\n    }\n  }\n\n  private static void sleepUntil(long deadline) {\n    while (System.nanoTime() < deadline) {\n      if (!spinSleep) {\n        LockSupport.parkNanos(deadline - System.nanoTime());\n      }\n    }\n  }\n\n  private void throttleNanos(long startTimeNanos) {\n    //throttle the operations\n    if (targetOpsPerMs > 0) {\n      // delay until next tick\n      long deadline = startTimeNanos + opsdone * targetOpsTickNs;\n      sleepUntil(deadline);\n      measurements.setIntendedStartTimeNs(deadline);\n    }\n  }\n\n  /**\n   * The total amount of work this thread is still expected to do.\n   */\n  int getOpsTodo() {\n    int todo = opcount - opsdone;\n    return todo < 0 ? 0 : todo;\n  }\n}\n\n/**\n * Main class for executing YCSB.\n */\npublic final class Client {\n  private Client() {\n    //not used\n  }\n\n  public static final String DEFAULT_RECORD_COUNT = \"0\";\n\n  /**\n   * The target number of operations to perform.\n   */\n  public static final String OPERATION_COUNT_PROPERTY = \"operationcount\";\n\n  /**\n   * The number of records to load into the database initially.\n   */\n  public static final String RECORD_COUNT_PROPERTY = \"recordcount\";\n\n  /**\n   * The workload class to be loaded.\n   */\n  public static final String WORKLOAD_PROPERTY = \"workload\";\n\n  /**\n   * The database class to be used.\n   */\n  public static final String DB_PROPERTY = \"db\";\n\n  /**\n   * The exporter class to be used. 
The default is\n   * com.yahoo.ycsb.measurements.exporter.TextMeasurementsExporter.\n   */\n  public static final String EXPORTER_PROPERTY = \"exporter\";\n\n  /**\n   * If set to the path of a file, YCSB will write all output to this file\n   * instead of STDOUT.\n   */\n  public static final String EXPORT_FILE_PROPERTY = \"exportfile\";\n\n  /**\n   * The number of YCSB client threads to run.\n   */\n  public static final String THREAD_COUNT_PROPERTY = \"threadcount\";\n\n  /**\n   * Indicates how many inserts to do, if less than recordcount.\n   * Useful for partitioning the load among multiple servers if the client is the bottleneck.\n   * Additionally, workloads should support the \"insertstart\" property, which tells them which record to start at.\n   */\n  public static final String INSERT_COUNT_PROPERTY = \"insertcount\";\n\n  /**\n   * Target number of operations per second.\n   */\n  public static final String TARGET_PROPERTY = \"target\";\n\n  /**\n   * The maximum amount of time (in seconds) for which the benchmark will be run.\n   */\n  public static final String MAX_EXECUTION_TIME = \"maxexecutiontime\";\n\n  /**\n   * Whether this is the transaction phase (run) or the load phase (load).\n   */\n  public static final String DO_TRANSACTIONS_PROPERTY = \"dotransactions\";\n\n  /**\n   * Whether or not to show status during the run.\n   */\n  public static final String STATUS_PROPERTY = \"status\";\n\n  /**\n   * Use label for status (e.g. 
to label one experiment out of a whole batch).\n   */\n  public static final String LABEL_PROPERTY = \"label\";\n\n  /**\n   * An optional thread used to track progress and measure JVM stats.\n   */\n  private static StatusThread statusthread = null;\n\n  // HTrace integration related constants.\n\n  /**\n   * All keys for configuring the tracing system start with this prefix.\n   */\n  private static final String HTRACE_KEY_PREFIX = \"htrace.\";\n  private static final String CLIENT_WORKLOAD_INIT_SPAN = \"Client#workload_init\";\n  private static final String CLIENT_INIT_SPAN = \"Client#init\";\n  private static final String CLIENT_WORKLOAD_SPAN = \"Client#workload\";\n  private static final String CLIENT_CLEANUP_SPAN = \"Client#cleanup\";\n  private static final String CLIENT_EXPORT_MEASUREMENTS_SPAN = \"Client#export_measurements\";\n\n  public static void usageMessage() {\n    System.out.println(\"Usage: java com.yahoo.ycsb.Client [options]\");\n    System.out.println(\"Options:\");\n    System.out.println(\"  -threads n: execute using n threads (default: 1) - can also be specified as the \\n\" +\n        \"        \\\"threadcount\\\" property using -p\");\n    System.out.println(\"  -target n: attempt to do n operations per second (default: unlimited) - can also\\n\" +\n        \"       be specified as the \\\"target\\\" property using -p\");\n    System.out.println(\"  -load:  run the loading phase of the workload\");\n    System.out.println(\"  -t:  run the transactions phase of the workload (default)\");\n    System.out.println(\"  -db dbname: specify the name of the DB to use (default: com.yahoo.ycsb.BasicDB) - \\n\" +\n        \"        can also be specified as the \\\"db\\\" property using -p\");\n    System.out.println(\"  -P propertyfile: load properties from the given file. 
Multiple files can\");\n    System.out.println(\"           be specified, and will be processed in the order specified\");\n    System.out.println(\"  -p name=value:  specify a property to be passed to the DB and workloads;\");\n    System.out.println(\"          multiple properties can be specified, and override any\");\n    System.out.println(\"          values in the propertyfile\");\n    System.out.println(\"  -s:  show status during run (default: no status)\");\n    System.out.println(\"  -l label:  use label for status (e.g. to label one experiment out of a whole batch)\");\n    System.out.println(\"\");\n    System.out.println(\"Required properties:\");\n    System.out.println(\"  \" + WORKLOAD_PROPERTY + \": the name of the workload class to use (e.g. \" +\n        \"com.yahoo.ycsb.workloads.CoreWorkload)\");\n    System.out.println(\"\");\n    System.out.println(\"To run the transaction phase from multiple servers, start a separate client on each.\");\n    System.out.println(\"To run the load phase from multiple servers, start a separate client on each; additionally,\");\n    System.out.println(\"use the \\\"insertcount\\\" and \\\"insertstart\\\" properties to divide up the records \" +\n        \"to be inserted\");\n  }\n\n  public static boolean checkRequiredProperties(Properties props) {\n    if (props.getProperty(WORKLOAD_PROPERTY) == null) {\n      System.out.println(\"Missing property: \" + WORKLOAD_PROPERTY);\n      return false;\n    }\n\n    return true;\n  }\n\n\n  /**\n   * Exports the measurements to either sysout or a file using the exporter\n   * loaded from conf.\n   *\n   * @throws IOException Either failed to write to output stream or failed to close it.\n   */\n  private static void exportMeasurements(Properties props, int opcount, long runtime)\n      throws IOException {\n    MeasurementsExporter exporter = null;\n    try {\n      // if no destination file is provided the results will be written to stdout\n      OutputStream out;\n     
 String exportFile = props.getProperty(EXPORT_FILE_PROPERTY);\n      if (exportFile == null) {\n        out = System.out;\n      } else {\n        out = new FileOutputStream(exportFile);\n      }\n\n      // if no exporter is provided the default text one will be used\n      String exporterStr = props.getProperty(EXPORTER_PROPERTY,\n          \"com.yahoo.ycsb.measurements.exporter.TextMeasurementsExporter\");\n      try {\n        exporter = (MeasurementsExporter) Class.forName(exporterStr).getConstructor(OutputStream.class)\n            .newInstance(out);\n      } catch (Exception e) {\n        System.err.println(\"Could not find exporter \" + exporterStr\n            + \", will use default text reporter.\");\n        e.printStackTrace();\n        exporter = new TextMeasurementsExporter(out);\n      }\n\n      exporter.write(\"OVERALL\", \"RunTime(ms)\", runtime);\n      double throughput = 1000.0 * (opcount) / (runtime);\n      exporter.write(\"OVERALL\", \"Throughput(ops/sec)\", throughput);\n\n      final Map<String, Long[]> gcs = Utils.getGCStatst();\n      long totalGCCount = 0;\n      long totalGCTime = 0;\n      for (final Entry<String, Long[]> entry : gcs.entrySet()) {\n        exporter.write(\"TOTAL_GCS_\" + entry.getKey(), \"Count\", entry.getValue()[0]);\n        exporter.write(\"TOTAL_GC_TIME_\" + entry.getKey(), \"Time(ms)\", entry.getValue()[1]);\n        exporter.write(\"TOTAL_GC_TIME_%_\" + entry.getKey(), \"Time(%)\",\n            ((double) entry.getValue()[1] / runtime) * (double) 100);\n        totalGCCount += entry.getValue()[0];\n        totalGCTime += entry.getValue()[1];\n      }\n      exporter.write(\"TOTAL_GCs\", \"Count\", totalGCCount);\n\n      exporter.write(\"TOTAL_GC_TIME\", \"Time(ms)\", totalGCTime);\n      exporter.write(\"TOTAL_GC_TIME_%\", \"Time(%)\", ((double) totalGCTime / runtime) * (double) 100);\n      if (statusthread != null && statusthread.trackJVMStats()) {\n        exporter.write(\"MAX_MEM_USED\", \"MBs\", 
statusthread.getMaxUsedMem());\n        exporter.write(\"MIN_MEM_USED\", \"MBs\", statusthread.getMinUsedMem());\n        exporter.write(\"MAX_THREADS\", \"Count\", statusthread.getMaxThreads());\n        exporter.write(\"MIN_THREADS\", \"Count\", statusthread.getMinThreads());\n        exporter.write(\"MAX_SYS_LOAD_AVG\", \"Load\", statusthread.getMaxLoadAvg());\n        exporter.write(\"MIN_SYS_LOAD_AVG\", \"Load\", statusthread.getMinLoadAvg());\n      }\n\n      Measurements.getMeasurements().exportMeasurements(exporter);\n    } finally {\n      if (exporter != null) {\n        exporter.close();\n      }\n    }\n  }\n\n  @SuppressWarnings(\"unchecked\")\n  public static void main(String[] args) {\n    Properties props = parseArguments(args);\n\n    boolean status = Boolean.valueOf(props.getProperty(STATUS_PROPERTY, String.valueOf(false)));\n    String label = props.getProperty(LABEL_PROPERTY, \"\");\n\n    long maxExecutionTime = Integer.parseInt(props.getProperty(MAX_EXECUTION_TIME, \"0\"));\n\n    //get number of threads, target and db\n    int threadcount = Integer.parseInt(props.getProperty(THREAD_COUNT_PROPERTY, \"1\"));\n    String dbname = props.getProperty(DB_PROPERTY, \"com.yahoo.ycsb.BasicDB\");\n    int target = Integer.parseInt(props.getProperty(TARGET_PROPERTY, \"0\"));\n\n    //compute the target throughput\n    double targetperthreadperms = -1;\n    if (target > 0) {\n      double targetperthread = ((double) target) / ((double) threadcount);\n      targetperthreadperms = targetperthread / 1000.0;\n    }\n\n    Thread warningthread = setupWarningThread();\n    warningthread.start();\n\n    Measurements.setProperties(props);\n\n    Workload workload = getWorkload(props);\n\n    final Tracer tracer = getTracer(props, workload);\n\n    initWorkload(props, warningthread, workload, tracer);\n\n    System.err.println(\"Starting test.\");\n    final CountDownLatch completeLatch = new CountDownLatch(threadcount);\n\n    final List<ClientThread> clients = 
initDb(dbname, props, threadcount, targetperthreadperms,\n        workload, tracer, completeLatch);\n\n    if (status) {\n      boolean standardstatus = false;\n      if (props.getProperty(Measurements.MEASUREMENT_TYPE_PROPERTY, \"\").compareTo(\"timeseries\") == 0) {\n        standardstatus = true;\n      }\n      int statusIntervalSeconds = Integer.parseInt(props.getProperty(\"status.interval\", \"10\"));\n      boolean trackJVMStats = props.getProperty(Measurements.MEASUREMENT_TRACK_JVM_PROPERTY,\n          Measurements.MEASUREMENT_TRACK_JVM_PROPERTY_DEFAULT).equals(\"true\");\n      statusthread = new StatusThread(completeLatch, clients, label, standardstatus, statusIntervalSeconds,\n          trackJVMStats);\n      statusthread.start();\n    }\n\n    Thread terminator = null;\n    long st;\n    long en;\n    int opsDone;\n\n    try (final TraceScope span = tracer.newScope(CLIENT_WORKLOAD_SPAN)) {\n\n      final Map<Thread, ClientThread> threads = new HashMap<>(threadcount);\n      for (ClientThread client : clients) {\n        threads.put(new Thread(tracer.wrap(client, \"ClientThread\")), client);\n      }\n\n      st = System.currentTimeMillis();\n\n      for (Thread t : threads.keySet()) {\n        t.start();\n      }\n\n      if (maxExecutionTime > 0) {\n        terminator = new TerminatorThread(maxExecutionTime, threads.keySet(), workload);\n        terminator.start();\n      }\n\n      opsDone = 0;\n\n      for (Map.Entry<Thread, ClientThread> entry : threads.entrySet()) {\n        try {\n          entry.getKey().join();\n          opsDone += entry.getValue().getOpsDone();\n        } catch (InterruptedException ignored) {\n          // ignored\n        }\n      }\n\n      en = System.currentTimeMillis();\n    }\n\n    try {\n      try (final TraceScope span = tracer.newScope(CLIENT_CLEANUP_SPAN)) {\n\n        if (terminator != null && !terminator.isInterrupted()) {\n          terminator.interrupt();\n        }\n\n        if (status) {\n          // wake 
up status thread if it's asleep\n          statusthread.interrupt();\n          // at this point we assume all the monitored threads are already gone as per above join loop.\n          try {\n            statusthread.join();\n          } catch (InterruptedException ignored) {\n            // ignored\n          }\n        }\n\n        workload.cleanup();\n      }\n    } catch (WorkloadException e) {\n      e.printStackTrace();\n      e.printStackTrace(System.out);\n      System.exit(0);\n    }\n\n    try {\n      try (final TraceScope span = tracer.newScope(CLIENT_EXPORT_MEASUREMENTS_SPAN)) {\n        exportMeasurements(props, opsDone, en - st);\n      }\n    } catch (IOException e) {\n      System.err.println(\"Could not export measurements, error: \" + e.getMessage());\n      e.printStackTrace();\n      System.exit(-1);\n    }\n\n    System.exit(0);\n  }\n\n  private static List<ClientThread> initDb(String dbname, Properties props, int threadcount,\n                                           double targetperthreadperms, Workload workload, Tracer tracer,\n                                           CountDownLatch completeLatch) {\n    boolean initFailed = false;\n    boolean dotransactions = Boolean.valueOf(props.getProperty(DO_TRANSACTIONS_PROPERTY, String.valueOf(true)));\n\n    final List<ClientThread> clients = new ArrayList<>(threadcount);\n    try (final TraceScope span = tracer.newScope(CLIENT_INIT_SPAN)) {\n      int opcount;\n      if (dotransactions) {\n        opcount = Integer.parseInt(props.getProperty(OPERATION_COUNT_PROPERTY, \"0\"));\n      } else {\n        if (props.containsKey(INSERT_COUNT_PROPERTY)) {\n          opcount = Integer.parseInt(props.getProperty(INSERT_COUNT_PROPERTY, \"0\"));\n        } else {\n          opcount = Integer.parseInt(props.getProperty(RECORD_COUNT_PROPERTY, DEFAULT_RECORD_COUNT));\n        }\n      }\n\n      for (int threadid = 0; threadid < threadcount; threadid++) {\n        DB db;\n        try {\n          db = 
DBFactory.newDB(dbname, props, tracer);\n        } catch (UnknownDBException e) {\n          System.out.println(\"Unknown DB \" + dbname);\n          initFailed = true;\n          break;\n        }\n\n        int threadopcount = opcount / threadcount;\n\n        // ensure correct number of operations, in case opcount is not a multiple of threadcount\n        if (threadid < opcount % threadcount) {\n          ++threadopcount;\n        }\n\n        ClientThread t = new ClientThread(db, dotransactions, workload, props, threadopcount, targetperthreadperms,\n            completeLatch);\n        t.setThreadId(threadid);\n        t.setThreadCount(threadcount);\n        clients.add(t);\n      }\n\n      if (initFailed) {\n        System.err.println(\"Error initializing datastore bindings.\");\n        System.exit(0);\n      }\n    }\n    return clients;\n  }\n\n  private static Tracer getTracer(Properties props, Workload workload) {\n    return new Tracer.Builder(\"YCSB \" + workload.getClass().getSimpleName())\n        .conf(getHTraceConfiguration(props))\n        .build();\n  }\n\n  private static void initWorkload(Properties props, Thread warningthread, Workload workload, Tracer tracer) {\n    try {\n      try (final TraceScope span = tracer.newScope(CLIENT_WORKLOAD_INIT_SPAN)) {\n        workload.init(props);\n        warningthread.interrupt();\n      }\n    } catch (WorkloadException e) {\n      e.printStackTrace();\n      e.printStackTrace(System.out);\n      System.exit(0);\n    }\n  }\n\n  private static HTraceConfiguration getHTraceConfiguration(Properties props) {\n    final Map<String, String> filteredProperties = new HashMap<>();\n    for (String key : props.stringPropertyNames()) {\n      if (key.startsWith(HTRACE_KEY_PREFIX)) {\n        filteredProperties.put(key.substring(HTRACE_KEY_PREFIX.length()), props.getProperty(key));\n      }\n    }\n    return HTraceConfiguration.fromMap(filteredProperties);\n  }\n\n  private static Thread setupWarningThread() {\n   
 //show a warning message that creating the workload is taking a while\n    //but only do so if it is taking longer than 2 seconds\n    //(showing the message right away if the setup wasn't taking very long was confusing people)\n    return new Thread() {\n      @Override\n      public void run() {\n        try {\n          sleep(2000);\n        } catch (InterruptedException e) {\n          return;\n        }\n        System.err.println(\" (might take a few minutes for large data sets)\");\n      }\n    };\n  }\n\n  private static Workload getWorkload(Properties props) {\n    ClassLoader classLoader = Client.class.getClassLoader();\n\n    try {\n      Properties projectProp = new Properties();\n      projectProp.load(classLoader.getResourceAsStream(\"project.properties\"));\n      System.err.println(\"YCSB Client \" + projectProp.getProperty(\"version\"));\n    } catch (IOException e) {\n      System.err.println(\"Unable to retrieve client version.\");\n    }\n\n    System.err.println();\n    System.err.println(\"Loading workload...\");\n    try {\n      Class workloadclass = classLoader.loadClass(props.getProperty(WORKLOAD_PROPERTY));\n\n      return (Workload) workloadclass.newInstance();\n    } catch (Exception e) {\n      e.printStackTrace();\n      e.printStackTrace(System.out);\n      System.exit(0);\n    }\n\n    return null;\n  }\n\n  private static Properties parseArguments(String[] args) {\n    Properties props = new Properties();\n    System.err.print(\"Command line:\");\n    for (String arg : args) {\n      System.err.print(\" \" + arg);\n    }\n\n    Properties fileprops = new Properties();\n    int argindex = 0;\n\n    if (args.length == 0) {\n      usageMessage();\n      System.out.println(\"At least one argument specifying a workload is required.\");\n      System.exit(0);\n    }\n\n    while (args[argindex].startsWith(\"-\")) {\n      if (args[argindex].compareTo(\"-threads\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n   
       usageMessage();\n          System.out.println(\"Missing argument value for -threads.\");\n          System.exit(0);\n        }\n        int tcount = Integer.parseInt(args[argindex]);\n        props.setProperty(THREAD_COUNT_PROPERTY, String.valueOf(tcount));\n        argindex++;\n      } else if (args[argindex].compareTo(\"-target\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.out.println(\"Missing argument value for -target.\");\n          System.exit(0);\n        }\n        int ttarget = Integer.parseInt(args[argindex]);\n        props.setProperty(TARGET_PROPERTY, String.valueOf(ttarget));\n        argindex++;\n      } else if (args[argindex].compareTo(\"-load\") == 0) {\n        props.setProperty(DO_TRANSACTIONS_PROPERTY, String.valueOf(false));\n        argindex++;\n      } else if (args[argindex].compareTo(\"-t\") == 0) {\n        props.setProperty(DO_TRANSACTIONS_PROPERTY, String.valueOf(true));\n        argindex++;\n      } else if (args[argindex].compareTo(\"-s\") == 0) {\n        props.setProperty(STATUS_PROPERTY, String.valueOf(true));\n        argindex++;\n      } else if (args[argindex].compareTo(\"-db\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.out.println(\"Missing argument value for -db.\");\n          System.exit(0);\n        }\n        props.setProperty(DB_PROPERTY, args[argindex]);\n        argindex++;\n      } else if (args[argindex].compareTo(\"-l\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.out.println(\"Missing argument value for -l.\");\n          System.exit(0);\n        }\n        props.setProperty(LABEL_PROPERTY, args[argindex]);\n        argindex++;\n      } else if (args[argindex].compareTo(\"-P\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          
System.out.println(\"Missing argument value for -P.\");\n          System.exit(0);\n        }\n        String propfile = args[argindex];\n        argindex++;\n\n        Properties myfileprops = new Properties();\n        try {\n          myfileprops.load(new FileInputStream(propfile));\n        } catch (IOException e) {\n          System.out.println(\"Unable to open the properties file \" + propfile);\n          System.out.println(e.getMessage());\n          System.exit(0);\n        }\n\n        //Issue #5 - remove call to stringPropertyNames to make compilable under Java 1.5\n        for (Enumeration e = myfileprops.propertyNames(); e.hasMoreElements();) {\n          String prop = (String) e.nextElement();\n\n          fileprops.setProperty(prop, myfileprops.getProperty(prop));\n        }\n\n      } else if (args[argindex].compareTo(\"-p\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.out.println(\"Missing argument value for -p\");\n          System.exit(0);\n        }\n        int eq = args[argindex].indexOf('=');\n        if (eq < 0) {\n          usageMessage();\n          System.out.println(\"Argument '-p' expected to be in key=value format (e.g., -p operationcount=99999)\");\n          System.exit(0);\n        }\n\n        String name = args[argindex].substring(0, eq);\n        String value = args[argindex].substring(eq + 1);\n        props.put(name, value);\n        argindex++;\n      } else {\n        usageMessage();\n        System.out.println(\"Unknown option \" + args[argindex]);\n        System.exit(0);\n      }\n\n      if (argindex >= args.length) {\n        break;\n      }\n    }\n\n    if (argindex != args.length) {\n      usageMessage();\n      if (argindex < args.length) {\n        System.out.println(\"An argument value without corresponding argument specifier (e.g., -p, -s) was found. 
\"\n            + \"We expected an argument specifier and instead found \" + args[argindex]);\n      } else {\n        System.out.println(\"An argument specifier without corresponding value was found at the end of the supplied \" +\n            \"command line arguments.\");\n      }\n      System.exit(0);\n    }\n\n    //overwrite file properties with properties from the command line\n\n    //Issue #5 - remove call to stringPropertyNames to make compilable under Java 1.5\n    for (Enumeration e = props.propertyNames(); e.hasMoreElements();) {\n      String prop = (String) e.nextElement();\n\n      fileprops.setProperty(prop, props.getProperty(prop));\n    }\n\n    props = fileprops;\n\n    if (!checkRequiredProperties(props)) {\n      System.out.println(\"Required property check failed.\");\n      System.exit(0);\n    }\n\n    return props;\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/CommandLine.java",
    "content": "/**\n * Copyright (c) 2010 Yahoo! Inc. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport com.yahoo.ycsb.workloads.CoreWorkload;\n\nimport java.io.BufferedReader;\nimport java.io.FileInputStream;\nimport java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.util.*;\n\n/**\n * A simple command line client to a database, using the appropriate com.yahoo.ycsb.DB implementation.\n */\npublic final class CommandLine {\n  private CommandLine() {\n    //not used\n  }\n\n  public static final String DEFAULT_DB = \"com.yahoo.ycsb.BasicDB\";\n\n  public static void usageMessage() {\n    System.out.println(\"YCSB Command Line Client\");\n    System.out.println(\"Usage: java com.yahoo.ycsb.CommandLine [options]\");\n    System.out.println(\"Options:\");\n    System.out.println(\"  -P filename: Specify a property file\");\n    System.out.println(\"  -p name=value: Specify a property value\");\n    System.out.println(\"  -db classname: Use a specified DB class (can also set the \\\"db\\\" property)\");\n    System.out.println(\"  -table tablename: Use the table name instead of the default \\\"\" +\n        CoreWorkload.TABLENAME_PROPERTY_DEFAULT + \"\\\"\");\n    System.out.println();\n  }\n\n  public static void help() {\n    System.out.println(\"Commands:\");\n    System.out.println(\"  read key [field1 field2 ...] 
- Read a record\");\n    System.out.println(\"  scan key recordcount [field1 field2 ...] - Scan starting at key\");\n    System.out.println(\"  insert key name1=value1 [name2=value2 ...] - Insert a new record\");\n    System.out.println(\"  update key name1=value1 [name2=value2 ...] - Update a record\");\n    System.out.println(\"  delete key - Delete a record\");\n    System.out.println(\"  table [tablename] - Get or [set] the name of the table\");\n    System.out.println(\"  quit - Quit\");\n  }\n\n  public static void main(String[] args) {\n\n    Properties props = new Properties();\n    Properties fileprops = new Properties();\n\n    parseArguments(args, props, fileprops);\n\n    for (Enumeration e = props.propertyNames(); e.hasMoreElements();) {\n      String prop = (String) e.nextElement();\n\n      fileprops.setProperty(prop, props.getProperty(prop));\n    }\n\n    props = fileprops;\n\n    System.out.println(\"YCSB Command Line client\");\n    System.out.println(\"Type \\\"help\\\" for command line help\");\n    System.out.println(\"Start with \\\"-help\\\" for usage info\");\n\n    String table = props.getProperty(CoreWorkload.TABLENAME_PROPERTY, CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n\n    //create a DB\n    String dbname = props.getProperty(Client.DB_PROPERTY, DEFAULT_DB);\n\n    ClassLoader classLoader = CommandLine.class.getClassLoader();\n\n    DB db = null;\n\n    try {\n      Class dbclass = classLoader.loadClass(dbname);\n      db = (DB) dbclass.newInstance();\n    } catch (Exception e) {\n      e.printStackTrace();\n      System.exit(0);\n    }\n\n    db.setProperties(props);\n    try {\n      db.init();\n    } catch (DBException e) {\n      e.printStackTrace();\n      System.exit(0);\n    }\n\n    System.out.println(\"Connected.\");\n\n    //main loop\n    BufferedReader br = new BufferedReader(new InputStreamReader(System.in));\n\n    for (;;) {\n      //get user input\n      System.out.print(\"> \");\n\n      String input = null;\n\n      
try {\n        input = br.readLine();\n      } catch (IOException e) {\n        e.printStackTrace();\n        System.exit(1);\n      }\n\n      // readLine() returns null at end of stream (e.g. Ctrl-D); exit the loop instead of throwing NPE\n      if (input == null) {\n        break;\n      }\n\n      if (input.compareTo(\"\") == 0) {\n        continue;\n      }\n\n      if (input.compareTo(\"help\") == 0) {\n        help();\n        continue;\n      }\n\n      if (input.compareTo(\"quit\") == 0) {\n        break;\n      }\n\n      String[] tokens = input.split(\" \");\n\n      long st = System.currentTimeMillis();\n      //handle commands\n      if (tokens[0].compareTo(\"table\") == 0) {\n        handleTable(tokens, table);\n      } else if (tokens[0].compareTo(\"read\") == 0) {\n        handleRead(tokens, table, db);\n      } else if (tokens[0].compareTo(\"scan\") == 0) {\n        handleScan(tokens, table, db);\n      } else if (tokens[0].compareTo(\"update\") == 0) {\n        handleUpdate(tokens, table, db);\n      } else if (tokens[0].compareTo(\"insert\") == 0) {\n        handleInsert(tokens, table, db);\n      } else if (tokens[0].compareTo(\"delete\") == 0) {\n        handleDelete(tokens, table, db);\n      } else {\n        System.out.println(\"Error: unknown command \\\"\" + tokens[0] + \"\\\"\");\n      }\n\n      System.out.println((System.currentTimeMillis() - st) + \" ms\");\n    }\n  }\n\n  private static void parseArguments(String[] args, Properties props, Properties fileprops) {\n    int argindex = 0;\n    while ((argindex < args.length) && (args[argindex].startsWith(\"-\"))) {\n      if ((args[argindex].compareTo(\"-help\") == 0) ||\n          (args[argindex].compareTo(\"--help\") == 0) ||\n          (args[argindex].compareTo(\"-?\") == 0) ||\n          (args[argindex].compareTo(\"--?\") == 0)) {\n        usageMessage();\n        System.exit(0);\n      }\n\n      if (args[argindex].compareTo(\"-db\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.exit(0);\n        }\n        props.setProperty(Client.DB_PROPERTY, args[argindex]);\n        
argindex++;\n      } else if (args[argindex].compareTo(\"-P\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.exit(0);\n        }\n        String propfile = args[argindex];\n        argindex++;\n\n        Properties myfileprops = new Properties();\n        try {\n          myfileprops.load(new FileInputStream(propfile));\n        } catch (IOException e) {\n          System.out.println(e.getMessage());\n          System.exit(0);\n        }\n\n        for (Enumeration e = myfileprops.propertyNames(); e.hasMoreElements();) {\n          String prop = (String) e.nextElement();\n\n          fileprops.setProperty(prop, myfileprops.getProperty(prop));\n        }\n\n      } else if (args[argindex].compareTo(\"-p\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.exit(0);\n        }\n        int eq = args[argindex].indexOf('=');\n        if (eq < 0) {\n          usageMessage();\n          System.exit(0);\n        }\n\n        String name = args[argindex].substring(0, eq);\n        String value = args[argindex].substring(eq + 1);\n        props.put(name, value);\n        argindex++;\n      } else if (args[argindex].compareTo(\"-table\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.exit(0);\n        }\n        props.put(CoreWorkload.TABLENAME_PROPERTY, args[argindex]);\n\n        argindex++;\n      } else {\n        System.out.println(\"Unknown option \" + args[argindex]);\n        usageMessage();\n        System.exit(0);\n      }\n\n      if (argindex >= args.length) {\n        break;\n      }\n    }\n\n    if (argindex != args.length) {\n      usageMessage();\n      System.exit(0);\n    }\n  }\n\n  private static void handleDelete(String[] tokens, String table, DB db) {\n    if (tokens.length != 2) {\n      System.out.println(\"Error: syntax is \\\"delete keyname\\\"\");\n    
} else {\n      Status ret = db.delete(table, tokens[1]);\n      System.out.println(\"Return result: \" + ret.getName());\n    }\n  }\n\n  private static void handleInsert(String[] tokens, String table, DB db) {\n    if (tokens.length < 3) {\n      System.out.println(\"Error: syntax is \\\"insert keyname name1=value1 [name2=value2 ...]\\\"\");\n    } else {\n      HashMap<String, ByteIterator> values = new HashMap<>();\n\n      for (int i = 2; i < tokens.length; i++) {\n        String[] nv = tokens[i].split(\"=\");\n        values.put(nv[0], new StringByteIterator(nv[1]));\n      }\n\n      Status ret = db.insert(table, tokens[1], values);\n      System.out.println(\"Result: \" + ret.getName());\n    }\n  }\n\n  private static void handleUpdate(String[] tokens, String table, DB db) {\n    if (tokens.length < 3) {\n      System.out.println(\"Error: syntax is \\\"update keyname name1=value1 [name2=value2 ...]\\\"\");\n    } else {\n      HashMap<String, ByteIterator> values = new HashMap<>();\n\n      for (int i = 2; i < tokens.length; i++) {\n        String[] nv = tokens[i].split(\"=\");\n        values.put(nv[0], new StringByteIterator(nv[1]));\n      }\n\n      Status ret = db.update(table, tokens[1], values);\n      System.out.println(\"Result: \" + ret.getName());\n    }\n  }\n\n  private static void handleScan(String[] tokens, String table, DB db) {\n    if (tokens.length < 3) {\n      System.out.println(\"Error: syntax is \\\"scan keyname scanlength [field1 field2 ...]\\\"\");\n    } else {\n      Set<String> fields = null;\n\n      if (tokens.length > 3) {\n        fields = new HashSet<>();\n\n        fields.addAll(Arrays.asList(tokens).subList(3, tokens.length));\n      }\n\n      Vector<HashMap<String, ByteIterator>> results = new Vector<>();\n      Status ret = db.scan(table, tokens[1], Integer.parseInt(tokens[2]), fields, results);\n      System.out.println(\"Result: \" + ret.getName());\n      int record = 0;\n      if (results.isEmpty()) {\n        
System.out.println(\"0 records\");\n      } else {\n        System.out.println(\"--------------------------------\");\n      }\n      for (Map<String, ByteIterator> result : results) {\n        System.out.println(\"Record \" + (record++));\n        for (Map.Entry<String, ByteIterator> ent : result.entrySet()) {\n          System.out.println(ent.getKey() + \"=\" + ent.getValue());\n        }\n        System.out.println(\"--------------------------------\");\n      }\n    }\n  }\n\n  private static void handleRead(String[] tokens, String table, DB db) {\n    if (tokens.length == 1) {\n      System.out.println(\"Error: syntax is \\\"read keyname [field1 field2 ...]\\\"\");\n    } else {\n      Set<String> fields = null;\n\n      if (tokens.length > 2) {\n        fields = new HashSet<>();\n\n        fields.addAll(Arrays.asList(tokens).subList(2, tokens.length));\n      }\n\n      HashMap<String, ByteIterator> result = new HashMap<>();\n      Status ret = db.read(table, tokens[1], fields, result);\n      System.out.println(\"Return code: \" + ret.getName());\n      for (Map.Entry<String, ByteIterator> ent : result.entrySet()) {\n        System.out.println(ent.getKey() + \"=\" + ent.getValue());\n      }\n    }\n  }\n\n  private static void handleTable(String[] tokens, String table) {\n    if (tokens.length == 1) {\n      System.out.println(\"Using table \\\"\" + table + \"\\\"\");\n    } else if (tokens.length == 2) {\n      table = tokens[1];\n      System.out.println(\"Using table \\\"\" + table + \"\\\"\");\n    } else {\n      System.out.println(\"Error: syntax is \\\"table tablename\\\"\");\n    }\n  }\n\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/DB.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\n/**\n * A layer for accessing a database to be benchmarked. Each thread in the client\n * will be given its own instance of whatever DB class is to be used in the test.\n * This class should be constructed using a no-argument constructor, so we can\n * load it dynamically. Any argument-based initialization should be\n * done by init().\n *\n * Note that YCSB does not make any use of the return codes returned by this class.\n * Instead, it keeps a count of the return values and presents them to the user.\n *\n * The semantics of methods such as insert, update and delete vary from database\n * to database.  In particular, operations may or may not be durable once these\n * methods commit, and some systems may return 'success' regardless of whether\n * or not a tuple with a matching key existed before the call.  Rather than dictate\n * the exact semantics of these methods, we recommend you either implement them\n * to match the database's default semantics, or the semantics of your \n * target application.  
For the sake of comparison between experiments we also \n * recommend you explain the semantics you chose when presenting performance results.\n */\npublic abstract class DB {\n  /**\n   * Properties for configuring this DB.\n   */\n  private Properties properties = new Properties();\n\n  /**\n   * Set the properties for this DB.\n   */\n  public void setProperties(Properties p) {\n    properties = p;\n\n  }\n\n  /**\n   * Get the set of properties for this DB.\n   */\n  public Properties getProperties() {\n    return properties;\n  }\n\n  /**\n   * Initialize any state for this DB.\n   * Called once per DB instance; there is one DB instance per client thread.\n   */\n  public void init() throws DBException {\n  }\n\n  /**\n   * Cleanup any state for this DB.\n   * Called once per DB instance; there is one DB instance per client thread.\n   */\n  public void cleanup() throws DBException {\n  }\n\n  /**\n   * Read a record from the database. Each field/value pair from the result will be stored in a HashMap.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to read.\n   * @param fields The list of fields to read, or null for all of them\n   * @param result A HashMap of field/value pairs for the result\n   * @return The result of the operation.\n   */\n  public abstract Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result);\n\n  /**\n   * Perform a range scan for a set of records in the database. 
Each field/value pair from the result will be stored\n   * in a HashMap.\n   *\n   * @param table The name of the table\n   * @param startkey The record key of the first record to read.\n   * @param recordcount The number of records to read\n   * @param fields The list of fields to read, or null for all of them\n   * @param result A Vector of HashMaps, where each HashMap is a set of field/value pairs for one record\n   * @return The result of the operation.\n   */\n  public abstract Status scan(String table, String startkey, int recordcount, Set<String> fields,\n                              Vector<HashMap<String, ByteIterator>> result);\n\n  /**\n   * Update a record in the database. Any field/value pairs in the specified values HashMap will be written into the\n   * record with the specified record key, overwriting any existing values with the same field name.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to write.\n   * @param values A HashMap of field/value pairs to update in the record\n   * @return The result of the operation.\n   */\n  public abstract Status update(String table, String key, Map<String, ByteIterator> values);\n\n  /**\n   * Insert a record in the database. Any field/value pairs in the specified values HashMap will be written into the\n   * record with the specified record key.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to insert.\n   * @param values A HashMap of field/value pairs to insert in the record\n   * @return The result of the operation.\n   */\n  public abstract Status insert(String table, String key, Map<String, ByteIterator> values);\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to delete.\n   * @return The result of the operation.\n   */\n  public abstract Status delete(String table, String key);\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/DBException.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\n/**\n * Something bad happened while interacting with the database.\n */\npublic class DBException extends Exception {\n  /**\n   *\n   */\n  private static final long serialVersionUID = 6646883591588721475L;\n\n  public DBException(String message) {\n    super(message);\n  }\n\n  public DBException() {\n    super();\n  }\n\n  public DBException(String message, Throwable cause) {\n    super(message, cause);\n  }\n\n  public DBException(Throwable cause) {\n    super(cause);\n  }\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/DBFactory.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport org.apache.htrace.core.Tracer;\n\nimport java.util.Properties;\n\n/**\n * Creates a DB layer by dynamically classloading the specified DB class.\n */\npublic final class DBFactory {\n  private DBFactory() {\n    // not used\n  }\n\n  public static DB newDB(String dbname, Properties properties, final Tracer tracer) throws UnknownDBException {\n    ClassLoader classLoader = DBFactory.class.getClassLoader();\n\n    DB ret;\n\n    try {\n      Class dbclass = classLoader.loadClass(dbname);\n\n      ret = (DB) dbclass.newInstance();\n    } catch (Exception e) {\n      e.printStackTrace();\n      return null;\n    }\n\n    ret.setProperties(properties);\n\n    return new DBWrapper(ret, tracer);\n  }\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/DBWrapper.java",
    "content": "/**\n * Copyright (c) 2010 Yahoo! Inc., 2016-2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport java.util.Map;\nimport com.yahoo.ycsb.measurements.Measurements;\nimport org.apache.htrace.core.TraceScope;\nimport org.apache.htrace.core.Tracer;\n\nimport java.util.*;\n\n/**\n * Wrapper around a \"real\" DB that measures latencies and counts return codes.\n * Also reports latency separately between OK and failed operations.\n */\npublic class DBWrapper extends DB {\n  private final DB db;\n  private final Measurements measurements;\n  private final Tracer tracer;\n\n  private boolean reportLatencyForEachError = false;\n  private Set<String> latencyTrackedErrors = new HashSet<String>();\n\n  private static final String REPORT_LATENCY_FOR_EACH_ERROR_PROPERTY = \"reportlatencyforeacherror\";\n  private static final String REPORT_LATENCY_FOR_EACH_ERROR_PROPERTY_DEFAULT = \"false\";\n\n  private static final String LATENCY_TRACKED_ERRORS_PROPERTY = \"latencytrackederrors\";\n\n  private final String scopeStringCleanup;\n  private final String scopeStringDelete;\n  private final String scopeStringInit;\n  private final String scopeStringInsert;\n  private final String scopeStringRead;\n  private final String scopeStringScan;\n  private final String scopeStringUpdate;\n\n  public DBWrapper(final DB db, final Tracer 
tracer) {\n    this.db = db;\n    measurements = Measurements.getMeasurements();\n    this.tracer = tracer;\n    final String simple = db.getClass().getSimpleName();\n    scopeStringCleanup = simple + \"#cleanup\";\n    scopeStringDelete = simple + \"#delete\";\n    scopeStringInit = simple + \"#init\";\n    scopeStringInsert = simple + \"#insert\";\n    scopeStringRead = simple + \"#read\";\n    scopeStringScan = simple + \"#scan\";\n    scopeStringUpdate = simple + \"#update\";\n  }\n\n  /**\n   * Set the properties for this DB.\n   */\n  public void setProperties(Properties p) {\n    db.setProperties(p);\n  }\n\n  /**\n   * Get the set of properties for this DB.\n   */\n  public Properties getProperties() {\n    return db.getProperties();\n  }\n\n  /**\n   * Initialize any state for this DB.\n   * Called once per DB instance; there is one DB instance per client thread.\n   */\n  public void init() throws DBException {\n    try (final TraceScope span = tracer.newScope(scopeStringInit)) {\n      db.init();\n\n      this.reportLatencyForEachError = Boolean.parseBoolean(getProperties().\n          getProperty(REPORT_LATENCY_FOR_EACH_ERROR_PROPERTY,\n              REPORT_LATENCY_FOR_EACH_ERROR_PROPERTY_DEFAULT));\n\n      if (!reportLatencyForEachError) {\n        String latencyTrackedErrorsProperty = getProperties().getProperty(LATENCY_TRACKED_ERRORS_PROPERTY, null);\n        if (latencyTrackedErrorsProperty != null) {\n          this.latencyTrackedErrors = new HashSet<String>(Arrays.asList(\n              latencyTrackedErrorsProperty.split(\",\")));\n        }\n      }\n\n      System.err.println(\"DBWrapper: report latency for each error is \" +\n          this.reportLatencyForEachError + \" and specific error codes to track\" +\n          \" for latency are: \" + this.latencyTrackedErrors.toString());\n    }\n  }\n\n  /**\n   * Cleanup any state for this DB.\n   * Called once per DB instance; there is one DB instance per client thread.\n   */\n  public void 
cleanup() throws DBException {\n    try (final TraceScope span = tracer.newScope(scopeStringCleanup)) {\n      long ist = measurements.getIntendedtartTimeNs();\n      long st = System.nanoTime();\n      db.cleanup();\n      long en = System.nanoTime();\n      measure(\"CLEANUP\", Status.OK, ist, st, en);\n    }\n  }\n\n  /**\n   * Read a record from the database. Each field/value pair from the result\n   * will be stored in a HashMap.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to read.\n   * @param fields The list of fields to read, or null for all of them\n   * @param result A HashMap of field/value pairs for the result\n   * @return The result of the operation.\n   */\n  public Status read(String table, String key, Set<String> fields,\n                     Map<String, ByteIterator> result) {\n    try (final TraceScope span = tracer.newScope(scopeStringRead)) {\n      long ist = measurements.getIntendedtartTimeNs();\n      long st = System.nanoTime();\n      Status res = db.read(table, key, fields, result);\n      long en = System.nanoTime();\n      measure(\"READ\", res, ist, st, en);\n      measurements.reportStatus(\"READ\", res);\n      return res;\n    }\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database.\n   * Each field/value pair from the result will be stored in a HashMap.\n   *\n   * @param table The name of the table\n   * @param startkey The record key of the first record to read.\n   * @param recordcount The number of records to read\n   * @param fields The list of fields to read, or null for all of them\n   * @param result A Vector of HashMaps, where each HashMap is a set of field/value pairs for one record\n   * @return The result of the operation.\n   */\n  public Status scan(String table, String startkey, int recordcount,\n                     Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    try (final TraceScope span = 
tracer.newScope(scopeStringScan)) {\n      long ist = measurements.getIntendedtartTimeNs();\n      long st = System.nanoTime();\n      Status res = db.scan(table, startkey, recordcount, fields, result);\n      long en = System.nanoTime();\n      measure(\"SCAN\", res, ist, st, en);\n      measurements.reportStatus(\"SCAN\", res);\n      return res;\n    }\n  }\n\n  private void measure(String op, Status result, long intendedStartTimeNanos,\n                       long startTimeNanos, long endTimeNanos) {\n    String measurementName = op;\n    if (result == null || !result.isOk()) {\n      // avoid a NullPointerException when the wrapped DB returns a null status\n      final String statusName = result == null ? \"NULL\" : result.getName();\n      if (this.reportLatencyForEachError ||\n          this.latencyTrackedErrors.contains(statusName)) {\n        measurementName = op + \"-\" + statusName;\n      } else {\n        measurementName = op + \"-FAILED\";\n      }\n    }\n    measurements.measure(measurementName,\n        (int) ((endTimeNanos - startTimeNanos) / 1000));\n    measurements.measureIntended(measurementName,\n        (int) ((endTimeNanos - intendedStartTimeNanos) / 1000));\n  }\n\n  /**\n   * Update a record in the database. 
Any field/value pairs in the specified values HashMap will be written into the\n   * record with the specified record key, overwriting any existing values with the same field name.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to write.\n   * @param values A HashMap of field/value pairs to update in the record\n   * @return The result of the operation.\n   */\n  public Status update(String table, String key,\n                       Map<String, ByteIterator> values) {\n    try (final TraceScope span = tracer.newScope(scopeStringUpdate)) {\n      long ist = measurements.getIntendedtartTimeNs();\n      long st = System.nanoTime();\n      Status res = db.update(table, key, values);\n      long en = System.nanoTime();\n      measure(\"UPDATE\", res, ist, st, en);\n      measurements.reportStatus(\"UPDATE\", res);\n      return res;\n    }\n  }\n\n  /**\n   * Insert a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified\n   * record key.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to insert.\n   * @param values A HashMap of field/value pairs to insert in the record\n   * @return The result of the operation.\n   */\n  public Status insert(String table, String key,\n                       Map<String, ByteIterator> values) {\n    try (final TraceScope span = tracer.newScope(scopeStringInsert)) {\n      long ist = measurements.getIntendedtartTimeNs();\n      long st = System.nanoTime();\n      Status res = db.insert(table, key, values);\n      long en = System.nanoTime();\n      measure(\"INSERT\", res, ist, st, en);\n      measurements.reportStatus(\"INSERT\", res);\n      return res;\n    }\n  }\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to delete.\n   * @return The result of the operation.\n   */\n  
public Status delete(String table, String key) {\n    try (final TraceScope span = tracer.newScope(scopeStringDelete)) {\n      long ist = measurements.getIntendedtartTimeNs();\n      long st = System.nanoTime();\n      Status res = db.delete(table, key);\n      long en = System.nanoTime();\n      measure(\"DELETE\", res, ist, st, en);\n      measurements.reportStatus(\"DELETE\", res);\n      return res;\n    }\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/GoodBadUglyDB.java",
    "content": "/**\n * Copyright (c) 2010 Yahoo! Inc. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Random;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.concurrent.locks.Lock;\nimport java.util.concurrent.locks.LockSupport;\nimport java.util.concurrent.locks.ReadWriteLock;\nimport java.util.concurrent.locks.ReentrantReadWriteLock;\n\nimport static java.util.concurrent.TimeUnit.MICROSECONDS;\n\n/**\n * Basic DB that just prints out the requested operations, instead of doing them against a database.\n */\npublic class GoodBadUglyDB extends DB {\n  public static final String SIMULATE_DELAY = \"gbudb.delays\";\n  public static final String SIMULATE_DELAY_DEFAULT = \"200,1000,10000,50000,100000\";\n  private static final ReadWriteLock DB_ACCESS = new ReentrantReadWriteLock();\n  private long[] delays;\n\n  public GoodBadUglyDB() {\n    delays = new long[]{200, 1000, 10000, 50000, 200000};\n  }\n\n  private void delay() {\n    final Random random = Utils.random();\n    double p = random.nextDouble();\n    int mod;\n    if (p < 0.9) {\n      mod = 0;\n    } else if (p < 0.99) {\n      mod = 1;\n    } else if (p < 0.9999) {\n      mod = 2;\n    } else {\n      mod = 3;\n    }\n    // this will make mod 3 pauses global\n    Lock lock = mod == 3 ? 
DB_ACCESS.writeLock() : DB_ACCESS.readLock();\n    if (mod == 3) {\n      System.out.println(\"OUCH\");\n    }\n    lock.lock();\n    try {\n      final long baseDelayNs = MICROSECONDS.toNanos(delays[mod]);\n      final int delayRangeNs = (int) (MICROSECONDS.toNanos(delays[mod + 1]) - baseDelayNs);\n      final long delayNs = baseDelayNs + random.nextInt(delayRangeNs);\n      final long deadline = System.nanoTime() + delayNs;\n      do {\n        LockSupport.parkNanos(deadline - System.nanoTime());\n      } while (System.nanoTime() < deadline && !Thread.interrupted());\n    } finally {\n      lock.unlock();\n    }\n\n  }\n\n  /**\n   * Initialize any state for this DB. Called once per DB instance; there is one DB instance per client thread.\n   */\n  public void init() {\n    int i = 0;\n    for (String delay : getProperties().getProperty(SIMULATE_DELAY, SIMULATE_DELAY_DEFAULT).split(\",\")) {\n      delays[i++] = Long.parseLong(delay);\n    }\n  }\n\n  /**\n   * Read a record from the database. Each field/value pair from the result will be stored in a HashMap.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to read.\n   * @param fields The list of fields to read, or null for all of them\n   * @param result A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    delay();\n    return Status.OK;\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. 
Each field/value pair from the result will be stored\n   * in a HashMap.\n   *\n   * @param table The name of the table\n   * @param startkey The record key of the first record to read.\n   * @param recordcount The number of records to read\n   * @param fields The list of fields to read, or null for all of them\n   * @param result A Vector of HashMaps, where each HashMap is a set of field/value pairs for one record\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\n                     Vector<HashMap<String, ByteIterator>> result) {\n    delay();\n\n    return Status.OK;\n  }\n\n  /**\n   * Update a record in the database. Any field/value pairs in the specified values HashMap will be written into the\n   * record with the specified record key, overwriting any existing values with the same field name.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to write.\n   * @param values A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    delay();\n\n    return Status.OK;\n  }\n\n  /**\n   * Insert a record in the database. 
Any field/value pairs in the specified values HashMap will be written into the\n   * record with the specified record key.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to insert.\n   * @param values A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    delay();\n    return Status.OK;\n  }\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status delete(String table, String key) {\n    delay();\n    return Status.OK;\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/InputStreamByteIterator.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb;\n\nimport java.io.IOException;\nimport java.io.InputStream;\n\n/**\n *  A ByteIterator that iterates through an inputstream of bytes.\n */\npublic class InputStreamByteIterator extends ByteIterator {\n  private long len;\n  private InputStream ins;\n  private long off;\n  private final boolean resetable;\n\n  public InputStreamByteIterator(InputStream ins, long len) {\n    this.len = len;\n    this.ins = ins;\n    off = 0;\n    resetable = ins.markSupported();\n    if (resetable) {\n      ins.mark((int) len);\n    }\n  }\n\n  @Override\n  public boolean hasNext() {\n    return off < len;\n  }\n\n  @Override\n  public byte nextByte() {\n    int ret;\n    try {\n      ret = ins.read();\n    } catch (Exception e) {\n      throw new IllegalStateException(e);\n    }\n    if (ret == -1) {\n      throw new IllegalStateException(\"Past EOF!\");\n    }\n    off++;\n    return (byte) ret;\n  }\n\n  @Override\n  public long bytesLeft() {\n    return len - off;\n  }\n\n  @Override\n  public void reset() {\n    if (resetable) {\n      try {\n        ins.reset();\n        ins.mark((int) len);\n      } catch (IOException e) {\n        throw new IllegalStateException(\"Failed to reset the input stream\", e);\n      }\n    }\n    throw new 
UnsupportedOperationException();\n  }\n  \n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/NumericByteIterator.java",
    "content": "/**\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb;\n\n/**\n * A byte iterator that handles encoding and decoding numeric values.\n * Currently this iterator can handle 64 bit signed values and double precision\n * floating point values.\n */\npublic class NumericByteIterator extends ByteIterator {\n  private final byte[] payload;\n  private final boolean floatingPoint;\n  private int off;\n  \n  public NumericByteIterator(final long value) {\n    floatingPoint = false;\n    payload = Utils.longToBytes(value);\n    off = 0;\n  }\n  \n  public NumericByteIterator(final double value) {\n    floatingPoint = true;\n    payload = Utils.doubleToBytes(value);\n    off = 0;\n  }\n  \n  @Override\n  public boolean hasNext() {\n    return off < payload.length;\n  }\n\n  @Override\n  public byte nextByte() {\n    return payload[off++];\n  }\n\n  @Override\n  public long bytesLeft() {\n    return payload.length - off;\n  }\n\n  @Override\n  public void reset() {\n    off = 0;\n  }\n  \n  public long getLong() {\n    if (floatingPoint) {\n      throw new IllegalStateException(\"Byte iterator is of the type double\");\n    }\n    return Utils.bytesToLong(payload);\n  }\n\n  public double getDouble() {\n    if (!floatingPoint) {\n      throw new IllegalStateException(\"Byte iterator is of the type long\");\n    }\n    return 
Utils.bytesToDouble(payload);\n  }\n\n  public boolean isFloatingPoint() {\n    return floatingPoint;\n  }\n\n}"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/RandomByteIterator.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb;\n\n/**\n *  A ByteIterator that generates a random sequence of bytes.\n */\npublic class RandomByteIterator extends ByteIterator {\n  private long len;\n  private long off;\n  private int bufOff;\n  private byte[] buf;\n\n  @Override\n  public boolean hasNext() {\n    return (off + bufOff) < len;\n  }\n\n  private void fillBytesImpl(byte[] buffer, int base) {\n    int bytes = Utils.random().nextInt();\n\n    switch (buffer.length - base) {\n    default:\n      buffer[base + 5] = (byte) (((bytes >> 25) & 95) + ' ');\n    case 5:\n      buffer[base + 4] = (byte) (((bytes >> 20) & 63) + ' ');\n    case 4:\n      buffer[base + 3] = (byte) (((bytes >> 15) & 31) + ' ');\n    case 3:\n      buffer[base + 2] = (byte) (((bytes >> 10) & 95) + ' ');\n    case 2:\n      buffer[base + 1] = (byte) (((bytes >> 5) & 63) + ' ');\n    case 1:\n      buffer[base + 0] = (byte) (((bytes) & 31) + ' ');\n    case 0:\n      break;\n    }\n  }\n\n  private void fillBytes() {\n    if (bufOff == buf.length) {\n      fillBytesImpl(buf, 0);\n      bufOff = 0;\n      off += buf.length;\n    }\n  }\n\n  public RandomByteIterator(long len) {\n    this.len = len;\n    this.buf = new byte[6];\n    this.bufOff = buf.length;\n    fillBytes();\n    this.off = 0;\n 
 }\n\n  public byte nextByte() {\n    fillBytes();\n    bufOff++;\n    return buf[bufOff - 1];\n  }\n\n  @Override\n  public int nextBuf(byte[] buffer, int bufOffset) {\n    int ret;\n    if (len - off < buffer.length - bufOffset) {\n      ret = (int) (len - off);\n    } else {\n      ret = buffer.length - bufOffset;\n    }\n    int i;\n    for (i = 0; i < ret; i += 6) {\n      fillBytesImpl(buffer, i + bufOffset);\n    }\n    off += ret;\n    return ret + bufOffset;\n  }\n\n  @Override\n  public long bytesLeft() {\n    return len - off - bufOff;\n  }\n\n  @Override\n  public void reset() {\n    off = 0;\n  }\n  \n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/Status.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\n/**\n * The result of an operation.\n */\npublic class Status {\n  private final String name;\n  private final String description;\n\n  /**\n   * @param name A short name for the status.\n   * @param description A description of the status.\n   */\n  public Status(String name, String description) {\n    super();\n    this.name = name;\n    this.description = description;\n  }\n\n  public String getName() {\n    return name;\n  }\n\n  public String getDescription() {\n    return description;\n  }\n\n  @Override\n  public String toString() {\n    return \"Status [name=\" + name + \", description=\" + description + \"]\";\n  }\n\n  @Override\n  public int hashCode() {\n    final int prime = 31;\n    int result = 1;\n    result = prime * result + ((description == null) ? 0 : description.hashCode());\n    result = prime * result + ((name == null) ? 
0 : name.hashCode());\n    return result;\n  }\n\n  @Override\n  public boolean equals(Object obj) {\n    if (this == obj) {\n      return true;\n    }\n    if (obj == null) {\n      return false;\n    }\n    if (getClass() != obj.getClass()) {\n      return false;\n    }\n    Status other = (Status) obj;\n    if (description == null) {\n      if (other.description != null) {\n        return false;\n      }\n    } else if (!description.equals(other.description)) {\n      return false;\n    }\n    if (name == null) {\n      if (other.name != null) {\n        return false;\n      }\n    } else if (!name.equals(other.name)) {\n      return false;\n    }\n    return true;\n  }\n\n  /**\n   * Is {@code this} a passing state for the operation: {@link Status#OK} or {@link Status#BATCHED_OK}.\n   * @return true if the operation is successful, false otherwise\n   */\n  public boolean isOk() {\n    return this == OK || this == BATCHED_OK;\n  }\n\n  public static final Status OK = new Status(\"OK\", \"The operation completed successfully.\");\n  public static final Status ERROR = new Status(\"ERROR\", \"The operation failed.\");\n  public static final Status NOT_FOUND = new Status(\"NOT_FOUND\", \"The requested record was not found.\");\n  public static final Status NOT_IMPLEMENTED = new Status(\"NOT_IMPLEMENTED\", \"The operation is not \" +\n      \"implemented for the current binding.\");\n  public static final Status UNEXPECTED_STATE = new Status(\"UNEXPECTED_STATE\", \"The operation reported\" +\n      \" success, but the result was not as expected.\");\n  public static final Status BAD_REQUEST = new Status(\"BAD_REQUEST\", \"The request was not valid.\");\n  public static final Status FORBIDDEN = new Status(\"FORBIDDEN\", \"The operation is forbidden.\");\n  public static final Status SERVICE_UNAVAILABLE = new Status(\"SERVICE_UNAVAILABLE\", \"Dependent \" +\n      \"service for the current binding is not available.\");\n  public static final Status BATCHED_OK = new 
Status(\"BATCHED_OK\", \"The operation has been batched by \" +\n      \"the binding to be executed later.\");\n}\n\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/StringByteIterator.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport java.util.HashMap;\nimport java.util.Map;\n\n/**\n * A ByteIterator that iterates through a string.\n */\npublic class StringByteIterator extends ByteIterator {\n  private String str;\n  private int off;\n\n  /**\n   * Put all of the entries of one map into the other, converting\n   * String values into ByteIterators.\n   */\n  public static void putAllAsByteIterators(Map<String, ByteIterator> out, Map<String, String> in) {\n    for (Map.Entry<String, String> entry : in.entrySet()) {\n      out.put(entry.getKey(), new StringByteIterator(entry.getValue()));\n    }\n  }\n\n  /**\n   * Put all of the entries of one map into the other, converting\n   * ByteIterator values into Strings.\n   */\n  public static void putAllAsStrings(Map<String, String> out, Map<String, ByteIterator> in) {\n    for (Map.Entry<String, ByteIterator> entry : in.entrySet()) {\n      out.put(entry.getKey(), entry.getValue().toString());\n    }\n  }\n\n  /**\n   * Create a copy of a map, converting the values from Strings to\n   * StringByteIterators.\n   */\n  public static Map<String, ByteIterator> getByteIteratorMap(Map<String, String> m) {\n    HashMap<String, ByteIterator> ret =\n        new HashMap<String, ByteIterator>();\n\n    for 
(Map.Entry<String, String> entry : m.entrySet()) {\n      ret.put(entry.getKey(), new StringByteIterator(entry.getValue()));\n    }\n    return ret;\n  }\n\n  /**\n   * Create a copy of a map, converting the values from\n   * StringByteIterators to Strings.\n   */\n  public static Map<String, String> getStringMap(Map<String, ByteIterator> m) {\n    HashMap<String, String> ret = new HashMap<String, String>();\n\n    for (Map.Entry<String, ByteIterator> entry : m.entrySet()) {\n      ret.put(entry.getKey(), entry.getValue().toString());\n    }\n    return ret;\n  }\n\n  public StringByteIterator(String s) {\n    this.str = s;\n    this.off = 0;\n  }\n\n  @Override\n  public boolean hasNext() {\n    return off < str.length();\n  }\n\n  @Override\n  public byte nextByte() {\n    byte ret = (byte) str.charAt(off);\n    off++;\n    return ret;\n  }\n\n  @Override\n  public long bytesLeft() {\n    return str.length() - off;\n  }\n\n  @Override\n  public void reset() {\n    off = 0;\n  }\n  \n  /**\n   * Specialization of general purpose toString() to avoid unnecessary\n   * copies.\n   * <p>\n   * Creating a new StringByteIterator, then calling toString()\n   * yields the original String object, and does not perform any copies\n   * or String conversion operations.\n   * </p>\n   */\n  @Override\n  public String toString() {\n    if (off > 0) {\n      return super.toString();\n    } else {\n      return str;\n    }\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/TerminatorThread.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb;\n\nimport java.util.Collection;\n\n/**\n * A thread that waits for the maximum specified time and then interrupts all the client\n * threads passed at initialization of this thread.\n *\n * The maximum execution time passed is assumed to be in seconds.\n *\n */\npublic class TerminatorThread extends Thread {\n\n  private final Collection<? extends Thread> threads;\n  private long maxExecutionTime;\n  private Workload workload;\n  private long waitTimeOutInMS;\n\n  public TerminatorThread(long maxExecutionTime, Collection<? extends Thread> threads,\n                          Workload workload) {\n    this.maxExecutionTime = maxExecutionTime;\n    this.threads = threads;\n    this.workload = workload;\n    waitTimeOutInMS = 2000;\n    System.err.println(\"Maximum execution time specified as: \" + maxExecutionTime + \" secs\");\n  }\n\n  public void run() {\n    try {\n      Thread.sleep(maxExecutionTime * 1000);\n    } catch (InterruptedException e) {\n      System.err.println(\"Could not wait until max specified time, TerminatorThread interrupted.\");\n      return;\n    }\n    System.err.println(\"Maximum time elapsed. 
Requesting stop for the workload.\");\n    workload.requestStop();\n    System.err.println(\"Stop requested for workload. Now Joining!\");\n    for (Thread t : threads) {\n      while (t.isAlive()) {\n        try {\n          t.join(waitTimeOutInMS);\n          if (t.isAlive()) {\n            System.out.println(\"Still waiting for thread \" + t.getName() + \" to complete. \" +\n                \"Workload status: \" + workload.isStopRequested());\n          }\n        } catch (InterruptedException e) {\n          // Do nothing. Don't know why I was interrupted.\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/UnknownDBException.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\n/**\n * Could not create the specified DB.\n */\npublic class UnknownDBException extends Exception {\n  /**\n   *\n   */\n  private static final long serialVersionUID = 459099842269616836L;\n\n  public UnknownDBException(String message) {\n    super(message);\n  }\n\n  public UnknownDBException() {\n    super();\n  }\n\n  public UnknownDBException(String message, Throwable cause) {\n    super(message, cause);\n  }\n\n  public UnknownDBException(Throwable cause) {\n    super(cause);\n  }\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/Utils.java",
    "content": "/**\n * Copyright (c) 2010 Yahoo! Inc., 2016 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport java.lang.management.GarbageCollectorMXBean;\nimport java.lang.management.ManagementFactory;\nimport java.lang.management.OperatingSystemMXBean;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Random;\n\n/**\n * Utility functions.\n */\npublic final class Utils {\n  private Utils() {\n    // not used\n  }\n\n  private static final Random RAND = new Random();\n  private static final ThreadLocal<Random> RNG = new ThreadLocal<Random>();\n\n  public static Random random() {\n    Random ret = RNG.get();\n    if (ret == null) {\n      ret = new Random(RAND.nextLong());\n      RNG.set(ret);\n    }\n    return ret;\n  }\n\n  /**\n   * Hash an integer value.\n   */\n  public static long hash(long val) {\n    return fnvhash64(val);\n  }\n\n  public static final long FNV_OFFSET_BASIS_64 = 0xCBF29CE484222325L;\n  public static final long FNV_PRIME_64 = 1099511628211L;\n\n  /**\n   * 64 bit FNV hash. 
Produces more \"random\" hashes than (say) String.hashCode().\n   *\n   * @param val The value to hash.\n   * @return The hash value\n   */\n  public static long fnvhash64(long val) {\n    //from http://en.wikipedia.org/wiki/Fowler_Noll_Vo_hash\n    long hashval = FNV_OFFSET_BASIS_64;\n\n    for (int i = 0; i < 8; i++) {\n      long octet = val & 0x00ff;\n      val = val >> 8;\n\n      hashval = hashval ^ octet;\n      hashval = hashval * FNV_PRIME_64;\n      //hashval = hashval ^ octet;\n    }\n    return Math.abs(hashval);\n  }\n\n  /**\n   * Reads a big-endian 8-byte long from the start of the given array.\n   * @param bytes The array to read from.\n   * @return A long integer.\n   * @throws IndexOutOfBoundsException if the byte array is too small.\n   * @throws NullPointerException if the byte array is null.\n   */\n  public static long bytesToLong(final byte[] bytes) {\n    return (bytes[0] & 0xFFL) << 56\n        | (bytes[1] & 0xFFL) << 48\n        | (bytes[2] & 0xFFL) << 40\n        | (bytes[3] & 0xFFL) << 32\n        | (bytes[4] & 0xFFL) << 24\n        | (bytes[5] & 0xFFL) << 16\n        | (bytes[6] & 0xFFL) << 8\n        | (bytes[7] & 0xFFL) << 0;\n  }\n\n  /**\n   * Encodes the given long as a big-endian 8-byte array.\n   * @param val The value to encode.\n   * @return A byte array of length 8.\n   */\n  public static byte[] longToBytes(final long val) {\n    final byte[] bytes = new byte[8];\n    bytes[0] = (byte) (val >>> 56);\n    bytes[1] = (byte) (val >>> 48);\n    bytes[2] = (byte) (val >>> 40);\n    bytes[3] = (byte) (val >>> 32);\n    bytes[4] = (byte) (val >>> 24);\n    bytes[5] = (byte) (val >>> 16);\n    bytes[6] = (byte) (val >>> 8);\n    bytes[7] = (byte) (val >>> 0);\n    return bytes;\n  }\n\n  /**\n   * Parses the byte array into a double.\n   * The byte array must be at least 8 bytes long and have been encoded using\n   * {@link #doubleToBytes}. 
If the array is longer than 8 bytes, only the\n   * first 8 bytes are parsed.\n   * @param bytes The byte array to parse, at least 8 bytes.\n   * @return A double value read from the byte array.\n   * @throws IllegalArgumentException if the byte array has fewer than 8 bytes.\n   */\n  public static double bytesToDouble(final byte[] bytes) {\n    if (bytes.length < 8) {\n      throw new IllegalArgumentException(\"Byte array must be at least 8 bytes wide.\");\n    }\n    return Double.longBitsToDouble(bytesToLong(bytes));\n  }\n\n  /**\n   * Encodes the double value as an 8 byte array.\n   * @param val The double value to encode.\n   * @return A byte array of length 8.\n   */\n  public static byte[] doubleToBytes(final double val) {\n    return longToBytes(Double.doubleToRawLongBits(val));\n  }\n\n  /**\n   * Measure the estimated active thread count in the current thread group.\n   * Since this calls {@link Thread#activeCount()} it should be called from the\n   * main thread or one started by the main thread. Threads included in the\n   * count can be in any state.\n   * For a more accurate count we could use {@code Thread.getAllStackTraces().size()}\n   * but that freezes the JVM and incurs a high overhead.\n   * @return An estimated thread count, good for showing the thread count\n   * over time.\n   */\n  public static int getActiveThreadCount() {\n    return Thread.activeCount();\n  }\n\n  /** @return The currently used memory in bytes */\n  public static long getUsedMemoryBytes() {\n    final Runtime runtime = Runtime.getRuntime();\n    return runtime.totalMemory() - runtime.freeMemory();\n  }\n\n  /** @return The currently used memory in megabytes. */\n  public static int getUsedMemoryMegaBytes() {\n    return (int) (getUsedMemoryBytes() / 1024 / 1024);\n  }\n\n  /** @return The current system load average if supported by the JDK.\n   * If it's not supported, the value will be negative. 
*/\n  public static double getSystemLoadAverage() {\n    final OperatingSystemMXBean osBean =\n        ManagementFactory.getOperatingSystemMXBean();\n    return osBean.getSystemLoadAverage();\n  }\n\n  /** @return The total number of garbage collections executed for all\n   * memory pools. */\n  public static long getGCTotalCollectionCount() {\n    final List<GarbageCollectorMXBean> gcBeans =\n        ManagementFactory.getGarbageCollectorMXBeans();\n    long count = 0;\n    for (final GarbageCollectorMXBean bean : gcBeans) {\n      if (bean.getCollectionCount() < 0) {\n        continue;\n      }\n      count += bean.getCollectionCount();\n    }\n    return count;\n  }\n\n  /** @return The total time, in milliseconds, spent in GC. */\n  public static long getGCTotalTime() {\n    final List<GarbageCollectorMXBean> gcBeans =\n        ManagementFactory.getGarbageCollectorMXBeans();\n    long time = 0;\n    for (final GarbageCollectorMXBean bean : gcBeans) {\n      if (bean.getCollectionTime() < 0) {\n        continue;\n      }\n      time += bean.getCollectionTime();\n    }\n    return time;\n  }\n\n  /**\n   * Returns a map of garbage collectors and their stats.\n   * The first object in the array is the total count since JVM start and the\n   * second is the total time (ms) since JVM start.\n   * If a garbage collector does not support the collector MXBean, then it\n   * will not be represented in the map.\n   * @return A non-null map of garbage collectors and their metrics. 
The map\n   * may be empty.\n   */\n  public static Map<String, Long[]> getGCStatst() {\n    final List<GarbageCollectorMXBean> gcBeans =\n        ManagementFactory.getGarbageCollectorMXBeans();\n    final Map<String, Long[]> map = new HashMap<String, Long[]>(gcBeans.size());\n    for (final GarbageCollectorMXBean bean : gcBeans) {\n      if (!bean.isValid() || bean.getCollectionCount() < 0 ||\n          bean.getCollectionTime() < 0) {\n        continue;\n      }\n\n      final Long[] measurements = new Long[]{\n          bean.getCollectionCount(),\n          bean.getCollectionTime()\n      };\n      map.put(bean.getName().replace(\" \", \"_\"), measurements);\n    }\n    return map;\n  }\n\n  /**\n   * Simple Fisher-Yates array shuffle to randomize discrete sets.\n   * @param array The array to randomly shuffle.\n   * @return The shuffled array.\n   */\n  public static <T> T [] shuffleArray(final T[] array) {\n    for (int i = array.length -1; i > 0; i--) {\n      final int idx = RAND.nextInt(i + 1);\n      final T temp = array[idx];\n      array[idx] = array[i];\n      array[i] = temp;\n    }\n    return array;\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/Workload.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport java.util.concurrent.atomic.AtomicBoolean;\nimport java.util.Properties;\n\n\n/**\n * One experiment scenario. One object of this type will\n * be instantiated and shared among all client threads. This class\n * should be constructed using a no-argument constructor, so we can\n * load it dynamically. Any argument-based initialization should be\n * done by init().\n * \n * If you extend this class, you should support the \"insertstart\" property. This\n * allows the Client to proceed from multiple clients on different machines, in case\n * the client is the bottleneck. For example, if we want to load 1 million records from\n * 2 machines, the first machine should have insertstart=0 and the second insertstart=500000. Additionally,\n * the \"insertcount\" property, which is interpreted by Client, can be used to tell each instance of the\n * client how many inserts to do. 
In the example above, both clients should have insertcount=500000.\n */\npublic abstract class Workload {\n  public static final String INSERT_START_PROPERTY = \"insertstart\";\n  public static final String INSERT_COUNT_PROPERTY = \"insertcount\";\n  \n  public static final String INSERT_START_PROPERTY_DEFAULT = \"0\";\n  \n  private volatile AtomicBoolean stopRequested = new AtomicBoolean(false);\n  \n  /** Operations available for a database. */\n  public enum Operation {\n    READ,\n    UPDATE,\n    INSERT,\n    SCAN,\n    DELETE\n  }\n  \n  /**\n   * Initialize the scenario. Create any generators and other shared objects here.\n   * Called once, in the main client thread, before any operations are started.\n   */\n  public void init(Properties p) throws WorkloadException {\n  }\n\n  /**\n   * Initialize any state for a particular client thread. Since the scenario object\n   * will be shared among all threads, this is the place to create any state that is specific\n   * to one thread. To be clear, this means the returned object should be created anew on each\n   * call to initThread(); do not return the same object multiple times.\n   * The returned object will be passed to invocations of doInsert() and doTransaction()\n   * for this thread. There should be no side effects from this call; all state should be encapsulated\n   * in the returned object. If you have no state to retain for this thread, return null. (But if you have\n   * no state to retain for this thread, probably you don't need to override initThread().)\n   * \n   * @return false if the workload knows it is done for this thread. Client will terminate the thread.\n   * Return true otherwise. Return true for workloads that rely on operationcount. 
For workloads that read\n   * traces from a file, return true when there are more to do, false when you are done.\n   */\n  public Object initThread(Properties p, int mythreadid, int threadcount) throws WorkloadException {\n    return null;\n  }\n      \n  /**\n   * Cleanup the scenario. Called once, in the main client thread, after all operations have completed.\n   */\n  public void cleanup() throws WorkloadException {\n  }\n\n  /**\n   * Do one insert operation. Because it will be called concurrently from multiple client threads, this\n   * function must be thread safe. However, avoid synchronized, or the threads will block waiting for each\n   * other, and it will be difficult to reach the target throughput. Ideally, this function would have no side\n   * effects other than DB operations and mutations on threadstate. Mutations to threadstate do not need to be\n   * synchronized, since each thread has its own threadstate instance.\n   */\n  public abstract boolean doInsert(DB db, Object threadstate);\n\n  /**\n   * Do one transaction operation. Because it will be called concurrently from multiple client threads, this\n   * function must be thread safe. However, avoid synchronized, or the threads will block waiting for each\n   * other, and it will be difficult to reach the target throughput. Ideally, this function would have no side\n   * effects other than DB operations and mutations on threadstate. Mutations to threadstate do not need to be\n   * synchronized, since each thread has its own threadstate instance.\n   * \n   * @return false if the workload knows it is done for this thread. Client will terminate the thread. \n   * Return true otherwise. Return true for workloads that rely on operationcount. 
For workloads that read\n   * traces from a file, return true when there are more to do, false when you are done.\n   */\n  public abstract boolean doTransaction(DB db, Object threadstate);\n\n  /**\n   * Allows scheduling a request to stop the workload.\n   */\n  public void requestStop() {\n    stopRequested.set(true);\n  }\n\n  /**\n   * Check the status of the stop request flag.\n   * @return true if stop was requested, false otherwise.\n   */\n  public boolean isStopRequested() {\n    return stopRequested.get();\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/WorkloadException.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\n/**\n * The workload tried to do something bad.\n */\npublic class WorkloadException extends Exception {\n  /**\n   *\n   */\n  private static final long serialVersionUID = 8844396756042772132L;\n\n  public WorkloadException(String message) {\n    super(message);\n  }\n\n  public WorkloadException() {\n    super();\n  }\n\n  public WorkloadException(String message, Throwable cause) {\n    super(message, cause);\n  }\n\n  public WorkloadException(Throwable cause) {\n    super(cause);\n  }\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/AcknowledgedCounterGenerator.java",
    "content": "/**\n * Copyright (c) 2015-2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.generator;\n\nimport java.util.concurrent.locks.ReentrantLock;\n\n/**\n * A CounterGenerator that reports generated integers via lastValue()\n * only after they have been acknowledged.\n */\npublic class AcknowledgedCounterGenerator extends CounterGenerator {\n  /** The size of the window of pending id ack's. 2^20 = {@value} */\n  static final int WINDOW_SIZE = Integer.rotateLeft(1, 20);\n\n  /** The mask to use to turn an id into a slot in {@link #window}. 
*/\n  private static final int WINDOW_MASK = WINDOW_SIZE - 1;\n\n  private final ReentrantLock lock;\n  private final boolean[] window;\n  private volatile long limit;\n\n  /**\n   * Create a counter that starts at countstart.\n   */\n  public AcknowledgedCounterGenerator(long countstart) {\n    super(countstart);\n    lock = new ReentrantLock();\n    window = new boolean[WINDOW_SIZE];\n    limit = countstart - 1;\n  }\n\n  /**\n   * In this generator, the highest acknowledged counter value\n   * (as opposed to the highest generated counter value).\n   */\n  @Override\n  public Long lastValue() {\n    return limit;\n  }\n\n  /**\n   * Make a generated counter value available via lastValue().\n   */\n  public void acknowledge(long value) {\n    final int currentSlot = (int)(value & WINDOW_MASK);\n    if (window[currentSlot]) {\n      throw new RuntimeException(\"Too many unacknowledged insertion keys.\");\n    }\n\n    window[currentSlot] = true;\n\n    if (lock.tryLock()) {\n      // move a contiguous sequence from the window\n      // over to the \"limit\" variable\n      try {\n        // Only loop through the entire window at most once.\n        long beforeFirstSlot = (limit & WINDOW_MASK);\n        long index;\n        for (index = limit + 1; index != beforeFirstSlot; ++index) {\n          int slot = (int)(index & WINDOW_MASK);\n          if (!window[slot]) {\n            break;\n          }\n\n          window[slot] = false;\n        }\n\n        limit = index - 1;\n      } finally {\n        lock.unlock();\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/ConstantIntegerGenerator.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.generator;\n\n/**\n * A trivial integer generator that always returns the same value.\n *\n */\npublic class ConstantIntegerGenerator extends NumberGenerator {\n  private final int i;\n\n  /**\n   * @param i The integer that this generator will always return.\n   */\n  public ConstantIntegerGenerator(int i) {\n    this.i = i;\n  }\n\n  @Override\n  public Integer nextValue() {\n    return i;\n  }\n\n  @Override\n  public double mean() {\n    return i;\n  }\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/CounterGenerator.java",
    "content": "/**\n * Copyright (c) 2010 Yahoo! Inc., Copyright (c) 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.generator;\n\nimport java.util.concurrent.atomic.AtomicLong;\n\n/**\n * Generates a sequence of integers.\n * (0, 1, ...)\n */\npublic class CounterGenerator extends NumberGenerator {\n  private final AtomicLong counter;\n\n  /**\n   * Create a counter that starts at countstart.\n   */\n  public CounterGenerator(long countstart) {\n    counter=new AtomicLong(countstart);\n  }\n\n  @Override\n  public Long nextValue() {\n    return counter.getAndIncrement();\n  }\n\n  @Override\n  public Long lastValue() {\n    return counter.get() - 1;\n  }\n\n  @Override\n  public double mean() {\n    throw new UnsupportedOperationException(\"Can't compute mean of non-stationary distribution!\");\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/DiscreteGenerator.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.generator;\n\nimport com.yahoo.ycsb.Utils;\n\nimport java.util.ArrayList;\nimport java.util.Collection;\n\nimport static java.util.Objects.requireNonNull;\n\n/**\n * Generates a distribution by choosing from a discrete set of values.\n */\npublic class DiscreteGenerator extends Generator<String> {\n  private static class Pair {\n    private double weight;\n    private String value;\n\n    Pair(double weight, String value) {\n      this.weight = weight;\n      this.value = requireNonNull(value);\n    }\n  }\n\n  private final Collection<Pair> values = new ArrayList<>();\n  private String lastvalue;\n\n  public DiscreteGenerator() {\n    lastvalue = null;\n  }\n\n  /**\n   * Generate the next string in the distribution.\n   */\n  @Override\n  public String nextValue() {\n    double sum = 0;\n\n    for (Pair p : values) {\n      sum += p.weight;\n    }\n\n    double val = Utils.random().nextDouble();\n\n    for (Pair p : values) {\n      double pw = p.weight / sum;\n      if (val < pw) {\n        // record the value so that lastValue() reflects the most recent draw\n        lastvalue = p.value;\n        return p.value;\n      }\n\n      val -= pw;\n    }\n\n    throw new AssertionError(\"oops. 
should not get here.\");\n\n  }\n\n  /**\n   * Return the previous string generated by the distribution; e.g., returned from the last nextString() call.\n   * Calling lastString() should not advance the distribution or have any side effects. If nextString() has not yet\n   * been called, lastString() should return something reasonable.\n   */\n  @Override\n  public String lastValue() {\n    if (lastvalue == null) {\n      lastvalue = nextValue();\n    }\n    return lastvalue;\n  }\n\n  public void addValue(double weight, String value) {\n    values.add(new Pair(weight, value));\n  }\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/ExponentialGenerator.java",
    "content": "/**\n * Copyright (c) 2011-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.generator;\n\nimport com.yahoo.ycsb.Utils;\n\n/**\n * A generator of an exponential distribution. It produces a sequence\n * of time intervals according to an exponential\n * distribution.  Smaller intervals are more frequent than larger\n * ones, and there is no bound on the length of an interval.  
When you\n * construct an instance of this class, you specify a parameter gamma,\n * which corresponds to the rate at which events occur.\n * Alternatively, 1/gamma is the average length of an interval.\n */\npublic class ExponentialGenerator extends NumberGenerator {\n  // What percentage of the readings should be within the most recent exponential.frac portion of the dataset?\n  public static final String EXPONENTIAL_PERCENTILE_PROPERTY = \"exponential.percentile\";\n  public static final String EXPONENTIAL_PERCENTILE_DEFAULT = \"95\";\n\n  // What fraction of the dataset should be accessed exponential.percentile of the time?\n  public static final String EXPONENTIAL_FRAC_PROPERTY = \"exponential.frac\";\n  public static final String EXPONENTIAL_FRAC_DEFAULT = \"0.8571428571\";  // 6/7\n\n  /**\n   * The exponential constant to use.\n   */\n  private double gamma;\n\n  /******************************* Constructors **************************************/\n\n  /**\n   * Create an exponential generator with a mean interval length of\n   * {@code mean} (i.e., an arrival rate gamma = 1/mean).\n   */\n  public ExponentialGenerator(double mean) {\n    gamma = 1.0 / mean;\n  }\n\n  public ExponentialGenerator(double percentile, double range) {\n    gamma = -Math.log(1.0 - percentile / 100.0) / range;  //1.0/mean;\n  }\n\n  /****************************************************************************************/\n\n\n  /**\n   * Generate the next item as a double. This distribution will be skewed toward lower values; e.g. 
0 will\n   * be the most popular, 1 the next most popular, etc.\n   * @return The next item in the sequence.\n   */\n  @Override\n  public Double nextValue() {\n    return -Math.log(Utils.random().nextDouble()) / gamma;\n  }\n\n  @Override\n  public double mean() {\n    return 1.0 / gamma;\n  }\n\n  public static void main(String[] args) {\n    ExponentialGenerator e = new ExponentialGenerator(90, 100);\n    int j = 0;\n    for (int i = 0; i < 1000; i++) {\n      if (e.nextValue() < 100) {\n        j++;\n      }\n    }\n    System.out.println(\"Got \" + j + \" hits.  Expect 900\");\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/FileGenerator.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.generator;\n\nimport java.io.BufferedReader;\nimport java.io.FileReader;\nimport java.io.IOException;\nimport java.io.Reader;\n\n/**\n * A generator whose sequence is the lines of a file.\n */\npublic class FileGenerator extends Generator<String> {\n  private final String filename;\n  private String current;\n  private BufferedReader reader;\n\n  /**\n   * Create a FileGenerator with the given file.\n   * @param filename The file to read lines from.\n   */\n  public FileGenerator(String filename) {\n    this.filename = filename;\n    reloadFile();\n  }\n\n  /**\n   * Return the next string of the sequence, i.e., the next line of the file.\n   */\n  @Override\n  public synchronized String nextValue() {\n    try {\n      current = reader.readLine();\n      return current;\n    } catch (IOException e) {\n      throw new RuntimeException(e);\n    }\n  }\n\n  /**\n   * Return the previously read line.\n   */\n  @Override\n  public String lastValue() {\n    return current;\n  }\n\n  /**\n   * Reopen the file to reuse values.\n   */\n  public synchronized void reloadFile() {\n    try (Reader r = reader) {\n      System.err.println(\"Reload \" + filename);\n      reader = new BufferedReader(new FileReader(filename));\n    } catch 
(IOException e) {\n      throw new RuntimeException(e);\n    }\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/Generator.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.generator;\n\n/**\n * An expression that generates a sequence of values, following some distribution (Uniform, Zipfian, Sequential, etc.).\n */\npublic abstract class Generator<V> {\n  /**\n   * Generate the next value in the distribution.\n   */\n  public abstract V nextValue();\n\n  /**\n   * Return the previous value generated by the distribution; e.g., returned from the last {@link Generator#nextValue()}\n   *  call.\n   * Calling {@link #lastValue()} should not advance the distribution or have any side effects. If {@link #nextValue()}\n   * has not yet been called, {@link #lastValue()} should return something reasonable.\n   */\n  public abstract V lastValue();\n\n  public final String nextString() {\n    V ret = nextValue();\n    return ret == null ? null : ret.toString();\n  }\n\n  public final String lastString() {\n    V ret = lastValue();\n    return ret == null ? null : ret.toString();\n  }\n}\n\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/HistogramGenerator.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.generator;\n\nimport com.yahoo.ycsb.Utils;\n\nimport java.io.BufferedReader;\nimport java.io.FileReader;\nimport java.io.IOException;\nimport java.util.ArrayList;\n\n/**\n * Generate integers according to a histogram distribution.  The histogram\n * buckets are of width one, but the values are multiplied by a block size.\n * Therefore, instead of drawing sizes uniformly at random within each\n * bucket, we always draw the largest value in the current bucket, so the value\n * drawn is always a multiple of blockSize.\n *\n * The minimum value this distribution returns is blockSize (not zero).\n *\n */\npublic class HistogramGenerator extends NumberGenerator {\n\n  private final long blockSize;\n  private final long[] buckets;\n  private long area;\n  private long weightedArea = 0;\n  private double meanSize = 0;\n\n  public HistogramGenerator(String histogramfile) throws IOException {\n    try (BufferedReader in = new BufferedReader(new FileReader(histogramfile))) {\n      String str;\n      String[] line;\n\n      ArrayList<Integer> a = new ArrayList<>();\n\n      str = in.readLine();\n      if (str == null) {\n        throw new IOException(\"Empty input file!\\n\");\n      }\n      line = str.split(\"\\t\");\n      if 
(line[0].compareTo(\"BlockSize\") != 0) {\n        throw new IOException(\"First line of histogram is not the BlockSize!\\n\");\n      }\n      blockSize = Integer.parseInt(line[1]);\n\n      while ((str = in.readLine()) != null) {\n        // [0] is the bucket, [1] is the value\n        line = str.split(\"\\t\");\n\n        a.add(Integer.parseInt(line[0]), Integer.parseInt(line[1]));\n      }\n      buckets = new long[a.size()];\n      for (int i = 0; i < a.size(); i++) {\n        buckets[i] = a.get(i);\n      }\n    }\n    init();\n  }\n\n  public HistogramGenerator(long[] buckets, int blockSize) {\n    this.blockSize = blockSize;\n    this.buckets = buckets;\n    init();\n  }\n\n  private void init() {\n    for (int i = 0; i < buckets.length; i++) {\n      area += buckets[i];\n      weightedArea += i * buckets[i];\n    }\n    // calculate average file size\n    meanSize = ((double) blockSize) * ((double) weightedArea) / (area);\n  }\n\n  @Override\n  public Long nextValue() {\n    int number = Utils.random().nextInt((int) area);\n    int i;\n\n    for (i = 0; i < (buckets.length - 1); i++) {\n      number -= buckets[i];\n      if (number <= 0) {\n        return (i + 1) * blockSize;\n      }\n    }\n\n    return i * blockSize;\n  }\n\n  @Override\n  public double mean() {\n    return meanSize;\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/HotspotIntegerGenerator.java",
    "content": "/**\n * Copyright (c) 2010 Yahoo! Inc. Copyright (c) 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.generator;\n\nimport com.yahoo.ycsb.Utils;\n\nimport java.util.Random;\n\n/**\n * Generate integers resembling a hotspot distribution where x% of operations\n * access y% of data items. The parameters specify the bounds for the numbers,\n * the percentage of the interval which comprises the hot set and\n * the percentage of operations that access the hot set. Numbers of the hot set are\n * always smaller than any number in the cold set. 
Elements from the hot set and\n * the cold set are chosen using a uniform distribution.\n *\n */\npublic class HotspotIntegerGenerator extends NumberGenerator {\n\n  private final long lowerBound;\n  private final long upperBound;\n  private final long hotInterval;\n  private final long coldInterval;\n  private final double hotsetFraction;\n  private final double hotOpnFraction;\n\n  /**\n   * Create a generator for Hotspot distributions.\n   *\n   * @param lowerBound lower bound of the distribution.\n   * @param upperBound upper bound of the distribution.\n   * @param hotsetFraction percentage of data items that form the hot set.\n   * @param hotOpnFraction percentage of operations accessing the hot set.\n   */\n  public HotspotIntegerGenerator(long lowerBound, long upperBound,\n                                 double hotsetFraction, double hotOpnFraction) {\n    if (hotsetFraction < 0.0 || hotsetFraction > 1.0) {\n      System.err.println(\"Hotset fraction out of range. Setting to 0.0\");\n      hotsetFraction = 0.0;\n    }\n    if (hotOpnFraction < 0.0 || hotOpnFraction > 1.0) {\n      System.err.println(\"Hot operation fraction out of range. Setting to 0.0\");\n      hotOpnFraction = 0.0;\n    }\n    if (lowerBound > upperBound) {\n      System.err.println(\"Upper bound of Hotspot generator smaller than the lower bound. 
\" +\n          \"Swapping the values.\");\n      long temp = lowerBound;\n      lowerBound = upperBound;\n      upperBound = temp;\n    }\n    this.lowerBound = lowerBound;\n    this.upperBound = upperBound;\n    this.hotsetFraction = hotsetFraction;\n    long interval = upperBound - lowerBound + 1;\n    this.hotInterval = (int) (interval * hotsetFraction);\n    this.coldInterval = interval - hotInterval;\n    this.hotOpnFraction = hotOpnFraction;\n  }\n\n  @Override\n  public Long nextValue() {\n    long value = 0;\n    Random random = Utils.random();\n    if (random.nextDouble() < hotOpnFraction) {\n      // Choose a value from the hot set.\n      value = lowerBound + Math.abs(Utils.random().nextLong()) % hotInterval;\n    } else {\n      // Choose a value from the cold set.\n      value = lowerBound + hotInterval + Math.abs(Utils.random().nextLong()) % coldInterval;\n    }\n    setLastValue(value);\n    return value;\n  }\n\n  /**\n   * @return the lowerBound\n   */\n  public long getLowerBound() {\n    return lowerBound;\n  }\n\n  /**\n   * @return the upperBound\n   */\n  public long getUpperBound() {\n    return upperBound;\n  }\n\n  /**\n   * @return the hotsetFraction\n   */\n  public double getHotsetFraction() {\n    return hotsetFraction;\n  }\n\n  /**\n   * @return the hotOpnFraction\n   */\n  public double getHotOpnFraction() {\n    return hotOpnFraction;\n  }\n\n  @Override\n  public double mean() {\n    return hotOpnFraction * (lowerBound + hotInterval / 2.0)\n        + (1 - hotOpnFraction) * (lowerBound + hotInterval + coldInterval / 2.0);\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/IncrementingPrintableStringGenerator.java",
    "content": "/**\n * Copyright (c) 2016-2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.generator;\n\nimport java.util.*;\n\n/**\n * A generator that produces strings of {@link #length} using a set of code points\n * from {@link #characterSet}. Each time {@link #nextValue()} is executed, the string\n * is incremented by one character. Eventually the string may rollover to the beginning\n * and the user may choose to have the generator throw a NoSuchElementException at that \n * point or continue incrementing. (By default the generator will continue incrementing).\n * <p>\n * For example, if we set a length of 2 characters and the character set includes\n * [A, B] then the generator output will be:\n * <ul>\n * <li>AA</li>\n * <li>AB</li>\n * <li>BA</li>\n * <li>BB</li>\n * <li>AA <-- rolled over</li>\n * </ul>\n * <p>\n * This class includes some default character sets to choose from including ASCII\n * and plane 0 UTF. \n */\npublic class IncrementingPrintableStringGenerator extends Generator<String> {\n\n  /** Default string length for the generator. 
*/\n  public static final int DEFAULTSTRINGLENGTH = 8;\n\n  /**\n   * Set of all character types that include every symbol other than non-printable\n   * control characters.\n   */\n  public static final Set<Integer> CHAR_TYPES_ALL_BUT_CONTROL;\n\n  static {\n    CHAR_TYPES_ALL_BUT_CONTROL = new HashSet<Integer>(24);\n    // numbers\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.DECIMAL_DIGIT_NUMBER);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.LETTER_NUMBER);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.OTHER_NUMBER);\n\n    // letters\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.UPPERCASE_LETTER);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.LOWERCASE_LETTER);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.TITLECASE_LETTER);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.OTHER_LETTER);\n\n    // marks\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.COMBINING_SPACING_MARK);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.NON_SPACING_MARK);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.ENCLOSING_MARK);\n\n    // punctuation\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.CONNECTOR_PUNCTUATION);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.DASH_PUNCTUATION);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.START_PUNCTUATION);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.END_PUNCTUATION);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.INITIAL_QUOTE_PUNCTUATION);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.FINAL_QUOTE_PUNCTUATION);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.OTHER_PUNCTUATION);\n\n    // symbols\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.MATH_SYMBOL);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.CURRENCY_SYMBOL);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.MODIFIER_SYMBOL);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.OTHER_SYMBOL);\n\n    // separators\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) 
Character.SPACE_SEPARATOR);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.LINE_SEPARATOR);\n    CHAR_TYPES_ALL_BUT_CONTROL.add((int) Character.PARAGRAPH_SEPARATOR);\n  }\n\n  /**\n   * Set of character types including only upper and lower case letters.\n   */\n  public static final Set<Integer> CHAR_TYPES_BASIC_ALPHA;\n\n  static {\n    CHAR_TYPES_BASIC_ALPHA = new HashSet<Integer>(2);\n    CHAR_TYPES_BASIC_ALPHA.add((int) Character.UPPERCASE_LETTER);\n    CHAR_TYPES_BASIC_ALPHA.add((int) Character.LOWERCASE_LETTER);\n  }\n\n  /**\n   * Set of character types including only decimals, upper and lower case letters.\n   */\n  public static final Set<Integer> CHAR_TYPES_BASIC_ALPHANUMERICS;\n\n  static {\n    CHAR_TYPES_BASIC_ALPHANUMERICS = new HashSet<Integer>(3);\n    CHAR_TYPES_BASIC_ALPHANUMERICS.add((int) Character.DECIMAL_DIGIT_NUMBER);\n    CHAR_TYPES_BASIC_ALPHANUMERICS.add((int) Character.UPPERCASE_LETTER);\n    CHAR_TYPES_BASIC_ALPHANUMERICS.add((int) Character.LOWERCASE_LETTER);\n  }\n\n  /**\n   * Set of character types including only decimals, letter numbers, \n   * other numbers, upper, lower, title case as well as letter modifiers \n   * and other letters.\n   */\n  public static final Set<Integer> CHAR_TYPE_EXTENDED_ALPHANUMERICS;\n\n  static {\n    CHAR_TYPE_EXTENDED_ALPHANUMERICS = new HashSet<Integer>(8);\n    CHAR_TYPE_EXTENDED_ALPHANUMERICS.add((int) Character.DECIMAL_DIGIT_NUMBER);\n    CHAR_TYPE_EXTENDED_ALPHANUMERICS.add((int) Character.LETTER_NUMBER);\n    CHAR_TYPE_EXTENDED_ALPHANUMERICS.add((int) Character.OTHER_NUMBER);\n    CHAR_TYPE_EXTENDED_ALPHANUMERICS.add((int) Character.UPPERCASE_LETTER);\n    CHAR_TYPE_EXTENDED_ALPHANUMERICS.add((int) Character.LOWERCASE_LETTER);\n    CHAR_TYPE_EXTENDED_ALPHANUMERICS.add((int) Character.TITLECASE_LETTER);\n    CHAR_TYPE_EXTENDED_ALPHANUMERICS.add((int) Character.MODIFIER_LETTER);\n    CHAR_TYPE_EXTENDED_ALPHANUMERICS.add((int) Character.OTHER_LETTER);\n  }\n\n  /** The character set 
to iterate over. */\n  private final int[] characterSet;\n\n  /** An array of indices matching a position in the output string. */\n  private int[] indices;\n\n  /** The length of the output string in characters. */\n  private final int length;\n\n  /** The last value returned by the generator. Should be null if {@link #nextValue()}\n   * has not been called.*/\n  private String lastValue;\n\n  /** Whether or not to throw an exception when the string rolls over. */\n  private boolean throwExceptionOnRollover;\n\n  /** Whether or not the generator has rolled over. */\n  private boolean hasRolledOver;\n\n  /**\n   * Generates strings of 8 characters using only the upper and lower case alphabetical\n   * characters from the ASCII set. \n   */\n  public IncrementingPrintableStringGenerator() {\n    this(DEFAULTSTRINGLENGTH, printableBasicAlphaASCIISet());\n  }\n\n  /**\n   * Generates strings of {@link #length} characters using only the upper and lower \n   * case alphabetical characters from the ASCII set. \n   * @param length The length of string to return from the generator.\n   * @throws IllegalArgumentException if the length is less than one.\n   */\n  public IncrementingPrintableStringGenerator(final int length) {\n    this(length, printableBasicAlphaASCIISet());\n  }\n\n  /**\n   * Generates strings of {@link #length} characters using the code points in\n   * {@link #characterSet}.\n   * @param length The length of string to return from the generator.\n   * @param characterSet A set of code points to choose from. 
Code points in the \n   * set can be in any order, not necessarily lexical.\n   * @throws IllegalArgumentException if the length is less than one or the character\n   * set has fewer than one code points.\n   */\n  public IncrementingPrintableStringGenerator(final int length, final int[] characterSet) {\n    if (length < 1) {\n      throw new IllegalArgumentException(\"Length must be greater than or equal to 1\");\n    }\n    if (characterSet == null || characterSet.length < 1) {\n      throw new IllegalArgumentException(\"Character set must have at least one character\");\n    }\n    this.length = length;\n    this.characterSet = characterSet;\n    indices = new int[length];\n  }\n\n  @Override\n  public String nextValue() {\n    if (hasRolledOver && throwExceptionOnRollover) {\n      throw new NoSuchElementException(\"The generator has rolled over to the beginning\");\n    }\n\n    final StringBuilder buffer = new StringBuilder(length);\n    for (int i = 0; i < length; i++) {\n      buffer.append(Character.toChars(characterSet[indices[i]]));\n    }\n\n    // increment the indices;\n    for (int i = length - 1; i >= 0; --i) {\n      if (indices[i] >= characterSet.length - 1) {\n        indices[i] = 0;\n        if (i == 0 || characterSet.length == 1 && lastValue != null) {\n          hasRolledOver = true;\n        }\n      } else {\n        ++indices[i];\n        break;\n      }\n    }\n\n    lastValue = buffer.toString();\n    return lastValue;\n  }\n\n  @Override\n  public String lastValue() {\n    return lastValue;\n  }\n\n  /** @param exceptionOnRollover Whether or not to throw an exception on rollover. */\n  public void setThrowExceptionOnRollover(final boolean exceptionOnRollover) {\n    this.throwExceptionOnRollover = exceptionOnRollover;\n  }\n\n  /** @return Whether or not to throw an exception on rollover. 
*/\n  public boolean getThrowExceptionOnRollover() {\n    return throwExceptionOnRollover;\n  }\n\n  /**\n   * Returns an array of printable code points with only the upper and lower\n   * case alphabetical characters from the basic ASCII set.\n   * @return An array of code points\n   */\n  public static int[] printableBasicAlphaASCIISet() {\n    final List<Integer> validCharacters =\n        generatePrintableCharacterSet(0, 127, null, false, CHAR_TYPES_BASIC_ALPHA);\n    final int[] characterSet = new int[validCharacters.size()];\n    for (int i = 0; i < validCharacters.size(); i++) {\n      characterSet[i] = validCharacters.get(i);\n    }\n    return characterSet;\n  }\n\n  /**\n   * Returns an array of printable code points with the upper and lower case \n   * alphabetical characters as well as the numeric values from the basic \n   * ASCII set.\n   * @return An array of code points\n   */\n  public static int[] printableBasicAlphaNumericASCIISet() {\n    final List<Integer> validCharacters =\n        generatePrintableCharacterSet(0, 127, null, false, CHAR_TYPES_BASIC_ALPHANUMERICS);\n    final int[] characterSet = new int[validCharacters.size()];\n    for (int i = 0; i < validCharacters.size(); i++) {\n      characterSet[i] = validCharacters.get(i);\n    }\n    return characterSet;\n  }\n\n  /**\n   * Returns an array of printable code points with the entire basic ASCII table,\n   * including spaces. 
Excludes new lines.\n   * @return An array of code points\n   */\n  public static int[] fullPrintableBasicASCIISet() {\n    final List<Integer> validCharacters =\n        generatePrintableCharacterSet(32, 127, null, false, null);\n    final int[] characterSet = new int[validCharacters.size()];\n    for (int i = 0; i < validCharacters.size(); i++) {\n      characterSet[i] = validCharacters.get(i);\n    }\n    return characterSet;\n  }\n\n  /**\n   * Returns an array of printable code points with the entire basic ASCII table,\n   * including spaces and new lines.\n   * @return An array of code points\n   */\n  public static int[] fullPrintableBasicASCIISetWithNewlines() {\n    final List<Integer> validCharacters = new ArrayList<Integer>();\n    validCharacters.add(10); // newline\n    validCharacters.addAll(generatePrintableCharacterSet(32, 127, null, false, null));\n    final int[] characterSet = new int[validCharacters.size()];\n    for (int i = 0; i < validCharacters.size(); i++) {\n      characterSet[i] = validCharacters.get(i);\n    }\n    return characterSet;\n  }\n\n  /**\n   * Returns an array of printable code points from the first plane of Unicode characters\n   * including only the alpha-numeric values.\n   * @return An array of code points\n   */\n  public static int[] printableAlphaNumericPlaneZeroSet() {\n    final List<Integer> validCharacters =\n        generatePrintableCharacterSet(0, 65535, null, false, CHAR_TYPES_BASIC_ALPHANUMERICS);\n    final int[] characterSet = new int[validCharacters.size()];\n    for (int i = 0; i < validCharacters.size(); i++) {\n      characterSet[i] = validCharacters.get(i);\n    }\n    return characterSet;\n  }\n\n  /**\n   * Returns an array of printable code points from the first plane of Unicode characters\n   * including all printable characters.\n   * @return An array of code points\n   */\n  public static int[] fullPrintablePlaneZeroSet() {\n    final List<Integer> validCharacters =\n        generatePrintableCharacterSet(0, 
65535, null, false, CHAR_TYPES_ALL_BUT_CONTROL);\n    final int[] characterSet = new int[validCharacters.size()];\n    for (int i = 0; i < validCharacters.size(); i++) {\n      characterSet[i] = validCharacters.get(i);\n    }\n    return characterSet;\n  }\n\n  /**\n   * Generates a list of code points based on a range and filters.\n   * These can be used for generating strings with various ASCII and/or\n   * Unicode printable character sets for use with DBs that may have \n   * character limitations.\n   * <p>\n   * Note that control, surrogate, format, private use and unassigned \n   * code points are skipped.\n   * @param startCodePoint The starting code point, inclusive.\n   * @param lastCodePoint The final code point, inclusive.\n   * @param characterTypesFilter An optional set of allowable character\n   * types. See {@link Character} for types.\n   * @param isFilterAllowableList Determines whether the {@code allowableTypes}\n   * set is inclusive or exclusive. When true, only those code points that\n   * appear in the list will be included in the resulting set. Otherwise\n   * matching code points are excluded.\n   * @param allowableTypes An optional list of code points for inclusion or\n   * exclusion.\n   * @return A list of code points matching the given range and filters. 
The\n   * list may be empty but is guaranteed not to be null.\n   */\n  public static List<Integer> generatePrintableCharacterSet(\n      final int startCodePoint,\n      final int lastCodePoint,\n      final Set<Integer> characterTypesFilter,\n      final boolean isFilterAllowableList,\n      final Set<Integer> allowableTypes) {\n\n    // since we don't know the final size of the allowable character list we\n    // start with a list then we'll flatten it to an array.\n    final List<Integer> validCharacters = new ArrayList<Integer>(lastCodePoint);\n\n    for (int codePoint = startCodePoint; codePoint <= lastCodePoint; ++codePoint) {\n      if (allowableTypes != null &&\n          !allowableTypes.contains(Character.getType(codePoint))) {\n        continue;\n      } else {\n        // skip control points, formats, surrogates, etc\n        final int type = Character.getType(codePoint);\n        if (type == Character.CONTROL ||\n            type == Character.SURROGATE ||\n            type == Character.FORMAT ||\n            type == Character.PRIVATE_USE ||\n            type == Character.UNASSIGNED) {\n          continue;\n        }\n      }\n\n      if (characterTypesFilter != null) {\n        // if the filter is enabled then we need to make sure the code point \n        // is in the allowable list if it's a whitelist or that the code point\n        // is NOT in the list if it's a blacklist.\n        if ((isFilterAllowableList && !characterTypesFilter.contains(codePoint)) ||\n            (!isFilterAllowableList && characterTypesFilter.contains(codePoint))) {\n          continue;\n        }\n      }\n\n      validCharacters.add(codePoint);\n    }\n    return validCharacters;\n  }\n\n}\n"
  },
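The `generatePrintableCharacterSet` helper above walks a code-point range and drops control, surrogate, format, private-use and unassigned characters. The core idea can be seen in a minimal standalone sketch (hypothetical class name, not part of YCSB):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical standalone sketch of the code-point filtering idea used by
// generatePrintableCharacterSet: walk [start, last] and keep only printable
// code points, skipping control/surrogate/format/private-use/unassigned.
public class PrintableCodePoints {
  public static List<Integer> printable(int startCodePoint, int lastCodePoint) {
    List<Integer> valid = new ArrayList<>();
    for (int cp = startCodePoint; cp <= lastCodePoint; cp++) {
      int type = Character.getType(cp);
      if (type == Character.CONTROL || type == Character.SURROGATE
          || type == Character.FORMAT || type == Character.PRIVATE_USE
          || type == Character.UNASSIGNED) {
        continue; // not printable; skip it
      }
      valid.add(cp);
    }
    return valid;
  }
}
```

For example, the range 65–90 (`A`–`Z`) survives intact, while the ASCII control range 0–31 is filtered out entirely.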
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/NumberGenerator.java",
    "content": "/**\r\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.\r\n * <p>\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n * <p>\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n * <p>\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.generator;\r\n\r\n/**\r\n * A generator that is capable of generating numeric values.\r\n *\r\n */\r\npublic abstract class NumberGenerator extends Generator<Number> {\r\n  private Number lastVal;\r\n\r\n  /**\r\n   * Set the last value generated. NumberGenerator subclasses must use this call\r\n   * to properly set the last value, or the {@link #lastValue()} calls won't work.\r\n   */\r\n  protected void setLastValue(Number last) {\r\n    lastVal = last;\r\n  }\r\n\r\n\r\n  @Override\r\n  public Number lastValue() {\r\n    return lastVal;\r\n  }\r\n\r\n  /**\r\n   * Return the expected value (mean) of the values this generator will return.\r\n   */\r\n  public abstract double mean();\r\n}\r\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/RandomDiscreteTimestampGenerator.java",
"content": "/**\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.generator;\n\nimport java.util.concurrent.TimeUnit;\n\nimport com.yahoo.ycsb.Utils;\n\n/**\n * A generator that picks from a discrete set of offsets from a base Unix Epoch\n * timestamp that returns timestamps in a random order with the guarantee that\n * each timestamp is only returned once.\n * <p>\n * TODO - It would be best to implement some kind of pseudo non-repeating random\n * generator for this as it's likely OK that some small percentage of values are\n * repeated. For now we just generate all of the offsets in an array, shuffle\n * it and then iterate over the array.\n * <p>\n * Note that {@link #MAX_INTERVALS} defines a hard limit on the size of the \n * offset array so that we don't completely blow out the heap.\n * <p>\n * The constructor parameter {@code intervals} determines how many values will be\n * returned by the generator. For example, if the {@code interval} is 60 and the\n * {@code timeUnits} are set to {@link TimeUnit#SECONDS} and {@code intervals} \n * is set to 60, then the consumer can call {@link #nextValue()} 60 times for\n * timestamps within an hour.\n */\npublic class RandomDiscreteTimestampGenerator extends UnixEpochTimestampGenerator {\n\n  /** A hard limit on the size of the offsets array to avoid using too much heap. */\n  public static final int MAX_INTERVALS = 16777216;\n  \n  /** The total number of intervals for this generator. */\n  private final int intervals;\n  \n  // can't be primitives due to the generic params on the sort function :(\n  /** The array of generated offsets from the base time. */\n  private final Integer[] offsets;\n  \n  /** The current index into the offsets array. */\n  private int offsetIndex;\n  \n  /**\n   * Ctor that uses the current system time as current.\n   * @param interval The interval between timestamps.\n   * @param timeUnits The time units of the returned Unix Epoch timestamp (as well\n   * as the units for the interval).\n   * @param intervals The total number of intervals for the generator.\n   * @throws IllegalArgumentException if the intervals is larger than {@link #MAX_INTERVALS}\n   */\n  public RandomDiscreteTimestampGenerator(final long interval, final TimeUnit timeUnits, \n                                          final int intervals) {\n    super(interval, timeUnits);\n    this.intervals = intervals;\n    offsets = new Integer[intervals];\n    setup();\n  }\n  \n  /**\n   * Ctor for supplying a starting timestamp.\n   * @param interval The interval between timestamps.\n   * @param timeUnits The time units of the returned Unix Epoch timestamp (as well\n   * as the units for the interval).\n   * @param startTimestamp The start timestamp to use. \n   * NOTE that this must match the time units used for the interval. 
\n   * If the units are in nanoseconds, provide a nanosecond timestamp {@code System.nanoTime()}\n   * or in microseconds, {@code System.nanoTime() / 1000}\n   * or in millis, {@code System.currentTimeMillis()}\n   * @param intervals The total number of intervals for the generator.\n   * @throws IllegalArgumentException if the intervals is larger than {@link #MAX_INTERVALS}\n   */\n  public RandomDiscreteTimestampGenerator(final long interval, final TimeUnit timeUnits,\n                                          final long startTimestamp, final int intervals) {\n    super(interval, timeUnits, startTimestamp);\n    this.intervals = intervals;\n    offsets = new Integer[intervals];\n    setup();\n  }\n  \n  /**\n   * Generates the offsets and shuffles the array.\n   */\n  private void setup() {\n    if (intervals > MAX_INTERVALS) {\n      throw new IllegalArgumentException(\"Too many intervals for the in-memory \"\n          + \"array. The limit is \" + MAX_INTERVALS + \".\");\n    }\n    offsetIndex = 0;\n    for (int i = 0; i < intervals; i++) {\n      offsets[i] = i;\n    }\n    Utils.shuffleArray(offsets);\n  }\n  \n  @Override\n  public Long nextValue() {\n    if (offsetIndex >= offsets.length) {\n      throw new IllegalStateException(\"Reached the end of the random timestamp \"\n          + \"intervals: \" + offsetIndex);\n    }\n    lastTimestamp = currentTimestamp;\n    currentTimestamp = startTimestamp + (offsets[offsetIndex++] * getOffset(1));\n    return currentTimestamp;\n  }\n}"
  },
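RandomDiscreteTimestampGenerator's guarantee that each timestamp is returned exactly once comes from shuffling an array of offsets up front and then iterating over it. A self-contained sketch of that approach, using an explicit Fisher–Yates shuffle (YCSB delegates this to `Utils.shuffleArray`, which is assumed here to behave equivalently):

```java
import java.util.Random;

// Sketch of the shuffle-then-iterate approach RandomDiscreteTimestampGenerator
// uses: generate one offset per interval, shuffle, and hand them out in order.
// Each offset is returned exactly once, in a random order.
public class ShuffledOffsets {
  private final int[] offsets;
  private int index = 0;

  public ShuffledOffsets(int intervals, long seed) {
    offsets = new int[intervals];
    for (int i = 0; i < intervals; i++) {
      offsets[i] = i; // one offset per interval
    }
    // Fisher-Yates shuffle: every permutation is equally likely
    Random rnd = new Random(seed);
    for (int i = intervals - 1; i > 0; i--) {
      int j = rnd.nextInt(i + 1);
      int tmp = offsets[i];
      offsets[i] = offsets[j];
      offsets[j] = tmp;
    }
  }

  /** Returns the next offset; throws once all offsets are consumed. */
  public int next() {
    if (index >= offsets.length) {
      throw new IllegalStateException("Reached the end of the intervals");
    }
    return offsets[index++];
  }
}
```

The memory cost is one array slot per interval, which is why the generator caps `intervals` at `MAX_INTERVALS`.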
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/ScrambledZipfianGenerator.java",
    "content": "/**\r\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.\r\n * <p>\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n * <p>\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n * <p>\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.generator;\r\n\r\nimport com.yahoo.ycsb.Utils;\r\n\r\n/**\r\n * A generator of a zipfian distribution. It produces a sequence of items, such that some items are more popular than\r\n * others, according to a zipfian distribution. When you construct an instance of this class, you specify the number\r\n * of items in the set to draw from, either by specifying an itemcount (so that the sequence is of items from 0 to\r\n * itemcount-1) or by specifying a min and a max (so that the sequence is of items from min to max inclusive). After\r\n * you construct the instance, you can change the number of items by calling nextInt(itemcount) or nextLong(itemcount).\r\n * <p>\r\n * Unlike @ZipfianGenerator, this class scatters the \"popular\" items across the itemspace. 
Use this, instead of\r\n * @ZipfianGenerator, if you don't want the head of the distribution (the popular items) clustered together.\r\n */\r\npublic class ScrambledZipfianGenerator extends NumberGenerator {\r\n  public static final double ZETAN = 26.46902820178302;\r\n  public static final double USED_ZIPFIAN_CONSTANT = 0.99;\r\n  public static final long ITEM_COUNT = 10000000000L;\r\n\r\n  private ZipfianGenerator gen;\r\n  private final long min, max, itemcount;\r\n\r\n  /******************************* Constructors **************************************/\r\n\r\n  /**\r\n   * Create a zipfian generator for the specified number of items.\r\n   *\r\n   * @param items The number of items in the distribution.\r\n   */\r\n  public ScrambledZipfianGenerator(long items) {\r\n    this(0, items - 1);\r\n  }\r\n\r\n  /**\r\n   * Create a zipfian generator for items between min and max.\r\n   *\r\n   * @param min The smallest integer to generate in the sequence.\r\n   * @param max The largest integer to generate in the sequence.\r\n   */\r\n  public ScrambledZipfianGenerator(long min, long max) {\r\n    this(min, max, ZipfianGenerator.ZIPFIAN_CONSTANT);\r\n  }\r\n\r\n  /**\r\n   * Create a zipfian generator for the specified number of items using the specified zipfian constant.\r\n   *\r\n   * @param _items The number of items in the distribution.\r\n   * @param _zipfianconstant The zipfian constant to use.\r\n   */\r\n  /*\r\n// not supported, as the value of zeta depends on the zipfian constant, and we have only precomputed zeta for one\r\nzipfian constant\r\n  public ScrambledZipfianGenerator(long _items, double _zipfianconstant)\r\n  {\r\n    this(0,_items-1,_zipfianconstant);\r\n  }\r\n*/\r\n\r\n  /**\r\n   * Create a zipfian generator for items between min and max (inclusive) for the specified zipfian constant. 
If you\r\n   * use a zipfian constant other than 0.99, this will take a long time to complete because we need to recompute zeta.\r\n   *\r\n   * @param min             The smallest integer to generate in the sequence.\r\n   * @param max             The largest integer to generate in the sequence.\r\n   * @param zipfianconstant The zipfian constant to use.\r\n   */\r\n  public ScrambledZipfianGenerator(long min, long max, double zipfianconstant) {\r\n    this.min = min;\r\n    this.max = max;\r\n    itemcount = this.max - this.min + 1;\r\n    if (zipfianconstant == USED_ZIPFIAN_CONSTANT) {\r\n      gen = new ZipfianGenerator(0, ITEM_COUNT, zipfianconstant, ZETAN);\r\n    } else {\r\n      gen = new ZipfianGenerator(0, ITEM_COUNT, zipfianconstant);\r\n    }\r\n  }\r\n\r\n  /**************************************************************************************************/\r\n\r\n  /**\r\n   * Return the next long in the sequence.\r\n   */\r\n  @Override\r\n  public Long nextValue() {\r\n    long ret = gen.nextValue();\r\n    ret = min + Utils.fnvhash64(ret) % itemcount;\r\n    setLastValue(ret);\r\n    return ret;\r\n  }\r\n\r\n  public static void main(String[] args) {\r\n    double newzetan = ZipfianGenerator.zetastatic(ITEM_COUNT, ZipfianGenerator.ZIPFIAN_CONSTANT);\r\n    System.out.println(\"zetan: \" + newzetan);\r\n    System.exit(0);\r\n\r\n    ScrambledZipfianGenerator gen = new ScrambledZipfianGenerator(10000);\r\n\r\n    for (int i = 0; i < 1000000; i++) {\r\n      System.out.println(\"\" + gen.nextValue());\r\n    }\r\n  }\r\n\r\n  /**\r\n   * since the values are scrambled (hopefully uniformly), the mean is simply the middle of the range.\r\n   */\r\n  @Override\r\n  public double mean() {\r\n    return ((min) + max) / 2.0;\r\n  }\r\n}\r\n"
  },
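ScrambledZipfianGenerator breaks up the clustered head of the zipfian distribution by hashing each draw before mapping it into the item range. A standalone sketch of that scatter step, using a 64-bit FNV-1a-style hash assumed to resemble YCSB's `Utils.fnvhash64` (the class and method names here are hypothetical):

```java
// Sketch of how ScrambledZipfianGenerator scatters the zipfian head across the
// item space: hash the zipfian draw, then map the hash into [min, max].
public class ScatterSketch {
  private static final long FNV_OFFSET_BASIS_64 = 0xCBF29CE484222325L;
  private static final long FNV_PRIME_64 = 1099511628211L;

  /** 64-bit FNV-1a over the value's 8 bytes, least-significant byte first. */
  public static long fnvhash64(long val) {
    long hash = FNV_OFFSET_BASIS_64;
    for (int i = 0; i < 8; i++) {
      long octet = val & 0xff; // consume one byte at a time
      val >>= 8;
      hash ^= octet;
      hash *= FNV_PRIME_64;
    }
    return hash;
  }

  /** Maps a (possibly clustered) draw into [min, max] via the hash. */
  public static long scatter(long draw, long min, long max) {
    long itemcount = max - min + 1;
    // floorMod keeps the result non-negative even if the hash is negative
    return min + Math.floorMod(fnvhash64(draw), itemcount);
  }
}
```

Because the hash is deterministic, the same zipfian draw always maps to the same item, so popularity skew is preserved while the popular items land at hash-scattered positions.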
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/SequentialGenerator.java",
"content": "/**\n * Copyright (c) 2016-2017 YCSB Contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.generator;\n\nimport java.util.concurrent.atomic.AtomicLong;\n\n/**\n * Generates a sequence of longs from countstart to countend (inclusive),\n * wrapping back to countstart after countend is returned.\n */\npublic class SequentialGenerator extends NumberGenerator {\n  private final AtomicLong counter;\n  private long interval;\n  private long countstart;\n\n  /**\n   * Create a counter that starts at countstart.\n   */\n  public SequentialGenerator(long countstart, long countend) {\n    counter = new AtomicLong();\n    setLastValue(counter.get());\n    this.countstart = countstart;\n    interval = countend - countstart + 1;\n  }\n\n  /**\n   * Return the next value in the sequence as a primitive long.\n   */\n  public long nextLong() {\n    long ret = countstart + counter.getAndIncrement() % interval;\n    setLastValue(ret);\n    return ret;\n  }\n\n  @Override\n  public Number nextValue() {\n    return nextLong();\n  }\n\n  @Override\n  public double mean() {\n    throw new UnsupportedOperationException(\"Can't compute mean of non-stationary distribution!\");\n  }\n}\n"
  },
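SequentialGenerator's wrap-around comes from a single expression: `countstart + counter.getAndIncrement() % interval`, where `interval` is the number of distinct values. A minimal sketch of the same arithmetic (hypothetical class, mirroring the generator's math):

```java
import java.util.concurrent.atomic.AtomicLong;

// Standalone sketch of SequentialGenerator's wrap-around arithmetic:
// values run countstart, countstart+1, ..., countend, then wrap.
public class SequentialSketch {
  private final AtomicLong counter = new AtomicLong();
  private final long countstart;
  private final long interval; // number of distinct values in the cycle

  public SequentialSketch(long countstart, long countend) {
    this.countstart = countstart;
    this.interval = countend - countstart + 1;
  }

  public long nextLong() {
    // getAndIncrement is atomic, so concurrent callers each get a distinct
    // position in the cycle without external locking
    return countstart + counter.getAndIncrement() % interval;
  }
}
```

With `countstart = 10` and `countend = 12`, the sequence is 10, 11, 12, 10, 11, … — three distinct values, then a wrap.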
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/SkewedLatestGenerator.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.generator;\n\n/**\n * Generate a popularity distribution of items, skewed to favor recent items significantly more than older items.\n */\npublic class SkewedLatestGenerator extends NumberGenerator {\n  private CounterGenerator basis;\n  private final ZipfianGenerator zipfian;\n\n  public SkewedLatestGenerator(CounterGenerator basis) {\n    this.basis = basis;\n    zipfian = new ZipfianGenerator(this.basis.lastValue());\n    nextValue();\n  }\n\n  /**\n   * Generate the next string in the distribution, skewed Zipfian favoring the items most recently returned by\n   * the basis generator.\n   */\n  @Override\n  public Long nextValue() {\n    long max = basis.lastValue();\n    long next = max - zipfian.nextLong(max);\n    setLastValue(next);\n    return next;\n  }\n\n  public static void main(String[] args) {\n    SkewedLatestGenerator gen = new SkewedLatestGenerator(new CounterGenerator(1000));\n    for (int i = 0; i < Integer.parseInt(args[0]); i++) {\n      System.out.println(gen.nextString());\n    }\n  }\n\n  @Override\n  public double mean() {\n    throw new UnsupportedOperationException(\"Can't compute mean of non-stationary distribution!\");\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/UniformGenerator.java",
"content": "/**\n * Copyright (c) 2010 Yahoo! Inc. Copyright (c) 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.generator;\n\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.List;\n\n/**\n * A generator that returns strings drawn uniformly at random from a specified set of values.\n */\npublic class UniformGenerator extends Generator<String> {\n  private final List<String> values;\n  private String laststring;\n  private final UniformLongGenerator gen;\n\n  /**\n   * Creates a generator that will return strings from the specified set uniformly randomly.\n   */\n  public UniformGenerator(Collection<String> values) {\n    this.values = new ArrayList<>(values);\n    laststring = null;\n    gen = new UniformLongGenerator(0, values.size() - 1);\n  }\n\n  /**\n   * Generate the next string in the distribution.\n   */\n  @Override\n  public String nextValue() {\n    laststring = values.get(gen.nextValue().intValue());\n    return laststring;\n  }\n\n  /**\n   * Return the previous string generated by the distribution; e.g., returned from the last nextValue() call.\n   * Calling lastValue() should not advance the distribution or have any side effects. If nextValue() has not yet\n   * been called, lastValue() should return something reasonable.\n   */\n  @Override\n  public String lastValue() {\n    if (laststring == null) {\n      nextValue();\n    }\n    return laststring;\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/UniformLongGenerator.java",
"content": "/**\r\n * Copyright (c) 2010 Yahoo! Inc. Copyright (c) 2017 YCSB contributors. All rights reserved.\r\n * <p>\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n * <p>\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n * <p>\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.generator;\r\n\r\nimport com.yahoo.ycsb.Utils;\r\n\r\n/**\r\n * Generates longs randomly uniform from an interval.\r\n */\r\npublic class UniformLongGenerator extends NumberGenerator {\r\n  private final long lb, ub, interval;\r\n\r\n  /**\r\n   * Creates a generator that will return longs uniformly randomly from the \r\n   * interval [lb,ub] inclusive (that is, lb and ub are possible values).\r\n   *\r\n   * @param lb the lower bound (inclusive) of generated values\r\n   * @param ub the upper bound (inclusive) of generated values\r\n   */\r\n  public UniformLongGenerator(long lb, long ub) {\r\n    this.lb = lb;\r\n    this.ub = ub;\r\n    interval = this.ub - this.lb + 1;\r\n  }\r\n\r\n  @Override\r\n  public Long nextValue() {\r\n    // Math.floorMod avoids the negative results (and the Long.MIN_VALUE edge\r\n    // case) that Math.abs(nextLong()) % interval can produce.\r\n    long ret = Math.floorMod(Utils.random().nextLong(), interval) + lb;\r\n    setLastValue(ret);\r\n\r\n    return ret;\r\n  }\r\n\r\n  @Override\r\n  public double mean() {\r\n    return (lb + ub) / 2.0;\r\n  }\r\n}\r\n"
  },
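One subtlety in drawing a bounded long: `Math.abs(rng.nextLong()) % n` can go negative, because `Math.abs(Long.MIN_VALUE)` is still `Long.MIN_VALUE`. `Math.floorMod` sidesteps this, since its result is always non-negative for a positive divisor. A small demonstration with a hypothetical helper (not YCSB code):

```java
import java.util.Random;

// Demonstrates mapping an arbitrary long into [lb, ub] without the
// Math.abs(Long.MIN_VALUE) pitfall: Math.floorMod(x, n) is in [0, n) for
// any x when n > 0, including x == Long.MIN_VALUE.
public class BoundedLong {
  public static long bound(long raw, long lb, long ub) {
    long interval = ub - lb + 1;
    return Math.floorMod(raw, interval) + lb;
  }

  public static long draw(Random rnd, long lb, long ub) {
    return bound(rnd.nextLong(), lb, ub);
  }
}
```

Note this still carries a tiny modulo bias when the interval does not evenly divide 2^64, which is usually acceptable for benchmark workloads.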
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/UnixEpochTimestampGenerator.java",
    "content": "/**\n * Copyright (c) 2016-2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.generator;\n\nimport java.util.concurrent.TimeUnit;\n\n/**\n * A generator that produces Unix epoch timestamps in seconds, milli, micro or\n * nanoseconds and increments the stamp a given interval each time \n * {@link #nextValue()} is called. The result is emitted as a long in the same\n * way calls to {@code System.currentTimeMillis()} and \n * {@code System.nanoTime()} behave.\n * <p>\n * By default, the current system time of the host is used as the starting\n * timestamp. Calling {@link #initalizeTimestamp(long)} can adjust the timestamp\n * back or forward in time. For example, if a workload will generate an hour of \n * data at 1 minute intervals, then to set the start timestamp an hour in the past\n * from the current run, use:\n * <pre>{@code\n * UnixEpochTimestampGenerator generator = new UnixEpochTimestampGenerator();\n * generator.initalizeTimestamp(-60);\n * }</pre>\n * A constructor is also present for setting an explicit start time. 
\n * Negative intervals are supported as well for iterating back in time.\n * <p>\n * WARNING: This generator is not thread safe and should not be called from multiple\n * threads.\n */\npublic class UnixEpochTimestampGenerator extends Generator<Long> {\n\n  /** The base timestamp used as a starting reference. */\n  protected long startTimestamp;\n  \n  /** The current timestamp that will be incremented. */\n  protected long currentTimestamp;\n\n  /** The last used timestamp. Should always be one interval behind current. */\n  protected long lastTimestamp;\n\n  /** The interval to increment by. Multiplied by {@link #timeUnits}. */\n  protected long interval;\n\n  /** The units of time the interval represents. */\n  protected TimeUnit timeUnits;\n\n  /**\n   * Default ctor with the current system time and a 60 second interval.\n   */\n  public UnixEpochTimestampGenerator() {\n    this(60, TimeUnit.SECONDS);\n  }\n\n  /**\n   * Ctor that uses the current system time as current.\n   * @param interval The interval for incrementing the timestamp.\n   * @param timeUnits The units of time the increment represents.\n   */\n  public UnixEpochTimestampGenerator(final long interval, final TimeUnit timeUnits) {\n    this.interval = interval;\n    this.timeUnits = timeUnits;\n    // move the first timestamp by 1 interval so that the first call to nextValue\n    // returns this timestamp\n    initalizeTimestamp(-1);\n    currentTimestamp -= getOffset(1);\n    lastTimestamp = currentTimestamp;\n  }\n\n  /**\n   * Ctor for supplying a starting timestamp.\n   * @param interval The interval for incrementing the timestamp.\n   * @param timeUnits The units of time the increment represents.\n   * @param startTimestamp The start timestamp to use. \n   * NOTE that this must match the time units used for the interval. 
\n   * If the units are in nanoseconds, provide a nanosecond timestamp {@code System.nanoTime()}\n   * or in microseconds, {@code System.nanoTime() / 1000}\n   * or in millis, {@code System.currentTimeMillis()}\n   * or seconds and any interval above, {@code System.currentTimeMillis() / 1000}\n   */\n  public UnixEpochTimestampGenerator(final long interval, final TimeUnit timeUnits,\n                                     final long startTimestamp) {\n    this.interval = interval;\n    this.timeUnits = timeUnits;\n    // move the first timestamp by 1 interval so that the first call to nextValue\n    // returns this timestamp\n    currentTimestamp = startTimestamp - getOffset(1);\n    this.startTimestamp = currentTimestamp;\n    lastTimestamp = currentTimestamp - getOffset(1);\n  }\n\n  /**\n   * Sets the starting timestamp to the current system time plus the interval offset.\n   * E.g. to set the time an hour in the past, supply a value of {@code -60}.\n   * @param intervalOffset The interval to increment or decrement by.\n   */\n  public void initalizeTimestamp(final long intervalOffset) {\n    switch (timeUnits) {\n    case NANOSECONDS:\n      currentTimestamp = System.nanoTime() + getOffset(intervalOffset);\n      break;\n    case MICROSECONDS:\n      currentTimestamp = (System.nanoTime() / 1000) + getOffset(intervalOffset);\n      break;\n    case MILLISECONDS:\n      currentTimestamp = System.currentTimeMillis() + getOffset(intervalOffset);\n      break;\n    case SECONDS:\n      currentTimestamp = (System.currentTimeMillis() / 1000) +\n          getOffset(intervalOffset);\n      break;\n    case MINUTES:\n      currentTimestamp = (System.currentTimeMillis() / 1000) +\n          getOffset(intervalOffset);\n      break;\n    case HOURS:\n      currentTimestamp = (System.currentTimeMillis() / 1000) +\n          getOffset(intervalOffset);\n      break;\n    case DAYS:\n      currentTimestamp = (System.currentTimeMillis() / 1000) +\n          
getOffset(intervalOffset);\n      break;\n    default:\n      throw new IllegalArgumentException(\"Unhandled time unit type: \" + timeUnits);\n    }\n    startTimestamp = currentTimestamp;\n  }\n\n  @Override\n  public Long nextValue() {\n    lastTimestamp = currentTimestamp;\n    currentTimestamp += getOffset(1);\n    return currentTimestamp;\n  }\n\n  /**\n   * Returns the proper increment offset to use given the interval and timeunits. \n   * @param intervalOffset The amount of offset to multiply by.\n   * @return An offset value to adjust the timestamp by.\n   */\n  public long getOffset(final long intervalOffset) {\n    switch (timeUnits) {\n    case NANOSECONDS:\n    case MICROSECONDS:\n    case MILLISECONDS:\n    case SECONDS:\n      return intervalOffset * interval;\n    case MINUTES:\n      return intervalOffset * interval * (long) 60;\n    case HOURS:\n      return intervalOffset * interval * (long) (60 * 60);\n    case DAYS:\n      return intervalOffset * interval * (long) (60 * 60 * 24);\n    default:\n      throw new IllegalArgumentException(\"Unhandled time unit type: \" + timeUnits);\n    }\n  }\n\n  @Override\n  public Long lastValue() {\n    return lastTimestamp;\n  }\n\n  /** @return The current timestamp as set by the last call to {@link #nextValue()} */\n  public long currentValue() {\n    return currentTimestamp;\n  }\n\n}"
  },
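UnixEpochTimestampGenerator's `getOffset` boils down to a per-unit multiplier on the interval: sub-second through second units pass through unchanged, while minutes, hours and days are converted to seconds because the generator emits second-granularity epochs for those units. A condensed sketch of that mapping (hypothetical helper class):

```java
import java.util.concurrent.TimeUnit;

// Condensed sketch of the getOffset unit mapping: the interval is multiplied
// by a per-unit factor; MINUTES and coarser are expressed in seconds.
public class OffsetSketch {
  public static long offset(TimeUnit units, long interval, long intervalOffset) {
    switch (units) {
    case NANOSECONDS:
    case MICROSECONDS:
    case MILLISECONDS:
    case SECONDS:
      return intervalOffset * interval;
    case MINUTES:
      return intervalOffset * interval * 60L;
    case HOURS:
      return intervalOffset * interval * 60L * 60L;
    case DAYS:
      return intervalOffset * interval * 60L * 60L * 24L;
    default:
      throw new IllegalArgumentException("Unhandled time unit type: " + units);
    }
  }
}
```

So a 60-second interval advances the epoch by 60 per step, while a 1-day interval advances a second-granularity epoch by 86400 per step.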
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/ZipfianGenerator.java",
    "content": "/**\r\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.\r\n * <p>\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n * <p>\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n * <p>\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.generator;\r\n\r\nimport com.yahoo.ycsb.Utils;\r\n\r\n/**\r\n * A generator of a zipfian distribution. It produces a sequence of items, such that some items are more popular than\r\n * others, according to a zipfian distribution. When you construct an instance of this class, you specify the number\r\n * of items in the set to draw from, either by specifying an itemcount (so that the sequence is of items from 0 to\r\n * itemcount-1) or by specifying a min and a max (so that the sequence is of items from min to max inclusive). After\r\n * you construct the instance, you can change the number of items by calling nextInt(itemcount) or nextLong(itemcount).\r\n *\r\n * Note that the popular items will be clustered together, e.g. item 0 is the most popular, item 1 the second most\r\n * popular, and so on (or min is the most popular, min+1 the next most popular, etc.) If you don't want this clustering,\r\n * and instead want the popular items scattered throughout the item space, then use ScrambledZipfianGenerator instead.\r\n *\r\n * Be aware: initializing this generator may take a long time if there are lots of items to choose from (e.g. over a\r\n * minute for 100 million objects). 
This is because certain mathematical values need to be computed to properly\r\n * generate a zipfian skew, and one of those values (zeta) is a sum sequence from 1 to n, where n is the itemcount.\r\n * Note that if you increase the number of items in the set, we can compute a new zeta incrementally, so it should be\r\n * fast unless you have added millions of items. However, if you decrease the number of items, we recompute zeta from\r\n * scratch, so this can take a long time.\r\n *\r\n * The algorithm used here is from \"Quickly Generating Billion-Record Synthetic Databases\", Jim Gray et al, SIGMOD 1994.\r\n */\r\npublic class ZipfianGenerator extends NumberGenerator {\r\n  public static final double ZIPFIAN_CONSTANT = 0.99;\r\n\r\n  /**\r\n   * Number of items.\r\n   */\r\n  private final long items;\r\n\r\n  /**\r\n   * Min item to generate.\r\n   */\r\n  private final long base;\r\n\r\n  /**\r\n   * The zipfian constant to use.\r\n   */\r\n  private final double zipfianconstant;\r\n\r\n  /**\r\n   * Computed parameters for generating the distribution.\r\n   */\r\n  private double alpha, zetan, eta, theta, zeta2theta;\r\n\r\n  /**\r\n   * The number of items used to compute zetan the last time.\r\n   */\r\n  private long countforzeta;\r\n\r\n  /**\r\n   * Flag to prevent problems. If you increase the number of items the zipfian generator is allowed to choose from,\r\n   * this code will incrementally compute a new zeta value for the larger itemcount. However, if you decrease the\r\n   * number of items, the code computes zeta from scratch; this is expensive for large itemsets.\r\n   * Usually this is not intentional; e.g. one thread thinks the number of items is 1001 and calls \"nextLong()\" with\r\n   * that item count; then another thread who thinks the number of items is 1000 calls nextLong() with itemcount=1000\r\n   * triggering the expensive recomputation. (It is expensive for 100 million items, not really for 1000 items.) 
Why\r\n   * did the second thread think there were only 1000 items? maybe it read the item count before the first thread\r\n   * incremented it. So this flag allows you to say if you really do want that recomputation. If true, then the code\r\n   * will recompute zeta if the itemcount goes down. If false, the code will assume itemcount only goes up, and never\r\n   * recompute.\r\n   */\r\n  private boolean allowitemcountdecrease = false;\r\n\r\n  /******************************* Constructors **************************************/\r\n\r\n  /**\r\n   * Create a zipfian generator for the specified number of items.\r\n   * @param items The number of items in the distribution.\r\n   */\r\n  public ZipfianGenerator(long items) {\r\n    this(0, items - 1);\r\n  }\r\n\r\n  /**\r\n   * Create a zipfian generator for items between min and max.\r\n   * @param min The smallest integer to generate in the sequence.\r\n   * @param max The largest integer to generate in the sequence.\r\n   */\r\n  public ZipfianGenerator(long min, long max) {\r\n    this(min, max, ZIPFIAN_CONSTANT);\r\n  }\r\n\r\n  /**\r\n   * Create a zipfian generator for the specified number of items using the specified zipfian constant.\r\n   *\r\n   * @param items The number of items in the distribution.\r\n   * @param zipfianconstant The zipfian constant to use.\r\n   */\r\n  public ZipfianGenerator(long items, double zipfianconstant) {\r\n    this(0, items - 1, zipfianconstant);\r\n  }\r\n\r\n  /**\r\n   * Create a zipfian generator for items between min and max (inclusive) for the specified zipfian constant.\r\n   * @param min The smallest integer to generate in the sequence.\r\n   * @param max The largest integer to generate in the sequence.\r\n   * @param zipfianconstant The zipfian constant to use.\r\n   */\r\n  public ZipfianGenerator(long min, long max, double zipfianconstant) {\r\n    this(min, max, zipfianconstant, zetastatic(max - min + 1, zipfianconstant));\r\n  }\r\n\r\n  /**\r\n   * Create a 
zipfian generator for items between min and max (inclusive) for the specified zipfian constant, using\r\n   * the precomputed value of zeta.\r\n   *\r\n   * @param min The smallest integer to generate in the sequence.\r\n   * @param max The largest integer to generate in the sequence.\r\n   * @param zipfianconstant The zipfian constant to use.\r\n   * @param zetan The precomputed zeta constant.\r\n   */\r\n  public ZipfianGenerator(long min, long max, double zipfianconstant, double zetan) {\r\n\r\n    items = max - min + 1;\r\n    base = min;\r\n    this.zipfianconstant = zipfianconstant;\r\n\r\n    theta = this.zipfianconstant;\r\n\r\n    zeta2theta = zeta(2, theta);\r\n    \r\n    alpha = 1.0 / (1.0 - theta);\r\n    this.zetan = zetan;\r\n    countforzeta = items;\r\n    eta = (1 - Math.pow(2.0 / items, 1 - theta)) / (1 - zeta2theta / this.zetan);\r\n\r\n    nextValue();\r\n  }\r\n\r\n  /**************************************************************************/\r\n\r\n  /**\r\n   * Compute the zeta constant needed for the distribution. Do this from scratch for a distribution with n items,\r\n   * using the zipfian constant thetaVal. Remember the value of n, so if we change the itemcount, we can recompute zeta.\r\n   *\r\n   * @param n The number of items to compute zeta over.\r\n   * @param thetaVal The zipfian constant.\r\n   */\r\n  double zeta(long n, double thetaVal) {\r\n    countforzeta = n;\r\n    return zetastatic(n, thetaVal);\r\n  }\r\n\r\n  /**\r\n   * Compute the zeta constant needed for the distribution. Do this from scratch for a distribution with n items,\r\n   * using the zipfian constant theta. This is a static version of the function which will not remember n.\r\n   * @param n The number of items to compute zeta over.\r\n   * @param theta The zipfian constant.\r\n   */\r\n  static double zetastatic(long n, double theta) {\r\n    return zetastatic(0, n, theta, 0);\r\n  }\r\n\r\n  /**\r\n   * Compute the zeta constant needed for the distribution. 
Do this incrementally for a distribution that\r\n   * has n items now but used to have st items. Use the zipfian constant thetaVal. Remember the new value of\r\n   * n so that if we change the itemcount, we'll know to recompute zeta.\r\n   *\r\n   * @param st The number of items used to compute the last initialsum\r\n   * @param n The number of items to compute zeta over.\r\n   * @param thetaVal The zipfian constant.\r\n   * @param initialsum The value of zeta we are computing incrementally from.\r\n   */\r\n  double zeta(long st, long n, double thetaVal, double initialsum) {\r\n    countforzeta = n;\r\n    return zetastatic(st, n, thetaVal, initialsum);\r\n  }\r\n\r\n  /**\r\n   * Compute the zeta constant needed for the distribution. Do this incrementally for a distribution that\r\n   * has n items now but used to have st items. Use the zipfian constant theta. Remember the new value of\r\n   * n so that if we change the itemcount, we'll know to recompute zeta.\r\n   * @param st The number of items used to compute the last initialsum\r\n   * @param n The number of items to compute zeta over.\r\n   * @param theta The zipfian constant.\r\n   * @param initialsum The value of zeta we are computing incrementally from.\r\n   */\r\n  static double zetastatic(long st, long n, double theta, double initialsum) {\r\n    double sum = initialsum;\r\n    for (long i = st; i < n; i++) {\r\n\r\n      sum += 1 / (Math.pow(i + 1, theta));\r\n    }\r\n\r\n    //System.out.println(\"countforzeta=\"+countforzeta);\r\n\r\n    return sum;\r\n  }\r\n\r\n  /****************************************************************************************/\r\n\r\n\r\n  /**\r\n   * Generate the next item as a long.\r\n   *\r\n   * @param itemcount The number of items in the distribution.\r\n   * @return The next item in the sequence.\r\n   */\r\n  long nextLong(long itemcount) {\r\n    //from \"Quickly Generating Billion-Record Synthetic Databases\", Jim Gray et al, SIGMOD 1994\r\n\r\n    if 
(itemcount != countforzeta) {\r\n\r\n      //have to recompute zetan and eta, since they depend on itemcount\r\n      synchronized (this) {\r\n        if (itemcount > countforzeta) {\r\n          //System.err.println(\"WARNING: Incrementally recomputing Zipfian distribution. (itemcount=\"+itemcount+\"\r\n          // countforzeta=\"+countforzeta+\")\");\r\n\r\n          //we have added more items. can compute zetan incrementally, which is cheaper\r\n          zetan = zeta(countforzeta, itemcount, theta, zetan);\r\n          eta = (1 - Math.pow(2.0 / items, 1 - theta)) / (1 - zeta2theta / zetan);\r\n        } else if ((itemcount < countforzeta) && (allowitemcountdecrease)) {\r\n          //have to start over with zetan\r\n          //note : for large itemsets, this is very slow. so don't do it!\r\n\r\n          //TODO: can also have a negative incremental computation, e.g. if you decrease the number of items,\r\n          // then just subtract the zeta sequence terms for the items that went away. This would be faster than\r\n          // recomputing from scratch when the number of items decreases\r\n\r\n          System.err.println(\"WARNING: Recomputing Zipfian distribution. This is slow and should be avoided. \" +\r\n              \"(itemcount=\" + itemcount + \" countforzeta=\" + countforzeta + \")\");\r\n\r\n          zetan = zeta(itemcount, theta);\r\n          eta = (1 - Math.pow(2.0 / items, 1 - theta)) / (1 - zeta2theta / zetan);\r\n        }\r\n      }\r\n    }\r\n\r\n    double u = Utils.random().nextDouble();\r\n    double uz = u * zetan;\r\n\r\n    if (uz < 1.0) {\r\n      return base;\r\n    }\r\n\r\n    if (uz < 1.0 + Math.pow(0.5, theta)) {\r\n      return base + 1;\r\n    }\r\n\r\n    long ret = base + (long) ((itemcount) * Math.pow(eta * u - eta + 1, alpha));\r\n    setLastValue(ret);\r\n    return ret;\r\n  }\r\n\r\n  /**\r\n   * Return the next value, skewed by the Zipfian distribution. 
The 0th item will be the most popular, followed by\r\n   * the 1st, followed by the 2nd, etc. (Or, if min != 0, the min-th item is the most popular, the min+1th item the\r\n   * next most popular, etc.) If you want the popular items scattered throughout the item space, use\r\n   * ScrambledZipfianGenerator instead.\r\n   */\r\n  @Override\r\n  public Long nextValue() {\r\n    return nextLong(items);\r\n  }\r\n\r\n  public static void main(String[] args) {\r\n    new ZipfianGenerator(ScrambledZipfianGenerator.ITEM_COUNT);\r\n  }\r\n\r\n  /**\r\n   * @todo Implement ZipfianGenerator.mean()\r\n   */\r\n  @Override\r\n  public double mean() {\r\n    throw new UnsupportedOperationException(\"@todo implement ZipfianGenerator.mean()\");\r\n  }\r\n}\r\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/generator/package-info.java",
    "content": "/*\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB generator package.\n */\npackage com.yahoo.ycsb.generator;\n\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/Measurements.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.measurements;\n\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.measurements.exporter.MeasurementsExporter;\n\nimport java.io.IOException;\nimport java.util.Properties;\nimport java.util.concurrent.ConcurrentHashMap;\n\n/**\n * Collects latency measurements, and reports them when requested.\n */\npublic class Measurements {\n  /**\n   * All supported measurement types are defined in this enum.\n   */\n  public enum MeasurementType {\n    HISTOGRAM,\n    HDRHISTOGRAM,\n    HDRHISTOGRAM_AND_HISTOGRAM,\n    HDRHISTOGRAM_AND_RAW,\n    TIMESERIES,\n    RAW\n  }\n\n  public static final String MEASUREMENT_TYPE_PROPERTY = \"measurementtype\";\n  private static final String MEASUREMENT_TYPE_PROPERTY_DEFAULT = \"hdrhistogram\";\n\n  public static final String MEASUREMENT_INTERVAL = \"measurement.interval\";\n  private static final String MEASUREMENT_INTERVAL_DEFAULT = \"op\";\n\n  public static final String MEASUREMENT_TRACK_JVM_PROPERTY = \"measurement.trackjvm\";\n  public static final String MEASUREMENT_TRACK_JVM_PROPERTY_DEFAULT = \"false\";\n\n  private static Measurements singleton = null;\n  private static Properties measurementproperties = null;\n\n  public static void setProperties(Properties props) {\n    
measurementproperties = props;\n  }\n\n  /**\n   * Return the singleton Measurements object.\n   */\n  public static synchronized Measurements getMeasurements() {\n    if (singleton == null) {\n      singleton = new Measurements(measurementproperties);\n    }\n    return singleton;\n  }\n\n  private final ConcurrentHashMap<String, OneMeasurement> opToMesurementMap;\n  private final ConcurrentHashMap<String, OneMeasurement> opToIntendedMesurementMap;\n  private final MeasurementType measurementType;\n  private final int measurementInterval;\n  private final Properties props;\n\n  /**\n   * Create a new object with the specified properties.\n   */\n  public Measurements(Properties props) {\n    opToMesurementMap = new ConcurrentHashMap<>();\n    opToIntendedMesurementMap = new ConcurrentHashMap<>();\n\n    this.props = props;\n\n    String mTypeString = this.props.getProperty(MEASUREMENT_TYPE_PROPERTY, MEASUREMENT_TYPE_PROPERTY_DEFAULT);\n    switch (mTypeString) {\n    case \"histogram\":\n      measurementType = MeasurementType.HISTOGRAM;\n      break;\n    case \"hdrhistogram\":\n      measurementType = MeasurementType.HDRHISTOGRAM;\n      break;\n    case \"hdrhistogram+histogram\":\n      measurementType = MeasurementType.HDRHISTOGRAM_AND_HISTOGRAM;\n      break;\n    case \"hdrhistogram+raw\":\n      measurementType = MeasurementType.HDRHISTOGRAM_AND_RAW;\n      break;\n    case \"timeseries\":\n      measurementType = MeasurementType.TIMESERIES;\n      break;\n    case \"raw\":\n      measurementType = MeasurementType.RAW;\n      break;\n    default:\n      throw new IllegalArgumentException(\"unknown \" + MEASUREMENT_TYPE_PROPERTY + \"=\" + mTypeString);\n    }\n\n    String mIntervalString = this.props.getProperty(MEASUREMENT_INTERVAL, MEASUREMENT_INTERVAL_DEFAULT);\n    switch (mIntervalString) {\n    case \"op\":\n      measurementInterval = 0;\n      break;\n    case \"intended\":\n      measurementInterval = 1;\n      break;\n    case \"both\":\n      
measurementInterval = 2;\n      break;\n    default:\n      throw new IllegalArgumentException(\"unknown \" + MEASUREMENT_INTERVAL + \"=\" + mIntervalString);\n    }\n  }\n\n  private OneMeasurement constructOneMeasurement(String name) {\n    switch (measurementType) {\n    case HISTOGRAM:\n      return new OneMeasurementHistogram(name, props);\n    case HDRHISTOGRAM:\n      return new OneMeasurementHdrHistogram(name, props);\n    case HDRHISTOGRAM_AND_HISTOGRAM:\n      return new TwoInOneMeasurement(name,\n          new OneMeasurementHdrHistogram(\"Hdr\" + name, props),\n          new OneMeasurementHistogram(\"Bucket\" + name, props));\n    case HDRHISTOGRAM_AND_RAW:\n      return new TwoInOneMeasurement(name,\n          new OneMeasurementHdrHistogram(\"Hdr\" + name, props),\n          new OneMeasurementRaw(\"Raw\" + name, props));\n    case TIMESERIES:\n      return new OneMeasurementTimeSeries(name, props);\n    case RAW:\n      return new OneMeasurementRaw(name, props);\n    default:\n      throw new AssertionError(\"Impossible to be here. Dead code reached. Bugs?\");\n    }\n  }\n\n  static class StartTimeHolder {\n    protected long time;\n\n    long startTime() {\n      if (time == 0) {\n        return System.nanoTime();\n      } else {\n        return time;\n      }\n    }\n  }\n\n  private final ThreadLocal<StartTimeHolder> tlIntendedStartTime = new ThreadLocal<Measurements.StartTimeHolder>() {\n    protected StartTimeHolder initialValue() {\n      return new StartTimeHolder();\n    }\n  };\n\n  public void setIntendedStartTimeNs(long time) {\n    if (measurementInterval == 0) {\n      return;\n    }\n    tlIntendedStartTime.get().time = time;\n  }\n\n  public long getIntendedtartTimeNs() {\n    if (measurementInterval == 0) {\n      return 0L;\n    }\n    return tlIntendedStartTime.get().startTime();\n  }\n\n  /**\n   * Report a single value of a single metric. E.g. 
for read latency, operation=\"READ\" and latency is the measured\n   * value.\n   */\n  public void measure(String operation, int latency) {\n    if (measurementInterval == 1) {\n      return;\n    }\n    try {\n      OneMeasurement m = getOpMeasurement(operation);\n      m.measure(latency);\n    } catch (java.lang.ArrayIndexOutOfBoundsException e) {\n      // This seems like a terribly hacky way to cover up for a bug in the measurement code\n      System.out.println(\"ERROR: java.lang.ArrayIndexOutOfBoundsException - ignoring and continuing\");\n      e.printStackTrace();\n      e.printStackTrace(System.out);\n    }\n  }\n\n  /**\n   * Report a single value of a single metric. E.g. for read latency, operation=\"READ\" and latency is the measured\n   * value.\n   */\n  public void measureIntended(String operation, int latency) {\n    if (measurementInterval == 0) {\n      return;\n    }\n    try {\n      OneMeasurement m = getOpIntendedMeasurement(operation);\n      m.measure(latency);\n    } catch (java.lang.ArrayIndexOutOfBoundsException e) {\n      // This seems like a terribly hacky way to cover up for a bug in the measurement code\n      System.out.println(\"ERROR: java.lang.ArrayIndexOutOfBoundsException - ignoring and continuing\");\n      e.printStackTrace();\n      e.printStackTrace(System.out);\n    }\n  }\n\n  private OneMeasurement getOpMeasurement(String operation) {\n    OneMeasurement m = opToMesurementMap.get(operation);\n    if (m == null) {\n      m = constructOneMeasurement(operation);\n      OneMeasurement oldM = opToMesurementMap.putIfAbsent(operation, m);\n      if (oldM != null) {\n        m = oldM;\n      }\n    }\n    return m;\n  }\n\n  private OneMeasurement getOpIntendedMeasurement(String operation) {\n    OneMeasurement m = opToIntendedMesurementMap.get(operation);\n    if (m == null) {\n      final String name = measurementInterval == 1 ? 
operation : \"Intended-\" + operation;\n      m = constructOneMeasurement(name);\n      OneMeasurement oldM = opToIntendedMesurementMap.putIfAbsent(operation, m);\n      if (oldM != null) {\n        m = oldM;\n      }\n    }\n    return m;\n  }\n\n  /**\n   * Report a return code for a single DB operation.\n   */\n  public void reportStatus(final String operation, final Status status) {\n    OneMeasurement m = measurementInterval == 1 ?\n        getOpIntendedMeasurement(operation) :\n        getOpMeasurement(operation);\n    m.reportStatus(status);\n  }\n\n  /**\n   * Export the current measurements to a suitable format.\n   *\n   * @param exporter Exporter representing the type of format to write to.\n   * @throws IOException Thrown if the export failed.\n   */\n  public void exportMeasurements(MeasurementsExporter exporter) throws IOException {\n    for (OneMeasurement measurement : opToMesurementMap.values()) {\n      measurement.exportMeasurements(exporter);\n    }\n    for (OneMeasurement measurement : opToIntendedMesurementMap.values()) {\n      measurement.exportMeasurements(exporter);\n    }\n  }\n\n  /**\n   * Return a one line summary of the measurements.\n   */\n  public synchronized String getSummary() {\n    String ret = \"\";\n    for (OneMeasurement m : opToMesurementMap.values()) {\n      ret += m.getSummary() + \" \";\n    }\n    for (OneMeasurement m : opToIntendedMesurementMap.values()) {\n      ret += m.getSummary() + \" \";\n    }\n    return ret;\n  }\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/OneMeasurement.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.measurements;\n\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.measurements.exporter.MeasurementsExporter;\n\nimport java.io.IOException;\nimport java.util.Map;\nimport java.util.concurrent.ConcurrentHashMap;\nimport java.util.concurrent.atomic.AtomicInteger;\n\n/**\n * A single measured metric (such as READ LATENCY).\n */\npublic abstract class OneMeasurement {\n\n  private final String name;\n  private final ConcurrentHashMap<Status, AtomicInteger> returncodes;\n\n  public String getName() {\n    return name;\n  }\n\n  /**\n   * @param name measurement name\n   */\n  public OneMeasurement(String name) {\n    this.name = name;\n    this.returncodes = new ConcurrentHashMap<>();\n  }\n\n  public abstract void measure(int latency);\n\n  public abstract String getSummary();\n\n  /**\n   * No need for synchronization, using CHM to deal with that.\n   */\n  public void reportStatus(Status status) {\n    AtomicInteger counter = returncodes.get(status);\n\n    if (counter == null) {\n      counter = new AtomicInteger();\n      AtomicInteger other = returncodes.putIfAbsent(status, counter);\n      if (other != null) {\n        counter = other;\n      }\n    }\n\n    counter.incrementAndGet();\n  }\n\n  /**\n   * Export the 
current measurements to a suitable format.\n   *\n   * @param exporter Exporter representing the type of format to write to.\n   * @throws IOException Thrown if the export failed.\n   */\n  public abstract void exportMeasurements(MeasurementsExporter exporter) throws IOException;\n\n  protected final void exportStatusCounts(MeasurementsExporter exporter) throws IOException {\n    for (Map.Entry<Status, AtomicInteger> entry : returncodes.entrySet()) {\n      exporter.write(getName(), \"Return=\" + entry.getKey().getName(), entry.getValue().get());\n    }\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/OneMeasurementHdrHistogram.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.measurements;\n\nimport com.yahoo.ycsb.measurements.exporter.MeasurementsExporter;\nimport org.HdrHistogram.Histogram;\nimport org.HdrHistogram.HistogramIterationValue;\nimport org.HdrHistogram.HistogramLogWriter;\nimport org.HdrHistogram.Recorder;\n\nimport java.io.FileNotFoundException;\nimport java.io.FileOutputStream;\nimport java.io.IOException;\nimport java.io.PrintStream;\nimport java.text.DecimalFormat;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Properties;\n\n/**\n * Take measurements and maintain a HdrHistogram of a given metric, such as READ LATENCY.\n *\n */\npublic class OneMeasurementHdrHistogram extends OneMeasurement {\n\n  // we need one log per measurement histogram\n  private final PrintStream log;\n  private final HistogramLogWriter histogramLogWriter;\n\n  private final Recorder histogram;\n  private Histogram totalHistogram;\n\n  /**\n   * The name of the property for deciding what percentile values to output.\n   */\n  public static final String PERCENTILES_PROPERTY = \"hdrhistogram.percentiles\";\n\n  /**\n   * The default value for the hdrhistogram.percentiles property.\n   */\n  public static final String PERCENTILES_PROPERTY_DEFAULT = \"95,99\";\n\n  private final 
List<Double> percentiles;\n\n  public OneMeasurementHdrHistogram(String name, Properties props) {\n    super(name);\n    percentiles = getPercentileValues(props.getProperty(PERCENTILES_PROPERTY, PERCENTILES_PROPERTY_DEFAULT));\n    boolean shouldLog = Boolean.parseBoolean(props.getProperty(\"hdrhistogram.fileoutput\", \"false\"));\n    if (!shouldLog) {\n      log = null;\n      histogramLogWriter = null;\n    } else {\n      try {\n        final String hdrOutputFilename = props.getProperty(\"hdrhistogram.output.path\", \"\") + name + \".hdr\";\n        log = new PrintStream(new FileOutputStream(hdrOutputFilename), false);\n      } catch (FileNotFoundException e) {\n        throw new RuntimeException(\"Failed to open hdr histogram output file\", e);\n      }\n      histogramLogWriter = new HistogramLogWriter(log);\n      histogramLogWriter.outputComment(\"[Logging for: \" + name + \"]\");\n      histogramLogWriter.outputLogFormatVersion();\n      long now = System.currentTimeMillis();\n      histogramLogWriter.outputStartTime(now);\n      histogramLogWriter.setBaseTime(now);\n      histogramLogWriter.outputLegend();\n    }\n    histogram = new Recorder(3);\n  }\n\n  /**\n   * It appears latency is reported in micros.\n   * Using {@link Recorder} to support concurrent updates to histogram.\n   */\n  public void measure(int latencyInMicros) {\n    histogram.recordValue(latencyInMicros);\n  }\n\n  /**\n   * This is called from a main thread, on orderly termination.\n   */\n  @Override\n  public void exportMeasurements(MeasurementsExporter exporter) throws IOException {\n    // accumulate the last interval which was not caught by status thread\n    Histogram intervalHistogram = getIntervalHistogramAndAccumulate();\n    if (histogramLogWriter != null) {\n      histogramLogWriter.outputIntervalHistogram(intervalHistogram);\n      // we can close now\n      log.close();\n    }\n    exporter.write(getName(), \"Operations\", totalHistogram.getTotalCount());\n    
exporter.write(getName(), \"AverageLatency(us)\", totalHistogram.getMean());\n    exporter.write(getName(), \"MinLatency(us)\", totalHistogram.getMinValue());\n    exporter.write(getName(), \"MaxLatency(us)\", totalHistogram.getMaxValue());\n\n    for (Double percentile : percentiles) {\n      exporter.write(getName(), ordinal(percentile) + \"PercentileLatency(us)\",\n          totalHistogram.getValueAtPercentile(percentile));\n    }\n\n    exportStatusCounts(exporter);\n\n    // also export totalHistogram\n    for (HistogramIterationValue v : totalHistogram.recordedValues()) {\n      int value;\n      if (v.getValueIteratedTo() > (long)Integer.MAX_VALUE) {\n        value = Integer.MAX_VALUE;\n      } else {\n        value = (int)v.getValueIteratedTo();\n      }\n\n      exporter.write(getName(), Integer.toString(value), (double)v.getCountAtValueIteratedTo());\n    }\n  }\n\n  /**\n   * This is called periodically from the StatusThread. There's a single\n   * StatusThread per Client process. 
We optionally serialize the interval to\n   * log on this opportunity.\n   *\n   * @see com.yahoo.ycsb.measurements.OneMeasurement#getSummary()\n   */\n  @Override\n  public String getSummary() {\n    Histogram intervalHistogram = getIntervalHistogramAndAccumulate();\n    // we use the summary interval as the histogram file interval.\n    if (histogramLogWriter != null) {\n      histogramLogWriter.outputIntervalHistogram(intervalHistogram);\n    }\n\n    DecimalFormat d = new DecimalFormat(\"#.##\");\n    return \"[\" + getName() + \": Count=\" + intervalHistogram.getTotalCount() + \", Max=\"\n        + intervalHistogram.getMaxValue() + \", Min=\" + intervalHistogram.getMinValue() + \", Avg=\"\n        + d.format(intervalHistogram.getMean()) + \", 90=\" + d.format(intervalHistogram.getValueAtPercentile(90))\n        + \", 99=\" + d.format(intervalHistogram.getValueAtPercentile(99)) + \", 99.9=\"\n        + d.format(intervalHistogram.getValueAtPercentile(99.9)) + \", 99.99=\"\n        + d.format(intervalHistogram.getValueAtPercentile(99.99)) + \"]\";\n  }\n\n  private Histogram getIntervalHistogramAndAccumulate() {\n    Histogram intervalHistogram = histogram.getIntervalHistogram();\n    // add this to the total time histogram.\n    if (totalHistogram == null) {\n      totalHistogram = intervalHistogram;\n    } else {\n      totalHistogram.add(intervalHistogram);\n    }\n    return intervalHistogram;\n  }\n\n  /**\n   * Helper method to parse the given percentile value string.\n   *\n   * @param percentileString - comma-delimited string of numeric percentile values\n   * @return A List of Double percentile values\n   */\n  private List<Double> getPercentileValues(String percentileString) {\n    List<Double> percentileValues = new ArrayList<>();\n\n    try {\n      for (String rawPercentile : percentileString.split(\",\")) {\n        percentileValues.add(Double.parseDouble(rawPercentile));\n      }\n    } catch (Exception e) {\n      // If the given hdrhistogram.percentiles 
value is unreadable for whatever reason,\n      // then calculate and return the default set.\n      System.err.println(\"[WARN] Couldn't read \" + PERCENTILES_PROPERTY + \" value: '\" + percentileString +\n          \"', the default of '\" + PERCENTILES_PROPERTY_DEFAULT + \"' will be used.\");\n      e.printStackTrace();\n      return getPercentileValues(PERCENTILES_PROPERTY_DEFAULT);\n    }\n\n    return percentileValues;\n  }\n\n  /**\n   * Helper method to find the ordinal of any number, e.g. 1 -> 1st.\n   * @param i number\n   * @return ordinal string\n   */\n  private String ordinal(Double i) {\n    String[] suffixes = new String[]{\"th\", \"st\", \"nd\", \"rd\", \"th\", \"th\", \"th\", \"th\", \"th\", \"th\"};\n    Integer j = i.intValue();\n    if (i % 1 == 0) {\n      switch (j % 100) {\n      case 11:\n      case 12:\n      case 13:\n        return j + \"th\";\n      default:\n        return j + suffixes[j % 10];\n      }\n    } else {\n      return i.toString();\n    }\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/OneMeasurementHistogram.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.measurements;\n\nimport com.yahoo.ycsb.measurements.exporter.MeasurementsExporter;\n\nimport java.io.IOException;\nimport java.text.DecimalFormat;\nimport java.util.Properties;\n\n/**\n * Take measurements and maintain a histogram of a given metric, such as READ LATENCY.\n *\n */\npublic class OneMeasurementHistogram extends OneMeasurement {\n  public static final String BUCKETS = \"histogram.buckets\";\n  public static final String BUCKETS_DEFAULT = \"1000\";\n\n  /**\n   * Specify the range of latencies to track in the histogram.\n   */\n  private final int buckets;\n\n  /**\n   * Groups operations in discrete blocks of 1ms width.\n   */\n  private long[] histogram;\n\n  /**\n   * Counts all operations outside the histogram's range.\n   */\n  private long histogramoverflow;\n\n  /**\n   * The total number of reported operations.\n   */\n  private long operations;\n\n  /**\n   * The sum of each latency measurement over all operations.\n   * Calculated in ms.\n   */\n  private long totallatency;\n\n  /**\n   * The sum of each latency measurement squared over all operations. \n   * Used to calculate variance of latency.\n   * Calculated in ms. 
\n   */\n  private double totalsquaredlatency;\n\n  //keep a windowed version of these stats for printing status\n  private long windowoperations;\n  private long windowtotallatency;\n\n  private int min;\n  private int max;\n\n  public OneMeasurementHistogram(String name, Properties props) {\n    super(name);\n    buckets = Integer.parseInt(props.getProperty(BUCKETS, BUCKETS_DEFAULT));\n    histogram = new long[buckets];\n    histogramoverflow = 0;\n    operations = 0;\n    totallatency = 0;\n    totalsquaredlatency = 0;\n    windowoperations = 0;\n    windowtotallatency = 0;\n    min = -1;\n    max = -1;\n  }\n\n  /* (non-Javadoc)\n   * @see com.yahoo.ycsb.OneMeasurement#measure(int)\n   */\n  public synchronized void measure(int latency) {\n    //latency reported in us and collected in bucket by ms.\n    if (latency / 1000 >= buckets) {\n      histogramoverflow++;\n    } else {\n      histogram[latency / 1000]++;\n    }\n    operations++;\n    totallatency += latency;\n    totalsquaredlatency += ((double) latency) * ((double) latency);\n    windowoperations++;\n    windowtotallatency += latency;\n\n    if ((min < 0) || (latency < min)) {\n      min = latency;\n    }\n\n    if ((max < 0) || (latency > max)) {\n      max = latency;\n    }\n  }\n\n  @Override\n  public void exportMeasurements(MeasurementsExporter exporter) throws IOException {\n    double mean = totallatency / ((double) operations);\n    double variance = totalsquaredlatency / ((double) operations) - (mean * mean);\n    exporter.write(getName(), \"Operations\", operations);\n    exporter.write(getName(), \"AverageLatency(us)\", mean);\n    exporter.write(getName(), \"LatencyVariance(us)\", variance);\n    exporter.write(getName(), \"MinLatency(us)\", min);\n    exporter.write(getName(), \"MaxLatency(us)\", max);\n\n    long opcounter=0;\n    boolean done95th = false;\n    for (int i = 0; i < buckets; i++) {\n      opcounter += histogram[i];\n      if ((!done95th) && (((double) opcounter) / 
((double) operations) >= 0.95)) {\n        exporter.write(getName(), \"95thPercentileLatency(us)\", i * 1000);\n        done95th = true;\n      }\n      if (((double) opcounter) / ((double) operations) >= 0.99) {\n        exporter.write(getName(), \"99thPercentileLatency(us)\", i * 1000);\n        break;\n      }\n    }\n\n    exportStatusCounts(exporter);\n\n    for (int i = 0; i < buckets; i++) {\n      exporter.write(getName(), Integer.toString(i), histogram[i]);\n    }\n    exporter.write(getName(), \">\" + buckets, histogramoverflow);\n  }\n\n  @Override\n  public String getSummary() {\n    if (windowoperations == 0) {\n      return \"\";\n    }\n    DecimalFormat d = new DecimalFormat(\"#.##\");\n    double report = ((double) windowtotallatency) / ((double) windowoperations);\n    windowtotallatency = 0;\n    windowoperations = 0;\n    return \"[\" + getName() + \" AverageLatency(us)=\" + d.format(report) + \"]\";\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/OneMeasurementRaw.java",
    "content": "/**\n * Copyright (c) 2015-2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.measurements;\n\nimport com.yahoo.ycsb.measurements.exporter.MeasurementsExporter;\n\nimport java.io.FileNotFoundException;\nimport java.io.FileOutputStream;\nimport java.io.IOException;\nimport java.io.PrintStream;\nimport java.util.Collections;\nimport java.util.Comparator;\nimport java.util.LinkedList;\nimport java.util.Properties;\n\n/**\n * Record a series of measurements as raw data points without down sampling,\n * optionally write to an output file when configured.\n *\n */\npublic class OneMeasurementRaw extends OneMeasurement {\n  /**\n   * One raw data point, two fields: timestamp (ms) when the datapoint is\n   * inserted, and the value.\n   */\n  class RawDataPoint {\n    private final long timestamp;\n    private final int value;\n\n    public RawDataPoint(int value) {\n      this.timestamp = System.currentTimeMillis();\n      this.value = value;\n    }\n\n    public long timeStamp() {\n      return timestamp;\n    }\n\n    public int value() {\n      return value;\n    }\n  }\n\n  class RawDataPointComparator implements Comparator<RawDataPoint> {\n    @Override\n    public int compare(RawDataPoint p1, RawDataPoint p2) {\n      if (p1.value() < p2.value()) {\n        return -1;\n      } else if (p1.value() == p2.value()) {\n        
return 0;\n      } else {\n        return 1;\n      }\n    }\n  }\n\n  /**\n   * Optionally, the user can configure an output file to save the raw data points.\n   * Default is none; raw results will be written to stdout.\n   *\n   */\n  public static final String OUTPUT_FILE_PATH = \"measurement.raw.output_file\";\n  public static final String OUTPUT_FILE_PATH_DEFAULT = \"\";\n\n  /**\n   * Optionally, the user can request to not output summary stats. This is useful\n   * if the user chains the raw measurement type behind the HdrHistogram type\n   * which already outputs summary stats. But even in that case, the user may\n   * still want this class to compute summary stats for them, especially if\n   * they want accurate computation of percentiles (because percentiles computed\n   * by histogram classes are still approximations).\n   */\n  public static final String NO_SUMMARY_STATS = \"measurement.raw.no_summary\";\n  public static final String NO_SUMMARY_STATS_DEFAULT = \"false\";\n\n  private final PrintStream outputStream;\n\n  private boolean noSummaryStats = false;\n\n  private LinkedList<RawDataPoint> measurements;\n  private long totalLatency = 0;\n\n  // A window of stats to print summary for at the next getSummary() call.\n  // It's supposed to be a one line summary, so we will just print count and\n  // average.\n  private int windowOperations = 0;\n  private long windowTotalLatency = 0;\n\n  public OneMeasurementRaw(String name, Properties props) {\n    super(name);\n\n    String outputFilePath = props.getProperty(OUTPUT_FILE_PATH, OUTPUT_FILE_PATH_DEFAULT);\n    if (!outputFilePath.isEmpty()) {\n      System.out.println(\"Raw data measurement: will output to result file: \" +\n          outputFilePath);\n\n      try {\n        outputStream = new PrintStream(\n            new FileOutputStream(outputFilePath, true),\n            true);\n      } catch (FileNotFoundException e) {\n        throw new RuntimeException(\"Failed to open raw data output file\", e);\n    
  }\n\n    } else {\n      System.out.println(\"Raw data measurement: will output to stdout.\");\n      outputStream = System.out;\n\n    }\n\n    noSummaryStats = Boolean.parseBoolean(props.getProperty(NO_SUMMARY_STATS,\n        NO_SUMMARY_STATS_DEFAULT));\n\n    measurements = new LinkedList<>();\n  }\n\n  @Override\n  public synchronized void measure(int latency) {\n    totalLatency += latency;\n    windowTotalLatency += latency;\n    windowOperations++;\n\n    measurements.add(new RawDataPoint(latency));\n  }\n\n  @Override\n  public void exportMeasurements(MeasurementsExporter exporter)\n      throws IOException {\n    // Output raw data points first then print out a summary of percentiles to\n    // stdout.\n\n    outputStream.println(getName() +\n        \" latency raw data: op, timestamp(ms), latency(us)\");\n    for (RawDataPoint point : measurements) {\n      outputStream.println(\n          String.format(\"%s,%d,%d\", getName(), point.timeStamp(),\n              point.value()));\n    }\n    if (outputStream != System.out) {\n      outputStream.close();\n    }\n\n    int totalOps = measurements.size();\n    exporter.write(getName(), \"Total Operations\", totalOps);\n    if (totalOps > 0 && !noSummaryStats) {\n      exporter.write(getName(),\n          \"Below is a summary of latency in microseconds:\", -1);\n      exporter.write(getName(), \"Average\",\n          (double) totalLatency / (double) totalOps);\n\n      Collections.sort(measurements, new RawDataPointComparator());\n\n      exporter.write(getName(), \"Min\", measurements.get(0).value());\n      exporter.write(\n          getName(), \"Max\", measurements.get(totalOps - 1).value());\n      exporter.write(\n          getName(), \"p1\", measurements.get((int) (totalOps * 0.01)).value());\n      exporter.write(\n          getName(), \"p5\", measurements.get((int) (totalOps * 0.05)).value());\n      exporter.write(\n          getName(), \"p50\", measurements.get((int) (totalOps * 0.5)).value());\n    
  exporter.write(\n          getName(), \"p90\", measurements.get((int) (totalOps * 0.9)).value());\n      exporter.write(\n          getName(), \"p95\", measurements.get((int) (totalOps * 0.95)).value());\n      exporter.write(\n          getName(), \"p99\", measurements.get((int) (totalOps * 0.99)).value());\n      exporter.write(getName(), \"p99.9\",\n          measurements.get((int) (totalOps * 0.999)).value());\n      exporter.write(getName(), \"p99.99\",\n          measurements.get((int) (totalOps * 0.9999)).value());\n    }\n\n    exportStatusCounts(exporter);\n  }\n\n  @Override\n  public synchronized String getSummary() {\n    if (windowOperations == 0) {\n      return \"\";\n    }\n\n    String toReturn = String.format(\"%s count: %d, average latency(us): %.2f\",\n        getName(), windowOperations,\n        (double) windowTotalLatency / (double) windowOperations);\n\n    windowTotalLatency = 0;\n    windowOperations = 0;\n\n    return toReturn;\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/OneMeasurementTimeSeries.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.measurements;\n\nimport com.yahoo.ycsb.measurements.exporter.MeasurementsExporter;\n\nimport java.io.IOException;\nimport java.text.DecimalFormat;\nimport java.util.Properties;\nimport java.util.Vector;\n\nclass SeriesUnit {\n  /**\n   * @param time\n   * @param average\n   */\n  public SeriesUnit(long time, double average) {\n    this.time = time;\n    this.average = average;\n  }\n\n  protected final long time;\n  protected final double average;\n}\n\n/**\n * A time series measurement of a metric, such as READ LATENCY.\n */\npublic class OneMeasurementTimeSeries extends OneMeasurement {\n\n  /**\n   * Granularity for time series; measurements will be averaged in chunks of this granularity. 
Units are milliseconds.\n   */\n  public static final String GRANULARITY = \"timeseries.granularity\";\n  public static final String GRANULARITY_DEFAULT = \"1000\";\n\n  private final int granularity;\n  private final Vector<SeriesUnit> measurements;\n\n  private long start = -1;\n  private long currentunit = -1;\n  private long count = 0;\n  private long sum = 0;\n  private long operations = 0;\n  private long totallatency = 0;\n\n  //keep a windowed version of these stats for printing status\n  private int windowoperations = 0;\n  private long windowtotallatency = 0;\n\n  private int min = -1;\n  private int max = -1;\n\n  public OneMeasurementTimeSeries(String name, Properties props) {\n    super(name);\n    granularity = Integer.parseInt(props.getProperty(GRANULARITY, GRANULARITY_DEFAULT));\n    measurements = new Vector<>();\n  }\n\n  private synchronized void checkEndOfUnit(boolean forceend) {\n    long now = System.currentTimeMillis();\n\n    if (start < 0) {\n      currentunit = 0;\n      start = now;\n    }\n\n    long unit = ((now - start) / granularity) * granularity;\n\n    if ((unit > currentunit) || (forceend)) {\n      double avg = ((double) sum) / ((double) count);\n      measurements.add(new SeriesUnit(currentunit, avg));\n\n      currentunit = unit;\n\n      count = 0;\n      sum = 0;\n    }\n  }\n\n  @Override\n  public void measure(int latency) {\n    checkEndOfUnit(false);\n\n    count++;\n    sum += latency;\n    totallatency += latency;\n    operations++;\n    windowoperations++;\n    windowtotallatency += latency;\n\n    if (latency > max) {\n      max = latency;\n    }\n\n    if ((latency < min) || (min < 0)) {\n      min = latency;\n    }\n  }\n\n\n  @Override\n  public void exportMeasurements(MeasurementsExporter exporter) throws IOException {\n    checkEndOfUnit(true);\n\n    exporter.write(getName(), \"Operations\", operations);\n    exporter.write(getName(), \"AverageLatency(us)\", (((double) totallatency) / ((double) operations)));\n  
  exporter.write(getName(), \"MinLatency(us)\", min);\n    exporter.write(getName(), \"MaxLatency(us)\", max);\n\n    // TODO: 95th and 99th percentile latency\n\n    exportStatusCounts(exporter);\n    for (SeriesUnit unit : measurements) {\n      exporter.write(getName(), Long.toString(unit.time), unit.average);\n    }\n  }\n\n  @Override\n  public String getSummary() {\n    if (windowoperations == 0) {\n      return \"\";\n    }\n    DecimalFormat d = new DecimalFormat(\"#.##\");\n    double report = ((double) windowtotallatency) / ((double) windowoperations);\n    windowtotallatency = 0;\n    windowoperations = 0;\n    return \"[\" + getName() + \" AverageLatency(us)=\" + d.format(report) + \"]\";\n  }\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/TwoInOneMeasurement.java",
    "content": "/**\r\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\r\n * <p>\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n * <p>\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n * <p>\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.measurements;\r\n\r\nimport com.yahoo.ycsb.Status;\r\nimport com.yahoo.ycsb.measurements.exporter.MeasurementsExporter;\r\n\r\nimport java.io.IOException;\r\n\r\n/**\r\n * delegates to 2 measurement instances.\r\n */\r\npublic class TwoInOneMeasurement extends OneMeasurement {\r\n\r\n  private final OneMeasurement thing1, thing2;\r\n\r\n  public TwoInOneMeasurement(String name, OneMeasurement thing1, OneMeasurement thing2) {\r\n    super(name);\r\n    this.thing1 = thing1;\r\n    this.thing2 = thing2;\r\n  }\r\n\r\n  /**\r\n   * No need for synchronization, using CHM to deal with that.\r\n   */\r\n  @Override\r\n  public void reportStatus(final Status status) {\r\n    thing1.reportStatus(status);\r\n  }\r\n\r\n  /**\r\n   * It appears latency is reported in micros.\r\n   * Using {@link org.HdrHistogram.Recorder} to support concurrent updates to histogram.\r\n   */\r\n  @Override\r\n  public void measure(int latencyInMicros) {\r\n    thing1.measure(latencyInMicros);\r\n    thing2.measure(latencyInMicros);\r\n  }\r\n\r\n  /**\r\n   * This is called from a main thread, on orderly termination.\r\n   */\r\n  @Override\r\n  public void exportMeasurements(MeasurementsExporter exporter) throws 
IOException {\r\n    thing1.exportMeasurements(exporter);\r\n    thing2.exportMeasurements(exporter);\r\n  }\r\n\r\n  /**\r\n   * This is called periodically from the StatusThread. There's a single StatusThread per Client process.\r\n   * We optionally serialize the interval to log on this opportunity.\r\n   *\r\n   * @see com.yahoo.ycsb.measurements.OneMeasurement#getSummary()\r\n   */\r\n  @Override\r\n  public String getSummary() {\r\n    return thing1.getSummary() + \"\\n\" + thing2.getSummary();\r\n  }\r\n\r\n}\r\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/exporter/JSONArrayMeasurementsExporter.java",
    "content": "/**\n * Copyright (c) 2015-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.measurements.exporter;\n\nimport org.codehaus.jackson.JsonFactory;\nimport org.codehaus.jackson.JsonGenerator;\nimport org.codehaus.jackson.util.DefaultPrettyPrinter;\n\nimport java.io.BufferedWriter;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.io.OutputStreamWriter;\n\n/**\n * Export measurements into a machine readable JSON Array of measurement objects.\n */\npublic class JSONArrayMeasurementsExporter implements MeasurementsExporter {\n  private final JsonFactory factory = new JsonFactory();\n  private JsonGenerator g;\n\n  public JSONArrayMeasurementsExporter(OutputStream os) throws IOException {\n    BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(os));\n    g = factory.createJsonGenerator(bw);\n    g.setPrettyPrinter(new DefaultPrettyPrinter());\n    g.writeStartArray();\n  }\n\n  public void write(String metric, String measurement, int i) throws IOException {\n    g.writeStartObject();\n    g.writeStringField(\"metric\", metric);\n    g.writeStringField(\"measurement\", measurement);\n    g.writeNumberField(\"value\", i);\n    g.writeEndObject();\n  }\n\n  public void write(String metric, String measurement, long i) throws IOException {\n    g.writeStartObject();\n    
g.writeStringField(\"metric\", metric);\n    g.writeStringField(\"measurement\", measurement);\n    g.writeNumberField(\"value\", i);\n    g.writeEndObject();\n  }\n\n  public void write(String metric, String measurement, double d) throws IOException {\n    g.writeStartObject();\n    g.writeStringField(\"metric\", metric);\n    g.writeStringField(\"measurement\", measurement);\n    g.writeNumberField(\"value\", d);\n    g.writeEndObject();\n  }\n\n  public void close() throws IOException {\n    if (g != null) {\n      g.writeEndArray();\n      g.close();\n    }\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/exporter/JSONMeasurementsExporter.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.measurements.exporter;\n\nimport org.codehaus.jackson.JsonFactory;\nimport org.codehaus.jackson.JsonGenerator;\nimport org.codehaus.jackson.util.DefaultPrettyPrinter;\n\nimport java.io.BufferedWriter;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.io.OutputStreamWriter;\n\n/**\n * Export measurements into a machine readable JSON file.\n */\npublic class JSONMeasurementsExporter implements MeasurementsExporter {\n\n  private final JsonFactory factory = new JsonFactory();\n  private JsonGenerator g;\n\n  public JSONMeasurementsExporter(OutputStream os) throws IOException {\n\n    BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(os));\n    g = factory.createJsonGenerator(bw);\n    g.setPrettyPrinter(new DefaultPrettyPrinter());\n  }\n\n  public void write(String metric, String measurement, int i) throws IOException {\n    g.writeStartObject();\n    g.writeStringField(\"metric\", metric);\n    g.writeStringField(\"measurement\", measurement);\n    g.writeNumberField(\"value\", i);\n    g.writeEndObject();\n  }\n\n  public void write(String metric, String measurement, long i) throws IOException {\n    g.writeStartObject();\n    g.writeStringField(\"metric\", metric);\n    
g.writeStringField(\"measurement\", measurement);\n    g.writeNumberField(\"value\", i);\n    g.writeEndObject();\n  }\n\n  public void write(String metric, String measurement, double d) throws IOException {\n    g.writeStartObject();\n    g.writeStringField(\"metric\", metric);\n    g.writeStringField(\"measurement\", measurement);\n    g.writeNumberField(\"value\", d);\n    g.writeEndObject();\n  }\n\n  public void close() throws IOException {\n    if (g != null) {\n      g.close();\n    }\n  }\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/exporter/MeasurementsExporter.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.measurements.exporter;\n\nimport java.io.Closeable;\nimport java.io.IOException;\n\n/**\n * Used to export the collected measurements into a useful format, for example\n * human readable text or machine readable JSON.\n */\npublic interface MeasurementsExporter extends Closeable {\n  /**\n   * Write a measurement to the exported format.\n   *\n   * @param metric Metric name, for example \"READ LATENCY\".\n   * @param measurement Measurement name, for example \"Average latency\".\n   * @param i Measurement to write.\n   * @throws IOException if writing failed\n   */\n  void write(String metric, String measurement, int i) throws IOException;\n\n  /**\n   * Write a measurement to the exported format.\n   *\n   * @param metric Metric name, for example \"READ LATENCY\".\n   * @param measurement Measurement name, for example \"Average latency\".\n   * @param i Measurement to write.\n   * @throws IOException if writing failed\n   */\n  void write(String metric, String measurement, long i) throws IOException;\n\n  /**\n   * Write a measurement to the exported format.\n   * \n   * @param metric Metric name, for example \"READ LATENCY\".\n   * @param measurement Measurement name, for example \"Average latency\".\n   * @param d Measurement 
to write.\n   * @throws IOException if writing failed\n   */\n  void write(String metric, String measurement, double d) throws IOException;\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/exporter/TextMeasurementsExporter.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.measurements.exporter;\n\nimport java.io.BufferedWriter;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.io.OutputStreamWriter;\n\n/**\n * Write human readable text. Tries to emulate the previous print report method.\n */\npublic class TextMeasurementsExporter implements MeasurementsExporter {\n  private final BufferedWriter bw;\n\n  public TextMeasurementsExporter(OutputStream os) {\n    this.bw = new BufferedWriter(new OutputStreamWriter(os));\n  }\n\n  public void write(String metric, String measurement, int i) throws IOException {\n    bw.write(\"[\" + metric + \"], \" + measurement + \", \" + i);\n    bw.newLine();\n  }\n\n  public void write(String metric, String measurement, long i) throws IOException {\n    bw.write(\"[\" + metric + \"], \" + measurement + \", \" + i);\n    bw.newLine();\n  }\n\n  public void write(String metric, String measurement, double d) throws IOException {\n    bw.write(\"[\" + metric + \"], \" + measurement + \", \" + d);\n    bw.newLine();\n  }\n\n  public void close() throws IOException {\n    this.bw.close();\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/exporter/package-info.java",
    "content": "/*\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB measurements.exporter package.\n */\npackage com.yahoo.ycsb.measurements.exporter;\n\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/measurements/package-info.java",
    "content": "/*\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB measurements package.\n */\npackage com.yahoo.ycsb.measurements;\n\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/package-info.java",
    "content": "/*\n * Copyright (c) 2015 - 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB core package.\n */\npackage com.yahoo.ycsb;\n\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/workloads/ConstantOccupancyWorkload.java",
    "content": "/**\n * Copyright (c) 2010 Yahoo! Inc. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.workloads;\n\nimport com.yahoo.ycsb.Client;\nimport com.yahoo.ycsb.WorkloadException;\nimport com.yahoo.ycsb.generator.NumberGenerator;\n\nimport java.util.Properties;\n\n/**\n * A disk-fragmenting workload.\n * <p>\n * Properties to control the client:\n * </p>\n * <UL>\n * <LI><b>disksize</b>: how many bytes of storage can the disk store? (default 100,000,000)\n * <LI><b>occupancy</b>: what fraction of the available storage should be used? (default 0.9)\n * <LI><b>requestdistribution</b>: what distribution should be used to select the records to operate on - uniform,\n * zipfian or latest (default: histogram)\n * </ul>\n * <p>\n * <p>\n * <p> See also:\n * Russell Sears, Catharine van Ingen.\n * <a href='https://database.cs.wisc.edu/cidr/cidr2007/papers/cidr07p34.pdf'>Fragmentation in Large Object\n * Repositories</a>,\n * CIDR 2006. 
[<a href='https://database.cs.wisc.edu/cidr/cidr2007/slides/p34-sears.ppt'>Presentation</a>]\n * </p>\n */\npublic class ConstantOccupancyWorkload extends CoreWorkload {\n  private long disksize;\n  private long storageages;\n  private double occupancy;\n\n  private long objectCount;\n\n  public static final String STORAGE_AGE_PROPERTY = \"storageages\";\n  public static final long STORAGE_AGE_PROPERTY_DEFAULT = 10;\n\n  public static final String DISK_SIZE_PROPERTY = \"disksize\";\n  public static final long DISK_SIZE_PROPERTY_DEFAULT = 100 * 1000 * 1000;\n\n  public static final String OCCUPANCY_PROPERTY = \"occupancy\";\n  public static final double OCCUPANCY_PROPERTY_DEFAULT = 0.9;\n\n  @Override\n  public void init(Properties p) throws WorkloadException {\n    disksize = Long.parseLong(p.getProperty(DISK_SIZE_PROPERTY, String.valueOf(DISK_SIZE_PROPERTY_DEFAULT)));\n    storageages = Long.parseLong(p.getProperty(STORAGE_AGE_PROPERTY, String.valueOf(STORAGE_AGE_PROPERTY_DEFAULT)));\n    occupancy = Double.parseDouble(p.getProperty(OCCUPANCY_PROPERTY, String.valueOf(OCCUPANCY_PROPERTY_DEFAULT)));\n\n    if (p.getProperty(Client.RECORD_COUNT_PROPERTY) != null ||\n        p.getProperty(Client.INSERT_COUNT_PROPERTY) != null ||\n        p.getProperty(Client.OPERATION_COUNT_PROPERTY) != null) {\n      System.err.println(\"Warning: record, insert or operation count was set prior to initting \" +\n          \"ConstantOccupancyWorkload.  Overriding old values.\");\n    }\n    NumberGenerator g = CoreWorkload.getFieldLengthGenerator(p);\n    double fieldsize = g.mean();\n    int fieldcount = Integer.parseInt(p.getProperty(FIELD_COUNT_PROPERTY, FIELD_COUNT_PROPERTY_DEFAULT));\n\n    objectCount = (long) (occupancy * (disksize / (fieldsize * fieldcount)));\n    if (objectCount == 0) {\n      throw new IllegalStateException(\"Object count was zero.  
Perhaps disksize is too low?\");\n    }\n    p.setProperty(Client.RECORD_COUNT_PROPERTY, String.valueOf(objectCount));\n    p.setProperty(Client.OPERATION_COUNT_PROPERTY, String.valueOf(storageages * objectCount));\n    p.setProperty(Client.INSERT_COUNT_PROPERTY, String.valueOf(objectCount));\n\n    super.init(p);\n  }\n\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/workloads/CoreWorkload.java",
    "content": "/**\n * Copyright (c) 2010 Yahoo! Inc., Copyright (c) 2016-2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.workloads;\n\nimport com.yahoo.ycsb.*;\nimport com.yahoo.ycsb.generator.*;\nimport com.yahoo.ycsb.generator.UniformLongGenerator;\nimport com.yahoo.ycsb.measurements.Measurements;\n\nimport java.io.IOException;\nimport java.util.*;\n\n/**\n * The core benchmark scenario. Represents a set of clients doing simple CRUD operations. 
The\n * relative proportion of different kinds of operations, and other properties of the workload,\n * are controlled by parameters specified at runtime.\n * <p>\n * Properties to control the client:\n * <UL>\n * <LI><b>fieldcount</b>: the number of fields in a record (default: 10)\n * <LI><b>fieldlength</b>: the size of each field (default: 100)\n * <LI><b>readallfields</b>: should reads read all fields (true) or just one (false) (default: true)\n * <LI><b>writeallfields</b>: should updates and read/modify/writes update all fields (true) or just\n * one (false) (default: false)\n * <LI><b>readproportion</b>: what proportion of operations should be reads (default: 0.95)\n * <LI><b>updateproportion</b>: what proportion of operations should be updates (default: 0.05)\n * <LI><b>insertproportion</b>: what proportion of operations should be inserts (default: 0)\n * <LI><b>scanproportion</b>: what proportion of operations should be scans (default: 0)\n * <LI><b>readmodifywriteproportion</b>: what proportion of operations should be read a record,\n * modify it, write it back (default: 0)\n * <LI><b>requestdistribution</b>: what distribution should be used to select the records to operate\n * on - uniform, zipfian, hotspot, sequential, exponential or latest (default: uniform)\n * <LI><b>maxscanlength</b>: for scans, what is the maximum number of records to scan (default: 1000)\n * <LI><b>scanlengthdistribution</b>: for scans, what distribution should be used to choose the\n * number of records to scan, for each scan, between 1 and maxscanlength (default: uniform)\n * <LI><b>insertstart</b>: for parallel loads and runs, defines the starting record for this\n * YCSB instance (default: 0)\n * <LI><b>insertcount</b>: for parallel loads and runs, defines the number of records for this\n * YCSB instance (default: recordcount)\n * <LI><b>zeropadding</b>: for generating a record sequence compatible with string sort order by\n * 0 padding the record number. 
Controls the number of 0s to use for padding. (default: 1)\n * For example for row 5, with zeropadding=1 you get 'user5' key and with zeropadding=8 you get\n * 'user00000005' key. In order to see its impact, zeropadding needs to be bigger than the number of\n * digits in the record number.\n * <LI><b>insertorder</b>: should records be inserted in order by key (\"ordered\"), or in hashed\n * order (\"hashed\") (default: hashed)\n * </ul>\n */\npublic class CoreWorkload extends Workload {\n  /**\n   * The name of the database table to run queries against.\n   */\n  public static final String TABLENAME_PROPERTY = \"table\";\n\n  /**\n   * The default name of the database table to run queries against.\n   */\n  public static final String TABLENAME_PROPERTY_DEFAULT = \"usertable\";\n\n  protected String table;\n\n  /**\n   * The name of the property for the number of fields in a record.\n   */\n  public static final String FIELD_COUNT_PROPERTY = \"fieldcount\";\n\n  /**\n   * Default number of fields in a record.\n   */\n  public static final String FIELD_COUNT_PROPERTY_DEFAULT = \"10\";\n  \n  private List<String> fieldnames;\n\n  /**\n   * The name of the property for the field length distribution. Options are \"uniform\", \"zipfian\"\n   * (favouring short records), \"constant\", and \"histogram\".\n   * <p>\n   * If \"uniform\", \"zipfian\" or \"constant\", the maximum field length will be that specified by the\n   * fieldlength property. 
If \"histogram\", then the histogram will be read from the filename\n   * specified in the \"fieldlengthhistogram\" property.\n   */\n  public static final String FIELD_LENGTH_DISTRIBUTION_PROPERTY = \"fieldlengthdistribution\";\n\n  /**\n   * The default field length distribution.\n   */\n  public static final String FIELD_LENGTH_DISTRIBUTION_PROPERTY_DEFAULT = \"constant\";\n\n  /**\n   * The name of the property for the length of a field in bytes.\n   */\n  public static final String FIELD_LENGTH_PROPERTY = \"fieldlength\";\n\n  /**\n   * The default maximum length of a field in bytes.\n   */\n  public static final String FIELD_LENGTH_PROPERTY_DEFAULT = \"100\";\n\n  /**\n   * The name of a property that specifies the filename containing the field length histogram (only\n   * used if fieldlengthdistribution is \"histogram\").\n   */\n  public static final String FIELD_LENGTH_HISTOGRAM_FILE_PROPERTY = \"fieldlengthhistogram\";\n\n  /**\n   * The default filename containing a field length histogram.\n   */\n  public static final String FIELD_LENGTH_HISTOGRAM_FILE_PROPERTY_DEFAULT = \"hist.txt\";\n\n  /**\n   * Generator object that produces field lengths.  
The value of this depends on the properties that\n   * start with \"FIELD_LENGTH_\".\n   */\n  protected NumberGenerator fieldlengthgenerator;\n\n  /**\n   * The name of the property for deciding whether to read one field (false) or all fields (true) of\n   * a record.\n   */\n  public static final String READ_ALL_FIELDS_PROPERTY = \"readallfields\";\n\n  /**\n   * The default value for the readallfields property.\n   */\n  public static final String READ_ALL_FIELDS_PROPERTY_DEFAULT = \"true\";\n\n  protected boolean readallfields;\n\n  /**\n   * The name of the property for deciding whether to write one field (false) or all fields (true)\n   * of a record.\n   */\n  public static final String WRITE_ALL_FIELDS_PROPERTY = \"writeallfields\";\n\n  /**\n   * The default value for the writeallfields property.\n   */\n  public static final String WRITE_ALL_FIELDS_PROPERTY_DEFAULT = \"false\";\n\n  protected boolean writeallfields;\n\n  /**\n   * The name of the property for deciding whether to check all returned\n   * data against the formation template to ensure data integrity.\n   */\n  public static final String DATA_INTEGRITY_PROPERTY = \"dataintegrity\";\n\n  /**\n   * The default value for the dataintegrity property.\n   */\n  public static final String DATA_INTEGRITY_PROPERTY_DEFAULT = \"false\";\n\n  /**\n   * Set to true if you want to check correctness of reads. 
Must also\n   * be set to true during loading phase to function.\n   */\n  private boolean dataintegrity;\n\n  /**\n   * The name of the property for the proportion of transactions that are reads.\n   */\n  public static final String READ_PROPORTION_PROPERTY = \"readproportion\";\n\n  /**\n   * The default proportion of transactions that are reads.\n   */\n  public static final String READ_PROPORTION_PROPERTY_DEFAULT = \"0.95\";\n\n  /**\n   * The name of the property for the proportion of transactions that are updates.\n   */\n  public static final String UPDATE_PROPORTION_PROPERTY = \"updateproportion\";\n\n  /**\n   * The default proportion of transactions that are updates.\n   */\n  public static final String UPDATE_PROPORTION_PROPERTY_DEFAULT = \"0.05\";\n\n  /**\n   * The name of the property for the proportion of transactions that are inserts.\n   */\n  public static final String INSERT_PROPORTION_PROPERTY = \"insertproportion\";\n\n  /**\n   * The default proportion of transactions that are inserts.\n   */\n  public static final String INSERT_PROPORTION_PROPERTY_DEFAULT = \"0.0\";\n\n  /**\n   * The name of the property for the proportion of transactions that are scans.\n   */\n  public static final String SCAN_PROPORTION_PROPERTY = \"scanproportion\";\n\n  /**\n   * The default proportion of transactions that are scans.\n   */\n  public static final String SCAN_PROPORTION_PROPERTY_DEFAULT = \"0.0\";\n\n  /**\n   * The name of the property for the proportion of transactions that are read-modify-write.\n   */\n  public static final String READMODIFYWRITE_PROPORTION_PROPERTY = \"readmodifywriteproportion\";\n\n  /**\n   * The default proportion of transactions that are read-modify-write.\n   */\n  public static final String READMODIFYWRITE_PROPORTION_PROPERTY_DEFAULT = \"0.0\";\n\n  /**\n   * The name of the property for the distribution of requests across the keyspace. 
Options are\n   * \"uniform\", \"zipfian\" and \"latest\"\n   */\n  public static final String REQUEST_DISTRIBUTION_PROPERTY = \"requestdistribution\";\n\n  /**\n   * The default distribution of requests across the keyspace.\n   */\n  public static final String REQUEST_DISTRIBUTION_PROPERTY_DEFAULT = \"uniform\";\n\n  /**\n   * The name of the property for adding zero padding to record numbers in order to match\n   * string sort order. Controls the number of 0s to left pad with.\n   */\n  public static final String ZERO_PADDING_PROPERTY = \"zeropadding\";\n\n  /**\n   * The default zero padding value. Matches integer sort order\n   */\n  public static final String ZERO_PADDING_PROPERTY_DEFAULT = \"1\";\n\n\n  /**\n   * The name of the property for the max scan length (number of records).\n   */\n  public static final String MAX_SCAN_LENGTH_PROPERTY = \"maxscanlength\";\n\n  /**\n   * The default max scan length.\n   */\n  public static final String MAX_SCAN_LENGTH_PROPERTY_DEFAULT = \"1000\";\n\n  /**\n   * The name of the property for the scan length distribution. Options are \"uniform\" and \"zipfian\"\n   * (favoring short scans)\n   */\n  public static final String SCAN_LENGTH_DISTRIBUTION_PROPERTY = \"scanlengthdistribution\";\n\n  /**\n   * The default max scan length.\n   */\n  public static final String SCAN_LENGTH_DISTRIBUTION_PROPERTY_DEFAULT = \"uniform\";\n\n  /**\n   * The name of the property for the order to insert records. 
Options are \"ordered\" or \"hashed\"\n   */\n  public static final String INSERT_ORDER_PROPERTY = \"insertorder\";\n\n  /**\n   * Default insert order.\n   */\n  public static final String INSERT_ORDER_PROPERTY_DEFAULT = \"hashed\";\n\n  /**\n   * Percentage data items that constitute the hot set.\n   */\n  public static final String HOTSPOT_DATA_FRACTION = \"hotspotdatafraction\";\n\n  /**\n   * Default value of the size of the hot set.\n   */\n  public static final String HOTSPOT_DATA_FRACTION_DEFAULT = \"0.2\";\n\n  /**\n   * Percentage operations that access the hot set.\n   */\n  public static final String HOTSPOT_OPN_FRACTION = \"hotspotopnfraction\";\n\n  /**\n   * Default value of the percentage operations accessing the hot set.\n   */\n  public static final String HOTSPOT_OPN_FRACTION_DEFAULT = \"0.8\";\n\n  /**\n   * How many times to retry when insertion of a single item to a DB fails.\n   */\n  public static final String INSERTION_RETRY_LIMIT = \"core_workload_insertion_retry_limit\";\n  public static final String INSERTION_RETRY_LIMIT_DEFAULT = \"0\";\n\n  /**\n   * On average, how long to wait between the retries, in seconds.\n   */\n  public static final String INSERTION_RETRY_INTERVAL = \"core_workload_insertion_retry_interval\";\n  public static final String INSERTION_RETRY_INTERVAL_DEFAULT = \"3\";\n\n  protected NumberGenerator keysequence;\n  protected DiscreteGenerator operationchooser;\n  protected NumberGenerator keychooser;\n  protected NumberGenerator fieldchooser;\n  protected AcknowledgedCounterGenerator transactioninsertkeysequence;\n  protected NumberGenerator scanlength;\n  protected boolean orderedinserts;\n  protected long fieldcount;\n  protected long recordcount;\n  protected int zeropadding;\n  protected int insertionRetryLimit;\n  protected int insertionRetryInterval;\n\n  private Measurements measurements = Measurements.getMeasurements();\n\n  protected static NumberGenerator getFieldLengthGenerator(Properties p) throws 
WorkloadException {\n    NumberGenerator fieldlengthgenerator;\n    String fieldlengthdistribution = p.getProperty(\n        FIELD_LENGTH_DISTRIBUTION_PROPERTY, FIELD_LENGTH_DISTRIBUTION_PROPERTY_DEFAULT);\n    int fieldlength =\n        Integer.parseInt(p.getProperty(FIELD_LENGTH_PROPERTY, FIELD_LENGTH_PROPERTY_DEFAULT));\n    String fieldlengthhistogram = p.getProperty(\n        FIELD_LENGTH_HISTOGRAM_FILE_PROPERTY, FIELD_LENGTH_HISTOGRAM_FILE_PROPERTY_DEFAULT);\n    if (fieldlengthdistribution.compareTo(\"constant\") == 0) {\n      fieldlengthgenerator = new ConstantIntegerGenerator(fieldlength);\n    } else if (fieldlengthdistribution.compareTo(\"uniform\") == 0) {\n      fieldlengthgenerator = new UniformLongGenerator(1, fieldlength);\n    } else if (fieldlengthdistribution.compareTo(\"zipfian\") == 0) {\n      fieldlengthgenerator = new ZipfianGenerator(1, fieldlength);\n    } else if (fieldlengthdistribution.compareTo(\"histogram\") == 0) {\n      try {\n        fieldlengthgenerator = new HistogramGenerator(fieldlengthhistogram);\n      } catch (IOException e) {\n        throw new WorkloadException(\n            \"Couldn't read field length histogram file: \" + fieldlengthhistogram, e);\n      }\n    } else {\n      throw new WorkloadException(\n          \"Unknown field length distribution \\\"\" + fieldlengthdistribution + \"\\\"\");\n    }\n    return fieldlengthgenerator;\n  }\n\n  /**\n   * Initialize the scenario.\n   * Called once, in the main client thread, before any operations are started.\n   */\n  @Override\n  public void init(Properties p) throws WorkloadException {\n    table = p.getProperty(TABLENAME_PROPERTY, TABLENAME_PROPERTY_DEFAULT);\n\n    fieldcount =\n        Long.parseLong(p.getProperty(FIELD_COUNT_PROPERTY, FIELD_COUNT_PROPERTY_DEFAULT));\n    fieldnames = new ArrayList<>();\n    for (int i = 0; i < fieldcount; i++) {\n      fieldnames.add(\"field\" + i);\n    }\n    fieldlengthgenerator = CoreWorkload.getFieldLengthGenerator(p);\n\n 
   recordcount =\n        Long.parseLong(p.getProperty(Client.RECORD_COUNT_PROPERTY, Client.DEFAULT_RECORD_COUNT));\n    if (recordcount == 0) {\n      recordcount = Integer.MAX_VALUE;\n    }\n    String requestdistrib =\n        p.getProperty(REQUEST_DISTRIBUTION_PROPERTY, REQUEST_DISTRIBUTION_PROPERTY_DEFAULT);\n    int maxscanlength =\n        Integer.parseInt(p.getProperty(MAX_SCAN_LENGTH_PROPERTY, MAX_SCAN_LENGTH_PROPERTY_DEFAULT));\n    String scanlengthdistrib =\n        p.getProperty(SCAN_LENGTH_DISTRIBUTION_PROPERTY, SCAN_LENGTH_DISTRIBUTION_PROPERTY_DEFAULT);\n\n    long insertstart =\n        Long.parseLong(p.getProperty(INSERT_START_PROPERTY, INSERT_START_PROPERTY_DEFAULT));\n    long insertcount=\n        Integer.parseInt(p.getProperty(INSERT_COUNT_PROPERTY, String.valueOf(recordcount - insertstart)));\n    // Confirm valid values for insertstart and insertcount in relation to recordcount\n    if (recordcount < (insertstart + insertcount)) {\n      System.err.println(\"Invalid combination of insertstart, insertcount and recordcount.\");\n      System.err.println(\"recordcount must be bigger than insertstart + insertcount.\");\n      System.exit(-1);\n    }\n    zeropadding =\n        Integer.parseInt(p.getProperty(ZERO_PADDING_PROPERTY, ZERO_PADDING_PROPERTY_DEFAULT));\n\n    readallfields = Boolean.parseBoolean(\n        p.getProperty(READ_ALL_FIELDS_PROPERTY, READ_ALL_FIELDS_PROPERTY_DEFAULT));\n    writeallfields = Boolean.parseBoolean(\n        p.getProperty(WRITE_ALL_FIELDS_PROPERTY, WRITE_ALL_FIELDS_PROPERTY_DEFAULT));\n\n    dataintegrity = Boolean.parseBoolean(\n        p.getProperty(DATA_INTEGRITY_PROPERTY, DATA_INTEGRITY_PROPERTY_DEFAULT));\n    // Confirm that fieldlengthgenerator returns a constant if data\n    // integrity check requested.\n    if (dataintegrity && !(p.getProperty(\n        FIELD_LENGTH_DISTRIBUTION_PROPERTY,\n        FIELD_LENGTH_DISTRIBUTION_PROPERTY_DEFAULT)).equals(\"constant\")) {\n      System.err.println(\"Must have 
constant field size to check data integrity.\");\n      System.exit(-1);\n    }\n\n    if (p.getProperty(INSERT_ORDER_PROPERTY, INSERT_ORDER_PROPERTY_DEFAULT).compareTo(\"hashed\") == 0) {\n      orderedinserts = false;\n    } else if (requestdistrib.compareTo(\"exponential\") == 0) {\n      double percentile = Double.parseDouble(p.getProperty(\n          ExponentialGenerator.EXPONENTIAL_PERCENTILE_PROPERTY,\n          ExponentialGenerator.EXPONENTIAL_PERCENTILE_DEFAULT));\n      double frac = Double.parseDouble(p.getProperty(\n          ExponentialGenerator.EXPONENTIAL_FRAC_PROPERTY,\n          ExponentialGenerator.EXPONENTIAL_FRAC_DEFAULT));\n      keychooser = new ExponentialGenerator(percentile, recordcount * frac);\n    } else {\n      orderedinserts = true;\n    }\n\n    keysequence = new CounterGenerator(insertstart);\n    operationchooser = createOperationGenerator(p);\n\n    transactioninsertkeysequence = new AcknowledgedCounterGenerator(recordcount);\n    if (requestdistrib.compareTo(\"uniform\") == 0) {\n      keychooser = new UniformLongGenerator(insertstart, insertstart + insertcount - 1);\n    } else if (requestdistrib.compareTo(\"sequential\") == 0) {\n      keychooser = new SequentialGenerator(insertstart, insertstart + insertcount - 1);\n    } else if (requestdistrib.compareTo(\"zipfian\") == 0) {\n      // it does this by generating a random \"next key\" in part by taking the modulus over the\n      // number of keys.\n      // If the number of keys changes, this would shift the modulus, and we don't want that to\n      // change which keys are popular so we'll actually construct the scrambled zipfian generator\n      // with a keyspace that is larger than exists at the beginning of the test. that is, we'll predict\n      // the number of inserts, and tell the scrambled zipfian generator the number of existing keys\n      // plus the number of predicted keys as the total keyspace. 
then, if the generator picks a key\n      // that hasn't been inserted yet, will just ignore it and pick another key. this way, the size of\n      // the keyspace doesn't change from the perspective of the scrambled zipfian generator\n      final double insertproportion = Double.parseDouble(\n          p.getProperty(INSERT_PROPORTION_PROPERTY, INSERT_PROPORTION_PROPERTY_DEFAULT));\n      int opcount = Integer.parseInt(p.getProperty(Client.OPERATION_COUNT_PROPERTY));\n      int expectednewkeys = (int) ((opcount) * insertproportion * 2.0); // 2 is fudge factor\n\n      keychooser = new ScrambledZipfianGenerator(insertstart, insertstart + insertcount + expectednewkeys);\n    } else if (requestdistrib.compareTo(\"latest\") == 0) {\n      keychooser = new SkewedLatestGenerator(transactioninsertkeysequence);\n    } else if (requestdistrib.equals(\"hotspot\")) {\n      double hotsetfraction =\n          Double.parseDouble(p.getProperty(HOTSPOT_DATA_FRACTION, HOTSPOT_DATA_FRACTION_DEFAULT));\n      double hotopnfraction =\n          Double.parseDouble(p.getProperty(HOTSPOT_OPN_FRACTION, HOTSPOT_OPN_FRACTION_DEFAULT));\n      keychooser = new HotspotIntegerGenerator(insertstart, insertstart + insertcount - 1,\n          hotsetfraction, hotopnfraction);\n    } else {\n      throw new WorkloadException(\"Unknown request distribution \\\"\" + requestdistrib + \"\\\"\");\n    }\n\n    fieldchooser = new UniformLongGenerator(0, fieldcount - 1);\n\n    if (scanlengthdistrib.compareTo(\"uniform\") == 0) {\n      scanlength = new UniformLongGenerator(1, maxscanlength);\n    } else if (scanlengthdistrib.compareTo(\"zipfian\") == 0) {\n      scanlength = new ZipfianGenerator(1, maxscanlength);\n    } else {\n      throw new WorkloadException(\n          \"Distribution \\\"\" + scanlengthdistrib + \"\\\" not allowed for scan length\");\n    }\n\n    insertionRetryLimit = Integer.parseInt(p.getProperty(\n        INSERTION_RETRY_LIMIT, INSERTION_RETRY_LIMIT_DEFAULT));\n    
insertionRetryInterval = Integer.parseInt(p.getProperty(\n        INSERTION_RETRY_INTERVAL, INSERTION_RETRY_INTERVAL_DEFAULT));\n  }\n\n  protected String buildKeyName(long keynum) {\n    if (!orderedinserts) {\n      keynum = Utils.hash(keynum);\n    }\n    String value = Long.toString(keynum);\n    int fill = zeropadding - value.length();\n    String prekey = \"user\";\n    for (int i = 0; i < fill; i++) {\n      prekey += '0';\n    }\n    return prekey + value;\n  }\n\n  /**\n   * Builds a value for a randomly chosen field.\n   */\n  private HashMap<String, ByteIterator> buildSingleValue(String key) {\n    HashMap<String, ByteIterator> value = new HashMap<>();\n\n    String fieldkey = fieldnames.get(fieldchooser.nextValue().intValue());\n    ByteIterator data;\n    if (dataintegrity) {\n      data = new StringByteIterator(buildDeterministicValue(key, fieldkey));\n    } else {\n      // fill with random data\n      data = new RandomByteIterator(fieldlengthgenerator.nextValue().longValue());\n    }\n    value.put(fieldkey, data);\n\n    return value;\n  }\n\n  /**\n   * Builds values for all fields.\n   */\n  private HashMap<String, ByteIterator> buildValues(String key) {\n    HashMap<String, ByteIterator> values = new HashMap<>();\n\n    for (String fieldkey : fieldnames) {\n      ByteIterator data;\n      if (dataintegrity) {\n        data = new StringByteIterator(buildDeterministicValue(key, fieldkey));\n      } else {\n        // fill with random data\n        data = new RandomByteIterator(fieldlengthgenerator.nextValue().longValue());\n      }\n      values.put(fieldkey, data);\n    }\n    return values;\n  }\n\n  /**\n   * Build a deterministic value given the key information.\n   */\n  private String buildDeterministicValue(String key, String fieldkey) {\n    int size = fieldlengthgenerator.nextValue().intValue();\n    StringBuilder sb = new StringBuilder(size);\n    sb.append(key);\n    sb.append(':');\n    sb.append(fieldkey);\n    while (sb.length() < 
size) {\n      sb.append(':');\n      sb.append(sb.toString().hashCode());\n    }\n    sb.setLength(size);\n\n    return sb.toString();\n  }\n\n  /**\n   * Do one insert operation. Because it will be called concurrently from multiple client threads,\n   * this function must be thread safe. However, avoid synchronized, or the threads will block waiting\n   * for each other, and it will be difficult to reach the target throughput. Ideally, this function would\n   * have no side effects other than DB operations.\n   */\n  @Override\n  public boolean doInsert(DB db, Object threadstate) {\n    int keynum = keysequence.nextValue().intValue();\n    String dbkey = buildKeyName(keynum);\n    HashMap<String, ByteIterator> values = buildValues(dbkey);\n\n    Status status;\n    int numOfRetries = 0;\n    do {\n      status = db.insert(table, dbkey, values);\n      if (null != status && status.isOk()) {\n        break;\n      }\n      // Retry if configured. Without retrying, the load process will fail\n      // even if one single insertion fails. User can optionally configure\n      // an insertion retry limit (default is 0) to enable retry.\n      if (++numOfRetries <= insertionRetryLimit) {\n        System.err.println(\"Retrying insertion, retry count: \" + numOfRetries);\n        try {\n          // Sleep for a random interval in [0.8, 1.2) * insertionRetryInterval seconds.\n          int sleepTime = (int) (1000 * insertionRetryInterval * (0.8 + 0.4 * Math.random()));\n          Thread.sleep(sleepTime);\n        } catch (InterruptedException e) {\n          break;\n        }\n\n      } else {\n        System.err.println(\"Error inserting, not retrying any more. Number of attempts: \" + numOfRetries +\n            \". Insertion Retry Limit: \" + insertionRetryLimit);\n        break;\n\n      }\n    } while (true);\n\n    return null != status && status.isOk();\n  }\n\n  /**\n   * Do one transaction operation. 
Because it will be called concurrently from multiple client\n   * threads, this function must be thread safe. However, avoid synchronized, or the threads will block waiting\n   * for each other, and it will be difficult to reach the target throughput. Ideally, this function would\n   * have no side effects other than DB operations.\n   */\n  @Override\n  public boolean doTransaction(DB db, Object threadstate) {\n    String operation = operationchooser.nextString();\n    if(operation == null) {\n      return false;\n    }\n\n    switch (operation) {\n    case \"READ\":\n      doTransactionRead(db);\n      break;\n    case \"UPDATE\":\n      doTransactionUpdate(db);\n      break;\n    case \"INSERT\":\n      doTransactionInsert(db);\n      break;\n    case \"SCAN\":\n      doTransactionScan(db);\n      break;\n    default:\n      doTransactionReadModifyWrite(db);\n    }\n\n    return true;\n  }\n\n  /**\n   * Results are reported in the first three buckets of the histogram under\n   * the label \"VERIFY\".\n   * Bucket 0 means the expected data was returned.\n   * Bucket 1 means incorrect data was returned.\n   * Bucket 2 means null data was returned when some data was expected.\n   */\n  protected void verifyRow(String key, HashMap<String, ByteIterator> cells) {\n    Status verifyStatus = Status.OK;\n    long startTime = System.nanoTime();\n    if (!cells.isEmpty()) {\n      for (Map.Entry<String, ByteIterator> entry : cells.entrySet()) {\n        if (!entry.getValue().toString().equals(buildDeterministicValue(key, entry.getKey()))) {\n          verifyStatus = Status.UNEXPECTED_STATE;\n          break;\n        }\n      }\n    } else {\n      // This assumes that null data is never valid\n      verifyStatus = Status.ERROR;\n    }\n    long endTime = System.nanoTime();\n    measurements.measure(\"VERIFY\", (int) (endTime - startTime) / 1000);\n    measurements.reportStatus(\"VERIFY\", verifyStatus);\n  }\n\n  long nextKeynum() {\n    long keynum;\n    if (keychooser 
instanceof ExponentialGenerator) {\n      do {\n        keynum = transactioninsertkeysequence.lastValue() - keychooser.nextValue().intValue();\n      } while (keynum < 0);\n    } else {\n      do {\n        keynum = keychooser.nextValue().intValue();\n      } while (keynum > transactioninsertkeysequence.lastValue());\n    }\n    return keynum;\n  }\n\n  public void doTransactionRead(DB db) {\n    // choose a random key\n    long keynum = nextKeynum();\n\n    String keyname = buildKeyName(keynum);\n\n    HashSet<String> fields = null;\n\n    if (!readallfields) {\n      // read a random field\n      String fieldname = fieldnames.get(fieldchooser.nextValue().intValue());\n\n      fields = new HashSet<String>();\n      fields.add(fieldname);\n    } else if (dataintegrity) {\n      // pass the full field list if dataintegrity is on for verification\n      fields = new HashSet<String>(fieldnames);\n    }\n\n    HashMap<String, ByteIterator> cells = new HashMap<String, ByteIterator>();\n    db.read(table, keyname, fields, cells);\n\n    if (dataintegrity) {\n      verifyRow(keyname, cells);\n    }\n  }\n\n  public void doTransactionReadModifyWrite(DB db) {\n    // choose a random key\n    long keynum = nextKeynum();\n\n    String keyname = buildKeyName(keynum);\n\n    HashSet<String> fields = null;\n\n    if (!readallfields) {\n      // read a random field\n      String fieldname = fieldnames.get(fieldchooser.nextValue().intValue());\n\n      fields = new HashSet<String>();\n      fields.add(fieldname);\n    }\n\n    HashMap<String, ByteIterator> values;\n\n    if (writeallfields) {\n      // new data for all the fields\n      values = buildValues(keyname);\n    } else {\n      // update a random field\n      values = buildSingleValue(keyname);\n    }\n\n    // do the transaction\n\n    HashMap<String, ByteIterator> cells = new HashMap<String, ByteIterator>();\n\n\n    long ist = measurements.getIntendedtartTimeNs();\n    long st = System.nanoTime();\n    db.read(table, 
keyname, fields, cells);\n\n    db.update(table, keyname, values);\n\n    long en = System.nanoTime();\n\n    if (dataintegrity) {\n      verifyRow(keyname, cells);\n    }\n\n    measurements.measure(\"READ-MODIFY-WRITE\", (int) ((en - st) / 1000));\n    measurements.measureIntended(\"READ-MODIFY-WRITE\", (int) ((en - ist) / 1000));\n  }\n\n  public void doTransactionScan(DB db) {\n    // choose a random key\n    long keynum = nextKeynum();\n\n    String startkeyname = buildKeyName(keynum);\n\n    // choose a random scan length\n    int len = scanlength.nextValue().intValue();\n\n    HashSet<String> fields = null;\n\n    if (!readallfields) {\n      // read a random field\n      String fieldname = fieldnames.get(fieldchooser.nextValue().intValue());\n\n      fields = new HashSet<String>();\n      fields.add(fieldname);\n    }\n\n    db.scan(table, startkeyname, len, fields, new Vector<HashMap<String, ByteIterator>>());\n  }\n\n  public void doTransactionUpdate(DB db) {\n    // choose a random key\n    long keynum = nextKeynum();\n\n    String keyname = buildKeyName(keynum);\n\n    HashMap<String, ByteIterator> values;\n\n    if (writeallfields) {\n      // new data for all the fields\n      values = buildValues(keyname);\n    } else {\n      // update a random field\n      values = buildSingleValue(keyname);\n    }\n\n    db.update(table, keyname, values);\n  }\n\n  public void doTransactionInsert(DB db) {\n    // choose the next key\n    long keynum = transactioninsertkeysequence.nextValue();\n\n    try {\n      String dbkey = buildKeyName(keynum);\n\n      HashMap<String, ByteIterator> values = buildValues(dbkey);\n      db.insert(table, dbkey, values);\n    } finally {\n      transactioninsertkeysequence.acknowledge(keynum);\n    }\n  }\n\n  /**\n   * Creates a weighted discrete values with database operations for a workload to perform.\n   * Weights/proportions are read from the properties list and defaults are used\n   * when values are not configured.\n   * 
Current operations are \"READ\", \"UPDATE\", \"INSERT\", \"SCAN\" and \"READMODIFYWRITE\".\n   *\n   * @param p The properties list to pull weights from.\n   * @return A generator that can be used to determine the next operation to perform.\n   * @throws IllegalArgumentException if the properties object was null.\n   */\n  protected static DiscreteGenerator createOperationGenerator(final Properties p) {\n    if (p == null) {\n      throw new IllegalArgumentException(\"Properties object cannot be null\");\n    }\n    final double readproportion = Double.parseDouble(\n        p.getProperty(READ_PROPORTION_PROPERTY, READ_PROPORTION_PROPERTY_DEFAULT));\n    final double updateproportion = Double.parseDouble(\n        p.getProperty(UPDATE_PROPORTION_PROPERTY, UPDATE_PROPORTION_PROPERTY_DEFAULT));\n    final double insertproportion = Double.parseDouble(\n        p.getProperty(INSERT_PROPORTION_PROPERTY, INSERT_PROPORTION_PROPERTY_DEFAULT));\n    final double scanproportion = Double.parseDouble(\n        p.getProperty(SCAN_PROPORTION_PROPERTY, SCAN_PROPORTION_PROPERTY_DEFAULT));\n    final double readmodifywriteproportion = Double.parseDouble(p.getProperty(\n        READMODIFYWRITE_PROPORTION_PROPERTY, READMODIFYWRITE_PROPORTION_PROPERTY_DEFAULT));\n\n    final DiscreteGenerator operationchooser = new DiscreteGenerator();\n    if (readproportion > 0) {\n      operationchooser.addValue(readproportion, \"READ\");\n    }\n\n    if (updateproportion > 0) {\n      operationchooser.addValue(updateproportion, \"UPDATE\");\n    }\n\n    if (insertproportion > 0) {\n      operationchooser.addValue(insertproportion, \"INSERT\");\n    }\n\n    if (scanproportion > 0) {\n      operationchooser.addValue(scanproportion, \"SCAN\");\n    }\n\n    if (readmodifywriteproportion > 0) {\n      operationchooser.addValue(readmodifywriteproportion, \"READMODIFYWRITE\");\n    }\n    return operationchooser;\n  }\n}\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/workloads/RestWorkload.java",
    "content": "/**\r\n * Copyright (c) 2016-2017 YCSB contributors. All rights reserved.\r\n * <p>\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n * <p>\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n * <p>\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.workloads;\r\n\r\nimport com.yahoo.ycsb.ByteIterator;\r\nimport com.yahoo.ycsb.DB;\r\nimport com.yahoo.ycsb.RandomByteIterator;\r\nimport com.yahoo.ycsb.WorkloadException;\r\nimport com.yahoo.ycsb.generator.*;\r\n\r\nimport java.io.BufferedReader;\r\nimport java.io.FileReader;\r\nimport java.io.IOException;\r\nimport java.util.HashMap;\r\nimport java.util.Map;\r\nimport java.util.Properties;\r\n\r\nimport com.yahoo.ycsb.generator.UniformLongGenerator;\r\n/**\r\n * Typical RESTFul services benchmarking scenario. Represents a set of client\r\n * calling REST operations like HTTP DELETE, GET, POST, PUT on a web service.\r\n * This scenario is completely different from CoreWorkload which is mainly\r\n * designed for databases benchmarking. 
However due to some reusable\r\n * functionality this class extends {@link CoreWorkload} and overrides necessary\r\n * methods like init, doTransaction etc.\r\n */\r\npublic class RestWorkload extends CoreWorkload {\r\n\r\n  /**\r\n   * The name of the property for the proportion of transactions that are\r\n   * delete.\r\n   */\r\n  public static final String DELETE_PROPORTION_PROPERTY = \"deleteproportion\";\r\n\r\n  /**\r\n   * The default proportion of transactions that are delete.\r\n   */\r\n  public static final String DELETE_PROPORTION_PROPERTY_DEFAULT = \"0.00\";\r\n\r\n  /**\r\n   * The name of the property for the file that holds the field length size for insert operations.\r\n   */\r\n  public static final String FIELD_LENGTH_DISTRIBUTION_FILE_PROPERTY = \"fieldlengthdistfile\";\r\n\r\n  /**\r\n   * The default file name that holds the field length size for insert operations.\r\n   */\r\n  public static final String FIELD_LENGTH_DISTRIBUTION_FILE_PROPERTY_DEFAULT = \"fieldLengthDistFile.txt\";\r\n\r\n  /**\r\n   * In web services even though the CRUD operations follow the same request\r\n   * distribution, they have different traces and distribution parameter\r\n   * values. 
Hence configuring the parameters of these operations separately\r\n   * makes the benchmark more flexible and capable of generating more\r\n   * realistic workloads.\r\n   */\r\n  // Read related properties.\r\n  private static final String READ_TRACE_FILE = \"url.trace.read\";\r\n  private static final String READ_TRACE_FILE_DEFAULT = \"readtrace.txt\";\r\n  private static final String READ_ZIPFIAN_CONSTANT = \"readzipfconstant\";\r\n  private static final String READ_ZIPFIAN_CONSTANT_DEAFULT = \"0.99\";\r\n  private static final String READ_RECORD_COUNT_PROPERTY = \"readrecordcount\";\r\n  // Insert related properties.\r\n  private static final String INSERT_TRACE_FILE = \"url.trace.insert\";\r\n  private static final String INSERT_TRACE_FILE_DEFAULT = \"inserttrace.txt\";\r\n  private static final String INSERT_ZIPFIAN_CONSTANT = \"insertzipfconstant\";\r\n  private static final String INSERT_ZIPFIAN_CONSTANT_DEAFULT = \"0.99\";\r\n  private static final String INSERT_SIZE_ZIPFIAN_CONSTANT = \"insertsizezipfconstant\";\r\n  private static final String INSERT_SIZE_ZIPFIAN_CONSTANT_DEAFULT = \"0.99\";\r\n  private static final String INSERT_RECORD_COUNT_PROPERTY = \"insertrecordcount\";\r\n  // Delete related properties.\r\n  private static final String DELETE_TRACE_FILE = \"url.trace.delete\";\r\n  private static final String DELETE_TRACE_FILE_DEFAULT = \"deletetrace.txt\";\r\n  private static final String DELETE_ZIPFIAN_CONSTANT = \"deletezipfconstant\";\r\n  private static final String DELETE_ZIPFIAN_CONSTANT_DEAFULT = \"0.99\";\r\n  private static final String DELETE_RECORD_COUNT_PROPERTY = \"deleterecordcount\";\r\n  // Update related properties.\r\n  private static final String UPDATE_TRACE_FILE = \"url.trace.update\";\r\n  private static final String UPDATE_TRACE_FILE_DEFAULT = \"updatetrace.txt\";\r\n  private static final String UPDATE_ZIPFIAN_CONSTANT = \"updatezipfconstant\";\r\n  private static final String UPDATE_ZIPFIAN_CONSTANT_DEAFULT = 
\"0.99\";\r\n  private static final String UPDATE_RECORD_COUNT_PROPERTY = \"updaterecordcount\";\r\n\r\n  private Map<Integer, String> readUrlMap;\r\n  private Map<Integer, String> insertUrlMap;\r\n  private Map<Integer, String> deleteUrlMap;\r\n  private Map<Integer, String> updateUrlMap;\r\n  private int readRecordCount;\r\n  private int insertRecordCount;\r\n  private int deleteRecordCount;\r\n  private int updateRecordCount;\r\n  private NumberGenerator readKeyChooser;\r\n  private NumberGenerator insertKeyChooser;\r\n  private NumberGenerator deleteKeyChooser;\r\n  private NumberGenerator updateKeyChooser;\r\n  private NumberGenerator fieldlengthgenerator;\r\n  private DiscreteGenerator operationchooser;\r\n\r\n  @Override\r\n  public void init(Properties p) throws WorkloadException {\r\n\r\n    readRecordCount = Integer.parseInt(p.getProperty(READ_RECORD_COUNT_PROPERTY, String.valueOf(Integer.MAX_VALUE)));\r\n    insertRecordCount = Integer\r\n      .parseInt(p.getProperty(INSERT_RECORD_COUNT_PROPERTY, String.valueOf(Integer.MAX_VALUE)));\r\n    deleteRecordCount = Integer\r\n      .parseInt(p.getProperty(DELETE_RECORD_COUNT_PROPERTY, String.valueOf(Integer.MAX_VALUE)));\r\n    updateRecordCount = Integer\r\n      .parseInt(p.getProperty(UPDATE_RECORD_COUNT_PROPERTY, String.valueOf(Integer.MAX_VALUE)));\r\n\r\n    readUrlMap = getTrace(p.getProperty(READ_TRACE_FILE, READ_TRACE_FILE_DEFAULT), readRecordCount);\r\n    insertUrlMap = getTrace(p.getProperty(INSERT_TRACE_FILE, INSERT_TRACE_FILE_DEFAULT), insertRecordCount);\r\n    deleteUrlMap = getTrace(p.getProperty(DELETE_TRACE_FILE, DELETE_TRACE_FILE_DEFAULT), deleteRecordCount);\r\n    updateUrlMap = getTrace(p.getProperty(UPDATE_TRACE_FILE, UPDATE_TRACE_FILE_DEFAULT), updateRecordCount);\r\n\r\n    operationchooser = createOperationGenerator(p);\r\n\r\n    // Common distribution for all operations.\r\n    String requestDistrib = p.getProperty(REQUEST_DISTRIBUTION_PROPERTY, 
REQUEST_DISTRIBUTION_PROPERTY_DEFAULT);\r\n\r\n    double readZipfconstant = Double.parseDouble(p.getProperty(READ_ZIPFIAN_CONSTANT, READ_ZIPFIAN_CONSTANT_DEAFULT));\r\n    readKeyChooser = getKeyChooser(requestDistrib, readUrlMap.size(), readZipfconstant, p);\r\n    double updateZipfconstant = Double\r\n        .parseDouble(p.getProperty(UPDATE_ZIPFIAN_CONSTANT, UPDATE_ZIPFIAN_CONSTANT_DEAFULT));\r\n    updateKeyChooser = getKeyChooser(requestDistrib, updateUrlMap.size(), updateZipfconstant, p);\r\n    double insertZipfconstant = Double\r\n        .parseDouble(p.getProperty(INSERT_ZIPFIAN_CONSTANT, INSERT_ZIPFIAN_CONSTANT_DEAFULT));\r\n    insertKeyChooser = getKeyChooser(requestDistrib, insertUrlMap.size(), insertZipfconstant, p);\r\n    double deleteZipfconstant = Double\r\n        .parseDouble(p.getProperty(DELETE_ZIPFIAN_CONSTANT, DELETE_ZIPFIAN_CONSTANT_DEAFULT));\r\n    deleteKeyChooser = getKeyChooser(requestDistrib, deleteUrlMap.size(), deleteZipfconstant, p);\r\n\r\n    fieldlengthgenerator = getFieldLengthGenerator(p);\r\n  }\r\n\r\n  public static DiscreteGenerator createOperationGenerator(final Properties p) {\r\n    // Re-using CoreWorkload method.\r\n    final DiscreteGenerator operationChooser = CoreWorkload.createOperationGenerator(p);\r\n    // Needs special handling for delete operations not supported in CoreWorkload.\r\n    double deleteproportion = Double\r\n        .parseDouble(p.getProperty(DELETE_PROPORTION_PROPERTY, DELETE_PROPORTION_PROPERTY_DEFAULT));\r\n    if (deleteproportion > 0) {\r\n      operationChooser.addValue(deleteproportion, \"DELETE\");\r\n    }\r\n    return operationChooser;\r\n  }\r\n\r\n  private static NumberGenerator getKeyChooser(String requestDistrib, int recordCount, double zipfContant,\r\n                                               Properties p) throws WorkloadException {\r\n    NumberGenerator keychooser;\r\n\r\n    switch (requestDistrib) {\r\n    case \"exponential\":\r\n      double percentile = 
Double.parseDouble(p.getProperty(ExponentialGenerator.EXPONENTIAL_PERCENTILE_PROPERTY,\r\n          ExponentialGenerator.EXPONENTIAL_PERCENTILE_DEFAULT));\r\n      double frac = Double.parseDouble(p.getProperty(ExponentialGenerator.EXPONENTIAL_FRAC_PROPERTY,\r\n          ExponentialGenerator.EXPONENTIAL_FRAC_DEFAULT));\r\n      keychooser = new ExponentialGenerator(percentile, recordCount * frac);\r\n      break;\r\n    case \"uniform\":\r\n      keychooser = new UniformLongGenerator(0, recordCount - 1);\r\n      break;\r\n    case \"zipfian\":\r\n      keychooser = new ZipfianGenerator(recordCount, zipfContant);\r\n      break;\r\n    case \"latest\":\r\n      throw new WorkloadException(\"Latest request distribution is not supported for RestWorkload.\");\r\n    case \"hotspot\":\r\n      double hotsetfraction = Double.parseDouble(p.getProperty(HOTSPOT_DATA_FRACTION, HOTSPOT_DATA_FRACTION_DEFAULT));\r\n      double hotopnfraction = Double.parseDouble(p.getProperty(HOTSPOT_OPN_FRACTION, HOTSPOT_OPN_FRACTION_DEFAULT));\r\n      keychooser = new HotspotIntegerGenerator(0, recordCount - 1, hotsetfraction, hotopnfraction);\r\n      break;\r\n    default:\r\n      throw new WorkloadException(\"Unknown request distribution \\\"\" + requestDistrib + \"\\\"\");\r\n    }\r\n    return keychooser;\r\n  }\r\n\r\n  protected static NumberGenerator getFieldLengthGenerator(Properties p) throws WorkloadException {\r\n    // Re-using CoreWorkload method. 
\r\n    NumberGenerator fieldLengthGenerator = CoreWorkload.getFieldLengthGenerator(p);\r\n    String fieldlengthdistribution = p.getProperty(FIELD_LENGTH_DISTRIBUTION_PROPERTY,\r\n        FIELD_LENGTH_DISTRIBUTION_PROPERTY_DEFAULT);\r\n    // Needs special handling for Zipfian distribution for variable Zipf Constant.\r\n    if (fieldlengthdistribution.equals(\"zipfian\")) {\r\n      int fieldlength = Integer.parseInt(p.getProperty(FIELD_LENGTH_PROPERTY, FIELD_LENGTH_PROPERTY_DEFAULT));\r\n      double insertsizezipfconstant = Double\r\n          .parseDouble(p.getProperty(INSERT_SIZE_ZIPFIAN_CONSTANT, INSERT_SIZE_ZIPFIAN_CONSTANT_DEAFULT));\r\n      fieldLengthGenerator = new ZipfianGenerator(1, fieldlength, insertsizezipfconstant);\r\n    }\r\n    return fieldLengthGenerator;\r\n  }\r\n\r\n  /**\r\n   * Reads the trace file and returns a URL map.\r\n   */\r\n  private static Map<Integer, String> getTrace(String filePath, int recordCount)\r\n    throws WorkloadException {\r\n    Map<Integer, String> urlMap = new HashMap<Integer, String>();\r\n    int count = 0;\r\n    String line;\r\n    // Use try-with-resources so the reader is closed even if reading fails.\r\n    try (BufferedReader bufferReader = new BufferedReader(new FileReader(filePath))) {\r\n      while ((line = bufferReader.readLine()) != null) {\r\n        urlMap.put(count++, line.trim());\r\n        if (count >= recordCount) {\r\n          break;\r\n        }\r\n      }\r\n    } catch (IOException e) {\r\n      throw new WorkloadException(\r\n        \"Error while reading the trace. Please make sure the trace file path is correct. 
\"\r\n          + e.getLocalizedMessage());\r\n    }\r\n    return urlMap;\r\n  }\r\n\r\n  /**\r\n   * Not required for Rest Clients as data population is service specific.\r\n   */\r\n  @Override\r\n  public boolean doInsert(DB db, Object threadstate) {\r\n    return false;\r\n  }\r\n\r\n  @Override\r\n  public boolean doTransaction(DB db, Object threadstate) {\r\n    String operation = operationchooser.nextString();\r\n    if (operation == null) {\r\n      return false;\r\n    }\r\n\r\n    switch (operation) {\r\n    case \"UPDATE\":\r\n      doTransactionUpdate(db);\r\n      break;\r\n    case \"INSERT\":\r\n      doTransactionInsert(db);\r\n      break;\r\n    case \"DELETE\":\r\n      doTransactionDelete(db);\r\n      break;\r\n    default:\r\n      doTransactionRead(db);\r\n    }\r\n    return true;\r\n  }\r\n\r\n  /**\r\n   * Returns next URL to be called.\r\n   */\r\n  private String getNextURL(int opType) {\r\n    if (opType == 1) {\r\n      return readUrlMap.get(readKeyChooser.nextValue().intValue());\r\n    } else if (opType == 2) {\r\n      return insertUrlMap.get(insertKeyChooser.nextValue().intValue());\r\n    } else if (opType == 3) {\r\n      return deleteUrlMap.get(deleteKeyChooser.nextValue().intValue());\r\n    } else {\r\n      return updateUrlMap.get(updateKeyChooser.nextValue().intValue());\r\n    }\r\n  }\r\n\r\n  @Override\r\n  public void doTransactionRead(DB db) {\r\n    HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\r\n    db.read(null, getNextURL(1), null, result);\r\n  }\r\n\r\n  @Override\r\n  public void doTransactionInsert(DB db) {\r\n    HashMap<String, ByteIterator> value = new HashMap<String, ByteIterator>();\r\n    // Create random bytes of insert data with a specific size.\r\n    value.put(\"data\", new RandomByteIterator(fieldlengthgenerator.nextValue().longValue()));\r\n    db.insert(null, getNextURL(2), value);\r\n  }\r\n\r\n  public void doTransactionDelete(DB db) {\r\n    db.delete(null, 
getNextURL(3));\r\n  }\r\n\r\n  @Override\r\n  public void doTransactionUpdate(DB db) {\r\n    HashMap<String, ByteIterator> value = new HashMap<String, ByteIterator>();\r\n    // Create random bytes of update data with a specific size.\r\n    value.put(\"data\", new RandomByteIterator(fieldlengthgenerator.nextValue().longValue()));\r\n    db.update(null, getNextURL(4), value);\r\n  }\r\n\r\n}\r\n"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/workloads/TimeSeriesWorkload.java",
    "content": "/**\n * Copyright (c) 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.workloads;\n\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.TreeMap;\nimport java.util.Vector;\nimport java.util.concurrent.TimeUnit;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.Client;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.NumericByteIterator;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport com.yahoo.ycsb.Utils;\nimport com.yahoo.ycsb.Workload;\nimport com.yahoo.ycsb.WorkloadException;\nimport com.yahoo.ycsb.generator.DiscreteGenerator;\nimport com.yahoo.ycsb.generator.Generator;\nimport com.yahoo.ycsb.generator.HotspotIntegerGenerator;\nimport com.yahoo.ycsb.generator.IncrementingPrintableStringGenerator;\nimport com.yahoo.ycsb.generator.NumberGenerator;\nimport com.yahoo.ycsb.generator.RandomDiscreteTimestampGenerator;\nimport com.yahoo.ycsb.generator.ScrambledZipfianGenerator;\nimport com.yahoo.ycsb.generator.SequentialGenerator;\nimport com.yahoo.ycsb.generator.UniformLongGenerator;\nimport com.yahoo.ycsb.generator.UnixEpochTimestampGenerator;\nimport com.yahoo.ycsb.generator.ZipfianGenerator;\nimport com.yahoo.ycsb.measurements.Measurements;\n\n/**\n * A 
specialized workload dealing with time series data, i.e. series of discrete\n * events associated with timestamps and identifiers. For this workload, identities\n * consist of a {@link String} <b>key</b> and a set of {@link String} <b>tag key/value</b>\n * pairs. \n * <p>\n * For example:\n * <table border=\"1\">\n * <tr><th>Time Series Key</th><th>Tag Keys/Values</th><th>1483228800</th><th>1483228860</th><th>1483228920</th></tr>\n * <tr><td>AA</td><td>AA=AA, AB=AA</td><td>42.5</td><td>1.0</td><td>85.9</td></tr>\n * <tr><td>AA</td><td>AA=AA, AB=AB</td><td>-9.4</td><td>76.9</td><td>0.18</td></tr>\n * <tr><td>AB</td><td>AA=AA, AB=AA</td><td>-93.0</td><td>57.1</td><td>-63.8</td></tr>\n * <tr><td>AB</td><td>AA=AA, AB=AB</td><td>7.6</td><td>56.1</td><td>-0.3</td></tr>\n * </table>\n * <p>\n * This table shows four time series with three measurements at three different timestamps.\n * Keys, tags, timestamps and values (numeric only at this time) are generated by\n * this workload. For details on properties and behavior, see the \n * {@code workloads/tsworkload_template} file. The Javadocs will focus on implementation\n * and how {@link DB} clients can parse the workload.\n * <p>\n * In order to avoid having existing DB implementations implement a brand new interface,\n * this workload uses the existing APIs to encode a few special values that can be parsed\n * by the client. The special values include the timestamp, numeric value and some\n * query (read or scan) parameters. As an example of how to parse the fields, see\n * {@link BasicTSDB}.\n * <p>\n * <b>Timestamps</b>\n * <p>\n * Timestamps are presented as Unix Epoch values in units of {@link TimeUnit#SECONDS},\n * {@link TimeUnit#MILLISECONDS} or {@link TimeUnit#NANOSECONDS} based on the \n * {@code timestampunits} property. 
For calls to {@link DB#insert(String, String, java.util.Map)}\n * and {@link DB#update(String, String, java.util.Map)}, the timestamp is added to the \n * {@code values} map encoded in a {@link NumericByteIterator} with the key defined\n * in the {@code timestampkey} property (defaulting to \"YCSBTS\"). To pull out the timestamp\n * when iterating over the values map, cast the {@link ByteIterator} to a \n * {@link NumericByteIterator} and call {@link NumericByteIterator#getLong()}.\n * <p>\n * Note that for calls to {@link DB#update(String, String, java.util.Map)}, timestamps\n * earlier than the timestamp generator's timestamp will be chosen at random to\n * mimic a lambda architecture or an old job re-reporting some data.\n * <p>\n * For calls to {@link DB#read(String, String, java.util.Set, java.util.Map)} and \n * {@link DB#scan(String, String, int, java.util.Set, Vector)}, timestamps\n * are encoded in a {@link StringByteIterator} in a key/value format with the \n * {@code tagpairdelimiter} separator. E.g. {@code YCSBTS=1483228800}. If {@code querytimespan}\n * has been set to a positive value then the value will include a range with the\n * starting (oldest) timestamp followed by the {@code querytimespandelimiter} separator\n * and the ending (most recent) timestamp. E.g. 
{@code YCSBTS=1483228800-1483228920}.\n * <p>\n * For calls to {@link DB#delete(String, String)}, encoding is the same as reads and \n * scans but key/value pairs are separated by the {@code deletedelimiter} property value.\n * <p>\n * By default, the starting timestamp is the current system time without any rounding.\n * All timestamps are then offsets from that starting value.\n * <p>\n * <b>Values</b>\n * <p>\n * Similar to timestamps, values are encoded in {@link NumericByteIterator}s and stored\n * in the values map with the key defined in {@code valuekey} (defaulting to \"YCSBV\").\n * Values can either be 64 bit signed {@link long}s or double precision {@link double}s \n * depending on the {@code valuetype} or {@code dataintegrity} properties. When parsing\n * out the value, always call {@link NumericByteIterator#isFloatingPoint()} to determine\n * whether or not to call {@link NumericByteIterator#getDouble()} (true) or \n * {@link NumericByteIterator#getLong()} (false). \n * <p>\n * When {@code dataintegrity} is set to true, then the value is always set to a \n * 64 bit signed integer which is the Java hash code of the concatenation of the\n * key and map of values (sorted on the map keys and skipping the timestamp and value\n * entries) OR'd with the timestamp of the data point. See \n * {@link #validationFunction(String, long, TreeMap)} for the implementation.\n * <p>\n * <b>Keys and Tags</b>\n * <p>\n * As mentioned, the workload generates strings for the keys and tags. On initialization\n * three string generators are created using the {@link IncrementingPrintableStringGenerator} \n * implementation. 
Then the generators fill three arrays with values based on the\n * number of keys, the number of tags and the cardinality of each tag key/value pair.\n * This implementation gives us time series like the example table where every string\n * starts at something like \"AA\" (depending on the length of keys, tag keys and tag values)\n * and continues to \"ZZ\", at which point it rolls over back to \"AA\". \n * <p>\n * Each time series must have a unique set of tag keys, i.e. the key \"AA\" cannot appear\n * more than once per time series. If the workload is configured for four tags with a\n * tag key length of 2, the keys would be \"AA\", \"AB\", \"AC\" and \"AD\". \n * <p>\n * Each tag key is then associated with a tag value. Tag values may appear more than once\n * in each time series. E.g. time series will usually start with the tags \"AA=AA\", \n * \"AB=AA\", \"AC=AA\" and \"AD=AA\". The {@code tagcardinality} property determines how many\n * unique values will be generated per tag key. In the example table above, the \n * {@code tagcardinality} property would have been set to {@code 1,2} meaning tag \n * key \"AA\" would always have the tag value \"AA\" given a cardinality of 1. However, \n * tag key \"AB\" would have values \"AA\" and \"AB\" due to a cardinality of 2. This \n * cardinality map, along with the number of unique time series keys, determines how \n * many unique time series are generated for the workload. Tag values share a common\n * array of generated strings to save on memory.\n * <p>\n * <b>Operation Order</b>\n * <p>\n * The default behavior of the workload (for inserts and updates) is to generate a\n * value for each time series for a given timestamp before incrementing to the next\n * timestamp and writing values. This is an ideal workload and some time series \n * databases are designed for this behavior. 
However, in the real world, events will\n * arrive grouped close to the current system time with a number of events being \n * delayed, hence their timestamps are further in the past. The {@code delayedseries}\n * property determines the percentage of time series that are delayed by up to\n * {@code delayedintervals} intervals. E.g. setting this value to 0.05 means that \n * 5% of the time series will be written with timestamps earlier than the timestamp\n * generator's current time.\n * </p>\n * <b>Reads and Scans</b>\n * <p>\n * For benchmarking queries, some common tasks implemented by almost every time series\n * database are available and are passed in the fields {@link Set}:\n * <p>\n * <b>GroupBy</b> - A common operation is to aggregate multiple time series into a\n * single time series via common parameters. For example, a user may want to see the\n * total network traffic in a data center so they'll issue a SQL query like:\n * <code>SELECT value FROM timeseriesdb GROUP BY datacenter ORDER BY SUM(value);</code>\n * If the {@code groupbyfunction} has been set to a group by function, then the fields\n * will contain a key/value pair with the key set in {@code groupbykey}. E.g.\n * {@code YCSBGB=SUM}.\n * <p>\n * Additionally, with grouping enabled, fields on tag keys where group bys should \n * occur will only have the key defined and will not have a value or delimiter. E.g.\n * if grouping on tag key \"AA\", the field will contain {@code AA} instead of {@code AA=AB}.\n * <p>\n * <b>Downsampling</b> - Another common operation is to reduce the resolution of the\n * queried time series when fetching a wide time range of data so fewer data points \n * are returned. 
For example, a user may fetch a week of data but if the data is\n * recorded on a 1 second interval, that would be over 600k data points so they\n * may ask for a 1 hour downsampling (also called bucketing) wherein every hour, all\n * of the data points for a \"bucket\" are aggregated into a single value. \n * <p>\n * To enable downsampling, the {@code downsamplingfunction} property must be set to\n * a supported function such as \"SUM\" and the {@code downsamplinginterval} must be\n * set to a valid time interval with the same units as {@code timestampunits}, e.g.\n * \"3600\" which would create 1 hour buckets if the time units were set to seconds.\n * With downsampling, query fields will include a key/value pair with \n * {@code downsamplingkey} as the key (defaulting to \"YCSBDS\") and the value being\n * a concatenation of {@code downsamplingfunction} and {@code downsamplinginterval},\n * for example {@code YCSBDS=SUM60}.\n * <p>\n * <b>Timestamps</b> - For every read, a random timestamp is selected from the interval\n * set. If {@code querytimespan} has been set to a positive value, then the configured\n * query time interval is added to the selected timestamp so the read passes the DB\n * a range of times. Note that during the run phase, if no data was previously loaded,\n * or if there are more {@code recordcount}s set for the run phase, reads may be sent\n * to the DB with timestamps that are beyond the written data time range (or even the\n * system clock of the DB).\n * <p>\n * <b>Deletes</b>\n * <p>\n * Because the delete API only accepts a single key, a full key and tag key/value \n * pair map is flattened into a single string for parsing by the database. Common\n * workloads include deleting a single time series (wherein all tag key and values are\n * defined), deleting all series containing a tag key and value or deleting all of the\n * time series sharing a common time series key. 
\n * <p>\n * Right now the workload supports deletes with a key and for time series tag key/value\n * pairs or a key with tags and a group by on one or more tags (meaning, delete all of\n * the series with any value for the given tag key). The parameters are collapsed into\n * a single string delimited with the character in the {@code deletedelimiter} property.\n * For example, a delete request may look like: {@code AA:AA=AA:AA=AB} to delete the \n * first time series in the table above.\n * <p>\n * <b>Threads</b>\n * <p>\n * For a multi-threaded execution, the number of time series keys set via the \n * {@code fieldcount} property must be greater than or equal to the number of\n * threads set via {@code threads}. This is due to each thread choosing a subset\n * of the total number of time series keys and being responsible for writing values \n * for each time series containing those keys at each timestamp. Thus each thread\n * will have its own timestamp generator, incrementing each time every time series\n * it is responsible for has had a value written.\n * <p>\n * Each thread may, however, issue reads and scans for any time series in the \n * complete set.\n * <p>\n * <b>Sparsity</b>\n * <p>\n * By default, during loads, every time series will have a data point written at every\n * timestamp in the interval set. This is common in workloads where a sensor writes\n * a value at regular intervals. However, some time series are only reported under \n * certain conditions. \n * <p>\n * For example, a counter may track the number of errors over a \n * time period for a web service and only report when the value is greater than 1.\n * Or a time series may include tags such as a user ID and IP address when a request\n * arrives at the web service and only report values when that combination is seen.\n * This means the time series will <i>not</i> have a value at every timestamp and in\n * some cases there may be only a single value!  
\n * <p>\n * This workload has a {@code sparsity} parameter that can choose how often a \n * time series should record a value. The default value of 0.0 means every series \n * will get a value at every timestamp. A value of 0.95 will mean that for each \n * series, only 5% of the timestamps in the interval will have a value. The distribution\n * of values is random.\n * <p>\n * <b>Notes/Warnings</b>\n * <p>\n * <ul>\n * <li>Because time series keys and tag key/values are generated and stored in memory,\n * be careful of setting the cardinality too high for the JVM's heap.</li>\n * <li>When running for data integrity, a number of settings are incompatible and will\n * throw errors. Check the error messages for details.</li>\n * <li>Databases that support keys only and can't store tags should order and then \n * collapse the tag values using a delimiter. For example the series in the example \n * table at the top could be written as:\n * <ul>\n * <li>{@code AA.AA.AA}</li>\n * <li>{@code AA.AA.AB}</li>\n * <li>{@code AB.AA.AA}</li>\n * <li>{@code AB.AA.AB}</li>\n * </ul></li>\n * </ul>\n * <p>\n * <b>TODOs</b>\n * <p>\n * <ul>\n * <li>Support random time intervals. E.g. some series write every second, others every\n * 60 seconds.</li>\n * <li>Support random time series cardinality. Right now every series has the same \n * cardinality.</li>\n * <li>Truly random timestamps per time series. We could use bitmaps to determine if\n * a series has had a value written for a given timestamp. 
Right now all of the series\n * are in sync time-wise.</li>\n * <li>Possibly a real-time load where values are written with the current system time.\n * It's more of a bulk-loading operation now.</li>\n * </ul>\n */\npublic class TimeSeriesWorkload extends Workload {  \n  \n  /**\n   * The types of values written to the timeseries store.\n   */\n  public enum ValueType {\n    INTEGERS(\"integers\"),\n    FLOATS(\"floats\"),\n    MIXED(\"mixednumbers\");\n    \n    protected final String name;\n    \n    ValueType(final String name) {\n      this.name = name;\n    }\n    \n    public static ValueType fromString(final String name) {\n      for (final ValueType type : ValueType.values()) {\n        if (type.name.equalsIgnoreCase(name)) {\n          return type;\n        }\n      }\n      throw new IllegalArgumentException(\"Unrecognized type: \" + name);\n    }\n  }\n  \n  /** Name and default value for the timestamp key property. */\n  public static final String TIMESTAMP_KEY_PROPERTY = \"timestampkey\";\n  public static final String TIMESTAMP_KEY_PROPERTY_DEFAULT = \"YCSBTS\";\n  \n  /** Name and default value for the value key property. */\n  public static final String VALUE_KEY_PROPERTY = \"valuekey\";\n  public static final String VALUE_KEY_PROPERTY_DEFAULT = \"YCSBV\";\n  \n  /** Name and default value for the timestamp interval property. */    \n  public static final String TIMESTAMP_INTERVAL_PROPERTY = \"timestampinterval\";    \n  public static final String TIMESTAMP_INTERVAL_PROPERTY_DEFAULT = \"60\";    \n      \n  /** Name and default value for the timestamp units property. */   \n  public static final String TIMESTAMP_UNITS_PROPERTY = \"timestampunits\";    \n  public static final String TIMESTAMP_UNITS_PROPERTY_DEFAULT = \"SECONDS\"; \n  \n  /** Name and default value for the number of tags property. 
*/\n  public static final String TAG_COUNT_PROPERTY = \"tagcount\";\n  public static final String TAG_COUNT_PROPERTY_DEFAULT = \"4\";\n  \n  /** Name and default value for the tag value cardinality map property. */\n  public static final String TAG_CARDINALITY_PROPERTY = \"tagcardinality\";\n  public static final String TAG_CARDINALITY_PROPERTY_DEFAULT = \"1, 2, 4, 8\";\n  \n  /** Name and default value for the tag key length property. */\n  public static final String TAG_KEY_LENGTH_PROPERTY = \"tagkeylength\";\n  public static final String TAG_KEY_LENGTH_PROPERTY_DEFAULT = \"8\";\n  \n  /** Name and default value for the tag value length property. */\n  public static final String TAG_VALUE_LENGTH_PROPERTY = \"tagvaluelength\";\n  public static final String TAG_VALUE_LENGTH_PROPERTY_DEFAULT = \"8\";\n  \n  /** Name and default value for the tag pair delimiter property. */\n  public static final String PAIR_DELIMITER_PROPERTY = \"tagpairdelimiter\";\n  public static final String PAIR_DELIMITER_PROPERTY_DEFAULT = \"=\";\n  \n  /** Name and default value for the delete string delimiter property. */\n  public static final String DELETE_DELIMITER_PROPERTY = \"deletedelimiter\";\n  public static final String DELETE_DELIMITER_PROPERTY_DEFAULT = \":\";\n  \n  /** Name and default value for the random timestamp write order property. */\n  public static final String RANDOMIZE_TIMESTAMP_ORDER_PROPERTY = \"randomwritetimestamporder\";\n  public static final String RANDOMIZE_TIMESTAMP_ORDER_PROPERTY_DEFAULT = \"false\";\n  \n  /** Name and default value for the random time series write order property. */\n  public static final String RANDOMIZE_TIMESERIES_ORDER_PROPERTY = \"randomtimeseriesorder\";\n  public static final String RANDOMIZE_TIMESERIES_ORDER_PROPERTY_DEFAULT = \"true\";\n  \n  /** Name and default value for the value types property. 
*/\n  public static final String VALUE_TYPE_PROPERTY = \"valuetype\";\n  public static final String VALUE_TYPE_PROPERTY_DEFAULT = \"floats\";\n  \n  /** Name and default value for the sparsity property. */\n  public static final String SPARSITY_PROPERTY = \"sparsity\";\n  public static final String SPARSITY_PROPERTY_DEFAULT = \"0.00\";\n  \n  /** Name and default value for the delayed series percentage property. */\n  public static final String DELAYED_SERIES_PROPERTY = \"delayedseries\";\n  public static final String DELAYED_SERIES_PROPERTY_DEFAULT = \"0.10\";\n  \n  /** Name and default value for the delayed series intervals property. */\n  public static final String DELAYED_INTERVALS_PROPERTY = \"delayedintervals\";\n  public static final String DELAYED_INTERVALS_PROPERTY_DEFAULT = \"5\";\n  \n  /** Name and default value for the query time span property. */\n  public static final String QUERY_TIMESPAN_PROPERTY = \"querytimespan\";\n  public static final String QUERY_TIMESPAN_PROPERTY_DEFAULT = \"0\";\n  \n  /** Name and default value for the randomized query time span property. */\n  public static final String QUERY_RANDOM_TIMESPAN_PROPERTY = \"queryrandomtimespan\";\n  public static final String QUERY_RANDOM_TIMESPAN_PROPERTY_DEFAULT = \"false\";\n  \n  /** Name and default value for the query time stamp delimiter property. */\n  public static final String QUERY_TIMESPAN_DELIMITER_PROPERTY = \"querytimespandelimiter\";\n  public static final String QUERY_TIMESPAN_DELIMITER_PROPERTY_DEFAULT = \",\";\n  \n  /** Name and default value for the group-by key property. */\n  public static final String GROUPBY_KEY_PROPERTY = \"groupbykey\";\n  public static final String GROUPBY_KEY_PROPERTY_DEFAULT = \"YCSBGB\";\n  \n  /** Name and default value for the group-by function property. */\n  public static final String GROUPBY_PROPERTY = \"groupbyfunction\";\n  \n  /** Name and default value for the group-by key map property. 
*/\n  public static final String GROUPBY_KEYS_PROPERTY = \"groupbykeys\";\n  \n  /** Name and default value for the downsampling key property. */\n  public static final String DOWNSAMPLING_KEY_PROPERTY = \"downsamplingkey\";\n  public static final String DOWNSAMPLING_KEY_PROPERTY_DEFAULT = \"YCSBDS\";\n  \n  /** Name and default value for the downsampling function property. */\n  public static final String DOWNSAMPLING_FUNCTION_PROPERTY = \"downsamplingfunction\";\n  \n  /** Name and default value for the downsampling interval property. */\n  public static final String DOWNSAMPLING_INTERVAL_PROPERTY = \"downsamplinginterval\";\n  \n  /** The properties to pull settings from. */\n  protected Properties properties;\n  \n  /** Generators for keys, tag keys and tag values. */\n  protected Generator<String> keyGenerator;\n  protected Generator<String> tagKeyGenerator;\n  protected Generator<String> tagValueGenerator;\n  \n  /** The timestamp key, defaults to \"YCSBTS\". */\n  protected String timestampKey;\n  \n  /** The value key, defaults to \"YCSBV\". */\n  protected String valueKey;\n  \n  /** The number of time units in between timestamps. */\n  protected int timestampInterval;\n  \n  /** The units of time the timestamp and various intervals represent. */\n  protected TimeUnit timeUnits;\n  \n  /** Whether or not to randomize the timestamp order when writing. */\n  protected boolean randomizeTimestampOrder;\n  \n  /** Whether or not to randomize (shuffle) the time series order. NOT compatible\n   * with data integrity. */\n  protected boolean randomizeTimeseriesOrder;\n  \n  /** The type of values to generate when writing data. */\n  protected ValueType valueType;\n  \n  /** Used to calculate an offset for each time series. */\n  protected int[] cumulativeCardinality;\n  \n  /** The calculated total cardinality based on the config. */\n  protected int totalCardinality;\n  \n  /** The calculated per-time-series-key cardinality. I.e. 
the number of unique\n   * tag key and value combinations. */\n  protected int perKeyCardinality;\n  \n  /** How much data to scan for in each call. */\n  protected NumberGenerator scanlength;\n  \n  /** A generator used to select a random time series key per read/scan. */\n  protected NumberGenerator keychooser;\n  \n  /** A generator to select what operation to perform during the run phase. */\n  protected DiscreteGenerator operationchooser;\n  \n  /** The maximum number of interval offsets from the starting timestamp. Calculated\n   * based on the number of records configured for the run. */\n  protected int maxOffsets;\n  \n  /** The number of records or operations to perform for this run. */\n  protected int recordcount;\n  \n  /** The number of tag pairs per time series. */\n  protected int tagPairs;\n  \n  /** The table we'll write to. */\n  protected String table;\n  \n  /** How many time series keys will be generated. */\n  protected int numKeys;\n  \n  /** The generated list of possible time series key values. */\n  protected String[] keys;\n\n  /** The generated list of possible tag key values. */\n  protected String[] tagKeys;\n  \n  /** The generated list of possible tag value values. */\n  protected String[] tagValues;\n  \n  /** The cardinality for each tag key. */\n  protected int[] tagCardinality;\n  \n  /** A helper to skip non-incrementing tag values. */\n  protected int firstIncrementableCardinality;\n  \n  /** How sparse the data written should be. */\n  protected double sparsity;\n  \n  /** The percentage of time series that should be delayed in writes. */\n  protected double delayedSeries;\n  \n  /** The maximum number of intervals to delay a series. */\n  protected int delayedIntervals;\n  \n  /** Optional query time interval during reads/scans. */\n  protected int queryTimeSpan;\n  \n  /** Whether or not the actual interval should be randomly chosen, using \n   * queryTimeSpan as the maximum value. 
*/\n  protected boolean queryRandomTimeSpan;\n  \n  /** The delimiter for tag pairs in fields. */\n  protected String tagPairDelimiter;\n  \n  /** The delimiter between parameters for the delete key. */\n  protected String deleteDelimiter;\n  \n  /** The delimiter between timestamps for query time spans. */\n  protected String queryTimeSpanDelimiter;\n  \n  /** Whether or not to issue group-by queries. */\n  protected boolean groupBy;\n  \n  /** The key used for group-by tag keys. */\n  protected String groupByKey;\n  \n  /** The function used for group-by's. */\n  protected String groupByFunction;\n  \n  /** The tag keys to group on. */\n  protected boolean[] groupBys;\n  \n  /** Whether or not to issue downsampling queries. */\n  protected boolean downsample;\n  \n  /** The key used for downsampling tag keys. */\n  protected String downsampleKey;\n  \n  /** The downsampling function. */\n  protected String downsampleFunction;\n  \n  /** The downsampling interval. */\n  protected int downsampleInterval;\n\n  /**\n   * Set to true if want to check correctness of reads. Must also\n   * be set to true during loading phase to function.\n   */\n  protected boolean dataintegrity;\n  \n  /** Measurements to write data integrity results to. 
*/\n  protected Measurements measurements = Measurements.getMeasurements();\n  \n  @Override\n  public void init(final Properties p) throws WorkloadException {\n    properties = p;\n    recordcount =\n        Integer.parseInt(p.getProperty(Client.RECORD_COUNT_PROPERTY, \n            Client.DEFAULT_RECORD_COUNT));\n    if (recordcount == 0) {\n      recordcount = Integer.MAX_VALUE;\n    }\n    timestampKey = p.getProperty(TIMESTAMP_KEY_PROPERTY, TIMESTAMP_KEY_PROPERTY_DEFAULT);\n    valueKey = p.getProperty(VALUE_KEY_PROPERTY, VALUE_KEY_PROPERTY_DEFAULT);\n    operationchooser = CoreWorkload.createOperationGenerator(properties);\n    \n    final int maxscanlength =\n        Integer.parseInt(p.getProperty(CoreWorkload.MAX_SCAN_LENGTH_PROPERTY, \n            CoreWorkload.MAX_SCAN_LENGTH_PROPERTY_DEFAULT));\n    String scanlengthdistrib =\n        p.getProperty(CoreWorkload.SCAN_LENGTH_DISTRIBUTION_PROPERTY, \n            CoreWorkload.SCAN_LENGTH_DISTRIBUTION_PROPERTY_DEFAULT);\n    \n    if (scanlengthdistrib.compareTo(\"uniform\") == 0) {\n      scanlength = new UniformLongGenerator(1, maxscanlength);\n    } else if (scanlengthdistrib.compareTo(\"zipfian\") == 0) {\n      scanlength = new ZipfianGenerator(1, maxscanlength);\n    } else {\n      throw new WorkloadException(\n          \"Distribution \\\"\" + scanlengthdistrib + \"\\\" not allowed for scan length\");\n    }\n    \n    randomizeTimestampOrder = Boolean.parseBoolean(p.getProperty(\n        RANDOMIZE_TIMESTAMP_ORDER_PROPERTY, \n        RANDOMIZE_TIMESTAMP_ORDER_PROPERTY_DEFAULT));\n    randomizeTimeseriesOrder = Boolean.parseBoolean(p.getProperty(\n        RANDOMIZE_TIMESERIES_ORDER_PROPERTY, \n        RANDOMIZE_TIMESERIES_ORDER_PROPERTY_DEFAULT));\n    \n    // setup the cardinality\n    numKeys = Integer.parseInt(p.getProperty(CoreWorkload.FIELD_COUNT_PROPERTY, \n        CoreWorkload.FIELD_COUNT_PROPERTY_DEFAULT));\n    tagPairs = Integer.parseInt(p.getProperty(TAG_COUNT_PROPERTY, \n        
TAG_COUNT_PROPERTY_DEFAULT));\n    sparsity = Double.parseDouble(p.getProperty(SPARSITY_PROPERTY, SPARSITY_PROPERTY_DEFAULT));\n    tagCardinality = new int[tagPairs];\n    \n    final String requestdistrib =\n        p.getProperty(CoreWorkload.REQUEST_DISTRIBUTION_PROPERTY, \n            CoreWorkload.REQUEST_DISTRIBUTION_PROPERTY_DEFAULT);\n    if (requestdistrib.compareTo(\"uniform\") == 0) {\n      keychooser = new UniformLongGenerator(0, numKeys - 1);\n    } else if (requestdistrib.compareTo(\"sequential\") == 0) {\n      keychooser = new SequentialGenerator(0, numKeys - 1);\n    } else if (requestdistrib.compareTo(\"zipfian\") == 0) {\n      keychooser = new ScrambledZipfianGenerator(0, numKeys - 1);\n    //} else if (requestdistrib.compareTo(\"latest\") == 0) {\n    //  keychooser = new SkewedLatestGenerator(transactioninsertkeysequence);\n    } else if (requestdistrib.equals(\"hotspot\")) {\n      double hotsetfraction =\n          Double.parseDouble(p.getProperty(CoreWorkload.HOTSPOT_DATA_FRACTION, \n              CoreWorkload.HOTSPOT_DATA_FRACTION_DEFAULT));\n      double hotopnfraction =\n          Double.parseDouble(p.getProperty(CoreWorkload.HOTSPOT_OPN_FRACTION, \n              CoreWorkload.HOTSPOT_OPN_FRACTION_DEFAULT));\n      keychooser = new HotspotIntegerGenerator(0, numKeys - 1,\n          hotsetfraction, hotopnfraction);\n    } else {\n      throw new WorkloadException(\"Unknown request distribution \\\"\" + requestdistrib + \"\\\"\");\n    }\n    \n    // figure out the start timestamp based on the units, cardinality and interval\n    try {\n      timestampInterval = Integer.parseInt(p.getProperty(\n          TIMESTAMP_INTERVAL_PROPERTY, TIMESTAMP_INTERVAL_PROPERTY_DEFAULT));\n    } catch (NumberFormatException nfe) {\n      throw new WorkloadException(\"Unable to parse the \" + \n          TIMESTAMP_INTERVAL_PROPERTY, nfe);\n    }\n    \n    try {\n      timeUnits = TimeUnit.valueOf(p.getProperty(TIMESTAMP_UNITS_PROPERTY, \n          
TIMESTAMP_UNITS_PROPERTY_DEFAULT).toUpperCase());\n    } catch (IllegalArgumentException e) {\n      throw new WorkloadException(\"Unknown time unit type\", e);\n    }\n    if (timeUnits == TimeUnit.NANOSECONDS || timeUnits == TimeUnit.MICROSECONDS) {\n      throw new WorkloadException(\"YCSB doesn't support \" + timeUnits + \n          \" at this time.\");\n    }\n    \n    tagPairDelimiter = p.getProperty(PAIR_DELIMITER_PROPERTY, PAIR_DELIMITER_PROPERTY_DEFAULT);\n    deleteDelimiter = p.getProperty(DELETE_DELIMITER_PROPERTY, DELETE_DELIMITER_PROPERTY_DEFAULT);\n    dataintegrity = Boolean.parseBoolean(\n        p.getProperty(CoreWorkload.DATA_INTEGRITY_PROPERTY, \n            CoreWorkload.DATA_INTEGRITY_PROPERTY_DEFAULT));\n    \n    queryTimeSpan = Integer.parseInt(p.getProperty(QUERY_TIMESPAN_PROPERTY, \n        QUERY_TIMESPAN_PROPERTY_DEFAULT));\n    queryRandomTimeSpan = Boolean.parseBoolean(p.getProperty(QUERY_RANDOM_TIMESPAN_PROPERTY, \n        QUERY_RANDOM_TIMESPAN_PROPERTY_DEFAULT));\n    queryTimeSpanDelimiter = p.getProperty(QUERY_TIMESPAN_DELIMITER_PROPERTY, \n        QUERY_TIMESPAN_DELIMITER_PROPERTY_DEFAULT);\n    \n    groupByKey = p.getProperty(GROUPBY_KEY_PROPERTY, GROUPBY_KEY_PROPERTY_DEFAULT);\n    groupByFunction = p.getProperty(GROUPBY_PROPERTY);\n    if (groupByFunction != null && !groupByFunction.isEmpty()) {\n      final String groupByKeys = p.getProperty(GROUPBY_KEYS_PROPERTY);\n      if (groupByKeys == null || groupByKeys.isEmpty()) {\n        throw new WorkloadException(\"Group by was enabled but no keys were specified.\");\n      }\n      final String[] gbKeys = groupByKeys.split(\",\");\n      // Compare against tagPairs; the tagKeys array is not populated until\n      // initKeysAndTags() runs later in init().\n      if (gbKeys.length != tagPairs) {\n        throw new WorkloadException(\"Only \" + gbKeys.length + \" group by keys \"\n            + \"were specified but there were \" + tagPairs + \" tag keys given.\");\n      }\n      groupBys = new boolean[gbKeys.length];\n      for (int i = 0; i < gbKeys.length; i++) {\n        groupBys[i] = 
Integer.parseInt(gbKeys[i].trim()) != 0;\n      }\n      groupBy = true;\n    }\n    \n    downsampleKey = p.getProperty(DOWNSAMPLING_KEY_PROPERTY, DOWNSAMPLING_KEY_PROPERTY_DEFAULT);\n    downsampleFunction = p.getProperty(DOWNSAMPLING_FUNCTION_PROPERTY);\n    if (downsampleFunction != null && !downsampleFunction.isEmpty()) {\n      final String interval = p.getProperty(DOWNSAMPLING_INTERVAL_PROPERTY);\n      if (interval == null || interval.isEmpty()) {\n        throw new WorkloadException(\"'\" + DOWNSAMPLING_INTERVAL_PROPERTY + \"' was missing despite '\" \n            + DOWNSAMPLING_FUNCTION_PROPERTY + \"' being set.\");\n      }\n      downsampleInterval = Integer.parseInt(interval);\n      downsample = true;\n    }\n    \n    delayedSeries = Double.parseDouble(p.getProperty(DELAYED_SERIES_PROPERTY, DELAYED_SERIES_PROPERTY_DEFAULT));\n    delayedIntervals = Integer.parseInt(p.getProperty(DELAYED_INTERVALS_PROPERTY, DELAYED_INTERVALS_PROPERTY_DEFAULT));\n    \n    valueType = ValueType.fromString(p.getProperty(VALUE_TYPE_PROPERTY, VALUE_TYPE_PROPERTY_DEFAULT));\n    table = p.getProperty(CoreWorkload.TABLENAME_PROPERTY, CoreWorkload.TABLENAME_PROPERTY_DEFAULT);\n    initKeysAndTags();\n    validateSettings();\n  }\n  \n  @Override\n  public Object initThread(Properties p, int mythreadid, int threadcount) throws WorkloadException {\n    if (properties == null) {\n      throw new WorkloadException(\"Workload has not been initialized.\");\n    }\n    return new ThreadState(mythreadid, threadcount);\n  }\n  \n  @Override\n  public boolean doInsert(DB db, Object threadstate) {\n    if (threadstate == null) {\n      throw new IllegalStateException(\"Missing thread state.\");\n    }\n    final Map<String, ByteIterator> tags = new TreeMap<String, ByteIterator>();\n    final String key = ((ThreadState)threadstate).nextDataPoint(tags, true);\n    return db.insert(table, key, tags) == Status.OK;\n  }\n\n  
@Override\n  public boolean doTransaction(DB db, Object threadstate) {\n    if (threadstate == null) {\n      throw new IllegalStateException(\"Missing thread state.\");\n    }\n    switch (operationchooser.nextString()) {\n    case \"READ\":\n      doTransactionRead(db, threadstate);\n      break;\n    case \"UPDATE\":\n      doTransactionUpdate(db, threadstate);\n      break;\n    case \"INSERT\": \n      doTransactionInsert(db, threadstate);\n      break;\n    case \"SCAN\":\n      doTransactionScan(db, threadstate);\n      break;\n    case \"DELETE\":\n      doTransactionDelete(db, threadstate);\n      break;\n    default:\n      return false;\n    }\n    return true;\n  }\n\n  protected void doTransactionRead(final DB db, Object threadstate) {\n    final ThreadState state = (ThreadState) threadstate;\n    final String keyname = keys[keychooser.nextValue().intValue()];\n    \n    int offsets = state.queryOffsetGenerator.nextValue().intValue();\n    //int offsets = Utils.random().nextInt(maxOffsets - 1);\n    final long startTimestamp;\n    if (offsets > 0) {\n      startTimestamp = state.startTimestamp + state.timestampGenerator.getOffset(offsets);\n    } else {\n      startTimestamp = state.startTimestamp;\n    }\n    \n    // rando tags\n    Set<String> fields = new HashSet<String>();\n    for (int i = 0; i < tagPairs; ++i) {\n      if (groupBy && groupBys[i]) {\n        fields.add(tagKeys[i]);\n      } else {\n        fields.add(tagKeys[i] + tagPairDelimiter + \n            tagValues[Utils.random().nextInt(tagCardinality[i])]);\n      }\n    }\n    \n    if (queryTimeSpan > 0) {\n      final long endTimestamp;\n      if (queryRandomTimeSpan) {\n        endTimestamp = startTimestamp + (timestampInterval * Utils.random().nextInt(queryTimeSpan / timestampInterval));\n      } else {\n        endTimestamp = startTimestamp + queryTimeSpan;\n      }\n      fields.add(timestampKey + tagPairDelimiter + startTimestamp + queryTimeSpanDelimiter + endTimestamp);\n    } 
else {\n      fields.add(timestampKey + tagPairDelimiter + startTimestamp);  \n    }\n    if (groupBy) {\n      fields.add(groupByKey + tagPairDelimiter + groupByFunction);\n    }\n    if (downsample) {\n      fields.add(downsampleKey + tagPairDelimiter + downsampleFunction + tagPairDelimiter + downsampleInterval);\n    }\n    \n    final Map<String, ByteIterator> cells = new HashMap<String, ByteIterator>();\n    final Status status = db.read(table, keyname, fields, cells);\n    \n    if (dataintegrity && status == Status.OK) {\n      verifyRow(keyname, cells);\n    }\n  }\n  \n  protected void doTransactionUpdate(final DB db, Object threadstate) {\n    if (threadstate == null) {\n      throw new IllegalStateException(\"Missing thread state.\");\n    }\n    final Map<String, ByteIterator> tags = new TreeMap<String, ByteIterator>();\n    final String key = ((ThreadState)threadstate).nextDataPoint(tags, false);\n    db.update(table, key, tags);\n  }\n  \n  protected void doTransactionInsert(final DB db, Object threadstate) {\n    doInsert(db, threadstate);\n  }\n  \n  protected void doTransactionScan(final DB db, Object threadstate) {\n    final ThreadState state = (ThreadState) threadstate;\n    \n    final String keyname = keys[Utils.random().nextInt(keys.length)];\n    \n    // choose a random scan length\n    int len = scanlength.nextValue().intValue();\n    \n    int offsets = Utils.random().nextInt(maxOffsets - 1);\n    final long startTimestamp;\n    if (offsets > 0) {\n      startTimestamp = state.startTimestamp + state.timestampGenerator.getOffset(offsets);\n    } else {\n      startTimestamp = state.startTimestamp;\n    }\n    \n    // rando tags\n    Set<String> fields = new HashSet<String>();\n    for (int i = 0; i < tagPairs; ++i) {\n      if (groupBy && groupBys[i]) {\n        fields.add(tagKeys[i]);\n      } else {\n        fields.add(tagKeys[i] + tagPairDelimiter + \n            tagValues[Utils.random().nextInt(tagCardinality[i])]);\n      }\n    }\n    \n    if 
(queryTimeSpan > 0) {\n      final long endTimestamp;\n      if (queryRandomTimeSpan) {\n        endTimestamp = startTimestamp + (timestampInterval * Utils.random().nextInt(queryTimeSpan / timestampInterval));\n      } else {\n        endTimestamp = startTimestamp + queryTimeSpan;\n      }\n      fields.add(timestampKey + tagPairDelimiter + startTimestamp + queryTimeSpanDelimiter + endTimestamp);\n    } else {\n      fields.add(timestampKey + tagPairDelimiter + startTimestamp);  \n    }\n    if (groupBy) {\n      fields.add(groupByKey + tagPairDelimiter + groupByFunction);\n    }\n    if (downsample) {\n      fields.add(downsampleKey + tagPairDelimiter + downsampleFunction + tagPairDelimiter + downsampleInterval);\n    }\n    \n    final Vector<HashMap<String, ByteIterator>> results = new Vector<HashMap<String, ByteIterator>>();\n    db.scan(table, keyname, len, fields, results);\n  }\n  \n  protected void doTransactionDelete(final DB db, Object threadstate) {\n    final ThreadState state = (ThreadState) threadstate;\n    \n    final StringBuilder buf = new StringBuilder().append(keys[Utils.random().nextInt(keys.length)]);\n    \n    int offsets = Utils.random().nextInt(maxOffsets - 1);\n    final long startTimestamp;\n    if (offsets > 0) {\n      startTimestamp = state.startTimestamp + state.timestampGenerator.getOffset(offsets);\n    } else {\n      startTimestamp = state.startTimestamp;\n    }\n    \n    // rando tags\n    for (int i = 0; i < tagPairs; ++i) {\n      if (groupBy && groupBys[i]) {\n        buf.append(deleteDelimiter)\n           .append(tagKeys[i]);\n      } else {\n        buf.append(deleteDelimiter).append(tagKeys[i] + tagPairDelimiter + \n            tagValues[Utils.random().nextInt(tagCardinality[i])]);\n      }\n    }\n    \n    if (queryTimeSpan > 0) {\n      final long endTimestamp;\n      if (queryRandomTimeSpan) {\n        endTimestamp = startTimestamp + (timestampInterval * Utils.random().nextInt(queryTimeSpan / timestampInterval));\n   
   } else {\n        endTimestamp = startTimestamp + queryTimeSpan;\n      }\n      buf.append(deleteDelimiter)\n         .append(timestampKey + tagPairDelimiter + startTimestamp + queryTimeSpanDelimiter + endTimestamp);\n    } else {\n      buf.append(deleteDelimiter)\n         .append(timestampKey + tagPairDelimiter + startTimestamp);  \n    }\n    \n    db.delete(table, buf.toString());\n  }\n  \n  /**\n   * Parses the values returned by a read or scan operation and determines whether\n   * or not the integer value matches the hash computed from the original data point.\n   * Only works for raw data points, will not work for group-by's or downsampled data.\n   * @param key The time series key.\n   * @param cells The cells read by the DB.\n   * @return {@link Status#OK} if the data matched or {@link Status#UNEXPECTED_STATE} if\n   * the data did not match.\n   */\n  protected Status verifyRow(final String key, final Map<String, ByteIterator> cells) {\n    Status verifyStatus = Status.UNEXPECTED_STATE;\n    long startTime = System.nanoTime();\n\n    double value = 0;\n    long timestamp = 0;\n    final TreeMap<String, String> validationTags = new TreeMap<String, String>();\n    for (final Entry<String, ByteIterator> entry : cells.entrySet()) {\n      if (entry.getKey().equals(timestampKey)) {\n        final NumericByteIterator it = (NumericByteIterator) entry.getValue();\n        timestamp = it.getLong();\n      } else if (entry.getKey().equals(valueKey)) {\n        final NumericByteIterator it = (NumericByteIterator) entry.getValue();\n        value = it.isFloatingPoint() ? 
it.getDouble() : it.getLong();\n      } else {\n        validationTags.put(entry.getKey(), entry.getValue().toString());\n      }\n    }\n\n    if (validationFunction(key, timestamp, validationTags) == value) {\n      verifyStatus = Status.OK;\n    }\n    long endTime = System.nanoTime();\n    measurements.measure(\"VERIFY\", (int) ((endTime - startTime) / 1000));\n    measurements.reportStatus(\"VERIFY\", verifyStatus);\n    return verifyStatus;\n  }\n  \n  /**\n   * Function used for generating a deterministic hash based on the combination\n   * of metric, tags and timestamp.\n   * @param key A non-null string representing the key.\n   * @param timestamp A timestamp in the proper units for the workload.\n   * @param tags A non-null map of tag keys and values NOT including the YCSB\n   * key or timestamp.\n   * @return A hash value as an 8 byte integer.\n   */\n  protected long validationFunction(final String key, final long timestamp, \n                                    final TreeMap<String, String> tags) {\n    final StringBuilder validationBuffer = new StringBuilder(keys[0].length() + \n        (tagPairs * tagKeys[0].length()) + (tagPairs * tagCardinality[1]));\n    for (final Entry<String, String> pair : tags.entrySet()) {\n      validationBuffer.append(pair.getKey()).append(pair.getValue());\n    }\n    return (long) validationBuffer.toString().hashCode() ^ timestamp;\n  }\n  \n  /**\n   * Breaks out the keys, tags and cardinality initialization in another method\n   * to keep CheckStyle happy.\n   * @throws WorkloadException If something goes pear shaped.\n   */\n  protected void initKeysAndTags() throws WorkloadException {\n    final int keyLength = Integer.parseInt(properties.getProperty(\n        CoreWorkload.FIELD_LENGTH_PROPERTY, \n        CoreWorkload.FIELD_LENGTH_PROPERTY_DEFAULT));\n    final int tagKeyLength = Integer.parseInt(properties.getProperty(\n        TAG_KEY_LENGTH_PROPERTY, TAG_KEY_LENGTH_PROPERTY_DEFAULT));\n    final int tagValueLength = 
Integer.parseInt(properties.getProperty(\n        TAG_VALUE_LENGTH_PROPERTY, TAG_VALUE_LENGTH_PROPERTY_DEFAULT));\n    \n    keyGenerator = new IncrementingPrintableStringGenerator(keyLength);\n    tagKeyGenerator = new IncrementingPrintableStringGenerator(tagKeyLength);\n    tagValueGenerator = new IncrementingPrintableStringGenerator(tagValueLength);\n    \n    final int threads = Integer.parseInt(properties.getProperty(Client.THREAD_COUNT_PROPERTY, \"1\"));\n    final String tagCardinalityString = properties.getProperty(\n        TAG_CARDINALITY_PROPERTY, \n        TAG_CARDINALITY_PROPERTY_DEFAULT);\n    final String[] tagCardinalityParts = tagCardinalityString.split(\",\");\n    int idx = 0;\n    totalCardinality = numKeys;\n    perKeyCardinality = 1;\n    int maxCardinality = 0;\n    for (final String card : tagCardinalityParts) {\n      try {\n        tagCardinality[idx] = Integer.parseInt(card.trim());\n      } catch (NumberFormatException nfe) {\n        throw new WorkloadException(\"Unable to parse cardinality: \" + \n            card, nfe);\n      }\n      if (tagCardinality[idx] < 1) {\n        throw new WorkloadException(\"Cardinality must be greater than zero: \" + \n            tagCardinality[idx]);\n      }\n      totalCardinality *= tagCardinality[idx];\n      perKeyCardinality *= tagCardinality[idx];\n      if (tagCardinality[idx] > maxCardinality) {\n        maxCardinality = tagCardinality[idx];\n      }\n      ++idx;\n      if (idx >= tagPairs) {\n        // we have more cardinalities than tag keys so bail at this point.\n        break;\n      }\n    }\n    if (numKeys < threads) {\n      throw new WorkloadException(\"Field count \" + numKeys + \" (keys for time \"\n          + \"series workloads) must be greater than or equal to the number of \"\n          + \"threads \" + threads);\n    }\n    \n    // fill any remaining tags without an explicit cardinality with 1\n    while (idx < tagPairs) {\n      tagCardinality[idx++] = 1;\n    }\n    \n    for (int i = 0; i < 
tagCardinality.length; ++i) {\n      if (tagCardinality[i] > 1) {\n        firstIncrementableCardinality = i;\n        break;\n      }\n    }\n    \n    keys = new String[numKeys];\n    tagKeys = new String[tagPairs];\n    tagValues = new String[maxCardinality];\n    for (int i = 0; i < numKeys; ++i) {\n      keys[i] = keyGenerator.nextString();\n    }\n\n    for (int i = 0; i < tagPairs; ++i) {\n      tagKeys[i] = tagKeyGenerator.nextString();\n    }\n    \n    for (int i = 0; i < maxCardinality; i++) {\n      tagValues[i] = tagValueGenerator.nextString();\n    }\n    if (randomizeTimeseriesOrder) {\n      Utils.shuffleArray(keys);\n      Utils.shuffleArray(tagValues);\n    }\n    \n    maxOffsets = (recordcount / totalCardinality) + 1;\n    final int[] keyAndTagCardinality = new int[tagPairs + 1];\n    keyAndTagCardinality[0] = numKeys;\n    for (int i = 0; i < tagPairs; i++) {\n      keyAndTagCardinality[i + 1] = tagCardinality[i];\n    }\n    \n    cumulativeCardinality = new int[keyAndTagCardinality.length];\n    for (int i = 0; i < keyAndTagCardinality.length; i++) {\n      int cumulation = 1;\n      for (int x = i; x <= keyAndTagCardinality.length - 1; x++) {\n        cumulation *= keyAndTagCardinality[x];\n      }\n      if (i > 0) {\n        cumulativeCardinality[i - 1] = cumulation;\n      }\n    }\n    cumulativeCardinality[cumulativeCardinality.length - 1] = 1;\n  }\n  \n  /**\n   * Makes sure the settings as given are compatible.\n   * @throws WorkloadException If one or more settings were invalid.\n   */\n  protected void validateSettings() throws WorkloadException {\n    if (dataintegrity) {\n      if (valueType != ValueType.INTEGERS) {\n        throw new WorkloadException(\"Data integrity was enabled. 'valuetype' must \"\n            + \"be set to 'integers'.\");\n      }\n      if (groupBy) {\n        throw new WorkloadException(\"Data integrity was enabled. 
'groupbyfunction' must \"\n            + \"be empty or null.\");\n      }\n      if (downsample) {\n        throw new WorkloadException(\"Data integrity was enabled. 'downsamplingfunction' must \"\n            + \"be empty or null.\");\n      }\n      if (queryTimeSpan > 0) {\n        throw new WorkloadException(\"Data integrity was enabled. 'querytimespan' must \"\n            + \"be empty or 0.\");\n      }\n      if (randomizeTimeseriesOrder) {\n        throw new WorkloadException(\"Data integrity was enabled. 'randomizetimeseriesorder' must \"\n            + \"be false.\");\n      }\n      final String startTimestamp = properties.getProperty(CoreWorkload.INSERT_START_PROPERTY);\n      if (startTimestamp == null || startTimestamp.isEmpty()) {\n        throw new WorkloadException(\"Data integrity was enabled. 'insertstart' must \"\n            + \"be set to a Unix Epoch timestamp.\");\n      }\n    }\n  }\n  \n  /**\n   * Thread state class holding thread local generators and indices.\n   */\n  protected class ThreadState {\n    /** The timestamp generator for this thread. */\n    protected final UnixEpochTimestampGenerator timestampGenerator;\n    \n    /** An offset generator to select a random offset for queries. */\n    protected final NumberGenerator queryOffsetGenerator;\n    \n    /** The current write key index. */\n    protected int keyIdx;\n    \n    /** The starting fence for writing keys. */\n    protected int keyIdxStart;\n    \n    /** The ending fence for writing keys. */\n    protected int keyIdxEnd;\n    \n    /** Indices for each tag value for writes. */\n    protected int[] tagValueIdxs;\n\n    /** Whether or not all time series have written values for the current timestamp. */\n    protected boolean rollover;\n    \n    /** The starting timestamp. 
*/\n    protected long startTimestamp;\n    \n    /**\n     * Default ctor.\n     * @param threadID The zero based thread ID.\n     * @param threadCount The total number of threads.\n     * @throws WorkloadException If something went pear shaped.\n     */\n    protected ThreadState(final int threadID, final int threadCount) throws WorkloadException {\n      int totalThreads = threadCount > 0 ? threadCount : 1;\n      \n      if (threadID >= totalThreads) {\n        throw new IllegalStateException(\"Thread ID \" + threadID + \" cannot be greater \"\n            + \"than or equal to the thread count \" + totalThreads);\n      }\n      if (keys.length < threadCount) {\n        throw new WorkloadException(\"Key count \" + keys.length + \" must be greater \"\n            + \"than or equal to thread count \" + totalThreads);\n      }\n      \n      int keysPerThread = keys.length / totalThreads;\n      keyIdx = keysPerThread * threadID;\n      keyIdxStart = keyIdx;\n      if (totalThreads - 1 == threadID) {\n        keyIdxEnd = keys.length;\n      } else {\n        keyIdxEnd = keyIdxStart + keysPerThread;\n      }\n      \n      tagValueIdxs = new int[tagPairs]; // all zeros\n      \n      final String startingTimestamp = \n          properties.getProperty(CoreWorkload.INSERT_START_PROPERTY);\n      if (startingTimestamp == null || startingTimestamp.isEmpty()) {\n        timestampGenerator = randomizeTimestampOrder ? \n            new RandomDiscreteTimestampGenerator(timestampInterval, timeUnits, maxOffsets) :\n            new UnixEpochTimestampGenerator(timestampInterval, timeUnits);\n      } else {\n        try {\n          timestampGenerator = randomizeTimestampOrder ? 
\n              new RandomDiscreteTimestampGenerator(timestampInterval, timeUnits, \n                  Long.parseLong(startingTimestamp), maxOffsets) :\n              new UnixEpochTimestampGenerator(timestampInterval, timeUnits, \n                  Long.parseLong(startingTimestamp));\n        } catch (NumberFormatException nfe) {\n          throw new WorkloadException(\"Unable to parse the \" + \n              CoreWorkload.INSERT_START_PROPERTY, nfe);\n        }\n      }\n      // Set the last value properly for the timestamp, otherwise it may start \n      // one interval ago.\n      startTimestamp = timestampGenerator.nextValue();\n      // TODO - pick it\n      queryOffsetGenerator = new UniformLongGenerator(0, maxOffsets - 2);\n    }\n    \n    /**\n     * Generates the next write value for thread.\n     * @param map An initialized map to populate with tag keys and values as well\n     * as the timestamp and actual value.\n     * @param isInsert Whether or not it's an insert or an update. Updates will pick\n     * an older timestamp (if random isn't enabled).\n     * @return The next key to write.\n     */\n    protected String nextDataPoint(final Map<String, ByteIterator> map, final boolean isInsert) {\n      int iterations = sparsity <= 0 ? 
1 : \n          Utils.random().nextInt((int) ((double) perKeyCardinality * sparsity));\n      if (iterations < 1) {\n        iterations = 1;\n      }\n      while (true) {\n        iterations--;\n        if (rollover) {\n          timestampGenerator.nextValue();\n          rollover = false;\n        }\n        String key = null;\n        if (iterations <= 0) {\n          final TreeMap<String, String> validationTags;\n          if (dataintegrity) {\n            validationTags = new TreeMap<String, String>();\n          } else {\n            validationTags = null;\n          }\n          key = keys[keyIdx];\n          int overallIdx = keyIdx * cumulativeCardinality[0];\n          for (int i = 0; i < tagPairs; ++i) {\n            int tvidx = tagValueIdxs[i];\n            map.put(tagKeys[i], new StringByteIterator(tagValues[tvidx]));\n            if (dataintegrity) {\n              validationTags.put(tagKeys[i], tagValues[tvidx]);\n            }\n            if (delayedSeries > 0) {\n              overallIdx += (tvidx * cumulativeCardinality[i + 1]);\n            }\n          }\n          \n          if (!isInsert) {\n            final long delta = (timestampGenerator.currentValue() - startTimestamp) / timestampInterval;\n            final int intervals = Utils.random().nextInt((int) delta);\n            map.put(timestampKey, new NumericByteIterator(startTimestamp + (intervals * timestampInterval)));\n          } else if (delayedSeries > 0) {\n            // See if the series falls in a delay bucket and calculate an offset earlier\n            // than the current timestamp value if so.\n            double pct = (double) overallIdx / (double) totalCardinality;\n            if (pct < delayedSeries) {\n              int modulo = overallIdx % delayedIntervals;\n              if (modulo < 0) {\n                modulo *= -1;\n              }\n              map.put(timestampKey, new NumericByteIterator(timestampGenerator.currentValue() - \n                  timestampInterval 
* modulo));\n            } else {\n              map.put(timestampKey, new NumericByteIterator(timestampGenerator.currentValue()));\n            }\n          } else {\n            map.put(timestampKey, new NumericByteIterator(timestampGenerator.currentValue()));\n          }\n          \n          if (dataintegrity) {\n            map.put(valueKey, new NumericByteIterator(validationFunction(key, \n                timestampGenerator.currentValue(), validationTags)));\n          } else {\n            switch (valueType) {\n            case INTEGERS:\n              map.put(valueKey, new NumericByteIterator(Utils.random().nextInt()));\n              break;\n            case FLOATS:\n              map.put(valueKey, new NumericByteIterator(\n                  Utils.random().nextDouble() * (double) 100000));\n              break;\n            case MIXED:\n              if (Utils.random().nextBoolean()) {\n                map.put(valueKey, new NumericByteIterator(Utils.random().nextInt()));\n              } else {\n                map.put(valueKey, new NumericByteIterator(\n                    Utils.random().nextDouble() * (double) 100000));\n              }\n              break;\n            default:\n              throw new IllegalStateException(\"Somehow we didn't have a value \"\n                  + \"type configured that we support: \" + valueType);\n            }        \n          }\n        }\n        \n        boolean tagRollover = false;\n        for (int i = tagCardinality.length - 1; i >= 0; --i) {\n          if (tagCardinality[i] <= 1) {\n            // nothing to increment here\n            continue;\n          }\n          \n          if (tagValueIdxs[i] + 1 >= tagCardinality[i]) {\n            tagValueIdxs[i] = 0;\n            if (i == firstIncrementableCardinality) {\n              tagRollover = true;\n            }\n          } else {\n            ++tagValueIdxs[i];\n            break;\n          }\n        }\n        \n        if (tagRollover) {\n         
 if (keyIdx + 1 >= keyIdxEnd) {\n            keyIdx = keyIdxStart;\n            rollover = true;\n          } else {\n            ++keyIdx;\n          }\n        }\n        \n        if (iterations <= 0) {\n          return key;\n        }\n      }\n    }\n  }\n\n}"
  },
  {
    "path": "core/src/main/java/com/yahoo/ycsb/workloads/package-info.java",
    "content": "/*\n * Copyright (c) 2015 - 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB workloads.\n */\npackage com.yahoo.ycsb.workloads;\n\n"
  },
  {
    "path": "core/src/main/resources/project.properties",
    "content": "version=${project.version}\n"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/TestByteIterator.java",
    "content": "/**\n * Copyright (c) 2012 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport org.testng.annotations.Test;\nimport static org.testng.AssertJUnit.*;\n\npublic class TestByteIterator {\n  @Test\n  public void testRandomByteIterator() {\n    int size = 100;\n    ByteIterator itor = new RandomByteIterator(size);\n    assertTrue(itor.hasNext());\n    assertEquals(size, itor.bytesLeft());\n    assertEquals(size, itor.toString().getBytes().length);\n    assertFalse(itor.hasNext());\n    assertEquals(0, itor.bytesLeft());\n\n    itor = new RandomByteIterator(size);\n    assertEquals(size, itor.toArray().length);\n    assertFalse(itor.hasNext());\n    assertEquals(0, itor.bytesLeft());\n  }\n}\n"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/TestNumericByteIterator.java",
    "content": "/**\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb;\n\nimport org.testng.annotations.Test;\nimport static org.testng.AssertJUnit.*;\n\npublic class TestNumericByteIterator {\n\n  @Test\n  public void testLong() throws Exception {\n    NumericByteIterator it = new NumericByteIterator(42L);\n    assertFalse(it.isFloatingPoint());\n    assertEquals(42L, it.getLong());\n    \n    try {\n      it.getDouble();\n      fail(\"Expected IllegalStateException.\");\n    } catch (IllegalStateException e) { }\n    try {\n      it.next();\n      fail(\"Expected UnsupportedOperationException.\");\n    } catch (UnsupportedOperationException e) { }\n    \n    assertEquals(8, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(7, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(6, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(5, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(4, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(3, it.bytesLeft());\n    assertTrue(it.hasNext());\n    
assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(2, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(1, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 42, (byte) it.nextByte());\n    assertEquals(0, it.bytesLeft());\n    assertFalse(it.hasNext());\n    \n    it.reset();\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n  }\n  \n  @Test\n  public void testDouble() throws Exception {\n    NumericByteIterator it = new NumericByteIterator(42.75);\n    assertTrue(it.isFloatingPoint());\n    assertEquals(42.75, it.getDouble(), 0.001);\n    \n    try {\n      it.getLong();\n      fail(\"Expected IllegalStateException.\");\n    } catch (IllegalStateException e) { }\n    try {\n      it.next();\n      fail(\"Expected UnsupportedOperationException.\");\n    } catch (UnsupportedOperationException e) { }\n    \n    assertEquals(8, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 64, (byte) it.nextByte());\n    assertEquals(7, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 69, (byte) it.nextByte());\n    assertEquals(6, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 96, (byte) it.nextByte());\n    assertEquals(5, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(4, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(3, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(2, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(1, it.bytesLeft());\n    assertTrue(it.hasNext());\n    assertEquals((byte) 0, (byte) it.nextByte());\n    assertEquals(0, it.bytesLeft());\n    assertFalse(it.hasNext());\n    \n    
it.reset();\n    assertTrue(it.hasNext());\n    assertEquals((byte) 64, (byte) it.nextByte());\n  }\n}\n"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/TestStatus.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb;\n\nimport org.testng.annotations.Test;\n\nimport static org.testng.Assert.assertFalse;\nimport static org.testng.Assert.assertTrue;\n\n/**\n * Test class for {@link Status}.\n */\npublic class TestStatus {\n\n  @Test\n  public void testAcceptableStatus() {\n    assertTrue(Status.OK.isOk());\n    assertTrue(Status.BATCHED_OK.isOk());\n    assertFalse(Status.BAD_REQUEST.isOk());\n    assertFalse(Status.ERROR.isOk());\n    assertFalse(Status.FORBIDDEN.isOk());\n    assertFalse(Status.NOT_FOUND.isOk());\n    assertFalse(Status.NOT_IMPLEMENTED.isOk());\n    assertFalse(Status.SERVICE_UNAVAILABLE.isOk());\n    assertFalse(Status.UNEXPECTED_STATE.isOk());\n  }\n}\n"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/TestUtils.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n * \n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb;\n\nimport static org.testng.Assert.assertEquals;\nimport static org.testng.Assert.assertTrue;\n\nimport java.util.Arrays;\n\nimport org.testng.annotations.Test;\n\npublic class TestUtils {\n\n  @Test\n  public void bytesToFromLong() throws Exception {\n    byte[] bytes = new byte[8];\n    assertEquals(Utils.bytesToLong(bytes), 0L);\n    assertArrayEquals(Utils.longToBytes(0), bytes);\n    \n    bytes[7] = 1;\n    assertEquals(Utils.bytesToLong(bytes), 1L);\n    assertArrayEquals(Utils.longToBytes(1L), bytes);\n    \n    bytes = new byte[] { 127, -1, -1, -1, -1, -1, -1, -1 };\n    assertEquals(Utils.bytesToLong(bytes), Long.MAX_VALUE);\n    assertArrayEquals(Utils.longToBytes(Long.MAX_VALUE), bytes);\n    \n    bytes = new byte[] { -128, 0, 0, 0, 0, 0, 0, 0 };\n    assertEquals(Utils.bytesToLong(bytes), Long.MIN_VALUE);\n    assertArrayEquals(Utils.longToBytes(Long.MIN_VALUE), bytes);\n    \n    bytes = new byte[] { (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, \n        (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF  };\n    assertEquals(Utils.bytesToLong(bytes), -1L);\n    assertArrayEquals(Utils.longToBytes(-1L), bytes);\n    \n    // if the array is too long we just skip the remainder\n    bytes = new byte[] { 0, 0, 0, 0, 0, 0, 0, 1, 42, 42, 42 
};\n    assertEquals(Utils.bytesToLong(bytes), 1L);\n  }\n  \n  @Test\n  public void bytesToFromDouble() throws Exception {\n    byte[] bytes = new byte[8];\n    assertEquals(Utils.bytesToDouble(bytes), 0, 0.0001);\n    assertArrayEquals(Utils.doubleToBytes(0), bytes);\n    \n    bytes = new byte[] { 63, -16, 0, 0, 0, 0, 0, 0 };\n    assertEquals(Utils.bytesToDouble(bytes), 1, 0.0001);\n    assertArrayEquals(Utils.doubleToBytes(1), bytes);\n    \n    bytes = new byte[] { -65, -16, 0, 0, 0, 0, 0, 0 };\n    assertEquals(Utils.bytesToDouble(bytes), -1, 0.0001);\n    assertArrayEquals(Utils.doubleToBytes(-1), bytes);\n    \n    bytes = new byte[] { 127, -17, -1, -1, -1, -1, -1, -1 };\n    assertEquals(Utils.bytesToDouble(bytes), Double.MAX_VALUE, 0.0001);\n    assertArrayEquals(Utils.doubleToBytes(Double.MAX_VALUE), bytes);\n    \n    bytes = new byte[] { 0, 0, 0, 0, 0, 0, 0, 1 };\n    assertEquals(Utils.bytesToDouble(bytes), Double.MIN_VALUE, 0.0001);\n    assertArrayEquals(Utils.doubleToBytes(Double.MIN_VALUE), bytes);\n    \n    bytes = new byte[] { 127, -8, 0, 0, 0, 0, 0, 0 };\n    assertTrue(Double.isNaN(Utils.bytesToDouble(bytes)));\n    assertArrayEquals(Utils.doubleToBytes(Double.NaN), bytes);\n    \n    bytes = new byte[] { 63, -16, 0, 0, 0, 0, 0, 0, 42, 42, 42 };\n    assertEquals(Utils.bytesToDouble(bytes), 1, 0.0001);\n  }\n  \n  @Test (expectedExceptions = NullPointerException.class)\n  public void bytesToLongNull() throws Exception {\n    Utils.bytesToLong(null);\n  }\n  \n  @Test (expectedExceptions = IndexOutOfBoundsException.class)\n  public void bytesToLongTooShort() throws Exception {\n    Utils.bytesToLong(new byte[] { 0, 0, 0, 0, 0, 0, 0 });\n  }\n  \n  @Test (expectedExceptions = IllegalArgumentException.class)\n  public void bytesToDoubleTooShort() throws Exception {\n    Utils.bytesToDouble(new byte[] { 0, 0, 0, 0, 0, 0, 0 });\n  }\n  \n  @Test\n  public void jvmUtils() throws Exception {\n    // This should ALWAYS return at least one thread.\n  
  assertTrue(Utils.getActiveThreadCount() > 0);\n    // This should always be greater than 0 or something is goofed up in the JVM.\n    assertTrue(Utils.getUsedMemoryBytes() > 0);\n    // Some operating systems may not implement this so we don't have a good\n    // test. Just make sure it doesn't throw an exception.\n    Utils.getSystemLoadAverage();\n    // This will probably be zero but should never be negative.\n    assertTrue(Utils.getGCTotalCollectionCount() >= 0);\n    // Could be zero similar to GC total collection count\n    assertTrue(Utils.getGCTotalTime() >= 0);\n    // Could be empty\n    assertTrue(Utils.getGCStatst().size() >= 0);\n  }\n   \n  /**\n   * Since this version of TestNG doesn't appear to have an assertArrayEquals,\n   * this will compare the two to make sure they're the same. \n   * @param actual Actual array to validate\n   * @param expected What the array should contain\n   * @throws AssertionError if the test fails.\n   */\n  public void assertArrayEquals(final byte[] actual, final byte[] expected) {\n    if (actual == null && expected != null) {\n      throw new AssertionError(\"Expected \" + Arrays.toString(expected) + \n          \" but found [null]\");\n    }\n    if (actual != null && expected == null) {\n      throw new AssertionError(\"Expected [null] but found \" + \n          Arrays.toString(actual));\n    }\n    if (actual.length != expected.length) {\n      throw new AssertionError(\"Expected length \" + expected.length + \n          \" but found \" + actual.length);\n    }\n    for (int i = 0; i < expected.length; i++) {\n      if (actual[i] != expected[i]) {\n        throw new AssertionError(\"Expected byte [\" + expected[i] + \n            \"] at index \" + i + \" but found [\" + actual[i] + \"]\");\n      }\n    }\n  }\n}"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/generator/AcknowledgedCounterGeneratorTest.java",
    "content": "/**\n * Copyright (c) 2015-2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.generator;\n\nimport java.util.Random;\nimport java.util.concurrent.ArrayBlockingQueue;\nimport java.util.concurrent.BlockingQueue;\n\nimport org.testng.annotations.Test;\n\n/**\n * Tests for the AcknowledgedCounterGenerator class.\n */\npublic class AcknowledgedCounterGeneratorTest {\n\n  /**\n   * Test that advancing past {@link Integer#MAX_VALUE} works.\n   */\n  @Test\n  public void testIncrementPastIntegerMaxValue() {\n    final long toTry = AcknowledgedCounterGenerator.WINDOW_SIZE * 3;\n\n    AcknowledgedCounterGenerator generator =\n        new AcknowledgedCounterGenerator(Integer.MAX_VALUE - 1000);\n\n    Random rand = new Random(System.currentTimeMillis());\n    BlockingQueue<Long> pending = new ArrayBlockingQueue<Long>(1000);\n    for (long i = 0; i < toTry; ++i) {\n      long value = generator.nextValue();\n\n      while (!pending.offer(value)) {\n\n        Long first = pending.poll();\n\n        // Don't always advance by one.\n        if (rand.nextBoolean()) {\n          generator.acknowledge(first);\n        } else {\n          Long second = pending.poll();\n          pending.add(first);\n          generator.acknowledge(second);\n        }\n      }\n    }\n\n  }\n}\n"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/generator/TestIncrementingPrintableStringGenerator.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n * \n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.generator;\n\nimport static org.testng.Assert.assertEquals;\nimport static org.testng.Assert.assertNull;\nimport static org.testng.Assert.fail;\n\nimport java.util.NoSuchElementException;\n\nimport org.testng.annotations.Test;\n\npublic class TestIncrementingPrintableStringGenerator {\n  private final static int[] ATOC = new int[] { 65, 66, 67 };\n  \n  @Test\n  public void rolloverOK() throws Exception {\n    final IncrementingPrintableStringGenerator gen = \n        new IncrementingPrintableStringGenerator(2, ATOC);\n    \n    assertNull(gen.lastValue());\n    assertEquals(gen.nextValue(), \"AA\");\n    assertEquals(gen.lastValue(), \"AA\");\n    assertEquals(gen.nextValue(), \"AB\");\n    assertEquals(gen.lastValue(), \"AB\");\n    assertEquals(gen.nextValue(), \"AC\");\n    assertEquals(gen.lastValue(), \"AC\");\n    assertEquals(gen.nextValue(), \"BA\");\n    assertEquals(gen.lastValue(), \"BA\");\n    assertEquals(gen.nextValue(), \"BB\");\n    assertEquals(gen.lastValue(), \"BB\");\n    assertEquals(gen.nextValue(), \"BC\");\n    assertEquals(gen.lastValue(), \"BC\");\n    assertEquals(gen.nextValue(), \"CA\");\n    assertEquals(gen.lastValue(), \"CA\");\n    assertEquals(gen.nextValue(), \"CB\");\n    assertEquals(gen.lastValue(), \"CB\");\n    
assertEquals(gen.nextValue(), \"CC\");\n    assertEquals(gen.lastValue(), \"CC\");\n    assertEquals(gen.nextValue(), \"AA\"); // <-- rollover\n    assertEquals(gen.lastValue(), \"AA\");\n  }\n  \n  @Test\n  public void rolloverOneCharacterOK() throws Exception {\n    // It would be silly to create a generator with one character.\n    final IncrementingPrintableStringGenerator gen = \n        new IncrementingPrintableStringGenerator(2, new int[] { 65 });\n    for (int i = 0; i < 5; i++) {\n      assertEquals(gen.nextValue(), \"AA\");\n    }\n  }\n  \n  @Test\n  public void rolloverException() throws Exception {\n    final IncrementingPrintableStringGenerator gen = \n        new IncrementingPrintableStringGenerator(2, ATOC);\n    gen.setThrowExceptionOnRollover(true);\n    \n    int i = 0;\n    try {\n      while(i < 11) {\n        ++i;\n        gen.nextValue();\n      }\n      fail(\"Expected NoSuchElementException\");\n    } catch (NoSuchElementException e) {\n      assertEquals(i, 10);\n    }\n  }\n  \n  @Test\n  public void rolloverOneCharacterException() throws Exception {\n    // It would be silly to create a generator with one character.\n    final IncrementingPrintableStringGenerator gen = \n        new IncrementingPrintableStringGenerator(2, new int[] { 65 });\n    gen.setThrowExceptionOnRollover(true);\n    \n    int i = 0;\n    try {\n      while(i < 3) {\n        ++i;\n        gen.nextValue();\n      }\n      fail(\"Expected NoSuchElementException\");\n    } catch (NoSuchElementException e) {\n      assertEquals(i, 2);\n    }\n  }\n  \n  @Test\n  public void invalidLengths() throws Exception {\n    try {\n      new IncrementingPrintableStringGenerator(0, ATOC);\n      fail(\"Expected IllegalArgumentException\");\n    } catch (IllegalArgumentException e) { }\n    \n    try {\n      new IncrementingPrintableStringGenerator(-42, ATOC);\n      fail(\"Expected IllegalArgumentException\");\n    } catch (IllegalArgumentException e) { }\n  }\n  \n  @Test\n  
public void invalidCharacterSets() throws Exception {\n    try {\n      new IncrementingPrintableStringGenerator(2, null);\n      fail(\"Expected IllegalArgumentException\");\n    } catch (IllegalArgumentException e) { }\n    \n    try {\n      new IncrementingPrintableStringGenerator(2, new int[] {});\n      fail(\"Expected IllegalArgumentException\");\n    } catch (IllegalArgumentException e) { }\n  }\n}\n"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/generator/TestRandomDiscreteTimestampGenerator.java",
    "content": "/**\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.generator;\n\nimport static org.testng.Assert.assertEquals;\nimport static org.testng.Assert.fail;\n\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.concurrent.TimeUnit;\n\nimport org.testng.annotations.Test;\nimport org.testng.collections.Lists;\n\npublic class TestRandomDiscreteTimestampGenerator {\n\n  @Test\n  public void systemTime() throws Exception {\n    final RandomDiscreteTimestampGenerator generator = \n        new RandomDiscreteTimestampGenerator(60, TimeUnit.SECONDS, 60);\n    List<Long> generated = Lists.newArrayList();\n    for (int i = 0; i < 60; i++) {\n      generated.add(generator.nextValue());\n    }\n    assertEquals(generated.size(), 60);\n    try {\n      generator.nextValue();\n      fail(\"Expected IllegalStateException\");\n    } catch (IllegalStateException e) { }\n  }\n  \n  @Test\n  public void withStartTime() throws Exception {\n    final RandomDiscreteTimestampGenerator generator = \n        new RandomDiscreteTimestampGenerator(60, TimeUnit.SECONDS, 1072915200L, 60);\n    List<Long> generated = Lists.newArrayList();\n    for (int i = 0; i < 60; i++) {\n      generated.add(generator.nextValue());\n    }\n    assertEquals(generated.size(), 60);\n    Collections.sort(generated);\n    long ts = 
1072915200L - 60; // starts 1 interval in the past\n    for (final long t : generated) {\n      assertEquals(t, ts);\n      ts += 60;\n    }\n    try {\n      generator.nextValue();\n      fail(\"Expected IllegalStateException\");\n    } catch (IllegalStateException e) { }\n  }\n  \n  @Test (expectedExceptions = IllegalArgumentException.class)\n  public void tooLarge() throws Exception {\n    new RandomDiscreteTimestampGenerator(60, TimeUnit.SECONDS, \n        RandomDiscreteTimestampGenerator.MAX_INTERVALS + 1);\n  }\n  \n  //TODO - With PowerMockito we could UT the initializeTimestamp(long) call.\n  // Otherwise it would involve creating more functions and that would get ugly.\n}\n"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/generator/TestUnixEpochTimestampGenerator.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.generator;\n\nimport static org.testng.Assert.assertEquals;\n\nimport java.util.concurrent.TimeUnit;\nimport org.testng.annotations.Test;\n\npublic class TestUnixEpochTimestampGenerator {\n\n  @Test\n  public void defaultCtor() throws Exception {\n    final UnixEpochTimestampGenerator generator = \n        new UnixEpochTimestampGenerator();\n    final long startTime = generator.currentValue();\n    assertEquals((long) generator.nextValue(), startTime + 60);\n    assertEquals((long) generator.lastValue(), startTime);\n    assertEquals((long) generator.nextValue(), startTime + 120);\n    assertEquals((long) generator.lastValue(), startTime + 60);\n    assertEquals((long) generator.nextValue(), startTime + 180);\n  }\n  \n  @Test\n  public void ctorWithIntervalAndUnits() throws Exception {\n    final UnixEpochTimestampGenerator generator = \n        new UnixEpochTimestampGenerator(120, TimeUnit.SECONDS);\n    final long startTime = generator.currentValue();\n    assertEquals((long) generator.nextValue(), startTime + 120);\n    assertEquals((long) generator.lastValue(), startTime);\n    assertEquals((long) generator.nextValue(), startTime + 240);\n    assertEquals((long) generator.lastValue(), startTime + 120);\n  
}\n  \n  @Test\n  public void ctorWithIntervalAndUnitsAndStart() throws Exception {\n    final UnixEpochTimestampGenerator generator = \n        new UnixEpochTimestampGenerator(120, TimeUnit.SECONDS, 1072915200L);\n    assertEquals((long) generator.nextValue(), 1072915200L);\n    assertEquals((long) generator.lastValue(), 1072915200L - 120);\n    assertEquals((long) generator.nextValue(), 1072915200L + 120);\n    assertEquals((long) generator.lastValue(), 1072915200L);\n  }\n  \n  @Test\n  public void variousIntervalsAndUnits() throws Exception {\n    // negatives could happen, just start and roll back in time\n    UnixEpochTimestampGenerator generator = \n        new UnixEpochTimestampGenerator(-60, TimeUnit.SECONDS);\n    long startTime = generator.currentValue();\n    assertEquals((long) generator.nextValue(), startTime - 60);\n    assertEquals((long) generator.lastValue(), startTime);\n    assertEquals((long) generator.nextValue(), startTime - 120);\n    assertEquals((long) generator.lastValue(), startTime - 60);\n    \n    generator = new UnixEpochTimestampGenerator(100, TimeUnit.NANOSECONDS);\n    startTime = generator.currentValue();\n    assertEquals((long) generator.nextValue(), startTime + 100);\n    assertEquals((long) generator.lastValue(), startTime);\n    assertEquals((long) generator.nextValue(), startTime + 200);\n    assertEquals((long) generator.lastValue(), startTime + 100);\n    \n    generator = new UnixEpochTimestampGenerator(100, TimeUnit.MICROSECONDS);\n    startTime = generator.currentValue();\n    assertEquals((long) generator.nextValue(), startTime + 100);\n    assertEquals((long) generator.lastValue(), startTime);\n    assertEquals((long) generator.nextValue(), startTime + 200);\n    assertEquals((long) generator.lastValue(), startTime + 100);\n    \n    generator = new UnixEpochTimestampGenerator(100, TimeUnit.MILLISECONDS);\n    startTime = generator.currentValue();\n    assertEquals((long) generator.nextValue(), startTime + 100);\n    
assertEquals((long) generator.lastValue(), startTime);\n    assertEquals((long) generator.nextValue(), startTime + 200);\n    assertEquals((long) generator.lastValue(), startTime + 100);\n    \n    generator = new UnixEpochTimestampGenerator(100, TimeUnit.SECONDS);\n    startTime = generator.currentValue();\n    assertEquals((long) generator.nextValue(), startTime + 100);\n    assertEquals((long) generator.lastValue(), startTime);\n    assertEquals((long) generator.nextValue(), startTime + 200);\n    assertEquals((long) generator.lastValue(), startTime + 100);\n    \n    generator = new UnixEpochTimestampGenerator(1, TimeUnit.MINUTES);\n    startTime = generator.currentValue();\n    assertEquals((long) generator.nextValue(), startTime + (1 * 60));\n    assertEquals((long) generator.lastValue(), startTime);\n    assertEquals((long) generator.nextValue(), startTime + (2 * 60));\n    assertEquals((long) generator.lastValue(), startTime + (1 * 60));\n    \n    generator = new UnixEpochTimestampGenerator(1, TimeUnit.HOURS);\n    startTime = generator.currentValue();\n    assertEquals((long) generator.nextValue(), startTime + (1 * 60 * 60));\n    assertEquals((long) generator.lastValue(), startTime);\n    assertEquals((long) generator.nextValue(), startTime + (2 * 60 * 60));\n    assertEquals((long) generator.lastValue(), startTime + (1 * 60 * 60));\n    \n    generator = new UnixEpochTimestampGenerator(1, TimeUnit.DAYS);\n    startTime = generator.currentValue();\n    assertEquals((long) generator.nextValue(), startTime + (1 * 60 * 60 * 24));\n    assertEquals((long) generator.lastValue(), startTime);\n    assertEquals((long) generator.nextValue(), startTime + (2 * 60 * 60 * 24));\n    assertEquals((long) generator.lastValue(), startTime + (1 * 60 * 60 * 24));\n  }\n  \n  // TODO - With PowerMockito we could UT the initializeTimestamp(long) call.\n  // Otherwise it would involve creating more functions and that would get ugly.\n}\n"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/generator/TestZipfianGenerator.java",
    "content": "/**\n * Copyright (c) 2010 Yahoo! Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.generator;\n\nimport org.testng.annotations.Test;\n\nimport static org.testng.AssertJUnit.assertFalse;\n\n\npublic class TestZipfianGenerator {\n    @Test\n    public void testMinAndMaxParameter() {\n        long min = 5;\n        long max = 10;\n        ZipfianGenerator zipfian = new ZipfianGenerator(min, max);\n\n        for (int i = 0; i < 10000; i++) {\n            long rnd = zipfian.nextValue();\n            assertFalse(rnd < min);\n            assertFalse(rnd > max);\n        }\n\n    }\n}\n"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/measurements/exporter/TestMeasurementsExporter.java",
    "content": "/**\n * Copyright (c) 2015 Yahoo! Inc. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.measurements.exporter;\n\nimport com.yahoo.ycsb.generator.ZipfianGenerator;\nimport com.yahoo.ycsb.measurements.Measurements;\nimport org.codehaus.jackson.JsonNode;\nimport org.codehaus.jackson.map.ObjectMapper;\nimport org.testng.annotations.Test;\n\nimport java.io.ByteArrayOutputStream;\nimport java.io.IOException;\nimport java.util.Properties;\n\nimport static org.testng.AssertJUnit.assertEquals;\nimport static org.testng.AssertJUnit.assertTrue;\n\npublic class TestMeasurementsExporter {\n    @Test\n    public void testJSONArrayMeasurementsExporter() throws IOException {\n        Properties props = new Properties();\n        props.put(Measurements.MEASUREMENT_TYPE_PROPERTY, \"histogram\");\n        Measurements mm = new Measurements(props);\n        ByteArrayOutputStream out = new ByteArrayOutputStream();\n        JSONArrayMeasurementsExporter export = new JSONArrayMeasurementsExporter(out);\n\n        long min = 5000;\n        long max = 100000;\n        ZipfianGenerator zipfian = new ZipfianGenerator(min, max);\n        for (int i = 0; i < 1000; i++) {\n            int rnd = zipfian.nextValue().intValue();\n            mm.measure(\"UPDATE\", rnd);\n        }\n        mm.exportMeasurements(export);\n        export.close();\n\n        ObjectMapper mapper 
= new ObjectMapper();\n        JsonNode json = mapper.readTree(out.toString(\"UTF-8\"));\n        assertTrue(json.isArray());\n        assertEquals(json.get(0).get(\"measurement\").asText(), \"Operations\");\n        assertEquals(json.get(4).get(\"measurement\").asText(), \"MaxLatency(us)\");\n        assertEquals(json.get(11).get(\"measurement\").asText(), \"4\");\n    }\n}\n"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/workloads/TestCoreWorkload.java",
    "content": "/**                                                                                                                                                                                \n * Copyright (c) 2016 YCSB contributors. All rights reserved.                                                                                                                             \n *                                                                                                                                                                                 \n * Licensed under the Apache License, Version 2.0 (the \"License\"); you                                                                                                             \n * may not use this file except in compliance with the License. You                                                                                                                \n * may obtain a copy of the License at                                                                                                                                             \n *                                                                                                                                                                                 \n * http://www.apache.org/licenses/LICENSE-2.0                                                                                                                                      \n *                                                                                                                                                                                 \n * Unless required by applicable law or agreed to in writing, software                                                                                                             \n * distributed under the License is distributed on an \"AS IS\" BASIS,                                                                                               
                \n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or                                                                                                                 \n * implied. See the License for the specific language governing                                                                                                                    \n * permissions and limitations under the License. See accompanying                                                                                                                 \n * LICENSE file.                                                                                                                                                                   \n */\npackage com.yahoo.ycsb.workloads;\n\nimport static org.testng.Assert.assertTrue;\n\nimport java.util.Properties;\n\nimport org.testng.annotations.Test;\n\nimport com.yahoo.ycsb.generator.DiscreteGenerator;\n\npublic class TestCoreWorkload {\n\n  @Test\n  public void createOperationChooser() {\n    final Properties p = new Properties();\n    p.setProperty(CoreWorkload.READ_PROPORTION_PROPERTY, \"0.20\");\n    p.setProperty(CoreWorkload.UPDATE_PROPORTION_PROPERTY, \"0.20\");\n    p.setProperty(CoreWorkload.INSERT_PROPORTION_PROPERTY, \"0.20\");\n    p.setProperty(CoreWorkload.SCAN_PROPORTION_PROPERTY, \"0.20\");\n    p.setProperty(CoreWorkload.READMODIFYWRITE_PROPORTION_PROPERTY, \"0.20\");\n    final DiscreteGenerator generator = CoreWorkload.createOperationGenerator(p);\n    final int[] counts = new int[5];\n    \n    for (int i = 0; i < 100; ++i) {\n      switch (generator.nextString()) {\n      case \"READ\":\n        ++counts[0];\n        break;\n      case \"UPDATE\":\n        ++counts[1];\n        break;\n      case \"INSERT\": \n        ++counts[2];\n        break;\n      case \"SCAN\":\n        ++counts[3];\n        break;\n      default:\n        ++counts[4];\n      } \n    }\n    \n    for (int i : counts) {\n      // 
Doesn't do a wonderful job of equal distribution, but in a hundred draws, if we \n      // don't see at least one of each operation then the generator is really broken.\n      assertTrue(i > 1);\n    }\n  }\n  \n  @Test (expectedExceptions = IllegalArgumentException.class)\n  public void createOperationChooserNullProperties() {\n    CoreWorkload.createOperationGenerator(null);\n  }\n}"
  },
  {
    "path": "core/src/test/java/com/yahoo/ycsb/workloads/TestTimeSeriesWorkload.java",
    "content": "/**\n * Copyright (c) 2017 YCSB contributors All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.workloads;\n\nimport static org.testng.Assert.assertEquals;\nimport static org.testng.Assert.assertFalse;\nimport static org.testng.Assert.assertNotNull;\nimport static org.testng.Assert.assertTrue;\nimport static org.testng.Assert.fail;\n\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Map.Entry;\n\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.TreeMap;\nimport java.util.Vector;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.Client;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.NumericByteIterator;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport com.yahoo.ycsb.Utils;\nimport com.yahoo.ycsb.WorkloadException;\nimport com.yahoo.ycsb.measurements.Measurements;\n\nimport org.testng.annotations.Test;\n\npublic class TestTimeSeriesWorkload {\n  \n  @Test\n  public void twoThreads() throws Exception {\n    final Properties p = getUTProperties();\n    Measurements.setProperties(p);\n    \n    final TimeSeriesWorkload wl = new TimeSeriesWorkload();\n    wl.init(p);\n    Object threadState = wl.initThread(p, 0, 2);\n    \n    MockDB db = new MockDB();\n    for (int i = 0; i < 74; i++) {\n      
assertTrue(wl.doInsert(db, threadState));\n    }\n    \n    assertEquals(db.keys.size(), 74);\n    assertEquals(db.values.size(), 74);\n    long timestamp = 1451606400;\n    for (int i = 0; i < db.keys.size(); i++) {\n      assertEquals(db.keys.get(i), \"AAAA\");\n      assertEquals(db.values.get(i).get(\"AA\").toString(), \"AAAA\");\n      assertEquals(Utils.bytesToLong(db.values.get(i).get(\n          TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT).toArray()), timestamp);\n      assertNotNull(db.values.get(i).get(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT));\n      if (i % 2 == 0) {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAA\");\n      } else {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAB\");\n        timestamp += 60;\n      }\n    }\n    \n    threadState = wl.initThread(p, 1, 2);\n    db = new MockDB();\n    for (int i = 0; i < 74; i++) {\n      assertTrue(wl.doInsert(db, threadState));\n    }\n    \n    assertEquals(db.keys.size(), 74);\n    assertEquals(db.values.size(), 74);\n    timestamp = 1451606400;\n    for (int i = 0; i < db.keys.size(); i++) {\n      assertEquals(db.keys.get(i), \"AAAB\");\n      assertEquals(db.values.get(i).get(\"AA\").toString(), \"AAAA\");\n      assertEquals(Utils.bytesToLong(db.values.get(i).get(\n          TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT).toArray()), timestamp);\n      assertNotNull(db.values.get(i).get(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT));\n      if (i % 2 == 0) {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAA\");\n      } else {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAB\");\n        timestamp += 60;\n      }\n    }\n  }\n  \n  @Test (expectedExceptions = WorkloadException.class)\n  public void badTimeUnit() throws Exception {\n    final Properties p = new Properties();\n    p.put(TimeSeriesWorkload.TIMESTAMP_UNITS_PROPERTY, \"foobar\");\n    getWorkload(p, true);\n  }\n  \n  @Test 
(expectedExceptions = WorkloadException.class)\n  public void failedToInitWorkloadBeforeThreadInit() throws Exception {\n    final Properties p = getUTProperties();\n    final TimeSeriesWorkload wl = getWorkload(p, false);\n    //wl.init(p); // <-- we NEED this :(\n    final Object threadState = wl.initThread(p, 0, 2);\n    \n    final MockDB db = new MockDB();\n    wl.doInsert(db, threadState);\n  }\n  \n  @Test (expectedExceptions = IllegalStateException.class)\n  public void failedToInitThread() throws Exception {\n    final Properties p = getUTProperties();\n    final TimeSeriesWorkload wl = getWorkload(p, true);\n    \n    final MockDB db = new MockDB();\n    wl.doInsert(db, null);\n  }\n  \n  @Test\n  public void insertOneKeyTwoTagsLowCardinality() throws Exception {\n    final Properties p = getUTProperties();\n    p.put(CoreWorkload.FIELD_COUNT_PROPERTY, \"1\");\n    final TimeSeriesWorkload wl = getWorkload(p, true);\n    final Object threadState = wl.initThread(p, 0, 1);\n    \n    final MockDB db = new MockDB();\n    for (int i = 0; i < 74; i++) {\n      assertTrue(wl.doInsert(db, threadState));\n    }\n    \n    assertEquals(db.keys.size(), 74);\n    assertEquals(db.values.size(), 74);\n    long timestamp = 1451606400;\n    for (int i = 0; i < db.keys.size(); i++) {\n      assertEquals(db.keys.get(i), \"AAAA\");\n      assertEquals(db.values.get(i).get(\"AA\").toString(), \"AAAA\");\n      assertEquals(Utils.bytesToLong(db.values.get(i).get(\n          TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT).toArray()), timestamp);\n      assertTrue(((NumericByteIterator) db.values.get(i)\n          .get(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT)).isFloatingPoint());\n      if (i % 2 == 0) {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAA\");\n      } else {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAB\");\n        timestamp += 60;\n      }\n    }\n  }\n  \n  @Test\n  public void 
insertTwoKeysTwoTagsLowCardinality() throws Exception {\n    final Properties p = getUTProperties();\n    \n    final TimeSeriesWorkload wl = getWorkload(p, true);\n    final Object threadState = wl.initThread(p, 0, 1);\n    \n    final MockDB db = new MockDB();\n    for (int i = 0; i < 74; i++) {\n      assertTrue(wl.doInsert(db, threadState));\n    }\n    \n    assertEquals(db.keys.size(), 74);\n    assertEquals(db.values.size(), 74);\n    long timestamp = 1451606400;\n    int metricCtr = 0;\n    for (int i = 0; i < db.keys.size(); i++) {\n      assertEquals(db.values.get(i).get(\"AA\").toString(), \"AAAA\");\n      assertEquals(Utils.bytesToLong(db.values.get(i).get(\n          TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT).toArray()), timestamp);\n      assertTrue(((NumericByteIterator) db.values.get(i)\n          .get(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT)).isFloatingPoint());\n      if (i % 2 == 0) {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAA\");\n      } else {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAB\");\n      }\n      if (metricCtr++ > 1) {\n        assertEquals(db.keys.get(i), \"AAAB\");\n        if (metricCtr >= 4) {\n          metricCtr = 0;\n          timestamp += 60;\n        }\n      } else {\n        assertEquals(db.keys.get(i), \"AAAA\");\n      }\n    }\n  }\n  \n  @Test\n  public void insertTwoKeysTwoThreads() throws Exception {\n    final Properties p = getUTProperties();\n    \n    final TimeSeriesWorkload wl = getWorkload(p, true);\n    Object threadState = wl.initThread(p, 0, 2);\n    \n    MockDB db = new MockDB();\n    for (int i = 0; i < 74; i++) {\n      assertTrue(wl.doInsert(db, threadState));\n    }\n    \n    assertEquals(db.keys.size(), 74);\n    assertEquals(db.values.size(), 74);\n    long timestamp = 1451606400;\n    for (int i = 0; i < db.keys.size(); i++) {\n      assertEquals(db.keys.get(i), \"AAAA\"); // <-- key 1\n      
assertEquals(db.values.get(i).get(\"AA\").toString(), \"AAAA\");\n      assertEquals(Utils.bytesToLong(db.values.get(i).get(\n          TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT).toArray()), timestamp);\n      assertTrue(((NumericByteIterator) db.values.get(i)\n          .get(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT)).isFloatingPoint());\n      if (i % 2 == 0) {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAA\");\n      } else {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAB\");\n        timestamp += 60;\n      }\n    }\n    \n    threadState = wl.initThread(p, 1, 2);\n    db = new MockDB();\n    for (int i = 0; i < 74; i++) {\n      assertTrue(wl.doInsert(db, threadState));\n    }\n    \n    assertEquals(db.keys.size(), 74);\n    assertEquals(db.values.size(), 74);\n    timestamp = 1451606400;\n    for (int i = 0; i < db.keys.size(); i++) {\n      assertEquals(db.keys.get(i), \"AAAB\"); // <-- key 2\n      assertEquals(db.values.get(i).get(\"AA\").toString(), \"AAAA\");\n      assertEquals(Utils.bytesToLong(db.values.get(i).get(\n          TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT).toArray()), timestamp);\n      assertNotNull(db.values.get(i).get(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT));\n      if (i % 2 == 0) {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAA\");\n      } else {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAB\");\n        timestamp += 60;\n      }\n    }\n  }\n  \n  @Test\n  public void insertThreeKeysTwoThreads() throws Exception {\n    // To make sure the distribution doesn't miss any metrics\n    final Properties p = getUTProperties();\n    p.put(CoreWorkload.FIELD_COUNT_PROPERTY, \"3\");\n    \n    final TimeSeriesWorkload wl = getWorkload(p, true);\n    Object threadState = wl.initThread(p, 0, 2);\n    \n    MockDB db = new MockDB();\n    for (int i = 0; i < 74; i++) {\n      assertTrue(wl.doInsert(db, threadState));\n    }\n   
 \n    assertEquals(db.keys.size(), 74);\n    assertEquals(db.values.size(), 74);\n    long timestamp = 1451606400;\n    for (int i = 0; i < db.keys.size(); i++) {\n      assertEquals(db.keys.get(i), \"AAAA\");\n      assertEquals(db.values.get(i).get(\"AA\").toString(), \"AAAA\");\n      assertEquals(Utils.bytesToLong(db.values.get(i).get(\n          TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT).toArray()), timestamp);\n      assertTrue(((NumericByteIterator) db.values.get(i)\n          .get(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT)).isFloatingPoint());\n      if (i % 2 == 0) {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAA\");\n      } else {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAB\");\n        timestamp += 60;\n      }\n    }\n    \n    threadState = wl.initThread(p, 1, 2);\n    db = new MockDB();\n    for (int i = 0; i < 74; i++) {\n      assertTrue(wl.doInsert(db, threadState));\n    }\n    \n    timestamp = 1451606400;\n    int metricCtr = 0;\n    for (int i = 0; i < db.keys.size(); i++) {\n      assertEquals(db.values.get(i).get(\"AA\").toString(), \"AAAA\");\n      assertEquals(Utils.bytesToLong(db.values.get(i).get(\n          TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT).toArray()), timestamp);\n      assertNotNull(db.values.get(i).get(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT));\n      if (i % 2 == 0) {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAA\");\n      } else {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAB\");\n      }\n      if (metricCtr++ > 1) {\n        assertEquals(db.keys.get(i), \"AAAC\");\n        if (metricCtr >= 4) {\n          metricCtr = 0;\n          timestamp += 60;\n        }\n      } else {\n        assertEquals(db.keys.get(i), \"AAAB\");\n      }\n    }\n  }\n  \n  @Test\n  public void insertWithValidation() throws Exception {\n    final Properties p = getUTProperties();\n    
p.put(CoreWorkload.FIELD_COUNT_PROPERTY, \"1\");\n    p.put(CoreWorkload.DATA_INTEGRITY_PROPERTY, \"true\");\n    p.put(TimeSeriesWorkload.VALUE_TYPE_PROPERTY, \"integers\");\n    final TimeSeriesWorkload wl = getWorkload(p, true);\n    final Object threadState = wl.initThread(p, 0, 1);\n    \n    final MockDB db = new MockDB();\n    for (int i = 0; i < 74; i++) {\n      assertTrue(wl.doInsert(db, threadState));\n    }\n    \n    assertEquals(db.keys.size(), 74);\n    assertEquals(db.values.size(), 74);\n    long timestamp = 1451606400;\n    for (int i = 0; i < db.keys.size(); i++) {\n      assertEquals(db.keys.get(i), \"AAAA\");\n      assertEquals(db.values.get(i).get(\"AA\").toString(), \"AAAA\");\n      assertEquals(Utils.bytesToLong(db.values.get(i).get(\n          TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT).toArray()), timestamp);\n      assertFalse(((NumericByteIterator) db.values.get(i)\n          .get(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT)).isFloatingPoint());\n      \n      // validation check\n      final TreeMap<String, String> validationTags = new TreeMap<String, String>();\n      for (final Entry<String, ByteIterator> entry : db.values.get(i).entrySet()) {\n        if (entry.getKey().equals(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT) || \n            entry.getKey().equals(TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT)) {\n          continue;\n        }\n        validationTags.put(entry.getKey(), entry.getValue().toString());\n      }\n      assertEquals(wl.validationFunction(db.keys.get(i), timestamp, validationTags), \n          ((NumericByteIterator) db.values.get(i).get(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT)).getLong());\n      \n      if (i % 2 == 0) {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAA\");\n      } else {\n        assertEquals(db.values.get(i).get(\"AB\").toString(), \"AAAB\");\n        timestamp += 60;\n      }\n    }\n  }\n  \n  @Test\n  public void read() throws Exception {\n    
final Properties p = getUTProperties();\n    final TimeSeriesWorkload wl = getWorkload(p, true);\n    final Object threadState = wl.initThread(p, 0, 1);\n    \n    final MockDB db = new MockDB();\n    for (int i = 0; i < 20; i++) {\n      wl.doTransactionRead(db, threadState);\n    }\n  }\n  \n  @Test\n  public void verifyRow() throws Exception {\n    final Properties p = getUTProperties();\n    final TimeSeriesWorkload wl = getWorkload(p, true);\n    \n    final TreeMap<String, String> validationTags = new TreeMap<String, String>();\n    final HashMap<String, ByteIterator> cells = new HashMap<String, ByteIterator>();\n    \n    validationTags.put(\"AA\", \"AAAA\");\n    cells.put(\"AA\", new StringByteIterator(\"AAAA\"));\n    validationTags.put(\"AB\", \"AAAB\");\n    cells.put(\"AB\", new StringByteIterator(\"AAAB\"));\n    long hash = wl.validationFunction(\"AAAA\", 1451606400L, validationTags);\n        \n    cells.put(TimeSeriesWorkload.TIMESTAMP_KEY_PROPERTY_DEFAULT, new NumericByteIterator(1451606400L));\n    cells.put(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT, new NumericByteIterator(hash));\n    \n    assertEquals(wl.verifyRow(\"AAAA\", cells), Status.OK);\n    \n    // tweak the last value a bit\n    for (final ByteIterator it : cells.values()) {\n      it.reset();\n    }\n    cells.put(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT, new NumericByteIterator(hash + 1));\n    assertEquals(wl.verifyRow(\"AAAA\", cells), Status.UNEXPECTED_STATE);\n    \n    // no value cell, returns an unexpected state\n    for (final ByteIterator it : cells.values()) {\n      it.reset();\n    }\n    cells.remove(TimeSeriesWorkload.VALUE_KEY_PROPERTY_DEFAULT);\n    assertEquals(wl.verifyRow(\"AAAA\", cells), Status.UNEXPECTED_STATE);\n  }\n  \n  @Test\n  public void validateSettingsDataIntegrity() throws Exception {\n    Properties p = getUTProperties();\n    \n    // data validation incompatibilities\n    p.setProperty(CoreWorkload.DATA_INTEGRITY_PROPERTY, \"true\");\n   
 try {\n      getWorkload(p, true);\n      fail(\"Expected WorkloadException\");\n    } catch (WorkloadException e) { }\n    \n    p.setProperty(TimeSeriesWorkload.VALUE_TYPE_PROPERTY, \"integers\"); // now it's ok\n    p.setProperty(TimeSeriesWorkload.GROUPBY_PROPERTY, \"sum\"); // now it's not\n    try {\n      getWorkload(p, true);\n      fail(\"Expected WorkloadException\");\n    } catch (WorkloadException e) { }\n    \n    p.setProperty(TimeSeriesWorkload.GROUPBY_PROPERTY, \"\");\n    p.setProperty(TimeSeriesWorkload.DOWNSAMPLING_FUNCTION_PROPERTY, \"sum\");\n    p.setProperty(TimeSeriesWorkload.DOWNSAMPLING_INTERVAL_PROPERTY, \"60\");\n    try {\n      getWorkload(p, true);\n      fail(\"Expected WorkloadException\");\n    } catch (WorkloadException e) { }\n    \n    p.setProperty(TimeSeriesWorkload.DOWNSAMPLING_FUNCTION_PROPERTY, \"\");\n    p.setProperty(TimeSeriesWorkload.DOWNSAMPLING_INTERVAL_PROPERTY, \"\");\n    p.setProperty(TimeSeriesWorkload.QUERY_TIMESPAN_PROPERTY, \"60\");\n    try {\n      getWorkload(p, true);\n      fail(\"Expected WorkloadException\");\n    } catch (WorkloadException e) { }\n    \n    p = getUTProperties();\n    p.setProperty(CoreWorkload.DATA_INTEGRITY_PROPERTY, \"true\");\n    p.setProperty(TimeSeriesWorkload.VALUE_TYPE_PROPERTY, \"integers\");\n    p.setProperty(TimeSeriesWorkload.RANDOMIZE_TIMESERIES_ORDER_PROPERTY, \"true\");\n    try {\n      getWorkload(p, true);\n      fail(\"Expected WorkloadException\");\n    } catch (WorkloadException e) { }\n    \n    p.setProperty(TimeSeriesWorkload.RANDOMIZE_TIMESERIES_ORDER_PROPERTY, \"false\");\n    p.setProperty(TimeSeriesWorkload.INSERT_START_PROPERTY, \"\");\n    try {\n      getWorkload(p, true);\n      fail(\"Expected WorkloadException\");\n    } catch (WorkloadException e) { }\n  }\n  \n  /** Helper method that generates unit testing defaults for the properties map */\n  private Properties getUTProperties() {\n    final Properties p = new Properties();\n    
p.put(Client.RECORD_COUNT_PROPERTY, \"10\");\n    p.put(CoreWorkload.FIELD_COUNT_PROPERTY, \"2\");\n    p.put(CoreWorkload.FIELD_LENGTH_PROPERTY, \"4\");\n    p.put(TimeSeriesWorkload.TAG_KEY_LENGTH_PROPERTY, \"2\");\n    p.put(TimeSeriesWorkload.TAG_VALUE_LENGTH_PROPERTY, \"4\");\n    p.put(TimeSeriesWorkload.TAG_COUNT_PROPERTY, \"2\");\n    p.put(TimeSeriesWorkload.TAG_CARDINALITY_PROPERTY, \"1,2\");\n    p.put(CoreWorkload.INSERT_START_PROPERTY, \"1451606400\");\n    p.put(TimeSeriesWorkload.DELAYED_SERIES_PROPERTY, \"0\");\n    p.put(TimeSeriesWorkload.RANDOMIZE_TIMESERIES_ORDER_PROPERTY, \"false\");\n    return p;\n  }\n  \n  /** Helper to setup the workload for testing. */\n  private TimeSeriesWorkload getWorkload(final Properties p, final boolean init) \n      throws WorkloadException {\n    Measurements.setProperties(p);\n    if (!init) {\n      return new TimeSeriesWorkload();\n    } else {\n      final TimeSeriesWorkload workload = new TimeSeriesWorkload();\n      workload.init(p);\n      return workload;\n    }\n  }\n  \n  static class MockDB extends DB {\n    final List<String> keys = new ArrayList<String>();\n    final List<Map<String, ByteIterator>> values = \n        new ArrayList<Map<String, ByteIterator>>();\n    \n    @Override\n    public Status read(String table, String key, Set<String> fields,\n                       Map<String, ByteIterator> result) {\n      return Status.OK;\n    }\n\n    @Override\n    public Status scan(String table, String startkey, int recordcount,\n        Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n      // TODO Auto-generated method stub\n      return Status.OK;\n    }\n\n    @Override\n    public Status update(String table, String key,\n        Map<String, ByteIterator> values) {\n      // TODO Auto-generated method stub\n      return Status.OK;\n    }\n\n    @Override\n    public Status insert(String table, String key,\n        Map<String, ByteIterator> values) {\n      keys.add(key);\n      
this.values.add(values);\n      return Status.OK;\n    }\n\n    @Override\n    public Status delete(String table, String key) {\n      // TODO Auto-generated method stub\n      return Status.OK;\n    }\n    \n    public void dumpStdout() {\n      for (int i = 0; i < keys.size(); i++) {\n        System.out.print(\"[\" + i + \"] Key: \" + keys.get(i) + \" Values: {\");\n        int x = 0;\n        for (final Entry<String, ByteIterator> entry : values.get(i).entrySet()) {\n          if (x++ > 0) {\n            System.out.print(\", \");\n          }\n          System.out.print(\"{\" + entry.getKey() + \" => \");\n          if (entry.getKey().equals(\"YCSBV\")) {\n            System.out.print(new String(Utils.bytesToDouble(entry.getValue().toArray()) + \"}\"));  \n          } else if (entry.getKey().equals(\"YCSBTS\")) {\n            System.out.print(new String(Utils.bytesToLong(entry.getValue().toArray()) + \"}\"));\n          } else {\n            System.out.print(new String(entry.getValue().toArray()) + \"}\");\n          }\n        }\n        System.out.println(\"}\");\n      }\n    }\n  }\n}"
  },
  {
    "path": "couchbase/README.md",
    "content": "<!--\nCopyright (c) 2015 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# Couchbase Driver for YCSB\nThis driver is a binding for the YCSB facilities to operate against a Couchbase Server cluster. It uses the official Couchbase Java SDK and provides a rich set of configuration options.\n\n## Quickstart\n\n### 1. Start Couchbase Server\nYou need to start a single node or a cluster to point the client at. Please see [couchbase.com](http://couchbase.com) for more details and instructions.\n\n### 2. Set up YCSB\nYou need to clone the repository and compile everything.\n\n```\ngit clone git://github.com/brianfrankcooper/YCSB.git\ncd YCSB\nmvn clean package\n```\n\n### 3. Run the Workload\nBefore you can actually run the workload, you need to \"load\" the data first.\n\n```\nbin/ycsb load couchbase -s -P workloads/workloada\n```\n\nThen, you can run the workload:\n\n```\nbin/ycsb run couchbase -s -P workloads/workloada\n```\n\nPlease see the general instructions in the `doc` folder if you are not sure how it all works. You can apply a property (as seen in the next section) like this:\n\n```\nbin/ycsb run couchbase -s -P workloads/workloada -p couchbase.json=false\n```\n\n## Scans in the CouchbaseClient\nThe scan operation in the CouchbaseClient requires a Couchbase View to be created manually. To do this:\n\n1. Go to the Couchbase UI, then to Views\n2. 
Create a new development view, specify a ddoc and view name, and use these in your YCSB properties. See Configuration Options below.\n3. The default map code is sufficient.\n4. Save, and publish this View.\n\n## Configuration Options\nSince no setup is the same and the goal of YCSB is to deliver realistic benchmarks, here are some settings that you can tune. Note that if you need more flexibility (let's say a custom transcoder), you still need to extend this driver and implement the facilities on your own.\n\nYou can set the following properties (with the default settings applied):\n\n - couchbase.url=http://127.0.0.1:8091/pools => The connection URL from one server.\n - couchbase.bucket=default => The bucket name to use.\n - couchbase.password= => The password of the bucket.\n - couchbase.checkFutures=true => If the futures should be inspected (makes ops sync).\n - couchbase.persistTo=0 => Observe Persistence (\"PersistTo\" constraint).\n - couchbase.replicateTo=0 => Observe Replication (\"ReplicateTo\" constraint).\n - couchbase.ddoc => The ddoc name used for scanning.\n - couchbase.view => The view name used for scanning.\n - couchbase.stale => How to deal with stale values in View Query for scanning. (OK, FALSE, UPDATE_AFTER)\n - couchbase.json=true => Use json or java serialization as target format.\n\n"
  },
  {
    "path": "couchbase/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2015-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" \n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" \n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n    <modelVersion>4.0.0</modelVersion>\n    <parent>\n        <groupId>com.yahoo.ycsb</groupId>\n        <artifactId>binding-parent</artifactId>\n        <version>0.14.0-SNAPSHOT</version>\n        <relativePath>../binding-parent</relativePath>\n    </parent>\n\n    <artifactId>couchbase-binding</artifactId>\n    <name>Couchbase Binding</name>\n    <packaging>jar</packaging>\n\n    <dependencies>\n        <dependency>\n            <groupId>com.couchbase.client</groupId>\n            <artifactId>couchbase-client</artifactId>\n            <version>${couchbase.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>com.yahoo.ycsb</groupId>\n            <artifactId>core</artifactId>\n            <version>${project.version}</version>\n            <scope>provided</scope>\n        </dependency>\n        <dependency>\n            <groupId>com.fasterxml.jackson.core</groupId>\n            <artifactId>jackson-databind</artifactId>\n            <version>2.2.2</version>\n        </dependency>\n        <dependency>\n            
<groupId>org.slf4j</groupId>\n            <artifactId>slf4j-api</artifactId>\n        </dependency>\n    </dependencies>\n</project>\n"
  },
  {
    "path": "couchbase/src/main/java/com/yahoo/ycsb/db/CouchbaseClient.java",
    "content": "/**\n * Copyright (c) 2013 - 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.couchbase.client.protocol.views.*;\nimport com.fasterxml.jackson.core.JsonFactory;\nimport com.fasterxml.jackson.core.JsonGenerator;\nimport com.fasterxml.jackson.databind.JsonNode;\nimport com.fasterxml.jackson.databind.ObjectMapper;\nimport com.fasterxml.jackson.databind.node.ObjectNode;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport net.spy.memcached.PersistTo;\nimport net.spy.memcached.ReplicateTo;\nimport net.spy.memcached.internal.OperationFuture;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.io.StringWriter;\nimport java.io.Writer;\nimport java.net.URI;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\n/**\n * A class that wraps the CouchbaseClient to allow it to be interfaced with YCSB.\n * This class extends {@link DB} and implements the database interface used by YCSB client.\n */\npublic class CouchbaseClient extends DB {\n  public static final String URL_PROPERTY = \"couchbase.url\";\n  public static final String 
BUCKET_PROPERTY = \"couchbase.bucket\";\n  public static final String PASSWORD_PROPERTY = \"couchbase.password\";\n  public static final String CHECKF_PROPERTY = \"couchbase.checkFutures\";\n  public static final String PERSIST_PROPERTY = \"couchbase.persistTo\";\n  public static final String REPLICATE_PROPERTY = \"couchbase.replicateTo\";\n  public static final String JSON_PROPERTY = \"couchbase.json\";\n  public static final String DESIGN_DOC_PROPERTY = \"couchbase.ddoc\";\n  public static final String VIEW_PROPERTY = \"couchbase.view\";\n  public static final String STALE_PROPERTY = \"couchbase.stale\";\n  public static final String SCAN_PROPERTY = \"scanproportion\";\n\n  public static final String STALE_PROPERTY_DEFAULT = Stale.OK.name();\n  public static final String SCAN_PROPERTY_DEFAULT = \"0.0\";\n\n  protected static final ObjectMapper JSON_MAPPER = new ObjectMapper();\n\n  private com.couchbase.client.CouchbaseClient client;\n  private PersistTo persistTo;\n  private ReplicateTo replicateTo;\n  private boolean checkFutures;\n  private boolean useJson;\n  private String designDoc;\n  private String viewName;\n  private Stale stale;\n  private View view;\n  private final Logger log = LoggerFactory.getLogger(getClass());\n\n  @Override\n  public void init() throws DBException {\n    Properties props = getProperties();\n\n    String url = props.getProperty(URL_PROPERTY, \"http://127.0.0.1:8091/pools\");\n    String bucket = props.getProperty(BUCKET_PROPERTY, \"default\");\n    String password = props.getProperty(PASSWORD_PROPERTY, \"\");\n\n    checkFutures = props.getProperty(CHECKF_PROPERTY, \"true\").equals(\"true\");\n    useJson = props.getProperty(JSON_PROPERTY, \"true\").equals(\"true\");\n\n    persistTo = parsePersistTo(props.getProperty(PERSIST_PROPERTY, \"0\"));\n    replicateTo = parseReplicateTo(props.getProperty(REPLICATE_PROPERTY, \"0\"));\n\n    designDoc = getProperties().getProperty(DESIGN_DOC_PROPERTY);\n    viewName = 
getProperties().getProperty(VIEW_PROPERTY);\n    stale = Stale.valueOf(getProperties().getProperty(STALE_PROPERTY, STALE_PROPERTY_DEFAULT).toUpperCase());\n\n    Double scanproportion = Double.valueOf(props.getProperty(SCAN_PROPERTY, SCAN_PROPERTY_DEFAULT));\n\n    Properties systemProperties = System.getProperties();\n    systemProperties.put(\"net.spy.log.LoggerImpl\", \"net.spy.memcached.compat.log.SLF4JLogger\");\n    System.setProperties(systemProperties);\n\n    try {\n      client = new com.couchbase.client.CouchbaseClient(Arrays.asList(new URI(url)), bucket, password);\n    } catch (Exception e) {\n      throw new DBException(\"Could not create CouchbaseClient object.\", e);\n    }\n\n    if (scanproportion > 0) {\n      try {\n        view = client.getView(designDoc, viewName);\n      } catch (Exception e) {\n        throw new DBException(String.format(\"%s=%s and %s=%s provided, unable to connect to view.\",\n          DESIGN_DOC_PROPERTY, designDoc, VIEW_PROPERTY, viewName), e.getCause());\n      }\n    }\n  }\n\n  /**\n   * Parse the replicate property into the correct enum.\n   *\n   * @param property the stringified property value.\n   * @throws DBException if parsing the property did fail.\n   * @return the correct enum.\n   */\n  private ReplicateTo parseReplicateTo(final String property) throws DBException {\n    int value = Integer.parseInt(property);\n\n    switch (value) {\n    case 0:\n      return ReplicateTo.ZERO;\n    case 1:\n      return ReplicateTo.ONE;\n    case 2:\n      return ReplicateTo.TWO;\n    case 3:\n      return ReplicateTo.THREE;\n    default:\n      throw new DBException(REPLICATE_PROPERTY + \" must be between 0 and 3\");\n    }\n  }\n\n  /**\n   * Parse the persist property into the correct enum.\n   *\n   * @param property the stringified property value.\n   * @throws DBException if parsing the property did fail.\n   * @return the correct enum.\n   */\n  private PersistTo parsePersistTo(final String property) throws 
DBException {\n    int value = Integer.parseInt(property);\n\n    switch (value) {\n    case 0:\n      return PersistTo.ZERO;\n    case 1:\n      return PersistTo.ONE;\n    case 2:\n      return PersistTo.TWO;\n    case 3:\n      return PersistTo.THREE;\n    case 4:\n      return PersistTo.FOUR;\n    default:\n      throw new DBException(PERSIST_PROPERTY + \" must be between 0 and 4\");\n    }\n  }\n\n  /**\n   * Shut down the client.\n   */\n  @Override\n  public void cleanup() {\n    client.shutdown();\n  }\n\n  @Override\n  public Status read(final String table, final String key, final Set<String> fields,\n                     final Map<String, ByteIterator> result) {\n    String formattedKey = formatKey(table, key);\n\n    try {\n      Object loaded = client.get(formattedKey);\n\n      if (loaded == null) {\n        return Status.ERROR;\n      }\n\n      decode(loaded, fields, result);\n      return Status.OK;\n    } catch (Exception e) {\n      if (log.isErrorEnabled()) {\n        log.error(\"Could not read value for key \" + formattedKey, e);\n      }\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status scan(final String table, final String startkey, final int recordcount, final Set<String> fields,\n                     final Vector<HashMap<String, ByteIterator>> result) {\n    try {\n      Query query = new Query().setRangeStart(startkey)\n          .setLimit(recordcount)\n          .setIncludeDocs(true)\n          .setStale(stale);\n      ViewResponse response = client.query(view, query);\n\n      for (ViewRow row : response) {\n        HashMap<String, ByteIterator> rowMap = new HashMap<String, ByteIterator>();\n        decode(row.getDocument(), fields, rowMap);\n        result.add(rowMap);\n      }\n\n      return Status.OK;\n    } catch (Exception e) {\n      log.error(e.getMessage(), e);\n    }\n\n    return Status.ERROR;\n  }\n\n  @Override\n  public Status update(final String table, final String key, final Map<String, ByteIterator> values) {\n    String 
formattedKey = formatKey(table, key);\n\n    try {\n      final OperationFuture<Boolean> future = client.replace(formattedKey, encode(values), persistTo, replicateTo);\n      return checkFutureStatus(future);\n    } catch (Exception e) {\n      if (log.isErrorEnabled()) {\n        log.error(\"Could not update value for key \" + formattedKey, e);\n      }\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status insert(final String table, final String key, final Map<String, ByteIterator> values) {\n    String formattedKey = formatKey(table, key);\n\n    try {\n      final OperationFuture<Boolean> future = client.add(formattedKey, encode(values), persistTo, replicateTo);\n      return checkFutureStatus(future);\n    } catch (Exception e) {\n      if (log.isErrorEnabled()) {\n        log.error(\"Could not insert value for key \" + formattedKey, e);\n      }\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status delete(final String table, final String key) {\n    String formattedKey = formatKey(table, key);\n\n    try {\n      final OperationFuture<Boolean> future = client.delete(formattedKey, persistTo, replicateTo);\n      return checkFutureStatus(future);\n    } catch (Exception e) {\n      if (log.isErrorEnabled()) {\n        log.error(\"Could not delete value for key \" + formattedKey, e);\n      }\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Prefix the key with the given prefix, to establish a unique namespace.\n   *\n   * @param prefix the prefix to use.\n   * @param key the actual key.\n   * @return the formatted and prefixed key.\n   */\n  private String formatKey(final String prefix, final String key) {\n    return prefix + \":\" + key;\n  }\n\n  /**\n   * Wrapper method that either inspects the future or not.\n   *\n   * @param future the future to potentially verify.\n   * @return the status of the future result.\n   */\n  private Status checkFutureStatus(final OperationFuture<?> future) {\n    if (checkFutures) {\n  
    return future.getStatus().isSuccess() ? Status.OK : Status.ERROR;\n    } else {\n      return Status.OK;\n    }\n  }\n\n  /**\n   * Decode the object from server into the storable result.\n   *\n   * @param source the loaded object.\n   * @param fields the fields to check.\n   * @param dest the result passed back to the ycsb core.\n   */\n  private void decode(final Object source, final Set<String> fields, final Map<String, ByteIterator> dest) {\n    if (useJson) {\n      try {\n        JsonNode json = JSON_MAPPER.readTree((String) source);\n        boolean checkFields = fields != null && !fields.isEmpty();\n        for (Iterator<Map.Entry<String, JsonNode>> jsonFields = json.fields(); jsonFields.hasNext();) {\n          Map.Entry<String, JsonNode> jsonField = jsonFields.next();\n          String name = jsonField.getKey();\n          if (checkFields && !fields.contains(name)) {\n            // skip fields that were not requested\n            continue;\n          }\n          JsonNode jsonValue = jsonField.getValue();\n          if (jsonValue != null && !jsonValue.isNull()) {\n            dest.put(name, new StringByteIterator(jsonValue.asText()));\n          }\n        }\n      } catch (Exception e) {\n        throw new RuntimeException(\"Could not decode JSON\", e);\n      }\n    } else {\n      Map<String, String> converted = (HashMap<String, String>) source;\n      for (Map.Entry<String, String> entry : converted.entrySet()) {\n        dest.put(entry.getKey(), new StringByteIterator(entry.getValue()));\n      }\n    }\n  }\n\n  /**\n   * Encode the object for couchbase storage.\n   *\n   * @param source the source value.\n   * @return the storable object.\n   */\n  private Object encode(final Map<String, ByteIterator> source) {\n    Map<String, String> stringMap = StringByteIterator.getStringMap(source);\n    if (!useJson) {\n      return stringMap;\n    }\n\n    ObjectNode node = JSON_MAPPER.createObjectNode();\n    for (Map.Entry<String, String> pair : stringMap.entrySet()) {\n      node.put(pair.getKey(), 
pair.getValue());\n    }\n    JsonFactory jsonFactory = new JsonFactory();\n    Writer writer = new StringWriter();\n    try {\n      JsonGenerator jsonGenerator = jsonFactory.createGenerator(writer);\n      JSON_MAPPER.writeTree(jsonGenerator, node);\n    } catch (Exception e) {\n      throw new RuntimeException(\"Could not encode JSON value\");\n    }\n    return writer.toString();\n  }\n}\n"
  },
  {
    "path": "couchbase/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2015 - 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"http://www.couchbase.com/\">Couchbase</a>.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "couchbase2/README.md",
    "content": "<!--\nCopyright (c) 2015 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# Couchbase (SDK 2.x) Driver for YCSB\nThis driver is a binding for the YCSB facilities to operate against a Couchbase Server cluster. It uses the official\nCouchbase Java SDK (version 2.x) and provides a rich set of configuration options, including support for the N1QL\nquery language.\n\n## Quickstart\n\n### 1. Start Couchbase Server\nYou need to start a single node or a cluster to point the client at. Please see [http://couchbase.com](couchbase.com)\nfor more details and instructions.\n\n### 2. Set up YCSB\nYou can either download the release zip and run it, or just clone from master.\n\n```\ngit clone git://github.com/brianfrankcooper/YCSB.git\ncd YCSB\nmvn clean package\n```\n\n### 3. Run the Workload\nBefore you can actually run the workload, you need to \"load\" the data first.\n\n```\nbin/ycsb load couchbase2 -s -P workloads/workloada\n```\n\nThen, you can run the workload:\n\n```\nbin/ycsb run couchbase2 -s -P workloads/workloada\n```\n\nPlease see the general instructions in the `doc` folder if you are not sure how it all works. 
You can apply a property\n(as seen in the next section) like this:\n\n```\nbin/ycsb run couchbase2 -s -P workloads/workloada -p couchbase.epoll=true\n```\n\n## N1QL Index Setup\nIn general, every time N1QL is used (either implicitly through using `workloade` or through setting `kv=false`) some\nkind of index must be present to make it work. Depending on the workload and data size, choosing the right index is\ncrucial at runtime in order to get the best performance. If in doubt, please ask at the\n[forums](http://forums.couchbase.com) or get in touch with our team at Couchbase.\n\nFor `workloade` and the default `readallfields=true` we recommend creating the following index; if you are using Couchbase\nServer 4.5 or later, also enable the \"Memory Optimized Index\" setting on the bucket.\n\n```\nCREATE PRIMARY INDEX ON `bucketname`;\n```\n\nCouchbase Server prior to 4.5 may need a slightly different index to deliver the best performance. In those releases\nadditional covering information may be added to the index in this form:\n\n```\nCREATE INDEX wle_idx ON `bucketname`(meta().id);\n```\n\nFor other workloads, different index setups might be even more performant.\n\n## Performance Considerations\nAs it is with any benchmark, there are a lot of knobs to tune in order to get great or (if you are reading\nthis and trying to write a competitor benchmark ;-)) bad performance.\n\nThe first setting you should consider if you are running on 64-bit Linux is `-p couchbase.epoll=true`. This will\nturn on the Epoll IO mechanisms in the underlying Netty library, which provides better performance since it has less\nsynchronization to do than the NIO default. This only works on Linux, but you are benchmarking on the OS you are\ndeploying to, right?\n\nThe second option, `boost`, sounds more magical than it actually is. By default this benchmark trades CPU for throughput,\nbut this can be disabled by setting `-p couchbase.boost=0`. 
The value defaults to 3 and is the number of event loops run\nin the IO layer. 3 is a reasonable default, but you should set it to the number of **physical** cores you have available\non the machine if you only plan to run one YCSB instance. Make sure (using profiling) to max out your cores, but don't\noverdo it.\n\n## Sync vs Async\nSince YCSB is synchronous, by default the code always waits for each operation to complete. In some cases it can be useful\nto just \"drive load\" and disable the waiting. Note that when the \"-p couchbase.syncMutationResponse=false\" option is\nused, the results measured by YCSB can basically be thrown away. It is still sometimes helpful during load phases to speed\nthem up :)\n\n## Debugging Latency\nThe Couchbase Java SDK has the ability to collect and dump different kinds of metrics, which allow you to analyze\nperformance during benchmarking and production. By default this option is disabled in the benchmark, but by setting\n`couchbase.networkMetricsInterval` and/or `couchbase.runtimeMetricsInterval` to something greater than 0 it will\noutput the information as JSON into the configured logger. The number provided is the interval in seconds. 
If you are\nunsure what interval to pick, start with 10 or 30 seconds, depending on your runtime length.\n\nThis is what such logs look like:\n\n```\nINFO: {\"heap.used\":{\"init\":268435456,\"used\":36500912,\"committed\":232259584,\"max\":3817865216},\"gc.ps marksweep.collectionTime\":0,\"gc.ps scavenge.collectionTime\":54,\"gc.ps scavenge.collectionCount\":17,\"thread.count\":26,\"offHeap.used\":{\"init\":2555904,\"used\":30865944,\"committed\":31719424,\"max\":-1},\"gc.ps marksweep.collectionCount\":0,\"heap.pendingFinalize\":0,\"thread.peakCount\":26,\"event\":{\"name\":\"RuntimeMetrics\",\"type\":\"METRIC\"},\"thread.startedCount\":28}\nINFO: {\"localhost/127.0.0.1:11210\":{\"BINARY\":{\"ReplaceRequest\":{\"SUCCESS\":{\"metrics\":{\"percentiles\":{\"50.0\":102,\"90.0\":136,\"95.0\":155,\"99.0\":244,\"99.9\":428},\"min\":55,\"max\":1564,\"count\":35787,\"timeUnit\":\"MICROSECONDS\"}}},\"GetRequest\":{\"SUCCESS\":{\"metrics\":{\"percentiles\":{\"50.0\":74,\"90.0\":98,\"95.0\":110,\"99.0\":158,\"99.9\":358},\"min\":34,\"max\":2310,\"count\":35604,\"timeUnit\":\"MICROSECONDS\"}}},\"GetBucketConfigRequest\":{\"SUCCESS\":{\"metrics\":{\"percentiles\":{\"50.0\":462,\"90.0\":462,\"95.0\":462,\"99.0\":462,\"99.9\":462},\"min\":460,\"max\":462,\"count\":1,\"timeUnit\":\"MICROSECONDS\"}}}}},\"event\":{\"name\":\"NetworkLatencyMetrics\",\"type\":\"METRIC\"}}\n```\n\nIt is recommended to either feed it into a program which can analyze and visualize JSON or just dump it into a JSON\npretty printer and look at it manually. Since the output can be changed (only by changing the code at the moment), you\ncan even configure it to put those messages into another Couchbase bucket and then analyze it through N1QL! 
You can learn\nmore about this in general [in the official docs](http://developer.couchbase.com/documentation/server/4.0/sdks/java-2.2/event-bus-metrics.html).\n\n## Configuration Options\nSince no setup is the same and the goal of YCSB is to deliver realistic benchmarks, here are some settings that you can\ntune. Note that if you need more flexibility (let's say a custom transcoder), you still need to extend this driver and\nimplement the facilities on your own.\n\nYou can set the following properties (with the default settings applied):\n\n - couchbase.host=127.0.0.1: The hostname of one server.\n - couchbase.bucket=default: The bucket name to use.\n - couchbase.password=: The password of the bucket.\n - couchbase.syncMutationResponse=true: If mutations should wait for the response to complete.\n - couchbase.persistTo=0: Persistence durability requirement.\n - couchbase.replicateTo=0: Replication durability requirement.\n - couchbase.upsert=false: Use upsert instead of insert or replace.\n - couchbase.adhoc=false: If set to true, prepared statements are not used.\n - couchbase.kv=true: If set to false, mutation operations will also be performed through N1QL.\n - couchbase.maxParallelism=1: The server parallelism for all N1QL queries.\n - couchbase.kvEndpoints=1: The number of KV sockets to open per server.\n - couchbase.queryEndpoints=1: The number of N1QL Query sockets to open per server.\n - couchbase.epoll=false: If Epoll instead of NIO should be used (only available on Linux).\n - couchbase.boost=3: If > 0 trades CPU for higher throughput. N is the number of event loops, ideally\n   set to the number of physical cores. 
Setting it higher than that will likely degrade performance.\n - couchbase.networkMetricsInterval=0: The interval in seconds when latency metrics will be logged.\n - couchbase.runtimeMetricsInterval=0: The interval in seconds when runtime metrics will be logged.\n - couchbase.documentExpiry=0: Document Expiry is the amount of time (in seconds) until a document expires in Couchbase."
  },
  {
    "path": "couchbase2/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <modelVersion>4.0.0</modelVersion>\n    <parent>\n        <groupId>com.yahoo.ycsb</groupId>\n        <artifactId>binding-parent</artifactId>\n        <version>0.14.0-SNAPSHOT</version>\n        <relativePath>../binding-parent</relativePath>\n    </parent>\n\n    <artifactId>couchbase2-binding</artifactId>\n    <name>Couchbase Java SDK 2.x Binding</name>\n    <packaging>jar</packaging>\n\n    <dependencies>\n        <dependency>\n            <groupId>com.couchbase.client</groupId>\n            <artifactId>java-client</artifactId>\n            <version>${couchbase2.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>com.yahoo.ycsb</groupId>\n            <artifactId>core</artifactId>\n            <version>${project.version}</version>\n            <scope>provided</scope>\n        </dependency>\n    </dependencies>\n\n</project>\n"
  },
  {
    "path": "couchbase2/src/main/java/com/yahoo/ycsb/db/couchbase2/Couchbase2Client.java",
    "content": "/**\n * Copyright (c) 2016 Yahoo! Inc. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.couchbase2;\n\nimport com.couchbase.client.core.env.DefaultCoreEnvironment;\nimport com.couchbase.client.core.env.resources.IoPoolShutdownHook;\nimport com.couchbase.client.core.logging.CouchbaseLogger;\nimport com.couchbase.client.core.logging.CouchbaseLoggerFactory;\nimport com.couchbase.client.core.metrics.DefaultLatencyMetricsCollectorConfig;\nimport com.couchbase.client.core.metrics.DefaultMetricsCollectorConfig;\nimport com.couchbase.client.core.metrics.LatencyMetricsCollectorConfig;\nimport com.couchbase.client.core.metrics.MetricsCollectorConfig;\nimport com.couchbase.client.deps.com.fasterxml.jackson.core.JsonFactory;\nimport com.couchbase.client.deps.com.fasterxml.jackson.core.JsonGenerator;\nimport com.couchbase.client.deps.com.fasterxml.jackson.databind.JsonNode;\nimport com.couchbase.client.deps.com.fasterxml.jackson.databind.node.ObjectNode;\nimport com.couchbase.client.deps.io.netty.channel.DefaultSelectStrategyFactory;\nimport com.couchbase.client.deps.io.netty.channel.EventLoopGroup;\nimport com.couchbase.client.deps.io.netty.channel.SelectStrategy;\nimport com.couchbase.client.deps.io.netty.channel.SelectStrategyFactory;\nimport com.couchbase.client.deps.io.netty.channel.epoll.EpollEventLoopGroup;\nimport 
com.couchbase.client.deps.io.netty.channel.nio.NioEventLoopGroup;\nimport com.couchbase.client.deps.io.netty.util.IntSupplier;\nimport com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory;\nimport com.couchbase.client.java.Bucket;\nimport com.couchbase.client.java.Cluster;\nimport com.couchbase.client.java.CouchbaseCluster;\nimport com.couchbase.client.java.PersistTo;\nimport com.couchbase.client.java.ReplicateTo;\nimport com.couchbase.client.java.document.Document;\nimport com.couchbase.client.java.document.RawJsonDocument;\nimport com.couchbase.client.java.document.json.JsonArray;\nimport com.couchbase.client.java.document.json.JsonObject;\nimport com.couchbase.client.java.env.CouchbaseEnvironment;\nimport com.couchbase.client.java.env.DefaultCouchbaseEnvironment;\nimport com.couchbase.client.java.error.TemporaryFailureException;\nimport com.couchbase.client.java.query.*;\nimport com.couchbase.client.java.transcoder.JacksonTransformers;\nimport com.couchbase.client.java.util.Blocking;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport rx.Observable;\nimport rx.Subscriber;\nimport rx.functions.Action1;\nimport rx.functions.Func1;\n\nimport java.io.StringWriter;\nimport java.io.Writer;\nimport java.nio.channels.spi.SelectorProvider;\nimport java.util.*;\nimport java.util.concurrent.ThreadFactory;\nimport java.util.concurrent.TimeUnit;\nimport java.util.concurrent.locks.LockSupport;\n\n/**\n * A class that wraps the 2.x Couchbase SDK to be used with YCSB.\n *\n * <p> The following options can be passed when using this database client to override the defaults.\n *\n * <ul>\n * <li><b>couchbase.host=127.0.0.1</b> The hostname of one server.</li>\n * <li><b>couchbase.bucket=default</b> The bucket name to use.</li>\n * <li><b>couchbase.password=</b> The password of the bucket.</li>\n * 
<li><b>couchbase.syncMutationResponse=true</b> If mutations should wait for the response to complete.</li>\n * <li><b>couchbase.persistTo=0</b> Persistence durability requirement.</li>\n * <li><b>couchbase.replicateTo=0</b> Replication durability requirement.</li>\n * <li><b>couchbase.upsert=false</b> Use upsert instead of insert or replace.</li>\n * <li><b>couchbase.adhoc=false</b> If set to true, prepared statements are not used.</li>\n * <li><b>couchbase.kv=true</b> If set to false, mutation operations will also be performed through N1QL.</li>\n * <li><b>couchbase.maxParallelism=1</b> The server parallelism for all N1QL queries.</li>\n * <li><b>couchbase.kvEndpoints=1</b> The number of KV sockets to open per server.</li>\n * <li><b>couchbase.queryEndpoints=1</b> The number of N1QL Query sockets to open per server.</li>\n * <li><b>couchbase.epoll=false</b> If Epoll instead of NIO should be used (only available on Linux).</li>\n * <li><b>couchbase.boost=3</b> If > 0 trades CPU for higher throughput. N is the number of event loops, ideally\n *      set to the number of physical cores. 
Setting higher than that will likely degrade performance.</li>\n * <li><b>couchbase.networkMetricsInterval=0</b> The interval in seconds when latency metrics will be logged.</li>\n * <li><b>couchbase.runtimeMetricsInterval=0</b> The interval in seconds when runtime metrics will be logged.</li>\n * <li><b>couchbase.documentExpiry=0</b> Document Expiry is the amount of time until a document expires in\n *      Couchbase.</li>\n * </ul>\n */\npublic class Couchbase2Client extends DB {\n\n  static {\n    // No need to send the full encoded_plan for this benchmark workload, less network overhead!\n    System.setProperty(\"com.couchbase.query.encodedPlanEnabled\", \"false\");\n  }\n\n  private static final String SEPARATOR = \":\";\n  private static final CouchbaseLogger LOGGER = CouchbaseLoggerFactory.getInstance(Couchbase2Client.class);\n  private static final Object INIT_COORDINATOR = new Object();\n\n  private static volatile CouchbaseEnvironment env = null;\n\n  private Cluster cluster;\n  private Bucket bucket;\n  private String bucketName;\n  private boolean upsert;\n  private PersistTo persistTo;\n  private ReplicateTo replicateTo;\n  private boolean syncMutResponse;\n  private boolean epoll;\n  private long kvTimeout;\n  private boolean adhoc;\n  private boolean kv;\n  private int maxParallelism;\n  private String host;\n  private int kvEndpoints;\n  private int queryEndpoints;\n  private int boost;\n  private int networkMetricsInterval;\n  private int runtimeMetricsInterval;\n  private String scanAllQuery;\n  private int documentExpiry;\n  \n  @Override\n  public void init() throws DBException {\n    Properties props = getProperties();\n\n    host = props.getProperty(\"couchbase.host\", \"127.0.0.1\");\n    bucketName = props.getProperty(\"couchbase.bucket\", \"default\");\n    String bucketPassword = props.getProperty(\"couchbase.password\", \"\");\n\n    upsert = props.getProperty(\"couchbase.upsert\", \"false\").equals(\"true\");\n    persistTo = 
parsePersistTo(props.getProperty(\"couchbase.persistTo\", \"0\"));\n    replicateTo = parseReplicateTo(props.getProperty(\"couchbase.replicateTo\", \"0\"));\n    syncMutResponse = props.getProperty(\"couchbase.syncMutationResponse\", \"true\").equals(\"true\");\n    adhoc = props.getProperty(\"couchbase.adhoc\", \"false\").equals(\"true\");\n    kv = props.getProperty(\"couchbase.kv\", \"true\").equals(\"true\");\n    maxParallelism = Integer.parseInt(props.getProperty(\"couchbase.maxParallelism\", \"1\"));\n    kvEndpoints = Integer.parseInt(props.getProperty(\"couchbase.kvEndpoints\", \"1\"));\n    queryEndpoints = Integer.parseInt(props.getProperty(\"couchbase.queryEndpoints\", \"1\"));\n    epoll = props.getProperty(\"couchbase.epoll\", \"false\").equals(\"true\");\n    boost = Integer.parseInt(props.getProperty(\"couchbase.boost\", \"3\"));\n    networkMetricsInterval = Integer.parseInt(props.getProperty(\"couchbase.networkMetricsInterval\", \"0\"));\n    runtimeMetricsInterval = Integer.parseInt(props.getProperty(\"couchbase.runtimeMetricsInterval\", \"0\"));\n    documentExpiry = Integer.parseInt(props.getProperty(\"couchbase.documentExpiry\", \"0\"));\n    scanAllQuery =  \"SELECT RAW meta().id FROM `\" + bucketName +\n      \"` WHERE meta().id >= '$1' ORDER BY meta().id LIMIT $2\";\n\n    try {\n      synchronized (INIT_COORDINATOR) {\n        if (env == null) {\n\n          LatencyMetricsCollectorConfig latencyConfig = networkMetricsInterval <= 0\n              ? DefaultLatencyMetricsCollectorConfig.disabled()\n              : DefaultLatencyMetricsCollectorConfig\n                  .builder()\n                  .emitFrequency(networkMetricsInterval)\n                  .emitFrequencyUnit(TimeUnit.SECONDS)\n                  .build();\n\n          MetricsCollectorConfig runtimeConfig = runtimeMetricsInterval <= 0\n              ? 
DefaultMetricsCollectorConfig.disabled()\n              : DefaultMetricsCollectorConfig.create(runtimeMetricsInterval, TimeUnit.SECONDS);\n\n          DefaultCouchbaseEnvironment.Builder builder = DefaultCouchbaseEnvironment\n              .builder()\n              .queryEndpoints(queryEndpoints)\n              .callbacksOnIoPool(true)\n              .runtimeMetricsCollectorConfig(runtimeConfig)\n              .networkLatencyMetricsCollectorConfig(latencyConfig)\n              .socketConnectTimeout(10000) // 10 secs socket connect timeout\n              .connectTimeout(30000) // 30 secs overall bucket open timeout\n              .kvTimeout(10000) // 10 instead of 2.5s for KV ops\n              .kvEndpoints(kvEndpoints);\n\n          // Tune boosting and epoll based on settings\n          SelectStrategyFactory factory = boost > 0 ?\n              new BackoffSelectStrategyFactory() : DefaultSelectStrategyFactory.INSTANCE;\n\n          int poolSize = boost > 0 ? boost : Integer.parseInt(\n              System.getProperty(\"com.couchbase.ioPoolSize\", Integer.toString(DefaultCoreEnvironment.IO_POOL_SIZE))\n          );\n          ThreadFactory threadFactory = new DefaultThreadFactory(\"cb-io\", true);\n\n          EventLoopGroup group = epoll ? 
new EpollEventLoopGroup(poolSize, threadFactory, factory)\n              : new NioEventLoopGroup(poolSize, threadFactory, SelectorProvider.provider(), factory);\n          builder.ioPool(group, new IoPoolShutdownHook(group));\n\n          env = builder.build();\n          logParams();\n        }\n      }\n\n      cluster = CouchbaseCluster.create(env, host);\n      bucket = cluster.openBucket(bucketName, bucketPassword);\n      kvTimeout = env.kvTimeout();\n    } catch (Exception ex) {\n      throw new DBException(\"Could not connect to Couchbase Bucket.\", ex);\n    }\n\n    if (!kv && !syncMutResponse) {\n      throw new DBException(\"Not waiting for N1QL responses on mutations not yet implemented.\");\n    }\n  }\n\n  /**\n   * Helper method to log the CLI params so that debugging on the command line is easier.\n   */\n  private void logParams() {\n    StringBuilder sb = new StringBuilder();\n\n    sb.append(\"host=\").append(host);\n    sb.append(\", bucket=\").append(bucketName);\n    sb.append(\", upsert=\").append(upsert);\n    sb.append(\", persistTo=\").append(persistTo);\n    sb.append(\", replicateTo=\").append(replicateTo);\n    sb.append(\", syncMutResponse=\").append(syncMutResponse);\n    sb.append(\", adhoc=\").append(adhoc);\n    sb.append(\", kv=\").append(kv);\n    sb.append(\", maxParallelism=\").append(maxParallelism);\n    sb.append(\", queryEndpoints=\").append(queryEndpoints);\n    sb.append(\", kvEndpoints=\").append(kvEndpoints);\n    sb.append(\", epoll=\").append(epoll);\n    sb.append(\", boost=\").append(boost);\n    sb.append(\", networkMetricsInterval=\").append(networkMetricsInterval);\n    sb.append(\", runtimeMetricsInterval=\").append(runtimeMetricsInterval);\n    sb.append(\", documentExpiry=\").append(documentExpiry);\n\n    LOGGER.info(\"===> Using Params: \" + sb.toString());\n  }\n\n  @Override\n  public Status read(final String table, final String key, Set<String> fields,\n                     final Map<String, 
ByteIterator> result) {\n    try {\n      String docId = formatId(table, key);\n      if (kv) {\n        return readKv(docId, fields, result);\n      } else {\n        return readN1ql(docId, fields, result);\n      }\n    } catch (Exception ex) {\n      ex.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Performs the {@link #read(String, String, Set, Map)} operation via Key/Value (\"get\").\n   *\n   * @param docId the document ID\n   * @param fields the fields to be loaded\n   * @param result the result map where the doc needs to be converted into\n   * @return The result of the operation.\n   */\n  private Status readKv(final String docId, final Set<String> fields, final Map<String, ByteIterator> result)\n    throws Exception {\n    RawJsonDocument loaded = bucket.get(docId, RawJsonDocument.class);\n    if (loaded == null) {\n      return Status.NOT_FOUND;\n    }\n    decode(loaded.content(), fields, result);\n    return Status.OK;\n  }\n\n  /**\n   * Performs the {@link #read(String, String, Set, Map)} operation via N1QL (\"SELECT\").\n   *\n   * If this option should be used, the \"-p couchbase.kv=false\" property must be set.\n   *\n   * @param docId the document ID\n   * @param fields the fields to be loaded\n   * @param result the result map where the doc needs to be converted into\n   * @return The result of the operation.\n   */\n  private Status readN1ql(final String docId, Set<String> fields, final Map<String, ByteIterator> result)\n    throws Exception {\n    String readQuery = \"SELECT \" + joinFields(fields) + \" FROM `\" + bucketName + \"` USE KEYS [$1]\";\n    N1qlQueryResult queryResult = bucket.query(N1qlQuery.parameterized(\n        readQuery,\n        JsonArray.from(docId),\n        N1qlParams.build().adhoc(adhoc).maxParallelism(maxParallelism)\n    ));\n\n    if (!queryResult.parseSuccess() || !queryResult.finalSuccess()) {\n      throw new DBException(\"Error while parsing N1QL Result. 
Query: \" + readQuery\n        + \", Errors: \" + queryResult.errors());\n    }\n\n    N1qlQueryRow row;\n    try {\n      row = queryResult.rows().next();\n    } catch (NoSuchElementException ex) {\n      return Status.NOT_FOUND;\n    }\n\n    JsonObject content = row.value();\n    if (fields == null) {\n      content = content.getObject(bucketName); // n1ql result set scoped under *.bucketName\n      fields = content.getNames();\n    }\n\n    for (String field : fields) {\n      Object value = content.get(field);\n      result.put(field, new StringByteIterator(value != null ? value.toString() : \"\"));\n    }\n\n    return Status.OK;\n  }\n\n  @Override\n  public Status update(final String table, final String key, final Map<String, ByteIterator> values) {\n    if (upsert) {\n      return upsert(table, key, values);\n    }\n\n    try {\n      String docId = formatId(table, key);\n      if (kv) {\n        return updateKv(docId, values);\n      } else {\n        return updateN1ql(docId, values);\n      }\n    } catch (Exception ex) {\n      ex.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Performs the {@link #update(String, String, Map)} operation via Key/Value (\"replace\").\n   *\n   * @param docId the document ID\n   * @param values the values to update the document with.\n   * @return The result of the operation.\n   */\n  private Status updateKv(final String docId, final Map<String, ByteIterator> values) {\n    waitForMutationResponse(bucket.async().replace(\n        RawJsonDocument.create(docId, documentExpiry, encode(values)),\n        persistTo,\n        replicateTo\n    ));\n    return Status.OK;\n  }\n\n  /**\n   * Performs the {@link #update(String, String, Map)} operation via N1QL (\"UPDATE\").\n   *\n   * If this option should be used, the \"-p couchbase.kv=false\" property must be set.\n   *\n   * @param docId the document ID\n   * @param values the values to update the document with.\n   * @return The result of the 
operation.\n   */\n  private Status updateN1ql(final String docId, final Map<String, ByteIterator> values)\n    throws Exception {\n    String fields = encodeN1qlFields(values);\n    String updateQuery = \"UPDATE `\" + bucketName + \"` USE KEYS [$1] SET \" + fields;\n\n    N1qlQueryResult queryResult = bucket.query(N1qlQuery.parameterized(\n        updateQuery,\n        JsonArray.from(docId),\n        N1qlParams.build().adhoc(adhoc).maxParallelism(maxParallelism)\n    ));\n\n    if (!queryResult.parseSuccess() || !queryResult.finalSuccess()) {\n      throw new DBException(\"Error while parsing N1QL Result. Query: \" + updateQuery\n        + \", Errors: \" + queryResult.errors());\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status insert(final String table, final String key, final Map<String, ByteIterator> values) {\n    if (upsert) {\n      return upsert(table, key, values);\n    }\n\n    try {\n      String docId = formatId(table, key);\n      if (kv) {\n        return insertKv(docId, values);\n      } else {\n        return insertN1ql(docId, values);\n      }\n    } catch (Exception ex) {\n      ex.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Performs the {@link #insert(String, String, Map)} operation via Key/Value (\"INSERT\").\n   *\n   * Note that during the \"load\" phase it makes sense to retry TMPFAILS (so that even if the server is\n   * overloaded temporarily the ops will succeed eventually). 
The current code will retry TMPFAILs\n   * for a maximum of one minute and then bubble up the error.\n   *\n   * @param docId the document ID\n   * @param values the values to insert into the document.\n   * @return The result of the operation.\n   */\n  private Status insertKv(final String docId, final Map<String, ByteIterator> values) {\n    int tries = 60; // roughly 60 seconds with the 1 second sleep, not 100% accurate.\n\n    for (int i = 0; i < tries; i++) {\n      try {\n        waitForMutationResponse(bucket.async().insert(\n            RawJsonDocument.create(docId, documentExpiry, encode(values)),\n            persistTo,\n            replicateTo\n        ));\n        return Status.OK;\n      } catch (TemporaryFailureException ex) {\n        try {\n          Thread.sleep(1000);\n        } catch (InterruptedException e) {\n          Thread.currentThread().interrupt();\n          throw new RuntimeException(\"Interrupted while sleeping on TMPFAIL backoff.\", e);\n        }\n      }\n    }\n\n    throw new RuntimeException(\"Still receiving TMPFAIL from the server after trying \" + tries + \" times. 
\" +\n      \"Check your server.\");\n  }\n\n  /**\n   * Performs the {@link #insert(String, String, Map)} operation via N1QL (\"INSERT\").\n   *\n   * If this option should be used, the \"-p couchbase.kv=false\" property must be set.\n   *\n   * @param docId the document ID\n   * @param values the values to update the document with.\n   * @return The result of the operation.\n   */\n  private Status insertN1ql(final String docId, final Map<String, ByteIterator> values)\n    throws Exception {\n    String insertQuery = \"INSERT INTO `\" + bucketName + \"`(KEY,VALUE) VALUES ($1,$2)\";\n\n    N1qlQueryResult queryResult = bucket.query(N1qlQuery.parameterized(\n        insertQuery,\n        JsonArray.from(docId, valuesToJsonObject(values)),\n        N1qlParams.build().adhoc(adhoc).maxParallelism(maxParallelism)\n    ));\n\n    if (!queryResult.parseSuccess() || !queryResult.finalSuccess()) {\n      throw new DBException(\"Error while parsing N1QL Result. Query: \" + insertQuery\n        + \", Errors: \" + queryResult.errors());\n    }\n    return Status.OK;\n  }\n\n  /**\n   * Performs an upsert instead of insert or update using either Key/Value or N1QL.\n   *\n   * If this option should be used, the \"-p couchbase.upsert=true\" property must be set.\n   *\n   * @param table The name of the table\n   * @param key The record key of the record to insert.\n   * @param values A HashMap of field/value pairs to insert in the record\n   * @return The result of the operation.\n   */\n  private Status upsert(final String table, final String key, final Map<String, ByteIterator> values) {\n    try {\n      String docId = formatId(table, key);\n      if (kv) {\n        return upsertKv(docId, values);\n      } else {\n        return upsertN1ql(docId, values);\n      }\n    } catch (Exception ex) {\n      ex.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Performs the {@link #upsert(String, String, Map)} operation via Key/Value (\"upsert\").\n   *\n   * If 
this option should be used, the \"-p couchbase.upsert=true\" property must be set.\n   *\n   * @param docId the document ID\n   * @param values the values to update the document with.\n   * @return The result of the operation.\n   */\n  private Status upsertKv(final String docId, final Map<String, ByteIterator> values) {\n    waitForMutationResponse(bucket.async().upsert(\n        RawJsonDocument.create(docId, documentExpiry, encode(values)),\n        persistTo,\n        replicateTo\n    ));\n    return Status.OK;\n  }\n\n  /**\n   * Performs the {@link #upsert(String, String, Map)} operation via N1QL (\"UPSERT\").\n   *\n   * If this option should be used, the \"-p couchbase.upsert=true -p couchbase.kv=false\" properties must be set.\n   *\n   * @param docId the document ID\n   * @param values the values to update the document with.\n   * @return The result of the operation.\n   */\n  private Status upsertN1ql(final String docId, final Map<String, ByteIterator> values)\n    throws Exception {\n    String upsertQuery = \"UPSERT INTO `\" + bucketName + \"`(KEY,VALUE) VALUES ($1,$2)\";\n\n    N1qlQueryResult queryResult = bucket.query(N1qlQuery.parameterized(\n        upsertQuery,\n        JsonArray.from(docId, valuesToJsonObject(values)),\n        N1qlParams.build().adhoc(adhoc).maxParallelism(maxParallelism)\n    ));\n\n    if (!queryResult.parseSuccess() || !queryResult.finalSuccess()) {\n      throw new DBException(\"Error while parsing N1QL Result. 
Query: \" + upsertQuery\n        + \", Errors: \" + queryResult.errors());\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status delete(final String table, final String key) {\n    try {\n      String docId = formatId(table, key);\n      if (kv) {\n        return deleteKv(docId);\n      } else {\n        return deleteN1ql(docId);\n      }\n    } catch (Exception ex) {\n      ex.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Performs the {@link #delete(String, String)} operation via Key/Value (\"remove\").\n   *\n   * @param docId the document ID.\n   * @return The result of the operation.\n   */\n  private Status deleteKv(final String docId) {\n    waitForMutationResponse(bucket.async().remove(\n        docId,\n        persistTo,\n        replicateTo\n    ));\n    return Status.OK;\n  }\n\n  /**\n   * Performs the {@link #delete(String, String)} operation via N1QL (\"DELETE\").\n   *\n   * If this option should be used, the \"-p couchbase.kv=false\" property must be set.\n   *\n   * @param docId the document ID.\n   * @return The result of the operation.\n   */\n  private Status deleteN1ql(final String docId) throws Exception {\n    String deleteQuery = \"DELETE FROM `\" + bucketName + \"` USE KEYS [$1]\";\n    N1qlQueryResult queryResult = bucket.query(N1qlQuery.parameterized(\n        deleteQuery,\n        JsonArray.from(docId),\n        N1qlParams.build().adhoc(adhoc).maxParallelism(maxParallelism)\n    ));\n\n    if (!queryResult.parseSuccess() || !queryResult.finalSuccess()) {\n      throw new DBException(\"Error while parsing N1QL Result. 
Query: \" + deleteQuery\n        + \", Errors: \" + queryResult.errors());\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status scan(final String table, final String startkey, final int recordcount, final Set<String> fields,\n      final Vector<HashMap<String, ByteIterator>> result) {\n    try {\n      if (fields == null || fields.isEmpty()) {\n        return scanAllFields(table, startkey, recordcount, result);\n      } else {\n        return scanSpecificFields(table, startkey, recordcount, fields, result);\n      }\n    } catch (Exception ex) {\n      ex.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Performs the {@link #scan(String, String, int, Set, Vector)} operation, optimized for all fields.\n   *\n   * Since the full document bodies need to be loaded anyway, it makes sense to just grab the document IDs\n   * from N1QL and then perform the bulk loading via KV for better performance. This is a usual pattern with\n   * Couchbase and shows the benefits of using both N1QL and KV together.\n   *\n   * @param table The name of the table\n   * @param startkey The record key of the first record to read.\n   * @param recordcount The number of records to read\n   * @param result A Vector of HashMaps, where each HashMap is a set of field/value pairs for one record\n   * @return The result of the operation.\n   */\n  private Status scanAllFields(final String table, final String startkey, final int recordcount,\n      final Vector<HashMap<String, ByteIterator>> result) {\n    final List<HashMap<String, ByteIterator>> data = new ArrayList<HashMap<String, ByteIterator>>(recordcount);\n    bucket.async()\n        .query(N1qlQuery.parameterized(\n          scanAllQuery,\n          JsonArray.from(formatId(table, startkey), recordcount),\n          N1qlParams.build().adhoc(adhoc).maxParallelism(maxParallelism)\n        ))\n        .doOnNext(new Action1<AsyncN1qlQueryResult>() {\n          @Override\n          public void call(AsyncN1qlQueryResult result) {\n            if (!result.parseSuccess()) {\n              throw new RuntimeException(\"Error while parsing N1QL Result. Query: \" + scanAllQuery\n                + \", Errors: \" + result.errors());\n            }\n          }\n        })\n        .flatMap(new Func1<AsyncN1qlQueryResult, Observable<AsyncN1qlQueryRow>>() {\n          @Override\n          public Observable<AsyncN1qlQueryRow> call(AsyncN1qlQueryResult result) {\n            return result.rows();\n          }\n        })\n        .flatMap(new Func1<AsyncN1qlQueryRow, Observable<RawJsonDocument>>() {\n          @Override\n          public Observable<RawJsonDocument> call(AsyncN1qlQueryRow row) {\n            String id = new String(row.byteValue()).trim();\n            return bucket.async().get(id.substring(1, id.length()-1), RawJsonDocument.class);\n          }\n        })\n        .map(new Func1<RawJsonDocument, HashMap<String, ByteIterator>>() {\n          @Override\n          public HashMap<String, ByteIterator> call(RawJsonDocument document) {\n            HashMap<String, ByteIterator> tuple = new HashMap<String, ByteIterator>();\n            decode(document.content(), null, tuple);\n            return tuple;\n          }\n        })\n        .toBlocking()\n        .forEach(new Action1<HashMap<String, ByteIterator>>() {\n          @Override\n          public void call(HashMap<String, ByteIterator> tuple) {\n            data.add(tuple);\n          }\n        });\n\n    result.addAll(data);\n    return Status.OK;\n  }\n\n  /**\n   * Performs the {@link #scan(String, String, int, Set, Vector)} operation via N1QL for only a subset of the fields.\n   *\n   * @param table The name of the table\n   * @param startkey The record key of the first record to read.\n   * @param recordcount The number of records to read\n   * @param fields The list of fields to read, or null for all of them\n   * @param result A Vector of HashMaps, where each HashMap is a set of field/value pairs for 
one record\n   * @return The result of the operation.\n   */\n  private Status scanSpecificFields(final String table, final String startkey, final int recordcount,\n      final Set<String> fields, final Vector<HashMap<String, ByteIterator>> result) {\n    String scanSpecQuery = \"SELECT \" + joinFields(fields) + \" FROM `\" + bucketName\n        + \"` WHERE meta().id >= '$1' LIMIT $2\";\n    N1qlQueryResult queryResult = bucket.query(N1qlQuery.parameterized(\n        scanSpecQuery,\n        JsonArray.from(formatId(table, startkey), recordcount),\n        N1qlParams.build().adhoc(adhoc).maxParallelism(maxParallelism)\n    ));\n\n    if (!queryResult.parseSuccess() || !queryResult.finalSuccess()) {\n      throw new RuntimeException(\"Error while parsing N1QL Result. Query: \" + scanSpecQuery\n        + \", Errors: \" + queryResult.errors());\n    }\n\n    boolean allFields = fields == null || fields.isEmpty();\n    result.ensureCapacity(recordcount);\n\n    for (N1qlQueryRow row : queryResult) {\n      JsonObject value = row.value();\n      if (fields == null) {\n        value = value.getObject(bucketName);\n      }\n      Set<String> f = allFields ? value.getNames() : fields;\n      HashMap<String, ByteIterator> tuple = new HashMap<String, ByteIterator>(f.size());\n      for (String field : f) {\n        tuple.put(field, new StringByteIterator(value.getString(field)));\n      }\n      result.add(tuple);\n    }\n    return Status.OK;\n  }\n\n  /**\n   * Helper method to block on the response, depending on the property set.\n   *\n   * By default, since YCSB is sync the code will always wait for the operation to complete. In some\n   * cases it can be useful to just \"drive load\" and disable the waiting. Note that when the\n   * \"-p couchbase.syncMutationResponse=false\" option is used, the measured results by YCSB can basically\n   * be thrown away. 
Still helpful sometimes during load phases to speed them up :)\n   *\n   * @param input the async input observable.\n   */\n  private void waitForMutationResponse(final Observable<? extends Document<?>> input) {\n    if (!syncMutResponse) {\n      ((Observable<Document<?>>)input).subscribe(new Subscriber<Document<?>>() {\n        @Override\n        public void onCompleted() {\n        }\n\n        @Override\n        public void onError(Throwable e) {\n        }\n\n        @Override\n        public void onNext(Document<?> document) {\n        }\n      });\n    } else {\n      Blocking.blockForSingle(input, kvTimeout, TimeUnit.MILLISECONDS);\n    }\n  }\n\n  /**\n   * Helper method to turn the values into a String, used with {@link #upsertN1ql(String, Map)}.\n   *\n   * @param values the values to encode.\n   * @return the encoded string.\n   */\n  private static String encodeN1qlFields(final Map<String, ByteIterator> values) {\n    if (values.isEmpty()) {\n      return \"\";\n    }\n\n    StringBuilder sb = new StringBuilder();\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      String raw = entry.getValue().toString();\n      String escaped = raw.replace(\"\\\"\", \"\\\\\\\"\").replace(\"\\'\", \"\\\\\\'\");\n      sb.append(entry.getKey()).append(\"=\\\"\").append(escaped).append(\"\\\" \");\n    }\n    String toReturn = sb.toString();\n    return toReturn.substring(0, toReturn.length() - 1);\n  }\n\n  /**\n   * Helper method to turn the map of values into a {@link JsonObject} for further use.\n   *\n   * @param values the values to transform.\n   * @return the created json object.\n   */\n  private static JsonObject valuesToJsonObject(final Map<String, ByteIterator> values) {\n    JsonObject result = JsonObject.create();\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      result.put(entry.getKey(), entry.getValue().toString());\n    }\n    return result;\n  }\n\n  /**\n   * Helper method to join the set of 
fields into a String suitable for N1QL.\n   *\n   * @param fields the fields to join.\n   * @return the joined fields as a String.\n   */\n  private static String joinFields(final Set<String> fields) {\n    if (fields == null || fields.isEmpty()) {\n      return \"*\";\n    }\n    StringBuilder builder = new StringBuilder();\n    for (String f : fields) {\n      builder.append(\"`\").append(f).append(\"`\").append(\",\");\n    }\n    String toReturn = builder.toString();\n    return toReturn.substring(0, toReturn.length() - 1);\n  }\n\n  /**\n   * Helper method to turn the prefix and key into a proper document ID.\n   *\n   * @param prefix the prefix (table).\n   * @param key the key itself.\n   * @return a document ID that can be used with Couchbase.\n   */\n  private static String formatId(final String prefix, final String key) {\n    return prefix + SEPARATOR + key;\n  }\n\n  /**\n   * Helper method to parse the \"ReplicateTo\" property on startup.\n   *\n   * @param property the property to parse.\n   * @return the parsed setting.\n   */\n  private static ReplicateTo parseReplicateTo(final String property) throws DBException {\n    int value = Integer.parseInt(property);\n\n    switch (value) {\n    case 0:\n      return ReplicateTo.NONE;\n    case 1:\n      return ReplicateTo.ONE;\n    case 2:\n      return ReplicateTo.TWO;\n    case 3:\n      return ReplicateTo.THREE;\n    default:\n      throw new DBException(\"\\\"couchbase.replicateTo\\\" must be between 0 and 3\");\n    }\n  }\n\n  /**\n   * Helper method to parse the \"PersistTo\" property on startup.\n   *\n   * @param property the property to parse.\n   * @return the parsed setting.\n   */\n  private static PersistTo parsePersistTo(final String property) throws DBException {\n    int value = Integer.parseInt(property);\n\n    switch (value) {\n    case 0:\n      return PersistTo.NONE;\n    case 1:\n      return PersistTo.ONE;\n    case 2:\n      return PersistTo.TWO;\n    case 3:\n      return 
PersistTo.THREE;\n    case 4:\n      return PersistTo.FOUR;\n    default:\n      throw new DBException(\"\\\"couchbase.persistTo\\\" must be between 0 and 4\");\n    }\n  }\n\n  /**\n   * Decode the String from the server and pass it into the decoded destination.\n   *\n   * @param source the loaded object.\n   * @param fields the fields to check.\n   * @param dest the result passed back to YCSB.\n   */\n  private void decode(final String source, final Set<String> fields,\n                      final Map<String, ByteIterator> dest) {\n    try {\n      JsonNode json = JacksonTransformers.MAPPER.readTree(source);\n      boolean checkFields = fields != null && !fields.isEmpty();\n      for (Iterator<Map.Entry<String, JsonNode>> jsonFields = json.fields(); jsonFields.hasNext();) {\n        Map.Entry<String, JsonNode> jsonField = jsonFields.next();\n        String name = jsonField.getKey();\n        if (checkFields && !fields.contains(name)) {\n          continue;\n        }\n        JsonNode jsonValue = jsonField.getValue();\n        if (jsonValue != null && !jsonValue.isNull()) {\n          dest.put(name, new StringByteIterator(jsonValue.asText()));\n        }\n      }\n    } catch (Exception e) {\n      throw new RuntimeException(\"Could not decode JSON\", e);\n    }\n  }\n\n  /**\n   * Encode the source into a String for storage.\n   *\n   * @param source the source value.\n   * @return the encoded string.\n   */\n  private String encode(final Map<String, ByteIterator> source) {\n    Map<String, String> stringMap = StringByteIterator.getStringMap(source);\n    ObjectNode node = JacksonTransformers.MAPPER.createObjectNode();\n    for (Map.Entry<String, String> pair : stringMap.entrySet()) {\n      node.put(pair.getKey(), pair.getValue());\n    }\n    JsonFactory jsonFactory = new JsonFactory();\n    Writer writer = new StringWriter();\n    try {\n      JsonGenerator jsonGenerator = jsonFactory.createGenerator(writer);\n      JacksonTransformers.MAPPER.writeTree(jsonGenerator, node);\n    } catch (Exception e) {\n      throw new RuntimeException(\"Could not encode JSON value\", e);\n    }\n    return writer.toString();\n  }\n}\n\n/**\n * Factory for the {@link BackoffSelectStrategy} to be used with boosting.\n */\nclass BackoffSelectStrategyFactory implements SelectStrategyFactory {\n  @Override\n  public SelectStrategy newSelectStrategy() {\n    return new BackoffSelectStrategy();\n  }\n}\n\n/**\n * Custom IO select strategy which trades CPU for throughput, used with the boost setting.\n */\nclass BackoffSelectStrategy implements SelectStrategy {\n\n  private int counter = 0;\n\n  @Override\n  public int calculateStrategy(final IntSupplier supplier, final boolean hasTasks) throws Exception {\n    int selectNowResult = supplier.get();\n    if (hasTasks || selectNowResult != 0) {\n      counter = 0;\n      return selectNowResult;\n    }\n    counter++;\n\n    // Check the largest threshold first so each back-off stage is actually reachable.\n    if (counter > 5000) {\n      // defer to blocking select\n      counter = 0;\n      return SelectStrategy.SELECT;\n    } else if (counter > 4000) {\n      LockSupport.parkNanos(1000);\n    } else if (counter > 3000) {\n      Thread.yield();\n    } else if (counter > 2000) {\n      LockSupport.parkNanos(1);\n    }\n\n    return SelectStrategy.CONTINUE;\n  }\n}\n"
  },
  {
    "path": "couchbase2/src/main/java/com/yahoo/ycsb/db/couchbase2/package-info.java",
    "content": "/*\n * Copyright (c) 2015 - 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"http://www.couchbase.com/\">Couchbase</a>, new driver.\n */\npackage com.yahoo.ycsb.db.couchbase2;\n\n"
  },
  {
    "path": "distribution/pom.xml",
    "content": "<!-- \nCopyright (c) 2012 - 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>root</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>ycsb</artifactId>\n  <name>YCSB Release Distribution Builder</name>\n  <packaging>pom</packaging>\n\n  <description>\n    This module creates the release package of the YCSB with all DB library bindings.\n    It is only used by the build process and does not contain any real\n    code of itself.\n  </description>\n  <dependencies>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>accumulo1.6-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>accumulo1.7-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      
<artifactId>accumulo1.8-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>aerospike-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>arangodb-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>arangodb3-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>asynchbase-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>cassandra-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>cloudspanner-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>couchbase-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>couchbase2-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>azuredocumentdb-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>azuretablestorage-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>dynamodb-binding</artifactId>\n      
<version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>elasticsearch-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>elasticsearch5-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>geode-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>googledatastore-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>googlebigtable-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>hbase098-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>hbase10-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>hbase12-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>hypertable-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>infinispan-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>jdbc-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      
<groupId>com.yahoo.ycsb</groupId>\n      <artifactId>kudu-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>memcached-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>mongodb-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>nosqldb-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>orientdb-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>rados-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>redis-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>rest-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>riak-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>s3-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>solr-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>solr6-binding</artifactId>\n      <version>${project.version}</version>\n    
</dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>tarantool-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n<!--\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>voldemort-binding</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n-->\n  </dependencies>\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-assembly-plugin</artifactId>\n        <version>${maven.assembly.version}</version>\n        <configuration>\n          <descriptors>\n            <descriptor>src/main/assembly/distribution.xml</descriptor>\n          </descriptors>\n          <appendAssemblyId>false</appendAssemblyId>\n          <tarLongFileMode>posix</tarLongFileMode>\n        </configuration>\n        <executions>\n          <execution>\n            <phase>package</phase>\n            <goals>\n              <goal>single</goal>\n            </goals>\n          </execution>\n        </executions>\n      </plugin>\n\n    </plugins>\n  </build>\n\n</project>\n\n"
  },
  {
    "path": "distribution/src/main/assembly/distribution.xml",
    "content": "<!-- \nCopyright (c) 2012 - 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd\">\n  <id>package</id>\n  <formats>\n    <format>tar.gz</format>\n  </formats>\n  <includeBaseDirectory>true</includeBaseDirectory>\n  <fileSets>\n    <fileSet>\n      <directory>..</directory>\n      <outputDirectory>.</outputDirectory>\n      <fileMode>0644</fileMode>\n      <includes>\n        <include>README</include>\n        <include>LICENSE.txt</include>\n        <include>NOTICE.txt</include>\n      </includes>\n    </fileSet>\n    <fileSet>\n      <directory>../bin</directory>\n      <outputDirectory>bin</outputDirectory>\n      <fileMode>0755</fileMode>\n      <includes>\n        <include>ycsb*</include>\n      </includes>\n    </fileSet>\n    <fileSet>\n      <directory>../bin</directory>\n      <outputDirectory>bin</outputDirectory>\n      <fileMode>0644</fileMode>\n      <includes>\n        <include>bindings.properties</include>\n      </includes>\n    </fileSet>\n    <fileSet>\n      <directory>../workloads</directory>\n      <outputDirectory>workloads</outputDirectory>\n      <fileMode>0644</fileMode>\n    </fileSet>\n 
 </fileSets>\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>lib</outputDirectory>\n      <includes>\n        <include>com.yahoo.ycsb:core</include>\n      </includes>\n      <scope>runtime</scope>\n      <useProjectArtifact>false</useProjectArtifact>\n      <useProjectAttachments>false</useProjectAttachments>\n      <useTransitiveDependencies>true</useTransitiveDependencies>\n      <useTransitiveFiltering>true</useTransitiveFiltering>\n    </dependencySet>\n  </dependencySets>\n  <moduleSets>\n    <moduleSet>\n      <useAllReactorProjects>true</useAllReactorProjects>\n      <includeSubModules>true</includeSubModules>\n      <excludes>\n        <exclude>com.yahoo.ycsb:core</exclude>\n        <exclude>com.yahoo.ycsb:binding-parent</exclude>\n        <exclude>com.yahoo.ycsb:datastore-specific-descriptor</exclude>\n        <exclude>com.yahoo.ycsb:ycsb</exclude>\n        <exclude>com.yahoo.ycsb:root</exclude>\n      </excludes>\n      <sources>\n        <fileSets>\n          <fileSet>\n            <includes>\n              <include>README.md</include>\n            </includes>\n          </fileSet>\n          <fileSet>\n            <outputDirectory>conf</outputDirectory>\n            <directory>src/main/conf</directory>\n          </fileSet>\n          <fileSet>\n            <outputDirectory>lib</outputDirectory>\n            <directory>target/dependency</directory>\n          </fileSet>\n        </fileSets>\n      </sources>\n      <binaries>\n        <includeDependencies>false</includeDependencies>\n        <outputDirectory>${module.artifactId}/lib</outputDirectory>\n        <unpack>false</unpack>\n      </binaries>\n    </moduleSet>\n  </moduleSets>\n</assembly>\n"
  },
  {
    "path": "doc/coreproperties.html",
    "content": "<HTML>\r\n<!-- \r\nCopyright (c) 2010 Yahoo! Inc. All rights reserved.\r\n\r\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\r\nmay not use this file except in compliance with the License. You\r\nmay obtain a copy of the License at\r\n\r\nhttp://www.apache.org/licenses/LICENSE-2.0\r\n\r\nUnless required by applicable law or agreed to in writing, software\r\ndistributed under the License is distributed on an \"AS IS\" BASIS,\r\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\nimplied. See the License for the specific language governing\r\npermissions and limitations under the License. See accompanying\r\nLICENSE file.\r\n-->\r\n\r\n<HEAD>\r\n<TITLE>YCSB - Core workload package properties</TITLE>\r\n</HEAD>\r\n<BODY>\r\n<H1><img src=\"images/ycsb.jpg\" width=150> Yahoo! Cloud Serving Benchmark</H1>\r\n<H3>Version 0.1.2</H3>\r\n<HR>\r\n<A HREF=\"index.html\">Home</A> - <A href=\"coreworkloads.html\">Core workloads</A> - <a href=\"tipsfaq.html\">Tips and FAQ</A>\r\n<HR>\r\n<H2>Core workload package properties</h2>\r\nThe property files used with the core workload generator can specify values for the following properties:<p>\r\n<UL>\r\n<LI><b>fieldcount</b>: the number of fields in a record (default: 10) \r\n<LI><b>fieldlength</b>: the size of each field (default: 100) \r\n<LI><b>readallfields</b>: should reads read all fields (true) or just one (false) (default: true) \r\n<LI><b>readproportion</b>: what proportion of operations should be reads (default: 0.95) \r\n<LI><b>updateproportion</b>: what proportion of operations should be updates (default: 0.05) \r\n<LI><b>insertproportion</b>: what proportion of operations should be inserts (default: 0) \r\n<LI><b>scanproportion</b>: what proportion of operations should be scans (default: 0) \r\n<LI><b>readmodifywriteproportion</b>: what proportion of operations should be read a record, modify it, write it back (default: 0) \r\n<LI><b>requestdistribution</b>: what 
distribution should be used to select the records to operate on - uniform, zipfian or latest (default: uniform) \r\n<LI><b>maxscanlength</b>: for scans, what is the maximum number of records to scan (default: 1000) \r\n<LI><b>scanlengthdistribution</b>: for scans, what distribution should be used to choose the number of records to scan, for each scan, between 1 and maxscanlength (default: uniform) \r\n<LI><b>insertorder</b>: should records be inserted in order by key (\"ordered\"), or in hashed order (\"hashed\") (default: hashed) \r\n</UL>\r\n<HR>\r\nYCSB - Yahoo! Research - Contact cooperb@yahoo-inc.com.\r\n</BODY>\r\n</HTML>\r\n"
  },
  {
    "path": "doc/coreworkloads.html",
    "content": "<HTML>\r\n<!-- \r\nCopyright (c) 2010 Yahoo! Inc. All rights reserved.\r\n\r\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\r\nmay not use this file except in compliance with the License. You\r\nmay obtain a copy of the License at\r\n\r\nhttp://www.apache.org/licenses/LICENSE-2.0\r\n\r\nUnless required by applicable law or agreed to in writing, software\r\ndistributed under the License is distributed on an \"AS IS\" BASIS,\r\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\nimplied. See the License for the specific language governing\r\npermissions and limitations under the License. See accompanying\r\nLICENSE file.\r\n-->\r\n\r\n<HEAD>\r\n<TITLE>YCSB - Core workloads</TITLE>\r\n</HEAD>\r\n<BODY>\r\n<H1><img src=\"images/ycsb.jpg\" width=150> Yahoo! Cloud Serving Benchmark</H1>\r\n<H3>Version 0.1.2</H3>\r\n<HR>\r\n<A HREF=\"index.html\">Home</A> - <A href=\"coreworkloads.html\">Core workloads</A> - <a href=\"tipsfaq.html\">Tips and FAQ</A>\r\n<HR>\r\n<H2>Core workloads</h2>\r\nYCSB includes a set of core workloads that define a basic benchmark for cloud systems. Of course, you can define your own workloads, as described <a href=\"workload.html\">here</A>. However,\r\nthe core workloads are a useful first step, and obtaining these benchmark numbers for a variety of different systems would allow you to understand the performance\r\ntradeoffs of different systems.\r\n<P>\r\nThe core workloads consist of six different workloads:\r\n<P>\r\n<B>Workload A: Update heavy workload</B>\r\n<P>\r\nThis workload has a mix of 50/50 reads and writes. An application example is a session store recording recent actions.\r\n<P>\r\n<B>Workload B: Read mostly workload</B>\r\n<P>\r\nThis workload has a 95/5 reads/write mix. Application example: photo tagging; adding a tag is an update, but most operations are to read tags.\r\n<P>\r\n<B>Workload C: Read only</B>\r\n<P>\r\nThis workload is 100% read. 
Application example: user profile cache, where profiles are constructed elsewhere (e.g., Hadoop).\r\n<P>\r\n<B>Workload D: Read latest workload</B>\r\n<P>\r\nIn this workload, new records are inserted, and the most recently inserted records are the most popular. Application example: user status updates; people want to read the latest.\r\n<P>\r\n<B>Workload E: Short ranges</B>\r\n<P>\r\nIn this workload, short ranges of records are queried, instead of individual records. Application example: threaded conversations, where each scan is for the posts in a given thread (assumed to be clustered by thread id).\r\n<P>\r\n<B>Workload F: Read-modify-write</B>\r\n<P>\r\nIn this workload, the client will read a record, modify it, and write back the changes. Application example: user database, where user records are read and modified by the user or to record user activity.\r\n\r\n<HR>\r\n<H2>Running the workloads</H2>\r\nAll six workloads have a data set which is similar. Workloads D and E insert records during the test run. Thus, to keep the database size consistent, we recommend the following sequence:\r\n<OL>\r\n<LI>Load the database, using workload A's parameter file (workloads/workloada) and the \"-load\" switch to the client.\r\n<LI>Run workload A (using workloads/workloada and \"-t\") for a variety of throughputs.\r\n<LI>Run workload B (using workloads/workloadb and \"-t\") for a variety of throughputs.\r\n<LI>Run workload C (using workloads/workloadc and \"-t\") for a variety of throughputs. \r\n<LI>Run workload F (using workloads/workloadf and \"-t\") for a variety of throughputs.\r\n<LI>Run workload D (using workloads/workloadd and \"-t\") for a variety of throughputs. 
This workload inserts records, increasing the size of the database.\r\n<LI>Delete the data in the database.\r\n<LI>Reload the database, using workload E's parameter file (workloads/workloade) and the \"-load\" switch to the client.\r\n<LI>Run workload E (using workloads/workloade and \"-t\") for a variety of throughputs. This workload inserts records, increasing the size of the database.\r\n</OL>\r\n<HR>\r\nYCSB - Yahoo! Research - Contact cooperb@yahoo-inc.com.\r\n</BODY>\r\n</HTML>\r\n"
  },
  {
    "path": "doc/dblayer.html",
    "content": "<HTML>\r\n<!-- \r\nCopyright (c) 2010 Yahoo! Inc. All rights reserved.\r\n\r\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\r\nmay not use this file except in compliance with the License. You\r\nmay obtain a copy of the License at\r\n\r\nhttp://www.apache.org/licenses/LICENSE-2.0\r\n\r\nUnless required by applicable law or agreed to in writing, software\r\ndistributed under the License is distributed on an \"AS IS\" BASIS,\r\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\nimplied. See the License for the specific language governing\r\npermissions and limitations under the License. See accompanying\r\nLICENSE file.\r\n-->\r\n\r\n<HEAD>\r\n<TITLE>YCSB - DB Interface Layer</TITLE>\r\n</HEAD>\r\n<BODY>\r\n<H1><img src=\"images/ycsb.jpg\" width=150> Yahoo! Cloud Serving Benchmark</H1>\r\n<H3>Version 0.1.2</H3>\r\n<HR>\r\n<A HREF=\"index.html\">Home</A> - <A href=\"coreworkloads.html\">Core workloads</A> - <a href=\"tipsfaq.html\">Tips and FAQ</A>\r\n<HR>\r\n<H2>Implementing a database interface layer - overview</H2>\r\nThe database interface layer hides the details of the specific database you are benchmarking from the YCSB Client. This\r\nallows the client to generate operations like \"read record\" or \"update record\" without having to understand \r\nthe specific API of your database. Thus, it is very easy to benchmark new database systems; once you have\r\ncreated the database interface layer, the rest of the benchmark framework runs without having to change.\r\n<P>\r\nThe database interface layer is a simple abstract class that provides read, insert, update, delete and scan operations for your\r\ndatabase. Implementing a database interface layer for your database means filling out the body of each of those methods. 
Once you\r\nhave compiled your layer, you can specify the name of your implemented class on the command line (or as a property) to the YCSB Client.\r\nThe YCSB Client will load your implementation dynamically when it starts. Thus, you do not need to recompile the YCSB Client itself\r\nto add or change a database interface layer.\r\n<HR>\r\n<H2>Creating a new layer step-by-step</H2>\r\n<h3>Step 1 - Extend com.yahoo.ycsb.DB</h3>\r\nThe base class of all database interface layer implementations is com.yahoo.ycsb.DB. This is an abstract class, so you need to create a new \r\nclass which extends the DB class. Your class must have a public no-argument constructor, because the instances will be constructed inside a factory\r\nwhich will use the no-argument constructor.\r\n<P>\r\nThe YCSB Client framework will create one instance of your DB class per worker thread, but there might be multiple worker threads generating the workload,\r\nso there might be multiple instances of your DB class created.\r\n\r\n<H3>Step 2 - Implement init() if necessary</h3>\r\nYou can perform any initialization of your DB object by implementing the following method\r\n<pre> \r\npublic void init() throws DBException\r\n</pre>\r\nto perform any initialization actions. The init() method will be called once per DB instance; so if there are multiple threads, each DB instance will have init()\r\ncalled separately.\r\n<P>\r\nThe init() method should be used to set up the connection to the database and do any other initialization. In particular, you can configure your database layer \r\nusing properties passed to the YCSB Client at runtime. In fact, the YCSB Client will pass to the DB interface layer \r\nall of the \r\nproperties specified in all parameter files specified when the Client starts up. 
Thus, you can create new properties for configuring your DB interface layer, \r\nset them in your parameter files (or on the command line), and\r\nthen retrieve them inside your implementation of the DB interface layer. \r\n<P>\r\nThese properties will be passed to the DB instance <i>after</i> the constructor, so it is important to retrieve them only in the init() method and not the\r\nconstructor. You can get the set of properties using the \r\n<pre>\r\npublic Properties getProperties()\r\n</pre>\r\nmethod which is already implemented and inherited from the DB base class.\r\n\r\n<h3>Step 3 - Implement the database query and update methods</h3>\r\n\r\nThe methods that you need to implement are:\r\n\r\n<pre>\r\n  //Read a single record\r\n  public int read(String table, String key, Set&lt;String&gt; fields, HashMap&lt;String,String&gt; result);\r\n\r\n  //Perform a range scan\r\n  public int scan(String table, String startkey, int recordcount, Set&lt;String&gt; fields, Vector&lt;HashMap&lt;String,String&gt;&gt; result);\r\n\t\r\n  //Update a single record\r\n  public int update(String table, String key, HashMap&lt;String,String&gt; values);\r\n\r\n  //Insert a single record\r\n  public int insert(String table, String key, HashMap&lt;String,String&gt; values);\r\n\r\n  //Delete a single record\r\n  public int delete(String table, String key);\r\n</pre>\r\nIn each case, the method takes a table name and record key. (In the case of scan, the record key is the first key in the range to scan.) For the \r\nread methods (read() and scan()) the methods additionally take a set of fields to be read, and provide a structure (HashMap or Vector of HashMaps) to store\r\nthe returned data. For the write methods (insert() and update()) the methods take a HashMap which maps field names to values.\r\n<P>\r\nThe database should have the appropriate tables created before you run the benchmark. 
So you can assume in your implementation of the above methods\r\nthat the appropriate tables already exist, and just write code to read or write from the tables named in the \"table\" parameter.\r\n<h3>Step 4 - Compile your database interface layer</h3>\r\nYour code can be compiled separately from the compilation of the YCSB Client and framework. In particular, you can make changes to your DB class and \r\nrecompile without having to recompile the YCSB Client.\r\n<h3>Step 5 - Use it with the YCSB Client</h3>\r\nMake sure that the classes for your implementation (or a jar containing those classes) are available on your CLASSPATH, as well as any libraries/jar files used\r\nby your implementation. Now, when you run the YCSB Client, specify the \"-db\" argument on the command line and provide the fully qualified classname of your\r\nDB class. For example, to run workloada with your DB class:\r\n<pre>\r\n%  java -cp build/ycsb.jar:yourjarpath com.yahoo.ycsb.Client -t -db com.foo.YourDBClass -P workloads/workloada -P large.dat -s > transactions.dat\r\n</pre>  \r\n\r\nYou can also specify the DB interface layer using the DB property in your parameter file:\r\n<pre>\r\ndb=com.foo.YourDBClass\r\n</pre>\r\n<HR>\r\nYCSB - Yahoo! Research - Contact cooperb@yahoo-inc.com.\r\n</BODY>\r\n</HTML>\r\n"
  },
  {
    "path": "doc/index.html",
    "content": "<html>\r\n<!-- \r\nCopyright (c) 2010 Yahoo! Inc. All rights reserved.\r\n\r\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\r\nmay not use this file except in compliance with the License. You\r\nmay obtain a copy of the License at\r\n\r\nhttp://www.apache.org/licenses/LICENSE-2.0\r\n\r\nUnless required by applicable law or agreed to in writing, software\r\ndistributed under the License is distributed on an \"AS IS\" BASIS,\r\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\nimplied. See the License for the specific language governing\r\npermissions and limitations under the License. See accompanying\r\nLICENSE file.\r\n-->\r\n\r\n<head>\r\n<title>YCSB - Yahoo! Cloud Serving Benchmark</title>\r\n</head>\r\n<body>\r\n<H1><img src=\"images/ycsb.jpg\" width=150> Yahoo! Cloud Serving Benchmark</H1>\r\n<H3>Version 0.1.2</H3>\r\n<hr>\r\n<A HREF=\"index.html\">Home</A> - <A href=\"coreworkloads.html\">Core workloads</A> - <a href=\"tipsfaq.html\">Tips and FAQ</A>\r\n<HR>\r\n<UL>\r\n<LI><A href=\"#overview\">Overview</A>\r\n<LI><A href=\"#download\">Download YCSB</A>\r\n<LI><A href=\"#gettingstarted\">Getting started</A>\r\n<LI><A href=\"#extending\">Extending YCSB</A>\r\n</UL>\r\n<HR>\r\n<A name=\"overview\">\r\n<H2>Overview</H2>\r\nThere are many new serving databases available, including:\r\n<ul>\r\n<LI>BigTable\r\n<ul>\r\n<LI><A HREF=\"http://hadoop.apache.org/hbase/\">HBase</A>, <A HREF=\"http://hypertable.org/\">Hypertable</A>\r\n</ul>\r\n<LI><A HREF=\"http://www.microsoft.com/windowsazure/\">Azure</A>\r\n<LI><A HREF=\"http://incubator.apache.org/cassandra/\">Cassandra</A>\r\n<LI><A HREF=\"http://couchdb.apache.org/\">CouchDB</A>\r\n<LI><A HREF=\"http://project-voldemort.com/\">Voldemort</A>\r\n<LI><A HREF=\"http://wiki.github.com/cliffmoon/dynomite/dynomite-framework\">Dynomite</A>\r\n<li>...and many others\r\n</ul>\r\nIt is difficult to decide which system is right for your application, partially because the 
features differ between \r\nsystems, and partially because there is not an easy way to compare the performance of one system versus another.\r\n<P>\r\nThe goal of the YCSB project is to develop a framework and common set of workloads for evaluating the performance of\r\ndifferent \"key-value\" and \"cloud\" serving stores. The project comprises two things:\r\n<ul>\r\n<LI>The YCSB Client, an extensible workload generator\r\n<LI>The Core workloads, a set of workload scenarios to be executed by the generator\r\n</UL>\r\nAlthough the core workloads provide a well-rounded picture of a system's performance, the Client is extensible so that\r\nyou can define new and different workloads to examine system aspects, or application scenarios, not adequately covered by\r\nthe core workload. Similarly, the Client is extensible to support benchmarking different databases. Although we include\r\nsample code for benchmarking HBase and Cassandra, it is straightforward to write a new interface layer to benchmark\r\nyour favorite database.\r\n<P>\r\nA common use of the tool is to benchmark multiple systems and compare them. For example, you can install multiple systems\r\non the same hardware configuration, and run the same workloads against each system. Then you can plot the performance \r\nof each system (for example, as latency versus throughput curves) to see when one system does better than another.\r\n<HR>\r\n<A name=\"download\">\r\n<H2>Download YCSB</H2>\r\nYCSB is available\r\nat <A HREF=\"http://wiki.github.com/brianfrankcooper/YCSB/\">http://wiki.github.com/brianfrankcooper/YCSB/</A>. \r\n<HR>\r\n<a name=\"gettingstarted\">\r\n<H2>Getting started</H2>\r\nDetailed instructions for using YCSB are available on the GitHub wiki:\r\n<A HREF=\"http://wiki.github.com/brianfrankcooper/YCSB/getting-started\">http://wiki.github.com/brianfrankcooper/YCSB/getting-started</A>.\r\n<HR>\r\n<A name=\"extending\">\r\n<H1>Extending YCSB</H1>\r\nYCSB is designed to be extensible. 
It is easy to add a new database interface layer to support benchmarking a new database. It is also easy to define new workloads.\r\n<ul>\r\n<li><A HREF=\"dblayer.html\">DB Interface Layer</a>\r\n<li><A HREF=\"workload.html\">Implementing new workloads</a>\r\n</UL>\r\nMore details about the entire class structure of YCSB are available here:\r\n<UL>\r\n<LI><A HREF=\"javadoc/index.html\">YCSB javadoc documentation</A>\r\n</ul>  \r\n<HR>\r\nYCSB - Yahoo! Research - Contact cooperb@yahoo-inc.com.\r\n</body>\r\n</html>\r\n"
  },
  {
    "path": "doc/parallelclients.html",
    "content": "<HTML>\r\n<!-- \r\nCopyright (c) 2010 Yahoo! Inc. All rights reserved.\r\n\r\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\r\nmay not use this file except in compliance with the License. You\r\nmay obtain a copy of the License at\r\n\r\nhttp://www.apache.org/licenses/LICENSE-2.0\r\n\r\nUnless required by applicable law or agreed to in writing, software\r\ndistributed under the License is distributed on an \"AS IS\" BASIS,\r\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\nimplied. See the License for the specific language governing\r\npermissions and limitations under the License. See accompanying\r\nLICENSE file.\r\n-->\r\n\r\n<HEAD>\r\n<TITLE>YCSB - Parallel clients</TITLE>\r\n</HEAD>\r\n<BODY>\r\n<H1><img src=\"images/ycsb.jpg\" width=150> Yahoo! Cloud Serving Benchmark</H1>\r\n<H3>Version 0.1.2</H3>\r\n<HR>\r\n<A HREF=\"index.html\">Home</A> - <A href=\"coreworkloads.html\">Core workloads</A> - <a href=\"tipsfaq.html\">Tips and FAQ</A>\r\n<HR>\r\n<H2>Running multiple clients in parallel</h2>\r\nIt is straightforward to run the transaction phase of the workload from multiple servers - just start up clients on different servers, each running the same workload. Each client will\r\nproduce performance statistics when it is done, and you'll have to aggregate these individual files into a single set of results.\r\n<P>\r\nIn some cases it makes sense to load the database using multiple servers. In this case, you will want to partition the records to be loaded among the clients. Normally, YCSB just loads\r\nall of the records (as defined by the recordcount property). 
However, if you want to partition the load you need to additionally specify two other properties for each client:\r\n<UL>\r\n<LI><b>insertstart</b>: The index of the record to start at.\r\n<LI><b>insertcount</b>: The number of records to insert.\r\n</UL>\r\nThese properties can be specified in a property file or on the command line using the -p option.\r\n<P>\r\nFor example, imagine you want to load 100 million records (so recordcount=100000000). Imagine you want to load with four clients. For the first client:\r\n<pre>\r\ninsertstart=0\r\ninsertcount=25000000\r\n</pre>\r\nFor the second client:\r\n<pre>\r\ninsertstart=25000000\r\ninsertcount=25000000\r\n</pre>\r\nFor the third client:\r\n<pre>\r\ninsertstart=50000000\r\ninsertcount=25000000\r\n</pre>\r\nAnd for the fourth client:\r\n<pre>\r\ninsertstart=75000000\r\ninsertcount=25000000\r\n</pre>\r\n<HR>\r\nYCSB - Yahoo! Research - Contact cooperb@yahoo-inc.com.\r\n</body>\r\n</html>\r\n"
  },
  {
    "path": "doc/tipsfaq.html",
    "content": "<HTML>\r\n<!-- \r\nCopyright (c) 2010 Yahoo! Inc. All rights reserved.\r\n\r\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\r\nmay not use this file except in compliance with the License. You\r\nmay obtain a copy of the License at\r\n\r\nhttp://www.apache.org/licenses/LICENSE-2.0\r\n\r\nUnless required by applicable law or agreed to in writing, software\r\ndistributed under the License is distributed on an \"AS IS\" BASIS,\r\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\nimplied. See the License for the specific language governing\r\npermissions and limitations under the License. See accompanying\r\nLICENSE file.\r\n-->\r\n\r\n<HEAD>\r\n<TITLE>YCSB - Tips and FAQ</TITLE>\r\n</HEAD>\r\n<BODY>\r\n<H1><img src=\"images/ycsb.jpg\" width=150> Yahoo! Cloud Serving Benchmark</H1>\r\n<H3>Version 0.1.2</H3>\r\n<HR>\r\n<A HREF=\"index.html\">Home</A> - <A href=\"coreworkloads.html\">Core workloads</A> - <a href=\"tipsfaq.html\">Tips and FAQ</A>\r\n<HR>\r\n<H2>Tips</h2>\r\n<B>Tip 1 - Carefully adjust the number of threads</B>\r\n<P>\r\nThe number of threads determines how much workload you can generate against the database. Imagine that you are trying to run a test with 10,000 operations per second, \r\nbut you are only achieving 8,000 operations per second. Is this because the database can't keep up with the load? Not necessarily. Imagine that you are running with 100\r\nclient threads (e.g. \"-threads 100\") and each operation is taking 12 milliseconds on average. Each thread will only be able to generate 83 operations per second, because each\r\nthread operates sequentially. Over 100 threads, your client will only generate 8300 operations per second, even if the database can support more. 
Increasing the number of threads\r\nensures there are enough parallel clients hitting the database so that the database, not the client, is the bottleneck.\r\n<P>\r\nTo calculate the number of threads needed, you should have some idea of the expected latency. For example, at 10,000 operations per second, we might expect the database\r\nto have a latency of 10-30 milliseconds on average. So to generate 10,000 operations per second, you will need (Ops per sec / (1000 / avg latency in ms) ), or (10000/(1000/30))=300 threads.\r\nIn fact, to be conservative, you might consider having 400 threads. Although this is a lot of threads, each thread will spend most of its time waiting for the database to respond,\r\nso the context switching overhead will be low. \r\n<P>\r\nExperiment with increasing the number of threads, especially if you find you are not reaching your target throughput. Eventually, of course, you will saturate the database\r\nand there will be no way to increase the number of threads to get more throughput (in fact, increasing the number of client threads may make things worse) but you need to have \r\nenough threads to ensure it is the database, not the client, that is the bottleneck.\r\n<HR>\r\nYCSB - Yahoo! Research - Contact cooperb@yahoo-inc.com.\r\n</BODY>\r\n</HTML>\r\n"
  },
  {
    "path": "doc/workload.html",
    "content": "<HTML>\r\n<!-- \r\nCopyright (c) 2010 Yahoo! Inc. All rights reserved.\r\n\r\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\r\nmay not use this file except in compliance with the License. You\r\nmay obtain a copy of the License at\r\n\r\nhttp://www.apache.org/licenses/LICENSE-2.0\r\n\r\nUnless required by applicable law or agreed to in writing, software\r\ndistributed under the License is distributed on an \"AS IS\" BASIS,\r\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\nimplied. See the License for the specific language governing\r\npermissions and limitations under the License. See accompanying\r\nLICENSE file.\r\n-->\r\n\r\n<HEAD>\r\n<TITLE>YCSB - Implementing new workloads</TITLE>\r\n</HEAD>\r\n<BODY>\r\n<H1><img src=\"images/ycsb.jpg\" width=150> Yahoo! Cloud Serving Benchmark</H1>\r\n<H3>Version 0.1.2</H3>\r\n<HR>\r\n<A HREF=\"index.html\">Home</A> - <A href=\"coreworkloads.html\">Core workloads</A> - <a href=\"tipsfaq.html\">Tips and FAQ</A>\r\n<HR>\r\n<H2>Implementing new workloads - overview</h2>\r\nA workload represents the load that a given application will put on the database system. For benchmarking purposes, we must define\r\nworkloads that are relatively simple compared to real applications, so that we can better reason about the benchmarking results\r\nwe get. However, a workload should be detailed enough so that once we measure the database's performance, we know what kinds of applications\r\nmight experience similar performance.\r\n<p>\r\nIn the context of YCSB, a workload defines both a <b>data set</b>, which is a set of records to be loaded into the database, and a <b>transaction set</b>,\r\nwhich is the set of read and write operations against the database. 
Creating the transactions requires understanding the structure of the records, which\r\nis why both the data and the transactions must be defined in the workload.\r\n<P>\r\nFor a complete benchmark, multiple important (but distinct) workloads might be grouped together into a <i>workload package</I>. The CoreWorkload\r\npackage included with the YCSB client is an example of such a collection of workloads. \r\n<P>\r\nTypically a workload consists of two files:\r\n<UL>\r\n<LI>A java class which contains the code to create data records and generate transactions against them\r\n<LI>A parameter file which tunes the specifics of the workload\r\n</UL>\r\nFor example, a workload class file might generate some combination of read and update operations against the database. The parameter\r\nfile might specify whether the mix of reads and updates is 50/50, 80/20, etc.\r\n<P>\r\nThere are two ways to create a new workload or package of workloads.\r\n<P>\r\n<h3>Option 1: new parameter files</h3>\r\n<P>\r\nThe core workloads included with YCSB are defined by a set of parameter files (workloada, workloadb, etc.) You can create your own parameter file with new values\r\nfor the read/write mix, request distribution, etc. For example, the workloada file has the following contents:\r\n\r\n<pre>\r\nworkload=com.yahoo.ycsb.workloads.CoreWorkload\r\n\r\nreadallfields=false\r\n\r\nreadproportion=0.5\r\nupdateproportion=0.5\r\nscanproportion=0\r\ninsertproportion=0\r\n\r\nrequestdistribution=zipfian\r\n</pre>\r\n\r\nCreating a new file that changes any of these values will produce a new workload with different characteristics. The set of properties that can be specified is <a href=\"coreproperties.html\">here</a>.\r\n<P>\r\n<h3>Option 2: new java class</h3>\r\n<P>\r\nThe workload java class will be created by the YCSB Client at runtime, and will use an instance of the <a href=\"dblayer.html\">DB interface layer</A>\r\nto generate the actual operations against the database. 
Thus, the java class only needs to decide (based on settings in the parameter file) what records\r\nto create for the data set, and what reads, updates etc. to generate for the transaction phase. The YCSB Client will take care of creating the workload java class,\r\npassing it to a worker thread for execution, deciding how many records to create or how many operations to execute, and measuring the resulting \r\nperformance.\r\n<P>\r\nIf the CoreWorkload (or some other existing package) does not have the ability to generate the workload you desire, you can create a new workload java class.\r\nThis is done using the following steps:\r\n<H3>Step 1. Extend <a href=\"javadoc/com/yahoo/ycsb/Workload.html\">com.yahoo.ycsb.Workload</A></H3>\r\nThe base class of all workload classes is com.yahoo.ycsb.Workload. This is an abstract class, so you create a new workload that extends this base class. Your\r\nclass must have a public no-argument constructor, because the workload will be created in a factory using the no-argument constructor. The YCSB Client will\r\ncreate one Workload object for each worker thread, so if you run the Client with multiple threads, multiple workload objects will be created.\r\n<H3>Step 2. Write code to initialize your workload class</H3>\r\nThe parameter file will be passed to the workload object after the constructor has been called, so if you are using any parameter properties, you must\r\nuse them to initialize your workload using either the init() or initThread() methods. \r\n<UL>\r\n<LI>init() - called once for all workload instances. Used to initialize any objects shared by all threads.\r\n<LI>initThread() - called once per workload instance in the context of the worker thread. Used to initialize any objects specific to a single Workload instance \r\nand single worker thread.\r\n</UL>\r\nIn either case, you can access the parameter properties using the Properties object passed in to both methods. 
These properties will include all properties defined\r\nin any property file passed to the YCSB Client or defined on the client command line.\r\n<H3>Step 3. Write any cleanup code</H3>\r\nThe cleanup() method is called once for all workload instances, after the workload has completed.\r\n<H3>Step 4. Define the records to be inserted</H3>\r\nThe YCSB Client will call the doInsert() method once for each record to be inserted into the database. So you should implement this method\r\nto create and insert a single record. The DB object you can use to perform the insert will be passed to the doInsert() method.\r\n<H3>Step 5. Define the transactions</H3>\r\nThe YCSB Client will call the doTransaction() method once for every transaction that is to be executed. So you should implement this method to execute\r\na single transaction, using the DB object passed in to access the database. Your implementation of this method can choose between different types of \r\ntransactions, and can make multiple calls to the DB interface layer. However, each invocation of the method should be a logical transaction. In particular, when you run the client,\r\nyou'll specify the number of operations to execute; if you request 1000 operations then doTransaction() will be executed 1000 times.\r\n<P>\r\nNote that you do not have to do any throttling of your transactions (or record insertions) to achieve the target throughput. The YCSB Client will do the throttling\r\nfor you.\r\n<P>\r\nNote also that it is allowable to insert records inside the doTransaction() method. You might do this if you wish the database to grow during the workload. In this case,\r\nthe initial dataset will be constructed using calls to the doInsert() method, while additional records would be inserted using calls to the doTransaction() method.\r\n<h3>Step 6 - Measure latency, if necessary</h3>\r\nThe YCSB client will automatically measure the latency and throughput of database operations, even for workloads that you define. 
However, the client will only measure\r\nthe latency of individual calls to the database, not of more complex transactions. Consider for example a workload that reads a record, modifies it, and writes\r\nthe changes back to the database. The YCSB client will automatically measure the latency of the read operation to the database; and separately will automatically measure the \r\nlatency of the update operation. However, if you would like to measure the latency of the entire read-modify-write transaction, you will need to add an additional timing step to your\r\ncode.\r\n<P>\r\nMeasurements are gathered using the Measurements.measure() call. There is a singleton instance of Measurements, which can be obtained using the \r\nMeasurements.getMeasurements() static method. For each metric you are measuring, you need to assign a string tag; this tag will label the resulting\r\naverage, min, max, histogram etc. measurements output by the tool at the end of the workload. For example, consider the following code:\r\n\r\n<pre>\r\nlong st=System.currentTimeMillis();\r\ndb.read(TABLENAME,keyname,fields,new HashMap<String,String>());\r\ndb.update(TABLENAME,keyname,values);\r\nlong en=System.currentTimeMillis();\r\nMeasurements.getMeasurements().measure(\"READ-MODIFY-WRITE\", (int)(en-st));\r\n</pre>\r\n\r\nIn this code, the calls to System.currentTimeMillis() are used to time the read and write transaction. Then, the call to measure() reports the latency to the \r\nmeasurement component. \r\n<p>\r\nUsing this pattern, your custom measurements will be gathered and aggregated using the same mechanism that is used to gather measurements for individual READ, UPDATE etc. operations.\r\n\r\n<h3>Step 7 - Use it with the YCSB Client</h3>\r\nMake sure that the classes for your implementation (or a jar containing those classes) are available on your CLASSPATH, as well as any libraries/jar files used\r\nby your implementation. 
Now, when you run the YCSB Client, specify the \"workload\" property to provide the fully qualified classname of your\r\nworkload class. For example:\r\n\r\n<pre>\r\nworkload=com.foo.YourWorkloadClass\r\n</pre>\r\n<HR>\r\nYCSB - Yahoo! Research - Contact cooperb@yahoo-inc.com.\r\n</body>\r\n</html>\r\n"
  },
  {
    "path": "dynamodb/README.md",
    "content": "<!--\nCopyright (c) 2010 Yahoo! Inc., 2012 - 2015 YCSB contributors.\nAll rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# DynamoDB Binding\n\nhttp://aws.amazon.com/documentation/dynamodb/\n\n## Configure\n\n    YCSB_HOME - YCSB home directory\n    DYNAMODB_HOME - Amazon DynamoDB package files\n\nPlease refer to https://github.com/brianfrankcooper/YCSB/wiki/Using-the-Database-Libraries\nfor more information on setup.\n\n## Benchmark\n\n    $YCSB_HOME/bin/ycsb load dynamodb -P workloads/workloada -P dynamodb.properties\n    $YCSB_HOME/bin/ycsb run dynamodb -P workloads/workloada -P dynamodb.properties\n\n## Properties\n\n    $DYNAMODB_HOME/conf/dynamodb.properties\n    $DYNAMODB_HOME/conf/AWSCredentials.properties\n\n## FAQs\n* Why is the recommended workload distribution set to 'uniform'?\n    This is to conform with the best practices for using DynamoDB: a uniform,\nevenly distributed workload is the recommended pattern for scaling and\ngetting predictable performance out of DynamoDB.\n\nFor more information refer to\nhttp://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/BestPractices.html\n\n* How does workload size affect provisioned throughput?\n    The default payload size requires double the provisioned throughput to execute\nthe workload. 
This translates to double the provisioned throughput cost for testing.\nThe default item size in YCSB is 1000 bytes plus metadata overhead, which makes the\nitem exceed 1024 bytes. DynamoDB charges one capacity unit per 1024 bytes for reads\nor writes. An item that is greater than 1024 bytes but less than or equal to 2048 bytes\nwould cost 2 capacity units. With a reduced payload size, each request would cost\n1 capacity unit as opposed to 2, reducing the cost of running the benchmark.\n\nFor more information refer to\nhttp://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/WorkingWithDDTables.html\n\n* How do you know if DynamoDB throttling is affecting benchmarking?\n    Monitor CloudWatch for ThrottledRequests; if ThrottledRequests is greater\nthan zero, either increase the DynamoDB table's provisioned throughput or reduce\nYCSB throughput by lowering the YCSB target throughput, adjusting the number of YCSB\nclient threads, or a combination of both.\n\nFor more information please refer to\nhttps://github.com/brianfrankcooper/YCSB/blob/master/doc/tipsfaq.html\n\nWhen requests are throttled, latency measurements by YCSB can increase.\n\nPlease refer to http://aws.amazon.com/dynamodb/faqs/ for more information.\n\nPlease refer to Amazon DynamoDB docs here:\nhttp://aws.amazon.com/documentation/dynamodb/\n"
  },
  {
    "path": "dynamodb/conf/AWSCredentials.properties",
    "content": "# Copyright (c) 2012 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# Fill in your AWS Access Key ID and Secret Access Key\n# http://aws.amazon.com/security-credentials\n#accessKey =\n#secretKey =\n"
  },
  {
    "path": "dynamodb/conf/dynamodb.properties",
    "content": "# Copyright (c) 2012 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n#\n# Sample property file for Amazon DynamoDB database client\n\n## Mandatory parameters\n\n# AWS credentials associated with your aws account.\n#dynamodb.awsCredentialsFile = <path to AWSCredentials.properties>\n\n# Primarykey of table 'usertable'\n#dynamodb.primaryKey = <firstname>\n\n# If you set dynamodb.primaryKeyType to HASH_AND_RANGE, you must specify the\n# hash key name of your primary key here. (see documentation below for details)\n#dynamodb.hashKeyName = <hashid>\n\n## Optional parameters\n\n# The property \"primaryKeyType\" below specifies the type of primary key\n# you have setup for the test table. There are two choices:\n# - HASH (default)\n# - HASH_AND_RANGE\n#\n# When testing the DB in HASH mode (which is the default), your table's\n# primary key must be of the \"HASH\" key type, and the name of the primary key\n# is specified via the dynamodb.primaryKey property. In this mode, all\n# keys from YCSB are hashed across multiple hash partitions and\n# performance of individual operations are good. However, query across\n# multiple items is eventually consistent in this mode and relies on the\n# global secondary index.\n#\n#\n# When testing the DB in HASH_AND_RANGE mode, your table's primary key must be\n# of the \"HASH_AND_RANGE\" key type. 
You need to specify the name of the\n# hash key via the \"dynamodb.hashKeyName\" property and you also need to\n# specify the name of the range key via the \"dynamodb.primaryKey\" property.\n# In this mode, keys supplied by YCSB will be used as the range part of\n# the primary key and the hash part of the primary key will have a fixed value.\n# Optionally you can designate the value used in the hash part of the primary\n# key via the dynamodb.hashKeyValue property.\n#\n# The purpose of the HASH_AND_RANGE mode is to benchmark the performance\n# characteristics of a single logical hash partition. This is useful because\n# so far the only practical way to do strongly consistent queries is to do them\n# in a single hash partition (a whole-table scan can be consistent, but it becomes\n# less practical when the table is very large). Therefore, for users who\n# really want strongly consistent queries, it's important for them to\n# know the performance capabilities of a single logical hash partition so\n# they can plan their application accordingly.\n\n#dynamodb.primaryKeyType = HASH\n\n# Optionally you can specify a value for the hash part of the primary key\n# when testing in HASH_AND_RANGE mode.\n#dynamodb.hashKeyValue = <some value of your choice>\n\n# Endpoint to connect to. For best latency, it is recommended\n# to choose the endpoint which is closest to the client.\n# Default is us-east-1.\n#dynamodb.endpoint = http://dynamodb.us-east-1.amazonaws.com\n\n# Strongly recommended to set to uniform. Refer to the FAQs in the README.\n#requestdistribution = uniform\n\n# Enable/disable debug messages. Defaults to false.\n# \"true\" or \"false\"\n#dynamodb.debug = false\n\n# Maximum number of concurrent connections.\n#dynamodb.connectMax = 50\n\n# Read consistency. Consistent reads are expensive and consume twice\n# as many resources as eventually consistent reads. 
Defaults to false.\n# \"true\" or \"false\"\n#dynamodb.consistentReads = false\n\n# Workload size has implications for provisioned read and write\n# capacity units. Refer to the FAQs in the README.\n#fieldcount = 10\n#fieldlength = 90\n"
  },
  {
    "path": "dynamodb/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>dynamodb-binding</artifactId>\n  <name>DynamoDB DB Binding</name>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk</artifactId>\n      <version>1.10.48</version>\n    </dependency>\n    <dependency>\n      <groupId>log4j</groupId>\n      <artifactId>log4j</artifactId>\n      <version>1.2.17</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "dynamodb/src/main/java/com/yahoo/ycsb/db/DynamoDBClient.java",
    "content": "/*\n * Copyright 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n * Copyright 2015-2016 YCSB Contributors. All Rights Reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\").\n * You may not use this file except in compliance with the License.\n * A copy of the License is located at\n *\n *  http://aws.amazon.com/apache2.0\n *\n * or in the \"license\" file accompanying this file. This file is distributed\n * on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n * express or implied. See the License for the specific language governing\n * permissions and limitations under the License.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.amazonaws.AmazonClientException;\nimport com.amazonaws.AmazonServiceException;\nimport com.amazonaws.ClientConfiguration;\nimport com.amazonaws.auth.AWSCredentials;\nimport com.amazonaws.auth.PropertiesCredentials;\nimport com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;\nimport com.amazonaws.services.dynamodbv2.model.*;\nimport com.yahoo.ycsb.*;\nimport org.apache.log4j.Level;\nimport org.apache.log4j.Logger;\n\nimport java.io.File;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Set;\nimport java.util.Vector;\n\n/**\n * DynamoDB v1.10.48 client for YCSB.\n */\n\npublic class DynamoDBClient extends DB {\n\n  /**\n   * Defines the primary key type used in this particular DB instance.\n   * <p>\n   * By default, the primary key type is \"HASH\". Optionally, the user can\n   * choose to use hash_and_range key type. 
See documentation in the\n   * dynamodb.properties file for more details.\n   */\n  private enum PrimaryKeyType {\n    HASH,\n    HASH_AND_RANGE\n  }\n\n  private AmazonDynamoDBClient dynamoDB;\n  private String primaryKeyName;\n  private PrimaryKeyType primaryKeyType = PrimaryKeyType.HASH;\n\n  // If the user chooses to use HASH_AND_RANGE as the primary key type, then\n  // the following two variables become relevant. See documentation in the\n  // dynamodb.properties file for more details.\n  private String hashKeyValue;\n  private String hashKeyName;\n\n  private boolean consistentRead = false;\n  private String endpoint = \"http://dynamodb.us-east-1.amazonaws.com\";\n  private int maxConnects = 50;\n  private static final Logger LOGGER = Logger.getLogger(DynamoDBClient.class);\n  private static final Status CLIENT_ERROR = new Status(\"CLIENT_ERROR\", \"An error occurred on the client.\");\n  private static final String DEFAULT_HASH_KEY_VALUE = \"YCSB_0\";\n\n  @Override\n  public void init() throws DBException {\n    String debug = getProperties().getProperty(\"dynamodb.debug\", null);\n\n    if (null != debug && \"true\".equalsIgnoreCase(debug)) {\n      LOGGER.setLevel(Level.DEBUG);\n    }\n\n    String configuredEndpoint = getProperties().getProperty(\"dynamodb.endpoint\", null);\n    String credentialsFile = getProperties().getProperty(\"dynamodb.awsCredentialsFile\", null);\n    String primaryKey = getProperties().getProperty(\"dynamodb.primaryKey\", null);\n    String primaryKeyTypeString = getProperties().getProperty(\"dynamodb.primaryKeyType\", null);\n    String consistentReads = getProperties().getProperty(\"dynamodb.consistentReads\", null);\n    String connectMax = getProperties().getProperty(\"dynamodb.connectMax\", null);\n\n    if (null != connectMax) {\n      this.maxConnects = Integer.parseInt(connectMax);\n    }\n\n    if (null != consistentReads && \"true\".equalsIgnoreCase(consistentReads)) {\n      this.consistentRead = true;\n    }\n\n    if 
(null != configuredEndpoint) {\n      this.endpoint = configuredEndpoint;\n    }\n\n    if (null == primaryKey || primaryKey.length() < 1) {\n      throw new DBException(\"Missing primary key attribute name, cannot continue\");\n    }\n\n    if (null != primaryKeyTypeString) {\n      try {\n        this.primaryKeyType = PrimaryKeyType.valueOf(primaryKeyTypeString.trim().toUpperCase());\n      } catch (IllegalArgumentException e) {\n        throw new DBException(\"Invalid primary key mode specified: \" + primaryKeyTypeString +\n            \". Expecting HASH or HASH_AND_RANGE.\");\n      }\n    }\n\n    if (this.primaryKeyType == PrimaryKeyType.HASH_AND_RANGE) {\n      // When the primary key type is HASH_AND_RANGE, keys used by YCSB\n      // are range keys so we can benchmark performance of individual hash\n      // partitions. In this case, the user must specify the hash key's name\n      // and optionally can designate a value for the hash key.\n\n      String configuredHashKeyName = getProperties().getProperty(\"dynamodb.hashKeyName\", null);\n      if (null == configuredHashKeyName || configuredHashKeyName.isEmpty()) {\n        throw new DBException(\"Must specify a non-empty hash key name when the primary key type is HASH_AND_RANGE.\");\n      }\n      this.hashKeyName = configuredHashKeyName;\n      this.hashKeyValue = getProperties().getProperty(\"dynamodb.hashKeyValue\", DEFAULT_HASH_KEY_VALUE);\n    }\n\n    try {\n      AWSCredentials credentials = new PropertiesCredentials(new File(credentialsFile));\n      ClientConfiguration cconfig = new ClientConfiguration();\n      cconfig.setMaxConnections(maxConnects);\n      dynamoDB = new AmazonDynamoDBClient(credentials, cconfig);\n      dynamoDB.setEndpoint(this.endpoint);\n      primaryKeyName = primaryKey;\n      LOGGER.info(\"dynamodb connection created with \" + this.endpoint);\n    } catch (Exception e1) {\n      LOGGER.error(\"DynamoDBClient.init(): Could not initialize DynamoDB client.\", e1);\n    }\n 
 }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    if (LOGGER.isDebugEnabled()) {\n      LOGGER.debug(\"readkey: \" + key + \" from table: \" + table);\n    }\n\n    GetItemRequest req = new GetItemRequest(table, createPrimaryKey(key));\n    req.setAttributesToGet(fields);\n    req.setConsistentRead(consistentRead);\n    GetItemResult res;\n\n    try {\n      res = dynamoDB.getItem(req);\n    } catch (AmazonServiceException ex) {\n      LOGGER.error(ex);\n      return Status.ERROR;\n    } catch (AmazonClientException ex) {\n      LOGGER.error(ex);\n      return CLIENT_ERROR;\n    }\n\n    if (null != res.getItem()) {\n      result.putAll(extractResult(res.getItem()));\n      if (LOGGER.isDebugEnabled()) {\n        LOGGER.debug(\"Result: \" + res.toString());\n      }\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n                     Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n\n    if (LOGGER.isDebugEnabled()) {\n      LOGGER.debug(\"scan \" + recordcount + \" records from key: \" + startkey + \" on table: \" + table);\n    }\n\n    /*\n     * On DynamoDB's scan, startkey is *exclusive*, so we need to\n     * getItem(startKey) and then use scan for the rest.\n     */\n    GetItemRequest greq = new GetItemRequest(table, createPrimaryKey(startkey));\n    greq.setAttributesToGet(fields);\n\n    GetItemResult gres;\n\n    try {\n      gres = dynamoDB.getItem(greq);\n    } catch (AmazonServiceException ex) {\n      LOGGER.error(ex);\n      return Status.ERROR;\n    } catch (AmazonClientException ex) {\n      LOGGER.error(ex);\n      return CLIENT_ERROR;\n    }\n\n    if (null != gres.getItem()) {\n      result.add(extractResult(gres.getItem()));\n    }\n\n    int count = 1; // startKey is done, rest to go.\n\n    Map<String, AttributeValue> startKey = createPrimaryKey(startkey);\n    ScanRequest req = 
new ScanRequest(table);\n    req.setAttributesToGet(fields);\n    while (count < recordcount) {\n      req.setExclusiveStartKey(startKey);\n      req.setLimit(recordcount - count);\n      ScanResult res;\n      try {\n        res = dynamoDB.scan(req);\n      } catch (AmazonServiceException ex) {\n        LOGGER.error(ex);\n        return Status.ERROR;\n      } catch (AmazonClientException ex) {\n        LOGGER.error(ex);\n        return CLIENT_ERROR;\n      }\n\n      count += res.getCount();\n      for (Map<String, AttributeValue> items : res.getItems()) {\n        result.add(extractResult(items));\n      }\n      startKey = res.getLastEvaluatedKey();\n\n    }\n\n    return Status.OK;\n  }\n\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    if (LOGGER.isDebugEnabled()) {\n      LOGGER.debug(\"updatekey: \" + key + \" from table: \" + table);\n    }\n\n    Map<String, AttributeValueUpdate> attributes = new HashMap<>(values.size());\n    for (Entry<String, ByteIterator> val : values.entrySet()) {\n      AttributeValue v = new AttributeValue(val.getValue().toString());\n      attributes.put(val.getKey(), new AttributeValueUpdate().withValue(v).withAction(\"PUT\"));\n    }\n\n    UpdateItemRequest req = new UpdateItemRequest(table, createPrimaryKey(key), attributes);\n\n    try {\n      dynamoDB.updateItem(req);\n    } catch (AmazonServiceException ex) {\n      LOGGER.error(ex);\n      return Status.ERROR;\n    } catch (AmazonClientException ex) {\n      LOGGER.error(ex);\n      return CLIENT_ERROR;\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    if (LOGGER.isDebugEnabled()) {\n      LOGGER.debug(\"insertkey: \" + primaryKeyName + \"-\" + key + \" from table: \" + table);\n    }\n\n    Map<String, AttributeValue> attributes = createAttributes(values);\n    // adding primary key\n    attributes.put(primaryKeyName, new 
AttributeValue(key));\n    if (primaryKeyType == PrimaryKeyType.HASH_AND_RANGE) {\n      // If the primary key type is HASH_AND_RANGE, then what has been put\n      // into the attributes map above is the range key part of the primary\n      // key, we still need to put in the hash key part here.\n      attributes.put(hashKeyName, new AttributeValue(hashKeyValue));\n    }\n\n    PutItemRequest putItemRequest = new PutItemRequest(table, attributes);\n    try {\n      dynamoDB.putItem(putItemRequest);\n    } catch (AmazonServiceException ex) {\n      LOGGER.error(ex);\n      return Status.ERROR;\n    } catch (AmazonClientException ex) {\n      LOGGER.error(ex);\n      return CLIENT_ERROR;\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    if (LOGGER.isDebugEnabled()) {\n      LOGGER.debug(\"deletekey: \" + key + \" from table: \" + table);\n    }\n\n    DeleteItemRequest req = new DeleteItemRequest(table, createPrimaryKey(key));\n\n    try {\n      dynamoDB.deleteItem(req);\n    } catch (AmazonServiceException ex) {\n      LOGGER.error(ex);\n      return Status.ERROR;\n    } catch (AmazonClientException ex) {\n      LOGGER.error(ex);\n      return CLIENT_ERROR;\n    }\n    return Status.OK;\n  }\n\n  private static Map<String, AttributeValue> createAttributes(Map<String, ByteIterator> values) {\n    Map<String, AttributeValue> attributes = new HashMap<>(values.size() + 1);\n    for (Entry<String, ByteIterator> val : values.entrySet()) {\n      attributes.put(val.getKey(), new AttributeValue(val.getValue().toString()));\n    }\n    return attributes;\n  }\n\n  private HashMap<String, ByteIterator> extractResult(Map<String, AttributeValue> item) {\n    if (null == item) {\n      return null;\n    }\n    HashMap<String, ByteIterator> rItems = new HashMap<>(item.size());\n\n    for (Entry<String, AttributeValue> attr : item.entrySet()) {\n      if (LOGGER.isDebugEnabled()) {\n        
LOGGER.debug(String.format(\"Result- key: %s, value: %s\", attr.getKey(), attr.getValue()));\n      }\n      rItems.put(attr.getKey(), new StringByteIterator(attr.getValue().getS()));\n    }\n    return rItems;\n  }\n\n  private Map<String, AttributeValue> createPrimaryKey(String key) {\n    Map<String, AttributeValue> k = new HashMap<>();\n    if (primaryKeyType == PrimaryKeyType.HASH) {\n      k.put(primaryKeyName, new AttributeValue().withS(key));\n    } else if (primaryKeyType == PrimaryKeyType.HASH_AND_RANGE) {\n      k.put(hashKeyName, new AttributeValue().withS(hashKeyValue));\n      k.put(primaryKeyName, new AttributeValue().withS(key));\n    } else {\n      throw new RuntimeException(\"Assertion Error: impossible primary key type\");\n    }\n    return k;\n  }\n}\n"
  },
  {
    "path": "dynamodb/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright 2015-2016 YCSB Contributors. All Rights Reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"https://aws.amazon.com/dynamodb/\">DynamoDB</a>.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "dynamodb/src/main/resources/log4j.properties",
    "content": "# Copyright (c) 2012 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# define the console appender\nlog4j.appender.consoleAppender = org.apache.log4j.ConsoleAppender\n\n# now define the layout for the appender\nlog4j.appender.consoleAppender.layout = org.apache.log4j.PatternLayout\nlog4j.appender.consoleAppender.layout.ConversionPattern=%-4r [%t] %-5p %c %x -%m%n\n\n# now attach the console appender to the root logger, which means all log\n# messages will go to this appender\nlog4j.rootLogger = INFO, consoleAppender\n"
  },
  {
    "path": "elasticsearch/README.md",
    "content": "<!--\nCopyright (c) 2012 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on Elasticsearch running locally.\n\n### 1. Set Up YCSB\n\nClone the YCSB git repository and compile:\n\n    git clone git://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn clean package\n\n### 2. Run YCSB\n\nNow you are ready to run! First, load the data:\n\n    ./bin/ycsb load elasticsearch -s -P workloads/workloada -p path.home=<path>\n\nThen, run the workload:\n\n    ./bin/ycsb run elasticsearch -s -P workloads/workloada -p path.home=<path>\n\nNote that the `<path>` specified in each execution should be the same.\n\nThe Elasticsearch binding has two modes of operation, embedded mode and remote\nmode. In embedded mode, the client creates an embedded instance of\nElasticsearch that uses the specified `<path>` to persist data between\nexecutions.\n\nIn remote mode, the client will hit a standalone instance of Elasticsearch. To\nuse remote mode, add the flag `-p es.remote=true` and specify a hosts list via\n`-p es.hosts.list=<hostname1:port1>,...,<hostnamen:portn>`:\n\n    ./bin/ycsb run elasticsearch -s -P workloads/workloada -p es.remote=true \\\n    -p es.hosts.list=<hostname1:port1>,...,<hostnamen:portn>\n\nNote that `es.hosts.list` defaults to `localhost:9300`. 
For further\nconfiguration see below.\n\n### Defaults Configuration\nThe default settings for the Elasticsearch node that is created are as follows:\n\n    cluster.name=es.ycsb.cluster\n    es.index.key=es.ycsb\n    es.number_of_shards=1\n    es.number_of_replicas=0\n    es.remote=false\n    es.newdb=false\n    es.hosts.list=localhost:9300 (only applies if es.remote=true)\n\n### Custom Configuration\nIf you wish to customize the settings used to create the Elasticsearch node,\nyou can create a new property file that contains your desired Elasticsearch\nnode settings and pass it in via a parameter to the 'bin/ycsb' script. Note that\nthe default properties will be kept if you don't explicitly overwrite them.\n\nAssuming that we have a properties file named \"myproperties.data\" that contains\ncustom Elasticsearch node configuration, you can execute the following to\npass it into the Elasticsearch client:\n\n    ./bin/ycsb run elasticsearch -P workloads/workloada -P myproperties.data -s\n\nIf you wish to change the default index name, you can set the following property:\n\n    es.index.key=my_index_key\n\nIf you wish to run against a remote cluster, you can set the following property:\n\n    es.remote=true\n\nBy default this will use localhost:9300 as a seed node to discover the cluster.\nYou can also specify\n\n    es.hosts.list=(\\w+:\\d+)+\n\n(a comma-separated list of host/port pairs) to change this.\n"
  },
  {
    "path": "elasticsearch/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n    <modelVersion>4.0.0</modelVersion>\n    <parent>\n        <groupId>com.yahoo.ycsb</groupId>\n        <artifactId>binding-parent</artifactId>\n        <version>0.14.0-SNAPSHOT</version>\n        <relativePath>../binding-parent</relativePath>\n    </parent>\n\n    <artifactId>elasticsearch-binding</artifactId>\n    <name>Elasticsearch Binding</name>\n    <packaging>jar</packaging>\n    <properties>\n        <elasticsearch-version>2.4.0</elasticsearch-version>\n    </properties>\n    <dependencies>\n        <dependency>\n        <!-- jna is supported in ES and will be used when provided \n             otherwise a fallback is used -->    \n            <groupId>net.java.dev.jna</groupId>\n            <artifactId>jna</artifactId>\n            <version>4.1.0</version>\n        </dependency>\n        <dependency>\n            <groupId>com.yahoo.ycsb</groupId>\n            <artifactId>core</artifactId>\n            <version>${project.version}</version>\n            <scope>provided</scope>\n        </dependency>\n        <dependency>\n            
<groupId>org.elasticsearch</groupId>\n            <artifactId>elasticsearch</artifactId>\n            <version>${elasticsearch-version}</version>\n        </dependency>\n        <dependency>\n            <groupId>junit</groupId>\n            <artifactId>junit</artifactId>\n            <version>4.12</version>\n            <scope>test</scope>\n        </dependency>\n    </dependencies>\n</project>\n"
  },
  {
    "path": "elasticsearch/src/main/java/com/yahoo/ycsb/db/ElasticsearchClient.java",
    "content": "/**\n * Copyright (c) 2012 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport static org.elasticsearch.common.settings.Settings.Builder;\nimport static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\nimport static org.elasticsearch.index.query.QueryBuilders.rangeQuery;\nimport static org.elasticsearch.node.NodeBuilder.nodeBuilder;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport org.elasticsearch.action.admin.cluster.health.ClusterHealthRequest;\nimport org.elasticsearch.action.admin.indices.create.CreateIndexRequest;\nimport org.elasticsearch.action.delete.DeleteResponse;\nimport org.elasticsearch.action.get.GetResponse;\nimport org.elasticsearch.action.search.SearchResponse;\nimport org.elasticsearch.client.Client;\nimport org.elasticsearch.client.Requests;\nimport org.elasticsearch.client.transport.TransportClient;\nimport org.elasticsearch.common.settings.Settings;\nimport org.elasticsearch.common.transport.InetSocketTransportAddress;\nimport org.elasticsearch.common.xcontent.XContentBuilder;\nimport org.elasticsearch.index.query.RangeQueryBuilder;\nimport org.elasticsearch.node.Node;\nimport org.elasticsearch.search.SearchHit;\n\nimport java.net.InetAddress;\nimport 
java.net.UnknownHostException;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\n/**\n * Elasticsearch client for YCSB framework.\n *\n * <p>\n * Default properties to set:\n * </p>\n * <ul>\n * <li>cluster.name = es.ycsb.cluster\n * <li>es.index.key = es.ycsb\n * <li>es.number_of_shards = 1\n * <li>es.number_of_replicas = 0\n * </ul>\n */\npublic class ElasticsearchClient extends DB {\n\n  private static final String DEFAULT_CLUSTER_NAME = \"es.ycsb.cluster\";\n  private static final String DEFAULT_INDEX_KEY = \"es.ycsb\";\n  private static final String DEFAULT_REMOTE_HOST = \"localhost:9300\";\n  private static final int NUMBER_OF_SHARDS = 1;\n  private static final int NUMBER_OF_REPLICAS = 0;\n  private Node node;\n  private Client client;\n  private String indexKey;\n  private Boolean remoteMode;\n\n  /**\n   * Initialize any state for this DB. Called once per DB instance; there is one\n   * DB instance per client thread.\n   */\n  @Override\n  public void init() throws DBException {\n    final Properties props = getProperties();\n\n    // Check if transport client needs to be used (To connect to multiple\n    // elasticsearch nodes)\n    remoteMode = Boolean.parseBoolean(props.getProperty(\"es.remote\", \"false\"));\n\n    final String pathHome = props.getProperty(\"path.home\");\n\n    // when running in embedded mode, require path.home\n    if (!remoteMode && (pathHome == null || pathHome.isEmpty())) {\n      throw new IllegalArgumentException(\"path.home must be specified when running in embedded mode\");\n    }\n\n    this.indexKey = props.getProperty(\"es.index.key\", DEFAULT_INDEX_KEY);\n\n    int numberOfShards = parseIntegerProperty(props, \"es.number_of_shards\", NUMBER_OF_SHARDS);\n    int numberOfReplicas = parseIntegerProperty(props, \"es.number_of_replicas\", NUMBER_OF_REPLICAS);\n\n    Boolean newdb = 
Boolean.parseBoolean(props.getProperty(\"es.newdb\", \"false\"));\n    Builder settings = Settings.settingsBuilder()\n        .put(\"cluster.name\", DEFAULT_CLUSTER_NAME)\n        .put(\"node.local\", Boolean.toString(!remoteMode))\n        .put(\"path.home\", pathHome);\n\n    // If the properties file contains user-defined Elasticsearch properties,\n    // add them to the settings (overwriting the defaults).\n    settings.put(props);\n    final String clusterName = settings.get(\"cluster.name\");\n    System.err.println(\"Elasticsearch starting node = \" + clusterName);\n    System.err.println(\"Elasticsearch node path.home = \" + settings.get(\"path.home\"));\n    System.err.println(\"Elasticsearch Remote Mode = \" + remoteMode);\n    // Remote mode support for connecting to a remote Elasticsearch cluster.\n    if (remoteMode) {\n      settings.put(\"client.transport.sniff\", true)\n          .put(\"client.transport.ignore_cluster_name\", false)\n          .put(\"client.transport.ping_timeout\", \"30s\")\n          .put(\"client.transport.nodes_sampler_interval\", \"30s\");\n      // Defaults to localhost:9300.\n      String[] nodeList = props.getProperty(\"es.hosts.list\", DEFAULT_REMOTE_HOST).split(\",\");\n      System.out.println(\"Elasticsearch Remote Hosts = \" + props.getProperty(\"es.hosts.list\", DEFAULT_REMOTE_HOST));\n      TransportClient tClient = TransportClient.builder().settings(settings).build();\n      for (String h : nodeList) {\n        String[] nodes = h.split(\":\");\n        try {\n          tClient.addTransportAddress(new InetSocketTransportAddress(\n              InetAddress.getByName(nodes[0]),\n              Integer.parseInt(nodes[1])\n              ));\n        } catch (NumberFormatException e) {\n          throw new IllegalArgumentException(\"Unable to parse port number.\", e);\n        } catch (UnknownHostException e) {\n          throw new IllegalArgumentException(\"Unable to identify host.\", e);\n        }\n      }\n      client = 
tClient;\n    } else { // Start node only if transport client mode is disabled\n      node = nodeBuilder().clusterName(clusterName).settings(settings).node();\n      node.start();\n      client = node.client();\n    }\n\n    final boolean exists =\n            client.admin().indices()\n                    .exists(Requests.indicesExistsRequest(indexKey)).actionGet()\n                    .isExists();\n    if (exists && newdb) {\n      client.admin().indices().prepareDelete(indexKey).execute().actionGet();\n    }\n    if (!exists || newdb) {\n      client.admin().indices().create(\n              new CreateIndexRequest(indexKey)\n                      .settings(\n                              Settings.builder()\n                                      .put(\"index.number_of_shards\", numberOfShards)\n                                      .put(\"index.number_of_replicas\", numberOfReplicas)\n                                      .put(\"index.mapping._id.indexed\", true)\n                      )).actionGet();\n    }\n    client.admin().cluster().health(new ClusterHealthRequest().waitForGreenStatus()).actionGet();\n  }\n\n  private int parseIntegerProperty(Properties properties, String key, int defaultValue) {\n    String value = properties.getProperty(key);\n    return value == null ? defaultValue : Integer.parseInt(value);\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    if (!remoteMode) {\n      if (!node.isClosed()) {\n        client.close();\n        node.close();\n      }\n    } else {\n      client.close();\n    }\n  }\n\n  /**\n   * Insert a record in the database. 
Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to insert.\n   * @param values\n   *          A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error. See this class's\n   *         description for a discussion of error codes.\n   */\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      final XContentBuilder doc = jsonBuilder().startObject();\n\n      for (Entry<String, String> entry : StringByteIterator.getStringMap(values).entrySet()) {\n        doc.field(entry.getKey(), entry.getValue());\n      }\n\n      doc.endObject();\n\n      client.prepareIndex(indexKey, table, key).setSource(doc).execute().actionGet();\n\n      return Status.OK;\n    } catch (Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error. See this class's\n   *         description for a discussion of error codes.\n   */\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      DeleteResponse response = client.prepareDelete(indexKey, table, key).execute().actionGet();\n      if (response.isFound()) {\n        return Status.OK;\n      } else {\n        return Status.NOT_FOUND;\n      }\n    } catch (Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Read a record from the database. 
Each field/value pair from the result will\n   * be stored in a HashMap.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to read.\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error or \"not found\".\n   */\n  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    try {\n      final GetResponse response = client.prepareGet(indexKey, table, key).execute().actionGet();\n\n      if (response.isExists()) {\n        if (fields != null) {\n          for (String field : fields) {\n            result.put(field, new StringByteIterator(\n                (String) response.getSource().get(field)));\n          }\n        } else {\n          for (String field : response.getSource().keySet()) {\n            result.put(field, new StringByteIterator(\n                (String) response.getSource().get(field)));\n          }\n        }\n        return Status.OK;\n      } else {\n        return Status.NOT_FOUND;\n      }\n    } catch (Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Update a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key, overwriting any existing values with the same field name.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to write.\n   * @param values\n   *          A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error. 
See this class's\n   *         description for a discussion of error codes.\n   */\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      final GetResponse response = client.prepareGet(indexKey, table, key).execute().actionGet();\n\n      if (response.isExists()) {\n        for (Entry<String, String> entry : StringByteIterator.getStringMap(values).entrySet()) {\n          response.getSource().put(entry.getKey(), entry.getValue());\n        }\n\n        client.prepareIndex(indexKey, table, key).setSource(response.getSource()).execute().actionGet();\n\n        return Status.OK;\n      } else {\n        return Status.NOT_FOUND;\n      }\n    } catch (Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. Each field/value\n   * pair from the result will be stored in a HashMap.\n   *\n   * @param table\n   *          The name of the table\n   * @param startkey\n   *          The record key of the first record to read.\n   * @param recordcount\n   *          The number of records to read\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A Vector of HashMaps, where each HashMap is a set field/value\n   *          pairs for one record\n   * @return Zero on success, a non-zero error code on error. 
See this class's\n   *         description for a discussion of error codes.\n   */\n  @Override\n  public Status scan(\n          String table,\n          String startkey,\n          int recordcount,\n          Set<String> fields,\n          Vector<HashMap<String, ByteIterator>> result) {\n    try {\n      final RangeQueryBuilder rangeQuery = rangeQuery(\"_id\").gte(startkey);\n      final SearchResponse response = client.prepareSearch(indexKey)\n          .setTypes(table)\n          .setQuery(rangeQuery)\n          .setSize(recordcount)\n          .execute()\n          .actionGet();\n\n      HashMap<String, ByteIterator> entry;\n\n      for (SearchHit hit : response.getHits()) {\n        entry = new HashMap<>(fields.size());\n        for (String field : fields) {\n          entry.put(field, new StringByteIterator((String) hit.getSource().get(field)));\n        }\n        result.add(entry);\n      }\n\n      return Status.OK;\n    } catch (Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n}\n"
  },
  {
    "path": "elasticsearch/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for \n * <a href=\"https://www.elastic.co/products/elasticsearch\">Elasticsearch</a>.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "elasticsearch/src/test/java/com/yahoo/ycsb/db/ElasticsearchClientTest.java",
    "content": "/**\n * Copyright (c) 2012-2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/*\n * To change this template, choose Tools | Templates\n * and open the template in the editor.\n */\npackage com.yahoo.ycsb.db;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport org.junit.After;\nimport org.junit.AfterClass;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.ClassRule;\nimport org.junit.Test;\nimport org.junit.rules.TemporaryFolder;\n\nimport java.util.HashMap;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport static org.junit.Assert.assertEquals;\n\npublic class ElasticsearchClientTest {\n\n    @ClassRule public final static TemporaryFolder temp = new TemporaryFolder();\n    private final static ElasticsearchClient instance = new ElasticsearchClient();\n    private final static HashMap<String, ByteIterator> MOCK_DATA;\n    private final static String MOCK_TABLE = \"MOCK_TABLE\";\n    private final static String MOCK_KEY0 = \"0\";\n    private final static String MOCK_KEY1 = \"1\";\n    private final static String MOCK_KEY2 = \"2\";\n\n    static {\n        MOCK_DATA = new HashMap<>(10);\n        for (int i = 1; i <= 10; i++) {\n            MOCK_DATA.put(\"field\" + i, new 
StringByteIterator(\"value\" + i));\n        }\n    }\n\n    @BeforeClass\n    public static void setUpClass() throws DBException {\n        final Properties props = new Properties();\n        props.put(\"path.home\", temp.getRoot().toString());\n        instance.setProperties(props);\n        instance.init();\n    }\n\n    @AfterClass\n    public static void tearDownClass() throws DBException {\n        instance.cleanup();\n    }\n\n    @Before\n    public void setUp() {\n        instance.insert(MOCK_TABLE, MOCK_KEY1, MOCK_DATA);\n        instance.insert(MOCK_TABLE, MOCK_KEY2, MOCK_DATA);\n    }\n\n    @After\n    public void tearDown() {\n        instance.delete(MOCK_TABLE, MOCK_KEY1);\n        instance.delete(MOCK_TABLE, MOCK_KEY2);\n    }\n\n    /**\n     * Test of insert method, of class ElasticsearchClient.\n     */\n    @Test\n    public void testInsert() {\n        Status result = instance.insert(MOCK_TABLE, MOCK_KEY0, MOCK_DATA);\n        assertEquals(Status.OK, result);\n    }\n\n    /**\n     * Test of delete method, of class ElasticsearchClient.\n     */\n    @Test\n    public void testDelete() {\n        Status result = instance.delete(MOCK_TABLE, MOCK_KEY1);\n        assertEquals(Status.OK, result);\n    }\n\n    /**\n     * Test of read method, of class ElasticsearchClient.\n     */\n    @Test\n    public void testRead() {\n        Set<String> fields = MOCK_DATA.keySet();\n        HashMap<String, ByteIterator> resultParam = new HashMap<>(10);\n        Status result = instance.read(MOCK_TABLE, MOCK_KEY1, fields, resultParam);\n        assertEquals(Status.OK, result);\n    }\n\n    /**\n     * Test of update method, of class ElasticsearchClient.\n     */\n    @Test\n    public void testUpdate() {\n        int i;\n        HashMap<String, ByteIterator> newValues = new HashMap<>(10);\n\n        for (i = 1; i <= 10; i++) {\n            newValues.put(\"field\" + i, new StringByteIterator(\"newvalue\" + i));\n        }\n\n        Status result = 
instance.update(MOCK_TABLE, MOCK_KEY1, newValues);\n        assertEquals(Status.OK, result);\n\n        //validate that the values changed\n        HashMap<String, ByteIterator> resultParam = new HashMap<>(10);\n        instance.read(MOCK_TABLE, MOCK_KEY1, MOCK_DATA.keySet(), resultParam);\n\n        for (i = 1; i <= 10; i++) {\n            assertEquals(\"newvalue\" + i, resultParam.get(\"field\" + i).toString());\n        }\n    }\n\n    /**\n     * Test of scan method, of class ElasticsearchClient.\n     */\n    @Test\n    public void testScan() {\n        int recordcount = 10;\n        Set<String> fields = MOCK_DATA.keySet();\n        Vector<HashMap<String, ByteIterator>> resultParam = new Vector<>(10);\n        Status result = instance.scan(MOCK_TABLE, MOCK_KEY1, recordcount, fields, resultParam);\n        assertEquals(Status.OK, result);\n    }\n}\n"
  },
  {
    "path": "elasticsearch5/README.md",
    "content": "<!--\nCopyright (c) 2017 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on Elasticsearch 5.x running locally.\n\n### 1. Install and Start Elasticsearch\n\n[Download and install Elasticsearch][1]. When starting Elasticsearch, you should\n[configure][2] the cluster name to be `es.ycsb.cluster` (see below).\n\n### 2. Set Up YCSB\n\nClone the YCSB git repository and compile:\n\n    git clone git://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn clean package\n\n### 3. Run YCSB\n\nNow you are ready to run! First, load the data:\n\n    ./bin/ycsb load elasticsearch5 -s -P workloads/workloada\n\nThen, run the workload:\n\n    ./bin/ycsb run elasticsearch5 -s -P workloads/workloada\n\nThe Elasticsearch 5 binding requires a standalone instance of Elasticsearch.\nYou must specify a hosts list for the transport client to connect to via\n`-p es.hosts.list=<hostname1:port1>,...,<hostnamen:portn>`:\n\n    ./bin/ycsb run elasticsearch5 -s -P workloads/workloada \\\n    -p es.hosts.list=<hostname1:port1>,...,<hostnamen:portn>\n\nNote that `es.hosts.list` defaults to `localhost:9300`. 
For further\nconfiguration see below.\n\n### Defaults Configuration\nThe default settings for the Elasticsearch node that is created are as follows:\n\n    es.setting.cluster.name=es.ycsb.cluster\n    es.index.key=es.ycsb\n    es.number_of_shards=1\n    es.number_of_replicas=0\n    es.new_index=false\n    es.hosts.list=localhost:9300\n\n### Custom Configuration\nIf you wish to customize the settings used to create the Elasticsearch node,\nyou can create a new property file that contains your desired Elasticsearch\nnode settings and pass it in via a parameter to the 'bin/ycsb' script. Note that\nthe default properties will be kept if you don't explicitly overwrite them.\n\nAssuming that we have a properties file named \"myproperties.data\" that contains\ncustom Elasticsearch node configuration, you can execute the following to\npass it into the Elasticsearch client:\n\n    ./bin/ycsb run elasticsearch5 -P workloads/workloada -P myproperties.data -s\n\nIf you wish to change the default index name, you can set the following property:\n\n    es.index.key=my_index_key\n\nBy default this will use localhost:9300 as a seed node to discover the cluster.\nYou can also specify\n\n    es.hosts.list=(\\w+:\\d+)+\n\n(a comma-separated list of host/port pairs) to change this.\n\n### Configuring the transport client\n\nThe `elasticsearch5` binding starts a transport client to connect to\nElasticsearch using the transport protocol. You can pass arbitrary settings to\nthis instance by using properties with the prefix `es.setting.` followed by any\nvalid Elasticsearch setting. 
For example, assuming that you started your\nElasticsearch node with the cluster name `my-elasticsearch-cluster`, you would\nneed to configure the transport client to use the same cluster name via\n\n    ./bin/ycsb run elasticsearch5 -P <workload> \\\n    -p es.setting.cluster.name=my-elasticsearch-cluster\n\n### Using the Elasticsearch low-level REST client\n\nThe Elasticsearch 5 bindings also ship with an implementation that uses the\nlow-level Elasticsearch REST client. The name of this binding is\n`elasticsearch5-rest`. For example:\n\n    ./bin/ycsb load elasticsearch5-rest -P workloads/workloada\n\nYou can configure the hosts to connect to via the same `es.hosts.list` property\nused to configure the transport client in the `elasticsearch5` binding (note\nthat by default you should use port 9200).\n\n[1]: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/_installation.html\n[2]: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/settings.html\n"
  },
  {
    "path": "elasticsearch5/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2017 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <properties>\n    <!-- Skip tests by default. 
will be activated by jdk8 profile -->\n    <skipTests>true</skipTests>\n    <elasticsearch.groupid>org.elasticsearch.distribution.zip</elasticsearch.groupid>\n\n    <!-- For integration tests using ANT -->\n    <integ.http.port>9400</integ.http.port>\n    <integ.transport.port>9500</integ.transport.port>\n\n    <!-- If tests are skipped, skip ES spin up -->\n    <es.skip>${skipTests}</es.skip>\n  </properties>\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-dependency-plugin</artifactId>\n        <version>2.10</version>\n        <executions>\n          <execution>\n            <id>integ-setup-dependencies</id>\n            <phase>pre-integration-test</phase>\n            <goals>\n              <goal>copy</goal>\n            </goals>\n            <configuration>\n              <skip>${skipTests}</skip>\n              <artifactItems>\n                <artifactItem>\n                  <groupId>${elasticsearch.groupid}</groupId>\n                  <artifactId>elasticsearch</artifactId>\n                  <version>${elasticsearch5-version}</version>\n                  <type>zip</type>\n                </artifactItem>\n              </artifactItems>\n              <useBaseVersion>true</useBaseVersion>\n              <outputDirectory>${project.build.directory}/integration-tests/binaries</outputDirectory>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-surefire-plugin</artifactId>\n        <version>2.19</version>\n        <executions>\n          <execution>\n            <id>default-test</id>\n            <phase>none</phase>\n          </execution>\n        </executions>\n      </plugin>\n\n      <plugin>\n        <groupId>com.carrotsearch.randomizedtesting</groupId>\n        <artifactId>junit4-maven-plugin</artifactId>\n        <version>2.3.3</version>\n\n    
    <configuration>\n          <assertions enableSystemAssertions=\"false\">\n            <enable/>\n          </assertions>\n\n          <listeners>\n            <report-text />\n          </listeners>\n        </configuration>\n\n        <executions>\n          <execution>\n            <id>unit-tests</id>\n            <phase>test</phase>\n            <goals>\n              <goal>junit4</goal>\n            </goals>\n            <inherited>true</inherited>\n            <configuration>\n              <skipTests>${skipTests}</skipTests>\n              <includes>\n                <include>**/*Test.class</include>\n              </includes>\n              <excludes>\n                <exclude>**/*$*</exclude>\n              </excludes>\n            </configuration>\n          </execution>\n          <execution>\n            <id>integration-tests</id>\n            <phase>integration-test</phase>\n            <goals>\n              <goal>junit4</goal>\n            </goals>\n            <inherited>true</inherited>\n            <configuration>\n              <skipTests>${skipTests}</skipTests>\n              <includes>\n                <include>**/*IT.class</include>\n              </includes>\n              <excludes>\n                <exclude>**/*$*</exclude>\n              </excludes>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n\n  <artifactId>elasticsearch5-binding</artifactId>\n  <name>Elasticsearch 5.x Binding</name>\n  <packaging>jar</packaging>\n  <dependencies>\n    <dependency>\n      <!-- jna is supported in ES and will be used when provided\n           otherwise a fallback is used -->\n      <groupId>org.elasticsearch</groupId>\n      <artifactId>jna</artifactId>\n      <version>4.4.0</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n  
  </dependency>\n    <dependency>\n      <groupId>org.elasticsearch.client</groupId>\n      <artifactId>transport</artifactId>\n      <version>${elasticsearch5-version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.elasticsearch.client</groupId>\n      <artifactId>rest</artifactId>\n      <version>${elasticsearch5-version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.logging.log4j</groupId>\n      <artifactId>log4j-api</artifactId>\n      <version>2.8.2</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.logging.log4j</groupId>\n      <artifactId>log4j-core</artifactId>\n      <version>2.8.2</version>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n\n  <profiles>\n    <!-- Requires JDK8 to run, so none of our tests\n         will work unless we're using jdk8.\n      -->\n    <profile>\n      <id>jdk8-tests</id>\n      <activation>\n        <jdk>1.8</jdk>\n      </activation>\n      <build>\n        <plugins>\n          <plugin>\n            <groupId>com.github.alexcojocaru</groupId>\n            <artifactId>elasticsearch-maven-plugin</artifactId>\n            <version>5.9</version>\n            <configuration>\n              <version>${elasticsearch5-version}</version>\n              <clusterName>test</clusterName>\n              <httpPort>9200</httpPort>\n              <transportPort>9300</transportPort>\n            </configuration>\n            <executions>\n              <execution>\n                <id>start-elasticsearch</id>\n                <phase>pre-integration-test</phase>\n                <goals>\n                  <goal>runforked</goal>\n                </goals>\n              </execution>\n              <execution>\n                <id>stop-elasticsearch</id>\n                <phase>post-integration-test</phase>\n               
 <goals>\n                  <goal>stop</goal>\n                </goals>\n              </execution>\n            </executions>\n          </plugin>\n        </plugins>\n      </build>\n      <properties>\n        <skipTests>false</skipTests>\n      </properties>\n    </profile>\n  </profiles>\n</project>\n"
  },
  {
    "path": "elasticsearch5/src/main/java/com/yahoo/ycsb/db/elasticsearch5/Elasticsearch5.java",
    "content": "/*\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.elasticsearch5;\n\nimport java.util.Properties;\n\nfinal class Elasticsearch5 {\n\n  private Elasticsearch5() {\n\n  }\n\n  static final String KEY = \"key\";\n\n  static int parseIntegerProperty(final Properties properties, final String key, final int defaultValue) {\n    final String value = properties.getProperty(key);\n    return value == null ? defaultValue : Integer.parseInt(value);\n  }\n\n}\n"
  },
  {
    "path": "elasticsearch5/src/main/java/com/yahoo/ycsb/db/elasticsearch5/ElasticsearchClient.java",
    "content": "/*\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.elasticsearch5;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport org.elasticsearch.action.DocWriteResponse;\nimport org.elasticsearch.action.admin.cluster.health.ClusterHealthRequest;\nimport org.elasticsearch.action.admin.indices.create.CreateIndexRequest;\nimport org.elasticsearch.action.admin.indices.refresh.RefreshRequest;\nimport org.elasticsearch.action.delete.DeleteResponse;\nimport org.elasticsearch.action.index.IndexResponse;\nimport org.elasticsearch.action.search.SearchResponse;\nimport org.elasticsearch.client.Requests;\nimport org.elasticsearch.client.transport.TransportClient;\nimport org.elasticsearch.common.settings.Settings;\nimport org.elasticsearch.common.transport.InetSocketTransportAddress;\nimport org.elasticsearch.common.xcontent.XContentBuilder;\nimport org.elasticsearch.index.query.RangeQueryBuilder;\nimport org.elasticsearch.index.query.TermQueryBuilder;\nimport org.elasticsearch.search.SearchHit;\nimport org.elasticsearch.transport.client.PreBuiltTransportClient;\n\nimport java.net.InetAddress;\nimport java.net.UnknownHostException;\nimport java.util.HashMap;\nimport 
java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport static com.yahoo.ycsb.db.elasticsearch5.Elasticsearch5.KEY;\nimport static com.yahoo.ycsb.db.elasticsearch5.Elasticsearch5.parseIntegerProperty;\nimport static org.elasticsearch.common.settings.Settings.Builder;\nimport static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n\n/**\n * Elasticsearch client for YCSB framework.\n */\npublic class ElasticsearchClient extends DB {\n\n  private static final String DEFAULT_CLUSTER_NAME = \"es.ycsb.cluster\";\n  private static final String DEFAULT_INDEX_KEY = \"es.ycsb\";\n  private static final String DEFAULT_REMOTE_HOST = \"localhost:9300\";\n  private static final int NUMBER_OF_SHARDS = 1;\n  private static final int NUMBER_OF_REPLICAS = 0;\n  private TransportClient client;\n  private String indexKey;\n\n  /**\n   *\n   * Initialize any state for this DB. Called once per DB instance; there is one\n   * DB instance per client thread.\n   */\n  @Override\n  public void init() throws DBException {\n    final Properties props = getProperties();\n\n    this.indexKey = props.getProperty(\"es.index.key\", DEFAULT_INDEX_KEY);\n\n    final int numberOfShards = parseIntegerProperty(props, \"es.number_of_shards\", NUMBER_OF_SHARDS);\n    final int numberOfReplicas = parseIntegerProperty(props, \"es.number_of_replicas\", NUMBER_OF_REPLICAS);\n\n    final Boolean newIndex = Boolean.parseBoolean(props.getProperty(\"es.new_index\", \"false\"));\n    final Builder settings = Settings.builder().put(\"cluster.name\", DEFAULT_CLUSTER_NAME);\n\n    // if properties file contains elasticsearch user defined properties\n    // add it to the settings file (will overwrite the defaults).\n    for (final Entry<Object, Object> e : props.entrySet()) {\n      if (e.getKey() instanceof String) {\n        final String key = (String) e.getKey();\n        if (key.startsWith(\"es.setting.\")) {\n          
settings.put(key.substring(\"es.setting.\".length()), e.getValue());\n        }\n      }\n    }\n\n    settings.put(\"client.transport.sniff\", true)\n            .put(\"client.transport.ignore_cluster_name\", false)\n            .put(\"client.transport.ping_timeout\", \"30s\")\n            .put(\"client.transport.nodes_sampler_interval\", \"30s\");\n    // Default it to localhost:9300\n    final String[] nodeList = props.getProperty(\"es.hosts.list\", DEFAULT_REMOTE_HOST).split(\",\");\n    client = new PreBuiltTransportClient(settings.build());\n    for (String h : nodeList) {\n      String[] nodes = h.split(\":\");\n\n      final InetAddress address;\n      try {\n        address = InetAddress.getByName(nodes[0]);\n      } catch (UnknownHostException e) {\n        throw new IllegalArgumentException(\"unable to identify host [\" + nodes[0] + \"]\", e);\n      }\n      final int port;\n      try {\n        port = Integer.parseInt(nodes[1]);\n      } catch (final NumberFormatException e) {\n        throw new IllegalArgumentException(\"unable to parse port [\" + nodes[1] + \"]\", e);\n      }\n      client.addTransportAddress(new InetSocketTransportAddress(address, port));\n    }\n\n    final boolean exists =\n        client.admin().indices()\n            .exists(Requests.indicesExistsRequest(indexKey)).actionGet()\n            .isExists();\n    if (exists && newIndex) {\n      client.admin().indices().prepareDelete(indexKey).get();\n    }\n    if (!exists || newIndex) {\n      client.admin().indices().create(\n          new CreateIndexRequest(indexKey)\n              .settings(\n                  Settings.builder()\n                      .put(\"index.number_of_shards\", numberOfShards)\n                      .put(\"index.number_of_replicas\", numberOfReplicas)\n              )).actionGet();\n    }\n    client.admin().cluster().health(new ClusterHealthRequest().waitForGreenStatus()).actionGet();\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    
if (client != null) {\n      client.close();\n      client = null;\n    }\n  }\n\n  private volatile boolean isRefreshNeeded = false;\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    try (XContentBuilder doc = jsonBuilder()) {\n\n      doc.startObject();\n      for (final Entry<String, String> entry : StringByteIterator.getStringMap(values).entrySet()) {\n        doc.field(entry.getKey(), entry.getValue());\n      }\n      doc.field(KEY, key);\n      doc.endObject();\n\n      final IndexResponse indexResponse = client.prepareIndex(indexKey, table).setSource(doc).get();\n      if (indexResponse.getResult() != DocWriteResponse.Result.CREATED) {\n        return Status.ERROR;\n      }\n\n      if (!isRefreshNeeded) {\n        synchronized (this) {\n          isRefreshNeeded = true;\n        }\n      }\n\n      return Status.OK;\n    } catch (final Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status delete(final String table, final String key) {\n    try {\n      final SearchResponse searchResponse = search(table, key);\n      if (searchResponse.getHits().totalHits == 0) {\n        return Status.NOT_FOUND;\n      }\n\n      final String id = searchResponse.getHits().getAt(0).getId();\n\n      final DeleteResponse deleteResponse = client.prepareDelete(indexKey, table, id).get();\n      if (deleteResponse.getResult() == DocWriteResponse.Result.NOT_FOUND) {\n        return Status.NOT_FOUND;\n      }\n\n      if (!isRefreshNeeded) {\n        synchronized (this) {\n          isRefreshNeeded = true;\n        }\n      }\n\n      return Status.OK;\n    } catch (final Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status read(\n          final String table,\n          final String key,\n          final Set<String> fields,\n          final Map<String, ByteIterator> result) {\n    try {\n      final 
SearchResponse searchResponse = search(table, key);\n      if (searchResponse.getHits().totalHits == 0) {\n        return Status.NOT_FOUND;\n      }\n\n      final SearchHit hit = searchResponse.getHits().getAt(0);\n      if (fields != null) {\n        for (final String field : fields) {\n          result.put(field, new StringByteIterator((String) hit.getSource().get(field)));\n        }\n      } else {\n        for (final Map.Entry<String, Object> e : hit.getSource().entrySet()) {\n          if (KEY.equals(e.getKey())) {\n            continue;\n          }\n          result.put(e.getKey(), new StringByteIterator((String) e.getValue()));\n        }\n      }\n\n      return Status.OK;\n    } catch (final Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status update(final String table, final String key, final Map<String, ByteIterator> values) {\n    try {\n      final SearchResponse response = search(table, key);\n      if (response.getHits().totalHits == 0) {\n        return Status.NOT_FOUND;\n      }\n\n      final SearchHit hit = response.getHits().getAt(0);\n      for (final Entry<String, String> entry : StringByteIterator.getStringMap(values).entrySet()) {\n        hit.getSource().put(entry.getKey(), entry.getValue());\n      }\n\n      final IndexResponse indexResponse =\n              client.prepareIndex(indexKey, table, hit.getId()).setSource(hit.getSource()).get();\n\n      if (indexResponse.getResult() != DocWriteResponse.Result.UPDATED) {\n        return Status.ERROR;\n      }\n\n      if (!isRefreshNeeded) {\n        synchronized (this) {\n          isRefreshNeeded = true;\n        }\n      }\n\n      return Status.OK;\n    } catch (final Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status scan(\n      final String table,\n      final String startkey,\n      final int recordcount,\n      final Set<String> fields,\n      final 
Vector<HashMap<String, ByteIterator>> result) {\n    try {\n      refreshIfNeeded();\n      final RangeQueryBuilder query = new RangeQueryBuilder(KEY).gte(startkey);\n      final SearchResponse response = client.prepareSearch(indexKey).setQuery(query).setSize(recordcount).get();\n\n      for (final SearchHit hit : response.getHits()) {\n        final HashMap<String, ByteIterator> entry;\n        if (fields != null) {\n          entry = new HashMap<>(fields.size());\n          for (final String field : fields) {\n            entry.put(field, new StringByteIterator((String) hit.getSource().get(field)));\n          }\n        } else {\n          entry = new HashMap<>(hit.getSource().size());\n          for (final Map.Entry<String, Object> field : hit.getSource().entrySet()) {\n            if (KEY.equals(field.getKey())) {\n              continue;\n            }\n            entry.put(field.getKey(), new StringByteIterator((String) field.getValue()));\n          }\n        }\n        result.add(entry);\n      }\n      return Status.OK;\n    } catch (final Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  private void refreshIfNeeded() {\n    if (isRefreshNeeded) {\n      final boolean refresh;\n      synchronized (this) {\n        if (isRefreshNeeded) {\n          refresh = true;\n          isRefreshNeeded = false;\n        } else {\n          refresh = false;\n        }\n      }\n      if (refresh) {\n        client.admin().indices().refresh(new RefreshRequest()).actionGet();\n      }\n    }\n  }\n\n  private SearchResponse search(final String table, final String key) {\n    refreshIfNeeded();\n    return client.prepareSearch(indexKey).setTypes(table).setQuery(new TermQueryBuilder(KEY, key)).get();\n  }\n\n}\n"
  },
  {
    "path": "elasticsearch5/src/main/java/com/yahoo/ycsb/db/elasticsearch5/ElasticsearchRestClient.java",
    "content": "/*\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.elasticsearch5;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport org.apache.http.Header;\nimport org.apache.http.HttpEntity;\nimport org.apache.http.HttpHost;\nimport org.apache.http.HttpStatus;\nimport org.apache.http.entity.ContentType;\nimport org.apache.http.entity.StringEntity;\nimport org.apache.http.message.BasicHeader;\nimport org.apache.http.nio.entity.NStringEntity;\nimport org.codehaus.jackson.map.ObjectMapper;\nimport org.elasticsearch.client.Response;\nimport org.elasticsearch.client.RestClient;\nimport org.elasticsearch.common.xcontent.XContentBuilder;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport static com.yahoo.ycsb.db.elasticsearch5.Elasticsearch5.KEY;\nimport static com.yahoo.ycsb.db.elasticsearch5.Elasticsearch5.parseIntegerProperty;\nimport static java.util.Collections.emptyMap;\nimport static 
org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n\n/**\n * Elasticsearch REST client for YCSB framework.\n */\npublic class ElasticsearchRestClient extends DB {\n\n  private static final String DEFAULT_INDEX_KEY = \"es.ycsb\";\n  private static final String DEFAULT_REMOTE_HOST = \"localhost:9200\";\n  private static final int NUMBER_OF_SHARDS = 1;\n  private static final int NUMBER_OF_REPLICAS = 0;\n  private RestClient restClient;\n  private String indexKey;\n\n  /**\n   *\n   * Initialize any state for this DB. Called once per DB instance; there is one\n   * DB instance per client thread.\n   */\n  @Override\n  public void init() throws DBException {\n    final Properties props = getProperties();\n\n    this.indexKey = props.getProperty(\"es.index.key\", DEFAULT_INDEX_KEY);\n\n    final int numberOfShards = parseIntegerProperty(props, \"es.number_of_shards\", NUMBER_OF_SHARDS);\n    final int numberOfReplicas = parseIntegerProperty(props, \"es.number_of_replicas\", NUMBER_OF_REPLICAS);\n\n    final Boolean newIndex = Boolean.parseBoolean(props.getProperty(\"es.new_index\", \"false\"));\n\n    final String[] nodeList = props.getProperty(\"es.hosts.list\", DEFAULT_REMOTE_HOST).split(\",\");\n\n    final List<HttpHost> esHttpHosts = new ArrayList<>(nodeList.length);\n    for (String h : nodeList) {\n      String[] nodes = h.split(\":\");\n      esHttpHosts.add(new HttpHost(nodes[0], Integer.valueOf(nodes[1]), \"http\"));\n    }\n\n    restClient = RestClient.builder(esHttpHosts.toArray(new HttpHost[esHttpHosts.size()])).build();\n\n    final Response existsResponse = performRequest(restClient, \"HEAD\", \"/\" + indexKey);\n    final boolean exists = existsResponse.getStatusLine().getStatusCode() == HttpStatus.SC_OK;\n\n    if (exists && newIndex) {\n      final Response deleteResponse = performRequest(restClient, \"DELETE\", \"/\" + indexKey);\n      final int statusCode = deleteResponse.getStatusLine().getStatusCode();\n      if (statusCode != 
HttpStatus.SC_OK) {\n        throw new DBException(\"delete [\" + indexKey + \"] failed with status [\" + statusCode + \"]\");\n      }\n    }\n\n    if (!exists || newIndex) {\n      try (XContentBuilder builder = jsonBuilder()) {\n        builder.startObject();\n        builder.startObject(\"settings\");\n        builder.field(\"index.number_of_shards\", numberOfShards);\n        builder.field(\"index.number_of_replicas\", numberOfReplicas);\n        builder.endObject();\n        builder.endObject();\n        final Map<String, String> params = emptyMap();\n        final StringEntity entity = new StringEntity(builder.string());\n        final Response createResponse = performRequest(restClient, \"PUT\", \"/\" + indexKey, params, entity);\n        final int statusCode = createResponse.getStatusLine().getStatusCode();\n        if (statusCode != HttpStatus.SC_OK) {\n          throw new DBException(\"create [\" + indexKey + \"] failed with status [\" + statusCode + \"]\");\n        }\n      } catch (final IOException e) {\n        throw new DBException(e);\n      }\n    }\n\n    final Map<String, String> params = Collections.singletonMap(\"wait_for_status\", \"green\");\n    final Response healthResponse = performRequest(restClient, \"GET\", \"/_cluster/health/\" + indexKey, params);\n    final int healthStatusCode = healthResponse.getStatusLine().getStatusCode();\n    if (healthStatusCode != HttpStatus.SC_OK) {\n      throw new DBException(\"cluster health [\" + indexKey + \"] failed with status [\" + healthStatusCode + \"]\");\n    }\n  }\n\n  private static Response performRequest(\n          final RestClient restClient,\n          final String method,\n          final String endpoint) throws DBException {\n    final Map<String, String> params = emptyMap();\n    return performRequest(restClient, method, endpoint, params);\n  }\n\n  private static Response performRequest(\n          final RestClient restClient,\n          final String method,\n          final String 
endpoint,\n          final Map<String, String> params) throws DBException {\n    return performRequest(restClient, method, endpoint, params, null);\n  }\n\n  private static final Header[] EMPTY_HEADERS = new Header[0];\n\n  private static Response performRequest(\n          final RestClient restClient,\n          final String method,\n          final String endpoint,\n          final Map<String, String> params,\n          final HttpEntity entity) throws DBException {\n    try {\n      final Header[] headers;\n      if (entity != null) {\n        headers = new Header[]{new BasicHeader(\"content-type\", ContentType.APPLICATION_JSON.toString())};\n      } else {\n        headers = EMPTY_HEADERS;\n      }\n      return restClient.performRequest(\n              method,\n              endpoint,\n              params,\n              entity,\n              headers);\n    } catch (final IOException e) {\n      e.printStackTrace();\n      throw new DBException(e);\n    }\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    if (restClient != null) {\n      try {\n        restClient.close();\n        restClient = null;\n      } catch (final IOException e) {\n        throw new DBException(e);\n      }\n    }\n  }\n\n  private volatile boolean isRefreshNeeded = false;\n  \n  @Override\n  public Status insert(final String table, final String key, final Map<String, ByteIterator> values) {\n    try {\n      final Map<String, String> data = StringByteIterator.getStringMap(values);\n      data.put(KEY, key);\n\n      final Response response = restClient.performRequest(\n          \"POST\",\n          \"/\" + indexKey + \"/\" + table + \"/\",\n          Collections.<String, String>emptyMap(),\n          new NStringEntity(new ObjectMapper().writeValueAsString(data), ContentType.APPLICATION_JSON));\n\n      if (response.getStatusLine().getStatusCode() != HttpStatus.SC_CREATED) {\n        return Status.ERROR;\n      }\n\n      if (!isRefreshNeeded) {\n        
synchronized (this) {\n          isRefreshNeeded = true;\n        }\n      }\n\n      return Status.OK;\n    } catch (final Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status delete(final String table, final String key) {\n    try {\n      final Response searchResponse = search(table, key);\n      final int statusCode = searchResponse.getStatusLine().getStatusCode();\n      if (statusCode == HttpStatus.SC_NOT_FOUND) {\n        return Status.NOT_FOUND;\n      } else if (statusCode != HttpStatus.SC_OK) {\n        return Status.ERROR;\n      }\n\n      final Map<String, Object> map = map(searchResponse);\n      @SuppressWarnings(\"unchecked\") final Map<String, Object> hits = (Map<String, Object>)map.get(\"hits\");\n      final int total = (int)hits.get(\"total\");\n      if (total == 0) {\n        return Status.NOT_FOUND;\n      }\n      @SuppressWarnings(\"unchecked\") final Map<String, Object> hit =\n              (Map<String, Object>)((List<Object>)hits.get(\"hits\")).get(0);\n      final Response deleteResponse =\n              restClient.performRequest(\"DELETE\", \"/\" + indexKey + \"/\" + table + \"/\" + hit.get(\"_id\"));\n      if (deleteResponse.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {\n        return Status.ERROR;\n      }\n\n      if (!isRefreshNeeded) {\n        synchronized (this) {\n          isRefreshNeeded = true;\n        }\n      }\n\n      return Status.OK;\n    } catch (final Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status read(\n          final String table,\n          final String key,\n          final Set<String> fields,\n          final Map<String, ByteIterator> result) {\n    try {\n      final Response searchResponse = search(table, key);\n      final int statusCode = searchResponse.getStatusLine().getStatusCode();\n      if (statusCode == 404) {\n        return Status.NOT_FOUND;\n      } else if 
(statusCode != HttpStatus.SC_OK) {\n        return Status.ERROR;\n      }\n\n      final Map<String, Object> map = map(searchResponse);\n      @SuppressWarnings(\"unchecked\") final Map<String, Object> hits = (Map<String, Object>)map.get(\"hits\");\n      final int total = (int)hits.get(\"total\");\n      if (total == 0) {\n        return Status.NOT_FOUND;\n      }\n      @SuppressWarnings(\"unchecked\") final Map<String, Object> hit =\n              (Map<String, Object>)((List<Object>)hits.get(\"hits\")).get(0);\n      @SuppressWarnings(\"unchecked\") final Map<String, Object> source = (Map<String, Object>)hit.get(\"_source\");\n      if (fields != null) {\n        for (final String field : fields) {\n          result.put(field, new StringByteIterator((String) source.get(field)));\n        }\n      } else {\n        for (final Map.Entry<String, Object> e : source.entrySet()) {\n          if (KEY.equals(e.getKey())) {\n            continue;\n          }\n          result.put(e.getKey(), new StringByteIterator((String) e.getValue()));\n        }\n      }\n\n      return Status.OK;\n    } catch (final Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status update(final String table, final String key, final Map<String, ByteIterator> values) {\n    try {\n      final Response searchResponse = search(table, key);\n      final int statusCode = searchResponse.getStatusLine().getStatusCode();\n      if (statusCode == 404) {\n        return Status.NOT_FOUND;\n      } else if (statusCode != HttpStatus.SC_OK) {\n        return Status.ERROR;\n      }\n\n      final Map<String, Object> map = map(searchResponse);\n      @SuppressWarnings(\"unchecked\") final Map<String, Object> hits = (Map<String, Object>) map.get(\"hits\");\n      final int total = (int) hits.get(\"total\");\n      if (total == 0) {\n        return Status.NOT_FOUND;\n      }\n      @SuppressWarnings(\"unchecked\") final Map<String, Object> hit =\n       
       (Map<String, Object>) ((List<Object>) hits.get(\"hits\")).get(0);\n      @SuppressWarnings(\"unchecked\") final Map<String, Object> source = (Map<String, Object>) hit.get(\"_source\");\n      for (final Map.Entry<String, String> entry : StringByteIterator.getStringMap(values).entrySet()) {\n        source.put(entry.getKey(), entry.getValue());\n      }\n      final Map<String, String> params = emptyMap();\n      final Response response = restClient.performRequest(\n              \"PUT\",\n              \"/\" + indexKey + \"/\" + table + \"/\" + hit.get(\"_id\"),\n              params,\n              new NStringEntity(new ObjectMapper().writeValueAsString(source), ContentType.APPLICATION_JSON));\n      if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {\n        return Status.ERROR;\n      }\n\n      if (!isRefreshNeeded) {\n        synchronized (this) {\n          isRefreshNeeded = true;\n        }\n      }\n\n      return Status.OK;\n    } catch (final Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status scan(\n      final String table,\n      final String startkey,\n      final int recordcount,\n      final Set<String> fields,\n      final Vector<HashMap<String, ByteIterator>> result) {\n    try {\n      final Response response;\n      try (XContentBuilder builder = jsonBuilder()) {\n        builder.startObject();\n        builder.startObject(\"query\");\n        builder.startObject(\"range\");\n        builder.startObject(KEY);\n        builder.field(\"gte\", startkey);\n        builder.endObject();\n        builder.endObject();\n        builder.endObject();\n        builder.field(\"size\", recordcount);\n        builder.endObject();\n        response = search(table, builder);\n        @SuppressWarnings(\"unchecked\") final Map<String, Object> map = map(response);\n        @SuppressWarnings(\"unchecked\") final Map<String, Object> hits = (Map<String, Object>)map.get(\"hits\");\n 
       @SuppressWarnings(\"unchecked\") final List<Map<String, Object>> list =\n                (List<Map<String, Object>>) hits.get(\"hits\");\n\n        for (final Map<String, Object> hit : list) {\n          @SuppressWarnings(\"unchecked\") final Map<String, Object> source = (Map<String, Object>)hit.get(\"_source\");\n          final HashMap<String, ByteIterator> entry;\n          if (fields != null) {\n            entry = new HashMap<>(fields.size());\n            for (final String field : fields) {\n              entry.put(field, new StringByteIterator((String) source.get(field)));\n            }\n          } else {\n            entry = new HashMap<>(hit.size());\n            for (final Map.Entry<String, Object> field : source.entrySet()) {\n              if (KEY.equals(field.getKey())) {\n                continue;\n              }\n              entry.put(field.getKey(), new StringByteIterator((String) field.getValue()));\n            }\n          }\n          result.add(entry);\n        }\n      }\n      return Status.OK;\n    } catch (final Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  private void refreshIfNeeded() throws IOException {\n    if (isRefreshNeeded) {\n      final boolean refresh;\n      synchronized (this) {\n        if (isRefreshNeeded) {\n          refresh = true;\n          isRefreshNeeded = false;\n        } else {\n          refresh = false;\n        }\n      }\n      if (refresh) {\n        restClient.performRequest(\"POST\", \"/\" + indexKey + \"/_refresh\");\n      }\n    }\n  }\n\n  private Response search(final String table, final String key) throws IOException {\n    try (XContentBuilder builder = jsonBuilder()) {\n      builder.startObject();\n      builder.startObject(\"query\");\n      builder.startObject(\"term\");\n      builder.field(KEY, key);\n      builder.endObject();\n      builder.endObject();\n      builder.endObject();\n      return search(table, builder);\n    }\n  }\n\n  
private Response search(final String table, final XContentBuilder builder) throws IOException {\n    refreshIfNeeded();\n    final Map<String, String> params = emptyMap();\n    final StringEntity entity = new StringEntity(builder.string());\n    final Header header = new BasicHeader(\"content-type\", ContentType.APPLICATION_JSON.toString());\n    return restClient.performRequest(\"GET\", \"/\" + indexKey + \"/\" + table + \"/_search\", params, entity, header);\n  }\n\n  private Map<String, Object> map(final Response response) throws IOException {\n    try (InputStream is = response.getEntity().getContent()) {\n      final ObjectMapper mapper = new ObjectMapper();\n      @SuppressWarnings(\"unchecked\") final Map<String, Object> map = mapper.readValue(is, Map.class);\n      return map;\n    }\n  }\n\n}\n"
  },
  {
    "path": "elasticsearch5/src/main/java/com/yahoo/ycsb/db/elasticsearch5/package-info.java",
    "content": "/*\n * Copyright (c) 2017 YCSB Contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for \n * <a href=\"https://www.elastic.co/products/elasticsearch\">Elasticsearch</a>.\n */\npackage com.yahoo.ycsb.db.elasticsearch5;\n\n"
  },
  {
    "path": "elasticsearch5/src/main/resources/log4j2.properties",
    "content": "appender.console.type = Console\nappender.console.name = console\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n\nappender.console.targetStr = SYSTEM_ERR\n\nrootLogger.level = info\nrootLogger.appenderRef.console.ref = console\n"
  },
  {
    "path": "elasticsearch5/src/test/java/com/yahoo/ycsb/db/elasticsearch5/ElasticsearchClientIT.java",
    "content": "/**\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.elasticsearch5;\n\nimport com.yahoo.ycsb.DB;\n\npublic class ElasticsearchClientIT extends ElasticsearchIntegTestBase {\n\n    @Override\n    DB newDB() {\n        return new ElasticsearchClient();\n    }\n\n}\n"
  },
  {
    "path": "elasticsearch5/src/test/java/com/yahoo/ycsb/db/elasticsearch5/ElasticsearchIntegTestBase.java",
    "content": "package com.yahoo.ycsb.db.elasticsearch5;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.Client;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport java.util.HashMap;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport static org.junit.Assert.assertEquals;\n\npublic abstract class ElasticsearchIntegTestBase {\n\n    private DB db;\n\n    abstract DB newDB();\n\n    private final static HashMap<String, ByteIterator> MOCK_DATA;\n    private final static String MOCK_TABLE = \"MOCK_TABLE\";\n\n    static {\n        MOCK_DATA = new HashMap<>(10);\n        for (int i = 1; i <= 10; i++) {\n            MOCK_DATA.put(\"field\" + i, new StringByteIterator(\"value\" + i));\n        }\n    }\n\n    @Before\n    public void setUp() throws DBException {\n        final Properties props = new Properties();\n        props.put(\"es.new_index\", \"true\");\n        props.put(\"es.setting.cluster.name\", \"test\");\n        db = newDB();\n        db.setProperties(props);\n        db.init();\n        for (int i = 0; i < 16; i++) {\n            db.insert(MOCK_TABLE, Integer.toString(i), MOCK_DATA);\n        }\n    }\n\n    @After\n    public void tearDown() throws DBException {\n        db.cleanup();\n    }\n\n    @Test\n    public void testInsert() {\n        final Status result = db.insert(MOCK_TABLE, \"0\", MOCK_DATA);\n        assertEquals(Status.OK, result);\n    }\n\n    /**\n     * Test of delete method, of class ElasticsearchClient.\n     */\n    @Test\n    public void testDelete() {\n        final Status result = db.delete(MOCK_TABLE, \"1\");\n        assertEquals(Status.OK, result);\n    }\n\n    /**\n     * Test of read method, of class ElasticsearchClient.\n     */\n    @Test\n    public void testRead() {\n        final Set<String> fields = 
MOCK_DATA.keySet();\n        final HashMap<String, ByteIterator> resultParam = new HashMap<>(10);\n        final Status result = db.read(MOCK_TABLE, \"1\", fields, resultParam);\n        assertEquals(Status.OK, result);\n    }\n\n    /**\n     * Test of update method, of class ElasticsearchClient.\n     */\n    @Test\n    public void testUpdate() {\n        final HashMap<String, ByteIterator> newValues = new HashMap<>(10);\n\n        for (int i = 1; i <= 10; i++) {\n            newValues.put(\"field\" + i, new StringByteIterator(\"newvalue\" + i));\n        }\n\n        final Status updateResult = db.update(MOCK_TABLE, \"1\", newValues);\n        assertEquals(Status.OK, updateResult);\n\n        // validate that the values changed\n        final HashMap<String, ByteIterator> resultParam = new HashMap<>(10);\n        final Status readResult = db.read(MOCK_TABLE, \"1\", MOCK_DATA.keySet(), resultParam);\n        assertEquals(Status.OK, readResult);\n\n        for (int i = 1; i <= 10; i++) {\n            assertEquals(\"newvalue\" + i, resultParam.get(\"field\" + i).toString());\n        }\n\n    }\n\n    /**\n     * Test of scan method, of class ElasticsearchClient.\n     */\n    @Test\n    public void testScan() {\n        final int recordcount = 10;\n        final Set<String> fields = MOCK_DATA.keySet();\n        final Vector<HashMap<String, ByteIterator>> resultParam = new Vector<>(10);\n        final Status result = db.scan(MOCK_TABLE, \"1\", recordcount, fields, resultParam);\n        assertEquals(Status.OK, result);\n\n        assertEquals(10, resultParam.size());\n    }\n\n}\n"
  },
  {
    "path": "elasticsearch5/src/test/java/com/yahoo/ycsb/db/elasticsearch5/ElasticsearchRestClientIT.java",
    "content": "/**\n * Copyright (c) 2017 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.elasticsearch5;\n\nimport com.yahoo.ycsb.DB;\n\npublic class ElasticsearchRestClientIT extends ElasticsearchIntegTestBase {\n\n  @Override\n  DB newDB() {\n    return new ElasticsearchRestClient();\n  }\n\n}\n"
  },
  {
    "path": "geode/README.md",
    "content": "<!--\nCopyright (c) 2014 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on Apache Geode (incubating).\n\n### Get Apache Geode\n\nYou can download Geode from http://geode.incubator.apache.org/releases/\n\n#### Start Geode Cluster\n\nUse the Geode shell (gfsh) to start the cluster. You will need to start\nat-least one locator which is a member discovery service and one or more\nGeode servers.\n\nLaunch gfsh:\n\n```\n$ cd $GEODE_HOME\n$ ./bin/gfsh\n```\n\nStart a locator and two servers:\n\n```\ngfsh> start locator --name=locator1\ngfsh> configure pdx --read-serialized=true\ngfsh> start server --name=server1 --server-port=40404\ngfsh> start server --name=server2 --server-port=40405\n```\n\nCreate the \"usertable\" region required by YCSB driver:\n```\ngfsh>create region --name=usertable --type=PARTITION\n```\ngfsh has tab autocompletion, so you can play around with various options.\n\n### Start YCSB workload\n\nFrom your YCSB directory, you can run the ycsb workload as follows\n```\n./bin/ycsb load geode -P workloads/workloada -p geode.locator=host[port]\n```\n(default port of locator is 10334).\n\nIn the default mode, ycsb geode driver will connect as a client to the geode\ncluster. 
To make the YCSB driver a peer member of the distributed system,\nuse the property\n`-p geode.topology=p2p -p geode.locator=host[port]`\n\nNote:\nFor update workloads, please use the property `-p writeallfields=true`\n"
  },
  {
    "path": "geode/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n  \n  <artifactId>geode-binding</artifactId>\n  <name>Geode DB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.apache.geode</groupId>\n      <artifactId>geode-core</artifactId>\n      <version>${geode.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "geode/src/main/java/com/yahoo/ycsb/db/GeodeClient.java",
    "content": "/**\n * Copyright (c) 2013 - 2016 YCSB Contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport org.apache.geode.cache.*;\nimport org.apache.geode.cache.client.ClientCache;\nimport org.apache.geode.cache.client.ClientCacheFactory;\nimport org.apache.geode.cache.client.ClientRegionFactory;\nimport org.apache.geode.cache.client.ClientRegionShortcut;\nimport org.apache.geode.internal.admin.remote.DistributionLocatorId;\nimport org.apache.geode.internal.cache.GemFireCacheImpl;\nimport org.apache.geode.pdx.JSONFormatter;\nimport org.apache.geode.pdx.PdxInstance;\nimport org.apache.geode.pdx.PdxInstanceFactory;\nimport com.yahoo.ycsb.*;\n\nimport java.util.*;\n\n/**\n * Apache Geode (incubating) client for the YCSB benchmark.<br />\n * <p>By default acts as a Geode client and tries to connect\n * to Geode cache server running on localhost with default\n * cache server port. Hostname and port of a Geode cacheServer\n * can be provided using <code>geode.serverport=port</code> and <code>\n * geode.serverhost=host</code> properties on YCSB command line.\n * A locator may also be used for discovering a cacheServer\n * by using the property <code>geode.locator=host[port]</code></p>\n * <p>\n * <p>To run this client in a peer-to-peer topology with other Geode\n * nodes, use the property <code>geode.topology=p2p</code>. 
Running\n * in p2p mode will enable embedded caching in this client.</p>\n * <p>\n * <p>YCSB by default does its operations against \"usertable\". When running\n * as a client this is a <code>ClientRegionShortcut.PROXY</code> region,\n * when running in p2p mode it is a <code>RegionShortcut.PARTITION</code>\n * region. A cache.xml defining \"usertable\" region can be placed in the\n * working directory to override these region definitions.</p>\n */\npublic class GeodeClient extends DB {\n  /**\n   * property name of the port where Geode server is listening for connections.\n   */\n  private static final String SERVERPORT_PROPERTY_NAME = \"geode.serverport\";\n\n  /**\n   * property name of the host where Geode server is running.\n   */\n  private static final String SERVERHOST_PROPERTY_NAME = \"geode.serverhost\";\n\n  /**\n   * default value of {@link #SERVERHOST_PROPERTY_NAME}.\n   */\n  private static final String SERVERHOST_PROPERTY_DEFAULT = \"localhost\";\n\n  /**\n   * property name to specify a Geode locator. 
This property can be used in both\n   * client server and p2p topology\n   */\n  private static final String LOCATOR_PROPERTY_NAME = \"geode.locator\";\n\n  /**\n   * property name to specify Geode topology.\n   */\n  private static final String TOPOLOGY_PROPERTY_NAME = \"geode.topology\";\n\n  /**\n   * value of {@value #TOPOLOGY_PROPERTY_NAME} when peer to peer topology should be used.\n   * (client-server topology is default)\n   */\n  private static final String TOPOLOGY_P2P_VALUE = \"p2p\";\n\n  private GemFireCache cache;\n\n  /**\n   * true if ycsb client runs as a client to a Geode cache server.\n   */\n  private boolean isClient;\n\n  @Override\n  public void init() throws DBException {\n    Properties props = getProperties();\n    // hostName where Geode cacheServer is running\n    String serverHost = null;\n    // port of Geode cacheServer\n    int serverPort = 0;\n    String locatorStr = null;\n\n    if (props != null && !props.isEmpty()) {\n      String serverPortStr = props.getProperty(SERVERPORT_PROPERTY_NAME);\n      if (serverPortStr != null) {\n        serverPort = Integer.parseInt(serverPortStr);\n      }\n      serverHost = props.getProperty(SERVERHOST_PROPERTY_NAME, SERVERHOST_PROPERTY_DEFAULT);\n      locatorStr = props.getProperty(LOCATOR_PROPERTY_NAME);\n\n      String topology = props.getProperty(TOPOLOGY_PROPERTY_NAME);\n      if (topology != null && topology.equals(TOPOLOGY_P2P_VALUE)) {\n        CacheFactory cf = new CacheFactory();\n        if (locatorStr != null) {\n          cf.set(\"locators\", locatorStr);\n        }\n        cache = cf.create();\n        isClient = false;\n        return;\n      }\n    }\n    isClient = true;\n    DistributionLocatorId locator = null;\n    if (locatorStr != null) {\n      locator = new DistributionLocatorId(locatorStr);\n    }\n    ClientCacheFactory ccf = new ClientCacheFactory();\n    ccf.setPdxReadSerialized(true);\n    if (serverPort != 0) {\n      ccf.addPoolServer(serverHost, serverPort);\n   
 } else if (locator != null) {\n      ccf.addPoolLocator(locator.getHost().getCanonicalHostName(), locator.getPort());\n    }\n    cache = ccf.create();\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n      Map<String, ByteIterator> result) {\n    Region<String, PdxInstance> r = getRegion(table);\n    PdxInstance val = r.get(key);\n    if (val != null) {\n      if (fields == null) {\n        for (String fieldName : val.getFieldNames()) {\n          result.put(fieldName, new ByteArrayByteIterator((byte[]) val.getField(fieldName)));\n        }\n      } else {\n        for (String field : fields) {\n          result.put(field, new ByteArrayByteIterator((byte[]) val.getField(field)));\n        }\n      }\n      return Status.OK;\n    }\n    return Status.ERROR;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n                     Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    // Geode does not support scan\n    return Status.ERROR;\n  }\n\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    getRegion(table).put(key, convertToBytearrayMap(values));\n    return Status.OK;\n  }\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    getRegion(table).put(key, convertToBytearrayMap(values));\n    return Status.OK;\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    getRegion(table).destroy(key);\n    return Status.OK;\n  }\n\n  private PdxInstance convertToBytearrayMap(Map<String, ByteIterator> values) {\n    GemFireCacheImpl gci = (GemFireCacheImpl) CacheFactory.getAnyInstance();\n    PdxInstanceFactory pdxInstanceFactory = gci.createPdxInstanceFactory(JSONFormatter.JSON_CLASSNAME);\n\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      pdxInstanceFactory.writeByteArray(entry.getKey(), entry.getValue().toArray());\n   
 }\n    return pdxInstanceFactory.create();\n  }\n\n  private Region<String, PdxInstance> getRegion(String table) {\n    Region<String, PdxInstance> r = cache.getRegion(table);\n    if (r == null) {\n      try {\n        if (isClient) {\n          ClientRegionFactory<String, PdxInstance> crf =\n              ((ClientCache) cache).createClientRegionFactory(ClientRegionShortcut.PROXY);\n          r = crf.create(table);\n        } else {\n          RegionFactory<String, PdxInstance> rf = ((Cache) cache).createRegionFactory(RegionShortcut.PARTITION);\n          r = rf.create(table);\n        }\n      } catch (RegionExistsException e) {\n        // another thread created the region\n        r = cache.getRegion(table);\n      }\n    }\n    return r;\n  }\n}\n"
  },
  {
    "path": "geode/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2014-2016, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * YCSB binding for <a href=\"https://geode.incubator.apache.org/\">Apache Geode (incubating)</a>.\n */\npackage com.yahoo.ycsb.db;"
  },
  {
    "path": "googlebigtable/README.md",
    "content": "<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# Google Bigtable  Driver for YCSB\n\nThis driver provides a YCSB workload binding for Google's hosted Bigtable, the inspiration for a number of key-value stores like HBase and Cassandra. The Bigtable Java client provides both Protobuf based GRPC and HBase client APIs. This binding implements the Protobuf API for testing the native client. To test Bigtable using the HBase API, see the `hbase10` binding.\n\n## Quickstart\n\n### 1. Setup a Bigtable Instance\n\nLogin to the Google Cloud Console and follow the [Creating Instance](https://cloud.google.com/bigtable/docs/creating-instance) steps. Make a note of your instance ID and project ID.\n\n### 2. Launch the Bigtable Shell\n\nFrom the Cloud Console, launch a shell and follow the [Quickstart](https://cloud.google.com/bigtable/docs/quickstart) up to step 4 where you launch the HBase shell.\n\n### 3. Create a Table\n\nFor best results, use the pre-splitting strategy recommended in [HBASE-4163](https://issues.apache.org/jira/browse/HBASE-4163):\n\n```\nhbase(main):001:0> n_splits = 200 # HBase recommends (10 * number of regionservers)\nhbase(main):002:0> create 'usertable', 'cf', {SPLITS => (1..n_splits).map {|i| \"user#{1000+i*(9999-1000)/n_splits}\"}}\n```\n\nMake a note of the column family, in this example it's `cf``.\n\n### 4. 
Download JSON Credentials\n\nFollow these instructions for [Generating a JSON key](https://cloud.google.com/bigtable/docs/installing-hbase-shell#service-account) and save it to your host.\n\n### 5. Load a Workload\n\nSwitch to the root of the YCSB repo, choose the workload you want to run, and `load` it first. With the CLI you must provide the column family and instance properties to load.\n\n```\nbin/ycsb load googlebigtable -p columnfamily=cf -p google.bigtable.project.id=<PROJECT_ID> -p google.bigtable.instance.id=<INSTANCE> -p google.bigtable.auth.json.keyfile=<PATH_TO_JSON_KEY> -P workloads/workloada\n```\n\nMake sure to replace the variables in the angle brackets above with the proper values from your instance. Additional configuration parameters are available below.\n\nThe `load` step only executes inserts into the datastore. After loading data, run the same workload to mix reads with writes.\n\n```\nbin/ycsb run googlebigtable -p columnfamily=cf -p google.bigtable.project.id=<PROJECT_ID> -p google.bigtable.instance.id=<INSTANCE> -p google.bigtable.auth.json.keyfile=<PATH_TO_JSON_KEY> -P workloads/workloada\n```\n\n## Configuration Options\n\nThe following options can be configured via the CLI (using the `-p` parameter) or hbase-site.xml (add the HBase config directory to YCSB's class path via the CLI). Check the [Cloud Bigtable Client](https://github.com/manolama/cloud-bigtable-client) project for additional tuning parameters.\n\n* `columnfamily`: (Required) The Bigtable column family to target.\n* `google.bigtable.project.id`: (Required) The ID of a Bigtable project.\n* `google.bigtable.instance.id`: (Required) The name of a Bigtable instance.\n* `google.bigtable.auth.service.account.enable`: Whether or not to authenticate with a service account. The default is true.\n* `google.bigtable.auth.json.keyfile`: (Required) A service account key for authentication.\n* `debug`: If true, prints debug information to standard out. 
The default is false.\n* `clientbuffering`: Whether or not to use client side buffering and batching of write operations. This can significantly improve performance; it defaults to false.\n"
  },
  {
    "path": "googlebigtable/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent/</relativePath>\n  </parent>\n\n  <artifactId>googlebigtable-binding</artifactId>\n  <name>Google Cloud Bigtable Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.google.cloud.bigtable</groupId>\n      <artifactId>bigtable-hbase-1.0</artifactId>\n      <version>${googlebigtable.version}</version>\n    </dependency>\n    \n    <dependency>\n      <groupId>io.netty</groupId>\n      <artifactId>netty-tcnative-boringssl-static</artifactId>\n      <version>1.1.33.Fork26</version>\n    </dependency>\n\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    \n  </dependencies>\n</project>\n"
  },
  {
    "path": "googlebigtable/src/main/java/com/yahoo/ycsb/db/GoogleBigtableClient.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n * \n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport java.io.IOException;\nimport java.nio.charset.Charset;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Iterator;\nimport java.util.List;\nimport java.util.Map.Entry;\nimport java.util.Properties;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.hbase.HBaseConfiguration;\nimport org.apache.hadoop.hbase.util.Bytes;\n\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.concurrent.ExecutionException;\n\nimport com.google.bigtable.repackaged.com.google.protobuf.ByteString;\nimport com.google.bigtable.v2.Column;\nimport com.google.bigtable.v2.Family;\nimport com.google.bigtable.v2.MutateRowRequest;\nimport com.google.bigtable.v2.Mutation;\nimport com.google.bigtable.v2.ReadRowsRequest;\nimport com.google.bigtable.v2.Row;\nimport com.google.bigtable.v2.RowFilter;\nimport com.google.bigtable.v2.RowRange;\nimport com.google.bigtable.v2.RowSet;\nimport com.google.bigtable.v2.Mutation.DeleteFromRow;\nimport com.google.bigtable.v2.Mutation.SetCell;\nimport com.google.bigtable.v2.RowFilter.Chain.Builder;\nimport com.google.cloud.bigtable.config.BigtableOptions;\nimport com.google.cloud.bigtable.grpc.BigtableDataClient;\nimport 
com.google.cloud.bigtable.grpc.BigtableSession;\nimport com.google.cloud.bigtable.grpc.BigtableTableName;\nimport com.google.cloud.bigtable.grpc.async.AsyncExecutor;\nimport com.google.cloud.bigtable.grpc.async.BulkMutation;\nimport com.google.cloud.bigtable.hbase.BigtableOptionsFactory;\nimport com.google.cloud.bigtable.util.ByteStringer;\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\n/**\n * Google Bigtable Proto client for YCSB framework.\n * \n * Bigtable offers two APIs. These include a native Protobuf GRPC API as well as \n * an HBase API wrapper for the GRPC API. This client implements the Protobuf \n * API to test the underlying calls wrapped up in the HBase API. To use the \n * HBase API, see the hbase10 client binding.\n */\npublic class GoogleBigtableClient extends com.yahoo.ycsb.DB {\n  public static final Charset UTF8_CHARSET = Charset.forName(\"UTF8\");\n  \n  /** Property names for the CLI. */\n  private static final String ASYNC_MUTATOR_MAX_MEMORY = \"mutatorMaxMemory\";\n  private static final String ASYNC_MAX_INFLIGHT_RPCS = \"mutatorMaxInflightRPCs\";\n  private static final String CLIENT_SIDE_BUFFERING = \"clientbuffering\";\n  \n  /** Tracks running thread counts so we know when to close the session. */ \n  private static int threadCount = 0;\n  \n  /** This will load the hbase-site.xml config file and/or store CLI options. */\n  private static final Configuration CONFIG = HBaseConfiguration.create();\n  \n  /** Print debug information to standard out. */\n  private boolean debug = false;\n  \n  /** Global Bigtable native API objects. */ \n  private static BigtableOptions options;\n  private static BigtableSession session;\n  \n  /** Thread local Bigtable native API objects. */\n  private BigtableDataClient client;\n  private AsyncExecutor asyncExecutor;\n  \n  /** The column family used for the workload. 
*/\n  private byte[] columnFamilyBytes;\n  \n  /** Cache for the last table name/ID to avoid byte conversions. */\n  private String lastTable = \"\";\n  private byte[] lastTableBytes;\n  \n  /**\n   * If true, buffer mutations on the client. For measuring insert/update/delete \n   * latencies, client side buffering should be disabled.\n   */\n  private boolean clientSideBuffering = false;\n\n  private BulkMutation bulkMutation;\n\n  @Override\n  public void init() throws DBException {\n    Properties props = getProperties();\n    \n    // Defaults the user can override if needed\n    if (getProperties().containsKey(ASYNC_MUTATOR_MAX_MEMORY)) {\n      CONFIG.set(BigtableOptionsFactory.BIGTABLE_BUFFERED_MUTATOR_MAX_MEMORY_KEY,\n          getProperties().getProperty(ASYNC_MUTATOR_MAX_MEMORY));\n    }\n    if (getProperties().containsKey(ASYNC_MAX_INFLIGHT_RPCS)) {\n      CONFIG.set(BigtableOptionsFactory.BIGTABLE_BULK_MAX_ROW_KEY_COUNT,\n          getProperties().getProperty(ASYNC_MAX_INFLIGHT_RPCS));\n    }\n    // make it easy on ourselves by copying all CLI properties into the config object.\n    final Iterator<Entry<Object, Object>> it = props.entrySet().iterator();\n    while (it.hasNext()) {\n      Entry<Object, Object> entry = it.next();\n      CONFIG.set((String)entry.getKey(), (String)entry.getValue());\n    }\n    \n    clientSideBuffering = getProperties().getProperty(CLIENT_SIDE_BUFFERING, \"false\")\n        .equals(\"true\") ? true : false;\n    \n    System.err.println(\"Running Google Bigtable with Proto API\" +\n         (clientSideBuffering ? 
\" and client side buffering.\" : \".\"));\n    \n    synchronized (CONFIG) {\n      ++threadCount;\n      if (session == null) {\n        try {\n          options = BigtableOptionsFactory.fromConfiguration(CONFIG);\n          session = new BigtableSession(options);\n          // important to instantiate the first client here, otherwise the\n          // other threads may receive an NPE from the options when they try\n          // to read the cluster name.\n          client = session.getDataClient();\n        } catch (IOException e) {\n          throw new DBException(\"Error loading options from config: \", e);\n        }\n      } else {\n        client = session.getDataClient();\n      }\n      \n      if (clientSideBuffering) {\n        asyncExecutor = session.createAsyncExecutor();\n      }\n    }\n    \n    if ((getProperties().getProperty(\"debug\") != null)\n        && (getProperties().getProperty(\"debug\").compareTo(\"true\") == 0)) {\n      debug = true;\n    }\n    \n    final String columnFamily = getProperties().getProperty(\"columnfamily\");\n    if (columnFamily == null) {\n      System.err.println(\"Error, must specify a columnfamily for Bigtable table\");\n      throw new DBException(\"No columnfamily specified\");\n    }\n    columnFamilyBytes = Bytes.toBytes(columnFamily);\n  }\n  \n  @Override\n  public void cleanup() throws DBException {\n    if (bulkMutation != null) {\n      try {\n        bulkMutation.flush();\n      } catch(RuntimeException e){\n        throw new DBException(e);\n      }\n    }\n    if (asyncExecutor != null) {\n      try {\n        asyncExecutor.flush();\n      } catch (IOException e) {\n        throw new DBException(e);\n      }\n    }\n    synchronized (CONFIG) {\n      --threadCount;\n      if (threadCount <= 0) {\n        try {\n          session.close();\n        } catch (IOException e) {\n          throw new DBException(e);\n        }\n      }\n    }\n  }\n  \n  @Override\n  public Status read(String table, String 
key, Set<String> fields,\n                     Map<String, ByteIterator> result) {\n    if (debug) {\n      System.out.println(\"Doing read from Bigtable columnfamily \" \n          + new String(columnFamilyBytes));\n      System.out.println(\"Doing read for key: \" + key);\n    }\n    \n    setTable(table);\n    \n    RowFilter filter = RowFilter.newBuilder()\n        .setFamilyNameRegexFilterBytes(ByteStringer.wrap(columnFamilyBytes))\n        .build();\n    if (fields != null && fields.size() > 0) {\n      Builder filterChain = RowFilter.Chain.newBuilder();\n      filterChain.addFilters(filter);\n      filterChain.addFilters(RowFilter.newBuilder()\n          .setCellsPerColumnLimitFilter(1)\n          .build());\n      int count = 0;\n      // usually \"field#\" so pre-alloc\n      final StringBuilder regex = new StringBuilder(fields.size() * 6);\n      for (final String field : fields) {\n        if (count++ > 0) {\n          regex.append(\"|\");\n        }\n        regex.append(field);\n      }\n      filterChain.addFilters(RowFilter.newBuilder()\n          .setColumnQualifierRegexFilter(\n              ByteStringer.wrap(regex.toString().getBytes()))).build();\n      filter = RowFilter.newBuilder().setChain(filterChain.build()).build();\n    }\n    \n    final ReadRowsRequest.Builder rrr = ReadRowsRequest.newBuilder()\n        .setTableNameBytes(ByteStringer.wrap(lastTableBytes))\n        .setFilter(filter)\n        .setRows(RowSet.newBuilder()\n          .addRowKeys(ByteStringer.wrap(key.getBytes())));\n    \n    List<Row> rows;\n    try {\n      rows = client.readRowsAsync(rrr.build()).get();\n      if (rows == null || rows.isEmpty()) {\n        return Status.NOT_FOUND;\n      }\n      for (final Row row : rows) {\n        for (final Family family : row.getFamiliesList()) {\n          if (Arrays.equals(family.getNameBytes().toByteArray(), columnFamilyBytes)) {\n            for (final Column column : family.getColumnsList()) {\n              // we should only 
have a single cell per column\n              result.put(column.getQualifier().toString(UTF8_CHARSET), \n                  new ByteArrayByteIterator(column.getCells(0).getValue().toByteArray()));\n              if (debug) {\n                System.out.println(\n                    \"Result for field: \" + column.getQualifier().toString(UTF8_CHARSET)\n                        + \" is: \" + column.getCells(0).getValue().toString(UTF8_CHARSET));\n              }\n            }\n          }\n        }\n      }\n      \n      return Status.OK;\n    } catch (InterruptedException e) {\n      System.err.println(\"Interrupted during get: \" + e);\n      Thread.currentThread().interrupt();\n      return Status.ERROR;\n    } catch (ExecutionException e) {\n      System.err.println(\"Exception during get: \" + e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    setTable(table);\n    \n    RowFilter filter = RowFilter.newBuilder()\n        .setFamilyNameRegexFilterBytes(ByteStringer.wrap(columnFamilyBytes))\n        .build();\n    if (fields != null && fields.size() > 0) {\n      Builder filterChain = RowFilter.Chain.newBuilder();\n      filterChain.addFilters(filter);\n      filterChain.addFilters(RowFilter.newBuilder()\n          .setCellsPerColumnLimitFilter(1)\n          .build());\n      int count = 0;\n      // usually \"field#\" so pre-alloc\n      final StringBuilder regex = new StringBuilder(fields.size() * 6);\n      for (final String field : fields) {\n        if (count++ > 0) {\n          regex.append(\"|\");\n        }\n        regex.append(field);\n      }\n      filterChain.addFilters(RowFilter.newBuilder()\n          .setColumnQualifierRegexFilter(\n              ByteStringer.wrap(regex.toString().getBytes()))).build();\n      filter = RowFilter.newBuilder().setChain(filterChain.build()).build();\n    }\n    
\n    final RowRange range = RowRange.newBuilder()\n        .setStartKeyClosed(ByteStringer.wrap(startkey.getBytes()))\n        .build();\n\n    final RowSet rowSet = RowSet.newBuilder()\n        .addRowRanges(range)\n        .build();\n\n    final ReadRowsRequest.Builder rrr = ReadRowsRequest.newBuilder()\n        .setTableNameBytes(ByteStringer.wrap(lastTableBytes))\n        .setFilter(filter)\n        .setRows(rowSet);\n    \n    List<Row> rows;\n    try {\n      rows = client.readRowsAsync(rrr.build()).get();\n      if (rows == null || rows.isEmpty()) {\n        return Status.NOT_FOUND;\n      }\n      int numResults = 0;\n      \n      for (final Row row : rows) {\n        final HashMap<String, ByteIterator> rowResult =\n            new HashMap<String, ByteIterator>(fields != null ? fields.size() : 10);\n        \n        for (final Family family : row.getFamiliesList()) {\n          if (Arrays.equals(family.getNameBytes().toByteArray(), columnFamilyBytes)) {\n            for (final Column column : family.getColumnsList()) {\n              // we should only have a single cell per column\n              rowResult.put(column.getQualifier().toString(UTF8_CHARSET), \n                  new ByteArrayByteIterator(column.getCells(0).getValue().toByteArray()));\n              if (debug) {\n                System.out.println(\n                    \"Result for field: \" + column.getQualifier().toString(UTF8_CHARSET)\n                        + \" is: \" + column.getCells(0).getValue().toString(UTF8_CHARSET));\n              }\n            }\n          }\n        }\n        \n        result.add(rowResult);\n        \n        numResults++;\n        if (numResults >= recordcount) {// if hit recordcount, bail out\n          break;\n        }\n      }\n      return Status.OK;\n    } catch (InterruptedException e) {\n      System.err.println(\"Interrupted during scan: \" + e);\n      Thread.currentThread().interrupt();\n      return Status.ERROR;\n    } catch (ExecutionException 
e) {\n      System.err.println(\"Exception during scan: \" + e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status update(String table, String key,\n                       Map<String, ByteIterator> values) {\n    if (debug) {\n      System.out.println(\"Setting up put for key: \" + key);\n    }\n    \n    setTable(table);\n    \n    final MutateRowRequest.Builder rowMutation = MutateRowRequest.newBuilder();\n    rowMutation.setRowKey(ByteString.copyFromUtf8(key));\n    rowMutation.setTableNameBytes(ByteStringer.wrap(lastTableBytes));\n    \n    for (final Entry<String, ByteIterator> entry : values.entrySet()) {\n      final Mutation.Builder mutationBuilder = rowMutation.addMutationsBuilder();\n      final SetCell.Builder setCellBuilder = mutationBuilder.getSetCellBuilder();\n      \n      setCellBuilder.setFamilyNameBytes(ByteStringer.wrap(columnFamilyBytes));\n      setCellBuilder.setColumnQualifier(ByteStringer.wrap(entry.getKey().getBytes()));\n      setCellBuilder.setValue(ByteStringer.wrap(entry.getValue().toArray()));\n\n      // Bigtable uses a 1ms granularity\n      setCellBuilder.setTimestampMicros(System.currentTimeMillis() * 1000);\n    }\n    \n    try {\n      if (clientSideBuffering) {\n        bulkMutation.add(rowMutation.build());\n      } else {\n        client.mutateRow(rowMutation.build());\n      }\n      return Status.OK;\n    } catch (RuntimeException e) {\n      System.err.println(\"Failed to insert key: \" + key + \" \" + e.getMessage());\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status insert(String table, String key,\n                       Map<String, ByteIterator> values) {\n    return update(table, key, values);\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    if (debug) {\n      System.out.println(\"Doing delete for key: \" + key);\n    }\n    \n    setTable(table);\n    \n    final MutateRowRequest.Builder rowMutation = MutateRowRequest.newBuilder()\n        
.setRowKey(ByteString.copyFromUtf8(key))\n        .setTableNameBytes(ByteStringer.wrap(lastTableBytes));\n    rowMutation.addMutationsBuilder().setDeleteFromRow(\n        DeleteFromRow.getDefaultInstance());\n    \n    try {\n      if (clientSideBuffering) {\n        bulkMutation.add(rowMutation.build());\n      } else {\n        client.mutateRow(rowMutation.build());\n      }\n      return Status.OK;\n    } catch (RuntimeException e) {\n      System.err.println(\"Failed to delete key: \" + key + \" \" + e.getMessage());\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Little helper to set the table byte array. If it's different than the last\n   * table we reset the byte array. Otherwise we just use the existing array.\n   * @param table The table we're operating against\n   */\n  private void setTable(final String table) {\n    if (!lastTable.equals(table)) {\n      lastTable = table;\n      BigtableTableName tableName = options\n          .getInstanceName()\n          .toTableName(table);\n      lastTableBytes = tableName\n          .toString()\n          .getBytes();\n      synchronized(this) {\n        if (bulkMutation != null) {\n          bulkMutation.flush();\n        }\n        bulkMutation = session.createBulkMutation(tableName, asyncExecutor);\n      }\n    }\n  }\n  \n}"
  },
  {
    "path": "googlebigtable/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for Google's <a href=\"https://cloud.google.com/bigtable/\">\n * Bigtable</a>.\n */\npackage com.yahoo.ycsb.db;\n"
  },
  {
    "path": "googledatastore/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors.\nAll rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# Google Cloud Datastore Binding\n\nhttps://cloud.google.com/datastore/docs/concepts/overview?hl=en\n\nPlease refer [here] (https://cloud.google.com/datastore/docs/apis/overview) for more information on\nGoogle Cloud Datastore API.\n\n## Configure\n\n    YCSB_HOME - YCSB home directory\n    DATASTORE_HOME - Google Cloud Datastore YCSB client package files\n\nPlease refer to https://github.com/brianfrankcooper/YCSB/wiki/Using-the-Database-Libraries\nfor more information on setup.\n\n# Benchmark\n\n    $YCSB_HOME/bin/ycsb load googledatastore -P workloads/workloada -P googledatastore.properties\n    $YCSB_HOME/bin/ycsb run googledatastore -P workloads/workloada -P googledatastore.properties\n\n# Properties\n\n    $DATASTORE_HOME/conf/googledatastore.properties\n\n# Details\n\nA. Configuration and setup:\n\nSee this link for instructions about setting up Google Cloud Datastore and\nauthentication:\n\nhttps://cloud.google.com/datastore/docs/activate#accessing_the_datastore_api_from_another_platform\n\nAfter you setup your environment, you will have 3 pieces of information ready:\n- datasetId,\n- service account email, and\n- a private key file in P12 format.\n\nThese will be configured via corresponding properties in the googledatastore.properties file.\n\nB. 
EntityGroupingMode\n\nIn Google Datastore, an entity group is the unit in which the user can\nperform strongly consistent queries on multiple items. However, entity groups\nalso have certain performance limitations, especially on write QPS.\n\nWe support two modes here:\n\n1. [default] One entity per group (ONE_ENTITY_PER_GROUP)\n\nIn this mode, every entity is a \"root\" entity and sits in its own group,\nand every entity group has only one entity. Write QPS is high in this\nmode (and there is no documented limitation on it), but queries across\nmultiple entities are eventually consistent.\n\nWhen this mode is set, every entity is created with no ancestor key (meaning\nthe entity itself is the \"root\" entity).\n\n2. Multiple entities per group (MULTI_ENTITY_PER_GROUP)\n\nIn this mode, all entities in one benchmark run are placed under one\nancestor (root) node and therefore inside one entity group. Queries and scans\nperformed on these entities will be strongly consistent, but write QPS\nwill be subject to the documented limitation (currently 1 QPS).\n\nBecause of the write QPS limit, it's highly recommended that you rate\nlimit your benchmark's test rate to avoid excessive errors.\n\nThe goal of this MULTI_ENTITY_PER_GROUP mode is to allow users to\nbenchmark and understand the performance characteristics of a single entity\ngroup in Google Datastore.\n\nWhile in this mode, one can optionally specify a root key name. If not\nspecified, a default name will be used.\n"
  },
  {
    "path": "googledatastore/conf/googledatastore.properties",
    "content": "# Copyright (c) 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n#\n# Sample property file for Google Cloud Datastore DB client\n\n## Mandatory parameters\n#\n# Your credentials to Google datastore. See README.md for details.\n#\n# googledatastore.datasetId=<string id of your dataset>\n# googledatastore.privateKeyFile=<full path to your private key file>\n# googledatastore.serviceAccountEmail=<Your service account email>\n\n# Google Cloud Datastore's read and update APIs do not support\n# reading or updating a select subset of properties for an entity.\n# (as of version v1beta3)\n# Therefore, it's recommended that you set writeallfields and readallfields\n# to true to get stable and comparable performance numbers.\nwriteallfields = true\nreadallfields = true\n\n## Optional parameters\n#\n# Decides the consistency level of read requests. 
Acceptable values are:\n# EVENTUAL, STRONG (default is STRONG)\n#\n# googledatastore.readConsistency=STRONG\n\n# Decides how we group entities into entity groups.\n# (See the details section in README.md for documentation)\n#\n# googledatastore.entityGroupingMode=ONE_ENTITY_PER_GROUP\n\n# If you set the googledatastore.entityGroupingMode property to\n# MULTI_ENTITY_PER_GROUP, you can optionally specify the name of the root entity\n#\n# googledatastore.rootEntityName=\"YCSB_ROOT_ENTITY\"\n\n# Strongly recommended to set to uniform.\n# requestdistribution = uniform\n\n# Enable/disable debug message, default is false.\n# googledatastore.debug = false\n\n# Skip indexes, default is true.\n# googledatastore.skipIndex = true"
  },
  {
    "path": "googledatastore/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2015-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>googledatastore-binding</artifactId>\n  <name>Google Cloud Datastore Binding</name>\n  <url>https://github.com/GoogleCloudPlatform/google-cloud-datastore</url>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.google.cloud.datastore</groupId>\n      <artifactId>datastore-v1-proto-client</artifactId>\n      <version>1.1.0</version>\n    </dependency>\n    <dependency>\n      <groupId>log4j</groupId>\n      <artifactId>log4j</artifactId>\n      <version>1.2.17</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "googledatastore/src/main/java/com/yahoo/ycsb/db/GoogleDatastoreClient.java",
    "content": "/*\n * Copyright 2015 YCSB contributors. All Rights Reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.google.api.client.auth.oauth2.Credential;\nimport com.google.api.client.googleapis.auth.oauth2.GoogleCredential;\nimport com.google.datastore.v1.*;\nimport com.google.datastore.v1.CommitRequest.Mode;\nimport com.google.datastore.v1.ReadOptions.ReadConsistency;\nimport com.google.datastore.v1.client.Datastore;\nimport com.google.datastore.v1.client.DatastoreException;\nimport com.google.datastore.v1.client.DatastoreFactory;\nimport com.google.datastore.v1.client.DatastoreHelper;\nimport com.google.datastore.v1.client.DatastoreOptions;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport org.apache.log4j.Level;\nimport org.apache.log4j.Logger;\n\nimport java.io.IOException;\nimport java.security.GeneralSecurityException;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport javax.annotation.Nullable;\n\n/**\n * Google Cloud Datastore Client for YCSB.\n */\n\npublic class GoogleDatastoreClient extends DB {\n  /**\n   * Defines a MutationType used in this class.\n   */\n  private enum MutationType {\n    UPSERT,\n    UPDATE,\n    DELETE\n 
 }\n\n  /**\n   * Defines an EntityGroupingMode enum used in this class.\n   */\n  private enum EntityGroupingMode {\n    ONE_ENTITY_PER_GROUP,\n    MULTI_ENTITY_PER_GROUP\n  }\n\n  private static Logger logger =\n      Logger.getLogger(GoogleDatastoreClient.class);\n\n  // Read consistency defaults to \"STRONG\" per YCSB guidance.\n  // User can override this via configuration.\n  private ReadConsistency readConsistency = ReadConsistency.STRONG;\n\n  private EntityGroupingMode entityGroupingMode =\n      EntityGroupingMode.ONE_ENTITY_PER_GROUP;\n\n  private String rootEntityName;\n\n  private Datastore datastore = null;\n\n  private static boolean skipIndex = true;\n\n  /**\n   * Initialize any state for this DB. Called once per DB instance; there is\n   * one DB instance per client thread.\n   */\n  @Override\n  public void init() throws DBException {\n    String debug = getProperties().getProperty(\"googledatastore.debug\", null);\n    if (null != debug && \"true\".equalsIgnoreCase(debug)) {\n      logger.setLevel(Level.DEBUG);\n    }\n\n    String skipIndexString = getProperties().getProperty(\n        \"googledatastore.skipIndex\", null);\n    if (null != skipIndexString && \"false\".equalsIgnoreCase(skipIndexString)) {\n      skipIndex = false;\n    }\n\n    // We need the following 3 essential properties to initialize datastore:\n    //\n    // - DatasetId,\n    // - Path to private key file,\n    // - Service account email address.\n    String datasetId = getProperties().getProperty(\n        \"googledatastore.datasetId\", null);\n    if (datasetId == null) {\n      throw new DBException(\n          \"Required property \\\"datasetId\\\" missing.\");\n    }\n\n    String privateKeyFile = getProperties().getProperty(\n        \"googledatastore.privateKeyFile\", null);\n    String serviceAccountEmail = getProperties().getProperty(\n        \"googledatastore.serviceAccountEmail\", null);\n\n    // Below are properties related to benchmarking.\n\n    String 
readConsistencyConfig = getProperties().getProperty(\n        \"googledatastore.readConsistency\", null);\n    if (readConsistencyConfig != null) {\n      try {\n        this.readConsistency = ReadConsistency.valueOf(\n            readConsistencyConfig.trim().toUpperCase());\n      } catch (IllegalArgumentException e) {\n        throw new DBException(\"Invalid read consistency specified: \" +\n            readConsistencyConfig + \". Expecting STRONG or EVENTUAL.\");\n      }\n    }\n\n    //\n    // Entity Grouping Mode (googledatastore.entitygroupingmode), see\n    // documentation in conf/googledatastore.properties.\n    //\n    String entityGroupingConfig = getProperties().getProperty(\n        \"googledatastore.entityGroupingMode\", null);\n    if (entityGroupingConfig != null) {\n      try {\n        this.entityGroupingMode = EntityGroupingMode.valueOf(\n            entityGroupingConfig.trim().toUpperCase());\n      } catch (IllegalArgumentException e) {\n        throw new DBException(\"Invalid entity grouping mode specified: \" +\n            entityGroupingConfig + \". 
Expecting ONE_ENTITY_PER_GROUP or \" +\n            \"MULTI_ENTITY_PER_GROUP.\");\n      }\n    }\n\n    this.rootEntityName = getProperties().getProperty(\n        \"googledatastore.rootEntityName\", \"YCSB_ROOT_ENTITY\");\n\n    try {\n      // Set up the connection to Google Cloud Datastore with the credentials\n      // obtained from the configuration.\n      DatastoreOptions.Builder options = new DatastoreOptions.Builder();\n      Credential credential = GoogleCredential.getApplicationDefault();\n      if (serviceAccountEmail != null && privateKeyFile != null) {\n        credential = DatastoreHelper.getServiceAccountCredential(\n            serviceAccountEmail, privateKeyFile);\n        logger.info(\"Using JWT Service Account credential.\");\n        logger.info(\"DatasetID: \" + datasetId + \", Service Account Email: \" +\n            serviceAccountEmail + \", Private Key File Path: \" + privateKeyFile);\n      } else {\n        logger.info(\"Using default gcloud credential.\");\n        logger.info(\"DatasetID: \" + datasetId\n            + \", Service Account Email: \" + ((GoogleCredential) credential).getServiceAccountId());\n      }\n\n      datastore = DatastoreFactory.get().create(\n          options.credential(credential).projectId(datasetId).build());\n\n    } catch (GeneralSecurityException exception) {\n      throw new DBException(\"Security error connecting to the datastore: \" +\n          exception.getMessage(), exception);\n\n    } catch (IOException exception) {\n      throw new DBException(\"I/O error connecting to the datastore: \" +\n          exception.getMessage(), exception);\n    }\n\n    logger.info(\"Datastore client instance created: \" +\n        datastore.toString());\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n          Map<String, ByteIterator> result) {\n    LookupRequest.Builder lookupRequest = LookupRequest.newBuilder();\n    lookupRequest.addKeys(buildPrimaryKey(table, key));\n    
lookupRequest.getReadOptionsBuilder().setReadConsistency(\n        this.readConsistency);\n    // Note above, datastore lookupRequest always reads the entire entity, it\n    // does not support reading a subset of \"fields\" (properties) of an entity.\n\n    logger.debug(\"Built lookup request as: \" + lookupRequest.toString());\n\n    LookupResponse response = null;\n    try {\n      response = datastore.lookup(lookupRequest.build());\n\n    } catch (DatastoreException exception) {\n      logger.error(\n          String.format(\"Datastore Exception when reading (%s): %s %s\",\n              exception.getMessage(),\n              exception.getMethodName(),\n              exception.getCode()));\n\n      // DatastoreException.getCode() returns an HTTP response code which we\n      // will bubble up to the user as part of the YCSB Status \"name\".\n      return new Status(\"ERROR-\" + exception.getCode(), exception.getMessage());\n    }\n\n    if (response.getFoundCount() == 0) {\n      return new Status(\"ERROR-404\", \"Not Found, key is: \" + key);\n    } else if (response.getFoundCount() > 1) {\n      // We only asked to lookup for one key, shouldn't have got more than one\n      // entity back. Unexpected State.\n      return Status.UNEXPECTED_STATE;\n    }\n\n    Entity entity = response.getFound(0).getEntity();\n    logger.debug(\"Read entity: \" + entity.toString());\n\n    Map<String, Value> properties = entity.getProperties();\n    Set<String> propertiesToReturn =\n        (fields == null ? 
properties.keySet() : fields);\n\n    for (String name : propertiesToReturn) {\n      if (properties.containsKey(name)) {\n        result.put(name, new StringByteIterator(properties.get(name)\n            .getStringValue()));\n      }\n    }\n\n    return Status.OK;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    // TODO: Implement Scan as query on primary key.\n    return Status.NOT_IMPLEMENTED;\n  }\n\n  @Override\n  public Status update(String table, String key,\n                       Map<String, ByteIterator> values) {\n\n    return doSingleItemMutation(table, key, values, MutationType.UPDATE);\n  }\n\n  @Override\n  public Status insert(String table, String key,\n                       Map<String, ByteIterator> values) {\n    // Use Upsert to allow overwrite of existing key instead of failing the\n    // load (or run) just because the DB already has the key.\n    // This is the same behavior as what other DB does here (such as\n    // the DynamoDB client).\n    return doSingleItemMutation(table, key, values, MutationType.UPSERT);\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    return doSingleItemMutation(table, key, null, MutationType.DELETE);\n  }\n\n  private Key.Builder buildPrimaryKey(String table, String key) {\n    Key.Builder result = Key.newBuilder();\n\n    if (this.entityGroupingMode == EntityGroupingMode.MULTI_ENTITY_PER_GROUP) {\n      // All entities are in side the same group when we are in this mode.\n      result.addPath(Key.PathElement.newBuilder().setKind(table).\n          setName(rootEntityName));\n    }\n\n    return result.addPath(Key.PathElement.newBuilder().setKind(table)\n        .setName(key));\n  }\n\n  private Status doSingleItemMutation(String table, String key,\n      @Nullable Map<String, ByteIterator> values,\n      MutationType mutationType) {\n    // First build the key.\n    
Key.Builder datastoreKey = buildPrimaryKey(table, key);\n\n    // Build a commit request in non-transactional mode.\n    // Single item mutation to google datastore\n    // is always atomic and strongly consistent. Transaction is only necessary\n    // for multi-item mutation, or Read-modify-write operation.\n    CommitRequest.Builder commitRequest = CommitRequest.newBuilder();\n    commitRequest.setMode(Mode.NON_TRANSACTIONAL);\n\n    if (mutationType == MutationType.DELETE) {\n      commitRequest.addMutationsBuilder().setDelete(datastoreKey);\n\n    } else {\n      // If this is not for delete, build the entity.\n      Entity.Builder entityBuilder = Entity.newBuilder();\n      entityBuilder.setKey(datastoreKey);\n      for (Entry<String, ByteIterator> val : values.entrySet()) {\n        entityBuilder.getMutableProperties()\n            .put(val.getKey(),\n                Value.newBuilder()\n                .setStringValue(val.getValue().toString())\n                .setExcludeFromIndexes(skipIndex).build());\n      }\n      Entity entity = entityBuilder.build();\n      logger.debug(\"entity built as: \" + entity.toString());\n\n      if (mutationType == MutationType.UPSERT) {\n        commitRequest.addMutationsBuilder().setUpsert(entity);\n      } else if (mutationType == MutationType.UPDATE){\n        commitRequest.addMutationsBuilder().setUpdate(entity);\n      } else {\n        throw new RuntimeException(\"Impossible MutationType, code bug.\");\n      }\n    }\n\n    try {\n      datastore.commit(commitRequest.build());\n      logger.debug(\"successfully committed.\");\n\n    } catch (DatastoreException exception) {\n      // Catch all Datastore rpc errors.\n      // Log the exception, the name of the method called and the error code.\n      logger.error(\n          String.format(\"Datastore Exception when committing (%s): %s %s\",\n              exception.getMessage(),\n              exception.getMethodName(),\n              exception.getCode()));\n\n      // 
DatastoreException.getCode() returns an HTTP response code which we\n      // will bubble up to the user as part of the YCSB Status \"name\".\n      return new Status(\"ERROR-\" + exception.getCode(), exception.getMessage());\n    }\n\n    return Status.OK;\n  }\n}\n"
  },
  {
    "path": "googledatastore/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/**\n * Copyright (c) 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * YCSB binding for\n<a href=\"https://cloud.google.com/datastore/\">Google Cloud Datastore</a>.\n */\npackage com.yahoo.ycsb.db;\n"
  },
  {
    "path": "googledatastore/src/main/resources/log4j.properties",
    "content": "# Copyright (c) 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n#define the console appender\nlog4j.appender.consoleAppender = org.apache.log4j.ConsoleAppender\n\n# now define the layout for the appender\nlog4j.appender.consoleAppender.layout = org.apache.log4j.PatternLayout\nlog4j.appender.consoleAppender.layout.ConversionPattern=%-4r [%t] %-5p %c %x -%m%n\n\n# now map our console appender as a root logger, means all log messages will go\n# to this appender\nlog4j.rootLogger = INFO, consoleAppender\n"
  },
  {
    "path": "hbase098/README.md",
    "content": "<!--\nCopyright (c) 2015-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# HBase (0.98.x) Driver for YCSB\nThis driver is a binding for the YCSB facilities to operate against a HBase 0.98.x Server cluster.\nTo run against an HBase >= 1.0 cluster, use the `hbase10` binding.\n\n## Quickstart\n\n### 1. Start a HBase Server\nYou need to start a single node or a cluster to point the client at. Please see [Apache HBase Reference Guide](http://hbase.apache.org/book.html) for more details and instructions.\n\n### 2. Set up YCSB\nYou need to clone the repository and compile everything.\n\n```\ngit clone git://github.com/brianfrankcooper/YCSB.git\ncd YCSB\nmvn clean package\n```\n\n### 3. Create a HBase table for testing\n\nFor best results, use the pre-splitting strategy recommended in [HBASE-4163](https://issues.apache.org/jira/browse/HBASE-4163):\n\n```\nhbase(main):001:0> n_splits = 200 # HBase recommends (10 * number of regionservers)\nhbase(main):002:0> create 'usertable', 'family', {SPLITS => (1..n_splits).map {|i| \"user#{1000+i*(9999-1000)/n_splits}\"}}\n```\n\n*Failing to do so will cause all writes to initially target a single region server*.\n\n### 4. 
Run the Workload\nBefore you can actually run the workload, you need to \"load\" the data first.\n\nYou should specify an HBase config directory (or any other directory containing your hbase-site.xml), a table name, and a column family (`-cp` is used to set the Java classpath and `-p` is used to set various properties).\n\n```\nbin/ycsb load hbase -P workloads/workloada -cp /HBASE-HOME-DIR/conf -p table=usertable -p columnfamily=family\n```\n\nThen, you can run the workload:\n\n```\nbin/ycsb run hbase -P workloads/workloada -cp /HBASE-HOME-DIR/conf -p table=usertable -p columnfamily=family\n```\n\nPlease see the general instructions in the `doc` folder if you are not sure how it all works. You can apply additional properties (as seen in the next section) like this:\n\n```\nbin/ycsb run hbase -P workloads/workloada -cp /HBASE-HOME-DIR/conf -p table=usertable -p columnfamily=family -p clientbuffering=true\n```\n\n## Configuration Options\nThe following options can be configured using `-p`.\n\n* `columnfamily`: The HBase column family to target.\n* `debug`: If true, debugging logs are activated. The default is false.\n* `hbase.usepagefilter`: If true, HBase\n  [PageFilter](https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/PageFilter.html)s\n  are used to limit the number of records consumed in a scan operation. The default is true.\n* `principal`: If testing needs to be done against a secure HBase cluster using a Kerberos keytab,\n  this property can be used to pass the principal in the keytab file.\n* `keytab`: The Kerberos keytab file name and location can be passed through this property.\n* `writebuffersize`: The maximum amount, in bytes, of data to buffer on the client side before a flush is forced. The default is 12MB.\n\nAdditional HBase settings should be provided in the `hbase-site.xml` file located in your `/HBASE-HOME-DIR/conf` directory. Typically this will be `/etc/hbase/conf`.\n"
  },
  {
    "path": "hbase098/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2012 - 2017 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent/</relativePath>\n  </parent>\n\n  <artifactId>hbase098-binding</artifactId>\n  <name>HBase 0.98.x DB Binding</name>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.apache.hbase</groupId>\n      <artifactId>hbase-client</artifactId>\n      <version>${hbase098.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>jdk.tools</groupId>\n          <artifactId>jdk.tools</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "hbase098/src/main/java/com/yahoo/ycsb/db/HBaseClient.java",
    "content": "/**\n * Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.measurements.Measurements;\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.hbase.HBaseConfiguration;\nimport org.apache.hadoop.hbase.KeyValue;\nimport org.apache.hadoop.hbase.client.*;\nimport org.apache.hadoop.hbase.filter.PageFilter;\nimport org.apache.hadoop.hbase.util.Bytes;\nimport org.apache.hadoop.security.UserGroupInformation;\n\nimport java.io.IOException;\nimport java.util.*;\nimport java.util.concurrent.atomic.AtomicInteger;\n\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY;\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY_DEFAULT;\n\n/**\n * HBase client for YCSB framework.\n */\npublic class HBaseClient extends com.yahoo.ycsb.DB {\n  private static final Configuration CONFIG = HBaseConfiguration.create();\n  private static final AtomicInteger THREAD_COUNT = new AtomicInteger(0);\n\n  private boolean debug = false;\n\n  private String tableName = \"\";\n  private static HConnection hConn = null;\n  private HTableInterface hTable = null;\n  private String columnFamily = 
\"\";\n  private byte[] columnFamilyBytes;\n  private boolean clientSideBuffering = false;\n  private long writeBufferSize = 1024 * 1024 * 12;\n  /**\n   * Whether or not a page filter should be used to limit scan length.\n   */\n  private boolean usePageFilter = true;\n\n  private static final Object TABLE_LOCK = new Object();\n\n  /**\n   * Initialize any state for this DB.\n   * Called once per DB instance; there is one DB instance per client thread.\n   */\n  public void init() throws DBException {\n    if ((getProperties().getProperty(\"debug\") != null) &&\n        (getProperties().getProperty(\"debug\").compareTo(\"true\") == 0)) {\n      debug = true;\n    }\n\n    if (getProperties().containsKey(\"clientbuffering\")) {\n      clientSideBuffering = Boolean.parseBoolean(getProperties().getProperty(\"clientbuffering\"));\n    }\n    if (getProperties().containsKey(\"writebuffersize\")) {\n      writeBufferSize = Long.parseLong(getProperties().getProperty(\"writebuffersize\"));\n    }\n    if (\"false\".equals(getProperties().getProperty(\"hbase.usepagefilter\", \"true\"))) {\n      usePageFilter = false;\n    }\n    if (\"kerberos\".equalsIgnoreCase(CONFIG.get(\"hbase.security.authentication\"))) {\n      CONFIG.set(\"hadoop.security.authentication\", \"Kerberos\");\n      UserGroupInformation.setConfiguration(CONFIG);\n    }\n    if ((getProperties().getProperty(\"principal\") != null) && (getProperties().getProperty(\"keytab\") != null)) {\n      try {\n        UserGroupInformation.loginUserFromKeytab(getProperties().getProperty(\"principal\"),\n            getProperties().getProperty(\"keytab\"));\n      } catch (IOException e) {\n        System.err.println(\"Keytab file is not readable or not found\");\n        throw new DBException(e);\n      }\n    }\n    try {\n      THREAD_COUNT.getAndIncrement();\n      synchronized (THREAD_COUNT) {\n        if (hConn == null) {\n          hConn = HConnectionManager.createConnection(CONFIG);\n        }\n      }\n    
} catch (IOException e) {\n      System.err.println(\"Connection to HBase was not successful\");\n      throw new DBException(e);\n    }\n    columnFamily = getProperties().getProperty(\"columnfamily\");\n    if (columnFamily == null) {\n      System.err.println(\"Error, must specify a columnfamily for HBase tableName\");\n      throw new DBException(\"No columnfamily specified\");\n    }\n    columnFamilyBytes = Bytes.toBytes(columnFamily);\n\n    // Terminate right now if tableName does not exist, since the client\n    // will not propagate this error upstream once the workload\n    // starts.\n    String table = getProperties().getProperty(TABLENAME_PROPERTY, TABLENAME_PROPERTY_DEFAULT);\n    try {\n      HTableInterface ht = hConn.getTable(table);\n      ht.getTableDescriptor();\n    } catch (IOException e) {\n      throw new DBException(e);\n    }\n  }\n\n  /**\n   * Cleanup any state for this DB.\n   * Called once per DB instance; there is one DB instance per client thread.\n   */\n  public void cleanup() throws DBException {\n    // Get the measurements instance as this is the only client that should\n    // count clean up time like an update since autoflush is off.\n    Measurements measurements = Measurements.getMeasurements();\n    try {\n      long st = System.nanoTime();\n      if (hTable != null) {\n        hTable.flushCommits();\n      }\n      synchronized (THREAD_COUNT) {\n        int threadCount = THREAD_COUNT.decrementAndGet();\n        if (threadCount <= 0 && hConn != null) {\n          hConn.close();\n        }\n      }\n      long en = System.nanoTime();\n      measurements.measure(\"UPDATE\", (int) ((en - st) / 1000));\n    } catch (IOException e) {\n      throw new DBException(e);\n    }\n  }\n\n  private void getHTable(String table) throws IOException {\n    synchronized (TABLE_LOCK) {\n      hTable = hConn.getTable(table);\n      //2 suggestions from http://ryantwopointoh.blogspot.com/2009/01/performance-of-hbase-importing.html\n      
hTable.setAutoFlush(!clientSideBuffering, true);\n      hTable.setWriteBufferSize(writeBufferSize);\n      //return hTable;\n    }\n\n  }\n\n  /**\n   * Read a record from the database. Each field/value pair from the result will be stored in a HashMap.\n   *\n   * @param table  The name of the tableName\n   * @param key    The record key of the record to read.\n   * @param fields The list of fields to read, or null for all of them\n   * @param result A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    //if this is a \"new\" tableName, init HTable object.  Else, use existing one\n    if (!this.tableName.equals(table)) {\n      hTable = null;\n      try {\n        getHTable(table);\n        this.tableName = table;\n      } catch (IOException e) {\n        System.err.println(\"Error accessing HBase tableName: \" + e);\n        return Status.ERROR;\n      }\n    }\n\n    Result r;\n    try {\n      if (debug) {\n        System.out.println(\"Doing read from HBase columnfamily \" + columnFamily);\n        System.out.println(\"Doing read for key: \" + key);\n      }\n      Get g = new Get(Bytes.toBytes(key));\n      if (fields == null) {\n        g.addFamily(columnFamilyBytes);\n      } else {\n        for (String field : fields) {\n          g.addColumn(columnFamilyBytes, Bytes.toBytes(field));\n        }\n      }\n      r = hTable.get(g);\n    } catch (IOException e) {\n      System.err.println(\"Error doing get: \" + e);\n      return Status.ERROR;\n    } catch (ConcurrentModificationException e) {\n      //do nothing for now...need to understand HBase concurrency model better\n      return Status.ERROR;\n    }\n\n    for (KeyValue kv : r.raw()) {\n      result.put(\n          Bytes.toString(kv.getQualifier()),\n          new ByteArrayByteIterator(kv.getValue()));\n      if (debug) {\n        
System.out.println(\"Result for field: \" + Bytes.toString(kv.getQualifier()) +\n            \" is: \" + Bytes.toString(kv.getValue()));\n      }\n\n    }\n    return Status.OK;\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. Each field/value pair from the result will be stored\n   * in a HashMap.\n   *\n   * @param table       The name of the tableName\n   * @param startkey    The record key of the first record to read.\n   * @param recordcount The number of records to read\n   * @param fields      The list of fields to read, or null for all of them\n   * @param result      A Vector of HashMaps, where each HashMap is a set field/value pairs for one record\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\n                     Vector<HashMap<String, ByteIterator>> result) {\n    //if this is a \"new\" tableName, init HTable object.  Else, use existing one\n    if (!this.tableName.equals(table)) {\n      hTable = null;\n      try {\n        getHTable(table);\n        this.tableName = table;\n      } catch (IOException e) {\n        System.err.println(\"Error accessing HBase tableName: \" + e);\n        return Status.ERROR;\n      }\n    }\n\n    Scan s = new Scan(Bytes.toBytes(startkey));\n    //HBase has no record limit.  
Here, assume recordcount is small enough to bring back in one call.\n    //We get back recordcount records\n    s.setCaching(recordcount);\n    if (this.usePageFilter) {\n      s.setFilter(new PageFilter(recordcount));\n    }\n\n    //add specified fields or else all fields\n    if (fields == null) {\n      s.addFamily(columnFamilyBytes);\n    } else {\n      for (String field : fields) {\n        s.addColumn(columnFamilyBytes, Bytes.toBytes(field));\n      }\n    }\n\n    //get results\n    try (ResultScanner scanner = hTable.getScanner(s)) {\n      int numResults = 0;\n      for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {\n        //get row key\n        String key = Bytes.toString(rr.getRow());\n        if (debug) {\n          System.out.println(\"Got scan result for key: \" + key);\n        }\n\n        HashMap<String, ByteIterator> rowResult = new HashMap<>();\n\n        for (KeyValue kv : rr.raw()) {\n          rowResult.put(\n              Bytes.toString(kv.getQualifier()),\n              new ByteArrayByteIterator(kv.getValue()));\n        }\n        //add rowResult to result vector\n        result.add(rowResult);\n        numResults++;\n\n        // PageFilter does not guarantee that the number of results is <= pageSize, so this\n        // break is required.\n        //if hit recordcount, bail out\n        if (numResults >= recordcount) {\n          break;\n        }\n      } //done with row\n\n    } catch (IOException e) {\n      if (debug) {\n        System.out.println(\"Error in getting/parsing scan result: \" + e);\n      }\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  /**\n   * Update a record in the database. 
Any field/value pairs in the specified values HashMap will be written into the\n   * record with the specified record key, overwriting any existing values with the same field name.\n   *\n   * @param table  The name of the tableName\n   * @param key    The record key of the record to write\n   * @param values A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    //if this is a \"new\" tableName, init HTable object.  Else, use existing one\n    if (!this.tableName.equals(table)) {\n      hTable = null;\n      try {\n        getHTable(table);\n        this.tableName = table;\n      } catch (IOException e) {\n        System.err.println(\"Error accessing HBase tableName: \" + e);\n        return Status.ERROR;\n      }\n    }\n\n\n    if (debug) {\n      System.out.println(\"Setting up put for key: \" + key);\n    }\n    Put p = new Put(Bytes.toBytes(key));\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      byte[] value = entry.getValue().toArray();\n      if (debug) {\n        System.out.println(\"Adding field/value \" + entry.getKey() + \"/\" +\n            Bytes.toStringBinary(value) + \" to put request\");\n      }\n      p.add(columnFamilyBytes, Bytes.toBytes(entry.getKey()), value);\n    }\n\n    try {\n      hTable.put(p);\n    } catch (IOException e) {\n      if (debug) {\n        System.err.println(\"Error doing put: \" + e);\n      }\n      return Status.ERROR;\n    } catch (ConcurrentModificationException e) {\n      //do nothing for now...hope this is rare\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  /**\n   * Insert a record in the database. 
Any field/value pairs in the specified values HashMap will be written into the\n   * record with the specified record key.\n   *\n   * @param table  The name of the tableName\n   * @param key    The record key of the record to insert.\n   * @param values A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    return update(table, key, values);\n  }\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table The name of the tableName\n   * @param key   The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status delete(String table, String key) {\n    //if this is a \"new\" tableName, init HTable object.  Else, use existing one\n    if (!this.tableName.equals(table)) {\n      hTable = null;\n      try {\n        getHTable(table);\n        this.tableName = table;\n      } catch (IOException e) {\n        System.err.println(\"Error accessing HBase tableName: \" + e);\n        return Status.ERROR;\n      }\n    }\n\n    if (debug) {\n      System.out.println(\"Doing delete for key: \" + key);\n    }\n\n    Delete d = new Delete(Bytes.toBytes(key));\n    try {\n      hTable.delete(d);\n    } catch (IOException e) {\n      if (debug) {\n        System.err.println(\"Error doing delete: \" + e);\n      }\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  public static void main(String[] args) {\n    if (args.length != 3) {\n      System.out.println(\"Please specify a threadcount, columnfamily and operation count\");\n      System.exit(0);\n    }\n\n    final int keyspace = 10000; //120000000;\n\n    final int threadcount = Integer.parseInt(args[0]);\n\n    final String columnfamily = args[1];\n\n\n    final int opcount = Integer.parseInt(args[2]) / threadcount;\n\n    Vector<Thread> allthreads = new Vector<>();\n\n    
for (int i = 0; i < threadcount; i++) {\n      Thread t = new Thread() {\n        public void run() {\n          try {\n            Random random = new Random();\n\n            HBaseClient cli = new HBaseClient();\n\n            Properties props = new Properties();\n            props.setProperty(\"columnfamily\", columnfamily);\n            props.setProperty(\"debug\", \"true\");\n            cli.setProperties(props);\n\n            cli.init();\n\n            long accum = 0;\n\n            for (int i = 0; i < opcount; i++) {\n              int keynum = random.nextInt(keyspace);\n              String key = \"user\" + keynum;\n              long st = System.currentTimeMillis();\n              Status result;\n              Vector<HashMap<String, ByteIterator>> scanResults = new Vector<>();\n              Set<String> scanFields = new HashSet<String>();\n              result = cli.scan(\"table1\", \"user2\", 20, null, scanResults);\n\n              long en = System.currentTimeMillis();\n\n              accum += (en - st);\n\n              if (!result.equals(Status.OK)) {\n                System.out.println(\"Error \" + result + \" for \" + key);\n              }\n\n              if (i % 10 == 0) {\n                System.out.println(i + \" operations, average latency: \" + (((double) accum) / ((double) i)));\n              }\n            }\n          } catch (Exception e) {\n            e.printStackTrace();\n          }\n        }\n      };\n      allthreads.add(t);\n    }\n\n    long st = System.currentTimeMillis();\n    for (Thread t : allthreads) {\n      t.start();\n    }\n\n    for (Thread t : allthreads) {\n      try {\n        t.join();\n      } catch (InterruptedException ignored) {\n        System.err.println(\"interrupted\");\n        Thread.currentThread().interrupt();\n      }\n    }\n    long en = System.currentTimeMillis();\n\n    System.out.println(\"Throughput: \" + ((1000.0) * (((double) (opcount * threadcount)) / ((double) (en - st))))\n        + \" 
ops/sec\");\n  }\n}\n"
  },
  {
    "path": "hbase098/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2017, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n * use this file except in compliance with the License. You may obtain a copy of\n * the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n * License for the specific language governing permissions and limitations under\n * the License. See accompanying LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\n * \"https://issues.apache.org/jira/browse/HBASE/fixforversion/12333364/\">HBase\n * 0.98.X</a>.\n */\npackage com.yahoo.ycsb.db;\n"
  },
  {
    "path": "hbase10/README.md",
    "content": "<!--\nCopyright (c) 2015-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# HBase (1.0.x) Driver for YCSB\nThis driver is a binding for the YCSB facilities to operate against a HBase 1.0.x Server cluster or Google's hosted Bigtable.\nTo run against an HBase 0.98.x cluster, use the `hbase098` binding.\n\nSee `hbase098/README.md` for a quickstart to setup HBase for load testing and common configuration details.\n\n## Configuration Options\nIn addition to those options available for the `hbase098` binding, the following options are available for the `hbase10` binding:\n\n* `durability`: Whether or not writes should be appended to the WAL. Bypassing the WAL can improve throughput but data cannot be recovered in the event of a crash. The default is true.\n\n## Bigtable\n\nGoogle's Bigtable service provides an implementation of the HBase API for migrating existing applications. Users can perform load tests against Bigtable using this binding.\n\n### 1. Setup a Bigtable Cluster\n\nLogin to the Google Cloud Console and follow the [Creating Cluster](https://cloud.google.com/bigtable/docs/creating-cluster) steps. Make a note of your cluster name, zone and project ID.\n\n### 2. Launch the Bigtable Shell\n\nFrom the Cloud Console, launch a shell and follow the [Quickstart](https://cloud.google.com/bigtable/docs/quickstart) up to step 4 where you launch the HBase shell.\n\n### 3. 
Create a Table\n\nFor best results, use the pre-splitting strategy recommended in [HBASE-4163](https://issues.apache.org/jira/browse/HBASE-4163):\n\n```\nhbase(main):001:0> n_splits = 200 # HBase recommends (10 * number of regionservers)\nhbase(main):002:0> create 'usertable', 'cf', {SPLITS => (1..n_splits).map {|i| \"user#{1000+i*(9999-1000)/n_splits}\"}}\n```\n\nMake a note of the column family; in this example it's `cf`.\n\n### 4. Fetch the Proper ALPN Boot Jar\n\nThe Bigtable protocol uses HTTP/2, which requires an ALPN protocol negotiation implementation. On JVM instantiation the implementation must be loaded before attempting to connect to the cluster. If you're using Java 7 or 8, use this [Jetty Version Table](http://www.eclipse.org/jetty/documentation/current/alpn-chapter.html#alpn-versions) to determine the version appropriate for your JVM. (ALPN is included in JDK 9+.) Download the proper jar from [Maven](http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.mortbay.jetty.alpn%22%20AND%20a%3A%22alpn-boot%22) to somewhere on your system.\n\n### 5. Download the Bigtable Client Jar\n\nDownload one of the `bigtable-hbase-1.#` jars from [Maven](http://search.maven.org/#search%7Cga%7C1%7Ccom.google.cloud.bigtable) to your host.\n\n### 6. Download JSON Credentials\n\nFollow these instructions for [Generating a JSON key](https://cloud.google.com/bigtable/docs/installing-hbase-shell#service-account) and save it to your host.\n\n### 7. Create or Edit hbase-site.xml\n\nIf you have an existing HBase configuration directory with an `hbase-site.xml` file, edit the file as per below. If not, create a directory called `conf` under the `hbase10` directory. Create a file in the conf directory named `hbase-site.xml`. 
Provide the following settings in the XML file, making sure to replace the bracketed examples with the proper values from your Cloud console.\n\n```\n<configuration>\n  <property>\n    <name>hbase.client.connection.impl</name>\n    <value>com.google.cloud.bigtable.hbase1_0.BigtableConnection</value>\n  </property>\n  <property>\n    <name>google.bigtable.cluster.name</name>\n    <value>[YOUR-CLUSTER-ID]</value>\n  </property>\n  <property>\n    <name>google.bigtable.project.id</name>\n    <value>[YOUR-PROJECT-ID]</value>\n  </property>\n  <property>\n    <name>google.bigtable.zone.name</name>\n    <value>[YOUR-ZONE-NAME]</value>\n  </property>\n  <property>\n    <name>google.bigtable.auth.service.account.enable</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>google.bigtable.auth.json.keyfile</name>\n    <value>[PATH-TO-YOUR-KEY-FILE]</value>\n  </property>\n</configuration>\n```\n\nIf you wish to try other API implementations (1.1.x or 1.2.x), change the `hbase.client.connection.impl` value appropriately to match the JAR you downloaded.\n\nIf you have an existing HBase config directory, make sure to add it to the class path via `-cp <PATH_TO_BIGTABLE_JAR>:<CONF_DIR>`.\n\n### 8. Execute a Workload\n\nSwitch to the root of the YCSB repo, choose the workload you want to run, and `load` it first. With the CLI you must provide the column family, the cluster properties, and the ALPN jar to load.\n\n```\nbin/ycsb load hbase10 -p columnfamily=cf -cp <PATH_TO_BIGTABLE_JAR> -jvm-args='-Xbootclasspath/p:<PATH_TO_ALPN_JAR>' -P workloads/workloada\n```\n\nThe `load` step only executes inserts into the datastore. After loading data, run the same workload to mix reads with writes.\n\n```\nbin/ycsb run hbase10 -p columnfamily=cf -jvm-args='-Xbootclasspath/p:<PATH_TO_ALPN_JAR>' -P workloads/workloada\n```\n"
  },
  {
    "path": "hbase10/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent/</relativePath>\n  </parent>\n\n  <artifactId>hbase10-binding</artifactId>\n  <name>HBase 1.0 DB Binding</name>\n\n  <properties>\n    <!-- Tests do not run on jdk9 -->\n    <skipJDK9Tests>true</skipJDK9Tests>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>org.apache.hbase</groupId>\n      <artifactId>hbase-client</artifactId>\n      <version>${hbase10.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>jdk.tools</groupId>\n          <artifactId>jdk.tools</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      
<version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.hbase</groupId>\n      <artifactId>hbase-testing-util</artifactId>\n      <version>${hbase10.version}</version>\n      <scope>test</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>jdk.tools</groupId>\n          <artifactId>jdk.tools</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "hbase10/src/main/java/com/yahoo/ycsb/db/HBaseClient10.java",
    "content": "/**\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport com.google.common.base.Preconditions;\n\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.measurements.Measurements;\n\nimport org.apache.hadoop.security.UserGroupInformation;\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.hbase.Cell;\nimport org.apache.hadoop.hbase.CellUtil;\nimport org.apache.hadoop.hbase.HBaseConfiguration;\nimport org.apache.hadoop.hbase.TableName;\nimport org.apache.hadoop.hbase.client.BufferedMutator;\nimport org.apache.hadoop.hbase.client.BufferedMutatorParams;\nimport org.apache.hadoop.hbase.client.Connection;\nimport org.apache.hadoop.hbase.client.ConnectionFactory;\nimport org.apache.hadoop.hbase.client.Delete;\nimport org.apache.hadoop.hbase.client.Durability;\nimport org.apache.hadoop.hbase.client.Get;\nimport org.apache.hadoop.hbase.client.Put;\nimport org.apache.hadoop.hbase.client.Result;\nimport org.apache.hadoop.hbase.client.ResultScanner;\nimport org.apache.hadoop.hbase.client.Scan;\nimport org.apache.hadoop.hbase.client.Table;\nimport org.apache.hadoop.hbase.filter.PageFilter;\nimport org.apache.hadoop.hbase.util.Bytes;\n\nimport java.io.IOException;\nimport 
java.util.ConcurrentModificationException;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.concurrent.atomic.AtomicInteger;\n\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY;\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY_DEFAULT;\n\n/**\n * HBase 1.0 client for YCSB framework.\n *\n * A modified version of HBaseClient (which targets HBase v0.9) utilizing the\n * HBase 1.0.0 API.\n *\n * This client also adds toggleable client-side buffering and configurable write\n * durability.\n */\npublic class HBaseClient10 extends com.yahoo.ycsb.DB {\n  private static final AtomicInteger THREAD_COUNT = new AtomicInteger(0);\n  \n  private Configuration config = HBaseConfiguration.create();\n  \n  private boolean debug = false;\n\n  private String tableName = \"\";\n\n  /**\n   * A Cluster Connection instance that is shared by all running ycsb threads.\n   * Needs to be initialized late so we pick up command-line configs if any.\n   * To ensure one instance only in a multi-threaded context, guard access\n   * by synchronizing on {@link #THREAD_COUNT}.\n   */\n  private static Connection connection = null;\n\n  // Depending on the value of clientSideBuffering, either bufferedMutator\n  // (clientSideBuffering) or currentTable (!clientSideBuffering) will be used.\n  private Table currentTable = null;\n  private BufferedMutator bufferedMutator = null;\n\n  private String columnFamily = \"\";\n  private byte[] columnFamilyBytes;\n\n  /**\n   * Durability to use for puts and deletes.\n   */\n  private Durability durability = Durability.USE_DEFAULT;\n\n  /** Whether or not a page filter should be used to limit scan length. */\n  private boolean usePageFilter = true;\n\n  /**\n   * If true, buffer mutations on the client. This is the default behavior for\n   * HBaseClient. 
For measuring insert/update/delete latencies, client side\n   * buffering should be disabled.\n   */\n  private boolean clientSideBuffering = false;\n  private long writeBufferSize = 1024 * 1024 * 12;\n\n  /**\n   * Initialize any state for this DB. Called once per DB instance; there is one\n   * DB instance per client thread.\n   */\n  @Override\n  public void init() throws DBException {\n    if (\"true\"\n        .equals(getProperties().getProperty(\"clientbuffering\", \"false\"))) {\n      this.clientSideBuffering = true;\n    }\n    if (getProperties().containsKey(\"writebuffersize\")) {\n      writeBufferSize =\n          Long.parseLong(getProperties().getProperty(\"writebuffersize\"));\n    }\n\n    if (getProperties().getProperty(\"durability\") != null) {\n      this.durability =\n          Durability.valueOf(getProperties().getProperty(\"durability\"));\n    }\n\n    if (\"kerberos\".equalsIgnoreCase(config.get(\"hbase.security.authentication\"))) {\n      config.set(\"hadoop.security.authentication\", \"Kerberos\");\n      UserGroupInformation.setConfiguration(config);\n    }\n\n    if ((getProperties().getProperty(\"principal\")!=null)\n        && (getProperties().getProperty(\"keytab\")!=null)) {\n      try {\n        UserGroupInformation.loginUserFromKeytab(getProperties().getProperty(\"principal\"),\n              getProperties().getProperty(\"keytab\"));\n      } catch (IOException e) {\n        System.err.println(\"Keytab file is not readable or not found\");\n        throw new DBException(e);\n      }\n    }\n\n    String table = getProperties().getProperty(TABLENAME_PROPERTY, TABLENAME_PROPERTY_DEFAULT);\n    try {\n      THREAD_COUNT.getAndIncrement();\n      synchronized (THREAD_COUNT) {\n        if (connection == null) {\n          // Initialize if not set up already.\n          connection = ConnectionFactory.createConnection(config);\n          \n          // Terminate right now if table does not exist, since the client\n          // will not 
propagate this error upstream once the workload\n          // starts.\n          final TableName tName = TableName.valueOf(table);\n          connection.getTable(tName).getTableDescriptor();\n        }\n      }\n    } catch (java.io.IOException e) {\n      throw new DBException(e);\n    }\n\n    if ((getProperties().getProperty(\"debug\") != null)\n        && (getProperties().getProperty(\"debug\").compareTo(\"true\") == 0)) {\n      debug = true;\n    }\n\n    if (\"false\"\n        .equals(getProperties().getProperty(\"hbase.usepagefilter\", \"true\"))) {\n      usePageFilter = false;\n    }\n\n    columnFamily = getProperties().getProperty(\"columnfamily\");\n    if (columnFamily == null) {\n      System.err.println(\"Error, must specify a columnfamily for HBase table\");\n      throw new DBException(\"No columnfamily specified\");\n    }\n    columnFamilyBytes = Bytes.toBytes(columnFamily);\n  }\n\n  /**\n   * Cleanup any state for this DB. Called once per DB instance; there is one DB\n   * instance per client thread.\n   */\n  @Override\n  public void cleanup() throws DBException {\n    // Get the measurements instance as this is the only client that should\n    // count clean up time like an update if client-side buffering is\n    // enabled.\n    Measurements measurements = Measurements.getMeasurements();\n    try {\n      long st = System.nanoTime();\n      if (bufferedMutator != null) {\n        bufferedMutator.close();\n      }\n      if (currentTable != null) {\n        currentTable.close();\n      }\n      long en = System.nanoTime();\n      final String type = clientSideBuffering ? 
\"UPDATE\" : \"CLEANUP\";\n      measurements.measure(type, (int) ((en - st) / 1000));\n      int threadCount = THREAD_COUNT.decrementAndGet();\n      if (threadCount <= 0) {\n        // Means we are done so ok to shut down the Connection.\n        synchronized (THREAD_COUNT) {\n          if (connection != null) {   \n            connection.close();   \n            connection = null;    \n          }   \n        }\n      }\n    } catch (IOException e) {\n      throw new DBException(e);\n    }\n  }\n\n  public void getHTable(String table) throws IOException {\n    final TableName tName = TableName.valueOf(table);\n    this.currentTable = connection.getTable(tName);\n    if (clientSideBuffering) {\n      final BufferedMutatorParams p = new BufferedMutatorParams(tName);\n      p.writeBufferSize(writeBufferSize);\n      this.bufferedMutator = connection.getBufferedMutator(p);\n    }\n  }\n\n  /**\n   * Read a record from the database. Each field/value pair from the result will\n   * be stored in a HashMap.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to read.\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error\n   */\n  public Status read(String table, String key, Set<String> fields,\n      Map<String, ByteIterator> result) {\n    // if this is a \"new\" table, init HTable object. 
Else, use existing one\n    if (!tableName.equals(table)) {\n      currentTable = null;\n      try {\n        getHTable(table);\n        tableName = table;\n      } catch (IOException e) {\n        System.err.println(\"Error accessing HBase table: \" + e);\n        return Status.ERROR;\n      }\n    }\n\n    Result r = null;\n    try {\n      if (debug) {\n        System.out\n            .println(\"Doing read from HBase columnfamily \" + columnFamily);\n        System.out.println(\"Doing read for key: \" + key);\n      }\n      Get g = new Get(Bytes.toBytes(key));\n      if (fields == null) {\n        g.addFamily(columnFamilyBytes);\n      } else {\n        for (String field : fields) {\n          g.addColumn(columnFamilyBytes, Bytes.toBytes(field));\n        }\n      }\n      r = currentTable.get(g);\n    } catch (IOException e) {\n      if (debug) {\n        System.err.println(\"Error doing get: \" + e);\n      }\n      return Status.ERROR;\n    } catch (ConcurrentModificationException e) {\n      // do nothing for now...need to understand HBase concurrency model better\n      return Status.ERROR;\n    }\n\n    if (r.isEmpty()) {\n      return Status.NOT_FOUND;\n    }\n\n    while (r.advance()) {\n      final Cell c = r.current();\n      result.put(Bytes.toString(CellUtil.cloneQualifier(c)),\n          new ByteArrayByteIterator(CellUtil.cloneValue(c)));\n      if (debug) {\n        System.out.println(\n            \"Result for field: \" + Bytes.toString(CellUtil.cloneQualifier(c))\n                + \" is: \" + Bytes.toString(CellUtil.cloneValue(c)));\n      }\n    }\n    return Status.OK;\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. 
Each field/value\n   * pair from the result will be stored in a HashMap.\n   *\n   * @param table\n   *          The name of the table\n   * @param startkey\n   *          The record key of the first record to read.\n   * @param recordcount\n   *          The number of records to read\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A Vector of HashMaps, where each HashMap is a set field/value\n   *          pairs for one record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    // if this is a \"new\" table, init HTable object. Else, use existing one\n    if (!tableName.equals(table)) {\n      currentTable = null;\n      try {\n        getHTable(table);\n        tableName = table;\n      } catch (IOException e) {\n        System.err.println(\"Error accessing HBase table: \" + e);\n        return Status.ERROR;\n      }\n    }\n\n    Scan s = new Scan(Bytes.toBytes(startkey));\n    // HBase has no record limit. 
Here, assume recordcount is small enough to\n    // bring back in one call.\n    // We get back recordcount records\n    s.setCaching(recordcount);\n    if (this.usePageFilter) {\n      s.setFilter(new PageFilter(recordcount));\n    }\n\n    // add specified fields or else all fields\n    if (fields == null) {\n      s.addFamily(columnFamilyBytes);\n    } else {\n      for (String field : fields) {\n        s.addColumn(columnFamilyBytes, Bytes.toBytes(field));\n      }\n    }\n\n    // get results\n    ResultScanner scanner = null;\n    try {\n      scanner = currentTable.getScanner(s);\n      int numResults = 0;\n      for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {\n        // get row key\n        String key = Bytes.toString(rr.getRow());\n\n        if (debug) {\n          System.out.println(\"Got scan result for key: \" + key);\n        }\n\n        HashMap<String, ByteIterator> rowResult =\n            new HashMap<String, ByteIterator>();\n\n        while (rr.advance()) {\n          final Cell cell = rr.current();\n          rowResult.put(Bytes.toString(CellUtil.cloneQualifier(cell)),\n              new ByteArrayByteIterator(CellUtil.cloneValue(cell)));\n        }\n\n        // add rowResult to result vector\n        result.add(rowResult);\n        numResults++;\n\n        // PageFilter does not guarantee that the number of results is <=\n        // pageSize, so this\n        // break is required.\n        if (numResults >= recordcount) {// if hit recordcount, bail out\n          break;\n        }\n      } // done with row\n    } catch (IOException e) {\n      if (debug) {\n        System.out.println(\"Error in getting/parsing scan result: \" + e);\n      }\n      return Status.ERROR;\n    } finally {\n      if (scanner != null) {\n        scanner.close();\n      }\n    }\n\n    return Status.OK;\n  }\n\n  /**\n   * Update a record in the database. 
Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key, overwriting any existing values with the same field name.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to write\n   * @param values\n   *          A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status update(String table, String key,\n      Map<String, ByteIterator> values) {\n    // if this is a \"new\" table, init HTable object. Else, use existing one\n    if (!tableName.equals(table)) {\n      currentTable = null;\n      try {\n        getHTable(table);\n        tableName = table;\n      } catch (IOException e) {\n        System.err.println(\"Error accessing HBase table: \" + e);\n        return Status.ERROR;\n      }\n    }\n\n    if (debug) {\n      System.out.println(\"Setting up put for key: \" + key);\n    }\n    Put p = new Put(Bytes.toBytes(key));\n    p.setDurability(durability);\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      byte[] value = entry.getValue().toArray();\n      if (debug) {\n        System.out.println(\"Adding field/value \" + entry.getKey() + \"/\"\n            + Bytes.toStringBinary(value) + \" to put request\");\n      }\n      p.addColumn(columnFamilyBytes, Bytes.toBytes(entry.getKey()), value);\n    }\n\n    try {\n      if (clientSideBuffering) {\n        Preconditions.checkNotNull(bufferedMutator);\n        bufferedMutator.mutate(p);\n      } else {\n        currentTable.put(p);\n      }\n    } catch (IOException e) {\n      if (debug) {\n        System.err.println(\"Error doing put: \" + e);\n      }\n      return Status.ERROR;\n    } catch (ConcurrentModificationException e) {\n      // do nothing for now...hope this is rare\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  /**\n  
 * Insert a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to insert.\n   * @param values\n   *          A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status insert(String table, String key,\n                       Map<String, ByteIterator> values) {\n    return update(table, key, values);\n  }\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status delete(String table, String key) {\n    // if this is a \"new\" table, init HTable object. Else, use existing one\n    if (!tableName.equals(table)) {\n      currentTable = null;\n      try {\n        getHTable(table);\n        tableName = table;\n      } catch (IOException e) {\n        System.err.println(\"Error accessing HBase table: \" + e);\n        return Status.ERROR;\n      }\n    }\n\n    if (debug) {\n      System.out.println(\"Doing delete for key: \" + key);\n    }\n\n    final Delete d = new Delete(Bytes.toBytes(key));\n    d.setDurability(durability);\n    try {\n      if (clientSideBuffering) {\n        Preconditions.checkNotNull(bufferedMutator);\n        bufferedMutator.mutate(d);\n      } else {\n        currentTable.delete(d);\n      }\n    } catch (IOException e) {\n      if (debug) {\n        System.err.println(\"Error doing delete: \" + e);\n      }\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  @VisibleForTesting\n  void setConfiguration(final Configuration newConfig) {\n    this.config = newConfig;\n  }\n}\n\n/*\n * For 
customized vim control set autoindent set si set shiftwidth=4\n */\n"
  },
  {
    "path": "hbase10/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"https://hbase.apache.org/\">HBase</a> \n * using the HBase 1.0.0 API.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "hbase10/src/test/java/com/yahoo/ycsb/db/HBaseClient10Test.java",
    "content": "/**\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY;\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY_DEFAULT;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\nimport static org.junit.Assume.assumeTrue;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport com.yahoo.ycsb.measurements.Measurements;\nimport com.yahoo.ycsb.workloads.CoreWorkload;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.hbase.HBaseTestingUtility;\nimport org.apache.hadoop.hbase.TableName;\nimport org.apache.hadoop.hbase.client.Get;\nimport org.apache.hadoop.hbase.client.Put;\nimport org.apache.hadoop.hbase.client.Result;\nimport org.apache.hadoop.hbase.client.Table;\nimport org.apache.hadoop.hbase.util.Bytes;\nimport org.junit.After;\nimport org.junit.AfterClass;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Ignore;\nimport org.junit.Test;\n\nimport java.nio.ByteBuffer;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.List;\nimport java.util.Properties;\nimport java.util.Vector;\n\n/**\n * Integration tests for the 
YCSB HBase client 1.0, using an HBase minicluster.\n */\npublic class HBaseClient10Test {\n\n  private final static String COLUMN_FAMILY = \"cf\";\n\n  private static HBaseTestingUtility testingUtil;\n  private HBaseClient10 client;\n  private Table table = null;\n  private String tableName;\n\n  private static boolean isWindows() {\n    final String os = System.getProperty(\"os.name\");\n    return os.startsWith(\"Windows\");\n  }\n\n  /**\n   * Creates a mini-cluster for use in these tests.\n   *\n   * This is a heavy-weight operation, so invoked only once for the test class.\n   */\n  @BeforeClass\n  public static void setUpClass() throws Exception {\n    // Minicluster setup fails on Windows with an UnsatisfiedLinkError.\n    // Skip if windows.\n    assumeTrue(!isWindows());\n    testingUtil = HBaseTestingUtility.createLocalHTU();\n    testingUtil.startMiniCluster();\n  }\n\n  /**\n   * Tears down mini-cluster.\n   */\n  @AfterClass\n  public static void tearDownClass() throws Exception {\n    if (testingUtil != null) {\n      testingUtil.shutdownMiniCluster();\n    }\n  }\n\n  /**\n   * Sets up the mini-cluster for testing.\n   *\n   * We re-create the table for each test.\n   */\n  @Before\n  public void setUp() throws Exception {\n    client = new HBaseClient10();\n    client.setConfiguration(new Configuration(testingUtil.getConfiguration()));\n\n    Properties p = new Properties();\n    p.setProperty(\"columnfamily\", COLUMN_FAMILY);\n\n    Measurements.setProperties(p);\n    final CoreWorkload workload = new CoreWorkload();\n    workload.init(p);\n\n    tableName = p.getProperty(TABLENAME_PROPERTY, TABLENAME_PROPERTY_DEFAULT);\n    table = testingUtil.createTable(TableName.valueOf(tableName), Bytes.toBytes(COLUMN_FAMILY));\n\n    client.setProperties(p);\n    client.init();\n  }\n\n  @After\n  public void tearDown() throws Exception {\n    table.close();\n    testingUtil.deleteTable(tableName);\n  }\n\n  @Test\n  public void testRead() throws Exception 
{\n    final String rowKey = \"row1\";\n    final Put p = new Put(Bytes.toBytes(rowKey));\n    p.addColumn(Bytes.toBytes(COLUMN_FAMILY),\n        Bytes.toBytes(\"column1\"), Bytes.toBytes(\"value1\"));\n    p.addColumn(Bytes.toBytes(COLUMN_FAMILY),\n        Bytes.toBytes(\"column2\"), Bytes.toBytes(\"value2\"));\n    table.put(p);\n\n    final HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\n    final Status status = client.read(tableName, rowKey, null, result);\n    assertEquals(Status.OK, status);\n    assertEquals(2, result.size());\n    assertEquals(\"value1\", result.get(\"column1\").toString());\n    assertEquals(\"value2\", result.get(\"column2\").toString());\n  }\n\n  @Test\n  public void testReadMissingRow() throws Exception {\n    final HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\n    final Status status = client.read(tableName, \"Missing row\", null, result);\n    assertEquals(Status.NOT_FOUND, status);\n    assertEquals(0, result.size());\n  }\n\n  @Test\n  public void testScan() throws Exception {\n    // Fill with data\n    final String colStr = \"row_number\";\n    final byte[] col = Bytes.toBytes(colStr);\n    final int n = 10;\n    final List<Put> puts = new ArrayList<Put>(n);\n    for(int i = 0; i < n; i++) {\n      final byte[] key = Bytes.toBytes(String.format(\"%05d\", i));\n      final byte[] value = java.nio.ByteBuffer.allocate(4).putInt(i).array();\n      final Put p = new Put(key);\n      p.addColumn(Bytes.toBytes(COLUMN_FAMILY), col, value);\n      puts.add(p);\n    }\n    table.put(puts);\n\n    // Test\n    final Vector<HashMap<String, ByteIterator>> result =\n        new Vector<HashMap<String, ByteIterator>>();\n\n    // Scan 5 records, skipping the first\n    client.scan(tableName, \"00001\", 5, null, result);\n\n    assertEquals(5, result.size());\n    for(int i = 0; i < 5; i++) {\n      final Map<String, ByteIterator> row = result.get(i);\n      assertEquals(1, 
row.size());\n      assertTrue(row.containsKey(colStr));\n      final byte[] bytes = row.get(colStr).toArray();\n      final ByteBuffer buf = ByteBuffer.wrap(bytes);\n      final int rowNum = buf.getInt();\n      assertEquals(i + 1, rowNum);\n    }\n  }\n\n  @Test\n  public void testUpdate() throws Exception{\n    final String key = \"key\";\n    final Map<String, String> input = new HashMap<String, String>();\n    input.put(\"column1\", \"value1\");\n    input.put(\"column2\", \"value2\");\n    final Status status = client.insert(tableName, key, StringByteIterator.getByteIteratorMap(input));\n    assertEquals(Status.OK, status);\n\n    // Verify result\n    final Get get = new Get(Bytes.toBytes(key));\n    final Result result = this.table.get(get);\n    assertFalse(result.isEmpty());\n    assertEquals(2, result.size());\n    for(final java.util.Map.Entry<String, String> entry : input.entrySet()) {\n      assertEquals(entry.getValue(),\n          new String(result.getValue(Bytes.toBytes(COLUMN_FAMILY),\n            Bytes.toBytes(entry.getKey()))));\n    }\n  }\n\n  @Test\n  @Ignore(\"Not yet implemented\")\n  public void testDelete() {\n    fail(\"Not yet implemented\");\n  }\n}\n\n"
  },
  {
    "path": "hbase10/src/test/resources/hbase-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<configuration>\n  <property>\n    <name>hbase.master.info.port</name>\n    <value>-1</value>\n    <description>The port for the hbase master web UI\n    Set to -1 if you do not want the info server to run.\n    </description>\n  </property>\n  <property>\n    <name>hbase.regionserver.info.port</name>\n    <value>-1</value>\n    <description>The port for the hbase regionserver web UI\n    Set to -1 if you do not want the info server to run.\n    </description>\n  </property>\n</configuration>\n"
  },
  {
    "path": "hbase10/src/test/resources/log4j.properties",
    "content": "#\n# Copyright (c) 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n\n# Root logger option\nlog4j.rootLogger=WARN, stderr\n\nlog4j.appender.stderr=org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.target=System.err\nlog4j.appender.stderr.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.conversionPattern=%d{yyyy/MM/dd HH:mm:ss} %-5p %c %x - %m%n\n\n# Suppress messages from ZKTableStateManager: Creates a large number of table\n# state change messages.\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZKTableStateManager=ERROR\n"
  },
  {
    "path": "hbase12/README.md",
    "content": "<!--\nCopyright (c) 2015-2017 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# HBase (1.2+) Driver for YCSB\nThis driver is a binding for the YCSB facilities to operate against a HBase 1.2+ Server cluster, using a shaded client that tries to avoid leaking third party libraries.\n\nSee `hbase098/README.md` for a quickstart to setup HBase for load testing and common configuration details.\n\n## Configuration Options\nIn addition to those options available for the `hbase098` binding, the following options are available for the `hbase12` binding:\n\n* `durability`: Whether or not writes should be appended to the WAL. Bypassing the WAL can improve throughput but data cannot be recovered in the event of a crash. The default is true.\n\n"
  },
  {
    "path": "hbase12/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent/</relativePath>\n  </parent>\n\n  <artifactId>hbase12-binding</artifactId>\n  <name>HBase 1.2 DB Binding</name>\n\n  <properties>\n    <!-- Tests do not run on jdk9 -->\n    <skipJDK9Tests>true</skipJDK9Tests>\n    <!-- Tests can't run without a shaded hbase testing util.\n         See HBASE-15666, which blocks us.\n         For now, we rely on the HBase 1.0 binding and manual testing.\n      -->\n    <maven.test.skip>true</maven.test.skip>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>hbase10-binding</artifactId>\n      <version>${project.version}</version>\n      <!-- Should match all compile scoped dependencies -->\n      <exclusions>\n        <exclusion>\n          <groupId>org.apache.hbase</groupId>\n          <artifactId>hbase-client</artifactId>\n        </exclusion>\n      
</exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.hbase</groupId>\n      <artifactId>hbase-shaded-client</artifactId>\n      <version>${hbase12.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n<!-- blocked on HBASE-15666\n    <dependency>\n      <groupId>org.apache.hbase</groupId>\n      <artifactId>hbase-testing-util</artifactId>\n      <version>${hbase12.version}</version>\n      <scope>test</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>jdk.tools</groupId>\n          <artifactId>jdk.tools</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n-->\n  </dependencies>\n</project>\n"
  },
  {
    "path": "hbase12/src/main/java/com/yahoo/ycsb/db/hbase12/HBaseClient12.java",
    "content": "/**\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.hbase12;\n\n/**\n * HBase 1.2 client for YCSB framework.\n *\n * A modified version of HBaseClient (which targets HBase v1.2) utilizing the\n * shaded client.\n *\n * It should run equivalent to following the hbase098 binding README.\n *\n */\npublic class HBaseClient12 extends com.yahoo.ycsb.db.HBaseClient10 {\n}\n"
  },
  {
    "path": "hbase12/src/main/java/com/yahoo/ycsb/db/hbase12/package-info.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"https://hbase.apache.org/\">HBase</a> \n * using the HBase 1.2+ shaded API.\n */\npackage com.yahoo.ycsb.db.hbase12;\n\n"
  },
  {
    "path": "hbase12/src/test/java/com/yahoo/ycsb/db/hbase12/HBaseClient12Test.java",
    "content": "/**\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.hbase12;\n\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY;\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY_DEFAULT;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\nimport static org.junit.Assume.assumeTrue;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport com.yahoo.ycsb.measurements.Measurements;\nimport com.yahoo.ycsb.workloads.CoreWorkload;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.hbase.HBaseTestingUtility;\nimport org.apache.hadoop.hbase.TableName;\nimport org.apache.hadoop.hbase.client.Get;\nimport org.apache.hadoop.hbase.client.Put;\nimport org.apache.hadoop.hbase.client.Result;\nimport org.apache.hadoop.hbase.client.Table;\nimport org.apache.hadoop.hbase.util.Bytes;\nimport org.junit.After;\nimport org.junit.AfterClass;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Ignore;\nimport org.junit.Test;\n\nimport java.nio.ByteBuffer;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Properties;\nimport java.util.Vector;\n\n/**\n * Integration tests for the YCSB HBase client 
1.2, using an HBase minicluster.\n */\npublic class HBaseClient12Test {\n\n  private final static String COLUMN_FAMILY = \"cf\";\n\n  private static HBaseTestingUtility testingUtil;\n  private HBaseClient12 client;\n  private Table table = null;\n  private String tableName;\n\n  private static boolean isWindows() {\n    final String os = System.getProperty(\"os.name\");\n    return os.startsWith(\"Windows\");\n  }\n\n  /**\n   * Creates a mini-cluster for use in these tests.\n   *\n   * This is a heavy-weight operation, so invoked only once for the test class.\n   */\n  @BeforeClass\n  public static void setUpClass() throws Exception {\n    // Minicluster setup fails on Windows with an UnsatisfiedLinkError.\n    // Skip if windows.\n    assumeTrue(!isWindows());\n    testingUtil = HBaseTestingUtility.createLocalHTU();\n    testingUtil.startMiniCluster();\n  }\n\n  /**\n   * Tears down mini-cluster.\n   */\n  @AfterClass\n  public static void tearDownClass() throws Exception {\n    if (testingUtil != null) {\n      testingUtil.shutdownMiniCluster();\n    }\n  }\n\n  /**\n   * Sets up the mini-cluster for testing.\n   *\n   * We re-create the table for each test.\n   */\n  @Before\n  public void setUp() throws Exception {\n    client = new HBaseClient12();\n    client.setConfiguration(new Configuration(testingUtil.getConfiguration()));\n\n    Properties p = new Properties();\n    p.setProperty(\"columnfamily\", COLUMN_FAMILY);\n\n    Measurements.setProperties(p);\n    final CoreWorkload workload = new CoreWorkload();\n    workload.init(p);\n\n    tableName = p.getProperty(TABLENAME_PROPERTY, TABLENAME_PROPERTY_DEFAULT);\n    table = testingUtil.createTable(TableName.valueOf(tableName), Bytes.toBytes(COLUMN_FAMILY));\n\n    client.setProperties(p);\n    client.init();\n  }\n\n  @After\n  public void tearDown() throws Exception {\n    table.close();\n    testingUtil.deleteTable(tableName);\n  }\n\n  @Test\n  public void testRead() throws Exception {\n    final String 
rowKey = \"row1\";\n    final Put p = new Put(Bytes.toBytes(rowKey));\n    p.addColumn(Bytes.toBytes(COLUMN_FAMILY),\n        Bytes.toBytes(\"column1\"), Bytes.toBytes(\"value1\"));\n    p.addColumn(Bytes.toBytes(COLUMN_FAMILY),\n        Bytes.toBytes(\"column2\"), Bytes.toBytes(\"value2\"));\n    table.put(p);\n\n    final HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\n    final Status status = client.read(tableName, rowKey, null, result);\n    assertEquals(Status.OK, status);\n    assertEquals(2, result.size());\n    assertEquals(\"value1\", result.get(\"column1\").toString());\n    assertEquals(\"value2\", result.get(\"column2\").toString());\n  }\n\n  @Test\n  public void testReadMissingRow() throws Exception {\n    final HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\n    final Status status = client.read(tableName, \"Missing row\", null, result);\n    assertEquals(Status.NOT_FOUND, status);\n    assertEquals(0, result.size());\n  }\n\n  @Test\n  public void testScan() throws Exception {\n    // Fill with data\n    final String colStr = \"row_number\";\n    final byte[] col = Bytes.toBytes(colStr);\n    final int n = 10;\n    final List<Put> puts = new ArrayList<Put>(n);\n    for(int i = 0; i < n; i++) {\n      final byte[] key = Bytes.toBytes(String.format(\"%05d\", i));\n      final byte[] value = java.nio.ByteBuffer.allocate(4).putInt(i).array();\n      final Put p = new Put(key);\n      p.addColumn(Bytes.toBytes(COLUMN_FAMILY), col, value);\n      puts.add(p);\n    }\n    table.put(puts);\n\n    // Test\n    final Vector<HashMap<String, ByteIterator>> result =\n        new Vector<HashMap<String, ByteIterator>>();\n\n    // Scan 5 records, skipping the first\n    client.scan(tableName, \"00001\", 5, null, result);\n\n    assertEquals(5, result.size());\n    for(int i = 0; i < 5; i++) {\n      final HashMap<String, ByteIterator> row = result.get(i);\n      assertEquals(1, row.size());\n      
assertTrue(row.containsKey(colStr));\n      final byte[] bytes = row.get(colStr).toArray();\n      final ByteBuffer buf = ByteBuffer.wrap(bytes);\n      final int rowNum = buf.getInt();\n      assertEquals(i + 1, rowNum);\n    }\n  }\n\n  @Test\n  public void testUpdate() throws Exception{\n    final String key = \"key\";\n    final HashMap<String, String> input = new HashMap<String, String>();\n    input.put(\"column1\", \"value1\");\n    input.put(\"column2\", \"value2\");\n    final Status status = client.insert(tableName, key, StringByteIterator.getByteIteratorMap(input));\n    assertEquals(Status.OK, status);\n\n    // Verify result\n    final Get get = new Get(Bytes.toBytes(key));\n    final Result result = this.table.get(get);\n    assertFalse(result.isEmpty());\n    assertEquals(2, result.size());\n    for(final java.util.Map.Entry<String, String> entry : input.entrySet()) {\n      assertEquals(entry.getValue(),\n          new String(result.getValue(Bytes.toBytes(COLUMN_FAMILY),\n            Bytes.toBytes(entry.getKey()))));\n    }\n  }\n\n  @Test\n  @Ignore(\"Not yet implemented\")\n  public void testDelete() {\n    fail(\"Not yet implemented\");\n  }\n}\n\n"
  },
  {
    "path": "hbase12/src/test/resources/hbase-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<configuration>\n  <property>\n    <name>hbase.master.info.port</name>\n    <value>-1</value>\n    <description>The port for the hbase master web UI\n    Set to -1 if you do not want the info server to run.\n    </description>\n  </property>\n  <property>\n    <name>hbase.regionserver.info.port</name>\n    <value>-1</value>\n    <description>The port for the hbase regionserver web UI\n    Set to -1 if you do not want the info server to run.\n    </description>\n  </property>\n</configuration>\n"
  },
  {
    "path": "hbase12/src/test/resources/log4j.properties",
    "content": "#\n# Copyright (c) 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n\n# Root logger option\nlog4j.rootLogger=WARN, stderr\n\nlog4j.appender.stderr=org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.target=System.err\nlog4j.appender.stderr.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.conversionPattern=%d{yyyy/MM/dd HH:mm:ss} %-5p %c %x - %m%n\n\n# Suppress messages from ZKTableStateManager: Creates a large number of table\n# state change messages.\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZKTableStateManager=ERROR\n"
  },
  {
    "path": "hypertable/README.md",
    "content": "<!--\nCopyright (c) 2010 Yahoo! Inc., 2012 - 2015 YCSB contributors.\nAll rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# Install Hypertable\n\nInstallation instructions for Hypertable can be found at:\n\n    code.google.com/p/hypertable/wiki/HypertableManual\n\n\n# Set Up YCSB\n\nClone the YCSB git repository and compile:\n\n    ]$ git clone git://github.com/brianfrankcooper/YCSB.git\n    ]$ cd YCSB\n    ]$ mvn clean package\n\n# Run Hypertable\n\nOnce it has been installed, start Hypertable by running\n\n    ]$ ./bin/ht start all-servers hadoop\n\nif an instance of HDFS is running or\n\n    ]$ ./bin/ht start all-servers local\n\nif the database is backed by the local file system. YCSB accesses\na table called 'usertable' by default. Create this table through the\nHypertable shell by running\n\n    ]$ ./bin/ht shell\n    hypertable> use '/ycsb';\n    hypertable> create table usertable(family);\n    hypertable> quit\n\nAll iteractions by YCSB take place under the Hypertable namespace '/ycsb'.\nHypertable also uses an additional data grouping structure called a column \nfamily that must be set. YCSB doesn't offer fine grained operations on \ncolumn families so in this example the table is created with a single \ncolumn family named 'family' to which all column families will belong. \nThe name of this column family must be passed to YCSB. 
The table can be \nmanipulated from within the hypertable shell without interfering with the\noperation of YCSB. \n\n# Run YCSB\n\nMake sure that an instance of Hypertable is running. To access the database\nthrough the YCSB shell, from the YCSB directory run:\n\n    ]$ ./bin/ycsb shell hypertable -p columnfamily=family\n\nwhere the value passed to columnfamily matches that used in the table\ncreation. To run a workload, first load the data:\n\n    ]$ ./bin/ycsb load hypertable -P workloads/workloada -p columnfamily=family\n\nThen run the workload:\n\n    ]$ ./bin/ycsb run hypertable -P workloads/workloada -p columnfamily=family\n\nThis example runs the core workload 'workloada' that comes packaged with YCSB.\nThe state of the YCSB data in the Hypertable database can be reset by dropping\nusertable and recreating it.\n\n# Configuration Parameters\n\nHypertable configuration settings can be found in conf/hypertable.cfg under \nyour main hypertable directory. Make sure that the constant THRIFTBROKER_PORT\nin the class HypertableClient matches the setting ThriftBroker.Port in \nhypertable.cfg.\n\nTo change the amount of data returned on each call to the ThriftClient on\na Hypertable scan, one must add a new parameter to hypertable.cfg. Include\nThriftBroker.NextThreshold=x where x is set to the size desired in bytes.\nThe default setting of this parameter is 128000.\n\nTo alter the Hypertable namespace YCSB operates under, change the constant\nNAMESPACE in the class HypertableClient.\n"
  },
  {
    "path": "hypertable/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>hypertable-binding</artifactId>\n  <name>Hypertable DB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.thrift</groupId>\n      <artifactId>libthrift</artifactId>\n      <version>${thrift.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.hypertable</groupId>\n      <artifactId>hypertable</artifactId>\n      <version>${hypertable.version}</version>\n    </dependency>\n  </dependencies>\n\n  <repositories>\n    <repository>\n      <id>clojars.org</id>\n      <url>http://clojars.org/repo</url>\n    </repository>\n  
</repositories>\n</project>\n"
  },
  {
    "path": "hypertable/src/main/java/com/yahoo/ycsb/db/HypertableClient.java",
    "content": "/**\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\nimport org.apache.thrift.TException;\nimport org.hypertable.thrift.SerializedCellsFlag;\nimport org.hypertable.thrift.SerializedCellsReader;\nimport org.hypertable.thrift.SerializedCellsWriter;\nimport org.hypertable.thrift.ThriftClient;\nimport org.hypertable.thriftgen.Cell;\nimport org.hypertable.thriftgen.ClientException;\nimport org.hypertable.thriftgen.Key;\nimport org.hypertable.thriftgen.KeyFlag;\nimport org.hypertable.thriftgen.RowInterval;\nimport org.hypertable.thriftgen.ScanSpec;\n\nimport java.nio.ByteBuffer;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.Vector;\n\n/**\n * Hypertable client for YCSB framework.\n */\npublic class HypertableClient extends com.yahoo.ycsb.DB {\n  public static final String NAMESPACE = \"/ycsb\";\n  public static final int THRIFTBROKER_PORT = 38080;\n  public static final int BUFFER_SIZE = 4096;\n\n  private boolean debug = false;\n\n  private ThriftClient connection;\n  private long ns;\n\n  private String columnFamily = \"\";\n\n\n  /**\n   * Initialize any state for this DB. 
Called once per DB instance; there is one\n   * DB instance per client thread.\n   */\n  @Override\n  public void init() throws DBException {\n    if ((getProperties().getProperty(\"debug\") != null)\n        && (getProperties().getProperty(\"debug\").equals(\"true\"))) {\n      debug = true;\n    }\n\n    try {\n      connection = ThriftClient.create(\"localhost\", THRIFTBROKER_PORT);\n\n      if (!connection.namespace_exists(NAMESPACE)) {\n        connection.namespace_create(NAMESPACE);\n      }\n      ns = connection.open_namespace(NAMESPACE);\n    } catch (ClientException e) {\n      throw new DBException(\"Could not open namespace\", e);\n    } catch (TException e) {\n      throw new DBException(\"Could not open namespace\", e);\n    }\n\n    columnFamily = getProperties().getProperty(\"columnfamily\");\n    if (columnFamily == null) {\n      System.err.println(\n          \"Error, must specify a \" + \"columnfamily for Hypertable table\");\n      throw new DBException(\"No columnfamily specified\");\n    }\n  }\n\n  /**\n   * Cleanup any state for this DB. Called once per DB instance; there is one DB\n   * instance per client thread.\n   */\n  @Override\n  public void cleanup() throws DBException {\n    try {\n      connection.namespace_close(ns);\n    } catch (ClientException e) {\n      throw new DBException(\"Could not close namespace\", e);\n    } catch (TException e) {\n      throw new DBException(\"Could not close namespace\", e);\n    }\n  }\n\n  /**\n   * Read a record from the database. 
Each field/value pair from the result will\n   * be stored in a HashMap.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to read.\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n                     Map<String, ByteIterator> result) {\n    // SELECT _column_family:field[i]\n    // FROM table WHERE ROW=key MAX_VERSIONS 1;\n\n    if (debug) {\n      System.out\n          .println(\"Doing read from Hypertable columnfamily \" + columnFamily);\n      System.out.println(\"Doing read for key: \" + key);\n    }\n\n    try {\n      if (null != fields) {\n        Vector<HashMap<String, ByteIterator>> resMap =\n            new Vector<HashMap<String, ByteIterator>>();\n        if (!scan(table, key, 1, fields, resMap).equals(Status.OK)) {\n          return Status.ERROR;\n        }\n        if (!resMap.isEmpty()) {\n          result.putAll(resMap.firstElement());\n        }\n      } else {\n        SerializedCellsReader reader = new SerializedCellsReader(null);\n        reader.reset(connection.get_row_serialized(ns, table, key));\n        while (reader.next()) {\n          result.put(new String(reader.get_column_qualifier()),\n              new ByteArrayByteIterator(reader.get_value()));\n        }\n      }\n    } catch (ClientException e) {\n      if (debug) {\n        System.err.println(\"Error doing read: \" + e.message);\n      }\n      return Status.ERROR;\n    } catch (TException e) {\n      if (debug) {\n        System.err.println(\"Error doing read\");\n      }\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. 
Each field/value\n   * pair from the result will be stored in a HashMap.\n   *\n   * @param table\n   *          The name of the table\n   * @param startkey\n   *          The record key of the first record to read.\n   * @param recordcount\n   *          The number of records to read\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A Vector of HashMaps, where each HashMap is a set field/value\n   *          pairs for one record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    // SELECT _columnFamily:fields FROM table WHERE (ROW >= startkey)\n    // LIMIT recordcount MAX_VERSIONS 1;\n\n    ScanSpec spec = new ScanSpec();\n    RowInterval elem = new RowInterval();\n    elem.setStart_inclusive(true);\n    elem.setStart_row(startkey);\n    spec.addToRow_intervals(elem);\n    if (null != fields) {\n      for (String field : fields) {\n        spec.addToColumns(columnFamily + \":\" + field);\n      }\n    }\n    spec.setVersions(1);\n    spec.setRow_limit(recordcount);\n\n    SerializedCellsReader reader = new SerializedCellsReader(null);\n\n    try {\n      long sc = connection.scanner_open(ns, table, spec);\n\n      String lastRow = null;\n      boolean eos = false;\n      while (!eos) {\n        reader.reset(connection.scanner_get_cells_serialized(sc));\n        while (reader.next()) {\n          String currentRow = new String(reader.get_row());\n          if (!currentRow.equals(lastRow)) {\n            result.add(new HashMap<String, ByteIterator>());\n            lastRow = currentRow;\n          }\n          result.lastElement().put(new String(reader.get_column_qualifier()),\n              new ByteArrayByteIterator(reader.get_value()));\n        }\n        eos = reader.eos();\n\n        if (debug) {\n          
System.out\n              .println(\"Number of rows retrieved so far: \" + result.size());\n        }\n      }\n      connection.scanner_close(sc);\n    } catch (ClientException e) {\n      if (debug) {\n        System.err.println(\"Error doing scan: \" + e.message);\n      }\n      return Status.ERROR;\n    } catch (TException e) {\n      if (debug) {\n        System.err.println(\"Error doing scan\");\n      }\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  /**\n   * Update a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key, overwriting any existing values with the same field name.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to write\n   * @param values\n   *          A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status update(String table, String key,\n                       Map<String, ByteIterator> values) {\n    return insert(table, key, values);\n  }\n\n  /**\n   * Insert a record in the database. 
Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to insert.\n   * @param values\n   *          A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status insert(String table, String key,\n            Map<String, ByteIterator> values) {\n    // INSERT INTO table VALUES\n    // (key, _column_family:entry.getKey(), entry.getValue()), (...);\n\n    if (debug) {\n      System.out.println(\"Setting up put for key: \" + key);\n    }\n\n    try {\n      long mutator = connection.mutator_open(ns, table, 0, 0);\n      SerializedCellsWriter writer =\n          new SerializedCellsWriter(BUFFER_SIZE * values.size(), true);\n      for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n        writer.add(key, columnFamily, entry.getKey(),\n            SerializedCellsFlag.AUTO_ASSIGN,\n            ByteBuffer.wrap(entry.getValue().toArray()));\n      }\n      connection.mutator_set_cells_serialized(mutator, writer.buffer(), true);\n      connection.mutator_close(mutator);\n    } catch (ClientException e) {\n      if (debug) {\n        System.err.println(\"Error doing set: \" + e.message);\n      }\n      return Status.ERROR;\n    } catch (TException e) {\n      if (debug) {\n        System.err.println(\"Error doing set\");\n      }\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status delete(String table, String key) {\n    // DELETE * FROM table WHERE ROW=key;\n\n    if (debug) {\n      System.out.println(\"Doing delete for key: \" + key);\n    }\n\n    Cell entry = new Cell();\n    entry.key = new Key();\n    entry.key.row = key;\n    entry.key.flag = KeyFlag.DELETE_ROW;\n\n    try {\n      connection.set_cell(ns, table, entry);\n    } catch (ClientException e) {\n      if (debug) {\n        System.err.println(\"Error doing delete: \" + e.message);\n      }\n      return Status.ERROR;\n    } catch (TException e) {\n      if (debug) {\n        System.err.println(\"Error doing delete\");\n      }\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n}\n"
  },
  {
    "path": "hypertable/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"http://hypertable.org/\">Hypertable</a>.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "infinispan/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on infinispan. \n\n### 1. Install Java and Maven\n\n### 2. Set Up YCSB\n1. Git clone YCSB and compile:\n  ```\ngit clone http://github.com/brianfrankcooper/YCSB.git\ncd YCSB\nmvn clean package\n  ```\n\n2. Copy and untar YCSB distribution in distribution/target/ycsb-x.x.x.tar.gz to target machine\n\n### 4. Load data and run tests\n####4.1 embedded mode with cluster or not\nLoad the data:\n```\n./bin/ycsb load infinispan -P workloads/workloada -p infinispan.clustered=<true or false>\n```\nRun the workload test:\n```\n./bin/ycsb run infinispan -s -P workloads/workloada -p infinispan.clustered=<true or false>\n```\n####4.2 client-server mode\n    \n1. start infinispan server\n\n2. read [RemoteCacheManager](http://docs.jboss.org/infinispan/7.2/apidocs/org/infinispan/client/hotrod/RemoteCacheManager.html) doc and customize hotrod client properties in infinispan-binding/conf/remote-cache.properties\n\n3. Load the data with specified cache:\n  ```\n./bin/ycsb load infinispan-cs -s -P workloads/workloada -P infinispan-binding/conf/remote-cache.properties -p cache=<cache name>\n  ```\n\n4. Run the workload test with specified cache:\n  ```\n./bin/ycsb run infinispan-cs -s -P workloads/workloada -P infinispan-binding/conf/remote-cache.properties -p cache=<cache name>\n  ```"
  },
  {
    "path": "infinispan/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>infinispan-binding</artifactId>\n  <name>Infinispan DB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.infinispan</groupId>\n      <artifactId>infinispan-client-hotrod</artifactId>\n      <version>${infinispan.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.infinispan</groupId>\n      <artifactId>infinispan-core</artifactId>\n      <version>${infinispan.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "infinispan/src/main/conf/infinispan-config.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<infinispan\n      xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n      xsi:schemaLocation=\"urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd\"\n      xmlns=\"urn:infinispan:config:5.0\">\n\n   <global>\n      <transport clusterName=\"x\" />\n   </global>\n\n   <default>\n      <transaction />\n      <invocationBatching enabled=\"true\" />\n   </default>\n\n</infinispan>"
  },
  {
    "path": "infinispan/src/main/conf/remote-cache.properties",
    "content": "# Copyright (c) 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\ninfinispan.client.hotrod.server_list=192.168.101.17:11222\ninfinispan.client.hotrod.force_return_values=false\n\nmaxActive=-1\nmaxIdle=-1\nminIdle=1\nmaxTotal=-1\n\nwhenExhaustedAction=1\n"
  },
  {
    "path": "infinispan/src/main/java/com/yahoo/ycsb/db/InfinispanClient.java",
    "content": "/**\n * Copyright (c) 2012-2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport org.infinispan.Cache;\nimport org.infinispan.atomic.AtomicMap;\nimport org.infinispan.atomic.AtomicMapLookup;\nimport org.infinispan.manager.DefaultCacheManager;\nimport org.infinispan.manager.EmbeddedCacheManager;\nimport org.infinispan.util.logging.Log;\nimport org.infinispan.util.logging.LogFactory;\n\nimport java.io.IOException;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.Vector;\n\n/**\n * This is a client implementation for Infinispan 5.x.\n */\npublic class InfinispanClient extends DB {\n  private static final Log LOGGER = LogFactory.getLog(InfinispanClient.class);\n\n  // An optimisation for clustered mode\n  private final boolean clustered;\n\n  private EmbeddedCacheManager infinispanManager;\n\n  public InfinispanClient() {\n    clustered = Boolean.getBoolean(\"infinispan.clustered\");\n  }\n\n  public void init() throws DBException {\n    try {\n      infinispanManager = new DefaultCacheManager(\"infinispan-config.xml\");\n    } catch (IOException e) {\n      throw new DBException(e);\n    }\n  }\n\n  public 
void cleanup() {\n    infinispanManager.stop();\n    infinispanManager = null;\n  }\n\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    try {\n      Map<String, String> row;\n      if (clustered) {\n        row = AtomicMapLookup.getAtomicMap(infinispanManager.getCache(table), key, false);\n      } else {\n        Cache<String, Map<String, String>> cache = infinispanManager.getCache(table);\n        row = cache.get(key);\n      }\n      if (row != null) {\n        result.clear();\n        if (fields == null || fields.isEmpty()) {\n          StringByteIterator.putAllAsByteIterators(result, row);\n        } else {\n          for (String field : fields) {\n            // Guard against missing fields to avoid a NullPointerException.\n            String value = row.get(field);\n            if (value != null) {\n              result.put(field, new StringByteIterator(value));\n            }\n          }\n        }\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.error(e);\n      return Status.ERROR;\n    }\n  }\n\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    LOGGER.warn(\"Infinispan does not support scan semantics\");\n    return Status.OK;\n  }\n\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      if (clustered) {\n        AtomicMap<String, String> row = AtomicMapLookup.getAtomicMap(infinispanManager.getCache(table), key);\n        StringByteIterator.putAllAsStrings(row, values);\n      } else {\n        Cache<String, Map<String, String>> cache = infinispanManager.getCache(table);\n        Map<String, String> row = cache.get(key);\n        if (row == null) {\n          row = StringByteIterator.getStringMap(values);\n          cache.put(key, row);\n        } else {\n          StringByteIterator.putAllAsStrings(row, values);\n        }\n      }\n\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.error(e);\n      return Status.ERROR;\n    }\n  }\n\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      if (clustered) {\n        AtomicMap<String, String> row = AtomicMapLookup.getAtomicMap(infinispanManager.getCache(table), key);\n        row.clear();\n        StringByteIterator.putAllAsStrings(row, values);\n      } else {\n        // Store a plain String map so that read() can consume the row,\n        // matching what update() writes in the non-clustered path.\n        infinispanManager.getCache(table).put(key, StringByteIterator.getStringMap(values));\n      }\n\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.error(e);\n      return Status.ERROR;\n    }\n  }\n\n  public Status delete(String table, String key) {\n    try {\n      if (clustered) {\n        AtomicMapLookup.removeAtomicMap(infinispanManager.getCache(table), key);\n      } else {\n        infinispanManager.getCache(table).remove(key);\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.error(e);\n      return Status.ERROR;\n    }\n  }\n}\n"
  },
  {
    "path": "infinispan/src/main/java/com/yahoo/ycsb/db/InfinispanRemoteClient.java",
    "content": "/**\n * Copyright (c) 2015-2016 YCSB contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.yahoo.ycsb.*;\nimport org.infinispan.client.hotrod.RemoteCache;\nimport org.infinispan.client.hotrod.RemoteCacheManager;\nimport org.infinispan.util.logging.Log;\nimport org.infinispan.util.logging.LogFactory;\n\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.Vector;\n\n/**\n * This is a client implementation for Infinispan 5.x in client-server mode.\n */\npublic class InfinispanRemoteClient extends DB {\n\n  private static final Log LOGGER = LogFactory.getLog(InfinispanRemoteClient.class);\n\n  private RemoteCacheManager remoteIspnManager;\n  private String cacheName = null;\n\n  @Override\n  public void init() throws DBException {\n    remoteIspnManager = RemoteCacheManagerHolder.getInstance(getProperties());\n    cacheName = getProperties().getProperty(\"cache\");\n  }\n\n  @Override\n  public void cleanup() {\n    remoteIspnManager.stop();\n    remoteIspnManager = null;\n  }\n\n  @Override\n  public Status insert(String table, String recordKey, Map<String, ByteIterator> values) {\n    String compositKey = createKey(table, recordKey);\n    Map<String, String> stringValues = new HashMap<>();\n    StringByteIterator.putAllAsStrings(stringValues, values);\n    try {\n      
cache().put(compositKey, stringValues);\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.error(e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status read(String table, String recordKey, Set<String> fields, Map<String, ByteIterator> result) {\n    String compositKey = createKey(table, recordKey);\n    try {\n      Map<String, String> values = cache().get(compositKey);\n\n      if (values == null || values.isEmpty()) {\n        return Status.NOT_FOUND;\n      }\n\n      if (fields == null) { //get all field/value pairs\n        StringByteIterator.putAllAsByteIterators(result, values);\n      } else {\n        for (String field : fields) {\n          String value = values.get(field);\n          if (value != null) {\n            result.put(field, new StringByteIterator(value));\n          }\n        }\n      }\n\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.error(e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\n                     Vector<HashMap<String, ByteIterator>> result) {\n    LOGGER.warn(\"Infinispan does not support scan semantics\");\n    return Status.NOT_IMPLEMENTED;\n  }\n\n  @Override\n  public Status update(String table, String recordKey, Map<String, ByteIterator> values) {\n    String compositKey = createKey(table, recordKey);\n    try {\n      Map<String, String> stringValues = new HashMap<>();\n      StringByteIterator.putAllAsStrings(stringValues, values);\n      cache().put(compositKey, stringValues);\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.error(e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status delete(String table, String recordKey) {\n    String compositKey = createKey(table, recordKey);\n    try {\n      cache().remove(compositKey);\n      return Status.OK;\n    } catch (Exception e) {\n      LOGGER.error(e);\n      return 
Status.ERROR;\n    }\n  }\n\n  private RemoteCache<String, Map<String, String>> cache() {\n    if (this.cacheName != null) {\n      return remoteIspnManager.getCache(cacheName);\n    } else {\n      return remoteIspnManager.getCache();\n    }\n  }\n\n  private String createKey(String table, String recordKey) {\n    return table + \"-\" + recordKey;\n  }\n}\n"
  },
  {
    "path": "infinispan/src/main/java/com/yahoo/ycsb/db/RemoteCacheManagerHolder.java",
    "content": "/**\n * Copyright (c) 2015-2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport java.util.Properties;\n\nimport org.infinispan.client.hotrod.RemoteCacheManager;\n\n/**\n * Utility class to ensure only a single RemoteCacheManager is created.\n */\nfinal class RemoteCacheManagerHolder {\n\n  private static volatile RemoteCacheManager cacheManager = null;\n\n  private RemoteCacheManagerHolder() {\n  }\n\n  static RemoteCacheManager getInstance(Properties props) {\n    RemoteCacheManager result = cacheManager;\n    if (result == null) {\n      synchronized (RemoteCacheManagerHolder.class) {\n        result = cacheManager;\n        if (result == null) {\n          result = new RemoteCacheManager(props);\n          cacheManager = new RemoteCacheManager(props);\n        }\n      }\n    }\n    return result;\n  }\n}\n"
  },
  {
    "path": "infinispan/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2015-2016 YCSB Contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"http://infinispan.org/\">Infinispan</a>.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "jdbc/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# JDBC Driver for YCSB\nThis driver enables YCSB to work with databases accessible via the JDBC protocol.\n\n## Getting Started\n### 1. Start your database\nThis driver will connect to databases that use the JDBC protocol, please refer to your databases documentation on information on how to install, configure and start your system.\n\n### 2. Set up YCSB\nYou can clone the YCSB project and compile it to stay up to date with the latest changes. Or you can just download the latest release and unpack it. Either way, instructions for doing so can be found here: https://github.com/brianfrankcooper/YCSB.\n\n### 3. 
Configure your database and table.\nYou can name your database what ever you want, you will need to provide the database name in the JDBC connection string.\n\nYou can name your table whatever you like also, but it needs to be specified using the YCSB core properties, the default is to just use 'usertable' as the table name.\n\nThe expected table schema will look similar to the following, syntactical differences may exist with your specific database:\n\n```sql\nCREATE TABLE usertable (\n\tYCSB_KEY VARCHAR(255) PRIMARY KEY,\n\tFIELD0 TEXT, FIELD1 TEXT,\n\tFIELD2 TEXT, FIELD3 TEXT,\n\tFIELD4 TEXT, FIELD5 TEXT,\n\tFIELD6 TEXT, FIELD7 TEXT,\n\tFIELD8 TEXT, FIELD9 TEXT\n);\n```\n\nKey take aways:\n\n* The primary key field needs to be named YCSB_KEY\n* The other fields need to be prefixed with FIELD and count up starting from 1\n* Add the same number of FIELDs as you specify in the YCSB core properties, default is 10.\n* The type of the fields is not so important as long as they can accept strings of the length that you specify in the YCSB core properties, default is 100.\n\n#### JdbcDBCreateTable Utility\nYCSB has a utility to help create your SQL table. NOTE: It does not support all databases flavors, if it does not work for you, you will have to create your table manually with the schema given above. An example usage of the utility:\n\n```sh\njava -cp YCSB_HOME/jdbc-binding/lib/jdbc-binding-0.4.0.jar:mysql-connector-java-5.1.37-bin.jar com.yahoo.ycsb.db.JdbcDBCreateTable -P db.properties -n usertable\n```\n\nHint: you need to include your Driver jar in the classpath as well as specify JDBC connection information via a properties file, and a table name with ```-n```. \n\nSimply executing the JdbcDBCreateTable class without any other parameters will print out usage information.\n\n### 4. 
Configure YCSB connection properties\nYou need to set the following connection configurations:\n\n```sh\ndb.driver=com.mysql.jdbc.Driver\ndb.url=jdbc:mysql://127.0.0.1:3306/ycsb\ndb.user=admin\ndb.passwd=admin\n```\n\nBe sure to use your driver class, a valid JDBC connection string, and credentials to your database.\n\nYou can add these to your workload configuration or a separate properties file and specify it with ```-P``` or you can add the properties individually to your ycsb command with ```-p```.\n\n### 5. Add your JDBC Driver to the classpath\nThere are several ways to do this, but a couple easy methods are to put a copy of your Driver jar in ```YCSB_HOME/jdbc-binding/lib/``` or just specify the path to your Driver jar with ```-cp``` in your ycsb command.\n\n### 6. Running a workload\nBefore you can actually run the workload, you need to \"load\" the data first.\n\n```sh\nbin/ycsb load jdbc -P workloads/workloada -P db.properties -cp mysql-connector-java.jar\n```\n\nThen, you can run the workload:\n\n```sh\nbin/ycsb run jdbc -P workloads/workloada -P db.properties -cp mysql-connector-java.jar\n```\n\n## Configuration Properties\n\n```sh\ndb.driver=com.mysql.jdbc.Driver\t\t\t\t# The JDBC driver class to use.\ndb.url=jdbc:mysql://127.0.0.1:3306/ycsb\t\t# The Database connection URL.\ndb.user=admin\t\t\t\t\t\t\t\t# User name for the connection.\ndb.passwd=admin\t\t\t\t\t\t\t\t# Password for the connection.\ndb.batchsize=1000             # The batch size for doing batched inserts. Defaults to 0. 
Set to >0 to use batching.\njdbc.fetchsize=10\t\t\t\t\t\t\t# The JDBC fetch size hinted to the driver.\njdbc.autocommit=true\t\t\t\t\t\t# The JDBC connection auto-commit property for the driver.\njdbc.batchupdateapi=false     # Use addBatch()/executeBatch() JDBC methods instead of executeUpdate() for writes (default: false)\ndb.batchsize=1000             # The number of rows to be batched before commit (or executeBatch() when jdbc.batchupdateapi=true)\n```\n\nPlease refer to https://github.com/brianfrankcooper/YCSB/wiki/Core-Properties for all other YCSB core properties.\n"
  },
  {
    "path": "jdbc/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n  \n  <artifactId>jdbc-binding</artifactId>\n  <name>JDBC DB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.apache.openjpa</groupId>\n      <artifactId>openjpa-jdbc</artifactId>\n      <version>${openjpa.jdbc.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.hsqldb</groupId>\n      <artifactId>hsqldb</artifactId>\n      <version>2.3.3</version>\n      <scope>test</scope>\n    </dependency>\n  
</dependencies>\n</project>\n"
  },
  {
    "path": "jdbc/src/main/conf/db.properties",
    "content": "# Copyright (c) 2012 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# Properties file that contains database connection information.\n\ndb.driver=org.h2.Driver\n# jdbc.fetchsize=20\ndb.url=jdbc:h2:tcp://foo.com:9092/~/h2/ycsb\ndb.user=sa\ndb.passwd=\n"
  },
  {
    "path": "jdbc/src/main/conf/h2.properties",
    "content": "# Copyright (c) 2012 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# Properties file that contains database connection information.\n\ndb.driver=org.h2.Driver\ndb.url=jdbc:h2:tcp://foo.com:9092/~/h2/ycsb\ndb.user=sa\ndb.passwd=\n"
  },
  {
    "path": "jdbc/src/main/java/com/yahoo/ycsb/db/JdbcDBCli.java",
    "content": "/**\n * Copyright (c) 2010 - 2016 Yahoo! Inc. All rights reserved.\n * \n * Licensed under the Apache License, Version 2.0 (the \"License\"); you \n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at \n * \n * http://www.apache.org/licenses/LICENSE-2.0\n * \n * Unless required by applicable law or agreed to in writing, software \n * distributed under the License is distributed on an \"AS IS\" BASIS, \n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or \n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying \n * LICENSE file. \n */\npackage com.yahoo.ycsb.db;\n\nimport java.io.FileInputStream;\nimport java.io.IOException;\nimport java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.SQLException;\nimport java.sql.Statement;\nimport java.util.Enumeration;\nimport java.util.Properties;\n\n/**\n * Execute a JDBC command line.\n * \n * @author sudipto\n */\npublic final class JdbcDBCli {\n\n  private static void usageMessage() {\n    System.out.println(\"JdbcCli. 
Options:\");\n    System.out.println(\"  -p   key=value properties defined.\");\n    System.out.println(\"  -P   location of the properties file to load.\");\n    System.out.println(\"  -c   SQL command to execute.\");\n  }\n\n  private static void executeCommand(Properties props, String sql) throws SQLException {\n    String driver = props.getProperty(JdbcDBClient.DRIVER_CLASS);\n    String username = props.getProperty(JdbcDBClient.CONNECTION_USER);\n    String password = props.getProperty(JdbcDBClient.CONNECTION_PASSWD, \"\");\n    String url = props.getProperty(JdbcDBClient.CONNECTION_URL);\n    if (driver == null || username == null || url == null) {\n      throw new SQLException(\"Missing connection information.\");\n    }\n\n    Connection conn = null;\n\n    try {\n      Class.forName(driver);\n\n      conn = DriverManager.getConnection(url, username, password);\n      Statement stmt = conn.createStatement();\n      stmt.execute(sql);\n      System.out.println(\"Command  \\\"\" + sql + \"\\\" successfully executed.\");\n    } catch (ClassNotFoundException e) {\n      throw new SQLException(\"JDBC Driver class not found.\");\n    } finally {\n      if (conn != null) {\n        System.out.println(\"Closing database connection.\");\n        conn.close();\n      }\n    }\n  }\n\n  /**\n   * @param args\n   */\n  public static void main(String[] args) {\n\n    if (args.length == 0) {\n      usageMessage();\n      System.exit(0);\n    }\n\n    Properties props = new Properties();\n    Properties fileprops = new Properties();\n    String sql = null;\n\n    // parse arguments\n    int argindex = 0;\n    while (args[argindex].startsWith(\"-\")) {\n      if (args[argindex].compareTo(\"-P\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.exit(0);\n        }\n        String propfile = args[argindex];\n        argindex++;\n\n        Properties myfileprops = new Properties();\n        try {\n          
myfileprops.load(new FileInputStream(propfile));\n        } catch (IOException e) {\n          System.out.println(e.getMessage());\n          System.exit(0);\n        }\n\n        // Issue #5 - remove call to stringPropertyNames to make compilable\n        // under Java 1.5\n        for (Enumeration<?> e = myfileprops.propertyNames(); e.hasMoreElements();) {\n          String prop = (String) e.nextElement();\n\n          fileprops.setProperty(prop, myfileprops.getProperty(prop));\n        }\n\n      } else if (args[argindex].compareTo(\"-p\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.exit(0);\n        }\n        int eq = args[argindex].indexOf('=');\n        if (eq < 0) {\n          usageMessage();\n          System.exit(0);\n        }\n\n        String name = args[argindex].substring(0, eq);\n        String value = args[argindex].substring(eq + 1);\n        props.put(name, value);\n        argindex++;\n      } else if (args[argindex].compareTo(\"-c\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.exit(0);\n        }\n        sql = args[argindex++];\n      } else {\n        System.out.println(\"Unknown option \" + args[argindex]);\n        usageMessage();\n        System.exit(0);\n      }\n\n      if (argindex >= args.length) {\n        break;\n      }\n    }\n\n    if (argindex != args.length) {\n      usageMessage();\n      System.exit(0);\n    }\n\n    // overwrite file properties with properties from the command line\n\n    // Issue #5 - remove call to stringPropertyNames to make compilable under\n    // Java 1.5\n    for (Enumeration<?> e = props.propertyNames(); e.hasMoreElements();) {\n      String prop = (String) e.nextElement();\n\n      fileprops.setProperty(prop, props.getProperty(prop));\n    }\n\n    if (sql == null) {\n      System.err.println(\"Missing command.\");\n      usageMessage();\n      System.exit(1);\n  
  }\n\n    try {\n      executeCommand(fileprops, sql);\n    } catch (SQLException e) {\n      System.err.println(\"Error in executing command. \" + e);\n      System.exit(1);\n    }\n  }\n\n  /**\n   * Hidden constructor.\n   */\n  private JdbcDBCli() {\n    super();\n  }\n}\n"
  },
  {
    "path": "jdbc/src/main/java/com/yahoo/ycsb/db/JdbcDBClient.java",
    "content": "/**\n * Copyright (c) 2010 - 2016 Yahoo! Inc., 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport java.sql.*;\nimport java.util.*;\nimport java.util.concurrent.ConcurrentHashMap;\nimport java.util.concurrent.ConcurrentMap;\nimport com.yahoo.ycsb.db.flavors.DBFlavor;\n\n/**\n * A class that wraps a JDBC compliant database to allow it to be interfaced\n * with YCSB. This class extends {@link DB} and implements the database\n * interface used by YCSB client.\n *\n * <br>\n * Each client will have its own instance of this class. This client is not\n * thread safe.\n *\n * <br>\n * This interface expects a schema <key> <field1> <field2> <field3> ... All\n * attributes are of type TEXT. All accesses are through the primary key.\n * Therefore, only one index on the primary key is needed.\n */\npublic class JdbcDBClient extends DB {\n\n  /** The class to use as the jdbc driver. */\n  public static final String DRIVER_CLASS = \"db.driver\";\n\n  /** The URL to connect to the database. */\n  public static final String CONNECTION_URL = \"db.url\";\n\n  /** The user name to use to connect to the database. 
*/\n  public static final String CONNECTION_USER = \"db.user\";\n\n  /** The password to use for establishing the connection. */\n  public static final String CONNECTION_PASSWD = \"db.passwd\";\n\n  /** The batch size for batched inserts. Set to >0 to use batching */\n  public static final String DB_BATCH_SIZE = \"db.batchsize\";\n\n  /** The JDBC fetch size hinted to the driver. */\n  public static final String JDBC_FETCH_SIZE = \"jdbc.fetchsize\";\n\n  /** The JDBC connection auto-commit property for the driver. */\n  public static final String JDBC_AUTO_COMMIT = \"jdbc.autocommit\";\n\n  public static final String JDBC_BATCH_UPDATES = \"jdbc.batchupdateapi\";\n\n  /** The name of the property for the number of fields in a record. */\n  public static final String FIELD_COUNT_PROPERTY = \"fieldcount\";\n\n  /** Default number of fields in a record. */\n  public static final String FIELD_COUNT_PROPERTY_DEFAULT = \"10\";\n\n  /** Representing a NULL value. */\n  public static final String NULL_VALUE = \"NULL\";\n\n  /** The primary key in the user table. */\n  public static final String PRIMARY_KEY = \"YCSB_KEY\";\n\n  /** The field name prefix in the table. */\n  public static final String COLUMN_PREFIX = \"FIELD\";\n\n  private List<Connection> conns;\n  private boolean initialized = false;\n  private Properties props;\n  private int jdbcFetchSize;\n  private int batchSize;\n  private boolean autoCommit;\n  private boolean batchUpdates;\n  private static final String DEFAULT_PROP = \"\";\n  private ConcurrentMap<StatementType, PreparedStatement> cachedStatements;\n  private long numRowsInBatch = 0;\n  /** DB flavor defines DB-specific syntax and behavior for the\n   * particular database. 
Current database flavors are: {default, phoenix} */\n  private DBFlavor dbFlavor;\n\n  /**\n   * Ordered field information for insert and update statements.\n   */\n  private static class OrderedFieldInfo {\n    private String fieldKeys;\n    private List<String> fieldValues;\n\n    OrderedFieldInfo(String fieldKeys, List<String> fieldValues) {\n      this.fieldKeys = fieldKeys;\n      this.fieldValues = fieldValues;\n    }\n\n    String getFieldKeys() {\n      return fieldKeys;\n    }\n\n    List<String> getFieldValues() {\n      return fieldValues;\n    }\n  }\n\n  /**\n   * For the given key, returns what shard contains data for this key.\n   *\n   * @param key Data key to do operation on\n   * @return Shard index\n   */\n  private int getShardIndexByKey(String key) {\n    int ret = Math.abs(key.hashCode()) % conns.size();\n    return ret;\n  }\n\n  /**\n   * For the given key, returns Connection object that holds connection to the\n   * shard that contains this key.\n   *\n   * @param key Data key to get information for\n   * @return Connection object\n   */\n  private Connection getShardConnectionByKey(String key) {\n    return conns.get(getShardIndexByKey(key));\n  }\n\n  private void cleanupAllConnections() throws SQLException {\n    for (Connection conn : conns) {\n      if (!autoCommit) {\n        conn.commit();\n      }\n      conn.close();\n    }\n  }\n\n  /** Returns parsed int value from the properties if set, otherwise returns -1. */\n  private static int getIntProperty(Properties props, String key) throws DBException {\n    String valueStr = props.getProperty(key);\n    if (valueStr != null) {\n      try {\n        return Integer.parseInt(valueStr);\n      } catch (NumberFormatException nfe) {\n        System.err.println(\"Invalid \" + key + \" specified: \" + valueStr);\n        throw new DBException(nfe);\n      }\n    }\n    return -1;\n  }\n\n  /** Returns parsed boolean value from the properties if set, otherwise returns defaultVal. 
*/\n  private static boolean getBoolProperty(Properties props, String key, boolean defaultVal) {\n    String valueStr = props.getProperty(key);\n    if (valueStr != null) {\n      return Boolean.parseBoolean(valueStr);\n    }\n    return defaultVal;\n  }\n\n  @Override\n  public void init() throws DBException {\n    if (initialized) {\n      System.err.println(\"Client connection already initialized.\");\n      return;\n    }\n    props = getProperties();\n    String urls = props.getProperty(CONNECTION_URL, DEFAULT_PROP);\n    String user = props.getProperty(CONNECTION_USER, DEFAULT_PROP);\n    String passwd = props.getProperty(CONNECTION_PASSWD, DEFAULT_PROP);\n    String driver = props.getProperty(DRIVER_CLASS);\n\n    this.jdbcFetchSize = getIntProperty(props, JDBC_FETCH_SIZE);\n    this.batchSize = getIntProperty(props, DB_BATCH_SIZE);\n\n    this.autoCommit = getBoolProperty(props, JDBC_AUTO_COMMIT, true);\n    this.batchUpdates = getBoolProperty(props, JDBC_BATCH_UPDATES, false);\n\n    try {\n      if (driver != null) {\n        Class.forName(driver);\n      }\n      int shardCount = 0;\n      conns = new ArrayList<Connection>(3);\n      final String[] urlArr = urls.split(\",\");\n      for (String url : urlArr) {\n        System.out.println(\"Adding shard node URL: \" + url);\n        Connection conn = DriverManager.getConnection(url, user, passwd);\n\n        // Since there is no explicit commit method in the DB interface, all\n        // operations should auto commit, except when explicitly told not to\n        // (this is necessary in cases such as for PostgreSQL when running a\n        // scan workload with fetchSize)\n        conn.setAutoCommit(autoCommit);\n\n        shardCount++;\n        conns.add(conn);\n      }\n\n      System.out.println(\"Using shards: \" + shardCount + \", batchSize:\" + batchSize + \", fetchSize: \" + jdbcFetchSize);\n\n      cachedStatements = new ConcurrentHashMap<StatementType, PreparedStatement>();\n\n      this.dbFlavor = 
DBFlavor.fromJdbcUrl(urlArr[0]);\n    } catch (ClassNotFoundException e) {\n      System.err.println(\"Error in initializing the JDBC driver: \" + e);\n      throw new DBException(e);\n    } catch (SQLException e) {\n      System.err.println(\"Error in database operation: \" + e);\n      throw new DBException(e);\n    } catch (NumberFormatException e) {\n      System.err.println(\"Invalid value for fieldcount property. \" + e);\n      throw new DBException(e);\n    }\n\n    initialized = true;\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    if (batchSize > 0) {\n      try {\n        // commit unfinished batches\n        for (PreparedStatement st : cachedStatements.values()) {\n          if (!st.getConnection().isClosed() && !st.isClosed() && (numRowsInBatch % batchSize != 0)) {\n            st.executeBatch();\n          }\n        }\n      } catch (SQLException e) {\n        System.err.println(\"Error in cleanup execution. \" + e);\n        throw new DBException(e);\n      }\n    }\n\n    try {\n      cleanupAllConnections();\n    } catch (SQLException e) {\n      System.err.println(\"Error in closing the connection.
\" + e);\n      throw new DBException(e);\n    }\n  }\n\n  private PreparedStatement createAndCacheInsertStatement(StatementType insertType, String key)\n      throws SQLException {\n    String insert = dbFlavor.createInsertStatement(insertType, key);\n    PreparedStatement insertStatement = getShardConnectionByKey(key).prepareStatement(insert);\n    PreparedStatement stmt = cachedStatements.putIfAbsent(insertType, insertStatement);\n    if (stmt == null) {\n      return insertStatement;\n    }\n    return stmt;\n  }\n\n  private PreparedStatement createAndCacheReadStatement(StatementType readType, String key)\n      throws SQLException {\n    String read = dbFlavor.createReadStatement(readType, key);\n    PreparedStatement readStatement = getShardConnectionByKey(key).prepareStatement(read);\n    PreparedStatement stmt = cachedStatements.putIfAbsent(readType, readStatement);\n    if (stmt == null) {\n      return readStatement;\n    }\n    return stmt;\n  }\n\n  private PreparedStatement createAndCacheDeleteStatement(StatementType deleteType, String key)\n      throws SQLException {\n    String delete = dbFlavor.createDeleteStatement(deleteType, key);\n    PreparedStatement deleteStatement = getShardConnectionByKey(key).prepareStatement(delete);\n    PreparedStatement stmt = cachedStatements.putIfAbsent(deleteType, deleteStatement);\n    if (stmt == null) {\n      return deleteStatement;\n    }\n    return stmt;\n  }\n\n  private PreparedStatement createAndCacheUpdateStatement(StatementType updateType, String key)\n      throws SQLException {\n    String update = dbFlavor.createUpdateStatement(updateType, key);\n    PreparedStatement updateStatement = getShardConnectionByKey(key).prepareStatement(update);\n    PreparedStatement stmt = cachedStatements.putIfAbsent(updateType, updateStatement);\n    if (stmt == null) {\n      return updateStatement;\n    }\n    return stmt;\n  }\n\n  private PreparedStatement createAndCacheScanStatement(StatementType scanType, String
key)\n      throws SQLException {\n    String select = dbFlavor.createScanStatement(scanType, key);\n    PreparedStatement scanStatement = getShardConnectionByKey(key).prepareStatement(select);\n    if (this.jdbcFetchSize > 0) {\n      scanStatement.setFetchSize(this.jdbcFetchSize);\n    }\n    PreparedStatement stmt = cachedStatements.putIfAbsent(scanType, scanStatement);\n    if (stmt == null) {\n      return scanStatement;\n    }\n    return stmt;\n  }\n\n  @Override\n  public Status read(String tableName, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    try {\n      StatementType type = new StatementType(StatementType.Type.READ, tableName, 1, \"\", getShardIndexByKey(key));\n      PreparedStatement readStatement = cachedStatements.get(type);\n      if (readStatement == null) {\n        readStatement = createAndCacheReadStatement(type, key);\n      }\n      readStatement.setString(1, key);\n      ResultSet resultSet = readStatement.executeQuery();\n      if (!resultSet.next()) {\n        resultSet.close();\n        return Status.NOT_FOUND;\n      }\n      if (result != null && fields != null) {\n        for (String field : fields) {\n          String value = resultSet.getString(field);\n          result.put(field, new StringByteIterator(value));\n        }\n      }\n      resultSet.close();\n      return Status.OK;\n    } catch (SQLException e) {\n      System.err.println(\"Error in processing read of table \" + tableName + \": \" + e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status scan(String tableName, String startKey, int recordcount, Set<String> fields,\n                     Vector<HashMap<String, ByteIterator>> result) {\n    try {\n      StatementType type = new StatementType(StatementType.Type.SCAN, tableName, 1, \"\", getShardIndexByKey(startKey));\n      PreparedStatement scanStatement = cachedStatements.get(type);\n      if (scanStatement == null) {\n        scanStatement = 
createAndCacheScanStatement(type, startKey);\n      }\n      scanStatement.setString(1, startKey);\n      scanStatement.setInt(2, recordcount);\n      ResultSet resultSet = scanStatement.executeQuery();\n      for (int i = 0; i < recordcount && resultSet.next(); i++) {\n        if (result != null && fields != null) {\n          HashMap<String, ByteIterator> values = new HashMap<String, ByteIterator>();\n          for (String field : fields) {\n            String value = resultSet.getString(field);\n            values.put(field, new StringByteIterator(value));\n          }\n          result.add(values);\n        }\n      }\n      resultSet.close();\n      return Status.OK;\n    } catch (SQLException e) {\n      System.err.println(\"Error in processing scan of table \" + tableName + \": \" + e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status update(String tableName, String key, Map<String, ByteIterator> values) {\n    try {\n      int numFields = values.size();\n      OrderedFieldInfo fieldInfo = getFieldInfo(values);\n      StatementType type = new StatementType(StatementType.Type.UPDATE, tableName,\n          numFields, fieldInfo.getFieldKeys(), getShardIndexByKey(key));\n      PreparedStatement updateStatement = cachedStatements.get(type);\n      if (updateStatement == null) {\n        updateStatement = createAndCacheUpdateStatement(type, key);\n      }\n      int index = 1;\n      for (String value: fieldInfo.getFieldValues()) {\n        updateStatement.setString(index++, value);\n      }\n      updateStatement.setString(index, key);\n      int result = updateStatement.executeUpdate();\n      if (result == 1) {\n        return Status.OK;\n      }\n      return Status.UNEXPECTED_STATE;\n    } catch (SQLException e) {\n      System.err.println(\"Error in processing update to table \" + tableName + \": \" + e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status insert(String tableName, String key, Map<String, ByteIterator> values) {\n
 try {\n      int numFields = values.size();\n      OrderedFieldInfo fieldInfo = getFieldInfo(values);\n      StatementType type = new StatementType(StatementType.Type.INSERT, tableName,\n          numFields, fieldInfo.getFieldKeys(), getShardIndexByKey(key));\n      PreparedStatement insertStatement = cachedStatements.get(type);\n      if (insertStatement == null) {\n        insertStatement = createAndCacheInsertStatement(type, key);\n      }\n      insertStatement.setString(1, key);\n      int index = 2;\n      for (String value: fieldInfo.getFieldValues()) {\n        insertStatement.setString(index++, value);\n      }\n      // Using the batch insert API\n      if (batchUpdates) {\n        insertStatement.addBatch();\n        // Check for a sane batch size\n        if (batchSize > 0) {\n          // Commit the batch after it grows beyond the configured size\n          if (++numRowsInBatch % batchSize == 0) {\n            int[] results = insertStatement.executeBatch();\n            for (int r : results) {\n              if (r != 1) {\n                return Status.ERROR;\n              }\n            }\n            // If autoCommit is off, make sure we commit the batch\n            if (!autoCommit) {\n              getShardConnectionByKey(key).commit();\n            }\n            return Status.OK;\n          } // else, batchSize is the default of -1 or nonsensical;
treat it as an infinitely large batch.\n        } // else, we let the batch accumulate\n        // Added element to the batch, potentially committing the batch too.\n        return Status.BATCHED_OK;\n      } else {\n        // Normal, non-batched insert\n        int result = insertStatement.executeUpdate();\n        // If autoCommit is off, we may need to commit now\n        if (!autoCommit) {\n          // Let updates be batched locally\n          if (batchSize > 0) {\n            if (++numRowsInBatch % batchSize == 0) {\n              // Commit the batch of updates\n              getShardConnectionByKey(key).commit();\n            }\n            // Uncommitted rows are flushed at the next batch boundary or in cleanup()\n            return Status.OK;\n          } else {\n            // Commit each update\n            getShardConnectionByKey(key).commit();\n          }\n        }\n        if (result == 1) {\n          return Status.OK;\n        }\n      }\n      return Status.UNEXPECTED_STATE;\n    } catch (SQLException e) {\n      System.err.println(\"Error in processing insert to table \" + tableName + \": \" + e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status delete(String tableName, String key) {\n    try {\n      StatementType type = new StatementType(StatementType.Type.DELETE, tableName, 1, \"\", getShardIndexByKey(key));\n      PreparedStatement deleteStatement = cachedStatements.get(type);\n      if (deleteStatement == null) {\n        deleteStatement = createAndCacheDeleteStatement(type, key);\n      }\n      deleteStatement.setString(1, key);\n      int result = deleteStatement.executeUpdate();\n      if (result == 1) {\n        return Status.OK;\n      }\n      return Status.UNEXPECTED_STATE;\n    } catch (SQLException e) {\n      System.err.println(\"Error in processing delete from table \" + tableName + \": \" + e);\n      return Status.ERROR;\n    }\n  }\n\n  private OrderedFieldInfo getFieldInfo(Map<String, ByteIterator> values) {\n    String fieldKeys = \"\";\n    List<String> fieldValues = new ArrayList<>();\n
int count = 0;\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      fieldKeys += entry.getKey();\n      if (count < values.size() - 1) {\n        fieldKeys += \",\";\n      }\n      fieldValues.add(count, entry.getValue().toString());\n      count++;\n    }\n\n    return new OrderedFieldInfo(fieldKeys, fieldValues);\n  }\n}\n"
  },
  {
    "path": "jdbc/src/main/java/com/yahoo/ycsb/db/JdbcDBCreateTable.java",
    "content": "/**\n * Copyright (c) 2010 - 2016 Yahoo! Inc. All rights reserved.\n * \n * Licensed under the Apache License, Version 2.0 (the \"License\"); you \n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at \n * \n * http://www.apache.org/licenses/LICENSE-2.0\n * \n * Unless required by applicable law or agreed to in writing, software \n * distributed under the License is distributed on an \"AS IS\" BASIS, \n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or \n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying \n * LICENSE file. \n */\npackage com.yahoo.ycsb.db;\n\nimport java.io.FileInputStream;\nimport java.io.IOException;\nimport java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.SQLException;\nimport java.sql.Statement;\nimport java.util.Enumeration;\nimport java.util.Properties;\n\n/**\n * Utility class to create the table to be used by the benchmark.\n * \n * @author sudipto\n */\npublic final class JdbcDBCreateTable {\n\n  private static void usageMessage() {\n    System.out.println(\"Create Table Client. 
Options:\");\n    System.out.println(\"  -p   key=value properties defined.\");\n    System.out.println(\"  -P   location of the properties file to load.\");\n    System.out.println(\"  -n   name of the table.\");\n    System.out.println(\"  -f   number of fields (default 10).\");\n  }\n\n  private static void createTable(Properties props, String tablename) throws SQLException {\n    String driver = props.getProperty(JdbcDBClient.DRIVER_CLASS);\n    String username = props.getProperty(JdbcDBClient.CONNECTION_USER);\n    String password = props.getProperty(JdbcDBClient.CONNECTION_PASSWD, \"\");\n    String url = props.getProperty(JdbcDBClient.CONNECTION_URL);\n    int fieldcount = Integer.parseInt(props.getProperty(JdbcDBClient.FIELD_COUNT_PROPERTY,\n        JdbcDBClient.FIELD_COUNT_PROPERTY_DEFAULT));\n\n    if (driver == null || username == null || url == null) {\n      throw new SQLException(\"Missing connection information.\");\n    }\n\n    Connection conn = null;\n\n    try {\n      Class.forName(driver);\n\n      conn = DriverManager.getConnection(url, username, password);\n      Statement stmt = conn.createStatement();\n\n      StringBuilder sql = new StringBuilder(\"DROP TABLE IF EXISTS \");\n      sql.append(tablename);\n      sql.append(\";\");\n\n      stmt.execute(sql.toString());\n\n      sql = new StringBuilder(\"CREATE TABLE \");\n      sql.append(tablename);\n      sql.append(\" (YCSB_KEY VARCHAR PRIMARY KEY\");\n\n      for (int idx = 0; idx < fieldcount; idx++) {\n        sql.append(\", FIELD\");\n        sql.append(idx);\n        sql.append(\" TEXT\");\n      }\n      sql.append(\");\");\n\n      stmt.execute(sql.toString());\n\n      System.out.println(\"Table \" + tablename + \" created..\");\n    } catch (ClassNotFoundException e) {\n      throw new SQLException(\"JDBC Driver class not found.\");\n    } finally {\n      if (conn != null) {\n        System.out.println(\"Closing database connection.\");\n        conn.close();\n      }\n    }\n  
}\n\n  /**\n   * @param args\n   */\n  public static void main(String[] args) {\n\n    if (args.length == 0) {\n      usageMessage();\n      System.exit(0);\n    }\n\n    String tablename = null;\n    int fieldcount = -1;\n    Properties props = new Properties();\n    Properties fileprops = new Properties();\n\n    // parse arguments\n    int argindex = 0;\n    while (args[argindex].startsWith(\"-\")) {\n      if (args[argindex].compareTo(\"-P\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.exit(0);\n        }\n        String propfile = args[argindex];\n        argindex++;\n\n        Properties myfileprops = new Properties();\n        try {\n          myfileprops.load(new FileInputStream(propfile));\n        } catch (IOException e) {\n          System.out.println(e.getMessage());\n          System.exit(0);\n        }\n\n        // Issue #5 - remove call to stringPropertyNames to make compilable\n        // under Java 1.5\n        for (Enumeration<?> e = myfileprops.propertyNames(); e.hasMoreElements();) {\n          String prop = (String) e.nextElement();\n\n          fileprops.setProperty(prop, myfileprops.getProperty(prop));\n        }\n\n      } else if (args[argindex].compareTo(\"-p\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.exit(0);\n        }\n        int eq = args[argindex].indexOf('=');\n        if (eq < 0) {\n          usageMessage();\n          System.exit(0);\n        }\n\n        String name = args[argindex].substring(0, eq);\n        String value = args[argindex].substring(eq + 1);\n        props.put(name, value);\n        argindex++;\n      } else if (args[argindex].compareTo(\"-n\") == 0) {\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.exit(0);\n        }\n        tablename = args[argindex++];\n      } else if (args[argindex].compareTo(\"-f\") == 0) 
{\n        argindex++;\n        if (argindex >= args.length) {\n          usageMessage();\n          System.exit(0);\n        }\n        try {\n          fieldcount = Integer.parseInt(args[argindex++]);\n        } catch (NumberFormatException e) {\n          System.err.println(\"Invalid number for field count\");\n          usageMessage();\n          System.exit(1);\n        }\n      } else {\n        System.out.println(\"Unknown option \" + args[argindex]);\n        usageMessage();\n        System.exit(0);\n      }\n\n      if (argindex >= args.length) {\n        break;\n      }\n    }\n\n    if (argindex != args.length) {\n      usageMessage();\n      System.exit(0);\n    }\n\n    // overwrite file properties with properties from the command line\n\n    // Issue #5 - remove call to stringPropertyNames to make compilable under\n    // Java 1.5\n    for (Enumeration<?> e = props.propertyNames(); e.hasMoreElements();) {\n      String prop = (String) e.nextElement();\n\n      fileprops.setProperty(prop, props.getProperty(prop));\n    }\n\n    props = fileprops;\n\n    if (tablename == null) {\n      System.err.println(\"table name missing.\");\n      usageMessage();\n      System.exit(1);\n    }\n\n    if (fieldcount > 0) {\n      props.setProperty(JdbcDBClient.FIELD_COUNT_PROPERTY, String.valueOf(fieldcount));\n    }\n\n    try {\n      createTable(props, tablename);\n    } catch (SQLException e) {\n      System.err.println(\"Error in creating table. \" + e);\n      System.exit(1);\n    }\n  }\n\n  /**\n   * Hidden constructor.\n   */\n  private JdbcDBCreateTable() {\n    super();\n  }\n}\n"
  },
  {
    "path": "jdbc/src/main/java/com/yahoo/ycsb/db/StatementType.java",
    "content": "/**\n * Copyright (c) 2010 Yahoo! Inc., 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\n/**\n * The statement type for the prepared statements.\n */\npublic class StatementType {\n\n  enum Type {\n    INSERT(1), DELETE(2), READ(3), UPDATE(4), SCAN(5);\n\n    private final int internalType;\n\n    private Type(int type) {\n      internalType = type;\n    }\n\n    int getHashCode() {\n      final int prime = 31;\n      int result = 1;\n      result = prime * result + internalType;\n      return result;\n    }\n  }\n\n  private Type type;\n  private int shardIndex;\n  private int numFields;\n  private String tableName;\n  private String fieldString;\n\n  public StatementType(Type type, String tableName, int numFields, String fieldString, int shardIndex) {\n    this.type = type;\n    this.tableName = tableName;\n    this.numFields = numFields;\n    this.fieldString = fieldString;\n    this.shardIndex = shardIndex;\n  }\n\n  public String getTableName() {\n    return tableName;\n  }\n\n  public String getFieldString() {\n    return fieldString;\n  }\n\n  public int getNumFields() {\n    return numFields;\n  }\n\n  @Override\n  public int hashCode() {\n    final int prime = 31;\n    int result = 1;\n    result = prime * result + numFields + 100 * shardIndex;\n    result = prime * result + ((tableName == null) ? 
0 : tableName.hashCode());\n    result = prime * result + ((type == null) ? 0 : type.getHashCode());\n    return result;\n  }\n\n  @Override\n  public boolean equals(Object obj) {\n    if (this == obj) {\n      return true;\n    }\n    if (obj == null) {\n      return false;\n    }\n    if (getClass() != obj.getClass()) {\n      return false;\n    }\n    StatementType other = (StatementType) obj;\n    if (numFields != other.numFields) {\n      return false;\n    }\n    if (shardIndex != other.shardIndex) {\n      return false;\n    }\n    if (tableName == null) {\n      if (other.tableName != null) {\n        return false;\n      }\n    } else if (!tableName.equals(other.tableName)) {\n      return false;\n    }\n    if (type != other.type) {\n      return false;\n    }\n    if (!fieldString.equals(other.fieldString)) {\n      return false;\n    }\n    return true;\n  }\n}\n"
  },
  {
    "path": "jdbc/src/main/java/com/yahoo/ycsb/db/flavors/DBFlavor.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db.flavors;\n\nimport com.yahoo.ycsb.db.StatementType;\n\n/**\n * DBFlavor captures minor differences in syntax and behavior among JDBC implementations and SQL\n * dialects. This class also acts as a factory to instantiate concrete flavors based on the JDBC URL.\n */\npublic abstract class DBFlavor {\n\n  enum DBName {\n    DEFAULT,\n    PHOENIX\n  }\n\n  private final DBName dbName;\n\n  public DBFlavor(DBName dbName) {\n    this.dbName = dbName;\n  }\n\n  public static DBFlavor fromJdbcUrl(String url) {\n    if (url.startsWith(\"jdbc:phoenix\")) {\n      return new PhoenixDBFlavor();\n    }\n    return new DefaultDBFlavor();\n  }\n\n  /**\n   * Create and return a SQL statement for inserting data.\n   */\n  public abstract String createInsertStatement(StatementType insertType, String key);\n\n  /**\n   * Create and return a SQL statement for reading data.\n   */\n  public abstract String createReadStatement(StatementType readType, String key);\n\n  /**\n   * Create and return a SQL statement for deleting data.\n   */\n  public abstract String createDeleteStatement(StatementType deleteType, String key);\n\n  /**\n   * Create and return a SQL statement for updating data.\n   */\n  public abstract String createUpdateStatement(StatementType updateType, String 
key);\n\n  /**\n   * Create and return a SQL statement for scanning data.\n   */\n  public abstract String createScanStatement(StatementType scanType, String key);\n}\n"
  },
  {
    "path": "jdbc/src/main/java/com/yahoo/ycsb/db/flavors/DefaultDBFlavor.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db.flavors;\n\nimport com.yahoo.ycsb.db.JdbcDBClient;\nimport com.yahoo.ycsb.db.StatementType;\n\n/**\n * A default flavor for relational databases.\n */\npublic class DefaultDBFlavor extends DBFlavor {\n  public DefaultDBFlavor() {\n    super(DBName.DEFAULT);\n  }\n  public DefaultDBFlavor(DBName dbName) {\n    super(dbName);\n  }\n\n  @Override\n  public String createInsertStatement(StatementType insertType, String key) {\n    StringBuilder insert = new StringBuilder(\"INSERT INTO \");\n    insert.append(insertType.getTableName());\n    insert.append(\" (\" + JdbcDBClient.PRIMARY_KEY + \",\" + insertType.getFieldString() + \")\");\n    insert.append(\" VALUES(?\");\n    for (int i = 0; i < insertType.getNumFields(); i++) {\n      insert.append(\",?\");\n    }\n    insert.append(\")\");\n    return insert.toString();\n  }\n\n  @Override\n  public String createReadStatement(StatementType readType, String key) {\n    StringBuilder read = new StringBuilder(\"SELECT * FROM \");\n    read.append(readType.getTableName());\n    read.append(\" WHERE \");\n    read.append(JdbcDBClient.PRIMARY_KEY);\n    read.append(\" = \");\n    read.append(\"?\");\n    return read.toString();\n  }\n\n  @Override\n  public String createDeleteStatement(StatementType deleteType, String 
key) {\n    StringBuilder delete = new StringBuilder(\"DELETE FROM \");\n    delete.append(deleteType.getTableName());\n    delete.append(\" WHERE \");\n    delete.append(JdbcDBClient.PRIMARY_KEY);\n    delete.append(\" = ?\");\n    return delete.toString();\n  }\n\n  @Override\n  public String createUpdateStatement(StatementType updateType, String key) {\n    String[] fieldKeys = updateType.getFieldString().split(\",\");\n    StringBuilder update = new StringBuilder(\"UPDATE \");\n    update.append(updateType.getTableName());\n    update.append(\" SET \");\n    for (int i = 0; i < fieldKeys.length; i++) {\n      update.append(fieldKeys[i]);\n      update.append(\"=?\");\n      if (i < fieldKeys.length - 1) {\n        update.append(\", \");\n      }\n    }\n    update.append(\" WHERE \");\n    update.append(JdbcDBClient.PRIMARY_KEY);\n    update.append(\" = ?\");\n    return update.toString();\n  }\n\n  @Override\n  public String createScanStatement(StatementType scanType, String key) {\n    StringBuilder select = new StringBuilder(\"SELECT * FROM \");\n    select.append(scanType.getTableName());\n    select.append(\" WHERE \");\n    select.append(JdbcDBClient.PRIMARY_KEY);\n    select.append(\" >= ?\");\n    select.append(\" ORDER BY \");\n    select.append(JdbcDBClient.PRIMARY_KEY);\n    select.append(\" LIMIT ?\");\n    return select.toString();\n  }\n}\n"
  },
  {
    "path": "jdbc/src/main/java/com/yahoo/ycsb/db/flavors/PhoenixDBFlavor.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db.flavors;\n\nimport com.yahoo.ycsb.db.JdbcDBClient;\nimport com.yahoo.ycsb.db.StatementType;\n\n/**\n * Database flavor for Apache Phoenix. Captures syntax differences used by Phoenix.\n */\npublic class PhoenixDBFlavor extends DefaultDBFlavor {\n  public PhoenixDBFlavor() {\n    super(DBName.PHOENIX);\n  }\n\n  @Override\n  public String createInsertStatement(StatementType insertType, String key) {\n    // Phoenix uses UPSERT syntax\n    StringBuilder insert = new StringBuilder(\"UPSERT INTO \");\n    insert.append(insertType.getTableName());\n    insert.append(\" (\" + JdbcDBClient.PRIMARY_KEY + \",\" + insertType.getFieldString() + \")\");\n    insert.append(\" VALUES(?\");\n    for (int i = 0; i < insertType.getNumFields(); i++) {\n      insert.append(\",?\");\n    }\n    insert.append(\")\");\n    return insert.toString();\n  }\n\n  @Override\n  public String createUpdateStatement(StatementType updateType, String key) {\n    // Phoenix doesn't have UPDATE semantics, just re-use UPSERT VALUES on the specific columns\n    String[] fieldKeys = updateType.getFieldString().split(\",\");\n    StringBuilder update = new StringBuilder(\"UPSERT INTO \");\n    update.append(updateType.getTableName());\n    update.append(\" (\");\n    // Each column to update\n    
for (int i = 0; i < fieldKeys.length; i++) {\n      update.append(fieldKeys[i]).append(\",\");\n    }\n    // And then set the primary key column\n    update.append(JdbcDBClient.PRIMARY_KEY).append(\") VALUES(\");\n    // Add an unbound param for each column to update\n    for (int i = 0; i < fieldKeys.length; i++) {\n      update.append(\"?, \");\n    }\n    // Then the primary key column's value\n    update.append(\"?)\");\n    return update.toString();\n  }\n}\n"
  },
  {
    "path": "jdbc/src/main/java/com/yahoo/ycsb/db/flavors/package-info.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n/**\n * This package contains a collection of database-specific overrides. This accounts for the variance\n * that can be present where JDBC does not explicitly define what a database must do or when a\n * database has a non-standard SQL implementation.\n */\npackage com.yahoo.ycsb.db.flavors;\n"
  },
  {
    "path": "jdbc/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2014 - 2016, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for stores that can be accessed via JDBC.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "jdbc/src/main/resources/sql/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors.\nAll rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\nContains all the SQL statements used by the JDBC client.\n"
  },
  {
    "path": "jdbc/src/main/resources/sql/create_table.mysql",
    "content": "-- Copyright (c) 2015 YCSB contributors. All rights reserved.\n--\n-- Licensed under the Apache License, Version 2.0 (the \"License\"); you\n-- may not use this file except in compliance with the License. You\n-- may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing, software\n-- distributed under the License is distributed on an \"AS IS\" BASIS,\n-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n-- implied. See the License for the specific language governing\n-- permissions and limitations under the License. See accompanying\n-- LICENSE file.\n\n-- Creates a Table.\n\n-- Drop the table if it exists;\nDROP TABLE IF EXISTS usertable;\n\n-- Create the user table with 5 fields.\nCREATE TABLE usertable(YCSB_KEY VARCHAR (255) PRIMARY KEY,\n  FIELD0 TEXT, FIELD1 TEXT,\n  FIELD2 TEXT, FIELD3 TEXT,\n  FIELD4 TEXT, FIELD5 TEXT,\n  FIELD6 TEXT, FIELD7 TEXT,\n  FIELD8 TEXT, FIELD9 TEXT);\n"
  },
  {
    "path": "jdbc/src/main/resources/sql/create_table.sql",
    "content": "-- Copyright (c) 2015 YCSB contributors. All rights reserved.\n--\n-- Licensed under the Apache License, Version 2.0 (the \"License\"); you\n-- may not use this file except in compliance with the License. You\n-- may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing, software\n-- distributed under the License is distributed on an \"AS IS\" BASIS,\n-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n-- implied. See the License for the specific language governing\n-- permissions and limitations under the License. See accompanying\n-- LICENSE file.\n\n-- Creates a Table.\n\n-- Drop the table if it exists;\nDROP TABLE IF EXISTS usertable;\n\n-- Create the user table with 5 fields.\nCREATE TABLE usertable(YCSB_KEY VARCHAR PRIMARY KEY,\n  FIELD0 VARCHAR, FIELD1 VARCHAR,\n  FIELD2 VARCHAR, FIELD3 VARCHAR,\n  FIELD4 VARCHAR, FIELD5 VARCHAR,\n  FIELD6 VARCHAR, FIELD7 VARCHAR,\n  FIELD8 VARCHAR, FIELD9 VARCHAR);\n"
  },
  {
    "path": "jdbc/src/test/java/com/yahoo/ycsb/db/JdbcDBClientTest.java",
    "content": "/**\n * Copyright (c) 2015 - 2016 Yahoo! Inc., 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport static org.junit.Assert.*;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.StringByteIterator;\nimport org.junit.*;\n\nimport java.sql.*;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.HashSet;\nimport java.util.Set;\nimport java.util.Properties;\nimport java.util.Vector;\n\npublic class JdbcDBClientTest {\n    private static final String TEST_DB_DRIVER = \"org.hsqldb.jdbc.JDBCDriver\";\n    private static final String TEST_DB_URL = \"jdbc:hsqldb:mem:ycsb\";\n    private static final String TEST_DB_USER = \"sa\";\n    private static final String TABLE_NAME = \"USERTABLE\";\n    private static final int FIELD_LENGTH = 32;\n    private static final String FIELD_PREFIX = \"FIELD\";\n    private static final String KEY_PREFIX = \"user\";\n    private static final String KEY_FIELD = \"YCSB_KEY\";\n    private static final int NUM_FIELDS = 3;\n\n    private static Connection jdbcConnection = null;\n    private static JdbcDBClient jdbcDBClient = null;\n\n    @BeforeClass\n    public static void setup() {\n      setupWithBatch(1, true);\n    }\n\n    public static void setupWithBatch(int batchSize, boolean autoCommit) {\n      try {\n        jdbcConnection = 
DriverManager.getConnection(TEST_DB_URL);\n        jdbcDBClient = new JdbcDBClient();\n\n        Properties p = new Properties();\n        p.setProperty(JdbcDBClient.CONNECTION_URL, TEST_DB_URL);\n        p.setProperty(JdbcDBClient.DRIVER_CLASS, TEST_DB_DRIVER);\n        p.setProperty(JdbcDBClient.CONNECTION_USER, TEST_DB_USER);\n        p.setProperty(JdbcDBClient.DB_BATCH_SIZE, Integer.toString(batchSize));\n        p.setProperty(JdbcDBClient.JDBC_BATCH_UPDATES, \"true\");\n        p.setProperty(JdbcDBClient.JDBC_AUTO_COMMIT, Boolean.toString(autoCommit));\n\n        jdbcDBClient.setProperties(p);\n        jdbcDBClient.init();\n      } catch (SQLException e) {\n        e.printStackTrace();\n        fail(\"Could not create local Database\");\n      } catch (DBException e) {\n        e.printStackTrace();\n        fail(\"Could not create JdbcDBClient instance\");\n      }\n    }\n\n    @AfterClass\n    public static void teardown() {\n        try {\n            if (jdbcConnection != null) {\n                jdbcConnection.close();\n            }\n        } catch (SQLException e) {\n            e.printStackTrace();\n        }\n\n        try {\n            if (jdbcDBClient != null) {\n                jdbcDBClient.cleanup();\n            }\n        } catch (DBException e) {\n            e.printStackTrace();\n        }\n    }\n\n    @Before\n    public void prepareTest() {\n        try {\n            DatabaseMetaData metaData = jdbcConnection.getMetaData();\n            ResultSet tableResults = metaData.getTables(null, null, TABLE_NAME, null);\n            if (tableResults.next()) {\n                // If the table already exists, just truncate it\n                jdbcConnection.prepareStatement(\n                    String.format(\"TRUNCATE TABLE %s\", TABLE_NAME)\n                ).execute();\n            } else {\n                // If the table does not exist then create it\n                StringBuilder createString = new StringBuilder(\n                    
String.format(\"CREATE TABLE %s (%s VARCHAR(100) PRIMARY KEY\", TABLE_NAME, KEY_FIELD)\n                );\n                for (int i = 0; i < NUM_FIELDS; i++) {\n                    createString.append(\n                        String.format(\", %s%d VARCHAR(100)\", FIELD_PREFIX, i)\n                    );\n                }\n                createString.append(\")\");\n                jdbcConnection.prepareStatement(createString.toString()).execute();\n            }\n        } catch (SQLException e) {\n            e.printStackTrace();\n            fail(\"Failed to prepare test\");\n        }\n    }\n\n    /*\n        This is a copy of buildDeterministicValue() from core:com.yahoo.ycsb.workloads.CoreWorkload.java.\n        That method is neither public nor static so we need a copy.\n     */\n    private String buildDeterministicValue(String key, String fieldkey) {\n        int size = FIELD_LENGTH;\n        StringBuilder sb = new StringBuilder(size);\n        sb.append(key);\n        sb.append(':');\n        sb.append(fieldkey);\n        while (sb.length() < size) {\n            sb.append(':');\n            sb.append(sb.toString().hashCode());\n        }\n        sb.setLength(size);\n\n        return sb.toString();\n    }\n\n    /*\n        Inserts a row of deterministic values for the given insertKey using the jdbcDBClient.\n     */\n    private HashMap<String, ByteIterator> insertRow(String insertKey) {\n        HashMap<String, ByteIterator> insertMap = new HashMap<String, ByteIterator>();\n        for (int i = 0; i < 3; i++) {\n            insertMap.put(FIELD_PREFIX + i, new StringByteIterator(buildDeterministicValue(insertKey, FIELD_PREFIX + i)));\n        }\n        jdbcDBClient.insert(TABLE_NAME, insertKey, insertMap);\n\n        return insertMap;\n    }\n\n    @Test\n    public void insertTest() {\n        try {\n            String insertKey = \"user0\";\n            HashMap<String, ByteIterator> insertMap = insertRow(insertKey);\n\n            ResultSet 
resultSet = jdbcConnection.prepareStatement(\n                String.format(\"SELECT * FROM %s\", TABLE_NAME)\n            ).executeQuery();\n\n            // Check we have a result Row\n            assertTrue(resultSet.next());\n            // Check that all the columns have expected values\n            assertEquals(resultSet.getString(KEY_FIELD), insertKey);\n            for (int i = 0; i < 3; i++) {\n                assertEquals(resultSet.getString(FIELD_PREFIX + i), insertMap.get(FIELD_PREFIX + i).toString());\n            }\n            // Check that we do not have any more rows\n            assertFalse(resultSet.next());\n\n            resultSet.close();\n        } catch (SQLException e) {\n            e.printStackTrace();\n            fail(\"Failed insertTest\");\n        }\n    }\n\n    @Test\n    public void updateTest() {\n        try {\n            String preupdateString = \"preupdate\";\n            StringBuilder fauxInsertString = new StringBuilder(\n                String.format(\"INSERT INTO %s VALUES(?\", TABLE_NAME)\n            );\n            for (int i = 0; i < NUM_FIELDS; i++) {\n                fauxInsertString.append(\",?\");\n            }\n            fauxInsertString.append(\")\");\n\n            PreparedStatement fauxInsertStatement = jdbcConnection.prepareStatement(fauxInsertString.toString());\n            for (int i = 2; i < NUM_FIELDS + 2; i++) {\n                fauxInsertStatement.setString(i, preupdateString);\n            }\n\n            fauxInsertStatement.setString(1, \"user0\");\n            fauxInsertStatement.execute();\n            fauxInsertStatement.setString(1, \"user1\");\n            fauxInsertStatement.execute();\n            fauxInsertStatement.setString(1, \"user2\");\n            fauxInsertStatement.execute();\n\n            HashMap<String, ByteIterator> updateMap = new HashMap<String, ByteIterator>();\n            for (int i = 0; i < 3; i++) {\n                updateMap.put(FIELD_PREFIX + i, new 
StringByteIterator(buildDeterministicValue(\"user1\", FIELD_PREFIX + i)));\n            }\n\n            jdbcDBClient.update(TABLE_NAME, \"user1\", updateMap);\n\n            ResultSet resultSet = jdbcConnection.prepareStatement(\n                String.format(\"SELECT * FROM %s ORDER BY %s\", TABLE_NAME, KEY_FIELD)\n            ).executeQuery();\n\n            // Ensure that user0 record was not changed\n            resultSet.next();\n            assertEquals(\"Assert first row key is user0\", resultSet.getString(KEY_FIELD), \"user0\");\n            for (int i = 0; i < 3; i++) {\n                assertEquals(\"Assert first row fields contain preupdateString\", resultSet.getString(FIELD_PREFIX + i), preupdateString);\n            }\n\n            // Check that all the columns have expected values for user1 record\n            resultSet.next();\n            assertEquals(resultSet.getString(KEY_FIELD), \"user1\");\n            for (int i = 0; i < 3; i++) {\n                assertEquals(resultSet.getString(FIELD_PREFIX + i), updateMap.get(FIELD_PREFIX + i).toString());\n            }\n\n            // Ensure that user2 record was not changed\n            resultSet.next();\n            assertEquals(\"Assert third row key is user2\", resultSet.getString(KEY_FIELD), \"user2\");\n            for (int i = 0; i < 3; i++) {\n                assertEquals(\"Assert third row fields contain preupdateString\", resultSet.getString(FIELD_PREFIX + i), preupdateString);\n            }\n            resultSet.close();\n        } catch (SQLException e) {\n            e.printStackTrace();\n            fail(\"Failed updateTest\");\n        }\n    }\n\n    @Test\n    public void readTest() {\n        String insertKey = \"user0\";\n        HashMap<String, ByteIterator> insertMap = insertRow(insertKey);\n        Set<String> readFields = new HashSet<String>();\n        HashMap<String, ByteIterator> readResultMap = new HashMap<String, ByteIterator>();\n\n        // Test reading a single 
field\n        readFields.add(\"FIELD0\");\n        jdbcDBClient.read(TABLE_NAME, insertKey, readFields, readResultMap);\n        assertEquals(\"Assert that result has correct number of fields\", readFields.size(), readResultMap.size());\n        for (String field: readFields) {\n            assertEquals(\"Assert \" + field + \" was read correctly\", insertMap.get(field).toString(), readResultMap.get(field).toString());\n        }\n\n        readResultMap = new HashMap<String, ByteIterator>();\n\n        // Test reading all fields\n        readFields.add(\"FIELD1\");\n        readFields.add(\"FIELD2\");\n        jdbcDBClient.read(TABLE_NAME, insertKey, readFields, readResultMap);\n        assertEquals(\"Assert that result has correct number of fields\", readFields.size(), readResultMap.size());\n        for (String field: readFields) {\n            assertEquals(\"Assert \" + field + \" was read correctly\", insertMap.get(field).toString(), readResultMap.get(field).toString());\n        }\n    }\n\n    @Test\n    public void deleteTest() {\n        try {\n            insertRow(\"user0\");\n            String deleteKey = \"user1\";\n            insertRow(deleteKey);\n            insertRow(\"user2\");\n\n            jdbcDBClient.delete(TABLE_NAME, deleteKey);\n\n            ResultSet resultSet = jdbcConnection.prepareStatement(\n                String.format(\"SELECT * FROM %s\", TABLE_NAME)\n            ).executeQuery();\n\n            int totalRows = 0;\n            while (resultSet.next()) {\n                assertNotEquals(\"Assert this is not the deleted row key\", deleteKey, resultSet.getString(KEY_FIELD));\n                totalRows++;\n            }\n            // Check we do not have a result Row\n            assertEquals(\"Assert we ended with the correct number of rows\", totalRows, 2);\n\n            resultSet.close();\n        } catch (SQLException e) {\n            e.printStackTrace();\n            fail(\"Failed deleteTest\");\n        }\n    }\n\n    
@Test\n    public void scanTest() throws SQLException {\n        Map<String, HashMap<String, ByteIterator>> keyMap = new HashMap<String, HashMap<String, ByteIterator>>();\n        for (int i = 0; i < 5; i++) {\n            String insertKey = KEY_PREFIX + i;\n            keyMap.put(insertKey, insertRow(insertKey));\n        }\n        Set<String> fieldSet = new HashSet<String>();\n        fieldSet.add(\"FIELD0\");\n        fieldSet.add(\"FIELD1\");\n        int startIndex = 1;\n        int resultRows = 3;\n\n        Vector<HashMap<String, ByteIterator>> resultVector = new Vector<HashMap<String, ByteIterator>>();\n        jdbcDBClient.scan(TABLE_NAME, KEY_PREFIX + startIndex, resultRows, fieldSet, resultVector);\n\n        // Check the resultVector is the correct size\n        assertEquals(\"Assert the correct number of results rows were returned\", resultRows, resultVector.size());\n        // Check each vector row to make sure we have the correct fields\n        int testIndex = startIndex;\n        for (Map<String, ByteIterator> result: resultVector) {\n            assertEquals(\"Assert that this row has the correct number of fields\", fieldSet.size(), result.size());\n            for (String field: fieldSet) {\n                assertEquals(\"Assert this field is correct in this row\", keyMap.get(KEY_PREFIX + testIndex).get(field).toString(), result.get(field).toString());\n            }\n            testIndex++;\n        }\n    }\n\n    @Test\n    public void insertBatchTest() throws DBException {\n      insertBatchTest(20);\n    }\n\n    @Test\n    public void insertPartialBatchTest() throws DBException {\n      insertBatchTest(19);\n    }\n\n    public void insertBatchTest(int numRows) throws DBException {\n      teardown();\n      setupWithBatch(10, false);\n      try {\n        String insertKey = \"user0\";\n        HashMap<String, ByteIterator> insertMap = insertRow(insertKey);\n        assertEquals(3, insertMap.size());\n\n        ResultSet resultSet = 
jdbcConnection.prepareStatement(\n          String.format(\"SELECT * FROM %s\", TABLE_NAME)\n            ).executeQuery();\n\n        // Check we do not have a result Row (because batch is not full yet)\n        assertFalse(resultSet.next());\n        // insert more rows, completing 1 batch (still results are partial).\n        for (int i = 1; i < numRows; i++) {\n          insertMap = insertRow(\"user\" + i);\n        }\n\n        //\n        assertNumRows(10 * (numRows / 10));\n\n        // call cleanup, which should insert the partial batch\n        jdbcDBClient.cleanup();\n        // Prevent a teardown() from printing an error\n        jdbcDBClient = null;\n\n        // Check that we have all rows\n        assertNumRows(numRows);\n\n      } catch (SQLException e) {\n        e.printStackTrace();\n        fail(\"Failed insertBatchTest\");\n      } finally {\n        teardown(); // for next tests\n        setup();\n      }\n    }\n\n    private void assertNumRows(long numRows) throws SQLException {\n      ResultSet resultSet = jdbcConnection.prepareStatement(\n        String.format(\"SELECT * FROM %s\", TABLE_NAME)\n          ).executeQuery();\n\n      for (int i = 0; i < numRows; i++) {\n        assertTrue(\"expecting \" + numRows + \" results, received only \" + i, resultSet.next());\n      }\n      assertFalse(\"expecting \" + numRows + \" results, received more\", resultSet.next());\n\n      resultSet.close();\n    }\n}\n"
  },
  {
    "path": "kudu/README.md",
    "content": "<!--\nCopyright (c) 2015-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# Kudu bindings for YCSB\n\n[Apache Kudu](https://kudu.apache.org) is a storage engine that enables fast\nanalytics on fast data.\n\n## Benchmarking Kudu\n\nUse the following command line to load the initial data into an existing Kudu\ncluster with default configurations.\n\n```\nbin/ycsb load kudu -P workloads/workloada\n```\n\nAdditional configurations:\n* `kudu_master_addresses`: The master's address. The default configuration\n  expects a master on localhost.\n* `kudu_pre_split_num_tablets`: The number of tablets (or partitions) to create\n  for the table. The default uses 4 tablets. A good rule of thumb is to use 5\n  per tablet server.\n* `kudu_table_num_replicas`: The number of replicas that each tablet will have.\n  The default is 3. Should only be configured to use 1 instead, for single node tests.\n* `kudu_sync_ops`: If the client should wait after every write operation. The\n  default is true.\n* `kudu_block_size`: The data block size used to configure columns. The default\n  is 4096 bytes.\n\nThen, you can run the workload:\n\n```\nbin/ycsb run kudu -P workloads/workloada\n```\n\n## Using a previous client version\n\nIf you wish to use a different Kudu client version than the one shipped with\nYCSB, you can specify on the command line with `-Dkudu.version=x`. 
For example:\n\n```\nmvn -pl com.yahoo.ycsb:kudu-binding -am package -DskipTests -Dkudu.version=1.0.1\n```\n\nNote that only versions since 1.0 are supported, since Kudu did not guarantee\nwire or API compatibility prior to 1.0.\n"
  },
  {
    "path": "kudu/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2015-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>kudu-binding</artifactId>\n  <name>Kudu DB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.apache.kudu</groupId>\n      <artifactId>kudu-client</artifactId>\n      <version>${kudu.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n      <version>1.7.21</version>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-log4j12</artifactId>\n      <version>1.7.21</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kudu/src/main/conf/log4j.properties",
    "content": "#\n# Copyright (c) 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n# Enables getting logs from the client.\n\nlog4j.rootLogger = INFO, out\nlog4j.appender.out = org.apache.log4j.ConsoleAppender\nlog4j.appender.out.layout = org.apache.log4j.PatternLayout\nlog4j.appender.out.layout.ConversionPattern = %d (%t) [%p - %l] %m%n\n\nlog4j.logger.kudu = INFO\n"
  },
  {
    "path": "kudu/src/main/java/com/yahoo/ycsb/db/KuduYCSBClient.java",
    "content": "/**\n * Copyright (c) 2015-2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.stumbleupon.async.TimeoutException;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport com.yahoo.ycsb.workloads.CoreWorkload;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.kudu.ColumnSchema;\nimport org.apache.kudu.Schema;\nimport org.apache.kudu.client.*;\n\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.List;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY;\nimport static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY_DEFAULT;\nimport static org.apache.kudu.Type.STRING;\nimport static org.apache.kudu.client.KuduPredicate.ComparisonOp.EQUAL;\nimport static org.apache.kudu.client.KuduPredicate.ComparisonOp.GREATER_EQUAL;\n\n/**\n * Kudu client for YCSB framework. 
Example to load: <blockquote>\n *\n * <pre>\n * <code>\n * $ ./bin/ycsb load kudu -P workloads/workloada -threads 5\n * </code>\n * </pre>\n *\n * </blockquote> Example to run:  <blockquote>\n *\n * <pre>\n * <code>\n * ./bin/ycsb run kudu -P workloads/workloada -p kudu_sync_ops=true -threads 5\n * </code>\n * </pre>\n *\n * </blockquote>\n */\npublic class KuduYCSBClient extends com.yahoo.ycsb.DB {\n  private static final Logger LOG = LoggerFactory.getLogger(KuduYCSBClient.class);\n  private static final String KEY = \"key\";\n  private static final Status TIMEOUT = new Status(\"TIMEOUT\", \"The operation timed out.\");\n  private static final int MAX_TABLETS = 9000;\n  private static final long DEFAULT_SLEEP = 60000;\n  private static final String SYNC_OPS_OPT = \"kudu_sync_ops\";\n  private static final String PRE_SPLIT_NUM_TABLETS_OPT = \"kudu_pre_split_num_tablets\";\n  private static final String TABLE_NUM_REPLICAS = \"kudu_table_num_replicas\";\n  private static final String BLOCK_SIZE_OPT = \"kudu_block_size\";\n  private static final String MASTER_ADDRESSES_OPT = \"kudu_master_addresses\";\n  private static final int BLOCK_SIZE_DEFAULT = 4096;\n  private static final List<String> COLUMN_NAMES = new ArrayList<>();\n  private static KuduClient client;\n  private static Schema schema;\n  private KuduSession session;\n  private KuduTable kuduTable;\n\n  @Override\n  public void init() throws DBException {\n    String tableName = getProperties().getProperty(TABLENAME_PROPERTY, TABLENAME_PROPERTY_DEFAULT);\n    initClient(tableName, getProperties());\n    this.session = client.newSession();\n    if (getProperties().getProperty(SYNC_OPS_OPT) != null\n        && getProperties().getProperty(SYNC_OPS_OPT).equals(\"false\")) {\n      this.session.setFlushMode(KuduSession.FlushMode.AUTO_FLUSH_BACKGROUND);\n      this.session.setMutationBufferSpace(100);\n    } else {\n      this.session.setFlushMode(KuduSession.FlushMode.AUTO_FLUSH_SYNC);\n    }\n\n    try {\n      
this.kuduTable = client.openTable(tableName);\n    } catch (Exception e) {\n      throw new DBException(\"Could not open a table because of:\", e);\n    }\n  }\n\n  private static synchronized void initClient(String tableName,\n                                              Properties prop) throws DBException {\n    if (client != null) {\n      return;\n    }\n\n    String masterAddresses = prop.getProperty(MASTER_ADDRESSES_OPT);\n    if (masterAddresses == null) {\n      masterAddresses = \"localhost:7051\";\n    }\n\n    int numTablets = getIntFromProp(prop, PRE_SPLIT_NUM_TABLETS_OPT, 4);\n    if (numTablets > MAX_TABLETS) {\n      throw new DBException(String.format(\n          \"Specified number of tablets (%s) must be equal or below %s\", numTablets, MAX_TABLETS));\n    }\n\n    int numReplicas = getIntFromProp(prop, TABLE_NUM_REPLICAS, 3);\n    int blockSize = getIntFromProp(prop, BLOCK_SIZE_OPT, BLOCK_SIZE_DEFAULT);\n\n    client = new KuduClient.KuduClientBuilder(masterAddresses)\n                           .defaultSocketReadTimeoutMs(DEFAULT_SLEEP)\n                           .defaultOperationTimeoutMs(DEFAULT_SLEEP)\n                           .defaultAdminOperationTimeoutMs(DEFAULT_SLEEP)\n                           .build();\n    LOG.debug(\"Connecting to the masters at {}\", masterAddresses);\n\n    int fieldCount = getIntFromProp(prop, CoreWorkload.FIELD_COUNT_PROPERTY,\n                                    Integer.parseInt(CoreWorkload.FIELD_COUNT_PROPERTY_DEFAULT));\n\n    List<ColumnSchema> columns = new ArrayList<>(fieldCount + 1);\n\n    ColumnSchema keyColumn = new ColumnSchema.ColumnSchemaBuilder(KEY, STRING)\n                                             .key(true)\n                                             .desiredBlockSize(blockSize)\n                                             .build();\n    columns.add(keyColumn);\n    COLUMN_NAMES.add(KEY);\n    for (int i = 0; i < fieldCount; i++) {\n      String name = \"field\" + i;\n      
COLUMN_NAMES.add(name);\n      columns.add(new ColumnSchema.ColumnSchemaBuilder(name, STRING)\n                                  .desiredBlockSize(blockSize)\n                                  .build());\n    }\n    schema = new Schema(columns);\n\n    CreateTableOptions builder = new CreateTableOptions();\n    builder.setRangePartitionColumns(new ArrayList<String>());\n    List<String> hashPartitionColumns = new ArrayList<>();\n    hashPartitionColumns.add(KEY);\n    builder.addHashPartitions(hashPartitionColumns, numTablets);\n    builder.setNumReplicas(numReplicas);\n\n    try {\n      client.createTable(tableName, schema, builder);\n    } catch (Exception e) {\n      if (!e.getMessage().contains(\"already exists\")) {\n        throw new DBException(\"Couldn't create the table\", e);\n      }\n    }\n  }\n\n  private static int getIntFromProp(Properties prop,\n                                    String propName,\n                                    int defaultValue) throws DBException {\n    String intStr = prop.getProperty(propName);\n    if (intStr == null) {\n      return defaultValue;\n    } else {\n      try {\n        return Integer.valueOf(intStr);\n      } catch (NumberFormatException ex) {\n        throw new DBException(\"Provided number for \" + propName + \" isn't a valid integer\");\n      }\n    }\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    try {\n      this.session.close();\n    } catch (Exception e) {\n      throw new DBException(\"Couldn't cleanup the session\", e);\n    }\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n                     Map<String, ByteIterator> result) {\n    Vector<HashMap<String, ByteIterator>> results = new Vector<>();\n    final Status status = scan(table, key, 1, fields, results);\n    if (!status.equals(Status.OK)) {\n      return status;\n    }\n    if (results.size() != 1) {\n      return Status.NOT_FOUND;\n    }\n    
result.putAll(results.firstElement());\n    return Status.OK;\n  }\n\n  @Override\n  public Status scan(String table,\n                     String startkey,\n                     int recordcount,\n                     Set<String> fields,\n                     Vector<HashMap<String, ByteIterator>> result) {\n    try {\n      KuduScanner.KuduScannerBuilder scannerBuilder = client.newScannerBuilder(kuduTable);\n      List<String> querySchema;\n      if (fields == null) {\n        querySchema = COLUMN_NAMES;\n        // No need to set the projected columns with the whole schema.\n      } else {\n        querySchema = new ArrayList<>(fields);\n        scannerBuilder.setProjectedColumnNames(querySchema);\n      }\n\n      ColumnSchema column = schema.getColumnByIndex(0);\n      KuduPredicate.ComparisonOp predicateOp = recordcount == 1 ? EQUAL : GREATER_EQUAL;\n      KuduPredicate predicate = KuduPredicate.newComparisonPredicate(column, predicateOp, startkey);\n      scannerBuilder.addPredicate(predicate);\n      scannerBuilder.limit(recordcount); // currently noop\n\n      KuduScanner scanner = scannerBuilder.build();\n\n      while (scanner.hasMoreRows()) {\n        RowResultIterator data = scanner.nextRows();\n        addAllRowsToResult(data, recordcount, querySchema, result);\n        if (recordcount == result.size()) {\n          break;\n        }\n      }\n      RowResultIterator closer = scanner.close();\n      addAllRowsToResult(closer, recordcount, querySchema, result);\n    } catch (TimeoutException te) {\n      LOG.info(\"Waited too long for a scan operation with start key={}\", startkey);\n      return TIMEOUT;\n    } catch (Exception e) {\n      LOG.warn(\"Unexpected exception\", e);\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n\n  private void addAllRowsToResult(RowResultIterator it,\n                                  int recordcount,\n                                  List<String> querySchema,\n                                  
Vector<HashMap<String, ByteIterator>> result) throws Exception {\n    if (it == null) {\n      return;\n    }\n    while (it.hasNext()) {\n      if (result.size() == recordcount) {\n        return;\n      }\n      RowResult row = it.next();\n      // Allocate a fresh map per row; reusing a single map would make every\n      // entry in the result vector alias the same (last) row.\n      HashMap<String, ByteIterator> rowResult = new HashMap<>(querySchema.size());\n      int colIdx = 0;\n      for (String col : querySchema) {\n        rowResult.put(col, new StringByteIterator(row.getString(colIdx)));\n        colIdx++;\n      }\n      result.add(rowResult);\n    }\n  }\n\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    Update update = this.kuduTable.newUpdate();\n    PartialRow row = update.getRow();\n    row.addString(KEY, key);\n    for (int i = 1; i < schema.getColumnCount(); i++) {\n      String columnName = schema.getColumnByIndex(i).getName();\n      if (values.containsKey(columnName)) {\n        String value = values.get(columnName).toString();\n        row.addString(columnName, value);\n      }\n    }\n    apply(update);\n    return Status.OK;\n  }\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    Insert insert = this.kuduTable.newInsert();\n    PartialRow row = insert.getRow();\n    row.addString(KEY, key);\n    for (int i = 1; i < schema.getColumnCount(); i++) {\n      row.addString(i, values.get(schema.getColumnByIndex(i).getName()).toString());\n    }\n    apply(insert);\n    return Status.OK;\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    Delete delete = this.kuduTable.newDelete();\n    PartialRow row = delete.getRow();\n    row.addString(KEY, key);\n    apply(delete);\n    return Status.OK;\n  }\n\n  private void apply(Operation op) {\n    try {\n      OperationResponse response = session.apply(op);\n      if (response != null && response.hasRowError()) {\n        LOG.info(\"Write operation failed: {}\", response.getRowError());\n      }\n    } 
catch (KuduException ex) {\n      LOG.warn(\"Write operation failed\", ex);\n    }\n  }\n}\n"
  },
  {
    "path": "kudu/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/**\n * Copyright (c) 2015-2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"http://kudu.apache.org/\">Apache Kudu</a>.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "kudu/src/main/resources/log4j.properties",
    "content": "# Root logger option\nlog4j.rootLogger=DEBUG, stderr\n\nlog4j.logger.com.stumbleupon.async=WARN\nlog4j.logger.org.apache.kudu=WARN\n\n# Direct log messages to stderr\nlog4j.appender.stderr = org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.Target=System.err\nlog4j.appender.stderr.layout = org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.ConversionPattern = [%p] %d{HH:mm:ss.SSS} (%F:%L) %m%n\n"
  },
  {
    "path": "leveldb/README.md",
    "content": "Interface to leveldb for integration into ycsb. Based on simpleleveldb, a http daemon for leveldb. Make sure to start the daemon before running any workloads.\n\nTo open shell for interactive debugging:\n./bin/ycsb shell leveldb\n\nleveldb: https://code.google.com/p/leveldb/\nsimpleleveldb: https://github.com/bitly/simplehttp/tree/master/simpleleveldb\n"
  },
  {
    "path": "leveldb/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>root</artifactId>\n    <version>0.1.4</version>\n  </parent>\n  \n  <artifactId>leveldb-binding</artifactId>\n  <name>LevelDB Binding</name>\n\n  <dependencies>\n    <dependency>\n\t  <groupId>commons-httpclient</groupId>\n\t  <artifactId>commons-httpclient</artifactId>\n\t  <version>3.1</version>\n\t  </dependency>\n\t<dependency>\n  \t  <groupId>org.apache.httpcomponents</groupId>\n  \t  <artifactId>httpclient</artifactId>\n  \t  <version>4.2.6</version>\n\t</dependency>\n    <dependency>\n      <groupId>org.mongodb</groupId>\n      <artifactId>mongo-java-driver</artifactId>\n      <version>${mongodb.version}</version>\n    </dependency>\n\t<dependency>\n\t  <groupId>com.googlecode.json-simple</groupId>\n\t  <artifactId>json-simple</artifactId>\n\t  <version>1.1</version>\n\t</dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n \n  <build>\n    <plugins>\n     <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-assembly-plugin</artifactId>\n        <version>${maven.assembly.version}</version>\n        <configuration>\n          <descriptorRefs>\n            <descriptorRef>jar-with-dependencies</descriptorRef>\n          </descriptorRefs>\n          <appendAssemblyId>false</appendAssemblyId>\n        </configuration>\n        <executions>\n          <execution>\n            <phase>package</phase>\n            <goals>\n              <goal>single</goal>\n            </goals>\n          </execution>\n        </executions>\n   
   </plugin>\n    </plugins>\n  </build>\n  \n</project>\n"
  },
  {
    "path": "leveldb/src/main/java/com/yahoo/ycsb/db/LevelDbClient.java",
    "content": "/**\n * LevelDB client binding for YCSB.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport java.io.BufferedReader;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.InputStreamReader;\nimport java.net.URLEncoder;\nimport java.text.MessageFormat;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.concurrent.atomic.AtomicInteger;\n\nimport org.apache.http.HttpResponse;\nimport org.apache.http.client.methods.HttpGet;\nimport org.apache.http.client.methods.HttpPost;\nimport org.apache.http.impl.client.DefaultHttpClient;\nimport org.apache.http.util.EntityUtils;\nimport org.json.simple.JSONArray;\nimport org.json.simple.JSONObject;\nimport org.json.simple.parser.JSONParser;\n\nimport com.mongodb.BasicDBObject;\nimport com.mongodb.DBObject;\nimport com.mongodb.util.JSON;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.StringByteIterator;\n\n/**\n * LevelDB client for YCSB framework.\n */\npublic class LevelDbClient extends DB {\n\n\tprivate static String dbUrl = \"http://localhost:8080\";\n\tprivate static String deleteUrl = dbUrl + \"/del\";\n\tprivate static String insertUrl = dbUrl + \"/put\";\n\tprivate static String readUrl = dbUrl + \"/get\";\n\tprivate static String scanUrl = dbUrl + \"/fwmatch\";\n\tprivate static JSONParser parser = new JSONParser();\n\n\tprivate static DefaultHttpClient httpClient;\n\tprivate static HttpPost httpPost;\n\tprivate static HttpResponse response;\n\tprivate static HttpGet httpGet;\n\n\t// having multiple tables in leveldb is a hack. 
must divide key\n\t// space into logical tables\n\tprivate static Map<String, Integer> tableKeyPrefix;\n\tprivate static final AtomicInteger prefix = new AtomicInteger(0);\n\n\tprivate static String getStringFromInputStream(InputStream is) {\n\t\tBufferedReader br = null;\n\t\tStringBuilder sb = new StringBuilder();\n\t\tString line;\n\t\ttry {\n\t\t\tbr = new BufferedReader(new InputStreamReader(is));\n\t\t\twhile ((line = br.readLine()) != null) {\n\t\t\t\tsb.append(line);\n\t\t\t}\n\t\t} catch (IOException e) {\n\t\t\te.printStackTrace();\n\t\t} finally {\n\t\t\tif (br != null) {\n\t\t\t\ttry {\n\t\t\t\t\tbr.close();\n\t\t\t\t} catch (IOException e) {\n\t\t\t\t\te.printStackTrace();\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn sb.toString();\n\t}\n\n\t/**\n\t * Initialize any state for this DB. Called once per DB instance; there is\n\t * one DB instance per client thread.\n\t */\n\t@Override\n\tpublic void init() throws DBException {\n\t\thttpClient = new DefaultHttpClient();\n\t\ttableKeyPrefix = new HashMap<String, Integer>();\n\t}\n\n\t/**\n\t * Cleanup any state for this DB. Called once per DB instance; there is one\n\t * DB instance per client thread.\n\t */\n\t@Override\n\tpublic void cleanup() throws DBException {\n\t}\n\n\t/**\n\t * Delete a record from the database.\n\t * \n\t * @param table\n\t *            The name of the table\n\t * @param key\n\t *            The record key of the record to delete.\n\t * @return Zero on success, a non-zero error code on error. 
See this class's\n\t *         description for a discussion of error codes.\n\t */\n\t@Override\n\tpublic int delete(String table, String key) {\n\t\ttry {\n\t\t\thttpPost = new HttpPost(MessageFormat.format(\"{0}?key={1}\",\n\t\t\t\t\tdeleteUrl, key));\n\t\t\t// System.out.println(\"# delete request - \" +\n\t\t\t// httpPost.getRequestLine());\n\t\t\tresponse = httpClient.execute(httpPost);\n\t\t\t// System.out.println(\"# delete response - \" +\n\t\t\t// getStringFromInputStream(response.getEntity().getContent()));\n\t\t\tEntityUtils.consume(response.getEntity());\n\t\t\treturn response.getStatusLine().getStatusCode() == 200 ? 0 : 1;\n\t\t} catch (Exception e) {\n\t\t\tSystem.err.println(e.toString());\n\t\t\treturn 1;\n\t\t}\n\t}\n\n\t/**\n\t * Insert a record in the database. Any field/value pairs in the specified\n\t * values HashMap will be written into the record with the specified record\n\t * key.\n\t * \n\t * @param table\n\t *            The name of the table\n\t * @param key\n\t *            The record key of the record to insert.\n\t * @param values\n\t *            A HashMap of field/value pairs to insert in the record\n\t * @return Zero on success, a non-zero error code on error. 
See this class's\n\t *         description for a discussion of error codes.\n\t */\n\t@Override\n\tpublic int insert(String table, String key,\n\t\t\tHashMap<String, ByteIterator> values) {\n\t\ttry {\n\t\t\tJSONObject jsonValues = new JSONObject();\n\t\t\tfor (Entry<String, String> entry : StringByteIterator.getStringMap(\n\t\t\t\t\tvalues).entrySet()) {\n\t\t\t\tjsonValues.put(entry.getKey(), entry.getValue());\n\t\t\t}\n\t\t\tString urlStringValues = URLEncoder.encode(\n\t\t\t\t\tjsonValues.toJSONString(), \"UTF-8\");\n\t\t\thttpPost = new HttpPost(MessageFormat.format(\n\t\t\t\t\t\"{0}?key={1}&value={2}\", insertUrl, key, urlStringValues));\n\t\t\t// System.out.println(\"# insert request - \" +\n\t\t\t// httpPost.getRequestLine());\n\t\t\tresponse = httpClient.execute(httpPost);\n\t\t\t// System.out.println(\"# insert response - \" +\n\t\t\t// getStringFromInputStream(response.getEntity().getContent()));\n\t\t\tEntityUtils.consume(response.getEntity());\n\t\t\treturn response.getStatusLine().getStatusCode() == 200 ? 0 : 1;\n\t\t} catch (Exception e) {\n\t\t\te.printStackTrace();\n\t\t\treturn 1;\n\t\t}\n\t}\n\n\t/**\n\t * Read a record from the database. 
Each field/value pair from the result\n\t * will be stored in a HashMap.\n\t * \n\t * @param table\n\t *            The name of the table\n\t * @param key\n\t *            The record key of the record to read.\n\t * @param fields\n\t *            The list of fields to read, or null for all of them\n\t * @param result\n\t *            A HashMap of field/value pairs for the result\n\t * @return Zero on success, a non-zero error code on error or \"not found\".\n\t */\n\t@Override\n\tpublic int read(String table, String key, Set<String> fields,\n\t\t\tHashMap<String, ByteIterator> result) {\n\t\ttry {\n\t\t\thttpGet = new HttpGet(MessageFormat.format(\"{0}?key={1}\", readUrl,\n\t\t\t\t\tkey));\n\t\t\t// System.out.println(\"# read request - \" +\n\t\t\t// httpGet.getRequestLine());\n\t\t\tresponse = httpClient.execute(httpGet);\n\t\t\tJSONObject jsonResponse = (JSONObject) parser.parse(EntityUtils\n\t\t\t\t\t.toString(response.getEntity()));\n\t\t\t// System.out.println(\"# read response - \" + jsonResponse);\n\t\t\t// Use Mongo DBObject to encode back to ByteIterator\n\t\t\tDBObject bson = (DBObject) JSON.parse(jsonResponse.get(\"data\")\n\t\t\t\t\t.toString());\n\t\t\tif (bson != null) {\n\t\t\t\tresult.putAll(bson.toMap());\n\t\t\t}\n\t\t\tEntityUtils.consume(response.getEntity());\n\t\t\treturn bson != null ? 0 : 1;\n\t\t} catch (Exception e) {\n\t\t\tSystem.err.println(e.toString());\n\t\t\treturn 1;\n\t\t}\n\t}\n\n\t/**\n\t * Update a record in the database. Any field/value pairs in the specified\n\t * values HashMap will be written into the record with the specified record\n\t * key, overwriting any existing values with the same field name.\n\t * \n\t * @param table\n\t *            The name of the table\n\t * @param key\n\t *            The record key of the record to write.\n\t * @param values\n\t *            A HashMap of field/value pairs to update in the record\n\t * @return Zero on success, a non-zero error code on error. 
See this class's\n\t *         description for a discussion of error codes.\n\t */\n\t@Override\n\tpublic int update(String table, String key,\n\t\t\tHashMap<String, ByteIterator> values) {\n\t\ttry {\n\t\t\thttpGet = new HttpGet(MessageFormat.format(\"{0}?key={1}\", readUrl,\n\t\t\t\t\tkey));\n\t\t\t// System.out.println(\"# update read request - \" +\n\t\t\t// httpGet.getRequestLine());\n\t\t\tresponse = httpClient.execute(httpGet);\n\t\t\tJSONObject jsonResponse = (JSONObject) parser.parse(EntityUtils\n\t\t\t\t\t.toString(response.getEntity()));\n\t\t\t// System.out.println(\"# update read response - \" + jsonResponse);\n\t\t\tJSONObject existingValues = new JSONObject();\n\t\t\t// check if key exists in the db\n\t\t\tif (response.getStatusLine().getStatusCode() == 200) {\n\t\t\t\texistingValues = (JSONObject) parser.parse(jsonResponse.get(\n\t\t\t\t\t\t\"data\").toString());\n\t\t\t}\n\t\t\tfor (Entry<String, String> entry : StringByteIterator.getStringMap(\n\t\t\t\t\tvalues).entrySet()) {\n\t\t\t\texistingValues.put(entry.getKey(), entry.getValue());\n\t\t\t}\n\t\t\tString urlStringValues = URLEncoder.encode(\n\t\t\t\t\texistingValues.toJSONString(), \"UTF-8\");\n\t\t\thttpPost = new HttpPost(MessageFormat.format(\n\t\t\t\t\t\"{0}?key={1}&value={2}\", insertUrl, key, urlStringValues));\n\t\t\t// System.out.println(\"# update insert request - \" +\n\t\t\t// httpPost.getRequestLine());\n\t\t\tresponse = httpClient.execute(httpPost);\n\t\t\t// System.out.println(\"# update insert response - \" +\n\t\t\t// getStringFromInputStream(response.getEntity().getContent()));\n\t\t\tEntityUtils.consume(response.getEntity());\n\t\t\treturn response.getStatusLine().getStatusCode() == 200 ? 0 : 1;\n\t\t} catch (Exception e) {\n\t\t\tSystem.err.println(e.toString());\n\t\t\treturn 1;\n\t\t}\n\t}\n\n\t/**\n\t * Perform a range scan for a set of records in the database. 
Each\n\t * field/value pair from the result will be stored in a HashMap.\n\t * \n\t * @param table\n\t *            The name of the table\n\t * @param startkey\n\t *            The record key of the first record to read.\n\t * @param recordcount\n\t *            The number of records to read\n\t * @param fields\n\t *            The list of fields to read, or null for all of them\n\t * @param result\n\t *            A Vector of HashMaps, where each HashMap is a set field/value\n\t *            pairs for one record\n\t * @return Zero on success, a non-zero error code on error. See this class's\n\t *         description for a discussion of error codes.\n\t */\n\t@Override\n\tpublic int scan(String table, String startkey, int recordcount,\n\t\t\tSet<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n\t\ttry {\n\t\t\thttpGet = new HttpGet(MessageFormat.format(\"{0}?key={1}&limit={2}\",\n\t\t\t\t\tscanUrl, startkey, recordcount));\n\t\t\t// System.out.println(\"# scan request - \" +\n\t\t\t// httpGet.getRequestLine());\n\t\t\tresponse = httpClient.execute(httpGet);\n\t\t\tJSONObject jsonResponse = (JSONObject) parser.parse(EntityUtils\n\t\t\t\t\t.toString(response.getEntity()));\n\t\t\t// System.out.println(\"# scan response - \" + jsonResponse);\n\t\t\tJSONArray scanEntries = (JSONArray) parser.parse(jsonResponse.get(\n\t\t\t\t\t\"data\").toString());\n\t\t\tfor (Object e : scanEntries) {\n\t\t\t\tJSONObject entry = (JSONObject) e;\n\t\t\t\tDBObject value = (DBObject) JSON.parse(entry.get(\"value\")\n\t\t\t\t\t\t.toString());\n\t\t\t\tDBObject bsonResult = new BasicDBObject();\n\t\t\t\tif (fields == null) {\n\t\t\t\t\tbsonResult = value;\n\t\t\t\t} else {\n\t\t\t\t\tfor (String s : fields) {\n\t\t\t\t\t\t// get has same result for missing keys and values that\n\t\t\t\t\t\t// are null\n\t\t\t\t\t\tbsonResult.put(s, value.get(s));\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tHashMap<String, ByteIterator> singleResult = new HashMap<String, 
ByteIterator>();\n\t\t\t\tsingleResult.putAll(bsonResult.toMap());\n\t\t\t\tresult.addElement(singleResult);\n\t\t\t}\n\t\t\tEntityUtils.consume(response.getEntity());\n\t\t\treturn response.getStatusLine().getStatusCode() == 200 ? 0 : 1;\n\t\t} catch (Exception e) {\n\t\t\tSystem.err.println(e.toString());\n\t\t\treturn 1;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "leveldbjni/README.md",
    "content": "<!--\nCopyright (c) 2014 - 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start - The way to connect ycsb and your own leveldb\n\nThis section describes how to run YCSB on Redis. \n\n### 1. Code JNI for leveldb and get a leveldb-jni.jar\n\n### 2. Install Java and Maven\n\n### 3. Set Up YCSB\n\nGit clone YCSB and compile:\n\n    git clone http://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:redis-binding -am clean package\n\n### 4. Set the dependcy about leveldb-jni.jar in pom.xml\n    \n <dependency>\n      <groupId>org.fusesource.leveldbjni</groupId>\n      <artifactId>leveldbjni-all</artifactId>\n      <version>1.7</version>\n      <scope>system</scope>\n      <systemPath>${project.basedir}/lib/leveldbjni-all-1.7.jar</systemPath>\n </dependency>\n\n### 5. Load data and run tests\n\nLoad the data:\n\n    ./bin/ycsb load redis -s -P workloads/workloada > outputLoad.txt\n\nRun the workload test:\n\n    ./bin/ycsb run redis -s -P workloads/workloada > outputRun.txt\n"
  },
  {
    "path": "leveldbjni/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>root</artifactId>\n    <version>0.1.4</version>\n  </parent>\n  \n  <artifactId>leveldbjni-binding</artifactId>\n  <name>LevelDBJNI Binding</name>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.fusesource.leveldbjni</groupId>\n      <artifactId>leveldbjni-all</artifactId>\n      <version>1.7</version>\n      <scope>system</scope>\n      <systemPath>${project.basedir}/lib/leveldbjni-all-1.7.jar</systemPath>\n    </dependency>\n  </dependencies>\n \n  <build>\n    <plugins>\n     <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-assembly-plugin</artifactId>\n        <version>${maven.assembly.version}</version>\n        <configuration>\n          <descriptorRefs>\n            <descriptorRef>jar-with-dependencies</descriptorRef>\n          </descriptorRefs>\n          <appendAssemblyId>false</appendAssemblyId>\n        </configuration>\n        <executions>\n          <execution>\n            <phase>package</phase>\n            <goals>\n              <goal>single</goal>\n            </goals>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n  \n</project>\n"
  },
  {
    "path": "leveldbjni/src/main/java/com/yahoo/ycsb/db/LevelDbJniClient.java",
    "content": "/**\n * LevelDB client binding for YCSB.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport static org.fusesource.leveldbjni.JniDBFactory.factory;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.ByteArrayOutputStream;\nimport java.io.File;\nimport java.io.IOException;\nimport java.io.ObjectInputStream;\nimport java.io.ObjectOutputStream;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.concurrent.atomic.AtomicInteger;\n\nimport org.iq80.leveldb.DBIterator;\nimport org.iq80.leveldb.Options;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.StringByteIterator;\n\n/**\n * LevelDBJni client for YCSB framework.\n */\npublic class LevelDbJniClient extends DB {\n\n\tprivate static org.iq80.leveldb.DB db = null;\n\n\tprivate static final AtomicInteger initCount = new AtomicInteger(0);\n\n\tprivate synchronized static void getDBInstance() {\n\t\tif (db == null) {\n\t\t\tOptions options = new Options();\n\t\t\t// options.cacheSize(100 * 1048576); // 100MB cache\n\t\t\toptions.createIfMissing(true);\n\t\t\ttry {\n\t\t\t\tdb = factory.open(new File(\"leveldb_database\"), options);\n\t\t\t} catch (IOException e) {\n\t\t\t\tSystem.out.println(\"Failed to open database\");\n\t\t\t\te.printStackTrace();\n\t\t\t}\n\t\t}\n\t}\n\n\tprivate static byte[] mapToBytes(Map<String, String> map)\n\t\t\tthrows IOException {\n\t\tByteArrayOutputStream byteOut = new ByteArrayOutputStream();\n\t\tObjectOutputStream out = new ObjectOutputStream(byteOut);\n\t\tout.writeObject(map);\n\t\treturn byteOut.toByteArray();\n\t}\n\n\tprivate static Map<String, String> bytesToMap(byte[] bytes)\n\t\t\tthrows IOException, ClassNotFoundException {\n\t\tByteArrayInputStream byteIn = new ByteArrayInputStream(bytes);\n\t\tObjectInputStream in = new ObjectInputStream(byteIn);\n\t\t@SuppressWarnings(\"unchecked\")\n\t\tMap<String, String> map = 
(Map<String, String>) in.readObject();\n\t\treturn map;\n\t}\n\n\t/**\n\t * Initialize any state for this DB. Called once per DB instance; there is\n\t * one DB instance per client thread.\n\t */\n\t@Override\n\tpublic void init() throws DBException {\n\t\tinitCount.incrementAndGet();\n\t\tgetDBInstance();\n\t}\n\n\t/**\n\t * Cleanup any state for this DB. Called once per DB instance; there is one\n\t * DB instance per client thread.\n\t */\n\t@Override\n\tpublic void cleanup() throws DBException {\n\t\tif (initCount.decrementAndGet() <= 0) {\n\t\t\ttry {\n\t\t\t\tdb.close();\n\t\t\t} catch (IOException e) {\n\t\t\t\tSystem.out.println(\"Failed to close db\");\n\t\t\t\te.printStackTrace();\n\t\t\t}\n\t\t}\n\t}\n\n\t/**\n\t * Delete a record from the database.\n\t * \n\t * @param table\n\t *            The name of the table\n\t * @param key\n\t *            The record key of the record to delete.\n\t * @return Zero on success, a non-zero error code on error. See this class's\n\t *         description for a discussion of error codes.\n\t */\n\t@Override\n\tpublic int delete(String table, String key) {\n\t\tdb.delete(key.getBytes());\n\t\treturn 0;\n\t}\n\n\t/**\n\t * Insert a record in the database. Any field/value pairs in the specified\n\t * values HashMap will be written into the record with the specified record\n\t * key.\n\t * \n\t * @param table\n\t *            The name of the table\n\t * @param key\n\t *            The record key of the record to insert.\n\t * @param values\n\t *            A HashMap of field/value pairs to insert in the record\n\t * @return Zero on success, a non-zero error code on error. 
See this class's\n\t *         description for a discussion of error codes.\n\t */\n\t@Override\n\tpublic int insert(String table, String key,\n\t\t\tHashMap<String, ByteIterator> values) {\n\t\tMap<String, String> stringValues = StringByteIterator\n\t\t\t\t.getStringMap(values);\n\t\ttry {\n\t\t\tdb.put(key.getBytes(), mapToBytes(stringValues));\n\t\t} catch (org.iq80.leveldb.DBException e) {\n\t\t\tSystem.out.println(\"Failed to insert \" + key);\n\t\t\te.printStackTrace();\n\t\t\treturn 1;\n\t\t} catch (IOException e) {\n\t\t\tSystem.out.println(\"Failed to insert \" + key);\n\t\t\te.printStackTrace();\n\t\t\treturn 1;\n\t\t}\n\t\treturn 0;\n\t}\n\n\t/**\n\t * Read a record from the database. Each field/value pair from the result\n\t * will be stored in a HashMap.\n\t * \n\t * @param table\n\t *            The name of the table\n\t * @param key\n\t *            The record key of the record to read.\n\t * @param fields\n\t *            The list of fields to read, or null for all of them\n\t * @param result\n\t *            A HashMap of field/value pairs for the result\n\t * @return Zero on success, a non-zero error code on error or \"not found\".\n\t */\n\t@Override\n\tpublic int read(String table, String key, Set<String> fields,\n\t\t\tHashMap<String, ByteIterator> result) {\n\t\tbyte[] value = db.get(key.getBytes());\n\t\tif (value == null) {\n\t\t\treturn 1;\n\t\t}\n\t\tMap<String, String> map;\n\t\ttry {\n\t\t\tmap = bytesToMap(value);\n\t\t} catch (IOException e) {\n\t\t\tSystem.out.println(\"Failed to read \" + key);\n\t\t\te.printStackTrace();\n\t\t\treturn 1;\n\t\t} catch (ClassNotFoundException e) {\n\t\t\tSystem.out.println(\"Failed to read \" + key);\n\t\t\te.printStackTrace();\n\t\t\treturn 1;\n\t\t}\n\t\tStringByteIterator.putAllAsByteIterators(result, map);\n\t\treturn 0;\n\t}\n\n\t/**\n\t * Update a record in the database. 
Any field/value pairs in the specified\n\t * values HashMap will be written into the record with the specified record\n\t * key, overwriting any existing values with the same field name.\n\t * \n\t * @param table\n\t *            The name of the table\n\t * @param key\n\t *            The record key of the record to write.\n\t * @param values\n\t *            A HashMap of field/value pairs to update in the record\n\t * @return Zero on success, a non-zero error code on error. See this class's\n\t *         description for a discussion of error codes.\n\t */\n\t@Override\n\tpublic int update(String table, String key,\n\t\t\tHashMap<String, ByteIterator> values) {\n\t\tbyte[] existingBytes = db.get(key.getBytes());\n\t\tMap<String, String> existingValues;\n\t\tif (existingBytes != null) {\n\t\t\ttry {\n\t\t\t\texistingValues = bytesToMap(existingBytes);\n\t\t\t} catch (IOException e) {\n\t\t\t\tSystem.out.println(\"Failed to read for update \" + key);\n\t\t\t\te.printStackTrace();\n\t\t\t\treturn 1;\n\t\t\t} catch (ClassNotFoundException e) {\n\t\t\t\tSystem.out.println(\"Failed to read for update \" + key);\n\t\t\t\te.printStackTrace();\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t} else {\n\t\t\texistingValues = new HashMap<String, String>();\n\t\t}\n\t\tMap<String, String> newValues = StringByteIterator.getStringMap(values);\n\t\texistingValues.putAll(newValues);\n\t\ttry {\n\t\t\tdb.put(key.getBytes(), mapToBytes(existingValues));\n\t\t} catch (org.iq80.leveldb.DBException e) {\n\t\t\tSystem.out.println(\"Failed to update \" + key);\n\t\t\te.printStackTrace();\n\t\t\treturn 1;\n\t\t} catch (IOException e) {\n\t\t\tSystem.out.println(\"Failed to update \" + key);\n\t\t\te.printStackTrace();\n\t\t\treturn 1;\n\t\t}\n\t\treturn 0;\n\t}\n\n\t/**\n\t * Perform a range scan for a set of records in the database. 
Each\n\t * field/value pair from the result will be stored in a HashMap.\n\t * \n\t * @param table\n\t *            The name of the table\n\t * @param startkey\n\t *            The record key of the first record to read.\n\t * @param recordcount\n\t *            The number of records to read\n\t * @param fields\n\t *            The list of fields to read, or null for all of them\n\t * @param result\n\t *            A Vector of HashMaps, where each HashMap is a set of field/value\n\t *            pairs for one record\n\t * @return Zero on success, a non-zero error code on error. See this class's\n\t *         description for a discussion of error codes.\n\t */\n\t@Override\n\tpublic int scan(String table, String startkey, int recordcount,\n\t\t\tSet<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n\t\tDBIterator iterator = db.iterator();\n\t\tint count = 0;\n\t\ttry {\n\t\t\titerator.seek(startkey.getBytes());\n\t\t\twhile (iterator.hasNext() && count < recordcount) {\n\t\t\t\tString key = new String(iterator.peekNext().getKey());\n\t\t\t\tif (fields == null || fields.contains(key)) {\n\t\t\t\t\tHashMap<String, String> value;\n\t\t\t\t\tvalue = (HashMap<String, String>) bytesToMap(iterator\n\t\t\t\t\t\t\t.peekNext().getValue());\n\t\t\t\t\tHashMap<String, ByteIterator> byteValues = new HashMap<String, ByteIterator>();\n\t\t\t\t\tStringByteIterator.putAllAsByteIterators(byteValues, value);\n\t\t\t\t\tresult.addElement(byteValues);\n\t\t\t\t}\n\t\t\t\titerator.next();\n\t\t\t\tcount += 1;\n\t\t\t}\n\t\t} catch (IOException e) {\n\t\t\tSystem.out.println(\"Failed to scan\");\n\t\t\te.printStackTrace();\n\t\t\treturn 1;\n\t\t} catch (ClassNotFoundException e) {\n\t\t\tSystem.out.println(\"Failed to scan\");\n\t\t\te.printStackTrace();\n\t\t\treturn 1;\n\t\t} finally {\n\t\t\ttry {\n\t\t\t\titerator.close();\n\t\t\t} catch (IOException e) {\n\t\t\t\tSystem.out.println(\"Failed to close iterator\");\n\t\t\t\te.printStackTrace();\n\t\t\t}\n\t\t}\n\t\treturn 
0;\n\t}\n}\n"
  },
  {
    "path": "mapkeeper/README.md",
    "content": "# MapKeeper-Specific Properties\n\n## mapkeeper.host\n\nSpecifies the host MapKeeper server is running on (default: localhost). \n\n## mapkeeper.port\n\nSpecifies the port MapKeeper server is listening to (default: 9090). \n"
  },
  {
    "path": "mapkeeper/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n  \n  <artifactId>mapkeeper-binding</artifactId>\n  <name>Mapkeeper DB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n     <dependency>\n       <groupId>com.yahoo.mapkeeper</groupId>\n       <artifactId>mapkeeper</artifactId>\n       <version>${mapkeeper.version}</version>\n     </dependency>\n     <dependency>\n       <groupId>com.yahoo.ycsb</groupId>\n       <artifactId>core</artifactId>\n       <version>${project.version}</version>\n       <scope>provided</scope>\n     </dependency>\n  </dependencies>\n  <repositories>\n    <repository>\n      <id>mapkeeper-releases</id>\n      <url>https://raw.github.com/m1ch1/m1ch1-mvn-repo/master/releases</url>\n    </repository>\n  </repositories>\t\n\n</project>\n"
  },
  {
    "path": "mapkeeper/src/main/java/com/yahoo/ycsb/db/MapKeeperClient.java",
    "content": "/**\n * Copyright (c) 2012 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport java.nio.ByteBuffer;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport org.apache.thrift.TException;\nimport org.apache.thrift.protocol.TBinaryProtocol;\nimport org.apache.thrift.protocol.TProtocol;\nimport org.apache.thrift.transport.TFramedTransport;\nimport org.apache.thrift.transport.TSocket;\nimport org.apache.thrift.transport.TTransport;\n\nimport com.yahoo.mapkeeper.BinaryResponse;\nimport com.yahoo.mapkeeper.MapKeeper;\nimport com.yahoo.mapkeeper.Record;\nimport com.yahoo.mapkeeper.RecordListResponse;\nimport com.yahoo.mapkeeper.ResponseCode;\nimport com.yahoo.mapkeeper.ScanOrder;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.StringByteIterator;\nimport com.yahoo.ycsb.workloads.CoreWorkload;\n\npublic class MapKeeperClient extends DB {\n    private static final String HOST = \"mapkeeper.host\";\n    private static final String HOST_DEFAULT = \"localhost\";\n    private static final String PORT = \"mapkeeper.port\";\n    private static final String PORT_DEFAULT = \"9090\";\n    MapKeeper.Client c; \n    boolean writeallfields;\n    static boolean initteddb = false;\n    private synchronized static void initDB(Properties p, 
MapKeeper.Client c) throws TException {\n        if(!initteddb) {\n            initteddb = true;\n            c.addMap(p.getProperty(CoreWorkload.TABLENAME_PROPERTY, CoreWorkload.TABLENAME_PROPERTY_DEFAULT));\n        }\n    }\n\n    public void init() {\n        String host = getProperties().getProperty(HOST, HOST_DEFAULT);\n        int port = Integer.parseInt(getProperties().getProperty(PORT, PORT_DEFAULT));\n        TTransport tr = new TFramedTransport(new TSocket(host, port));\n        TProtocol proto = new TBinaryProtocol(tr);\n        c = new MapKeeper.Client(proto);\n        try {\n            tr.open();\n            initDB(getProperties(), c);\n        } catch(TException e) {\n            throw new RuntimeException(e);\n        }\n        writeallfields = Boolean.parseBoolean(getProperties().getProperty(CoreWorkload.WRITE_ALL_FIELDS_PROPERTY, \n                    CoreWorkload.WRITE_ALL_FIELDS_PROPERTY_DEFAULT));\n    }\n\n    ByteBuffer encode(Map<String, ByteIterator> values) {\n        int len = 0;\n        for(Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n            len += (entry.getKey().length() + 1 + entry.getValue().bytesLeft() + 1);\n        }\n        byte[] array = new byte[len];\n        int i = 0;\n        for(Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n            for(int j = 0; j < entry.getKey().length(); j++) {\n                array[i] = (byte)entry.getKey().charAt(j);\n                i++;\n            }\n            array[i] = '\\t'; // XXX would like to use sane delimiter (null, 254, 255, ...) 
but java makes this nearly impossible\n            i++;\n            ByteIterator v = entry.getValue();\n            i = v.nextBuf(array, i);\n            array[i] = '\\t';\n            i++;\n        }\n        array[array.length-1] = 0;\n        ByteBuffer buf = ByteBuffer.wrap(array);\n        buf.rewind();\n        return buf;\n    }\n    void decode(Set<String> fields, String tups, Map<String, ByteIterator> tup) {\n        String[] tok = tups.split(\"\\\\t\");\n        if(tok.length == 0) { throw new IllegalStateException(\"split returned empty array!\"); }\n        for(int i = 0; i < tok.length; i+=2) {\n            if(fields == null || fields.contains(tok[i])) {\n                if(tok.length < i+2) { throw new IllegalStateException(\"Couldn't parse tuple <\" + tups + \"> at index \" + i); }\n                if(tok[i] == null || tok[i+1] == null) throw new NullPointerException(\"Key is \" + tok[i] + \" val is \" + tok[i+1]);\n                tup.put(tok[i], new StringByteIterator(tok[i+1]));\n            }\n        }\n    }\n\n    int ycsbThriftRet(BinaryResponse succ, ResponseCode zero, ResponseCode one) {\n        return ycsbThriftRet(succ.responseCode, zero, one);\n    }\n    int ycsbThriftRet(ResponseCode rc, ResponseCode zero, ResponseCode one) {\n        return\n            rc == zero ? 0 :\n            rc == one  ? 
1 : 2;\n    }\n    ByteBuffer bufStr(String str) {\n        ByteBuffer buf = ByteBuffer.wrap(str.getBytes());\n        return buf;\n    }\n    String strResponse(BinaryResponse buf) {\n        return new String(buf.value.array());\n    }\n\n    @Override\n    public int read(String table, String key, Set<String> fields,\n            Map<String, ByteIterator> result) {\n        try {\n            ByteBuffer buf = bufStr(key);\n\n            BinaryResponse succ = c.get(table, buf);\n\n            int ret = ycsbThriftRet(\n                    succ,\n                    ResponseCode.RecordExists,\n                    ResponseCode.RecordNotFound);\n\n            if(ret == 0) {\n                decode(fields, strResponse(succ), result);\n            }\n            return ret;\n        } catch(TException e) {\n            e.printStackTrace();\n            return 2;\n        }\n    }\n\n    @Override\n    public int scan(String table, String startkey, int recordcount,\n            Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n        try {\n            //XXX what to pass in for nulls / zeros?\n            RecordListResponse res = c.scan(table, ScanOrder.Ascending, bufStr(startkey), true, null, false, recordcount, 0);\n            int ret = ycsbThriftRet(res.responseCode, ResponseCode.Success, ResponseCode.ScanEnded);\n            if(ret == 0) {\n                for(Record r : res.records) {\n                    HashMap<String, ByteIterator> tuple = new HashMap<String, ByteIterator>();\n                    // Note: r.getKey() and r.getValue() call special helper methods that trim the buffer\n                    // to an appropriate length, and memcpy it to a byte[].  
Trying to manipulate the ByteBuffer\n                    // directly leads to trouble.\n                    tuple.put(\"key\", new StringByteIterator(new String(r.getKey())));\n                    decode(fields, new String(r.getValue())/*strBuf(r.bufferForValue())*/, tuple);\n                    result.add(tuple);\n                }\n            }\n            return ret;\n        } catch(TException e) {\n            e.printStackTrace();\n            return 2;\n        }\n    }\n\n    @Override\n    public int update(String table, String key,\n            Map<String, ByteIterator> values) {\n        try {\n            if(!writeallfields) {\n                HashMap<String, ByteIterator> oldval = new HashMap<String, ByteIterator>();\n                read(table, key, null, oldval);\n                for(Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n                    oldval.put(entry.getKey(), entry.getValue());\n                }\n                values = oldval;\n            }\n            ResponseCode succ = c.update(table, bufStr(key), encode(values));\n            return ycsbThriftRet(succ, ResponseCode.RecordExists, ResponseCode.RecordNotFound);\n        } catch(TException e) {\n            e.printStackTrace();\n            return 2;\n        }\n    }\n\n    @Override\n    public int insert(String table, String key,\n            Map<String, ByteIterator> values) {\n        try {\n            int ret = ycsbThriftRet(c.insert(table, bufStr(key), encode(values)), ResponseCode.Success, ResponseCode.RecordExists);\n            return ret;\n        } catch(TException e) {\n            e.printStackTrace();\n            return 2;\n        }\n    }\n\n    @Override\n    public int delete(String table, String key) {\n        try {\n            return ycsbThriftRet(c.remove(table, bufStr(key)), ResponseCode.Success, ResponseCode.RecordExists);\n        } catch(TException e) {\n            e.printStackTrace();\n            return 2;\n        }\n    }\n}\n"
  },
  {
    "path": "memcached/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# YCSB Memcached binding\n\nThis section describes how to run YCSB on memcached.\n\n## 1. Install and start memcached service on the host(s)\n\nDebian / Ubuntu:\n\n    sudo apt-get install memcached\n\nRedHat / CentOS:\n\n    sudo yum install memcached\n\n## 2. Install Java and Maven\n\nSee step 2 in [`../mongodb/README.md`](../mongodb/README.md).\n\n## 3. Set up YCSB\n\nGit clone YCSB and compile:\n\n    git clone http://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:memcached-binding -am clean package\n\n## 4. Load data and run tests\n\nLoad the data:\n\n    ./bin/ycsb load memcached -s -P workloads/workloada > outputLoad.txt\n\nRun the workload test:\n\n    ./bin/ycsb run memcached -s -P workloads/workloada > outputRun.txt\n\n## 5. memcached Connection Parameters\n\nA sample configuration is provided in\n[`conf/memcached.properties`](conf/memcached.properties).\n\n### Required params\n\n- `memcached.hosts`\n\n  This is a comma-separated list of hosts providing the memcached interface.\n  You can use IPs or hostnames. 
The port is optional and defaults to the\n  memcached standard port of `11211` if not specified.\n\n### Optional params\n\n- `memcached.shutdownTimeoutMillis`\n\n  Shutdown timeout in milliseconds.\n\n- `memcached.objectExpirationTime`\n\n  Object expiration time for memcached; defaults to `Integer.MAX_VALUE`.\n\n- `memcached.checkOperationStatus`\n\n  Whether to verify the success of each operation; defaults to true.\n\n- `memcached.readBufferSize`\n\n  Read buffer size, in bytes.\n\n- `memcached.opTimeoutMillis`\n\n  Operation timeout, in milliseconds.\n\n- `memcached.failureMode`\n\n  What to do with failures; this is one of `net.spy.memcached.FailureMode` enum\n  values, which are currently: `Redistribute`, `Retry`, or `Cancel`.\n\n- `memcached.protocol`\n\n  Set to `binary` to use the memcached binary protocol; set to `text` or omit\n  this field to use the memcached text protocol.\n\nYou can set properties on the command line via `-p`, e.g.:\n\n    ./bin/ycsb load memcached -s -P workloads/workloada \\\n        -p \"memcached.hosts=127.0.0.1\" > outputLoad.txt\n"
  },
  {
    "path": "memcached/conf/memcached.properties",
    "content": "# Copyright (c) 2015 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n#\n# Sample property file for Memcached Client\n\n## Mandatory parameters\n\n# A comma-separated list of memcached server endpoints, each being an IP or\n# hostname with an optional port; the port defaults to the memcached-standard\n# port of 11211 if not specified.\n#\n# memcached.hosts =\n\n## Optional parameters\n\n# Shutdown timeout in milliseconds.\n#\n# memcached.shutdownTimeoutMillis = 30000\n\n# Object expiration time for memcached; defaults to `Integer.MAX_VALUE`.\n#\n# memcached.objectExpirationTime = 2147483647\n\n# Whether to verify the success of each operation; defaults to true.\n#\n# memcached.checkOperationStatus = true\n\n# Read buffer size, in bytes.\n#\n# memcached.readBufferSize = 3000000\n\n# Operation timeout, in milliseconds.\n#\n# memcached.opTimeoutMillis = 60000\n\n# What to do with failures; this is one of `net.spy.memcached.FailureMode` enum\n# values, which are currently: `Redistribute`, `Retry`, or `Cancel`.\n#\n# memcached.failureMode = Redistribute\n"
  },
  {
    "path": "memcached/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2014-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>memcached-binding</artifactId>\n  <name>memcached binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>log4j</groupId>\n      <artifactId>log4j</artifactId>\n      <version>1.2.17</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.codehaus.jackson</groupId>\n      <artifactId>jackson-mapper-asl</artifactId>\n      <version>1.9.13</version>\n    </dependency>\n    <dependency>\n      <groupId>net.spy</groupId>\n      <artifactId>spymemcached</artifactId>\n      <version>2.11.4</version>\n    </dependency>\n  </dependencies>\n\n  <build>\n    <plugins>\n      <plugin>\n        
<groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-assembly-plugin</artifactId>\n        <version>${maven.assembly.version}</version>\n        <configuration>\n          <descriptorRefs>\n            <descriptorRef>jar-with-dependencies</descriptorRef>\n          </descriptorRefs>\n          <appendAssemblyId>false</appendAssemblyId>\n        </configuration>\n        <executions>\n          <execution>\n            <phase>package</phase>\n            <goals>\n              <goal>single</goal>\n            </goals>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "memcached/src/main/java/com/yahoo/ycsb/db/MemcachedClient.java",
    "content": "/**\n * Copyright (c) 2014-2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport java.io.IOException;\nimport java.io.StringWriter;\nimport java.io.Writer;\nimport java.net.InetSocketAddress;\nimport java.text.MessageFormat;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport net.spy.memcached.ConnectionFactoryBuilder;\nimport net.spy.memcached.FailureMode;\n// We also use `net.spy.memcached.MemcachedClient`; it is not imported\n// explicitly and referred to with its full path to avoid conflicts with the\n// class of the same name in this file.\nimport net.spy.memcached.internal.GetFuture;\nimport net.spy.memcached.internal.OperationFuture;\n\nimport org.codehaus.jackson.JsonFactory;\nimport org.codehaus.jackson.JsonGenerator;\nimport org.codehaus.jackson.JsonNode;\nimport org.codehaus.jackson.map.ObjectMapper;\nimport org.codehaus.jackson.node.ObjectNode;\n\nimport org.apache.log4j.Logger;\n\nimport static java.util.concurrent.TimeUnit.MILLISECONDS;\n\n/**\n * Concrete Memcached client implementation.\n 
*/\npublic class MemcachedClient extends DB {\n\n  private final Logger logger = Logger.getLogger(getClass());\n\n  protected static final ObjectMapper MAPPER = new ObjectMapper();\n\n  private boolean checkOperationStatus;\n  private long shutdownTimeoutMillis;\n  private int objectExpirationTime;\n\n  public static final String HOSTS_PROPERTY = \"memcached.hosts\";\n\n  public static final int DEFAULT_PORT = 11211;\n\n  private static final String TEMPORARY_FAILURE_MSG = \"Temporary failure\";\n  private static final String CANCELLED_MSG = \"cancelled\";\n\n  public static final String SHUTDOWN_TIMEOUT_MILLIS_PROPERTY =\n      \"memcached.shutdownTimeoutMillis\";\n  public static final String DEFAULT_SHUTDOWN_TIMEOUT_MILLIS = \"30000\";\n\n  public static final String OBJECT_EXPIRATION_TIME_PROPERTY =\n      \"memcached.objectExpirationTime\";\n  public static final String DEFAULT_OBJECT_EXPIRATION_TIME =\n      String.valueOf(Integer.MAX_VALUE);\n\n  public static final String CHECK_OPERATION_STATUS_PROPERTY =\n      \"memcached.checkOperationStatus\";\n  public static final String CHECK_OPERATION_STATUS_DEFAULT = \"true\";\n\n  public static final String READ_BUFFER_SIZE_PROPERTY =\n      \"memcached.readBufferSize\";\n  public static final String DEFAULT_READ_BUFFER_SIZE = \"3000000\";\n\n  public static final String OP_TIMEOUT_PROPERTY = \"memcached.opTimeoutMillis\";\n  public static final String DEFAULT_OP_TIMEOUT = \"60000\";\n\n  public static final String FAILURE_MODE_PROPERTY = \"memcached.failureMode\";\n  public static final FailureMode FAILURE_MODE_PROPERTY_DEFAULT =\n      FailureMode.Redistribute;\n\n  public static final String PROTOCOL_PROPERTY = \"memcached.protocol\";\n  public static final ConnectionFactoryBuilder.Protocol DEFAULT_PROTOCOL =\n      ConnectionFactoryBuilder.Protocol.TEXT;\n\n  /**\n   * The MemcachedClient implementation that will be used to communicate\n   * with the memcached server.\n   */\n  private 
net.spy.memcached.MemcachedClient client;\n\n  /**\n   * @return Underlying Memcached protocol client, implemented by\n   *     SpyMemcached.\n   */\n  protected net.spy.memcached.MemcachedClient memcachedClient() {\n    return client;\n  }\n\n  @Override\n  public void init() throws DBException {\n    try {\n      client = createMemcachedClient();\n      checkOperationStatus = Boolean.parseBoolean(\n          getProperties().getProperty(CHECK_OPERATION_STATUS_PROPERTY,\n                                      CHECK_OPERATION_STATUS_DEFAULT));\n      objectExpirationTime = Integer.parseInt(\n          getProperties().getProperty(OBJECT_EXPIRATION_TIME_PROPERTY,\n                                      DEFAULT_OBJECT_EXPIRATION_TIME));\n      shutdownTimeoutMillis = Integer.parseInt(\n          getProperties().getProperty(SHUTDOWN_TIMEOUT_MILLIS_PROPERTY,\n                                      DEFAULT_SHUTDOWN_TIMEOUT_MILLIS));\n    } catch (Exception e) {\n      throw new DBException(e);\n    }\n  }\n\n  protected net.spy.memcached.MemcachedClient createMemcachedClient()\n      throws Exception {\n    ConnectionFactoryBuilder connectionFactoryBuilder =\n        new ConnectionFactoryBuilder();\n\n    connectionFactoryBuilder.setReadBufferSize(Integer.parseInt(\n        getProperties().getProperty(READ_BUFFER_SIZE_PROPERTY,\n                                    DEFAULT_READ_BUFFER_SIZE)));\n\n    connectionFactoryBuilder.setOpTimeout(Integer.parseInt(\n        getProperties().getProperty(OP_TIMEOUT_PROPERTY, DEFAULT_OP_TIMEOUT)));\n\n    String protocolString = getProperties().getProperty(PROTOCOL_PROPERTY);\n    connectionFactoryBuilder.setProtocol(\n        protocolString == null ? DEFAULT_PROTOCOL\n                         : ConnectionFactoryBuilder.Protocol.valueOf(protocolString.toUpperCase()));\n\n    String failureString = getProperties().getProperty(FAILURE_MODE_PROPERTY);\n    connectionFactoryBuilder.setFailureMode(\n        failureString == null ? 
FAILURE_MODE_PROPERTY_DEFAULT\n                              : FailureMode.valueOf(failureString));\n\n    // Note: this only works with IPv4 addresses due to its assumption of\n    // \":\" being the separator of hostname/IP and port; this is not the case\n    // when dealing with IPv6 addresses.\n    //\n    // TODO(mbrukman): fix this.\n    List<InetSocketAddress> addresses = new ArrayList<InetSocketAddress>();\n    String[] hosts = getProperties().getProperty(HOSTS_PROPERTY).split(\",\");\n    for (String address : hosts) {\n      int colon = address.indexOf(\":\");\n      int port = DEFAULT_PORT;\n      String host = address;\n      if (colon != -1) {\n        port = Integer.parseInt(address.substring(colon + 1));\n        host = address.substring(0, colon);\n      }\n      addresses.add(new InetSocketAddress(host, port));\n    }\n    return new net.spy.memcached.MemcachedClient(\n        connectionFactoryBuilder.build(), addresses);\n  }\n\n  @Override\n  public Status read(\n      String table, String key, Set<String> fields,\n      Map<String, ByteIterator> result) {\n    key = createQualifiedKey(table, key);\n    try {\n      GetFuture<Object> future = memcachedClient().asyncGet(key);\n      Object document = future.get();\n      if (document != null) {\n        fromJson((String) document, fields, result);\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      logger.error(\"Error encountered for key: \" + key, e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status scan(\n      String table, String startkey, int recordcount, Set<String> fields,\n      Vector<HashMap<String, ByteIterator>> result){\n    return Status.NOT_IMPLEMENTED;\n  }\n\n  @Override\n  public Status update(\n      String table, String key, Map<String, ByteIterator> values) {\n    key = createQualifiedKey(table, key);\n    try {\n      OperationFuture<Boolean> future =\n          memcachedClient().replace(key, objectExpirationTime, toJson(values));\n   
   return getReturnCode(future);\n    } catch (Exception e) {\n      logger.error(\"Error updating value with key: \" + key, e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status insert(\n      String table, String key, Map<String, ByteIterator> values) {\n    key = createQualifiedKey(table, key);\n    try {\n      OperationFuture<Boolean> future =\n          memcachedClient().add(key, objectExpirationTime, toJson(values));\n      return getReturnCode(future);\n    } catch (Exception e) {\n      logger.error(\"Error inserting value\", e);\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    key = createQualifiedKey(table, key);\n    try {\n      OperationFuture<Boolean> future = memcachedClient().delete(key);\n      return getReturnCode(future);\n    } catch (Exception e) {\n      logger.error(\"Error deleting value\", e);\n      return Status.ERROR;\n    }\n  }\n\n  protected Status getReturnCode(OperationFuture<Boolean> future) {\n    if (!checkOperationStatus) {\n      return Status.OK;\n    }\n    if (future.getStatus().isSuccess()) {\n      return Status.OK;\n    } else if (TEMPORARY_FAILURE_MSG.equals(future.getStatus().getMessage())) {\n      return new Status(\"TEMPORARY_FAILURE\", TEMPORARY_FAILURE_MSG);\n    } else if (CANCELLED_MSG.equals(future.getStatus().getMessage())) {\n      return new Status(\"CANCELLED_MSG\", CANCELLED_MSG);\n    }\n    return new Status(\"ERROR\", future.getStatus().getMessage());\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    if (client != null) {\n      memcachedClient().shutdown(shutdownTimeoutMillis, MILLISECONDS);\n    }\n  }\n\n  protected static String createQualifiedKey(String table, String key) {\n    return MessageFormat.format(\"{0}-{1}\", table, key);\n  }\n\n  protected static void fromJson(\n      String value, Set<String> fields,\n      Map<String, ByteIterator> result) throws IOException {\n    JsonNode json = 
MAPPER.readTree(value);\n    boolean checkFields = fields != null && !fields.isEmpty();\n    for (Iterator<Map.Entry<String, JsonNode>> jsonFields = json.getFields();\n         jsonFields.hasNext();\n         /* increment in loop body */) {\n      Map.Entry<String, JsonNode> jsonField = jsonFields.next();\n      String name = jsonField.getKey();\n      // Skip any field that was not requested.\n      if (checkFields && !fields.contains(name)) {\n        continue;\n      }\n      JsonNode jsonValue = jsonField.getValue();\n      if (jsonValue != null && !jsonValue.isNull()) {\n        result.put(name, new StringByteIterator(jsonValue.asText()));\n      }\n    }\n  }\n\n  protected static String toJson(Map<String, ByteIterator> values)\n      throws IOException {\n    ObjectNode node = MAPPER.createObjectNode();\n    Map<String, String> stringMap = StringByteIterator.getStringMap(values);\n    for (Map.Entry<String, String> pair : stringMap.entrySet()) {\n      node.put(pair.getKey(), pair.getValue());\n    }\n    JsonFactory jsonFactory = new JsonFactory();\n    Writer writer = new StringWriter();\n    JsonGenerator jsonGenerator = jsonFactory.createJsonGenerator(writer);\n    MAPPER.writeTree(jsonGenerator, node);\n    return writer.toString();\n  }\n}\n"
  },
  {
    "path": "memcached/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/**\n * Copyright (c) 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * YCSB binding for memcached.\n */\npackage com.yahoo.ycsb.db;\n"
  },
  {
    "path": "mongodb/README.md",
    "content": "<!--\nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on MongoDB. \n\n### 1. Start MongoDB\n\nFirst, download MongoDB and start `mongod`. For example, to start MongoDB\non an x86-64 Linux box:\n\n    wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-x.x.x.tgz\n    tar xfvz mongodb-linux-x86_64-*.tgz\n    mkdir /tmp/mongodb\n    cd mongodb-linux-x86_64-*\n    ./bin/mongod --dbpath /tmp/mongodb\n\nReplace x.x.x above with the latest stable release version for MongoDB.\nSee http://docs.mongodb.org/manual/installation/ for installation steps for various operating systems.\n\n### 2. Install Java and Maven\n\nGo to http://www.oracle.com/technetwork/java/javase/downloads/index.html\n\nand get the URL to download the RPM onto your server. 
For example:\n\n    wget http://download.oracle.com/otn-pub/java/jdk/7u40-b43/jdk-7u40-linux-x64.rpm?AuthParam=11232426132 -O jdk-7u40-linux-x64.rpm\n    rpm -Uvh jdk-7u40-linux-x64.rpm\n    \nOr install via yum/apt-get:\n\n    sudo yum install java-devel\n\nDownload Maven from http://maven.apache.org/download.cgi\n\n    wget http://ftp.heanet.ie/mirrors/www.apache.org/dist/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.tar.gz\n    sudo tar xzf apache-maven-*-bin.tar.gz -C /usr/local\n    cd /usr/local\n    sudo ln -s apache-maven-* maven\n    sudo vi /etc/profile.d/maven.sh\n\nAdd the following to `maven.sh`:\n\n    export M2_HOME=/usr/local/maven\n    export PATH=${M2_HOME}/bin:${PATH}\n\nReload bash and test mvn:\n\n    bash\n    mvn -version\n\n### 3. Set Up YCSB\n\nDownload the YCSB zip file and compile:\n\n    curl -O --location https://github.com/brianfrankcooper/YCSB/releases/download/0.5.0/ycsb-0.5.0.tar.gz\n    tar xfvz ycsb-0.5.0.tar.gz\n    cd ycsb-0.5.0\n\n### 4. Run YCSB\n\nNow you are ready to run! First, use the asynchronous driver to load the data:\n\n    ./bin/ycsb load mongodb-async -s -P workloads/workloada > outputLoad.txt\n\nThen, run the workload:\n\n    ./bin/ycsb run mongodb-async -s -P workloads/workloada > outputRun.txt\n    \nSimilarly, to use the synchronous driver from MongoDB Inc., we load the data: \n\n    ./bin/ycsb load mongodb -s -P workloads/workloada > outputLoad.txt\n\nThen, run the workload:\n\n    ./bin/ycsb run mongodb -s -P workloads/workloada > outputRun.txt\n    \nSee the next section for the list of configuration parameters for MongoDB.\n\n## Log Level Control\nDue to the mongodb driver defaulting to a log level of DEBUG, a logback.xml file is included with this module that restricts the org.mongodb logging to WARN. 
You can control this by overriding the logback.xml and defining it in your ycsb command by adding this flag:\n\n```\nbin/ycsb run mongodb -jvm-args=\"-Dlogback.configurationFile=/path/to/logback.xml\"\n```\n\n## MongoDB Configuration Parameters\n\n- `mongodb.url`\n  - This should be a MongoDB URI or connection string. \n    - See http://docs.mongodb.org/manual/reference/connection-string/ for the standard options.\n    - For the complete set of options for the asynchronous driver see: \n      - http://www.allanbank.com/mongodb-async-driver/apidocs/index.html?com/allanbank/mongodb/MongoDbUri.html\n    - For the complete set of options for the synchronous driver see:\n      - http://api.mongodb.org/java/current/index.html?com/mongodb/MongoClientURI.html\n  - Default value is `mongodb://localhost:27017/ycsb?w=1`\n  - Default value of database is `ycsb`\n\n- `mongodb.batchsize`\n  - Useful for the insert workload as it will submit the inserts in batches, improving throughput.\n  - Default value is `1`.\n\n- `mongodb.upsert`\n  - Determines if the insert operation performs an update with the upsert operation or an insert. 
\n    Upserts have the advantage that they will continue to work for a partially loaded data set.\n  - Setting to `true` uses updates, `false` uses insert operations.\n  - Default value is `false`.\n\n- `mongodb.writeConcern`\n  - **Deprecated** - Use the `w` and `journal` options on the MongoDB URI provided by the `mongodb.url`.\n  - Allowed values are:\n    - `errors_ignored`\n    - `unacknowledged`\n    - `acknowledged`\n    - `journaled`\n    - `replica_acknowledged`\n    - `majority`\n  - Default value is `acknowledged`.\n \n- `mongodb.readPreference`\n  - **Deprecated** - Use the `readPreference` option on the MongoDB URI provided by the `mongodb.url`.\n  - Allowed values are:\n    - `primary`\n    - `primary_preferred`\n    - `secondary`\n    - `secondary_preferred`\n    - `nearest`\n  - Default value is `primary`.\n \n- `mongodb.maxconnections`\n  - **Deprecated** - Use the `maxPoolSize` option on the MongoDB URI provided by the `mongodb.url`.\n  - Default value is `100`.\n\n- `mongodb.threadsAllowedToBlockForConnectionMultiplier`\n  - **Deprecated** - Use the `waitQueueMultiple` option on the MongoDB URI provided by the `mongodb.url`.\n  - Default value is `5`.\n\nFor example:\n\n    ./bin/ycsb load mongodb-async -s -P workloads/workloada -p mongodb.url=mongodb://localhost:27017/ycsb?w=0\n\nTo run with the synchronous driver from MongoDB Inc.:\n\n    ./bin/ycsb load mongodb -s -P workloads/workloada -p mongodb.url=mongodb://localhost:27017/ycsb?w=0\n"
  },
  {
    "path": "mongodb/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>mongodb-binding</artifactId>\n  <name>MongoDB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.mongodb</groupId>\n      <artifactId>mongo-java-driver</artifactId>\n      <version>${mongodb.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.allanbank</groupId>\n      <artifactId>mongodb-async-driver</artifactId>\n      <version>${mongodb.async.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>ch.qos.logback</groupId>\n      <artifactId>logback-classic</artifactId>\n      <version>1.1.2</version>\n      
<scope>runtime</scope>\n    </dependency>\n\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n  <repositories>\n    <repository>\n      <releases>\n        <enabled>true</enabled>\n        <updatePolicy>always</updatePolicy>\n        <checksumPolicy>warn</checksumPolicy>\n      </releases>\n      <snapshots>\n        <enabled>false</enabled>\n        <updatePolicy>never</updatePolicy>\n        <checksumPolicy>fail</checksumPolicy>\n      </snapshots>\n      <id>allanbank</id>\n      <name>Allanbank Releases</name>\n      <url>http://www.allanbank.com/repo/</url>\n      <layout>default</layout>\n    </repository>\n  </repositories>\n</project>\n"
  },
  {
    "path": "mongodb/src/main/java/com/yahoo/ycsb/db/AsyncMongoDbClient.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport static com.allanbank.mongodb.builder.QueryBuilder.where;\n\nimport com.allanbank.mongodb.Durability;\nimport com.allanbank.mongodb.LockType;\nimport com.allanbank.mongodb.MongoClient;\nimport com.allanbank.mongodb.MongoClientConfiguration;\nimport com.allanbank.mongodb.MongoCollection;\nimport com.allanbank.mongodb.MongoDatabase;\nimport com.allanbank.mongodb.MongoDbUri;\nimport com.allanbank.mongodb.MongoFactory;\nimport com.allanbank.mongodb.MongoIterator;\nimport com.allanbank.mongodb.ReadPreference;\nimport com.allanbank.mongodb.bson.Document;\nimport com.allanbank.mongodb.bson.Element;\nimport com.allanbank.mongodb.bson.ElementType;\nimport com.allanbank.mongodb.bson.builder.BuilderFactory;\nimport com.allanbank.mongodb.bson.builder.DocumentBuilder;\nimport com.allanbank.mongodb.bson.element.BinaryElement;\nimport com.allanbank.mongodb.builder.BatchedWrite;\nimport com.allanbank.mongodb.builder.BatchedWriteMode;\nimport com.allanbank.mongodb.builder.Find;\nimport com.allanbank.mongodb.builder.Sort;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.Map;\nimport java.util.Properties;\nimport 
java.util.Set;\nimport java.util.Vector;\nimport java.util.concurrent.atomic.AtomicInteger;\n\n/**\n * MongoDB asynchronous client for YCSB framework using the <a\n * href=\"http://www.allanbank.com/mongodb-async-driver/\">Asynchronous Java\n * Driver</a>\n * <p>\n * See the <code>README.md</code> for configuration information.\n * </p>\n *\n * @author rjm\n * @see <a href=\"http://www.allanbank.com/mongodb-async-driver/\">Asynchronous\n *      Java Driver</a>\n */\npublic class AsyncMongoDbClient extends DB {\n\n  /** Used to include a field in a response. */\n  protected static final int INCLUDE = 1;\n\n  /** The database to use. */\n  private static String databaseName;\n\n  /** Thread local document builder. */\n  private static final ThreadLocal<DocumentBuilder> DOCUMENT_BUILDER =\n      new ThreadLocal<DocumentBuilder>() {\n        @Override\n        protected DocumentBuilder initialValue() {\n          return BuilderFactory.start();\n        }\n      };\n\n  /** Count of the number of times the client has been initialized. */\n  private static final AtomicInteger INIT_COUNT = new AtomicInteger(0);\n\n  /** The connection to MongoDB. */\n  private static MongoClient mongoClient;\n\n  /** The write concern for the requests. */\n  private static Durability writeConcern;\n\n  /** Which servers to use for reads. */\n  private static ReadPreference readPreference;\n\n  /** The MongoDB database to use. */\n  private MongoDatabase database;\n\n  /** The batch size to use for inserts. */\n  private static int batchSize;\n  \n  /** If true then use updates with the upsert option for inserts. */\n  private static boolean useUpsert;\n\n  /** The bulk inserts pending for the thread. */\n  private final BatchedWrite.Builder batchedWrite = BatchedWrite.builder()\n      .mode(BatchedWriteMode.REORDERED);\n\n  /** The number of writes in the batchedWrite. */\n  private int batchedWriteCount = 0;\n\n  /**\n   * Cleanup any state for this DB. 
Called once per DB instance; there is one DB\n   * instance per client thread.\n   */\n  @Override\n  public final void cleanup() throws DBException {\n    if (INIT_COUNT.decrementAndGet() == 0) {\n      try {\n        mongoClient.close();\n      } catch (final Exception e1) {\n        System.err.println(\"Could not close MongoDB connection pool: \"\n            + e1.toString());\n        e1.printStackTrace();\n        return;\n      } finally {\n        mongoClient = null;\n        database = null;\n      }\n    }\n  }\n\n  /**\n   * Delete a record from the database.\n   * \n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error. See this class's\n   *         description for a discussion of error codes.\n   */\n  @Override\n  public final Status delete(final String table, final String key) {\n    try {\n      final MongoCollection collection = database.getCollection(table);\n      final Document q = BuilderFactory.start().add(\"_id\", key).build();\n      final long res = collection.delete(q, writeConcern);\n      if (res == 0) {\n        System.err.println(\"Nothing deleted for key \" + key);\n        return Status.NOT_FOUND;\n      }\n      return Status.OK;\n    } catch (final Exception e) {\n      System.err.println(e.toString());\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Initialize any state for this DB. 
Called once per DB instance; there is one\n   * DB instance per client thread.\n   */\n  @Override\n  public final void init() throws DBException {\n    final int count = INIT_COUNT.incrementAndGet();\n\n    synchronized (AsyncMongoDbClient.class) {\n      final Properties props = getProperties();\n\n      if (mongoClient != null) {\n        database = mongoClient.getDatabase(databaseName);\n\n        // If there are more threads (count) than connections then the\n        // low-latency spin lock is not really needed as we will keep\n        // the connections occupied.\n        if (count > mongoClient.getConfig().getMaxConnectionCount()) {\n          mongoClient.getConfig().setLockType(LockType.MUTEX);\n        }\n\n        return;\n      }\n\n      // Set insert batchsize, default 1 - to be YCSB-original equivalent\n      batchSize = Integer.parseInt(props.getProperty(\"mongodb.batchsize\", \"1\"));\n      \n      // Set whether inserts are done as upserts. Defaults to false.\n      useUpsert = Boolean.parseBoolean(\n          props.getProperty(\"mongodb.upsert\", \"false\"));\n      \n      // Just use the standard connection format URL\n      // http://docs.mongodb.org/manual/reference/connection-string/\n      // to configure the client.\n      String url =\n          props\n              .getProperty(\"mongodb.url\", \"mongodb://localhost:27017/ycsb?w=1\");\n      if (!url.startsWith(\"mongodb://\")) {\n        System.err.println(\"ERROR: Invalid URL: '\" + url\n            + \"'. Must be of the form \"\n            + \"'mongodb://<host1>:<port1>,<host2>:<port2>/database?\"\n            + \"options'. 
See \"\n            + \"http://docs.mongodb.org/manual/reference/connection-string/.\");\n        System.exit(1);\n      }\n\n      MongoDbUri uri = new MongoDbUri(url);\n\n      try {\n        databaseName = uri.getDatabase();\n        if ((databaseName == null) || databaseName.isEmpty()) {\n          // Default database is \"ycsb\" if database is not\n          // specified in URL\n          databaseName = \"ycsb\";\n        }\n\n        mongoClient = MongoFactory.createClient(uri);\n\n        MongoClientConfiguration config = mongoClient.getConfig();\n        if (!url.toLowerCase().contains(\"locktype=\")) {\n          config.setLockType(LockType.LOW_LATENCY_SPIN); // assumed...\n        }\n\n        readPreference = config.getDefaultReadPreference();\n        writeConcern = config.getDefaultDurability();\n\n        database = mongoClient.getDatabase(databaseName);\n\n        System.out.println(\"mongo connection created with \" + url);\n      } catch (final Exception e1) {\n        System.err\n            .println(\"Could not initialize MongoDB connection pool for Loader: \"\n                + e1.toString());\n        e1.printStackTrace();\n        return;\n      }\n    }\n  }\n\n  /**\n   * Insert a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key.\n   * \n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to insert.\n   * @param values\n   *          A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error. 
See the {@link DB}\n   *         class's description for a discussion of error codes.\n   */\n  @Override\n  public final Status insert(final String table, final String key,\n      final Map<String, ByteIterator> values) {\n    try {\n      final MongoCollection collection = database.getCollection(table);\n      final DocumentBuilder toInsert =\n          DOCUMENT_BUILDER.get().reset().add(\"_id\", key);\n      final Document query = toInsert.build();\n      for (final Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n        toInsert.add(entry.getKey(), entry.getValue().toArray());\n      }\n\n      // Do an upsert.\n      if (batchSize <= 1) {\n        long result;\n        if (useUpsert) {\n          result = collection.update(query, toInsert,\n              /* multi= */false, /* upsert= */true, writeConcern);\n        } else {\n          // Return is not stable pre-SERVER-4381. No exception is success.\n          collection.insert(writeConcern, toInsert);\n          result = 1;\n        }\n        return result == 1 ? 
Status.OK : Status.NOT_FOUND;\n      }\n\n      // Use a bulk insert.\n      try {\n        if (useUpsert) {\n          batchedWrite.update(query, toInsert, /* multi= */false, \n              /* upsert= */true);\n        } else {\n          batchedWrite.insert(toInsert);\n        }\n        batchedWriteCount += 1;\n\n        if (batchedWriteCount < batchSize) {\n          return Status.BATCHED_OK;\n        }\n\n        long count = collection.write(batchedWrite);\n        if (count == batchedWriteCount) {\n          batchedWrite.reset().mode(BatchedWriteMode.REORDERED);\n          batchedWriteCount = 0;\n          return Status.OK;\n        }\n\n        System.err.println(\"Number of inserted documents doesn't match the \"\n            + \"number sent, \" + count + \" inserted, sent \" + batchedWriteCount);\n        batchedWrite.reset().mode(BatchedWriteMode.REORDERED);\n        batchedWriteCount = 0;\n        return Status.ERROR;\n      } catch (Exception e) {\n        System.err.println(\"Exception while trying bulk insert with \"\n            + batchedWriteCount);\n        e.printStackTrace();\n        return Status.ERROR;\n      }\n    } catch (final Exception e) {\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Read a record from the database. 
Each field/value pair from the result will\n   * be stored in a HashMap.\n   * \n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to read.\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error or \"not found\".\n   */\n  @Override\n  public final Status read(final String table, final String key,\n      final Set<String> fields, final Map<String, ByteIterator> result) {\n    try {\n      final MongoCollection collection = database.getCollection(table);\n      final DocumentBuilder query =\n          DOCUMENT_BUILDER.get().reset().add(\"_id\", key);\n\n      Document queryResult = null;\n      if (fields != null) {\n        final DocumentBuilder fieldsToReturn = BuilderFactory.start();\n        final Iterator<String> iter = fields.iterator();\n        while (iter.hasNext()) {\n          fieldsToReturn.add(iter.next(), 1);\n        }\n\n        final Find.Builder fb = new Find.Builder(query);\n        fb.projection(fieldsToReturn);\n        fb.setLimit(1);\n        fb.setBatchSize(1);\n        fb.readPreference(readPreference);\n\n        final MongoIterator<Document> ci = collection.find(fb.build());\n        if (ci.hasNext()) {\n          queryResult = ci.next();\n          ci.close();\n        }\n      } else {\n        queryResult = collection.findOne(query);\n      }\n\n      if (queryResult != null) {\n        fillMap(result, queryResult);\n      }\n      return queryResult != null ? Status.OK : Status.NOT_FOUND;\n    } catch (final Exception e) {\n      System.err.println(e.toString());\n      return Status.ERROR;\n    }\n\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. 
Each field/value\n   * pair from the result will be stored in a HashMap.\n   * \n   * @param table\n   *          The name of the table\n   * @param startkey\n   *          The record key of the first record to read.\n   * @param recordcount\n   *          The number of records to read\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A Vector of HashMaps, where each HashMap is a set field/value\n   *          pairs for one record\n   * @return Zero on success, a non-zero error code on error. See the {@link DB}\n   *         class's description for a discussion of error codes.\n   */\n  @Override\n  public final Status scan(final String table, final String startkey,\n      final int recordcount, final Set<String> fields,\n      final Vector<HashMap<String, ByteIterator>> result) {\n    try {\n      final MongoCollection collection = database.getCollection(table);\n\n      final Find.Builder find =\n          Find.builder().query(where(\"_id\").greaterThanOrEqualTo(startkey))\n              .limit(recordcount).batchSize(recordcount).sort(Sort.asc(\"_id\"))\n              .readPreference(readPreference);\n\n      if (fields != null) {\n        final DocumentBuilder fieldsDoc = BuilderFactory.start();\n        for (final String field : fields) {\n          fieldsDoc.add(field, INCLUDE);\n        }\n\n        find.projection(fieldsDoc);\n      }\n\n      result.ensureCapacity(recordcount);\n\n      final MongoIterator<Document> cursor = collection.find(find);\n      if (!cursor.hasNext()) {\n        System.err.println(\"Nothing found in scan for key \" + startkey);\n        return Status.NOT_FOUND;\n      }\n      while (cursor.hasNext()) {\n        // toMap() returns a Map but result.add() expects a\n        // Map<String,String>. 
Hence, the suppress warnings.\n        final Document doc = cursor.next();\n        final HashMap<String, ByteIterator> docAsMap =\n            new HashMap<String, ByteIterator>();\n\n        fillMap(docAsMap, doc);\n\n        result.add(docAsMap);\n      }\n\n      return Status.OK;\n    } catch (final Exception e) {\n      System.err.println(e.toString());\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Update a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key, overwriting any existing values with the same field name.\n   * \n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to write.\n   * @param values\n   *          A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error. See the {@link DB}\n   *         class's description for a discussion of error codes.\n   */\n  @Override\n  public final Status update(final String table, final String key,\n      final Map<String, ByteIterator> values) {\n    try {\n      final MongoCollection collection = database.getCollection(table);\n      final DocumentBuilder query = BuilderFactory.start().add(\"_id\", key);\n      final DocumentBuilder update = BuilderFactory.start();\n      final DocumentBuilder fieldsToSet = update.push(\"$set\");\n\n      for (final Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n        fieldsToSet.add(entry.getKey(), entry.getValue().toArray());\n      }\n      final long res =\n          collection.update(query, update, false, false, writeConcern);\n      return writeConcern == Durability.NONE || res == 1 ? 
Status.OK : Status.NOT_FOUND;\n    } catch (final Exception e) {\n      System.err.println(e.toString());\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Fills the map with the ByteIterators from the document.\n   * \n   * @param result\n   *          The map to fill.\n   * @param queryResult\n   *          The document to fill from.\n   */\n  protected final void fillMap(final Map<String, ByteIterator> result,\n      final Document queryResult) {\n    for (final Element be : queryResult) {\n      if (be.getType() == ElementType.BINARY) {\n        result.put(be.getName(),\n            new BinaryByteArrayIterator((BinaryElement) be));\n      }\n    }\n  }\n\n  /**\n   * BinaryByteArrayIterator provides an adapter from a {@link BinaryElement} to\n   * a {@link ByteIterator}.\n   */\n  private static final class BinaryByteArrayIterator extends ByteIterator {\n\n    /** The binary data. */\n    private final BinaryElement binaryElement;\n\n    /** The current offset into the binary element. 
*/\n    private int offset;\n\n    /**\n     * Creates a new BinaryByteArrayIterator.\n     * \n     * @param element\n     *          The {@link BinaryElement} to iterate over.\n     */\n    public BinaryByteArrayIterator(final BinaryElement element) {\n      this.binaryElement = element;\n      this.offset = 0;\n    }\n\n    /**\n     * {@inheritDoc}\n     * <p>\n     * Overridden to return the number of bytes remaining in the iterator.\n     * </p>\n     */\n    @Override\n    public long bytesLeft() {\n      return Math.max(0, binaryElement.length() - offset);\n    }\n\n    /**\n     * {@inheritDoc}\n     * <p>\n     * Overridden to return true if there is more data in the\n     * {@link BinaryElement}.\n     * </p>\n     */\n    @Override\n    public boolean hasNext() {\n      return (offset < binaryElement.length());\n    }\n\n    /**\n     * {@inheritDoc}\n     * <p>\n     * Overridden to return the next value and advance the iterator.\n     * </p>\n     */\n    @Override\n    public byte nextByte() {\n      final byte value = binaryElement.get(offset);\n      offset += 1;\n\n      return value;\n    }\n  }\n}\n"
  },
  {
    "path": "mongodb/src/main/java/com/yahoo/ycsb/db/MongoDbClient.java",
    "content": "/**\n * Copyright (c) 2012 - 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/*\n * MongoDB client binding for YCSB.\n *\n * Submitted by Yen Pai on 5/11/2010.\n *\n * https://gist.github.com/000a66b8db2caf42467b#file_mongo_database.java\n */\npackage com.yahoo.ycsb.db;\n\nimport com.mongodb.MongoClient;\nimport com.mongodb.MongoClientURI;\nimport com.mongodb.ReadPreference;\nimport com.mongodb.WriteConcern;\nimport com.mongodb.client.FindIterable;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoCursor;\nimport com.mongodb.client.MongoDatabase;\nimport com.mongodb.client.model.InsertManyOptions;\nimport com.mongodb.client.model.UpdateOneModel;\nimport com.mongodb.client.model.UpdateOptions;\nimport com.mongodb.client.result.DeleteResult;\nimport com.mongodb.client.result.UpdateResult;\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\nimport org.bson.Document;\nimport org.bson.types.Binary;\n\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.concurrent.atomic.AtomicInteger;\n\n/**\n * MongoDB binding for YCSB framework using the MongoDB Inc. 
<a\n * href=\"http://docs.mongodb.org/ecosystem/drivers/java/\">driver</a>\n * <p>\n * See the <code>README.md</code> for configuration information.\n * </p>\n * \n * @author ypai\n * @see <a href=\"http://docs.mongodb.org/ecosystem/drivers/java/\">MongoDB Inc.\n *      driver</a>\n */\npublic class MongoDbClient extends DB {\n\n  /** Used to include a field in a response. */\n  private static final Integer INCLUDE = Integer.valueOf(1);\n\n  /** The options to use for inserting many documents. */\n  private static final InsertManyOptions INSERT_UNORDERED =\n      new InsertManyOptions().ordered(false);\n\n  /** The options to use for inserting a single document. */\n  private static final UpdateOptions UPDATE_WITH_UPSERT = new UpdateOptions()\n      .upsert(true);\n\n  /**\n   * The database name to access.\n   */\n  private static String databaseName;\n\n  /** The database to access. */\n  private static MongoDatabase database;\n\n  /**\n   * Count the number of times initialized to teardown on the last\n   * {@link #cleanup()}.\n   */\n  private static final AtomicInteger INIT_COUNT = new AtomicInteger(0);\n\n  /** A singleton Mongo instance. */\n  private static MongoClient mongoClient;\n\n  /** The default read preference for the test. */\n  private static ReadPreference readPreference;\n\n  /** The default write concern for the test. */\n  private static WriteConcern writeConcern;\n\n  /** The batch size to use for inserts. */\n  private static int batchSize;\n\n  /** If true then use updates with the upsert option for inserts. */\n  private static boolean useUpsert;\n\n  /** The bulk inserts pending for the thread. */\n  private final List<Document> bulkInserts = new ArrayList<Document>();\n\n  /**\n   * Cleanup any state for this DB. 
Called once per DB instance; there is one DB\n   * instance per client thread.\n   */\n  @Override\n  public void cleanup() throws DBException {\n    if (INIT_COUNT.decrementAndGet() == 0) {\n      try {\n        mongoClient.close();\n      } catch (Exception e1) {\n        System.err.println(\"Could not close MongoDB connection pool: \"\n            + e1.toString());\n        e1.printStackTrace();\n        return;\n      } finally {\n        database = null;\n        mongoClient = null;\n      }\n    }\n  }\n\n  /**\n   * Delete a record from the database.\n   * \n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error. See the {@link DB}\n   *         class's description for a discussion of error codes.\n   */\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      MongoCollection<Document> collection = database.getCollection(table);\n\n      Document query = new Document(\"_id\", key);\n      DeleteResult result =\n          collection.withWriteConcern(writeConcern).deleteOne(query);\n      if (result.wasAcknowledged() && result.getDeletedCount() == 0) {\n        System.err.println(\"Nothing deleted for key \" + key);\n        return Status.NOT_FOUND;\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      System.err.println(e.toString());\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Initialize any state for this DB. 
Called once per DB instance; there is one\n   * DB instance per client thread.\n   */\n  @Override\n  public void init() throws DBException {\n    INIT_COUNT.incrementAndGet();\n    synchronized (INCLUDE) {\n      if (mongoClient != null) {\n        return;\n      }\n\n      Properties props = getProperties();\n\n      // Set the insert batch size. Defaults to 1 to match original YCSB\n      // behavior.\n      batchSize = Integer.parseInt(props.getProperty(\"batchsize\", \"1\"));\n\n      // Set if inserts are done as upserts. Defaults to false.\n      useUpsert = Boolean.parseBoolean(\n          props.getProperty(\"mongodb.upsert\", \"false\"));\n\n      // Just use the standard connection format URL\n      // http://docs.mongodb.org/manual/reference/connection-string/\n      // to configure the client.\n      String url = props.getProperty(\"mongodb.url\", null);\n      boolean defaultedUrl = false;\n      if (url == null) {\n        defaultedUrl = true;\n        url = \"mongodb://localhost:27017/ycsb?w=1\";\n      }\n\n      url = OptionsSupport.updateUrl(url, props);\n\n      if (!url.startsWith(\"mongodb://\")) {\n        System.err.println(\"ERROR: Invalid URL: '\" + url\n            + \"'. Must be of the form \"\n            + \"'mongodb://<host1>:<port1>,<host2>:<port2>/database?options'. 
\"\n            + \"http://docs.mongodb.org/manual/reference/connection-string/\");\n        System.exit(1);\n      }\n\n      try {\n        MongoClientURI uri = new MongoClientURI(url);\n\n        String uriDb = uri.getDatabase();\n        if (!defaultedUrl && (uriDb != null) && !uriDb.isEmpty()\n            && !\"admin\".equals(uriDb)) {\n          databaseName = uriDb;\n        } else {\n          // If no database is specified in URI, use \"ycsb\"\n          databaseName = \"ycsb\";\n        }\n\n        readPreference = uri.getOptions().getReadPreference();\n        writeConcern = uri.getOptions().getWriteConcern();\n\n        mongoClient = new MongoClient(uri);\n        database =\n            mongoClient.getDatabase(databaseName)\n                .withReadPreference(readPreference)\n                .withWriteConcern(writeConcern);\n\n        System.out.println(\"mongo client connection created with \" + url);\n      } catch (Exception e1) {\n        System.err\n            .println(\"Could not initialize MongoDB connection pool for Loader: \"\n                + e1.toString());\n        e1.printStackTrace();\n        return;\n      }\n    }\n  }\n\n  /**\n   * Insert a record in the database. Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key.\n   * \n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to insert.\n   * @param values\n   *          A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error. 
See the {@link DB}\n   *         class's description for a discussion of error codes.\n   */\n  @Override\n  public Status insert(String table, String key,\n      Map<String, ByteIterator> values) {\n    try {\n      MongoCollection<Document> collection = database.getCollection(table);\n      Document toInsert = new Document(\"_id\", key);\n      for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n        toInsert.put(entry.getKey(), entry.getValue().toArray());\n      }\n\n      if (batchSize == 1) {\n        if (useUpsert) {\n          // this is effectively an insert, but using an upsert instead due\n          // to current inability of the framework to clean up after itself\n          // between test runs.\n          collection.replaceOne(new Document(\"_id\", toInsert.get(\"_id\")),\n              toInsert, UPDATE_WITH_UPSERT);\n        } else {\n          collection.insertOne(toInsert);\n        }\n      } else {\n        bulkInserts.add(toInsert);\n        if (bulkInserts.size() == batchSize) {\n          if (useUpsert) {\n            List<UpdateOneModel<Document>> updates = \n                new ArrayList<UpdateOneModel<Document>>(bulkInserts.size());\n            for (Document doc : bulkInserts) {\n              updates.add(new UpdateOneModel<Document>(\n                  new Document(\"_id\", doc.get(\"_id\")),\n                  doc, UPDATE_WITH_UPSERT));\n            }\n            collection.bulkWrite(updates);\n          } else {\n            collection.insertMany(bulkInserts, INSERT_UNORDERED);\n          }\n          bulkInserts.clear();\n        } else {\n          return Status.BATCHED_OK;\n        }\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      System.err.println(\"Exception while trying bulk insert with \"\n          + bulkInserts.size());\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n  }\n\n  /**\n   * Read a record from the database. 
Each field/value pair from the result will\n   * be stored in a HashMap.\n   * \n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to read.\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error or \"not found\".\n   */\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n      Map<String, ByteIterator> result) {\n    try {\n      MongoCollection<Document> collection = database.getCollection(table);\n      Document query = new Document(\"_id\", key);\n\n      FindIterable<Document> findIterable = collection.find(query);\n\n      if (fields != null) {\n        Document projection = new Document();\n        for (String field : fields) {\n          projection.put(field, INCLUDE);\n        }\n        findIterable.projection(projection);\n      }\n\n      Document queryResult = findIterable.first();\n\n      if (queryResult != null) {\n        fillMap(result, queryResult);\n      }\n      return queryResult != null ? Status.OK : Status.NOT_FOUND;\n    } catch (Exception e) {\n      System.err.println(e.toString());\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. Each field/value\n   * pair from the result will be stored in a HashMap.\n   * \n   * @param table\n   *          The name of the table\n   * @param startkey\n   *          The record key of the first record to read.\n   * @param recordcount\n   *          The number of records to read\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A Vector of HashMaps, where each HashMap is a set field/value\n   *          pairs for one record\n   * @return Zero on success, a non-zero error code on error. 
See the {@link DB}\n   *         class's description for a discussion of error codes.\n   */\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    MongoCursor<Document> cursor = null;\n    try {\n      MongoCollection<Document> collection = database.getCollection(table);\n\n      Document scanRange = new Document(\"$gte\", startkey);\n      Document query = new Document(\"_id\", scanRange);\n      Document sort = new Document(\"_id\", INCLUDE);\n\n      FindIterable<Document> findIterable =\n          collection.find(query).sort(sort).limit(recordcount);\n\n      if (fields != null) {\n        Document projection = new Document();\n        for (String fieldName : fields) {\n          projection.put(fieldName, INCLUDE);\n        }\n        findIterable.projection(projection);\n      }\n\n      cursor = findIterable.iterator();\n\n      if (!cursor.hasNext()) {\n        System.err.println(\"Nothing found in scan for key \" + startkey);\n        return Status.ERROR;\n      }\n\n      result.ensureCapacity(recordcount);\n\n      while (cursor.hasNext()) {\n        HashMap<String, ByteIterator> resultMap =\n            new HashMap<String, ByteIterator>();\n\n        Document obj = cursor.next();\n        fillMap(resultMap, obj);\n\n        result.add(resultMap);\n      }\n\n      return Status.OK;\n    } catch (Exception e) {\n      System.err.println(e.toString());\n      return Status.ERROR;\n    } finally {\n      if (cursor != null) {\n        cursor.close();\n      }\n    }\n  }\n\n  /**\n   * Update a record in the database. 
Any field/value pairs in the specified\n   * values HashMap will be written into the record with the specified record\n   * key, overwriting any existing values with the same field name.\n   * \n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to write.\n   * @param values\n   *          A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error. See this class's\n   *         description for a discussion of error codes.\n   */\n  @Override\n  public Status update(String table, String key,\n      Map<String, ByteIterator> values) {\n    try {\n      MongoCollection<Document> collection = database.getCollection(table);\n\n      Document query = new Document(\"_id\", key);\n      Document fieldsToSet = new Document();\n      for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n        fieldsToSet.put(entry.getKey(), entry.getValue().toArray());\n      }\n      Document update = new Document(\"$set\", fieldsToSet);\n\n      UpdateResult result = collection.updateOne(query, update);\n      if (result.wasAcknowledged() && result.getMatchedCount() == 0) {\n        System.err.println(\"Nothing updated for key \" + key);\n        return Status.NOT_FOUND;\n      }\n      return Status.OK;\n    } catch (Exception e) {\n      System.err.println(e.toString());\n      return Status.ERROR;\n    }\n  }\n\n  /**\n   * Fills the map with the values from the Document.\n   * \n   * @param resultMap\n   *          The map to fill.\n   * @param obj\n   *          The object to copy values from.\n   */\n  protected void fillMap(Map<String, ByteIterator> resultMap, Document obj) {\n    for (Map.Entry<String, Object> entry : obj.entrySet()) {\n      if (entry.getValue() instanceof Binary) {\n        resultMap.put(entry.getKey(),\n            new ByteArrayByteIterator(((Binary) entry.getValue()).getData()));\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "mongodb/src/main/java/com/yahoo/ycsb/db/OptionsSupport.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport java.util.Properties;\n\n/**\n * OptionsSupport provides methods for handling legacy options.\n *\n * @author rjm\n */\npublic final class OptionsSupport {\n\n  /** Value for an unavailable property. */\n  private static final String UNAVAILABLE = \"n/a\";\n\n  /**\n   * Updates the URL with the appropriate attributes if legacy properties are\n   * set and the URL does not have the property already set.\n   *\n   * @param url\n   *          The URL to update.\n   * @param props\n   *          The legacy properties.\n   * @return The updated URL.\n   */\n  public static String updateUrl(String url, Properties props) {\n    String result = url;\n\n    // max connections.\n    final String maxConnections =\n        props.getProperty(\"mongodb.maxconnections\", UNAVAILABLE).toLowerCase();\n    if (!UNAVAILABLE.equals(maxConnections)) {\n      result = addUrlOption(result, \"maxPoolSize\", maxConnections);\n    }\n\n    // Blocked thread multiplier.\n    final String threadsAllowedToBlockForConnectionMultiplier =\n        props\n            .getProperty(\n                \"mongodb.threadsAllowedToBlockForConnectionMultiplier\",\n                UNAVAILABLE).toLowerCase();\n    if (!UNAVAILABLE.equals(threadsAllowedToBlockForConnectionMultiplier)) {\n      
result =\n          addUrlOption(result, \"waitQueueMultiple\",\n              threadsAllowedToBlockForConnectionMultiplier);\n    }\n\n    // write concern\n    String writeConcernType =\n        props.getProperty(\"mongodb.writeConcern\", UNAVAILABLE).toLowerCase();\n    if (!UNAVAILABLE.equals(writeConcernType)) {\n      if (\"errors_ignored\".equals(writeConcernType)) {\n        result = addUrlOption(result, \"w\", \"0\");\n      } else if (\"unacknowledged\".equals(writeConcernType)) {\n        result = addUrlOption(result, \"w\", \"0\");\n      } else if (\"acknowledged\".equals(writeConcernType)) {\n        result = addUrlOption(result, \"w\", \"1\");\n      } else if (\"journaled\".equals(writeConcernType)) {\n        result = addUrlOption(result, \"journal\", \"true\"); // this is the\n        // documented option\n        // name\n        result = addUrlOption(result, \"j\", \"true\"); // but keep this until\n        // MongoDB Java driver\n        // supports \"journal\" option\n      } else if (\"replica_acknowledged\".equals(writeConcernType)) {\n        result = addUrlOption(result, \"w\", \"2\");\n      } else if (\"majority\".equals(writeConcernType)) {\n        result = addUrlOption(result, \"w\", \"majority\");\n      } else {\n        System.err.println(\"WARNING: Invalid writeConcern: '\"\n            + writeConcernType + \"' will be ignored. 
\"\n            + \"Must be one of [ unacknowledged | acknowledged | \"\n            + \"journaled | replica_acknowledged | majority ]\");\n      }\n    }\n\n    // read preference\n    String readPreferenceType =\n        props.getProperty(\"mongodb.readPreference\", UNAVAILABLE).toLowerCase();\n    if (!UNAVAILABLE.equals(readPreferenceType)) {\n      if (\"primary\".equals(readPreferenceType)) {\n        result = addUrlOption(result, \"readPreference\", \"primary\");\n      } else if (\"primary_preferred\".equals(readPreferenceType)) {\n        result = addUrlOption(result, \"readPreference\", \"primaryPreferred\");\n      } else if (\"secondary\".equals(readPreferenceType)) {\n        result = addUrlOption(result, \"readPreference\", \"secondary\");\n      } else if (\"secondary_preferred\".equals(readPreferenceType)) {\n        result = addUrlOption(result, \"readPreference\", \"secondaryPreferred\");\n      } else if (\"nearest\".equals(readPreferenceType)) {\n        result = addUrlOption(result, \"readPreference\", \"nearest\");\n      } else {\n        System.err.println(\"WARNING: Invalid readPreference: '\"\n            + readPreferenceType + \"' will be ignored. 
\"\n            + \"Must be one of [ primary | primary_preferred | \"\n            + \"secondary | secondary_preferred | nearest ]\");\n      }\n    }\n\n    return result;\n  }\n\n  /**\n   * Adds an option to the url if it does not already contain the option.\n   *\n   * @param url\n   *          The URL to append the options to.\n   * @param name\n   *          The name of the option.\n   * @param value\n   *          The value for the option.\n   * @return The updated URL.\n   */\n  private static String addUrlOption(String url, String name, String value) {\n    String fullName = name + \"=\";\n    if (!url.contains(fullName)) {\n      if (url.contains(\"?\")) {\n        return url + \"&\" + fullName + value;\n      }\n      return url + \"?\" + fullName + value;\n    }\n    return url;\n  }\n\n  /**\n   * Hidden Constructor.\n   */\n  private OptionsSupport() {\n    // Nothing.\n  }\n}\n"
  },
  {
    "path": "mongodb/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"https://www.mongodb.org/\">MongoDB</a>.\n * For additional details on using and configuring the binding see the \n * accompanying <a \n * href=\"https://github.com/brianfrankcooper/YCSB/blob/master/mongodb/README.md\"\n * >README.md</a>.\n * <p>\n * A YCSB binding is provided for both the\n * <a href=\"http://www.allanbank.com/mongodb-async-driver/\">Asynchronous\n * Java Driver</a> and the MongoDB Inc.\n * <a href=\"http://docs.mongodb.org/ecosystem/drivers/java/\">driver</a>.\n * </p>\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "mongodb/src/main/resources/log4j.properties",
    "content": "# Copyright (c) 2016 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n#define the console appender\nlog4j.appender.consoleAppender = org.apache.log4j.ConsoleAppender\n\n# now define the layout for the appender\nlog4j.appender.consoleAppender.layout = org.apache.log4j.PatternLayout\nlog4j.appender.consoleAppender.layout.ConversionPattern=%-4r [%t] %-5p %c %x -%m%n\n\n# now map our console appender as a root logger, means all log messages will go\n# to this appender\nlog4j.rootLogger = INFO, consoleAppender\n"
  },
  {
    "path": "mongodb/src/main/resources/logback.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n<configuration>\n\t<appender name=\"STDOUT\" class=\"ch.qos.logback.core.ConsoleAppender\">\n\t\t<encoder>\n\t\t\t<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>\n\t\t</encoder>\n\t</appender>\n\n\t<logger name=\"org.mongodb\" level=\"WARN\">\n\t\t<appender-ref ref=\"STDOUT\"/>\n\t</logger>\n\n\t<root level=\"INFO\">\n\t\t<appender-ref ref=\"STDOUT\"/>\n\t</root>\n</configuration>\n"
  },
  {
    "path": "mongodb/src/test/java/com/yahoo/ycsb/db/AbstractDBTestCases.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport static org.hamcrest.CoreMatchers.is;\nimport static org.hamcrest.CoreMatchers.not;\nimport static org.hamcrest.CoreMatchers.notNullValue;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertThat;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assume.assumeNoException;\n\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.Status;\n\nimport org.junit.BeforeClass;\nimport org.junit.Test;\n\nimport java.io.IOException;\nimport java.net.InetAddress;\nimport java.net.Socket;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\n/**\n * AbstractDBTestCases runs the basic DB test cases.\n * <p>\n * The tests will be skipped if MongoDB is not running on port 27017 on the\n * local machine. See the README.md for how to get MongoDB running.\n * </p>\n */\n@SuppressWarnings(\"boxing\")\npublic abstract class AbstractDBTestCases {\n\n  /** The default port for MongoDB. 
*/\n  private static final int MONGODB_DEFAULT_PORT = 27017;\n\n  /**\n   * Verifies the mongod process (or some process) is running on port 27017; if\n   * not, the tests are skipped.\n   */\n  @BeforeClass\n  public static void setUpBeforeClass() {\n    // Test if we can connect.\n    Socket socket = null;\n    try {\n      // Connect\n      socket = new Socket(InetAddress.getLocalHost(), MONGODB_DEFAULT_PORT);\n      assertThat(\"Socket is not bound.\", socket.getLocalPort(), not(-1));\n    } catch (IOException connectFailed) {\n      assumeNoException(\"MongoDB is not running. Skipping tests.\",\n          connectFailed);\n    } finally {\n      if (socket != null) {\n        try {\n          socket.close();\n        } catch (IOException ignore) {\n          // Ignore.\n        }\n      }\n      socket = null;\n    }\n  }\n\n  /**\n   * Test method for {@link DB#insert}, {@link DB#read}, and {@link DB#delete}.\n   */\n  @Test\n  public void testInsertReadDelete() {\n    final DB client = getDB();\n\n    final String table = getClass().getSimpleName();\n    final String id = \"delete\";\n\n    HashMap<String, ByteIterator> inserted =\n        new HashMap<String, ByteIterator>();\n    inserted.put(\"a\", new ByteArrayByteIterator(new byte[] { 1, 2, 3, 4 }));\n    Status result = client.insert(table, id, inserted);\n    assertThat(\"Insert did not return success (0).\", result, is(Status.OK));\n\n    HashMap<String, ByteIterator> read = new HashMap<String, ByteIterator>();\n    Set<String> keys = Collections.singleton(\"a\");\n    result = client.read(table, id, keys, read);\n    assertThat(\"Read did not return success (0).\", result, is(Status.OK));\n    for (String key : keys) {\n      ByteIterator iter = read.get(key);\n\n      assertThat(\"Did not read the inserted field: \" + key, iter,\n          notNullValue());\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 1)));\n      assertTrue(iter.hasNext());\n      
assertThat(iter.nextByte(), is(Byte.valueOf((byte) 2)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 3)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 4)));\n      assertFalse(iter.hasNext());\n    }\n\n    result = client.delete(table, id);\n    assertThat(\"Delete did not return success (0).\", result, is(Status.OK));\n\n    read.clear();\n    result = client.read(table, id, null, read);\n    assertThat(\"Read, after delete, did not return not found (1).\", result,\n        is(Status.NOT_FOUND));\n    assertThat(\"Found the deleted fields.\", read.size(), is(0));\n\n    result = client.delete(table, id);\n    assertThat(\"Delete did not return not found (1).\", result, is(Status.NOT_FOUND));\n  }\n\n  /**\n   * Test method for {@link DB#insert}, {@link DB#read}, and {@link DB#update} .\n   */\n  @Test\n  public void testInsertReadUpdate() {\n    DB client = getDB();\n\n    final String table = getClass().getSimpleName();\n    final String id = \"update\";\n\n    HashMap<String, ByteIterator> inserted =\n        new HashMap<String, ByteIterator>();\n    inserted.put(\"a\", new ByteArrayByteIterator(new byte[] { 1, 2, 3, 4 }));\n    Status result = client.insert(table, id, inserted);\n    assertThat(\"Insert did not return success (0).\", result, is(Status.OK));\n\n    HashMap<String, ByteIterator> read = new HashMap<String, ByteIterator>();\n    Set<String> keys = Collections.singleton(\"a\");\n    result = client.read(table, id, keys, read);\n    assertThat(\"Read did not return success (0).\", result, is(Status.OK));\n    for (String key : keys) {\n      ByteIterator iter = read.get(key);\n\n      assertThat(\"Did not read the inserted field: \" + key, iter,\n          notNullValue());\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 1)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), 
is(Byte.valueOf((byte) 2)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 3)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 4)));\n      assertFalse(iter.hasNext());\n    }\n\n    HashMap<String, ByteIterator> updated = new HashMap<String, ByteIterator>();\n    updated.put(\"a\", new ByteArrayByteIterator(new byte[] { 5, 6, 7, 8 }));\n    result = client.update(table, id, updated);\n    assertThat(\"Update did not return success (0).\", result, is(Status.OK));\n\n    read.clear();\n    result = client.read(table, id, null, read);\n    assertThat(\"Read, after update, did not return success (0).\", result, is(Status.OK));\n    for (String key : keys) {\n      ByteIterator iter = read.get(key);\n\n      assertThat(\"Did not read the inserted field: \" + key, iter,\n          notNullValue());\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 5)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 6)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 7)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 8)));\n      assertFalse(iter.hasNext());\n    }\n  }\n\n  /**\n   * Test method for {@link DB#insert}, {@link DB#read}, and {@link DB#update} .\n   */\n  @Test\n  public void testInsertReadUpdateWithUpsert() {\n    Properties props = new Properties();\n    props.setProperty(\"mongodb.upsert\", \"true\");\n    DB client = getDB(props);\n\n    final String table = getClass().getSimpleName();\n    final String id = \"updateWithUpsert\";\n\n    HashMap<String, ByteIterator> inserted =\n        new HashMap<String, ByteIterator>();\n    inserted.put(\"a\", new ByteArrayByteIterator(new byte[] { 1, 2, 3, 4 }));\n    Status result = client.insert(table, id, inserted);\n    assertThat(\"Insert did not return 
success (0).\", result, is(Status.OK));\n\n    HashMap<String, ByteIterator> read = new HashMap<String, ByteIterator>();\n    Set<String> keys = Collections.singleton(\"a\");\n    result = client.read(table, id, keys, read);\n    assertThat(\"Read did not return success (0).\", result, is(Status.OK));\n    for (String key : keys) {\n      ByteIterator iter = read.get(key);\n\n      assertThat(\"Did not read the inserted field: \" + key, iter,\n          notNullValue());\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 1)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 2)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 3)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 4)));\n      assertFalse(iter.hasNext());\n    }\n\n    HashMap<String, ByteIterator> updated = new HashMap<String, ByteIterator>();\n    updated.put(\"a\", new ByteArrayByteIterator(new byte[] { 5, 6, 7, 8 }));\n    result = client.update(table, id, updated);\n    assertThat(\"Update did not return success (0).\", result, is(Status.OK));\n\n    read.clear();\n    result = client.read(table, id, null, read);\n    assertThat(\"Read, after update, did not return success (0).\", result, is(Status.OK));\n    for (String key : keys) {\n      ByteIterator iter = read.get(key);\n\n      assertThat(\"Did not read the inserted field: \" + key, iter,\n          notNullValue());\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 5)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 6)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 7)));\n      assertTrue(iter.hasNext());\n      assertThat(iter.nextByte(), is(Byte.valueOf((byte) 8)));\n      assertFalse(iter.hasNext());\n    }\n  }\n\n  /**\n   
* Test method for {@link DB#scan}.\n   */\n  @Test\n  public void testScan() {\n    final DB client = getDB();\n\n    final String table = getClass().getSimpleName();\n\n    // Insert a bunch of documents.\n    for (int i = 0; i < 100; ++i) {\n      HashMap<String, ByteIterator> inserted =\n          new HashMap<String, ByteIterator>();\n      inserted.put(\"a\", new ByteArrayByteIterator(new byte[] {\n          (byte) (i & 0xFF), (byte) (i >> 8 & 0xFF), (byte) (i >> 16 & 0xFF),\n          (byte) (i >> 24 & 0xFF) }));\n      Status result = client.insert(table, padded(i), inserted);\n      assertThat(\"Insert did not return success (0).\", result, is(Status.OK));\n    }\n\n    Set<String> keys = Collections.singleton(\"a\");\n    Vector<HashMap<String, ByteIterator>> results =\n        new Vector<HashMap<String, ByteIterator>>();\n    Status result = client.scan(table, \"00050\", 5, null, results);\n    assertThat(\"Read did not return success (0).\", result, is(Status.OK));\n    assertThat(results.size(), is(5));\n    for (int i = 0; i < 5; ++i) {\n      Map<String, ByteIterator> read = results.get(i);\n      for (String key : keys) {\n        ByteIterator iter = read.get(key);\n\n        assertThat(\"Did not read the inserted field: \" + key, iter,\n            notNullValue());\n        assertTrue(iter.hasNext());\n        assertThat(iter.nextByte(), is(Byte.valueOf((byte) ((i + 50) & 0xFF))));\n        assertTrue(iter.hasNext());\n        assertThat(iter.nextByte(),\n            is(Byte.valueOf((byte) ((i + 50) >> 8 & 0xFF))));\n        assertTrue(iter.hasNext());\n        assertThat(iter.nextByte(),\n            is(Byte.valueOf((byte) ((i + 50) >> 16 & 0xFF))));\n        assertTrue(iter.hasNext());\n        assertThat(iter.nextByte(),\n            is(Byte.valueOf((byte) ((i + 50) >> 24 & 0xFF))));\n        assertFalse(iter.hasNext());\n      }\n    }\n  }\n\n  /**\n   * Gets the test DB.\n   * \n   * @return The test DB.\n   */\n  protected DB getDB() {\n    
return getDB(new Properties());\n  }\n\n  /**\n   * Gets the test DB.\n   * \n   * @param props \n   *    Properties to pass to the client.\n   * @return The test DB.\n   */\n  protected abstract DB getDB(Properties props);\n\n  /**\n   * Creates a zero padded integer.\n   * \n   * @param i\n   *          The integer to pad.\n   * @return The padded integer.\n   */\n  private String padded(int i) {\n    String result = String.valueOf(i);\n    while (result.length() < 5) {\n      result = \"0\" + result;\n    }\n    return result;\n  }\n\n}"
  },
  {
    "path": "mongodb/src/test/java/com/yahoo/ycsb/db/AsyncMongoDbClientTest.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport com.yahoo.ycsb.DB;\n\n/**\n * AsyncMongoDbClientTest runs the basic workload operations.\n */\npublic class AsyncMongoDbClientTest extends MongoDbClientTest {\n\n  @Override\n  protected DB instantiateClient() {\n    return new AsyncMongoDbClient();\n  }\n}\n"
  },
  {
    "path": "mongodb/src/test/java/com/yahoo/ycsb/db/MongoDbClientTest.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport static org.junit.Assume.assumeNoException;\n\nimport java.util.Properties;\n\nimport org.junit.After;\n\nimport com.yahoo.ycsb.DB;\n\n/**\n * MongoDbClientTest runs the basic workload operations.\n */\npublic class MongoDbClientTest extends AbstractDBTestCases {\n\n  /** The client to use. */\n  private DB myClient = null;\n\n  protected DB instantiateClient() {\n    return new MongoDbClient();\n  }\n\n  /**\n   * Stops the test client.\n   */\n  @After\n  public void tearDown() {\n    try {\n      myClient.cleanup();\n    } catch (Exception error) {\n      // Ignore.\n    } finally {\n      myClient = null;\n    }\n  }\n\n  /**\n   * {@inheritDoc}\n   * <p>\n   * Overridden to return the {@link MongoDbClient}.\n   * </p>\n   */\n  @Override\n  protected DB getDB(Properties props) {\n    if (myClient == null) {\n      myClient = instantiateClient();\n      myClient.setProperties(props);\n      try {\n        myClient.init();\n      } catch (Exception error) {\n        assumeNoException(error);\n      }\n    }\n    return myClient;\n  }\n}\n"
  },
  {
    "path": "mongodb/src/test/java/com/yahoo/ycsb/db/OptionsSupportTest.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport static com.yahoo.ycsb.db.OptionsSupport.updateUrl;\nimport static org.hamcrest.CoreMatchers.is;\nimport static org.junit.Assert.assertThat;\n\nimport java.util.Properties;\n\nimport org.junit.Test;\n\n/**\n * OptionsSupportTest provides tests for the OptionsSupport class.\n *\n * @author rjm\n */\npublic class OptionsSupportTest {\n\n  /**\n   * Test method for {@link OptionsSupport#updateUrl(String, Properties)} for\n   * {@code mongodb.maxconnections}.\n   */\n  @Test\n  public void testUpdateUrlMaxConnections() {\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/\",\n            props(\"mongodb.maxconnections\", \"1234\")),\n        is(\"mongodb://locahost:27017/?maxPoolSize=1234\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\",\n            props(\"mongodb.maxconnections\", \"1234\")),\n        is(\"mongodb://locahost:27017/?foo=bar&maxPoolSize=1234\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?maxPoolSize=1\",\n            props(\"mongodb.maxconnections\", \"1234\")),\n        is(\"mongodb://locahost:27017/?maxPoolSize=1\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\", props(\"foo\", \"1234\")),\n        is(\"mongodb://locahost:27017/?foo=bar\"));\n  
}\n\n  /**\n   * Test method for {@link OptionsSupport#updateUrl(String, Properties)} for\n   * {@code mongodb.threadsAllowedToBlockForConnectionMultiplier}.\n   */\n  @Test\n  public void testUpdateUrlWaitQueueMultiple() {\n    assertThat(\n        updateUrl(\n            \"mongodb://locahost:27017/\",\n            props(\"mongodb.threadsAllowedToBlockForConnectionMultiplier\",\n                \"1234\")),\n        is(\"mongodb://locahost:27017/?waitQueueMultiple=1234\"));\n    assertThat(\n        updateUrl(\n            \"mongodb://locahost:27017/?foo=bar\",\n            props(\"mongodb.threadsAllowedToBlockForConnectionMultiplier\",\n                \"1234\")),\n        is(\"mongodb://locahost:27017/?foo=bar&waitQueueMultiple=1234\"));\n    assertThat(\n        updateUrl(\n            \"mongodb://locahost:27017/?waitQueueMultiple=1\",\n            props(\"mongodb.threadsAllowedToBlockForConnectionMultiplier\",\n                \"1234\")), is(\"mongodb://locahost:27017/?waitQueueMultiple=1\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\", props(\"foo\", \"1234\")),\n        is(\"mongodb://locahost:27017/?foo=bar\"));\n  }\n\n  /**\n   * Test method for {@link OptionsSupport#updateUrl(String, Properties)} for\n   * {@code mongodb.writeConcern}.\n   */\n  @Test\n  public void testUpdateUrlWriteConcern() {\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/\",\n            props(\"mongodb.writeConcern\", \"errors_ignored\")),\n        is(\"mongodb://locahost:27017/?w=0\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\",\n            props(\"mongodb.writeConcern\", \"unacknowledged\")),\n        is(\"mongodb://locahost:27017/?foo=bar&w=0\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\",\n            props(\"mongodb.writeConcern\", \"acknowledged\")),\n        is(\"mongodb://locahost:27017/?foo=bar&w=1\"));\n    assertThat(\n        
updateUrl(\"mongodb://locahost:27017/?foo=bar\",\n            props(\"mongodb.writeConcern\", \"journaled\")),\n        is(\"mongodb://locahost:27017/?foo=bar&journal=true&j=true\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\",\n            props(\"mongodb.writeConcern\", \"replica_acknowledged\")),\n        is(\"mongodb://locahost:27017/?foo=bar&w=2\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\",\n            props(\"mongodb.writeConcern\", \"majority\")),\n        is(\"mongodb://locahost:27017/?foo=bar&w=majority\"));\n\n    // w already exists.\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?w=1\",\n            props(\"mongodb.writeConcern\", \"acknowledged\")),\n        is(\"mongodb://locahost:27017/?w=1\"));\n\n    // Unknown options\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\", props(\"foo\", \"1234\")),\n        is(\"mongodb://locahost:27017/?foo=bar\"));\n  }\n\n  /**\n   * Test method for {@link OptionsSupport#updateUrl(String, Properties)} for\n   * {@code mongodb.readPreference}.\n   */\n  @Test\n  public void testUpdateUrlReadPreference() {\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/\",\n            props(\"mongodb.readPreference\", \"primary\")),\n        is(\"mongodb://locahost:27017/?readPreference=primary\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\",\n            props(\"mongodb.readPreference\", \"primary_preferred\")),\n        is(\"mongodb://locahost:27017/?foo=bar&readPreference=primaryPreferred\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\",\n            props(\"mongodb.readPreference\", \"secondary\")),\n        is(\"mongodb://locahost:27017/?foo=bar&readPreference=secondary\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\",\n            props(\"mongodb.readPreference\", 
\"secondary_preferred\")),\n        is(\"mongodb://locahost:27017/?foo=bar&readPreference=secondaryPreferred\"));\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\",\n            props(\"mongodb.readPreference\", \"nearest\")),\n        is(\"mongodb://locahost:27017/?foo=bar&readPreference=nearest\"));\n\n    // readPreference already exists.\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?readPreference=primary\",\n            props(\"mongodb.readPreference\", \"secondary\")),\n        is(\"mongodb://locahost:27017/?readPreference=primary\"));\n\n    // Unknown options\n    assertThat(\n        updateUrl(\"mongodb://locahost:27017/?foo=bar\", props(\"foo\", \"1234\")),\n        is(\"mongodb://locahost:27017/?foo=bar\"));\n  }\n\n  /**\n   * Factory method for a {@link Properties} object.\n   * \n   * @param key\n   *          The key for the property to set.\n   * @param value\n   *          The value for the property to set.\n   * @return The {@link Properties} with the property added.\n   */\n  private Properties props(String key, String value) {\n    Properties props = new Properties();\n\n    props.setProperty(key, value);\n\n    return props;\n  }\n\n}\n"
  },
  {
    "path": "nosqldb/README.md",
    "content": "CONFIGURE\n\n$KVHOME is the directory containing the Oracle NoSQL Database package files.\n$KVROOT is a data directory.\n$YCSBHOME is the YCSB home directory.\n\n    mkdir $KVROOT\n    java -jar $KVHOME/lib/kvstore-1.2.123.jar makebootconfig \\\n       -root $KVROOT -port 5000 -admin 5001 -host localhost \\\n       -harange 5010,5020\n    java -jar $KVHOME/lib/kvstore-1.2.123.jar start -root $KVROOT\n    java -jar $KVHOME/lib/kvstore-1.2.123.jar runadmin \\\n        -port 5000 -host localhost -script $YCSBHOME/conf/script.txt\n\nBENCHMARK\n\n    $YCSBHOME/bin/ycsb load nosqldb -P workloads/workloada\n    $YCSBHOME/bin/ycsb run nosqldb -P workloads/workloada\n\nPROPERTIES\n\nSee $YCSBHOME/conf/nosqldb.properties.\n\nSTOP\n\n    java -jar $KVHOME/lib/kvstore-1.2.123.jar stop -root $KVROOT\n\nPlease refer to the Oracle NoSQL Database docs here:\nhttp://docs.oracle.com/cd/NOSQL/html/index.html\n"
  },
  {
    "path": "nosqldb/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n  \n  <artifactId>nosqldb-binding</artifactId>\n  <name>Oracle NoSQL Database Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.oracle.kv</groupId>\n      <artifactId>oracle-nosql-client</artifactId>\n      <version>3.0.5</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "nosqldb/src/main/conf/nosqldb.properties",
    "content": "# Copyright (c) 2012 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n#\n# Sample property file for Oracle NoSQL Database client\n#\n# Refer to the Javadoc of oracle.kv.KVStoreConfig class\n# for more details.\n#\n\n# Store name\n#storeName=kvstore\n\n# Comma-separated list of helper host/port pairs\n#helperHost=localhost:5000\n\n# Read consistency\n# \"ABSOLUTE\" or \"NONE_REQUIRED\"\n#consistency=NONE_REQUIRED\n\n# Write durability\n# \"COMMIT_NO_SYNC\", \"COMMIT_SYNC\" or \"COMMIT_WRITE_NO_SYNC\"\n#durability=COMMIT_NO_SYNC\n\n# Limitations on the number of active requests to a node\n#requestLimit.maxActiveRequests=100\n#requestLimit.requestThresholdPercent=90\n#requestLimit.nodeLimitPercent=80\n\n# Request timeout in seconds (positive integer)\n#requestTimeout=5\n"
  },
  {
    "path": "nosqldb/src/main/conf/script.txt",
    "content": "# Copyright (c) 2012 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# Simple configuration file; only one node in a system\nconfigure kvstore\nplan -execute -name \"Deploy DC\" deploy-datacenter \"Local\"\nplan -execute -name \"Deploy n01\" deploy-sn 1 localhost 5000\nplan -execute -name \"Deploy admin\" deploy-admin 1 5001\naddpool LocalPool\njoinpool LocalPool 1\nplan -execute -name \"Deploy the store\" deploy-store LocalPool 1 100\nquit\n"
  },
  {
    "path": "nosqldb/src/main/java/com/yahoo/ycsb/db/NoSqlDbClient.java",
    "content": "/**\n * Copyright (c) 2012 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.SortedMap;\nimport java.util.Vector;\nimport java.util.concurrent.TimeUnit;\n\nimport oracle.kv.Consistency;\nimport oracle.kv.Durability;\nimport oracle.kv.FaultException;\nimport oracle.kv.KVStore;\nimport oracle.kv.KVStoreConfig;\nimport oracle.kv.KVStoreFactory;\nimport oracle.kv.Key;\nimport oracle.kv.RequestLimitConfig;\nimport oracle.kv.Value;\nimport oracle.kv.ValueVersion;\n\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\n/**\n * A database interface layer for Oracle NoSQL Database.\n */\npublic class NoSqlDbClient extends DB {\n\n  private KVStore store;\n\n  private int getPropertyInt(Properties properties, String key,\n      int defaultValue) throws DBException {\n    String p = properties.getProperty(key);\n    int i = defaultValue;\n    if (p != null) {\n      try {\n        i = Integer.parseInt(p);\n      } catch (NumberFormatException e) {\n        throw new DBException(\"Illegal number format in \" + key + \" property\");\n      }\n    
}\n    return i;\n  }\n\n  @Override\n  public void init() throws DBException {\n    Properties properties = getProperties();\n\n    /* Mandatory properties */\n    String storeName = properties.getProperty(\"storeName\", \"kvstore\");\n    String[] helperHosts =\n        properties.getProperty(\"helperHost\", \"localhost:5000\").split(\",\");\n\n    KVStoreConfig config = new KVStoreConfig(storeName, helperHosts);\n\n    /* Optional properties */\n    String p;\n\n    p = properties.getProperty(\"consistency\");\n    if (p != null) {\n      if (p.equalsIgnoreCase(\"ABSOLUTE\")) {\n        config.setConsistency(Consistency.ABSOLUTE);\n      } else if (p.equalsIgnoreCase(\"NONE_REQUIRED\")) {\n        config.setConsistency(Consistency.NONE_REQUIRED);\n      } else {\n        throw new DBException(\"Illegal value in consistency property\");\n      }\n    }\n\n    p = properties.getProperty(\"durability\");\n    if (p != null) {\n      if (p.equalsIgnoreCase(\"COMMIT_NO_SYNC\")) {\n        config.setDurability(Durability.COMMIT_NO_SYNC);\n      } else if (p.equalsIgnoreCase(\"COMMIT_SYNC\")) {\n        config.setDurability(Durability.COMMIT_SYNC);\n      } else if (p.equalsIgnoreCase(\"COMMIT_WRITE_NO_SYNC\")) {\n        config.setDurability(Durability.COMMIT_WRITE_NO_SYNC);\n      } else {\n        throw new DBException(\"Illegal value in durability property\");\n      }\n    }\n\n    int maxActiveRequests =\n        getPropertyInt(properties, \"requestLimit.maxActiveRequests\",\n            RequestLimitConfig.DEFAULT_MAX_ACTIVE_REQUESTS);\n    int requestThresholdPercent =\n        getPropertyInt(properties, \"requestLimit.requestThresholdPercent\",\n            RequestLimitConfig.DEFAULT_REQUEST_THRESHOLD_PERCENT);\n    int nodeLimitPercent =\n        getPropertyInt(properties, \"requestLimit.nodeLimitPercent\",\n            RequestLimitConfig.DEFAULT_NODE_LIMIT_PERCENT);\n    RequestLimitConfig requestLimitConfig;\n    /*\n     * It is said that the constructor 
could throw NodeRequestLimitException in\n     * Javadoc, the exception is not provided\n     */\n    // try {\n    requestLimitConfig = new RequestLimitConfig(maxActiveRequests,\n        requestThresholdPercent, nodeLimitPercent);\n    // } catch (NodeRequestLimitException e) {\n    // throw new DBException(e);\n    // }\n    config.setRequestLimit(requestLimitConfig);\n\n    p = properties.getProperty(\"requestTimeout\");\n    if (p != null) {\n      long timeout = 1;\n      try {\n        timeout = Long.parseLong(p);\n      } catch (NumberFormatException e) {\n        throw new DBException(\n            \"Illegal number format in requestTimeout property\");\n      }\n      try {\n        // TODO Support other TimeUnit\n        config.setRequestTimeout(timeout, TimeUnit.SECONDS);\n      } catch (IllegalArgumentException e) {\n        throw new DBException(e);\n      }\n    }\n\n    try {\n      store = KVStoreFactory.getStore(config);\n    } catch (FaultException e) {\n      throw new DBException(e);\n    }\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    store.close();\n  }\n\n  /**\n   * Create a key object. 
We map \"table\" and (YCSB's) \"key\" to a major component\n   * of the oracle.kv.Key, and \"field\" to a minor component.\n   * \n   * @return An oracle.kv.Key object.\n   */\n  private static Key createKey(String table, String key, String field) {\n    List<String> majorPath = new ArrayList<String>();\n    majorPath.add(table);\n    majorPath.add(key);\n    if (field == null) {\n      return Key.createKey(majorPath);\n    }\n\n    return Key.createKey(majorPath, field);\n  }\n\n  private static Key createKey(String table, String key) {\n    return createKey(table, key, null);\n  }\n\n  private static String getFieldFromKey(Key key) {\n    return key.getMinorPath().get(0);\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    Key kvKey = createKey(table, key);\n    SortedMap<Key, ValueVersion> kvResult;\n    try {\n      kvResult = store.multiGet(kvKey, null, null);\n    } catch (FaultException e) {\n      System.err.println(e);\n      return Status.ERROR;\n    }\n\n    for (Map.Entry<Key, ValueVersion> entry : kvResult.entrySet()) {\n      /* If fields is null, read all fields */\n      String field = getFieldFromKey(entry.getKey());\n      if (fields != null && !fields.contains(field)) {\n        continue;\n      }\n      result.put(field,\n          new ByteArrayByteIterator(entry.getValue().getValue().getValue()));\n    }\n\n    return Status.OK;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    System.err.println(\"Oracle NoSQL Database does not support Scan semantics\");\n    return Status.ERROR;\n  }\n\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      Key kvKey = createKey(table, key, entry.getKey());\n      Value kvValue = 
Value.createValue(entry.getValue().toArray());\n      try {\n        store.put(kvKey, kvValue);\n      } catch (FaultException e) {\n        System.err.println(e);\n        return Status.ERROR;\n      }\n    }\n\n    return Status.OK;\n  }\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    return update(table, key, values);\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    Key kvKey = createKey(table, key);\n    try {\n      store.multiDelete(kvKey, null, null);\n    } catch (FaultException e) {\n      System.err.println(e);\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n}\n"
  },
  {
    "path": "nosqldb/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n * use this file except in compliance with the License. You may obtain a copy of\n * the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n * License for the specific language governing permissions and limitations under\n * the License. See accompanying LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\n * \"http://www.oracle.com/us/products/database/nosql/overview/index.html\">Oracle\n * 's NoSQL DB</a>.\n */\npackage com.yahoo.ycsb.db;\n"
  },
  {
    "path": "orientdb/README.md",
    "content": "<!--\nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on OrientDB running locally.\n\n### 1. Set Up YCSB\n\nClone the YCSB git repository and compile:\n\n    git clone https://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn clean package\n\n### 2. Run YCSB\n\nNow you are ready to run! First, load the data:\n\n    ./bin/ycsb load orientdb -s -P workloads/workloada\n\nThen, run the workload:\n\n    ./bin/ycsb run orientdb -s -P workloads/workloada\n\nSee the next section for the list of configuration parameters for OrientDB.\n\n## DB creation with the OrientDBClient\n\nThis client will create a database for you if the connection database you specify does not exist. You can also specify connection information to a preexisting database.\n\nYou can use the ```orientdb.newdb=true``` property to allow this client to drop and create a new database instance during the ```load``` phase.\n\nNOTE: understand that using the ```orientdb.newdb=true``` property will drop and recreate databases even if it was a preexisting instance.\n\nWARNING: Creating a new database will be done safely with multiple threads on a single YCSB instance, but is not guaranteed to work when launching multiple YCSB instances. 
In that scenario it is suggested that you create the db beforehand, or run the ```load``` phase with a single YCSB instance.\n\n## OrientDB Configuration Parameters\n\n* ```orientdb.url``` - (required) The address to your database.\n    * Supported storage types: memory, plocal, remote\n    * EX. ```plocal:/path/to/database```\n* ```orientdb.user``` - The user to connect to the database with.\n    * Default: ```admin```\n* ```orientdb.password``` - The password to connect to the database with.\n    * Default: ```admin```\n* ```orientdb.newdb``` - Overwrite the database if it already exists.\n    * Only affects the ```load``` phase.\n    * Default: ```false```\n* ```orientdb.remote.storagetype``` - Storage type of the database on the remote server\n    * This is only required if using a ```remote:``` connection url\n\n## Known Issues\n\n* There is a performance issue around the scan operation. This binding uses OIndex.iterateEntriesMajor() which will return unnecessarily large iterators. This has a performance impact as the recordcount goes up. There are ideas in the works to fix it, track it here: [#568](https://github.com/brianfrankcooper/YCSB/issues/568).\n* Iterator methods needed to perform scans are unsupported in the OrientDB API for remote database connections and so will return NOT_IMPLEMENTED status if attempted.\n
  },
  {
    "path": "orientdb/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>orientdb-binding</artifactId>\n  <name>OrientDB Binding</name>\n  <packaging>jar</packaging>\n  <repositories>\n    <repository>\n      <id>sonatype-nexus-snapshots</id>\n      <name>Sonatype Nexus Snapshots</name>\n      <url>https://oss.sonatype.org/content/repositories/snapshots</url>\n    </repository>\n  </repositories>\n  <properties>\n    <!-- Tests do not run on jdk9 -->\n    <skipJDK9Tests>true</skipJDK9Tests>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.orientechnologies</groupId>\n      <artifactId>orientdb-client</artifactId>\n      <version>${orientdb.version}</version>\n    
</dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-log4j12</artifactId>\n      <version>1.7.10</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "orientdb/src/main/java/com/yahoo/ycsb/db/OrientDBClient.java",
    "content": "/**\n * Copyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.orientechnologies.orient.client.remote.OServerAdmin;\nimport com.orientechnologies.orient.core.config.OGlobalConfiguration;\nimport com.orientechnologies.orient.core.db.OPartitionedDatabasePool;\nimport com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;\nimport com.orientechnologies.orient.core.dictionary.ODictionary;\nimport com.orientechnologies.orient.core.exception.OConcurrentModificationException;\nimport com.orientechnologies.orient.core.index.OIndexCursor;\nimport com.orientechnologies.orient.core.record.ORecord;\nimport com.orientechnologies.orient.core.record.impl.ODocument;\nimport com.yahoo.ycsb.*;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.io.File;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.concurrent.locks.Lock;\nimport java.util.concurrent.locks.ReentrantLock;\n\n/**\n * OrientDB client for YCSB framework.\n */\npublic class OrientDBClient extends DB {\n  private static final String URL_PROPERTY         = \"orientdb.url\";\n  private static final String URL_PROPERTY_DEFAULT =\n      \"plocal:.\" + File.separator + \"target\" + File.separator + \"databases\" + File.separator 
+ \"ycsb\";\n\n  private static final String USER_PROPERTY         = \"orientdb.user\";\n  private static final String USER_PROPERTY_DEFAULT = \"admin\";\n\n  private static final String PASSWORD_PROPERTY         = \"orientdb.password\";\n  private static final String PASSWORD_PROPERTY_DEFAULT = \"admin\";\n\n  private static final String NEWDB_PROPERTY         = \"orientdb.newdb\";\n  private static final String NEWDB_PROPERTY_DEFAULT = \"false\";\n\n  private static final String STORAGE_TYPE_PROPERTY = \"orientdb.remote.storagetype\";\n\n  private static final String ORIENTDB_DOCUMENT_TYPE = \"document\";\n\n  private static final String CLASS = \"usertable\";\n\n  private static final Lock    INIT_LOCK = new ReentrantLock();\n  private static       boolean dbChecked = false;\n  private static volatile OPartitionedDatabasePool databasePool;\n  private static boolean initialized   = false;\n  private static int     clientCounter = 0;\n\n  private boolean isRemote = false;\n\n  private static final Logger LOG = LoggerFactory.getLogger(OrientDBClient.class);\n\n  /**\n   * Initialize any state for this DB. 
Called once per DB instance; there is one DB instance per client thread.\n   */\n  public void init() throws DBException {\n    // initialize OrientDB driver\n    final Properties props = getProperties();\n    String url = props.getProperty(URL_PROPERTY, URL_PROPERTY_DEFAULT);\n    String user = props.getProperty(USER_PROPERTY, USER_PROPERTY_DEFAULT);\n\n    String password = props.getProperty(PASSWORD_PROPERTY, PASSWORD_PROPERTY_DEFAULT);\n    Boolean newdb = Boolean.parseBoolean(props.getProperty(NEWDB_PROPERTY, NEWDB_PROPERTY_DEFAULT));\n    String remoteStorageType = props.getProperty(STORAGE_TYPE_PROPERTY);\n\n    INIT_LOCK.lock();\n    try {\n      clientCounter++;\n      if (!initialized) {\n        OGlobalConfiguration.dumpConfiguration(System.out);\n\n        LOG.info(\"OrientDB loading database url = \" + url);\n\n        ODatabaseDocumentTx db = new ODatabaseDocumentTx(url);\n\n        if (db.getStorage().isRemote()) {\n          isRemote = true;\n        }\n\n        if (!dbChecked) {\n          if (!isRemote) {\n            if (newdb) {\n              if (db.exists()) {\n                db.open(user, password);\n                LOG.info(\"OrientDB drop and recreate fresh db\");\n\n                db.drop();\n              }\n\n              db.create();\n            } else {\n              if (!db.exists()) {\n                LOG.info(\"OrientDB database not found, creating fresh db\");\n\n                db.create();\n              }\n            }\n          } else {\n            OServerAdmin server = new OServerAdmin(url).connect(user, password);\n\n            if (remoteStorageType == null) {\n              throw new DBException(\n                  \"When connecting to a remote OrientDB instance, \"\n                      + \"specify a database storage type (plocal or memory) with \"\n                      + STORAGE_TYPE_PROPERTY);\n            }\n\n            if (newdb) {\n              if (server.existsDatabase()) {\n                
LOG.info(\"OrientDB drop and recreate fresh db\");\n\n                server.dropDatabase(remoteStorageType);\n              }\n\n              server.createDatabase(db.getName(), ORIENTDB_DOCUMENT_TYPE, remoteStorageType);\n            } else {\n              if (!server.existsDatabase()) {\n                LOG.info(\"OrientDB database not found, creating fresh db\");\n                server.createDatabase(db.getName(), ORIENTDB_DOCUMENT_TYPE, remoteStorageType);\n              }\n            }\n\n            server.close();\n          }\n\n          dbChecked = true;\n        }\n\n        if (db.isClosed()) {\n          db.open(user, password);\n        }\n\n        if (!db.getMetadata().getSchema().existsClass(CLASS)) {\n          db.getMetadata().getSchema().createClass(CLASS);\n        }\n\n        db.close();\n\n        if (databasePool == null) {\n          databasePool = new OPartitionedDatabasePool(url, user, password);\n        }\n\n        initialized = true;\n      }\n    } catch (DBException e) {\n      throw e;\n    } catch (Exception e) {\n      LOG.error(\"Could not initialize OrientDB connection pool: \" + e.toString());\n      throw new DBException(e);\n    } finally {\n      INIT_LOCK.unlock();\n    }\n\n  }\n\n  OPartitionedDatabasePool getDatabasePool() {\n    return databasePool;\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    INIT_LOCK.lock();\n    try {\n      clientCounter--;\n      if (clientCounter == 0) {\n        // Tear down the shared pool only after the last client thread finishes.\n        if (databasePool != null) {\n          databasePool.close();\n          databasePool = null;\n        }\n        initialized = false;\n      }\n    } finally {\n      INIT_LOCK.unlock();\n    }\n\n  }\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    try (ODatabaseDocumentTx db = databasePool.acquire()) {\n      final ODocument document = new ODocument(CLASS);\n\n      for (Map.Entry<String, String> entry : StringByteIterator.getStringMap(values).entrySet()) {\n        document.field(entry.getKey(), entry.getValue());\n      }\n\n      
document.save();\n      final ODictionary<ORecord> dictionary = db.getMetadata().getIndexManager().getDictionary();\n      dictionary.put(key, document);\n\n      return Status.OK;\n    } catch (Exception e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    while (true) {\n      try (ODatabaseDocumentTx db = databasePool.acquire()) {\n        final ODictionary<ORecord> dictionary = db.getMetadata().getIndexManager().getDictionary();\n        dictionary.remove(key);\n        return Status.OK;\n      } catch (OConcurrentModificationException cme) {\n        continue;\n      } catch (Exception e) {\n        e.printStackTrace();\n        return Status.ERROR;\n      }\n    }\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    try (ODatabaseDocumentTx db = databasePool.acquire()) {\n      final ODictionary<ORecord> dictionary = db.getMetadata().getIndexManager().getDictionary();\n      final ODocument document = dictionary.get(key);\n      if (document != null) {\n        if (fields != null) {\n          for (String field : fields) {\n            result.put(field, new StringByteIterator((String) document.field(field)));\n          }\n        } else {\n          for (String field : document.fieldNames()) {\n            result.put(field, new StringByteIterator((String) document.field(field)));\n          }\n        }\n        return Status.OK;\n      }\n    } catch (Exception e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    while (true) {\n      try (ODatabaseDocumentTx db = databasePool.acquire()) {\n        final ODictionary<ORecord> dictionary = db.getMetadata().getIndexManager().getDictionary();\n        final ODocument document = dictionary.get(key);\n        if (document != 
null) {\n          for (Map.Entry<String, String> entry : StringByteIterator.getStringMap(values).entrySet()) {\n            document.field(entry.getKey(), entry.getValue());\n          }\n\n          document.save();\n          return Status.OK;\n        }\n\n        // The key does not exist; return an error instead of retrying forever.\n        return Status.ERROR;\n      } catch (OConcurrentModificationException cme) {\n        continue;\n      } catch (Exception e) {\n        e.printStackTrace();\n        return Status.ERROR;\n      }\n    }\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\n      Vector<HashMap<String, ByteIterator>> result) {\n\n    if (isRemote) {\n      // The iterator methods needed for scanning are unsupported for remote database connections.\n      LOG.warn(\"OrientDB scan operation is not implemented for remote database connections.\");\n      return Status.NOT_IMPLEMENTED;\n    }\n\n    try (ODatabaseDocumentTx db = databasePool.acquire()) {\n      final ODictionary<ORecord> dictionary = db.getMetadata().getIndexManager().getDictionary();\n      final OIndexCursor entries = dictionary.getIndex().iterateEntriesMajor(startkey, true, true);\n\n      int currentCount = 0;\n      while (entries.hasNext()) {\n        final ODocument document = entries.next().getRecord();\n\n        final HashMap<String, ByteIterator> map = new HashMap<>();\n        result.add(map);\n\n        if (fields != null) {\n          for (String field : fields) {\n            map.put(field, new StringByteIterator((String) document.field(field)));\n          }\n        } else {\n          for (String field : document.fieldNames()) {\n            map.put(field, new StringByteIterator((String) document.field(field)));\n          }\n        }\n\n        currentCount++;\n\n        if (currentCount >= recordcount) {\n          break;\n        }\n      }\n\n      return Status.OK;\n    } catch (Exception e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n}\n"
  },
  {
    "path": "orientdb/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2015 - 2016, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"http://orientdb.com/orientdb/\">OrientDB</a>.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "orientdb/src/main/resources/log4j.properties",
    "content": "# Root logger option\nlog4j.rootLogger=INFO, stderr\n\n# Direct log messages to stderr\nlog4j.appender.stderr=org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.Target=System.err\nlog4j.appender.stderr.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n\n"
  },
  {
    "path": "orientdb/src/test/java/com/yahoo/ycsb/db/OrientDBClientTest.java",
    "content": "/**\n * Copyright (c) 2015 - 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.orientechnologies.orient.core.db.OPartitionedDatabasePool;\nimport com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;\nimport com.orientechnologies.orient.core.dictionary.ODictionary;\nimport com.orientechnologies.orient.core.record.ORecord;\nimport com.orientechnologies.orient.core.record.impl.ODocument;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport org.junit.*;\n\nimport java.util.*;\n\nimport static org.junit.Assert.*;\n\n/**\n * Created by kruthar on 12/29/15.\n */\npublic class OrientDBClientTest {\n  // TODO: This must be copied because it is private in OrientDBClient, but this should defer to table property.\n  private static final String CLASS        = \"usertable\";\n  private static final int    FIELD_LENGTH = 32;\n  private static final String FIELD_PREFIX = \"FIELD\";\n  private static final String KEY_PREFIX   = \"user\";\n  private static final int    NUM_FIELDS   = 3;\n  private static final String TEST_DB_URL  = \"memory:test\";\n\n  private static OrientDBClient orientDBClient = null;\n\n  @Before\n  public void setup() throws DBException {\n    orientDBClient = new OrientDBClient();\n\n    Properties p = new Properties();\n 
   // TODO: Extract the property names into final variables in OrientDBClient\n    p.setProperty(\"orientdb.url\", TEST_DB_URL);\n\n    orientDBClient.setProperties(p);\n    orientDBClient.init();\n  }\n\n  @After\n  public void teardown() throws DBException {\n    if (orientDBClient != null) {\n      orientDBClient.cleanup();\n    }\n  }\n\n  /*\n      This is a copy of buildDeterministicValue() from core:com.yahoo.ycsb.workloads.CoreWorkload.java.\n      That method is neither public nor static so we need a copy.\n   */\n  private String buildDeterministicValue(String key, String fieldkey) {\n    int size = FIELD_LENGTH;\n    StringBuilder sb = new StringBuilder(size);\n    sb.append(key);\n    sb.append(':');\n    sb.append(fieldkey);\n    while (sb.length() < size) {\n      sb.append(':');\n      sb.append(sb.toString().hashCode());\n    }\n    sb.setLength(size);\n\n    return sb.toString();\n  }\n\n  /*\n      Inserts a row of deterministic values for the given insertKey using the orientDBClient.\n   */\n  private Map<String, ByteIterator> insertRow(String insertKey) {\n    HashMap<String, ByteIterator> insertMap = new HashMap<>();\n    for (int i = 0; i < NUM_FIELDS; i++) {\n      insertMap.put(FIELD_PREFIX + i, new StringByteIterator(buildDeterministicValue(insertKey, FIELD_PREFIX + i)));\n    }\n    orientDBClient.insert(CLASS, insertKey, insertMap);\n\n    return insertMap;\n  }\n\n  @Test\n  public void insertTest() {\n    String insertKey = \"user0\";\n    Map<String, ByteIterator> insertMap = insertRow(insertKey);\n\n    OPartitionedDatabasePool pool = orientDBClient.getDatabasePool();\n    try (ODatabaseDocumentTx db = pool.acquire()) {\n      ODictionary<ORecord> dictionary = db.getDictionary();\n      ODocument result = dictionary.get(insertKey);\n\n      assertNotNull(\"Assert a row was inserted.\", result);\n\n      for (int i = 0; i < NUM_FIELDS; i++) {\n        assertEquals(\"Assert all inserted columns have correct values.\", 
result.field(FIELD_PREFIX + i),\n            insertMap.get(FIELD_PREFIX + i).toString());\n      }\n    }\n  }\n\n  @Test\n  public void updateTest() {\n    String preupdateString = \"preupdate\";\n    String user0 = \"user0\";\n    String user1 = \"user1\";\n    String user2 = \"user2\";\n\n    OPartitionedDatabasePool pool = orientDBClient.getDatabasePool();\n    try(ODatabaseDocumentTx db = pool.acquire()) {\n      // Manually insert three documents\n      for (String key : Arrays.asList(user0, user1, user2)) {\n        ODocument doc = new ODocument(CLASS);\n        for (int i = 0; i < NUM_FIELDS; i++) {\n          doc.field(FIELD_PREFIX + i, preupdateString);\n        }\n        doc.save();\n\n        ODictionary<ORecord> dictionary = db.getDictionary();\n        dictionary.put(key, doc);\n      }\n    }\n\n    HashMap<String, ByteIterator> updateMap = new HashMap<>();\n    for (int i = 0; i < NUM_FIELDS; i++) {\n      updateMap.put(FIELD_PREFIX + i, new StringByteIterator(buildDeterministicValue(user1, FIELD_PREFIX + i)));\n    }\n\n    orientDBClient.update(CLASS, user1, updateMap);\n\n    try(ODatabaseDocumentTx db = pool.acquire()) {\n      ODictionary<ORecord> dictionary = db.getDictionary();\n      // Ensure that user0 record was not changed\n      ODocument result = dictionary.get(user0);\n      for (int i = 0; i < NUM_FIELDS; i++) {\n        assertEquals(\"Assert first row fields contain preupdateString\", result.field(FIELD_PREFIX + i), preupdateString);\n      }\n\n      // Check that all the columns have expected values for user1 record\n      result = dictionary.get(user1);\n      for (int i = 0; i < NUM_FIELDS; i++) {\n        assertEquals(\"Assert updated row fields are correct\", result.field(FIELD_PREFIX + i),\n            updateMap.get(FIELD_PREFIX + i).toString());\n      }\n\n      // Ensure that user2 record was not changed\n      result = dictionary.get(user2);\n      for (int i = 0; i < NUM_FIELDS; i++) {\n        assertEquals(\"Assert 
third row fields contain preupdateString\", result.field(FIELD_PREFIX + i), preupdateString);\n      }\n    }\n  }\n\n  @Test\n  public void readTest() {\n    String insertKey = \"user0\";\n    Map<String, ByteIterator> insertMap = insertRow(insertKey);\n    HashSet<String> readFields = new HashSet<>();\n    HashMap<String, ByteIterator> readResultMap = new HashMap<>();\n\n    // Test reading a single field\n    readFields.add(\"FIELD0\");\n    orientDBClient.read(CLASS, insertKey, readFields, readResultMap);\n    assertEquals(\"Assert that result has correct number of fields\", readFields.size(), readResultMap.size());\n    for (String field : readFields) {\n      assertEquals(\"Assert \" + field + \" was read correctly\", insertMap.get(field).toString(), readResultMap.get(field).toString());\n    }\n\n    readResultMap = new HashMap<>();\n\n    // Test reading all fields\n    readFields.add(\"FIELD1\");\n    readFields.add(\"FIELD2\");\n    orientDBClient.read(CLASS, insertKey, readFields, readResultMap);\n    assertEquals(\"Assert that result has correct number of fields\", readFields.size(), readResultMap.size());\n    for (String field : readFields) {\n      assertEquals(\"Assert \" + field + \" was read correctly\", insertMap.get(field).toString(), readResultMap.get(field).toString());\n    }\n  }\n\n  @Test\n  public void deleteTest() {\n    String user0 = \"user0\";\n    String user1 = \"user1\";\n    String user2 = \"user2\";\n\n    insertRow(user0);\n    insertRow(user1);\n    insertRow(user2);\n\n    orientDBClient.delete(CLASS, user1);\n\n    OPartitionedDatabasePool pool = orientDBClient.getDatabasePool();\n    try(ODatabaseDocumentTx db = pool.acquire()) {\n      ODictionary<ORecord> dictionary = db.getDictionary();\n\n      assertNotNull(\"Assert user0 still exists\", dictionary.get(user0));\n      assertNull(\"Assert user1 does not exist\", dictionary.get(user1));\n      assertNotNull(\"Assert user2 still exists\", dictionary.get(user2));\n    }\n  
}\n\n  @Test\n  public void scanTest() {\n    Map<String, Map<String, ByteIterator>> keyMap = new HashMap<>();\n    for (int i = 0; i < 5; i++) {\n      String insertKey = KEY_PREFIX + i;\n      keyMap.put(insertKey, insertRow(insertKey));\n    }\n\n    Set<String> fieldSet = new HashSet<>();\n    fieldSet.add(\"FIELD0\");\n    fieldSet.add(\"FIELD1\");\n    int startIndex = 0;\n    int resultRows = 3;\n\n    Vector<HashMap<String, ByteIterator>> resultVector = new Vector<>();\n    orientDBClient.scan(CLASS, KEY_PREFIX + startIndex, resultRows, fieldSet, resultVector);\n\n    // Check the resultVector is the correct size\n    assertEquals(\"Assert the correct number of results rows were returned\", resultRows, resultVector.size());\n\n    int testIndex = startIndex;\n\n    // Check each vector row to make sure we have the correct fields\n    for (HashMap<String, ByteIterator> result : resultVector) {\n      assertEquals(\"Assert that this row has the correct number of fields\", fieldSet.size(), result.size());\n      for (String field : fieldSet) {\n        assertEquals(\"Assert this field is correct in this row\", keyMap.get(KEY_PREFIX + testIndex).get(field).toString(),\n            result.get(field).toString());\n      }\n      testIndex++;\n    }\n  }\n}\n"
  },
  {
    "path": "pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <groupId>com.yahoo.ycsb</groupId>\n  <artifactId>root</artifactId>\n  <version>0.14.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n\n  <name>YCSB Root</name>\n\n  <description>\n    This is the top level project that builds, packages the core and all the DB bindings for YCSB infrastructure.\n  </description>\n\n  <scm>\n    <connection>scm:git:git://github.com/brianfrankcooper/YCSB.git</connection>\n    <tag>master</tag>\n    <url>https://github.com/brianfrankcooper/YCSB</url>\n  </scm>\n  <dependencyManagement>\n    <dependencies>\n      <dependency>\n        <groupId>com.puppycrawl.tools</groupId>\n        <artifactId>checkstyle</artifactId>\n        <version>7.7.1</version>\n      </dependency>\n      <dependency>\n        <groupId>org.jdom</groupId>\n        <artifactId>jdom</artifactId>\n        <version>1.1</version>\n      </dependency>\n      <dependency>\n        <groupId>com.google.collections</groupId>\n        <artifactId>google-collections</artifactId>\n        <version>1.0</version>\n      
</dependency>\n      <!--\n      Nail down slf4j version to 1.6 so that it defaults to no-op logger.\n      http://www.slf4j.org/codes.html#StaticLoggerBinder\n      -->\n      <dependency>\n        <groupId>org.slf4j</groupId>\n        <artifactId>slf4j-api</artifactId>\n        <version>1.6.4</version>\n      </dependency>\n    </dependencies>\n  </dependencyManagement>\n\n  <!-- Properties Management -->\n  <properties>\n    <maven.assembly.version>2.5.5</maven.assembly.version>\n    <maven.dependency.version>2.10</maven.dependency.version>\n    <asynchbase.version>1.7.1</asynchbase.version>\n    <hbase098.version>0.98.14-hadoop2</hbase098.version>\n    <hbase10.version>1.0.2</hbase10.version>\n    <hbase12.version>1.2.5</hbase12.version>\n    <accumulo.1.6.version>1.6.6</accumulo.1.6.version>\n    <accumulo.1.7.version>1.7.3</accumulo.1.7.version>\n    <accumulo.1.8.version>1.8.1</accumulo.1.8.version>\n    <cassandra.cql.version>3.0.0</cassandra.cql.version>\n    <geode.version>1.2.0</geode.version>\n    <azuredocumentdb.version>1.8.1</azuredocumentdb.version>\n    <googlebigtable.version>0.9.7</googlebigtable.version>\n    <infinispan.version>7.2.2.Final</infinispan.version>\n    <kudu.version>1.1.0</kudu.version>\n    <openjpa.jdbc.version>2.1.1</openjpa.jdbc.version>\n    <!--<mapkeeper.version>1.0</mapkeeper.version>-->\n    <mongodb.version>3.0.3</mongodb.version>\n    <mongodb.async.version>2.0.1</mongodb.async.version>\n    <orientdb.version>2.2.10</orientdb.version>\n    <redis.version>2.0.0</redis.version>\n    <s3.version>1.10.20</s3.version>\n    <voldemort.version>0.81</voldemort.version>\n    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n    <thrift.version>0.8.0</thrift.version>\n    <hypertable.version>0.9.5.6</hypertable.version>\n    <elasticsearch5-version>5.5.1</elasticsearch5-version>\n    <couchbase.version>1.4.10</couchbase.version>\n    <couchbase2.version>2.3.1</couchbase2.version>\n    
<tarantool.version>1.6.5</tarantool.version>\n    <riak.version>2.0.5</riak.version>\n    <aerospike.version>3.1.2</aerospike.version>\n    <solr.version>5.5.3</solr.version>\n    <solr6.version>6.4.1</solr6.version>\n    <arangodb.version>2.7.3</arangodb.version>\n    <arangodb3.version>4.1.7</arangodb3.version>\n    <azurestorage.version>4.0.0</azurestorage.version>\n    <cloudspanner.version>0.24.0-beta</cloudspanner.version>\n  </properties>\n\n  <modules>\n    <!-- our internals -->\n    <module>core</module>\n    <module>binding-parent</module>\n    <module>distribution</module>\n    <!-- all the datastore bindings, lex sorted please -->\n    <module>accumulo1.6</module>\n    <module>accumulo1.7</module>\n    <module>accumulo1.8</module>\n    <module>aerospike</module>\n    <module>arangodb</module>\n    <module>arangodb3</module>\n    <module>asynchbase</module>\n    <module>azuredocumentdb</module>\n    <module>azuretablestorage</module>\n    <module>cassandra</module>\n    <module>cloudspanner</module>\n    <module>couchbase</module>\n    <module>couchbase2</module>\n    <module>dynamodb</module>\n    <module>elasticsearch</module>\n    <module>elasticsearch5</module>\n    <module>geode</module>\n    <module>googlebigtable</module>\n    <module>googledatastore</module>\n    <module>hbase098</module>\n    <module>hbase10</module>\n    <module>hbase12</module>\n    <module>hypertable</module>\n    <module>infinispan</module>\n    <module>jdbc</module>\n    <module>kudu</module>\n    <!--<module>mapkeeper</module>-->\n    <module>memcached</module>\n    <module>mongodb</module>\n    <module>nosqldb</module>\n    <module>orientdb</module>\n    <module>rados</module>\n    <module>redis</module>\n    <module>rest</module>\n    <module>riak</module>\n    <module>s3</module>\n    <module>solr</module>\n    <module>solr6</module>\n    <module>tarantool</module>\n    <!--<module>voldemort</module>-->\n  </modules>\n\n  <build>\n    <pluginManagement>\n      
<plugins>\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-checkstyle-plugin</artifactId>\n          <version>2.16</version>\n        </plugin>\n      </plugins>\n    </pluginManagement>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-enforcer-plugin</artifactId>\n        <version>3.0.0-M1</version>\n        <executions>\n          <execution>\n            <id>enforce-maven</id>\n            <goals>\n              <goal>enforce</goal>\n            </goals>\n            <configuration>\n              <rules>\n                <requireMavenVersion>\n                  <version>3.1.0</version>\n                </requireMavenVersion>\n              </rules>    \n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-compiler-plugin</artifactId>\n        <version>3.7.0</version>\n        <configuration>\n          <source>1.7</source>\n          <target>1.7</target>\n        </configuration>\n      </plugin>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-checkstyle-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>validate</id>\n            <phase>validate</phase>\n            <goals>\n              <goal>check</goal>\n            </goals>\n            <configuration>\n              <configLocation>checkstyle.xml</configLocation>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "rados/README.md",
    "content": "<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on RADOS, the object store of Ceph.\n\n### 1. Start RADOS\n\nAfter you start your Ceph cluster, check the cluster’s health first:\n\n    ceph health\n\n### 2. Install Java and Maven\n\n### 3. Set Up YCSB\n\nGit clone YCSB and compile:\n\n    git clone http://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn clean package\n\nYou can compile only the RADOS binding, e.g.:\n\n    mvn -pl com.yahoo.ycsb:rados-binding -am clean package\n\nYou can skip the tests, e.g.:\n\n    mvn -pl com.yahoo.ycsb:rados-binding -am clean package -DskipTests\n\n### 4. Configuration Parameters\n\n- `rados.configfile`\n  - The path of the Ceph configuration file\n  - Default value is '/etc/ceph/ceph.conf'\n\n- `rados.id`\n  - The user id used to access the RADOS service\n  - Default value is 'admin'\n\n- `rados.pool`\n  - The pool name to use for the benchmark\n  - Default value is 'data'\n\nYou can set these properties on the command line, e.g.:\n\n    ./bin/ycsb load rados -s -P workloads/workloada -p \"rados.configfile=/etc/ceph/ceph.conf\" -p \"rados.id=admin\" -p \"rados.pool=data\" > outputLoad.txt\n\n### 5. 
Load data and run tests\n\nLoad the data:\n\n    ./bin/ycsb load rados -s -P workloads/workloada > outputLoad.txt\n\nRun the workload test:\n\n    ./bin/ycsb run rados -s -P workloads/workloada > outputRun.txt\n"
  },
  {
    "path": "rados/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>rados-binding</artifactId>\n  <name>RADOS of Ceph binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.ceph</groupId>\n      <artifactId>rados</artifactId>\n      <version>${rados.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.json</groupId>\n      <artifactId>json</artifactId>\n      <version>${json.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>net.java.dev.jna</groupId>\n      <artifactId>jna</artifactId>\n      <version>4.2.2</version>\n    </dependency>\n\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n\n  <properties>\n    <rados.version>0.2.0</rados.version>\n    <json.version>20160212</json.version>\n  </properties>\n</project>\n"
  },
  {
    "path": "rados/src/main/java/com/yahoo/ycsb/db/RadosClient.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.ceph.rados.Rados;\nimport com.ceph.rados.IoCTX;\nimport com.ceph.rados.jna.RadosObjectInfo;\nimport com.ceph.rados.ReadOp;\nimport com.ceph.rados.ReadOp.ReadResult;\nimport com.ceph.rados.exceptions.RadosException;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport java.io.File;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport org.json.JSONObject;\n\n/**\n * YCSB binding for <a href=\"http://ceph.org/\">RADOS of Ceph</a>.\n *\n * See {@code rados/README.md} for details.\n */\npublic class RadosClient extends DB {\n\n  private Rados rados;\n  private IoCTX ioctx;\n\n  public static final String CONFIG_FILE_PROPERTY = \"rados.configfile\";\n  public static final String CONFIG_FILE_DEFAULT = \"/etc/ceph/ceph.conf\";\n  public static final String ID_PROPERTY = \"rados.id\";\n  public static final String ID_DEFAULT = \"admin\";\n  public static final String POOL_PROPERTY = \"rados.pool\";\n  public static final String POOL_DEFAULT = \"data\";\n\n  private boolean isInited = false;\n\n  public void 
init() throws DBException {\n    Properties props = getProperties();\n\n    String configfile = props.getProperty(CONFIG_FILE_PROPERTY);\n    if (configfile == null) {\n      configfile = CONFIG_FILE_DEFAULT;\n    }\n\n    String id = props.getProperty(ID_PROPERTY);\n    if (id == null) {\n      id = ID_DEFAULT;\n    }\n\n    String pool = props.getProperty(POOL_PROPERTY);\n    if (pool == null) {\n      pool = POOL_DEFAULT;\n    }\n\n    // try {\n    // } catch (UnsatisfiedLinkError e) {\n    //   throw new DBException(\"RADOS library is not loaded.\");\n    // }\n\n    rados = new Rados(id);\n    try {\n      rados.confReadFile(new File(configfile));\n      rados.connect();\n      ioctx = rados.ioCtxCreate(pool);\n    } catch (RadosException e) {\n      throw new DBException(e.getMessage() + \": \" + e.getReturnValue());\n    }\n\n    isInited = true;\n  }\n\n  public void cleanup() throws DBException {\n    if (isInited) {\n      rados.shutDown();\n      rados.ioCtxDestroy(ioctx);\n      isInited = false;\n    }\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    byte[] buffer;\n\n    try {\n      RadosObjectInfo info = ioctx.stat(key);\n      buffer = new byte[(int)info.getSize()];\n\n      ReadOp rop = ioctx.readOpCreate();\n      ReadResult readResult = rop.queueRead(0, info.getSize());\n      // TODO: more size than byte length possible;\n      // rop.operate(key, Rados.OPERATION_NOFLAG); // for rados-java 0.3.0\n      rop.operate(key, 0);\n      // readResult.raiseExceptionOnError(\"Error ReadOP(%d)\", readResult.getRVal()); // for rados-java 0.3.0\n      if (readResult.getRVal() < 0) {\n        throw new RadosException(\"Error ReadOP\", readResult.getRVal());\n      }\n      if (info.getSize() != readResult.getBytesRead()) {\n        return new Status(\"ERROR\", \"Error the object size read\");\n      }\n      readResult.getBuffer().get(buffer);\n    } catch (RadosException e) 
{\n      return new Status(\"ERROR-\" + e.getReturnValue(), e.getMessage());\n    }\n\n    JSONObject json = new JSONObject(new String(buffer, java.nio.charset.StandardCharsets.UTF_8));\n    Set<String> fieldsToReturn = (fields == null ? json.keySet() : fields);\n\n    for (String name : fieldsToReturn) {\n      result.put(name, new StringByteIterator(json.getString(name)));\n    }\n\n    return result.isEmpty() ? Status.ERROR : Status.OK;\n  }\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    JSONObject json = new JSONObject();\n    for (final Entry<String, ByteIterator> e : values.entrySet()) {\n      json.put(e.getKey(), e.getValue().toString());\n    }\n\n    try {\n      ioctx.write(key, json.toString());\n    } catch (RadosException e) {\n      return new Status(\"ERROR-\" + e.getReturnValue(), e.getMessage());\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      ioctx.remove(key);\n    } catch (RadosException e) {\n      return new Status(\"ERROR-\" + e.getReturnValue(), e.getMessage());\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    Status rtn = delete(table, key);\n    if (rtn.equals(Status.OK)) {\n      return insert(table, key, values);\n    }\n    return rtn;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\n      Vector<HashMap<String, ByteIterator>> result) {\n    return Status.NOT_IMPLEMENTED;\n  }\n}\n"
  },
  {
    "path": "rados/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * YCSB binding for RADOS of Ceph.\n */\npackage com.yahoo.ycsb.db;\n"
  },
  {
    "path": "rados/src/test/java/com/yahoo/ycsb/db/RadosClientTest.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assume.assumeNoException;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport org.junit.AfterClass;\nimport org.junit.After;\nimport org.junit.BeforeClass;\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.UUID;\n\n\n /**\n  * Test for the binding of <a href=\"http://ceph.org/\">RADOS of Ceph</a>.\n  *\n  * See {@code rados/README.md} for details.\n  */\n\npublic class RadosClientTest {\n\n  private static RadosClient radosclient;\n\n  public static final String POOL_PROPERTY = \"rados.pool\";\n  public static final String POOL_TEST = \"rbd\";\n\n  private static final String TABLE_NAME = \"table0\";\n  private static final String KEY0 = \"key0\";\n  private static final String KEY1 = \"key1\";\n  private static final String KEY2 = \"key2\";\n  private static final HashMap<String, ByteIterator> DATA;\n  private static final HashMap<String, ByteIterator> DATA_UPDATED;\n\n  static {\n    DATA = new HashMap<String, 
ByteIterator>(10);\n    DATA_UPDATED = new HashMap<String, ByteIterator>(10);\n    for (int i = 0; i < 10; i++) {\n      String key = \"key\" + UUID.randomUUID();\n      DATA.put(key, new StringByteIterator(\"data\" + UUID.randomUUID()));\n      DATA_UPDATED.put(key, new StringByteIterator(\"data\" + UUID.randomUUID()));\n    }\n  }\n\n  @BeforeClass\n  public static void setupClass() throws DBException {\n    radosclient = new RadosClient();\n\n    Properties p = new Properties();\n    p.setProperty(POOL_PROPERTY, POOL_TEST);\n\n    try {\n      radosclient.setProperties(p);\n      radosclient.init();\n    }\n    catch (DBException|UnsatisfiedLinkError e) {\n      assumeNoException(\"Ceph cluster is not running. Skipping tests.\", e);\n    }\n  }\n\n  @AfterClass\n  public static void teardownClass() throws DBException {\n    if (radosclient != null) {\n      radosclient.cleanup();\n    }\n  }\n\n  @Before\n  public void setUp() {\n    radosclient.insert(TABLE_NAME, KEY0, DATA);\n  }\n\n  @After\n  public void tearDown() {\n    radosclient.delete(TABLE_NAME, KEY0);\n  }\n\n  @Test\n  public void insertTest() {\n    Status result = radosclient.insert(TABLE_NAME, KEY1, DATA);\n    assertEquals(Status.OK, result);\n  }\n\n  @Test\n  public void updateTest() {\n    radosclient.insert(TABLE_NAME, KEY2, DATA);\n\n    Status result = radosclient.update(TABLE_NAME, KEY2, DATA_UPDATED);\n    assertEquals(Status.OK, result);\n\n    HashMap<String, ByteIterator> ret = new HashMap<String, ByteIterator>(10);\n    radosclient.read(TABLE_NAME, KEY2, DATA.keySet(), ret);\n    compareMap(DATA_UPDATED, ret);\n\n    radosclient.delete(TABLE_NAME, KEY2);\n  }\n\n  @Test\n  public void readTest() {\n    HashMap<String, ByteIterator> ret = new HashMap<String, ByteIterator>(10);\n    Status result = radosclient.read(TABLE_NAME, KEY0, DATA.keySet(), ret);\n    assertEquals(Status.OK, result);\n    compareMap(DATA, ret);\n  }\n\n  private void compareMap(HashMap<String, ByteIterator> src, 
HashMap<String, ByteIterator> dest) {\n    assertEquals(src.size(), dest.size());\n\n    Set setSrc = src.entrySet();\n    Iterator<Map.Entry> itSrc = setSrc.iterator();\n    for (int i = 0; i < 10; i++) {\n      Map.Entry<String, ByteIterator> entrySrc = itSrc.next();\n      assertEquals(entrySrc.getValue().toString(), dest.get(entrySrc.getKey()).toString());\n    }\n  }\n\n  @Test\n  public void deleteTest() {\n    Status result = radosclient.delete(TABLE_NAME, KEY0);\n    assertEquals(Status.OK, result);\n  }\n\n}\n"
  },
  {
    "path": "redis/README.md",
    "content": "<!--\nCopyright (c) 2014 - 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on Redis. \n\n### 1. Start Redis\n\n### 2. Install Java and Maven\n\n### 3. Set Up YCSB\n\nGit clone YCSB and compile:\n\n    git clone http://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:redis-binding -am clean package\n\n### 4. Provide Redis Connection Parameters\n    \nSet the host, port, and password (do not redis auth is not turned on) in the \nworkload you plan to run.\n\n- `redis.host`\n- `redis.port`\n- `redis.password`\n\nOr, you can set configs with the shell command, EG:\n\n    ./bin/ycsb load redis -s -P workloads/workloada -p \"redis.host=127.0.0.1\" -p \"redis.port=6379\" > outputLoad.txt\n\n### 5. Load data and run tests\n\nLoad the data:\n\n    ./bin/ycsb load redis -s -P workloads/workloada > outputLoad.txt\n\nRun the workload test:\n\n    ./bin/ycsb run redis -s -P workloads/workloada > outputRun.txt\n\n"
  },
  {
    "path": "redis/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n  \n  <artifactId>redis-binding</artifactId>\n  <name>Redis DB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>redis.clients</groupId>\n      <artifactId>jedis</artifactId>\n      <version>${redis.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "redis/src/main/java/com/yahoo/ycsb/db/RedisClient.java",
    "content": "/**\n * Copyright (c) 2012 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * Redis client binding for YCSB.\n *\n * All YCSB records are mapped to a Redis *hash field*.  For scanning\n * operations, all keys are saved (by an arbitrary hash) in a sorted set.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport redis.clients.jedis.Jedis;\nimport redis.clients.jedis.Protocol;\n\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Iterator;\nimport java.util.List;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\n/**\n * YCSB binding for <a href=\"http://redis.io/\">Redis</a>.\n *\n * See {@code redis/README.md} for details.\n */\npublic class RedisClient extends DB {\n\n  private Jedis jedis;\n\n  public static final String HOST_PROPERTY = \"redis.host\";\n  public static final String PORT_PROPERTY = \"redis.port\";\n  public static final String PASSWORD_PROPERTY = \"redis.password\";\n\n  public static final String INDEX_KEY = \"_indices\";\n\n  public void init() throws DBException {\n    Properties props = getProperties();\n    int port;\n\n    String portString = props.getProperty(PORT_PROPERTY);\n    if (portString != null) {\n      port 
= Integer.parseInt(portString);\n    } else {\n      port = Protocol.DEFAULT_PORT;\n    }\n    String host = props.getProperty(HOST_PROPERTY);\n\n    jedis = new Jedis(host, port);\n    jedis.connect();\n\n    String password = props.getProperty(PASSWORD_PROPERTY);\n    if (password != null) {\n      jedis.auth(password);\n    }\n  }\n\n  public void cleanup() throws DBException {\n    jedis.disconnect();\n  }\n\n  /*\n   * Calculate a hash for a key to store it in an index. The actual return value\n   * of this function is not interesting -- it primarily needs to be fast and\n   * scattered along the whole space of doubles. In a real world scenario one\n   * would probably use the ASCII values of the keys.\n   */\n  private double hash(String key) {\n    return key.hashCode();\n  }\n\n  // XXX jedis.select(int index) to switch to `table`\n\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n      Map<String, ByteIterator> result) {\n    if (fields == null) {\n      StringByteIterator.putAllAsByteIterators(result, jedis.hgetAll(key));\n    } else {\n      String[] fieldArray =\n          (String[]) fields.toArray(new String[fields.size()]);\n      List<String> values = jedis.hmget(key, fieldArray);\n\n      Iterator<String> fieldIterator = fields.iterator();\n      Iterator<String> valueIterator = values.iterator();\n\n      while (fieldIterator.hasNext() && valueIterator.hasNext()) {\n        result.put(fieldIterator.next(),\n            new StringByteIterator(valueIterator.next()));\n      }\n      assert !fieldIterator.hasNext() && !valueIterator.hasNext();\n    }\n    return result.isEmpty() ? 
Status.ERROR : Status.OK;\n  }\n\n  @Override\n  public Status insert(String table, String key,\n      Map<String, ByteIterator> values) {\n    if (jedis.hmset(key, StringByteIterator.getStringMap(values))\n        .equals(\"OK\")) {\n      jedis.zadd(INDEX_KEY, hash(key), key);\n      return Status.OK;\n    }\n    return Status.ERROR;\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    return jedis.del(key) == 0 && jedis.zrem(INDEX_KEY, key) == 0 ? Status.ERROR\n        : Status.OK;\n  }\n\n  @Override\n  public Status update(String table, String key,\n      Map<String, ByteIterator> values) {\n    return jedis.hmset(key, StringByteIterator.getStringMap(values))\n        .equals(\"OK\") ? Status.OK : Status.ERROR;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    Set<String> keys = jedis.zrangeByScore(INDEX_KEY, hash(startkey),\n        Double.POSITIVE_INFINITY, 0, recordcount);\n\n    HashMap<String, ByteIterator> values;\n    for (String key : keys) {\n      values = new HashMap<String, ByteIterator>();\n      read(table, key, fields, values);\n      result.add(values);\n    }\n\n    return Status.OK;\n  }\n\n}\n"
  },
  {
    "path": "redis/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"http://redis.io/\">Redis</a>.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "rest/README.md",
    "content": "<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB to benchmark HTTP RESTful\nwebservices. The aim of the rest binding is to benchmark the \nperformance of any sepecific HTTP RESTful webservices with real\nlife (production) dataset. This must not be confused with benchmarking\nvarious webservers (like Apache Tomcat, Nginx, Jetty) using a dummy \ndataset.\n\n### 1. Set Up YCSB\n\nClone the YCSB git repository and compile:\n\n    git clone git://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:rest-binding -am clean package\n\n### 2. Set Up an HTTP Web Service\n\nThere must be a running HTTP RESTful webservice accesible from \nthe instance on which YCSB is running. If the webservice is \nrunning on the local instance default HTTP port 80, it's base\nURL will look like http://127.0.0.1:80/{service_endpoint}. The\nrest binding assumes that the webservice to be benchmarked already\nhas a valid dataset. THe rest module has been designed in this \nway for two reasons:\n\n1. The performance of most webservices depends on the size, pattern\nand the nature of the real life dataset accesible from these services.\nHence creating a dummy dataset might not actually reflect the true\nperformance of a webservice to be benchmarked.\n\n2. 
Since many webservices have a non-naive backend which includes\ninteraction with multiple backend components, tables and databases.\nGenerating a dummy dataset for such webservices is a non-trivial and\na time consuming task.\n\nHowever to benchmark a webservice before it has access to a real \ndataset, support for automatic data insertion can be added in the\nfuture. An example of such a scenario is benchmarking a webservice\nbefore it moves to production.\n\n### 3. Run YCSB\n    \nAt this point we assume that you've setup a webservice accesible at\nan HTTP endpoint like this: http://{host}:{port}/{service_endpoint}.\n\nBefore you are ready to run please ensure that you have prepared a\ntrace for the CRUD operations to benchmark your webservice. \n\nTrace is a collection of URL resources that should be hit in order\nto benchmark any webservice. The more realistic this collection of\nURL is, the more reliable and accurate are the benchmarking results\nbecause this means simulating the real life workload more accurately.\nTracefile is a file that holds the trace. For example, if your \nwebservice exists at http://{host}:{port}/{endpoint}, and you want\nto benchmark the performance of READS on this webservice with five\nresources (namely resource_1, resource_2 ... resource_5) then the\nurl.trace.read file will look like this:\n\nhttp://{host}:{port}/{endpoint}/resource_1\nhttp://{host}:{port}/{endpoint}/resource_2\nhttp://{host}:{port}/{endpoint}/resource_3\nhttp://{host}:{port}/{endpoint}/resource_4\nhttp://{host}:{port}/{endpoint}/resource_5\n\nThe rest module will pick up URLs from the above file according to\nthe `requestdistribution` property (default is zipfian) mentioned in\nthe rest_workload. In the example above we assume that the property \n`url.prefix` (see below for property description) is set to empty. 
If\nurl.prefix property is set to `http://{host}:{port}/{endpoint}/` the \nequivalent of the read trace given above would look like:\n\nresource_1\nresource_2\nresource_3\nresource_4\nresource_5\n\nIn real life the traces for various CRUD operations are diffent\nfrom one another. HTTP GET will rarely have the same URL access\npattern as that of HTTP POST or HTTP PUT. Hence to give enough\nflexibility to benchmark webservices, different trace files can\nbe used for different CRUD operations. However if you wish to use\nthe same trace for all these operations, just pass the same file\nto all these properties - `url.trace.read`, `url.trace.insert`,\n`url.trace.update` & `url.trace.delete`.\n\nNow you are ready to run! Run the rest_workload:\n\n    ./bin/ycsb run rest -s -P workloads/rest_workload\n\nFor further configuration see below: \n\n### Default Configuration Parameters\nThe default settings for the rest binding are as follows:\n\n- `url.prefix` \n  - The base endpoint URL where the webservice is running. URLs from trace files (DELETE, GET, POST, PUT) will be prefixed with this value before making an HTTP request. A common usage value would be http://127.0.0.1:8080/{yourService}\n  - Default value is `http://127.0.0.1:80/`.\n  \n- `url.trace.read` \n  - The path to a trace file that holds the URLs to be invoked for HTTP GET method. URLs must be seperated by a newline.\n  \n- `url.trace.insert` \n  - The path to a trace file that holds the URLs to be invoked for HTTP POST method. URLs must be seperated by a newline. \n\n- `url.trace.update` \n  - The path to a trace file that holds the URLs to be invoked for HTTP PUT method. URLs must be seperated by a newline.\n\n- `url.trace.delete` \n  - The path to a trace file that holds the URLs to be invoked for HTTP DELETE method. URLs must be seperated by a newline.\n\n- `headers` \n  - The HTTP request headers used for all requests. 
Headers must be separated by space as a delimiter.\n  - Default value is `Accept */* Accept-Language en-US,en;q=0.5 Content-Type application/x-www-form-urlencoded user-agent Mozilla/5.0`\n\n- `timeout.con` \n  - The HTTP connection timeout in seconds. The response will be considered as an error if the client fails to connect with the server within this time limit.\n  - Default value is `10` seconds.\n\n- `timeout.read` \n  - The HTTP read timeout in seconds. The response will be considered as an error if the client fails to read from the server within this time limit.\n  - Default value is `10` seconds.\n  \n- `timeout.exec` \n  - The time within which request must return a response. The response will be considered as an error if the client fails to complete the request within this time limit.\n  - Default value is `10` seconds.\n\n- `log.enable` \n  - A Boolean value to enable console status logs. When true, it will print all the HTTP requests being made and thier response status on the YCSB console window.\n  - Default value is `false`.\n\n- `readrecordcount` \n  - An integer value that signifies the top k URLs (entries) to be picked from the `url.trace.read` file for making HTTP GET requests. Must have a value greater than 0. If this value exceeds the number of entries present in `url.trace.read` file, then k will be set to the number of entries in the file.\n  - Default value is `10000`. \n\n- `insertrecordcount` \n  - An integer value that signifies the top k URLs to be picked from the `url.trace.insert` file for making HTTP POST requests. Must have a value greater than 0. If this value exceeds the number of entries present in `url.trace.insert` file, then k will be set to the number of entries in the file.\n  - Default value is `5000`. \n\n- `deleterecordcount` \n  - An integer value that signifies the top k URLs to be picked from the `url.trace.delete` file for making HTTP DELETE requests. Must have a value greater than 0. 
If this value exceeds the number of entries present in `url.trace.delete` file, then k will be set to the number of entries in the file.\n  - Default value is `1000`.\n\n- `updaterecordcount` \n  - An integer value that signifies the top k URLs to be picked from the `url.trace.update` file for making HTTP PUT requests. Must have a value greater than 0. If this value exceeds the number of entries present in `url.trace.update` file, then k will be set to the number of entries in the file.\n  - Default value is `1000`.\n\n- `readzipfconstant` \n  - An double value of the Zipf's constant to be used for insert requests. Applicable only if the requestdistribution = `zipfian`.\n  - Default value is `0.9`.\n\n- `insertzipfconstant` \n  - An double value of the Zipf's constant to be used for insert requests. Applicable only if the requestdistribution = `zipfian`. \n  - Default value is `0.9`.\n  \n- `updatezipfconstant` \n  - An double value of the Zipf's constant to be used for insert requests. Applicable only if the requestdistribution = `zipfian`. \n  - Default value is `0.9`.\n\n- `deletezipfconstant` \n  - An double value of the Zipf's constant to be used for insert requests. Applicable only if the requestdistribution = `zipfian`. \n  - Default value is `0.9`.\n"
  },
  {
    "path": "rest/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\n<!-- Copyright (c) 2011 YCSB project, 2016 YCSB contributors. All \r\n\trights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); \r\n\tyou may not use this file except in compliance with the License. You may \r\n\tobtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 \r\n\tUnless required by applicable law or agreed to in writing, software distributed \r\n\tunder the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES \r\n\tOR CONDITIONS OF ANY KIND, either express or implied. See the License for \r\n\tthe specific language governing permissions and limitations under the License. \r\n\tSee accompanying LICENSE file. -->\r\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\r\n\txsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\r\n  <modelVersion>4.0.0</modelVersion>\r\n  <parent>\r\n    <groupId>com.yahoo.ycsb</groupId>\r\n    <artifactId>binding-parent</artifactId>\r\n    <version>0.14.0-SNAPSHOT</version>\r\n    <relativePath>../binding-parent</relativePath>\r\n  </parent>\r\n\r\n  <artifactId>rest-binding</artifactId>\r\n  <name>Rest Client Binding</name>\r\n  <packaging>jar</packaging>\r\n\r\n  <properties>\r\n    <!-- Tests do not run on jdk9 -->\r\n    <skipJDK9Tests>true</skipJDK9Tests>\r\n\r\n    <tomcat.version>8.0.28</tomcat.version>\r\n    <jersey.version>2.6</jersey.version>\r\n    <httpclient.version>4.5.1</httpclient.version>\r\n    <httpcore.version>4.4.4</httpcore.version>\r\n    <junit.version>4.12</junit.version>\r\n    <system-rules.version>1.16.0</system-rules.version>\r\n  </properties>\r\n\r\n  <dependencies>\r\n    <dependency>\r\n      <groupId>com.yahoo.ycsb</groupId>\r\n      <artifactId>core</artifactId>\r\n      <version>${project.version}</version>\r\n      <scope>provided</scope>\r\n    </dependency>\r\n    
<dependency>\r\n      <groupId>org.apache.httpcomponents</groupId>\r\n      <artifactId>httpclient</artifactId>\r\n      <version>${httpclient.version}</version>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.apache.httpcomponents</groupId>\r\n      <artifactId>httpcore</artifactId>\r\n      <version>${httpcore.version}</version>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>junit</groupId>\r\n      <artifactId>junit</artifactId>\r\n      <version>${junit.version}</version>\r\n      <scope>test</scope>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>com.github.stefanbirkner</groupId>\r\n      <artifactId>system-rules</artifactId>\r\n      <version>${system-rules.version}</version>\r\n    </dependency>\r\n  \t<dependency>\r\n      <groupId>org.glassfish.jersey.core</groupId>\r\n      <artifactId>jersey-server</artifactId>\r\n      <version>${jersey.version}</version>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.glassfish.jersey.core</groupId>\r\n      <artifactId>jersey-client</artifactId>\r\n      <version>${jersey.version}</version>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.glassfish.jersey.containers</groupId>\r\n      <artifactId>jersey-container-servlet-core</artifactId>\r\n      <version>${jersey.version}</version>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.apache.tomcat</groupId>\r\n      <artifactId>tomcat-dbcp</artifactId>\r\n      <version>${tomcat.version}</version>\r\n      <exclusions>\r\n        <exclusion>\r\n          <groupId>org.apache.tomcat</groupId>\r\n          <artifactId>tomcat-juli</artifactId>\r\n        </exclusion>\r\n      </exclusions>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.apache.tomcat.embed</groupId>\r\n      <artifactId>tomcat-embed-core</artifactId>\r\n      <version>${tomcat.version}</version>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.apache.tomcat.embed</groupId>\r\n      
<artifactId>tomcat-embed-logging-juli</artifactId>\r\n      <version>${tomcat.version}</version>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.apache.tomcat.embed</groupId>\r\n      <artifactId>tomcat-embed-logging-log4j</artifactId>\r\n      <version>${tomcat.version}</version>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.apache.tomcat.embed</groupId>\r\n      <artifactId>tomcat-embed-jasper</artifactId>\r\n      <version>${tomcat.version}</version>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.apache.tomcat.embed</groupId>\r\n      <artifactId>tomcat-embed-websocket</artifactId>\r\n      <version>${tomcat.version}</version>\r\n    </dependency>\r\n  </dependencies>\r\n\r\n</project>\r\n"
  },
  {
    "path": "rest/src/main/java/com/yahoo/ycsb/webservice/rest/RestClient.java",
    "content": "/**\r\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\r\n *\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n *\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n *\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.webservice.rest;\r\n\r\nimport java.io.BufferedReader;\r\nimport java.io.ByteArrayInputStream;\r\nimport java.io.IOException;\r\nimport java.io.InputStream;\r\nimport java.io.InputStreamReader;\r\nimport java.util.HashMap;\r\nimport java.util.Map;\r\nimport java.util.Properties;\r\nimport java.util.Set;\r\nimport java.util.Vector;\r\nimport java.util.zip.GZIPInputStream;\r\n\r\nimport javax.ws.rs.HttpMethod;\r\n\r\nimport org.apache.http.HttpEntity;\r\nimport org.apache.http.client.ClientProtocolException;\r\nimport org.apache.http.client.config.RequestConfig;\r\nimport org.apache.http.client.methods.CloseableHttpResponse;\r\nimport org.apache.http.client.methods.HttpDelete;\r\nimport org.apache.http.client.methods.HttpEntityEnclosingRequestBase;\r\nimport org.apache.http.client.methods.HttpGet;\r\nimport org.apache.http.client.methods.HttpPost;\r\nimport org.apache.http.client.methods.HttpPut;\r\nimport org.apache.http.entity.ContentType;\r\nimport org.apache.http.entity.InputStreamEntity;\r\nimport org.apache.http.impl.client.CloseableHttpClient;\r\nimport org.apache.http.impl.client.HttpClientBuilder;\r\nimport org.apache.http.util.EntityUtils;\r\n\r\nimport com.yahoo.ycsb.ByteIterator;\r\nimport 
com.yahoo.ycsb.DB;\r\nimport com.yahoo.ycsb.DBException;\r\nimport com.yahoo.ycsb.Status;\r\nimport com.yahoo.ycsb.StringByteIterator;\r\n\r\n/**\r\n * Class responsible for making web service requests for benchmarking purpose.\r\n * Using Apache HttpClient over standard Java HTTP API as this is more flexible\r\n * and provides better functionality. For example HttpClient can automatically\r\n * handle redirects and proxy authentication which the standard Java API can't.\r\n */\r\npublic class RestClient extends DB {\r\n\r\n  private static final String URL_PREFIX = \"url.prefix\";\r\n  private static final String CON_TIMEOUT = \"timeout.con\";\r\n  private static final String READ_TIMEOUT = \"timeout.read\";\r\n  private static final String EXEC_TIMEOUT = \"timeout.exec\";\r\n  private static final String LOG_ENABLED = \"log.enable\";\r\n  private static final String HEADERS = \"headers\";\r\n  private static final String COMPRESSED_RESPONSE = \"response.compression\";\r\n  private boolean compressedResponse;\r\n  private boolean logEnabled;\r\n  private String urlPrefix;\r\n  private Properties props;\r\n  private String[] headers;\r\n  private CloseableHttpClient client;\r\n  private int conTimeout = 10000;\r\n  private int readTimeout = 10000;\r\n  private int execTimeout = 10000;\r\n  private volatile Criteria requestTimedout = new Criteria(false);\r\n\r\n  @Override\r\n  public void init() throws DBException {\r\n    props = getProperties();\r\n    urlPrefix = props.getProperty(URL_PREFIX, \"http://127.0.0.1:8080\");\r\n    conTimeout = Integer.valueOf(props.getProperty(CON_TIMEOUT, \"10\")) * 1000;\r\n    readTimeout = Integer.valueOf(props.getProperty(READ_TIMEOUT, \"10\")) * 1000;\r\n    execTimeout = Integer.valueOf(props.getProperty(EXEC_TIMEOUT, \"10\")) * 1000;\r\n    logEnabled = Boolean.valueOf(props.getProperty(LOG_ENABLED, \"false\").trim());\r\n    compressedResponse = Boolean.valueOf(props.getProperty(COMPRESSED_RESPONSE, \"false\").trim());\r\n  
  headers = props.getProperty(HEADERS, \"Accept */* Content-Type application/xml user-agent Mozilla/5.0 \").trim()\r\n          .split(\" \");\r\n    setupClient();\r\n  }\r\n\r\n  private void setupClient() {\r\n    RequestConfig.Builder requestBuilder = RequestConfig.custom();\r\n    requestBuilder = requestBuilder.setConnectTimeout(conTimeout);\r\n    requestBuilder = requestBuilder.setConnectionRequestTimeout(readTimeout);\r\n    requestBuilder = requestBuilder.setSocketTimeout(readTimeout);\r\n    HttpClientBuilder clientBuilder = HttpClientBuilder.create().setDefaultRequestConfig(requestBuilder.build());\r\n    this.client = clientBuilder.setConnectionManagerShared(true).build();\r\n  }\r\n\r\n  @Override\r\n  public Status read(String table, String endpoint, Set<String> fields, Map<String, ByteIterator> result) {\r\n    int responseCode;\r\n    try {\r\n      responseCode = httpGet(urlPrefix + endpoint, result);\r\n    } catch (Exception e) {\r\n      responseCode = handleExceptions(e, urlPrefix + endpoint, HttpMethod.GET);\r\n    }\r\n    if (logEnabled) {\r\n      System.err.println(new StringBuilder(\"GET Request: \").append(urlPrefix).append(endpoint)\r\n            .append(\" | Response Code: \").append(responseCode).toString());\r\n    }\r\n    return getStatus(responseCode);\r\n  }\r\n\r\n  @Override\r\n  public Status insert(String table, String endpoint, Map<String, ByteIterator> values) {\r\n    int responseCode;\r\n    try {\r\n      responseCode = httpExecute(new HttpPost(urlPrefix + endpoint), values.get(\"data\").toString());\r\n    } catch (Exception e) {\r\n      responseCode = handleExceptions(e, urlPrefix + endpoint, HttpMethod.POST);\r\n    }\r\n    if (logEnabled) {\r\n      System.err.println(new StringBuilder(\"POST Request: \").append(urlPrefix).append(endpoint)\r\n            .append(\" | Response Code: \").append(responseCode).toString());\r\n    }\r\n    return getStatus(responseCode);\r\n  }\r\n\r\n  @Override\r\n  public Status 
delete(String table, String endpoint) {\r\n    int responseCode;\r\n    try {\r\n      responseCode = httpDelete(urlPrefix + endpoint);\r\n    } catch (Exception e) {\r\n      responseCode = handleExceptions(e, urlPrefix + endpoint, HttpMethod.DELETE);\r\n    }\r\n    if (logEnabled) {\r\n      System.err.println(new StringBuilder(\"DELETE Request: \").append(urlPrefix).append(endpoint)\r\n            .append(\" | Response Code: \").append(responseCode).toString());\r\n    }\r\n    return getStatus(responseCode);\r\n  }\r\n\r\n  @Override\r\n  public Status update(String table, String endpoint, Map<String, ByteIterator> values) {\r\n    int responseCode;\r\n    try {\r\n      responseCode = httpExecute(new HttpPut(urlPrefix + endpoint), values.get(\"data\").toString());\r\n    } catch (Exception e) {\r\n      responseCode = handleExceptions(e, urlPrefix + endpoint, HttpMethod.PUT);\r\n    }\r\n    if (logEnabled) {\r\n      System.err.println(new StringBuilder(\"PUT Request: \").append(urlPrefix).append(endpoint)\r\n            .append(\" | Response Code: \").append(responseCode).toString());\r\n    }\r\n    return getStatus(responseCode);\r\n  }\r\n\r\n  @Override\r\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\r\n      Vector<HashMap<String, ByteIterator>> result) {\r\n    return Status.NOT_IMPLEMENTED;\r\n  }\r\n\r\n  // Maps HTTP status codes to YCSB status codes.\r\n  private Status getStatus(int responseCode) {\r\n    int rc = responseCode / 100;\r\n    if (responseCode == 400) {\r\n      return Status.BAD_REQUEST;\r\n    } else if (responseCode == 403) {\r\n      return Status.FORBIDDEN;\r\n    } else if (responseCode == 404) {\r\n      return Status.NOT_FOUND;\r\n    } else if (responseCode == 501) {\r\n      return Status.NOT_IMPLEMENTED;\r\n    } else if (responseCode == 503) {\r\n      return Status.SERVICE_UNAVAILABLE;\r\n    } else if (rc == 5) {\r\n      return Status.ERROR;\r\n    }\r\n    return 
Status.OK;\r\n  }\r\n\r\n  private int handleExceptions(Exception e, String url, String method) {\r\n    if (logEnabled) {\r\n      System.err.println(new StringBuilder(method).append(\" Request: \").append(url).append(\" | \")\r\n          .append(e.getClass().getName()).append(\" occurred | Error message: \")\r\n          .append(e.getMessage()).toString());\r\n    }\r\n      \r\n    if (e instanceof ClientProtocolException) {\r\n      return 400;\r\n    }\r\n    return 500;\r\n  }\r\n\r\n  // Connection is automatically released back in case of an exception.\r\n  private int httpGet(String endpoint, Map<String, ByteIterator> result) throws IOException {\r\n    requestTimedout.setIsSatisfied(false);\r\n    Thread timer = new Thread(new Timer(execTimeout, requestTimedout));\r\n    timer.start();\r\n    int responseCode = 200;\r\n    HttpGet request = new HttpGet(endpoint);\r\n    for (int i = 0; i < headers.length; i = i + 2) {\r\n      request.setHeader(headers[i], headers[i + 1]);\r\n    }\r\n    CloseableHttpResponse response = client.execute(request);\r\n    responseCode = response.getStatusLine().getStatusCode();\r\n    HttpEntity responseEntity = response.getEntity();\r\n    // If null entity don't bother about connection release.\r\n    if (responseEntity != null) {\r\n      InputStream stream = responseEntity.getContent();\r\n      /*\r\n       * TODO: Gzip Compression must be supported in the future. 
Header[]\r\n       * header = response.getAllHeaders();\r\n       * if(response.getHeaders(\"Content-Encoding\")[0].getValue().contains\r\n       * (\"gzip\")) stream = new GZIPInputStream(stream);\r\n       */\r\n      BufferedReader reader = new BufferedReader(new InputStreamReader(stream, \"UTF-8\"));\r\n      StringBuffer responseContent = new StringBuffer();\r\n      String line = \"\";\r\n      while ((line = reader.readLine()) != null) {\r\n        if (requestTimedout.isSatisfied()) {\r\n          // Must avoid memory leak.\r\n          reader.close();\r\n          stream.close();\r\n          EntityUtils.consumeQuietly(responseEntity);\r\n          response.close();\r\n          client.close();\r\n          throw new TimeoutException();\r\n        }\r\n        responseContent.append(line);\r\n      }\r\n      timer.interrupt();\r\n      result.put(\"response\", new StringByteIterator(responseContent.toString()));\r\n      // Closing the input stream will trigger connection release.\r\n      stream.close();\r\n    }\r\n    EntityUtils.consumeQuietly(responseEntity);\r\n    response.close();\r\n    client.close();\r\n    return responseCode;\r\n  }\r\n\r\n  private int httpExecute(HttpEntityEnclosingRequestBase request, String data) throws IOException {\r\n    requestTimedout.setIsSatisfied(false);\r\n    Thread timer = new Thread(new Timer(execTimeout, requestTimedout));\r\n    timer.start();\r\n    int responseCode = 200;\r\n    for (int i = 0; i < headers.length; i = i + 2) {\r\n      request.setHeader(headers[i], headers[i + 1]);\r\n    }\r\n    InputStreamEntity reqEntity = new InputStreamEntity(new ByteArrayInputStream(data.getBytes()),\r\n          ContentType.APPLICATION_FORM_URLENCODED);\r\n    reqEntity.setChunked(true);\r\n    request.setEntity(reqEntity);\r\n    CloseableHttpResponse response = client.execute(request);\r\n    responseCode = response.getStatusLine().getStatusCode();\r\n    HttpEntity responseEntity = response.getEntity();\r\n    // 
If null entity don't bother about connection release.\r\n    if (responseEntity != null) {\r\n      InputStream stream = responseEntity.getContent();\r\n      if (compressedResponse) {\r\n        stream = new GZIPInputStream(stream); \r\n      }\r\n      BufferedReader reader = new BufferedReader(new InputStreamReader(stream, \"UTF-8\"));\r\n      StringBuffer responseContent = new StringBuffer();\r\n      String line = \"\";\r\n      while ((line = reader.readLine()) != null) {\r\n        if (requestTimedout.isSatisfied()) {\r\n          // Must avoid memory leak.\r\n          reader.close();\r\n          stream.close();\r\n          EntityUtils.consumeQuietly(responseEntity);\r\n          response.close();\r\n          client.close();\r\n          throw new TimeoutException();\r\n        }\r\n        responseContent.append(line);\r\n      }\r\n      timer.interrupt();\r\n      // Closing the input stream will trigger connection release.\r\n      stream.close();\r\n    }\r\n    EntityUtils.consumeQuietly(responseEntity);\r\n    response.close();\r\n    client.close();\r\n    return responseCode;\r\n  }\r\n  \r\n  private int httpDelete(String endpoint) throws IOException {\r\n    requestTimedout.setIsSatisfied(false);\r\n    Thread timer = new Thread(new Timer(execTimeout, requestTimedout));\r\n    timer.start();\r\n    int responseCode = 200;\r\n    HttpDelete request = new HttpDelete(endpoint);\r\n    for (int i = 0; i < headers.length; i = i + 2) {\r\n      request.setHeader(headers[i], headers[i + 1]);\r\n    }\r\n    CloseableHttpResponse response = client.execute(request);\r\n    responseCode = response.getStatusLine().getStatusCode();\r\n    response.close();\r\n    client.close();\r\n    return responseCode;\r\n  }\r\n\r\n  /**\r\n   * Marks the input {@link Criteria} as satisfied when the input time has elapsed.\r\n   */\r\n  class Timer implements Runnable {\r\n\r\n    private long timeout;\r\n    private Criteria timedout;\r\n\r\n    public Timer(long 
timeout, Criteria timedout) {\r\n      this.timedout = timedout;\r\n      this.timeout = timeout;\r\n    }\r\n\r\n    @Override\r\n    public void run() {\r\n      try {\r\n        Thread.sleep(timeout);\r\n        this.timedout.setIsSatisfied(true);\r\n      } catch (InterruptedException e) {\r\n        // Do nothing.\r\n      }\r\n    }\r\n\r\n  }\r\n\r\n  /**\r\n   * Sets the flag when a criteria is fulfilled.\r\n   */\r\n  class Criteria {\r\n\r\n    private boolean isSatisfied;\r\n\r\n    public Criteria(boolean isSatisfied) {\r\n      this.isSatisfied = isSatisfied;\r\n    }\r\n\r\n    public boolean isSatisfied() {\r\n      return isSatisfied;\r\n    }\r\n\r\n    public void setIsSatisfied(boolean satisfied) {\r\n      this.isSatisfied = satisfied;\r\n    }\r\n\r\n  }\r\n\r\n  /**\r\n   * Private exception class for execution timeout.\r\n   */\r\n  class TimeoutException extends RuntimeException {\r\n\r\n    private static final long serialVersionUID = 1L;\r\n    \r\n    public TimeoutException() {\r\n      super(\"HTTP Request exceeded execution time limit.\");\r\n    }\r\n\r\n  }\r\n\r\n}\r\n"
  },
  {
    "path": "rest/src/main/java/com/yahoo/ycsb/webservice/rest/package-info.java",
    "content": "/**\r\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\r\n *\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n *\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n *\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\n/**\r\n * YCSB binding for RESTful Web Services.\r\n */\r\npackage com.yahoo.ycsb.webservice.rest;\r\n\r\n"
  },
  {
    "path": "rest/src/test/java/com/yahoo/ycsb/webservice/rest/IntegrationTest.java",
    "content": "/**\r\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\r\n *\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n *\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n *\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.webservice.rest;\r\n\r\nimport static org.junit.Assert.assertEquals;\r\n\r\nimport java.io.File;\r\nimport java.io.FileNotFoundException;\r\nimport java.io.IOException;\r\nimport java.util.List;\r\n\r\nimport javax.servlet.ServletException;\r\n\r\nimport org.apache.catalina.Context;\r\nimport org.apache.catalina.LifecycleException;\r\nimport org.apache.catalina.startup.Tomcat;\r\nimport org.glassfish.jersey.server.ResourceConfig;\r\nimport org.glassfish.jersey.servlet.ServletContainer;\r\nimport org.junit.AfterClass;\r\nimport org.junit.BeforeClass;\r\nimport org.junit.FixMethodOrder;\r\nimport org.junit.Rule;\r\nimport org.junit.Test;\r\nimport org.junit.contrib.java.lang.system.Assertion;\r\nimport org.junit.contrib.java.lang.system.ExpectedSystemExit;\r\nimport org.junit.runners.MethodSorters;\r\n\r\nimport com.yahoo.ycsb.Client;\r\nimport com.yahoo.ycsb.DBException;\r\nimport com.yahoo.ycsb.webservice.rest.Utils;\r\n\r\n/**\r\n * Integration test cases to verify the end to end working of the rest-binding\r\n * module. It performs these steps in order. 1. Runs an embedded Tomcat\r\n * server with a mock RESTFul web service. 2. 
Invokes the {@link Client} \r\n * class with the required parameters to start benchmarking the mock REST\r\n * service. 3. Compares the response stored in the output file by {@link Client}\r\n * class with the response expected. 4. Stops the embedded Tomcat server.\r\n * Cases for verifying the handling of different HTTP status like 2xx & 5xx have\r\n * been included in success and failure test cases.\r\n */\r\n@FixMethodOrder(MethodSorters.NAME_ASCENDING)\r\npublic class IntegrationTest {\r\n\r\n  @Rule\r\n  public final ExpectedSystemExit exit = ExpectedSystemExit.none();\r\n\r\n  private static int port = 8080;\r\n  private static Tomcat tomcat;\r\n  private static final String WORKLOAD_FILEPATH =  IntegrationTest.class.getClassLoader().getResource(\"workload_rest\").getPath();\r\n  private static final String TRACE_FILEPATH = IntegrationTest.class.getClassLoader().getResource(\"trace.txt\").getPath();\r\n  private static final String ERROR_TRACE_FILEPATH = IntegrationTest.class.getClassLoader().getResource(\"error_trace.txt\").getPath();\r\n  private static final String RESULTS_FILEPATH = IntegrationTest.class.getClassLoader().getResource(\".\").getPath() + \"results.txt\";\r\n\r\n  @BeforeClass\r\n  public static void init() throws ServletException, LifecycleException, FileNotFoundException, IOException,\r\n      DBException, InterruptedException {\r\n    String webappDirLocation =  IntegrationTest.class.getClassLoader().getResource(\"WebContent\").getPath();\r\n    while (!Utils.available(port)) {\r\n      port++;\r\n    }\r\n    tomcat = new Tomcat();\r\n    tomcat.setPort(Integer.valueOf(port));\r\n    Context context = tomcat.addWebapp(\"/webService\", new File(webappDirLocation).getAbsolutePath());\r\n    Tomcat.addServlet(context, \"jersey-container-servlet\", resourceConfig());\r\n    context.addServletMapping(\"/rest/*\", \"jersey-container-servlet\");\r\n    tomcat.start();\r\n    // Allow time for proper startup.\r\n    Thread.sleep(1000);\r\n  
}\r\n\r\n  @AfterClass\r\n  public static void cleanUp() throws LifecycleException {\r\n    tomcat.stop();\r\n  }\r\n\r\n  // All read operations during benchmark are executed successfully with an HTTP OK status.\r\n  @Test\r\n  public void testReadOpsBenchmarkSuccess() throws InterruptedException {\r\n    exit.expectSystemExit();\r\n    exit.checkAssertionAfterwards(new Assertion() {\r\n      @Override\r\n      public void checkAssertion() throws Exception {\r\n        List<String> results = Utils.read(RESULTS_FILEPATH);\r\n        assertEquals(true, results.contains(\"[READ], Return=OK, 1\"));\r\n        Utils.delete(RESULTS_FILEPATH);\r\n      }\r\n    });\r\n    Client.main(getArgs(TRACE_FILEPATH, 1, 0, 0, 0));\r\n  }\r\n  \r\n  //All read operations during benchmark are executed with an HTTP 500 error.\r\n  @Test\r\n  public void testReadOpsBenchmarkFailure() throws InterruptedException {\r\n    exit.expectSystemExit();\r\n    exit.checkAssertionAfterwards(new Assertion() {\r\n      @Override\r\n      public void checkAssertion() throws Exception {\r\n        List<String> results = Utils.read(RESULTS_FILEPATH);\r\n        assertEquals(true, results.contains(\"[READ], Return=ERROR, 1\"));\r\n        Utils.delete(RESULTS_FILEPATH);\r\n      }\r\n    });\r\n    Client.main(getArgs(ERROR_TRACE_FILEPATH, 1, 0, 0, 0));\r\n  }\r\n  \r\n  //All insert operations during benchmark are executed successfully with an HTTP OK status.\r\n  @Test\r\n  public void testInsertOpsBenchmarkSuccess() throws InterruptedException {\r\n    exit.expectSystemExit();\r\n    exit.checkAssertionAfterwards(new Assertion() {\r\n      @Override\r\n      public void checkAssertion() throws Exception {\r\n        List<String> results = Utils.read(RESULTS_FILEPATH);\r\n        assertEquals(true, results.contains(\"[INSERT], Return=OK, 1\"));\r\n        Utils.delete(RESULTS_FILEPATH);\r\n      }\r\n    });\r\n    Client.main(getArgs(TRACE_FILEPATH, 0, 1, 0, 0));\r\n  }\r\n  \r\n  //All insert
 operations during benchmark are executed with an HTTP 500 error.\r\n  @Test\r\n  public void testInsertOpsBenchmarkFailure() throws InterruptedException {\r\n    exit.expectSystemExit();\r\n    exit.checkAssertionAfterwards(new Assertion() {\r\n      @Override\r\n      public void checkAssertion() throws Exception {\r\n        List<String> results = Utils.read(RESULTS_FILEPATH);\r\n        assertEquals(true, results.contains(\"[INSERT], Return=ERROR, 1\"));\r\n        Utils.delete(RESULTS_FILEPATH);\r\n      }\r\n    });\r\n    Client.main(getArgs(ERROR_TRACE_FILEPATH, 0, 1, 0, 0));\r\n  }\r\n\r\n  //All update operations during benchmark are executed successfully with an HTTP OK status.\r\n  @Test\r\n  public void testUpdateOpsBenchmarkSuccess() throws InterruptedException {\r\n    exit.expectSystemExit();\r\n    exit.checkAssertionAfterwards(new Assertion() {\r\n      @Override\r\n      public void checkAssertion() throws Exception {\r\n        List<String> results = Utils.read(RESULTS_FILEPATH);\r\n        assertEquals(true, results.contains(\"[UPDATE], Return=OK, 1\"));\r\n        Utils.delete(RESULTS_FILEPATH);\r\n      }\r\n    });\r\n    Client.main(getArgs(TRACE_FILEPATH, 0, 0, 1, 0));\r\n  }\r\n  \r\n  //All update operations during benchmark are executed with an HTTP 500 error.\r\n  @Test\r\n  public void testUpdateOpsBenchmarkFailure() throws InterruptedException {\r\n    exit.expectSystemExit();\r\n    exit.checkAssertionAfterwards(new Assertion() {\r\n      @Override\r\n      public void checkAssertion() throws Exception {\r\n        List<String> results = Utils.read(RESULTS_FILEPATH);\r\n        assertEquals(true, results.contains(\"[UPDATE], Return=ERROR, 1\"));\r\n        Utils.delete(RESULTS_FILEPATH);\r\n      }\r\n    });\r\n    Client.main(getArgs(ERROR_TRACE_FILEPATH, 0, 0, 1, 0));\r\n  }\r\n\r\n  //All delete operations during benchmark are executed successfully with an HTTP OK status.\r\n  @Test\r\n  public void testDeleteOpsBenchmarkSuccess() 
throws InterruptedException {\r\n    exit.expectSystemExit();\r\n    exit.checkAssertionAfterwards(new Assertion() {\r\n      @Override\r\n      public void checkAssertion() throws Exception {\r\n        List<String> results = Utils.read(RESULTS_FILEPATH);\r\n        assertEquals(true, results.contains(\"[DELETE], Return=OK, 1\"));\r\n        Utils.delete(RESULTS_FILEPATH);\r\n      }\r\n    });\r\n    Client.main(getArgs(TRACE_FILEPATH, 0, 0, 0, 1));\r\n  }\r\n  \r\n  //All delete operations during benchmark are executed with an HTTP 500 error.\r\n  @Test\r\n  public void testDeleteOpsBenchmarkFailure() throws InterruptedException {\r\n    exit.expectSystemExit();\r\n    exit.checkAssertionAfterwards(new Assertion() {\r\n      @Override\r\n      public void checkAssertion() throws Exception {\r\n        List<String> results = Utils.read(RESULTS_FILEPATH);\r\n        assertEquals(true, results.contains(\"[DELETE], Return=ERROR, 1\"));\r\n        Utils.delete(RESULTS_FILEPATH);\r\n      }\r\n    });\r\n    Client.main(getArgs(ERROR_TRACE_FILEPATH, 0, 0, 0, 1));\r\n  }\r\n\r\n  private String[] getArgs(String traceFilePath, float rp, float ip, float up, float dp) {\r\n    String[] args = new String[25];\r\n    args[0] = \"-target\";\r\n    args[1] = \"1\";\r\n    args[2] = \"-t\";\r\n    args[3] = \"-P\";\r\n    args[4] = WORKLOAD_FILEPATH;\r\n    args[5] = \"-p\";\r\n    args[6] = \"url.prefix=http://127.0.0.1:\"+port+\"/webService/rest/resource/\";\r\n    args[7] = \"-p\";\r\n    args[8] = \"url.trace.read=\" + traceFilePath;\r\n    args[9] = \"-p\";\r\n    args[10] = \"url.trace.insert=\" + traceFilePath;\r\n    args[11] = \"-p\";\r\n    args[12] = \"url.trace.update=\" + traceFilePath;\r\n    args[13] = \"-p\";\r\n    args[14] = \"url.trace.delete=\" + traceFilePath;\r\n    args[15] = \"-p\";\r\n    args[16] = \"exportfile=\" + RESULTS_FILEPATH;\r\n    args[17] = \"-p\";\r\n    args[18] = \"readproportion=\" + rp;\r\n    args[19] = \"-p\";\r\n    args[20] = 
\"updateproportion=\" + up;\r\n    args[21] = \"-p\";\r\n    args[22] = \"deleteproportion=\" + dp;\r\n    args[23] = \"-p\";\r\n    args[24] = \"insertproportion=\" + ip;\r\n    return args;\r\n  }\r\n\r\n  private static ServletContainer resourceConfig() {\r\n    return new ServletContainer(new ResourceConfig(new ResourceLoader().getClasses()));\r\n  }\r\n\r\n}"
  },
  {
    "path": "rest/src/test/java/com/yahoo/ycsb/webservice/rest/ResourceLoader.java",
    "content": "/**\r\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\r\n *\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n *\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n *\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.webservice.rest;\r\n\r\nimport java.util.HashSet;\r\nimport java.util.Set;\r\n\r\nimport javax.ws.rs.core.Application;\r\n\r\n/**\r\n * Class responsible for loading mock rest resource class like\r\n * {@link RestTestResource}.\r\n */\r\npublic class ResourceLoader extends Application {\r\n\r\n  @Override\r\n  public Set<Class<?>> getClasses() {\r\n    final Set<Class<?>> classes = new HashSet<Class<?>>();\r\n    classes.add(RestTestResource.class);\r\n    return classes;\r\n  }\r\n\r\n}"
  },
  {
    "path": "rest/src/test/java/com/yahoo/ycsb/webservice/rest/RestClientTest.java",
    "content": "/**\r\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\r\n *\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n *\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n *\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.webservice.rest;\r\n\r\nimport static org.junit.Assert.assertEquals;\r\n\r\nimport java.io.File;\r\nimport java.io.FileReader;\r\nimport java.io.IOException;\r\nimport java.util.HashMap;\r\nimport java.util.Properties;\r\n\r\nimport javax.servlet.ServletException;\r\n\r\nimport org.apache.catalina.Context;\r\nimport org.apache.catalina.LifecycleException;\r\nimport org.apache.catalina.startup.Tomcat;\r\nimport org.glassfish.jersey.server.ResourceConfig;\r\nimport org.glassfish.jersey.servlet.ServletContainer;\r\nimport org.junit.AfterClass;\r\nimport org.junit.BeforeClass;\r\nimport org.junit.Test;\r\n\r\nimport com.yahoo.ycsb.ByteIterator;\r\nimport com.yahoo.ycsb.DBException;\r\nimport com.yahoo.ycsb.Status;\r\nimport com.yahoo.ycsb.StringByteIterator;\r\n\r\n/**\r\n * Test cases to verify the {@link RestClient} of the rest-binding\r\n * module. It performs these steps in order. 1. Runs an embedded Tomcat\r\n * server with a mock RESTFul web service. 2. Invokes the {@link RestClient} \r\n * class for all the various methods which make HTTP calls to the mock REST\r\n * service. 3. Compares the response from such calls to the mock REST\r\n * service with the response expected. 4. 
Stops the embedded Tomcat server.\r\n * Cases for verifying the handling of different HTTP status like 2xx, 4xx &\r\n * 5xx have been included in success and failure test cases.\r\n */\r\npublic class RestClientTest {\r\n\r\n  private static Integer port = 8080;\r\n  private static Tomcat tomcat;\r\n  private static RestClient rc = new RestClient();\r\n  private static final String RESPONSE_TAG = \"response\";\r\n  private static final String DATA_TAG = \"data\";\r\n  private static final String VALID_RESOURCE = \"resource_valid\";\r\n  private static final String INVALID_RESOURCE = \"resource_invalid\";\r\n  private static final String ABSENT_RESOURCE = \"resource_absent\";\r\n  private static final String UNAUTHORIZED_RESOURCE = \"resource_unauthorized\";\r\n  private static final String INPUT_DATA = \"<field1>one</field1><field2>two</field2>\";\r\n\r\n  @BeforeClass\r\n  public static void init() throws IOException, DBException, ServletException, LifecycleException, InterruptedException {\r\n    String webappDirLocation =  IntegrationTest.class.getClassLoader().getResource(\"WebContent\").getPath();\r\n    while (!Utils.available(port)) {\r\n      port++;\r\n    }\r\n    tomcat = new Tomcat();\r\n    tomcat.setPort(Integer.valueOf(port));\r\n    Context context = tomcat.addWebapp(\"/webService\", new File(webappDirLocation).getAbsolutePath());\r\n    Tomcat.addServlet(context, \"jersey-container-servlet\", resourceConfig());\r\n    context.addServletMapping(\"/rest/*\", \"jersey-container-servlet\");\r\n    tomcat.start();\r\n    // Allow time for proper startup.\r\n    Thread.sleep(1000);\r\n    Properties props = new Properties();\r\n    props.load(new FileReader(RestClientTest.class.getClassLoader().getResource(\"workload_rest\").getPath()));\r\n    // Update the port value in the url.prefix property.\r\n    props.setProperty(\"url.prefix\", props.getProperty(\"url.prefix\").replaceAll(\"PORT\", port.toString()));\r\n    rc.setProperties(props);\r\n    
rc.init();\r\n  }\r\n\r\n  @AfterClass\r\n  public static void cleanUp() throws DBException {\r\n    rc.cleanup();\r\n  }\r\n  \r\n  // Read success.\r\n  @Test\r\n  public void read_200() {\r\n    HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\r\n    Status status = rc.read(null, VALID_RESOURCE, null, result);\r\n    assertEquals(Status.OK, status);\r\n    assertEquals(result.get(RESPONSE_TAG).toString(), \"HTTP GET response to: \"+ VALID_RESOURCE);\r\n  }\r\n  \r\n  // Unauthorized request error.\r\n  @Test\r\n  public void read_403() {\r\n    HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\r\n    Status status = rc.read(null, UNAUTHORIZED_RESOURCE, null, result);\r\n    assertEquals(Status.FORBIDDEN, status);\r\n  }\r\n  \r\n  //Not found error.\r\n  @Test\r\n  public void read_404() {\r\n    HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\r\n    Status status = rc.read(null, ABSENT_RESOURCE, null, result);\r\n    assertEquals(Status.NOT_FOUND, status);\r\n  }\r\n  \r\n  // Server error.\r\n  @Test\r\n  public void read_500() {\r\n    HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\r\n    Status status = rc.read(null, INVALID_RESOURCE, null, result);\r\n    assertEquals(Status.ERROR, status);\r\n  }\r\n  \r\n  // Insert success.\r\n  @Test\r\n  public void insert_200() {\r\n    HashMap<String, ByteIterator> data = new HashMap<String, ByteIterator>();\r\n    data.put(DATA_TAG, new StringByteIterator(INPUT_DATA));\r\n    Status status = rc.insert(null, VALID_RESOURCE, data);\r\n    assertEquals(Status.OK, status);\r\n  }\r\n  \r\n  @Test\r\n  public void insert_403() {\r\n    HashMap<String, ByteIterator> data = new HashMap<String, ByteIterator>();\r\n    data.put(DATA_TAG, new StringByteIterator(INPUT_DATA));\r\n    Status status = rc.insert(null, UNAUTHORIZED_RESOURCE, data);\r\n    assertEquals(Status.FORBIDDEN, status);\r\n  }\r\n  \r\n  @Test\r\n  
public void insert_404() {\r\n    HashMap<String, ByteIterator> data = new HashMap<String, ByteIterator>();\r\n    data.put(DATA_TAG, new StringByteIterator(INPUT_DATA));\r\n    Status status = rc.insert(null, ABSENT_RESOURCE, data);\r\n    assertEquals(Status.NOT_FOUND, status);\r\n  }\r\n  \r\n  @Test\r\n  public void insert_500() {\r\n    HashMap<String, ByteIterator> data = new HashMap<String, ByteIterator>();\r\n    data.put(DATA_TAG, new StringByteIterator(INPUT_DATA));\r\n    Status status = rc.insert(null, INVALID_RESOURCE, data);\r\n    assertEquals(Status.ERROR, status);\r\n  }\r\n\r\n  // Delete success.\r\n  @Test\r\n  public void delete_200() {\r\n    Status status = rc.delete(null, VALID_RESOURCE);\r\n    assertEquals(Status.OK, status);\r\n  }\r\n  \r\n  @Test\r\n  public void delete_403() {\r\n    Status status = rc.delete(null, UNAUTHORIZED_RESOURCE);\r\n    assertEquals(Status.FORBIDDEN, status);\r\n  }\r\n  \r\n  @Test\r\n  public void delete_404() {\r\n    Status status = rc.delete(null, ABSENT_RESOURCE);\r\n    assertEquals(Status.NOT_FOUND, status);\r\n  }\r\n\r\n  @Test\r\n  public void delete_500() {\r\n    Status status = rc.delete(null, INVALID_RESOURCE);\r\n    assertEquals(Status.ERROR, status);\r\n  }\r\n  \r\n  @Test\r\n  public void update_200() {\r\n    HashMap<String, ByteIterator> data = new HashMap<String, ByteIterator>();\r\n    data.put(DATA_TAG, new StringByteIterator(INPUT_DATA));\r\n    Status status = rc.update(null, VALID_RESOURCE, data);\r\n    assertEquals(Status.OK, status);\r\n  }\r\n  \r\n  @Test\r\n  public void update_403() {\r\n    HashMap<String, ByteIterator> data = new HashMap<String, ByteIterator>();\r\n    data.put(DATA_TAG, new StringByteIterator(INPUT_DATA));\r\n    Status status = rc.update(null, UNAUTHORIZED_RESOURCE, data);\r\n    assertEquals(Status.FORBIDDEN, status);\r\n  }\r\n  \r\n  @Test\r\n  public void update_404() {\r\n    HashMap<String, ByteIterator> data = new HashMap<String, 
ByteIterator>();\r\n    data.put(DATA_TAG, new StringByteIterator(INPUT_DATA));\r\n    Status status = rc.update(null, ABSENT_RESOURCE, data);\r\n    assertEquals(Status.NOT_FOUND, status);\r\n  }\r\n  \r\n  @Test\r\n  public void update_500() {\r\n    HashMap<String, ByteIterator> data = new HashMap<String, ByteIterator>();\r\n    data.put(DATA_TAG, new StringByteIterator(INPUT_DATA));\r\n    Status status = rc.update(null, INVALID_RESOURCE, data);\r\n    assertEquals(Status.ERROR, status);\r\n  }\r\n  \r\n  @Test\r\n  public void scan() {\r\n    assertEquals(Status.NOT_IMPLEMENTED, rc.scan(null, null, 0, null, null));\r\n  }\r\n\r\n  private static ServletContainer resourceConfig() {\r\n    return new ServletContainer(new ResourceConfig(new ResourceLoader().getClasses()));\r\n  }\r\n\r\n}\r\n"
  },
  {
    "path": "rest/src/test/java/com/yahoo/ycsb/webservice/rest/RestTestResource.java",
    "content": "/**\r\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\r\n *\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n *\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n *\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.webservice.rest;\r\n\r\nimport javax.ws.rs.DELETE;\r\nimport javax.ws.rs.GET;\r\nimport javax.ws.rs.HttpMethod;\r\nimport javax.ws.rs.POST;\r\nimport javax.ws.rs.PUT;\r\nimport javax.ws.rs.Path;\r\nimport javax.ws.rs.PathParam;\r\nimport javax.ws.rs.Produces;\r\nimport javax.ws.rs.core.MediaType;\r\nimport javax.ws.rs.core.Response;\r\n\r\n/**\r\n * Class that implements a mock RESTFul web service to be used for integration\r\n * testing.\r\n */\r\n@Path(\"/resource/{id}\")\r\npublic class RestTestResource {\r\n\r\n  @GET\r\n  @Produces(MediaType.TEXT_PLAIN)\r\n  public Response respondToGET(@PathParam(\"id\") String id) {\r\n    return processRequests(id, HttpMethod.GET);\r\n  }\r\n  \r\n  @POST\r\n  @Produces(MediaType.TEXT_PLAIN)\r\n  public Response respondToPOST(@PathParam(\"id\") String id) {\r\n    return processRequests(id, HttpMethod.POST);\r\n  }\r\n\r\n  @DELETE\r\n  @Produces(MediaType.TEXT_PLAIN)\r\n  public Response respondToDELETE(@PathParam(\"id\") String id) {\r\n    return processRequests(id, HttpMethod.DELETE);\r\n  }\r\n\r\n  @PUT\r\n  @Produces(MediaType.TEXT_PLAIN)\r\n  public Response respondToPUT(@PathParam(\"id\") String id) {\r\n    return processRequests(id, HttpMethod.PUT);\r\n  }\r\n  \r\n  private 
static Response processRequests(String id, String method) {\r\n    if (id.equals(\"resource_invalid\")) {\r\n      return Response.serverError().build();\r\n    } else if (id.equals(\"resource_absent\")) {\r\n      return Response.status(Response.Status.NOT_FOUND).build();\r\n    } else if (id.equals(\"resource_unauthorized\")) {\r\n      return Response.status(Response.Status.FORBIDDEN).build();\r\n    }\r\n    return Response.ok(\"HTTP \" + method + \" response to: \" + id).build();\r\n  }\r\n}"
  },
  {
    "path": "rest/src/test/java/com/yahoo/ycsb/webservice/rest/Utils.java",
    "content": "/**\r\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\r\n *\r\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\r\n * may not use this file except in compliance with the License. You\r\n * may obtain a copy of the License at\r\n *\r\n * http://www.apache.org/licenses/LICENSE-2.0\r\n *\r\n * Unless required by applicable law or agreed to in writing, software\r\n * distributed under the License is distributed on an \"AS IS\" BASIS,\r\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\r\n * implied. See the License for the specific language governing\r\n * permissions and limitations under the License. See accompanying\r\n * LICENSE file.\r\n */\r\n\r\npackage com.yahoo.ycsb.webservice.rest;\r\n\r\nimport java.io.BufferedReader;\r\nimport java.io.File;\r\nimport java.io.FileReader;\r\nimport java.io.IOException;\r\nimport java.net.DatagramSocket;\r\nimport java.net.ServerSocket;\r\nimport java.util.ArrayList;\r\nimport java.util.List;\r\n\r\n/**\r\n * Holds the common utility methods.\r\n */\r\npublic class Utils {\r\n\r\n  /**\r\n   * Returns true if the port is available.\r\n   * \r\n   * @param port\r\n   * @return isAvailable\r\n   */\r\n  public static boolean available(int port) {\r\n    ServerSocket ss = null;\r\n    DatagramSocket ds = null;\r\n    try {\r\n      ss = new ServerSocket(port);\r\n      ss.setReuseAddress(true);\r\n      ds = new DatagramSocket(port);\r\n      ds.setReuseAddress(true);\r\n      return true;\r\n    } catch (IOException e) {\r\n    } finally {\r\n      if (ds != null) {\r\n        ds.close();\r\n      }\r\n      if (ss != null) {\r\n        try {\r\n          ss.close();\r\n        } catch (IOException e) {\r\n          /* should not be thrown */\r\n        }\r\n      }\r\n    }\r\n    return false;\r\n  }\r\n\r\n  public static List<String> read(String filepath) {\r\n    List<String> list = new ArrayList<String>();\r\n    try {\r\n      BufferedReader file = 
new BufferedReader(new FileReader(filepath));\r\n      String line = null;\r\n      while ((line = file.readLine()) != null) {\r\n        list.add(line.trim());\r\n      }\r\n      file.close();\r\n    } catch (IOException e) {\r\n      e.printStackTrace();\r\n    }\r\n    return list;\r\n  }\r\n\r\n  public static void delete(String filepath) {\r\n    try {\r\n      new File(filepath).delete();\r\n    } catch (Exception e) {\r\n      e.printStackTrace();\r\n    }\r\n  }\r\n\r\n}\r\n"
  },
  {
    "path": "rest/src/test/resources/WebContent/index.html",
    "content": "<!DOCTYPE html>\n<html>\n \n <head>\n  <meta charset=\"ISO-8859-1\">\n  <title>rest-binding</title>\n </head>\n\n <body>\n  Welcome to the rest-binding integration test cases!\n </body>\n\n</html>"
  },
  {
    "path": "rest/src/test/resources/error_trace.txt",
    "content": "resource_invalid"
  },
  {
    "path": "rest/src/test/resources/trace.txt",
    "content": "resource_1\r\nresource_2\r\nresource_3\r\nresource_4\r\nresource_5"
  },
  {
    "path": "rest/src/test/resources/workload_rest",
    "content": "# Copyright (c) 2016 Yahoo! Inc. All rights reserved.                                                                                                                             \r\n#                                                                                                                                                                                 \r\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you                                                                                                             \r\n# may not use this file except in compliance with the License. You                                                                                                                \r\n# may obtain a copy of the License at                                                                                                                                             \r\n#                                                                                                                                                                                 \r\n# http://www.apache.org/licenses/LICENSE-2.0                                                                                                                                      \r\n#                                                                                                                                                                                 \r\n# Unless required by applicable law or agreed to in writing, software                                                                                                             \r\n# distributed under the License is distributed on an \"AS IS\" BASIS,                                                                                                               \r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or                                                                                               
                  \r\n# implied. See the License for the specific language governing                                                                                                                    \r\n# permissions and limitations under the License. See accompanying                                                                                                                 \r\n# LICENSE file.                                                                                                                                                                   \r\n\r\n\r\n# Yahoo! Cloud System Benchmark\r\n# Workload A: Update heavy workload\r\n#   Application example: Session store recording recent actions\r\n#                        \r\n#   Read/update ratio: 50/50\r\n#   Default data size: 1 KB records (10 fields, 100 bytes each, plus key)\r\n#   Request distribution: zipfian\r\n\r\n#\tCore Properties\r\nworkload=com.yahoo.ycsb.workloads.RestWorkload\r\ndb=com.yahoo.ycsb.webservice.rest.RestClient\r\nexporter=com.yahoo.ycsb.measurements.exporter.TextMeasurementsExporter\r\nthreadcount=1\r\nfieldlengthdistribution=uniform\r\nmeasurementtype=hdrhistogram\r\n\r\n#\tWorkload Properties\r\nfieldcount=1\r\nfieldlength=2500\r\nreadproportion=1\r\nupdateproportion=0\r\ndeleteproportion=0\r\ninsertproportion=0\r\nrequestdistribution=zipfian\r\noperationcount=1\r\nmaxexecutiontime=720\r\n\r\n#\tCustom Properties\r\nurl.prefix=http://127.0.0.1:PORT/webService/rest/resource/\r\nurl.trace.read=/src/test/resource/trace.txt\r\nurl.trace.insert=/src/test/resource/trace.txt\r\nurl.trace.update=/src/test/resource/trace.txt\r\nurl.trace.delete=/src/test/resource/trace.txt\r\n# Header must be separated by space. 
Other delimiters might occur as header values and hence cannot be used.\r\nheaders=Accept */* Accept-Language en-US,en;q=0.5 Content-Type application/x-www-form-urlencoded user-agent Mozilla/5.0 Connection close\r\ntimeout.con=60\r\ntimeout.read=60\r\ntimeout.exec=60\r\nlog.enable=false\r\nreadrecordcount=10000\r\ninsertrecordcount=5000\r\ndeleterecordcount=1000\r\nupdaterecordcount=1000\r\nreadzipfconstant=0.9\r\ninsertzipfconstant=0.9\r\nupdatezipfconstant=0.9\r\ndeletezipfconstant=0.9\r\n\r\n\r\n#\tMeasurement Properties\r\nhdrhistogram.percentiles=50,90,95,99\r\nhistogram.buckets=1"
  },
  {
    "path": "riak/README.md",
    "content": "<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\nCopyright 2014 Basho Technologies, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\nRiak KV Client for Yahoo! Cloud System Benchmark (YCSB)\n=======================================================\n\nThe Riak KV YCSB client is designed to work with the Yahoo! Cloud System Benchmark (YCSB) project (https://github.com/brianfrankcooper/YCSB) to support performance testing for the 2.x.y line of the Riak KV database.\n\nCreating a <i>bucket-type</i> to use with YCSB\n----------------------------\n\nPerform the following operations on your Riak cluster to configure it for the benchmarks.\n\nSet the default backend for Riak to <i>LevelDB</i> in the `riak.conf` file of every node of your cluster. This is required to support <i>secondary indexes</i>, which are used for the `scan` transactions. You can do this by modifying the proper line as shown below.\n\n```\nstorage_backend = leveldb\n```\nAfter this, create a bucket type named \"ycsb\"<sup id=\"a1\">[1](#f1)</sup> by logging into one of the nodes in your cluster. Now you're ready to set up the cluster to operate using one between strong and eventual consistency model as shown in the next two subsections. \n\n###Strong consistency model \n\nTo use the <i>strong consistency model</i> (default), you need to follow the next two steps.\n\n1. 
In every `riak.conf` file, search for the `##strong_consistency=on` line and uncomment it. It's important that you do this <b>before you start your cluster</b>!\n2. Run the following `riak-admin` commands:\n\n  ```\n  riak-admin bucket-type create ycsb '{\"props\":{\"consistent\":true}}'\n  riak-admin bucket-type activate ycsb\n  ```\n\nWhen using this model, you **may want to specify the number of replicas to create for each object**<sup id=\"a2\">[2](#f2)</sup>: the *R* and *W* parameters (see next section) will in fact be ignored. The only information needed by this consistency model is how many nodes the system has to successfully query to consider a transaction completed. To set this parameter, you can add `\"n_val\":N` to the list of properties shown above (by default `N` is set to 3).\n\n#### A note on scan transactions\nCurrently, `scan` transactions are not _directly_ supported, as there is no suitable means to perform them properly. This will not cause the benchmark to fail; it simply won't perform any scan transactions at all (these will immediately return with a `Status.NOT_IMPLEMENTED` code).\n\nHowever, a possible workaround has been provided: since Riak doesn't allow strong-consistent bucket-types to use secondary indexes, we can create an eventually consistent one just to store (*key*, *2i indexes*) pairs. This bucket will later be used only to obtain the keys under which the objects are located, which will then be used to retrieve the actual objects from the strong-consistent bucket. 
If you want to use this workaround, then you have to create and activate a \"_fake bucket-type_\" using the following commands:\n```\nriak-admin bucket-type create fakeBucketType '{\"props\":{\"allow_mult\":\"false\",\"n_val\":1,\"dvv_enabled\":false,\"last_write_wins\":true}}'\nriak-admin bucket-type activate fakeBucketType\n```\nA bucket-type defined this way isn't allowed to _create siblings_ (`\"allow_mult\":\"false\"`), it has just _one replica_ (`\"n_val\":1`) which stores the _last value provided_ (`\"last_write_wins\":true`), and _vector clocks_ are used instead of _dotted version vectors_ (`\"dvv_enabled\":false`). Note that setting `\"n_val\":1` means the `scan` transactions won't be very *fault-tolerant*: if a node fails, many of them could potentially fail as well. You may of course increase this value, but that choice will necessarily put more load on the cluster. So, the choice is yours to make!\nThen you have to set the `riak.strong_consistent_scans_bucket_type` property (see next section) to the name you gave to the aforementioned \"fake bucket-type\" (e.g. `fakeBucketType` in this case).\n\nPlease note that this workaround involves **two store operations for each insert transaction**: one to store the actual object and another to save the corresponding 2i index. In practice, the client won't notice any difference, as the latter operation is performed asynchronously. 
However, the cluster will obviously be under more load, which is why the proposed \"fake bucket-type\" is designed to demand as few resources as possible.\n\n### Eventual consistency model\n\nIf you want to use the <i>eventual consistency model</i> implemented in Riak, you just have to type:\n```\nriak-admin bucket-type create ycsb '{\"props\":{\"allow_mult\":\"false\"}}'\nriak-admin bucket-type activate ycsb\n```\n\nRiak KV configuration parameters\n----------------------------\nYou can either specify these configuration parameters via the command line or set them in the `riak.properties` file.\n\n* `riak.hosts` - <b>string list</b>, comma-separated list of IPs or FQDNs. For example: `riak.hosts=127.0.0.1,127.0.0.2,127.0.0.3` or `riak.hosts=riak1.mydomain.com,riak2.mydomain.com,riak3.mydomain.com`.\n* `riak.port` - <b>int</b>, the port on which every node is listening. It must match the one specified in the `riak.conf` file at the line `listener.protobuf.internal`.\n* `riak.bucket_type` - <b>string</b>, it must match the name of the bucket type created during setup (see section above).\n* `riak.r_val` - <b>int</b>, this value represents the number of Riak nodes that must return results for a read operation before the transaction is considered successfully completed. 
\n* `riak.w_val` - <b>int</b>, this value represents the number of Riak nodes that must report success before an insert/update transaction is considered complete.\n* `riak.read_retry_count` - <b>int</b>, the number of times the client will try to read a key from Riak.\n* `riak.wait_time_before_retry` - <b>int</b>, the time (in milliseconds) before the client attempts to perform another read if the previous one failed.\n* `riak.transaction_time_limit` - <b>int</b>, the time (in seconds) the client waits before aborting the current transaction.\n* `riak.strong_consistency` - <b>boolean</b>, indicates whether to use *strong consistency* (true) or *eventual consistency* (false).\n* `riak.strong_consistent_scans_bucket_type` - **string**, indicates the bucket-type to use to allow scan transactions when using the strong consistency model.\n* `riak.debug` - <b>boolean</b>, enables debug mode. This displays all the properties (specified or default) when a benchmark is started. Moreover, it shows error causes whenever these occur.\n\n<b>Note</b>: For more information on workloads and how to run them, please see: https://github.com/brianfrankcooper/YCSB/wiki/Running-a-Workload\n\n<b id=\"f1\">1</b> As specified in the `riak.properties` file. See the configuration parameters section for further info. [↩](#a1)\n\n<b id=\"f2\">2</b> More info about properly setting up a fault-tolerant cluster can be found at http://docs.basho.com/riak/kv/2.1.4/configuring/strong-consistency/#enabling-strong-consistency. [↩](#a2)\n\n"
  },
  {
    "path": "riak/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\nCopyright 2014 Basho Technologies, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>riak-binding</artifactId>\n  <name>Riak KV Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.basho.riak</groupId>\n      <artifactId>riak-client</artifactId>\n      <version>2.0.5</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.google.collections</groupId>\n      <artifactId>google-collections</artifactId>\n      <version>1.0</version>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>4.12</version>\n      <scope>test</scope>\n    
</dependency>\n  </dependencies>\n\n</project>\n"
  },
  {
    "path": "riak/src/main/java/com/yahoo/ycsb/db/riak/RiakKVClient.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors All rights reserved.\n * Copyright 2014 Basho Technologies, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.riak;\n\nimport com.basho.riak.client.api.commands.buckets.StoreBucketProperties;\nimport com.basho.riak.client.api.commands.kv.StoreValue;\nimport com.basho.riak.client.api.commands.kv.UpdateValue;\nimport com.basho.riak.client.core.RiakFuture;\nimport com.basho.riak.client.core.query.RiakObject;\nimport com.basho.riak.client.core.query.indexes.LongIntIndex;\nimport com.basho.riak.client.core.util.BinaryValue;\nimport com.yahoo.ycsb.*;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.util.*;\nimport java.util.concurrent.TimeUnit;\nimport java.util.concurrent.TimeoutException;\n\nimport com.basho.riak.client.api.RiakClient;\nimport com.basho.riak.client.api.cap.Quorum;\nimport com.basho.riak.client.api.commands.indexes.IntIndexQuery;\nimport com.basho.riak.client.api.commands.kv.DeleteValue;\nimport com.basho.riak.client.api.commands.kv.FetchValue;\nimport com.basho.riak.client.core.RiakCluster;\nimport com.basho.riak.client.core.RiakNode;\nimport com.basho.riak.client.core.query.Location;\nimport com.basho.riak.client.core.query.Namespace;\n\nimport static com.yahoo.ycsb.db.riak.RiakUtils.createResultHashMap;\nimport static com.yahoo.ycsb.db.riak.RiakUtils.getKeyAsLong;\nimport static 
com.yahoo.ycsb.db.riak.RiakUtils.serializeTable;\n\n/**\n * Riak KV 2.x.y client for YCSB framework.\n *\n */\npublic class RiakKVClient extends DB {\n  private static final String HOST_PROPERTY = \"riak.hosts\";\n  private static final String PORT_PROPERTY = \"riak.port\";\n  private static final String BUCKET_TYPE_PROPERTY = \"riak.bucket_type\";\n  private static final String R_VALUE_PROPERTY = \"riak.r_val\";\n  private static final String W_VALUE_PROPERTY = \"riak.w_val\";\n  private static final String READ_RETRY_COUNT_PROPERTY = \"riak.read_retry_count\";\n  private static final String WAIT_TIME_BEFORE_RETRY_PROPERTY = \"riak.wait_time_before_retry\";\n  private static final String TRANSACTION_TIME_LIMIT_PROPERTY = \"riak.transaction_time_limit\";\n  private static final String STRONG_CONSISTENCY_PROPERTY = \"riak.strong_consistency\";\n  private static final String STRONG_CONSISTENT_SCANS_BUCKET_TYPE_PROPERTY = \"riak.strong_consistent_scans_bucket_type\";\n  private static final String DEBUG_PROPERTY = \"riak.debug\";\n\n  private static final Status TIME_OUT = new Status(\"TIME_OUT\", \"Cluster didn't respond after maximum wait time.\");\n\n  private String[] hosts;\n  private int port;\n  private String bucketType;\n  private String bucketType2i;\n  private Quorum rvalue;\n  private Quorum wvalue;\n  private int readRetryCount;\n  private int waitTimeBeforeRetry;\n  private int transactionTimeLimit;\n  private boolean strongConsistency;\n  private String strongConsistentScansBucketType;\n  private boolean performStrongConsistentScans;\n  private boolean debug;\n\n  private RiakClient riakClient;\n  private RiakCluster riakCluster;\n\n  private void loadDefaultProperties() {\n    InputStream propFile = RiakKVClient.class.getClassLoader().getResourceAsStream(\"riak.properties\");\n    Properties propsPF = new Properties(System.getProperties());\n\n    try {\n      propsPF.load(propFile);\n    } catch (IOException e) {\n      e.printStackTrace();\n    }\n\n 
   hosts = propsPF.getProperty(HOST_PROPERTY).split(\",\");\n    port = Integer.parseInt(propsPF.getProperty(PORT_PROPERTY));\n    bucketType = propsPF.getProperty(BUCKET_TYPE_PROPERTY);\n    rvalue = new Quorum(Integer.parseInt(propsPF.getProperty(R_VALUE_PROPERTY)));\n    wvalue = new Quorum(Integer.parseInt(propsPF.getProperty(W_VALUE_PROPERTY)));\n    readRetryCount = Integer.parseInt(propsPF.getProperty(READ_RETRY_COUNT_PROPERTY));\n    waitTimeBeforeRetry = Integer.parseInt(propsPF.getProperty(WAIT_TIME_BEFORE_RETRY_PROPERTY));\n    transactionTimeLimit = Integer.parseInt(propsPF.getProperty(TRANSACTION_TIME_LIMIT_PROPERTY));\n    strongConsistency = Boolean.parseBoolean(propsPF.getProperty(STRONG_CONSISTENCY_PROPERTY));\n    strongConsistentScansBucketType = propsPF.getProperty(STRONG_CONSISTENT_SCANS_BUCKET_TYPE_PROPERTY);\n    debug = Boolean.parseBoolean(propsPF.getProperty(DEBUG_PROPERTY));\n  }\n\n  private void loadProperties() {\n    // First, load the default properties...\n    loadDefaultProperties();\n\n    // ...then, check for some props set at command line!\n    Properties props = getProperties();\n\n    String portString = props.getProperty(PORT_PROPERTY);\n    if (portString != null) {\n      port = Integer.parseInt(portString);\n    }\n\n    String hostsString = props.getProperty(HOST_PROPERTY);\n    if (hostsString != null) {\n      hosts = hostsString.split(\",\");\n    }\n\n    String bucketTypeString = props.getProperty(BUCKET_TYPE_PROPERTY);\n    if (bucketTypeString != null) {\n      bucketType = bucketTypeString;\n    }\n\n    String rValueString = props.getProperty(R_VALUE_PROPERTY);\n    if (rValueString != null) {\n      rvalue = new Quorum(Integer.parseInt(rValueString));\n    }\n\n    String wValueString = props.getProperty(W_VALUE_PROPERTY);\n    if (wValueString != null) {\n      wvalue = new Quorum(Integer.parseInt(wValueString));\n    }\n\n    String readRetryCountString = props.getProperty(READ_RETRY_COUNT_PROPERTY);\n    if 
(readRetryCountString != null) {\n      readRetryCount = Integer.parseInt(readRetryCountString);\n    }\n\n    String waitTimeBeforeRetryString = props.getProperty(WAIT_TIME_BEFORE_RETRY_PROPERTY);\n    if (waitTimeBeforeRetryString != null) {\n      waitTimeBeforeRetry = Integer.parseInt(waitTimeBeforeRetryString);\n    }\n\n    String transactionTimeLimitString = props.getProperty(TRANSACTION_TIME_LIMIT_PROPERTY);\n    if (transactionTimeLimitString != null) {\n      transactionTimeLimit = Integer.parseInt(transactionTimeLimitString);\n    }\n\n    String strongConsistencyString = props.getProperty(STRONG_CONSISTENCY_PROPERTY);\n    if (strongConsistencyString != null) {\n      strongConsistency = Boolean.parseBoolean(strongConsistencyString);\n    }\n\n    String strongConsistentScansBucketTypeString = props.getProperty(STRONG_CONSISTENT_SCANS_BUCKET_TYPE_PROPERTY);\n    if (strongConsistentScansBucketTypeString != null) {\n      strongConsistentScansBucketType = strongConsistentScansBucketTypeString;\n    }\n\n    String debugString = props.getProperty(DEBUG_PROPERTY);\n    if (debugString != null) {\n      debug = Boolean.parseBoolean(debugString);\n    }\n  }\n\n  public void init() throws DBException {\n    loadProperties();\n\n    RiakNode.Builder builder = new RiakNode.Builder().withRemotePort(port);\n    List<RiakNode> nodes = RiakNode.Builder.buildNodes(builder, Arrays.asList(hosts));\n    riakCluster = new RiakCluster.Builder(nodes).build();\n\n    try {\n      riakCluster.start();\n      riakClient = new RiakClient(riakCluster);\n    } catch (Exception e) {\n      System.err.println(\"Unable to properly start up the cluster. 
Reason: \" + e.toString());\n      throw new DBException(e);\n    }\n\n    // If strong consistency is in use, we need to change the bucket-type where the 2i indexes will be stored.\n    if (strongConsistency && !strongConsistentScansBucketType.isEmpty()) {\n      // The 2i indexes have to be stored in the appositely created strongConsistentScansBucketType: this however has\n      // to be done only if the user actually created it! So, if the latter doesn't exist, then the scan transactions\n      // will not be performed at all.\n      bucketType2i = strongConsistentScansBucketType;\n      performStrongConsistentScans = true;\n    } else {\n      // If instead eventual consistency is in use, then the 2i indexes have to be stored in the bucket-type\n      // indicated with the bucketType variable.\n      bucketType2i = bucketType;\n      performStrongConsistentScans = false;\n    }\n\n    if (debug) {\n      System.err.println(\"DEBUG ENABLED. Configuration parameters:\");\n      System.err.println(\"-----------------------------------------\");\n      System.err.println(\"Hosts: \" + Arrays.toString(hosts));\n      System.err.println(\"Port: \" + port);\n      System.err.println(\"Bucket Type: \" + bucketType);\n      System.err.println(\"R Val: \" + rvalue.toString());\n      System.err.println(\"W Val: \" + wvalue.toString());\n      System.err.println(\"Read Retry Count: \" + readRetryCount);\n      System.err.println(\"Wait Time Before Retry: \" + waitTimeBeforeRetry + \" ms\");\n      System.err.println(\"Transaction Time Limit: \" + transactionTimeLimit + \" s\");\n      System.err.println(\"Consistency model: \" + (strongConsistency ? \"Strong\" : \"Eventual\"));\n\n      if (strongConsistency) {\n        System.err.println(\"Strong Consistent Scan Transactions \" +  (performStrongConsistentScans ? \"\" : \"NOT \") +\n            \"allowed.\");\n      }\n    }\n  }\n\n  /**\n   * Read a record from the database. 
Each field/value pair from the result will be stored in a HashMap.\n   *\n   * @param table  The name of the table (Riak bucket)\n   * @param key    The record key of the record to read.\n   * @param fields The list of fields to read, or null for all of them\n   * @param result A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    Location location = new Location(new Namespace(bucketType, table), key);\n    FetchValue fv = new FetchValue.Builder(location).withOption(FetchValue.Option.R, rvalue).build();\n    FetchValue.Response response;\n\n    try {\n      response = fetch(fv);\n\n      if (response.isNotFound()) {\n        if (debug) {\n          System.err.println(\"Unable to read key \" + key + \". Reason: NOT FOUND\");\n        }\n\n        return Status.NOT_FOUND;\n      }\n    } catch (TimeoutException e) {\n      if (debug) {\n        System.err.println(\"Unable to read key \" + key + \". Reason: TIME OUT\");\n      }\n\n      return TIME_OUT;\n    } catch (Exception e) {\n      if (debug) {\n        System.err.println(\"Unable to read key \" + key + \". Reason: \" + e.toString());\n      }\n\n      return Status.ERROR;\n    }\n\n    // Create the result HashMap.\n    HashMap<String, ByteIterator> partialResult = new HashMap<>();\n    createResultHashMap(fields, response, partialResult);\n    result.putAll(partialResult);\n    return Status.OK;\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. 
Each field/value pair from the result will be stored in\n   * a HashMap.\n   * Note: The scan operation requires the use of secondary indexes (2i) and LevelDB.\n   *\n   * @param table       The name of the table (Riak bucket)\n   * @param startkey    The record key of the first record to read.\n   * @param recordcount The number of records to read\n   * @param fields      The list of fields to read, or null for all of them\n   * @param result      A Vector of HashMaps, where each HashMap is a set of field/value pairs for one record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\n                     Vector<HashMap<String, ByteIterator>> result) {\n    if (strongConsistency && !performStrongConsistentScans) {\n      return Status.NOT_IMPLEMENTED;\n    }\n\n    // The strong consistent bucket-type is not capable of storing 2i indexes. So, we need to read them from the fake\n    // one (which we use only to store indexes). This is why, when using such a consistency model, the bucketType2i\n    // variable is set to strongConsistentScansBucketType.\n    IntIndexQuery iiq = new IntIndexQuery\n        .Builder(new Namespace(bucketType2i, table), \"key\", getKeyAsLong(startkey), Long.MAX_VALUE)\n        .withMaxResults(recordcount)\n        .withPaginationSort(true)\n        .build();\n\n    Location location;\n    RiakFuture<IntIndexQuery.Response, IntIndexQuery> future = riakClient.executeAsync(iiq);\n\n    try {\n      IntIndexQuery.Response response = future.get(transactionTimeLimit, TimeUnit.SECONDS);\n      List<IntIndexQuery.Response.Entry> entries = response.getEntries();\n\n      // If no entries were retrieved, then something bad happened...\n      if (entries.isEmpty()) {\n        if (debug) {\n          System.err.println(\"Unable to scan any record starting from key \" + startkey + \", aborting transaction. 
\" +\n              \"Reason: NOT FOUND\");\n        }\n\n        return Status.NOT_FOUND;\n      }\n\n      for (IntIndexQuery.Response.Entry entry : entries) {\n        // If strong consistency is in use, then the actual location of the object we want to read is obtained by\n        // fetching the key from the one retrieved with the 2i indexes search operation.\n        if (strongConsistency) {\n          location = new Location(new Namespace(bucketType, table), entry.getRiakObjectLocation().getKeyAsString());\n        } else {\n          location = entry.getRiakObjectLocation();\n        }\n\n        FetchValue fv = new FetchValue.Builder(location)\n            .withOption(FetchValue.Option.R, rvalue)\n            .build();\n\n        FetchValue.Response keyResponse = fetch(fv);\n\n        if (keyResponse.isNotFound()) {\n          if (debug) {\n            System.err.println(\"Unable to scan all requested records starting from key \" + startkey + \", aborting \" +\n                \"transaction. Reason: NOT FOUND\");\n          }\n\n          return Status.NOT_FOUND;\n        }\n\n        // Create the partial result to add to the result vector.\n        HashMap<String, ByteIterator> partialResult = new HashMap<>();\n        createResultHashMap(fields, keyResponse, partialResult);\n        result.add(partialResult);\n      }\n    } catch (TimeoutException e) {\n      if (debug) {\n        System.err.println(\"Unable to scan all requested records starting from key \" + startkey + \", aborting \" +\n            \"transaction. Reason: TIME OUT\");\n      }\n\n      return TIME_OUT;\n    } catch (Exception e) {\n      if (debug) {\n        System.err.println(\"Unable to scan all records starting from key \" + startkey + \", aborting transaction. \" +\n            \"Reason: \" + e.toString());\n      }\n\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  /**\n   * Tries to perform a read and, whenever it fails, retries to do it. 
It retries as many times as indicated,\n   * even if the riakClient.executeAsync(fv) call throws an exception. This is needed for those situations in which the\n   * cluster is unable to respond properly due to overload. Note however that if the cluster doesn't respond after\n   * transactionTimeLimit, the transaction is discarded immediately.\n   *\n   * @param fv The value to fetch from the cluster.\n   */\n  private FetchValue.Response fetch(FetchValue fv) throws TimeoutException {\n    FetchValue.Response response = null;\n\n    for (int i = 0; i < readRetryCount; i++) {\n      RiakFuture<FetchValue.Response, Location> future = riakClient.executeAsync(fv);\n\n      try {\n        response = future.get(transactionTimeLimit, TimeUnit.SECONDS);\n\n        if (!response.isNotFound()) {\n          break;\n        }\n      } catch (TimeoutException e) {\n        // Let the caller decide how to handle this exception...\n        throw new TimeoutException();\n      } catch (Exception e) {\n        // Sleep for a few ms before retrying...\n        try {\n          Thread.sleep(waitTimeBeforeRetry);\n        } catch (InterruptedException e1) {\n          e1.printStackTrace();\n        }\n      }\n    }\n\n    return response;\n  }\n\n  /**\n   * Insert a record in the database. Any field/value pairs in the specified values HashMap will be written into the\n   * record with the specified record key. 
Also creates a secondary index (2i) for each record, consisting of the key\n   * converted to long, to be used for the scan operation.\n   *\n   * @param table  The name of the table (Riak bucket)\n   * @param key    The record key of the record to insert.\n   * @param values A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    Location location = new Location(new Namespace(bucketType, table), key);\n    RiakObject object = new RiakObject();\n\n    // Strong consistency doesn't support secondary indexing, but the eventually consistent model does. So, we can mock\n    // 2i usage by creating a fake object stored in an eventually consistent bucket-type with the SAME KEY AS THE\n    // ACTUAL OBJECT. The latter is stored in the strong consistent bucket-type indicated with the\n    // riak.bucket_type property.\n    if (strongConsistency && performStrongConsistentScans) {\n      // Create a fake object to store in the default bucket-type just to keep track of the 2i indices.\n      Location fakeLocation = new Location(new Namespace(strongConsistentScansBucketType, table), key);\n\n      // Obviously, we want the fake object to contain as little data as possible. 
We can't create an empty object, so\n      // we have to choose the minimum data size allowed: one byte.\n      RiakObject fakeObject = new RiakObject();\n      fakeObject.setValue(BinaryValue.create(new byte[]{0x00}));\n      fakeObject.getIndexes().getIndex(LongIntIndex.named(\"key_int\")).add(getKeyAsLong(key));\n\n      StoreValue fakeStore = new StoreValue.Builder(fakeObject)\n          .withLocation(fakeLocation)\n          .build();\n\n      // We don't mind whether the operation is finished or not, because waiting for it to complete would slow down the\n      // client and make our solution too heavy to be seen as a valid compromise. This will obviously mean that under\n      // heavy load conditions a scan operation could fail due to an unfinished \"fakeStore\".\n      riakClient.executeAsync(fakeStore);\n    } else if (!strongConsistency) {\n      // The next operation is useless when using the strong consistency model, so it's ok to perform it only when\n      // using eventual consistency.\n      object.getIndexes().getIndex(LongIntIndex.named(\"key_int\")).add(getKeyAsLong(key));\n    }\n\n    // Store proper values into the object.\n    object.setValue(BinaryValue.create(serializeTable(values)));\n\n    StoreValue store = new StoreValue.Builder(object)\n        .withOption(StoreValue.Option.W, wvalue)\n        .withLocation(location)\n        .build();\n\n    RiakFuture<StoreValue.Response, Location> future = riakClient.executeAsync(store);\n\n    try {\n      future.get(transactionTimeLimit, TimeUnit.SECONDS);\n    } catch (TimeoutException e) {\n      if (debug) {\n        System.err.println(\"Unable to insert key \" + key + \". Reason: TIME OUT\");\n      }\n\n      return TIME_OUT;\n    } catch (Exception e) {\n      if (debug) {\n        System.err.println(\"Unable to insert key \" + key + \". 
Reason: \" + e.toString());\n      }\n\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  /**\n   * Auxiliary class needed for object substitution within the update operation. It is a fundamental part of the\n   * fetch-update (locally)-store cycle described by Basho to properly perform a strongly consistent update.\n   */\n  private static final class UpdateEntity extends UpdateValue.Update<RiakObject> {\n    private final RiakObject object;\n\n    private UpdateEntity(RiakObject object) {\n      this.object = object;\n    }\n\n    // Simply returns the object.\n    @Override\n    public RiakObject apply(RiakObject original) {\n      return object;\n    }\n  }\n\n  /**\n   * Update a record in the database. Any field/value pairs in the specified values HashMap will be written into the\n   * record with the specified record key, overwriting any existing values with the same field name.\n   *\n   * @param table  The name of the table (Riak bucket)\n   * @param key    The record key of the record to write.\n   * @param values A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    // If the eventual consistency model is in use, then an update operation is practically equivalent to an insert.\n    if (!strongConsistency) {\n      return insert(table, key, values);\n    }\n\n    Location location = new Location(new Namespace(bucketType, table), key);\n\n    UpdateValue update = new UpdateValue.Builder(location)\n        .withUpdate(new UpdateEntity(new RiakObject().setValue(BinaryValue.create(serializeTable(values)))))\n        .build();\n\n    RiakFuture<UpdateValue.Response, Location> future = riakClient.executeAsync(update);\n\n    try {\n      // For some reason, the update transaction doesn't throw any exception when no cluster has been started, so one\n      // needs to check 
whether it was done or not. When calling the wasUpdated() function with no nodes available, a\n      // NullPointerException is thrown.\n      // Moreover, such an exception could be thrown when multiple threads are trying to update the same key or, more\n      // generally, when the system is being queried by many clients (i.e. overloaded). This is a known limitation of\n      // Riak KV's strong consistency implementation.\n      future.get(transactionTimeLimit, TimeUnit.SECONDS).wasUpdated();\n    } catch (TimeoutException e) {\n      if (debug) {\n        System.err.println(\"Unable to update key \" + key + \". Reason: TIME OUT\");\n      }\n\n      return TIME_OUT;\n    } catch (Exception e) {\n      if (debug) {\n        System.err.println(\"Unable to update key \" + key + \". Reason: \" + e.toString());\n      }\n\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table The name of the table (Riak bucket)\n   * @param key   The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error\n   */\n  @Override\n  public Status delete(String table, String key) {\n    Location location = new Location(new Namespace(bucketType, table), key);\n    DeleteValue dv = new DeleteValue.Builder(location).build();\n\n    RiakFuture<Void, Location> future = riakClient.executeAsync(dv);\n\n    try {\n      future.get(transactionTimeLimit, TimeUnit.SECONDS);\n    } catch (TimeoutException e) {\n      if (debug) {\n        System.err.println(\"Unable to delete key \" + key + \". Reason: TIME OUT\");\n      }\n\n      return TIME_OUT;\n    } catch (Exception e) {\n      if (debug) {\n        System.err.println(\"Unable to delete key \" + key + \". 
Reason: \" + e.toString());\n      }\n\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  public void cleanup() throws DBException {\n    try {\n      riakCluster.shutdown();\n    } catch (Exception e) {\n      System.err.println(\"Unable to properly shut down the cluster. Reason: \" + e.toString());\n      throw new DBException(e);\n    }\n  }\n\n  /**\n   * Auxiliary function needed for testing. It configures the default bucket-type to take care of the consistency\n   * problem by disallowing sibling creation. Moreover, it disables strong consistency, because we cannot create\n   * a proper bucket-type with which to fake 2i index usage.\n   *\n   * @param bucket     The bucket name.\n   * @throws Exception Thrown if something bad happens.\n   */\n  void setTestEnvironment(String bucket) throws Exception {\n    bucketType = \"default\";\n    bucketType2i = bucketType;\n    strongConsistency = false;\n\n    Namespace ns = new Namespace(bucketType, bucket);\n    StoreBucketProperties newBucketProperties = new StoreBucketProperties.Builder(ns).withAllowMulti(false).build();\n\n    riakClient.execute(newBucketProperties);\n  }\n}\n"
  },
  {
    "path": "riak/src/main/java/com/yahoo/ycsb/db/riak/RiakUtils.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors All rights reserved.\n * Copyright 2014 Basho Technologies, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.riak;\n\nimport java.io.*;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Set;\n\nimport com.basho.riak.client.api.commands.kv.FetchValue;\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\n\nimport static com.google.common.base.Preconditions.checkArgument;\n\n/**\n * Utility class for Riak KV Client.\n *\n */\nfinal class RiakUtils {\n\n  private RiakUtils() {\n    super();\n  }\n\n  private static byte[] toBytes(final int anInteger) {\n    byte[] aResult = new byte[4];\n\n    aResult[0] = (byte) (anInteger >> 24);\n    aResult[1] = (byte) (anInteger >> 16);\n    aResult[2] = (byte) (anInteger >> 8);\n    aResult[3] = (byte) (anInteger /* >> 0 */);\n\n    return aResult;\n  }\n\n  private static int fromBytes(final byte[] aByteArray) {\n    checkArgument(aByteArray.length == 4);\n\n    return (aByteArray[0] << 24) | (aByteArray[1] & 0xFF) << 16 | (aByteArray[2] & 0xFF) << 8 | (aByteArray[3] & 0xFF);\n  }\n\n  private static void close(final OutputStream anOutputStream) {\n    try {\n      anOutputStream.close();\n    } catch (IOException e) {\n      e.printStackTrace();\n    }\n  }\n\n  private static void close(final InputStream anInputStream) {\n    try {\n   
   anInputStream.close();\n    } catch (IOException e) {\n      e.printStackTrace();\n    }\n  }\n\n  /**\n   * Serializes a Map, transforming the contained list of (String, ByteIterator) pairs into a byte array.\n   *\n   * @param aTable A Map to serialize.\n   * @return A byte array containing the serialized table.\n   */\n  static byte[] serializeTable(Map<String, ByteIterator> aTable) {\n    final ByteArrayOutputStream anOutputStream = new ByteArrayOutputStream();\n    final Set<Map.Entry<String, ByteIterator>> theEntries = aTable.entrySet();\n\n    try {\n      for (final Map.Entry<String, ByteIterator> anEntry : theEntries) {\n        final byte[] aColumnName = anEntry.getKey().getBytes();\n\n        anOutputStream.write(toBytes(aColumnName.length));\n        anOutputStream.write(aColumnName);\n\n        final byte[] aColumnValue = anEntry.getValue().toArray();\n\n        anOutputStream.write(toBytes(aColumnValue.length));\n        anOutputStream.write(aColumnValue);\n      }\n      return anOutputStream.toByteArray();\n    } catch (IOException e) {\n      throw new IllegalStateException(e);\n    } finally {\n      close(anOutputStream);\n    }\n  }\n\n  /**\n   * Deserializes an input byte array, transforming it into a list of (String, ByteIterator) pairs (i.e. 
a Map).\n   *\n   * @param aValue    A byte array containing the table to deserialize.\n   * @param theResult A Map containing the deserialized table.\n     */\n  private static void deserializeTable(final byte[] aValue, final Map<String, ByteIterator> theResult) {\n    final ByteArrayInputStream anInputStream = new ByteArrayInputStream(aValue);\n    byte[] aSizeBuffer = new byte[4];\n\n    try {\n      while (anInputStream.available() > 0) {\n        anInputStream.read(aSizeBuffer);\n        final int aColumnNameLength = fromBytes(aSizeBuffer);\n\n        final byte[] aColumnNameBuffer = new byte[aColumnNameLength];\n        anInputStream.read(aColumnNameBuffer);\n\n        anInputStream.read(aSizeBuffer);\n        final int aColumnValueLength = fromBytes(aSizeBuffer);\n\n        final byte[] aColumnValue = new byte[aColumnValueLength];\n        anInputStream.read(aColumnValue);\n\n        theResult.put(new String(aColumnNameBuffer), new ByteArrayByteIterator(aColumnValue));\n      }\n    } catch (Exception e) {\n      throw new IllegalStateException(e);\n    } finally {\n      close(anInputStream);\n    }\n  }\n\n  /**\n   * Obtains a Long number from a key string. 
This will be the key used by Riak for all the transactions.\n   *\n   * @param key The key to convert from String to Long.\n   * @return A Long number parsed from the key String.\n   */\n  static Long getKeyAsLong(String key) {\n    String keyString = key.replaceFirst(\"[a-zA-Z]*\", \"\");\n\n    return Long.parseLong(keyString);\n  }\n\n  /**\n   * Function that retrieves all the fields searched within a read or scan operation and puts them in the result\n   * HashMap.\n   *\n   * @param fields        The list of fields to read, or null for all of them.\n   * @param response      The response of a fetch operation, containing the serialized record.\n   * @param resultHashMap The HashMap to return as result.\n   */\n  static void createResultHashMap(Set<String> fields, FetchValue.Response response,\n                                  HashMap<String, ByteIterator> resultHashMap) {\n    // If everything went fine, then a result must be given. Such an object is a hash table containing the (field,\n    // value) pairs based on the requested fields. Note that in a read operation, ONLY ONE OBJECT IS RETRIEVED!\n    // The following line retrieves the previously serialized table which was stored with an insert transaction.\n    byte[] responseFieldsAndValues = response.getValues().get(0).getValue().getValue();\n\n    // Deserialize the stored response table.\n    HashMap<String, ByteIterator> deserializedTable = new HashMap<>();\n    deserializeTable(responseFieldsAndValues, deserializedTable);\n\n    // If only specific fields are requested, then only these should be put in the result object!\n    if (fields != null) {\n      // Populate the HashMap to provide as result.\n      for (Object field : fields.toArray()) {\n        // Comparison between a requested field and the ones retrieved. If they're equal (i.e. 
the get() operation\n        // DOES NOT return a null value), then proceed to store the pair in the resultHashMap.\n        ByteIterator value = deserializedTable.get(field);\n\n        if (value != null) {\n          resultHashMap.put((String) field, value);\n        }\n      }\n    } else {\n      // If, instead, no field is specified, then all those retrieved must be provided as result.\n      for (String field : deserializedTable.keySet()) {\n        resultHashMap.put(field, deserializedTable.get(field));\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "riak/src/main/java/com/yahoo/ycsb/db/riak/package-info.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors All rights reserved.\n * Copyright 2014 Basho Technologies, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for <a href=\"http://basho.com/products/riak-kv/\">Riak KV</a> 2.x.y.\n *\n */\npackage com.yahoo.ycsb.db.riak;"
  },
  {
    "path": "riak/src/main/resources/riak.properties",
    "content": "##\n# Copyright (c) 2016 YCSB contributors All rights reserved.\n# Copyright 2014 Basho Technologies, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n\n# RiakKVClient - Default Properties\n# Note: Change the properties below to set the values to use for your test. You can set them either here or from the\n# command line. Note that the latter choice overrides these settings.\n\n# riak.hosts - string list, comma separated list of IPs or FQDNs.\n# EX: 127.0.0.1,127.0.0.2,127.0.0.3 or riak1.mydomain.com,riak2.mydomain.com,riak3.mydomain.com\nriak.hosts=127.0.0.1\n\n# riak.port - int, the port on which every node is listening. It must match the one specified in the riak.conf file\n# at the line \"listener.protobuf.internal\".\nriak.port=8087\n\n# riak.bucket_type - string, must match value of bucket type created during setup. 
See readme.md for more information.\nriak.bucket_type=ycsb\n\n# riak.r_val - int, the R value represents the number of Riak nodes that must return results for a read before the read\n# is considered successful.\nriak.r_val=2\n\n# riak.w_val - int, the W value represents the number of Riak nodes that must report success before an update is\n# considered complete.\nriak.w_val=2\n\n# riak.read_retry_count - int, number of times the client will try to read a key from Riak.\nriak.read_retry_count=5\n\n# riak.wait_time_before_retry - int, time (in milliseconds) the client waits before attempting to perform another\n# read if the previous one failed.\nriak.wait_time_before_retry=200\n\n# riak.transaction_time_limit - int, time (in seconds) the client waits before aborting the current transaction.\nriak.transaction_time_limit=10\n\n# riak.strong_consistency - boolean, indicates whether to use strong consistency (true) or eventual consistency (false).\nriak.strong_consistency=true\n\n# riak.strong_consistent_scans_bucket_type - string, indicates the bucket-type to use to allow scan transactions\n# when using strong consistency mode. Example: fakeBucketType.\nriak.strong_consistent_scans_bucket_type=\n\n# riak.debug - boolean, enables debug mode. This displays all the properties (specified or defaults) when a benchmark\n# is started.\nriak.debug=false\n"
  },
  {
    "path": "riak/src/test/java/com/yahoo/ycsb/db/riak/RiakKVClientTest.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors All rights reserved.\n * Copyright 2014 Basho Technologies, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.riak;\n\nimport java.util.*;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport org.junit.AfterClass;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\n\nimport static org.hamcrest.CoreMatchers.is;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assume.assumeNoException;\nimport static org.junit.Assume.assumeThat;\n\n/**\n * Integration tests for the Riak KV client.\n */\npublic class RiakKVClientTest {\n  private static RiakKVClient riakClient;\n\n  private static final String bucket = \"testBucket\";\n  private static final String keyPrefix = \"testKey\";\n  private static final int recordsToInsert = 20;\n  private static final int recordsToScan = 7;\n  private static final String firstField = \"Key number\";\n  private static final String secondField = \"Key number doubled\";\n  private static final String thirdField = \"Key number square\";\n\n  private static boolean testStarted = false;\n\n  /**\n   * Creates a cluster for testing purposes.\n   */\n  @BeforeClass\n  public static void setUpClass() throws Exception {\n    riakClient = new RiakKVClient();\n    riakClient.init();\n\n    // Set the test bucket 
environment with the appropriate parameters.\n    try {\n      riakClient.setTestEnvironment(bucket);\n    } catch (Exception e) {\n      assumeNoException(\"Unable to configure Riak KV for test, aborting.\", e);\n    }\n\n    // Just add some records to work on...\n    for (int i = 0; i < recordsToInsert; i++) {\n      // Abort the entire test whenever the dataset population operation fails.\n      assumeThat(\"Riak KV is NOT RUNNING, aborting test.\",\n          riakClient.insert(bucket, keyPrefix + String.valueOf(i), StringByteIterator.getByteIteratorMap(\n              createExpectedHashMap(i))),\n              is(Status.OK));\n    }\n\n    // Flag used to determine whether the test has started or not.\n    testStarted = true;\n  }\n\n  /**\n   * Shuts down the cluster created.\n   */\n  @AfterClass\n  public static void tearDownClass() throws Exception {\n    // Delete all added keys before cleanup ONLY IF TEST ACTUALLY STARTED.\n    if (testStarted) {\n      for (int i = 0; i <= recordsToInsert; i++) {\n        riakClient.delete(bucket, keyPrefix + Integer.toString(i));\n      }\n    }\n\n    riakClient.cleanup();\n  }\n\n  /**\n   * Test method for read transaction. 
It is designed to read two of the three fields stored for each key, to also test\n   * whether the createResultHashMap() function implemented in RiakUtils.java works as expected.\n   */\n  @Test\n  public void testRead() {\n    // Choose a random key to read, among the available ones.\n    int readKeyNumber = new Random().nextInt(recordsToInsert);\n\n    // Prepare two fields to read.\n    Set<String> fields = new HashSet<>();\n    fields.add(firstField);\n    fields.add(thirdField);\n\n    // Prepare an expected result.\n    HashMap<String, String> expectedValue = new HashMap<>();\n    expectedValue.put(firstField, Integer.toString(readKeyNumber));\n    expectedValue.put(thirdField, Integer.toString(readKeyNumber * readKeyNumber));\n\n    // Define a HashMap to store the actual result.\n    HashMap<String, ByteIterator> readValue = new HashMap<>();\n\n    // If a read transaction has been properly done, then one has to receive a Status.OK return from the read()\n    // function. Moreover, the actual returned result MUST match the expected one.\n    assertEquals(\"Read transaction FAILED.\",\n        Status.OK,\n        riakClient.read(bucket, keyPrefix + Integer.toString(readKeyNumber), fields, readValue));\n\n    assertEquals(\"Read test FAILED. Actual read transaction value is NOT MATCHING the expected one.\",\n        expectedValue.toString(),\n        readValue.toString());\n  }\n\n  /**\n   * Test method for scan transaction. A scan transaction has to be considered successfully completed only if all the\n   * requested values are read (i.e. scan transaction returns with Status.OK). 
Moreover, one has to check if the\n   * obtained results match the expected ones.\n   */\n  @Test\n  public void testScan() {\n    // Choose, among the available ones, a random key as starting point for the scan transaction.\n    int startScanKeyNumber = new Random().nextInt(recordsToInsert - recordsToScan);\n\n    // Prepare a HashMap vector to store the scan transaction results.\n    Vector<HashMap<String, ByteIterator>> scannedValues = new Vector<>();\n\n    // Check whether the scan transaction is correctly performed or not.\n    assertEquals(\"Scan transaction FAILED.\",\n        Status.OK,\n        riakClient.scan(bucket, keyPrefix + Integer.toString(startScanKeyNumber), recordsToScan, null,\n            scannedValues));\n\n    // After the scan transaction completes, compare the obtained results with the expected ones.\n    for (int i = 0; i < recordsToScan; i++) {\n      assertEquals(\"Scan test FAILED: the current scanned key is NOT MATCHING the expected one.\",\n          createExpectedHashMap(startScanKeyNumber + i).toString(),\n          scannedValues.get(i).toString());\n    }\n  }\n\n  /**\n   * Test method for update transaction. The test is designed to restore the previously read key. 
It is assumed to be\n   * correct when, after performing the update transaction, one reads the just provided values.\n   */\n  @Test\n  public void testUpdate() {\n    // Choose a random key to read, among the available ones.\n    int updateKeyNumber = new Random().nextInt(recordsToInsert);\n\n    // Define a HashMap to save the previously stored values for eventually restoring them.\n    HashMap<String, ByteIterator> readValueBeforeUpdate = new HashMap<>();\n    riakClient.read(bucket, keyPrefix + Integer.toString(updateKeyNumber), null, readValueBeforeUpdate);\n\n    // Prepare an update HashMap to store.\n    HashMap<String, String> updateValue = new HashMap<>();\n    updateValue.put(firstField, \"UPDATED\");\n    updateValue.put(secondField, \"UPDATED\");\n    updateValue.put(thirdField, \"UPDATED\");\n\n    // First of all, perform the update and check whether it's failed or not.\n    assertEquals(\"Update transaction FAILED.\",\n        Status.OK,\n        riakClient.update(bucket, keyPrefix + Integer.toString(updateKeyNumber), StringByteIterator\n            .getByteIteratorMap(updateValue)));\n\n    // Then, read the key again and...\n    HashMap<String, ByteIterator> readValueAfterUpdate = new HashMap<>();\n    assertEquals(\"Update test FAILED. Unable to read key value.\",\n        Status.OK,\n        riakClient.read(bucket, keyPrefix + Integer.toString(updateKeyNumber), null, readValueAfterUpdate));\n\n    // ...compare the result with the new one!\n    assertEquals(\"Update transaction NOT EXECUTED PROPERLY. Values DID NOT CHANGE.\",\n        updateValue.toString(),\n        readValueAfterUpdate.toString());\n\n    // Finally, restore the previously read key.\n    assertEquals(\"Update test FAILED. Unable to restore previous key value.\",\n        Status.OK,\n        riakClient.update(bucket, keyPrefix + Integer.toString(updateKeyNumber), readValueBeforeUpdate));\n  }\n\n  /**\n   * Test method for insert transaction. 
It is designed to insert a key just after the last key inserted in the setUp()\n   * phase.\n   */\n  @Test\n  public void testInsert() {\n    // Define a HashMap to insert and another one for the comparison operation.\n    HashMap<String, String> insertValue = createExpectedHashMap(recordsToInsert);\n    HashMap<String, ByteIterator> readValue = new HashMap<>();\n\n    // Check whether the insertion transaction was performed or not.\n    assertEquals(\"Insert transaction FAILED.\",\n        Status.OK,\n        riakClient.insert(bucket, keyPrefix + Integer.toString(recordsToInsert), StringByteIterator.\n            getByteIteratorMap(insertValue)));\n\n    // Finally, compare the insertion performed with the one expected by reading the key.\n    assertEquals(\"Insert test FAILED. Unable to read inserted value.\",\n        Status.OK,\n        riakClient.read(bucket, keyPrefix + Integer.toString(recordsToInsert), null, readValue));\n    assertEquals(\"Insert test FAILED. Actual read transaction value is NOT MATCHING the inserted one.\",\n        insertValue.toString(),\n        readValue.toString());\n  }\n\n  /**\n   * Test method for delete transaction. The test deletes a key, then performs a read that should give a\n   * Status.NOT_FOUND response. 
Finally, it restores the previously read key.\n   */\n  @Test\n  public void testDelete() {\n    // Choose a random key to delete, among the available ones.\n    int deleteKeyNumber = new Random().nextInt(recordsToInsert);\n\n    // Define a HashMap to save the previously stored values for its eventual restore.\n    HashMap<String, ByteIterator> readValueBeforeDelete = new HashMap<>();\n    riakClient.read(bucket, keyPrefix + Integer.toString(deleteKeyNumber), null, readValueBeforeDelete);\n\n    // First of all, delete the key.\n    assertEquals(\"Delete transaction FAILED.\",\n        Status.OK,\n        delete(keyPrefix + Integer.toString(deleteKeyNumber)));\n\n    // Then, check if the deletion was actually achieved.\n    assertEquals(\"Delete test FAILED. Key NOT deleted.\",\n        Status.NOT_FOUND,\n        riakClient.read(bucket, keyPrefix + Integer.toString(deleteKeyNumber), null, null));\n\n    // Finally, restore the previously deleted key.\n    assertEquals(\"Delete test FAILED. Unable to restore previous key value.\",\n        Status.OK,\n        riakClient.insert(bucket, keyPrefix + Integer.toString(deleteKeyNumber), readValueBeforeDelete));\n  }\n\n  private static Status delete(String key) {\n    return riakClient.delete(bucket, key);\n  }\n\n  private static HashMap<String, String> createExpectedHashMap(int value) {\n    HashMap<String, String> values = new HashMap<>();\n\n    values.put(firstField, Integer.toString(value));\n    values.put(secondField, Integer.toString(2 * value));\n    values.put(thirdField, Integer.toString(value * value));\n\n    return values;\n  }\n}\n"
  },
  {
    "path": "rocksdb/README.md",
    "content": "<!--\nCopyright (c) 2017 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on RocksDB.\n\n### 1. Set the paths of dependencies in pom.xml\n\nSet the path of rocksdbjni-{version}-{platform}.jar.\n\nSet the path of librocksdbjni-{platform}.so.\n\n### 2. Install Java and Maven\n\n### 3. Set Up YCSB\n\nGit clone YCSB and compile:\n\n    git clone http://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:rocksdb-binding -am clean package\n\n### 4. Load data and run tests\n\nLoad the data:\n\n    ./bin/ycsb load rocksdb -s -P workloads/workloada > outputLoad.txt\n\nRun the workload test:\n\n    ./bin/ycsb run rocksdb -s -P workloads/workloada > outputRun.txt\n"
  },
  {
    "path": "rocksdb/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n     <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.13.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n\n    <artifactId>rocksdb-binding</artifactId>\n    <name>RocksDB Binding</name>\n    <packaging>jar</packaging>\n\n <dependencies>\n\n    <dependency>\n      <groupId>org.rocksdb</groupId>\n      <artifactId>rocksdbjni</artifactId>\n      <scope>system</scope>\n      <version>5.7.2-linux64</version>\n      <systemPath>/home/compaction/rocksdb-5.7.2-slot/java/target/rocksdbjni-5.7.2-linux64.jar</systemPath>\n    </dependency>\n    <dependency>\n      <groupId>org.rocksdb</groupId>\n      <artifactId>librocksdbjni</artifactId>\n      <scope>system</scope>\n      <version>linux64</version>\n      <type>so</type>\n      <systemPath>/home/compaction/rocksdb-5.7.2-slot/java/target/librocksdbjni-linux64.so</systemPath>\n    </dependency>\n\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <!--<version>0.13.0-SNAPSHOT</version>-->\n      <scope>provided</scope>\n    </dependency>\n </dependencies>\n\n\n</project>"
  },
  {
    "path": "rocksdb/src/main/java/RocksdbClient.java",
    "content": "import com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport org.rocksdb.*;\nimport java.nio.ByteBuffer;\nimport java.util.*;\n\n/**\n * Created by Yosub on 10/2/14.\n * Modified by whb on 2017.10\n */\npublic class RocksdbClient extends DB {\n\n  static {\n    RocksDB.loadLibrary();\n  }\n\n  private static final String DB_PATH = \"/tmp/rocksdb_slot_modified_statsdump_100s_ycsb_1G_60G_onlyrun\";\n  private static final int BYTE_BUFFER_SIZE = 4096;\n\n  private Date date;\n  private RocksDB db;\n  private Options options;\n\n  public void init() throws DBException {\n    System.out.println(\"Initializing RocksDB...\");\n    date = new Date();\n    String dbPath = DB_PATH;\n    options = new Options();\n    options.setCreateIfMissing(true);\n    options.setStatsDumpPeriodSec(100);\n    options.setCompressionType(CompressionType.NO_COMPRESSION);\n    System.out.println(\"options.statsDumpPeriodSec() = \" + options.statsDumpPeriodSec());\n    try {\n      db = RocksDB.open(options, dbPath);\n    } catch (RocksDBException e) {\n      System.out.format(\"[ERROR] caught the unexpected exception -- %s\\n\", e);\n      assert(false);\n    }\n\n    System.out.println(\"RocksDB initialization complete\");\n  }\n\n  public void cleanup() throws DBException {\n    super.cleanup();\n    try {\n      System.out.println(\"end operation\");\n      String str = db.getProperty(\"rocksdb.stats\");\n      System.out.println(str);\n   //   System.out.println(\"Begin full compaction\");\n   //   db.compactRange();\n   //   str = db.getProperty(\"rocksdb.stats\");\n   //   System.out.println(str);\n    } catch (RocksDBException e) {\n      throw new DBException(\"Error while trying to print RocksDB statistics\");\n    }\n    System.out.println(\"Beginning sleep...\");\n    try {\n      for (int i = 1; i <= 10; i++) {\n        System.out.println(\"begin interval \" + i + \":\");\n        Thread.sleep(1000 * 60);\n        String str = db.getProperty(\"rocksdb.stats\");\n        System.out.println(\"after \" + i + \" mins\");\n        System.out.println(str);\n      }\n    } catch (RocksDBException e) {\n      throw new DBException(\"Error while trying to print RocksDB statistics while sleeping\");\n    } catch (InterruptedException e) {\n      e.printStackTrace();\n      System.out.println(\"Sleep interrupted\");\n    }\n    System.out.println(\"Disconnecting RocksDB database...\");\n    db.close();\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    try {\n      byte[] value = db.get(key.getBytes());\n      Map<String, ByteIterator> deserialized = deserialize(value);\n      result.putAll(deserialized);\n    } catch (RocksDBException e) {\n      System.out.format(\"[ERROR] caught the unexpected exception -- %s\\n\", e);\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n                  Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    System.out.println(\"Scan called! NOP for now\");\n    return Status.OK;\n  }\n\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      byte[] serialized = serialize(values);\n      db.put(key.getBytes(), serialized);\n    } catch (RocksDBException e) {\n      System.out.format(\"[ERROR] caught the unexpected exception -- %s\\n\", e);\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      byte[] serialized = serialize(values);\n      db.put(key.getBytes(), serialized);\n    } catch (RocksDBException e) {\n      System.out.format(\"[ERROR] caught the unexpected exception -- %s\\n\", e);\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      db.remove(key.getBytes());\n    } catch (RocksDBException e) {\n      System.out.format(\"[ERROR] caught the unexpected exception -- %s\\n\", e);\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n\n  private byte[] serialize(Map<String, ByteIterator> values) {\n    ByteBuffer buf = ByteBuffer.allocate(BYTE_BUFFER_SIZE);\n    // Number of entries in the map (int), matching the getInt() in deserialize().\n    buf.putInt(values.size());\n    for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {\n      byte[] keyBytes = entry.getKey().getBytes();\n      byte[] valueBytes = entry.getValue().toArray();\n      // Key length (int) followed by the key bytes.\n      buf.putInt(keyBytes.length);\n      buf.put(keyBytes);\n      // Value length (int) followed by the value bytes.\n      buf.putInt(valueBytes.length);\n      buf.put(valueBytes);\n    }\n\n    // Flip the buffer before reading back only the written portion.\n    buf.flip();\n    byte[] result = new byte[buf.remaining()];\n    buf.get(result);\n    return result;\n  }\n\n  private HashMap<String, ByteIterator> deserialize(byte[] bytes) {\n    HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();\n    ByteBuffer buf = ByteBuffer.wrap(bytes);\n    int count = buf.getInt();\n    for (int i = 0; i < count; i++) {\n      int keyLength = buf.getInt();\n      byte[] keyBytes = new byte[keyLength];\n      // Relative get(): reads keyLength bytes starting at the current position.\n      buf.get(keyBytes);\n\n      int valueLength = buf.getInt();\n      byte[] valueBytes = new byte[valueLength];\n      buf.get(valueBytes);\n\n      result.put(new String(keyBytes), new ByteArrayByteIterator(valueBytes));\n    }\n    return result;\n  }\n}\n"
  },
  {
    "path": "s3/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\nQuick Start\n===============\n### 1. Set Up YCSB\n\nDownload the YCSB from this website:\n\n    https://github.com/brianfrankcooper/YCSB/releases/\n\nYou can choose to download either the full stable version or just one of the available binding.\n\n### 2. Configuration of the AWS credentials\n\nThe access key ID and secret access key as well as the endPoint and region and the Client configurations like the maxErrorRetry can be set in a properties file under s3-binding/conf/s3.properties or sent by command line (see below).\nIt is highly suggested to use the property file instead of to send the credentials through the command line.\n    \n\n### 3. Run YCSB\n\nTo execute the benchmark using the S3 storage binding, first files must be uploaded using the \"load\" option with this command:\n\n       ./bin/ycsb load s3 -p table=theBucket -p s3.endPoint=s3.amazonaws.com -p s3.accessKeyId=yourAccessKeyId -p s3.secretKey=yourSecretKey -p fieldlength=10 -p fieldcount=20 -P workloads/workloada\n\nWith this command, the workload A will be executing with the loading phase. The file size is determined by the number of fields (fieldcount) and by the field size (fieldlength). 
In this case each file is 200 bytes (10 bytes for each field multiplied by 20 fields).\n\nRunning the command:\n\n       ./bin/ycsb run s3 -p table=theBucket -p s3.endPoint=s3.amazonaws.com -p s3.accessKeyId=yourAccessKeyId -p s3.secretKey=yourSecretKey -p fieldlength=10 -p fieldcount=20 -P workloads/workloada\n\nworkload A will be executed with a file size of 200 bytes.\n\n#### S3 Storage Configuration Parameters\n\nThe parameters to configure the S3 client can be set using the file \"s3-binding/conf/s3.properties\". This is highly advisable for the parameters s3.accessKeyId and s3.secretKey. All the other parameters can also be set on the command line. Here is the list of all the parameters that can be configured:\n\n- `table`\n  - This should be an S3 bucket name; it replaces the standard table name assigned by YCSB.\n\n- `s3.endpoint`\n  - This indicates the endpoint used to connect to the S3 Storage service.\n  - Default value is `s3.amazonaws.com`.\n\n- `s3.region`\n  - This indicates the region where your buckets are.\n  - Default value is `us-east-1`.\n\n- `s3.accessKeyId`\n  - This is the access key of your S3 account.\n\n- `s3.secretKey`\n  - This is the secret key associated with your S3 account.\n\n- `s3.maxErrorRetry`\n  - This is the maxErrorRetry parameter for the S3Client.\n\n- `s3.protocol`\n  - This is the protocol parameter for the S3Client. The default value is HTTPS.\n\n- `s3.sse`\n  - If set to true, this parameter activates Server Side Encryption.\n\n- `s3.ssec`\n  - If not null, this parameter activates SSE-C client-side encryption. The value passed with this parameter is the client key used to encrypt the files.\n"
  },
  {
    "path": "s3/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2015-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n    <modelVersion>4.0.0</modelVersion>\n    <parent>\n        <groupId>com.yahoo.ycsb</groupId>\n        <artifactId>binding-parent</artifactId>\n        <version>0.14.0-SNAPSHOT</version>\n        <relativePath>../binding-parent</relativePath>\n    </parent>\n  \n  <artifactId>s3-binding</artifactId>\n  <name>S3 Storage Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n\t<groupId>com.amazonaws</groupId>\n\t<artifactId>aws-java-sdk-s3</artifactId>\n\t<version>${s3.version}</version>\n    </dependency>\n\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n            <artifactId>core</artifactId>\n            <version>${project.version}</version>\n            <scope>provided</scope>\n    </dependency>\n   </dependencies>\n</project>\n"
  },
  {
    "path": "s3/src/main/conf/s3.properties",
    "content": "#\n# Sample S3 configuration properties\n#\n# You may either set properties here or via the command line.\n#\n\n# the AWS S3 access key ID\ns3.accessKeyId=yourKey\n\n# the AWS S3 secret access key ID\ns3.secretKey=YourSecret\n\n# the AWS endpoint\ns3.endpoint=s3.amazonaws.com\n\n# activating the SSE server side encryption if true\ns3.sse=false\n\n# activating the SSE-C client side encryption if used\n#s3.ssec=U2CccCI40he2mZtg2aCEzofP7nQsfy4nP14VSYu6bFA=\n\n# set the protocol to use for the Client, default is HTTPS\n#s3.protocol=HTTPS\n\n# set the maxConnections to use for the Client, it should be not less than the\n# threads since only one client is created and shared between threads\n#s3.maxConnections=\n\n# set the maxErrorRetry parameter to use for the Client\n#s3.maxErrorRetry=\n\n"
  },
  {
    "path": "s3/src/main/java/com/yahoo/ycsb/db/S3Client.java",
    "content": "/**\n * Copyright (c) 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n *\n * S3 storage client binding for YCSB.\n */\npackage com.yahoo.ycsb.db;\n\nimport java.util.HashMap;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.io.ByteArrayInputStream;\nimport java.io.InputStream;\nimport java.util.*;\nimport java.util.concurrent.atomic.AtomicInteger;\nimport java.net.*;\n\nimport com.amazonaws.util.IOUtils;\nimport com.yahoo.ycsb.ByteArrayByteIterator;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.services.s3.*;\nimport com.amazonaws.auth.*;\nimport com.amazonaws.services.s3.model.ObjectMetadata;\nimport com.amazonaws.services.s3.model.PutObjectResult;\nimport com.amazonaws.services.s3.model.S3Object;\nimport com.amazonaws.services.s3.model.GetObjectRequest;\nimport com.amazonaws.ClientConfiguration;\nimport com.amazonaws.regions.Region;\nimport com.amazonaws.regions.Regions;\nimport com.amazonaws.Protocol;\nimport com.amazonaws.services.s3.model.DeleteObjectRequest;\nimport com.amazonaws.services.s3.model.ObjectListing;\nimport com.amazonaws.services.s3.model.S3ObjectSummary;\nimport com.amazonaws.services.s3.model.SSECustomerKey;\nimport 
com.amazonaws.services.s3.model.PutObjectRequest;\nimport com.amazonaws.services.s3.model.GetObjectMetadataRequest;\n\n/**\n * S3 Storage client for YCSB framework.\n *\n * Properties to set:\n *\n * s3.accessKeyId=access key S3 aws\n * s3.secretKey=secret key S3 aws\n * s3.endPoint=s3.amazonaws.com\n * s3.region=us-east-1\n * The parameter table is the name of the bucket where the files are uploaded.\n * This bucket must be created before starting the benchmark.\n * The size of the file to upload is determined by two parameters:\n * - fieldcount: the number of fields of a record in YCSB\n * - fieldlength: the size in bytes of a single field in the record\n * Together these two parameters define the size of the file to upload;\n * the size in bytes is given by fieldlength multiplied by fieldcount.\n * The name of the file is determined by the parameter key.\n * This key is automatically generated by YCSB.\n *\n */\npublic class S3Client extends DB {\n\n  private static AmazonS3Client s3Client;\n  private static String sse;\n  private static SSECustomerKey ssecKey;\n  private static final AtomicInteger INIT_COUNT = new AtomicInteger(0);\n\n  /**\n  * Clean up any state for this storage.\n  * Called once per S3 instance.\n  */\n  @Override\n  public void cleanup() throws DBException {\n    if (INIT_COUNT.decrementAndGet() == 0) {\n      try {\n        s3Client.shutdown();\n        System.out.println(\"The client is shut down successfully\");\n      } catch (Exception e){\n        System.err.println(\"Could not shut down the S3Client: \"+e.toString());\n        e.printStackTrace();\n      } finally {\n        if (s3Client != null){\n          s3Client = null;\n        }\n      }\n    }\n  }\n  /**\n  * Delete a file from S3 Storage.\n  *\n  * @param bucket\n  *            The name of the bucket\n  * @param key\n  *            The record key of the file to delete.\n  * @return OK on success, otherwise ERROR. 
See the\n  * {@link DB} class's description for a discussion of error codes.\n  */\n  @Override\n  public Status delete(String bucket, String key) {\n    try {\n      s3Client.deleteObject(new DeleteObjectRequest(bucket, key));\n    } catch (Exception e){\n      System.err.println(\"Not possible to delete the key \"+key);\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n  /**\n  * Initialize any state for the storage.\n  * Called once per S3 instance; If the client is not null it is re-used.\n  */\n  @Override\n  public void init() throws DBException {\n    final int count = INIT_COUNT.incrementAndGet();\n    synchronized (S3Client.class){\n      Properties propsCL = getProperties();\n      int recordcount = Integer.parseInt(\n          propsCL.getProperty(\"recordcount\"));\n      int operationcount = Integer.parseInt(\n          propsCL.getProperty(\"operationcount\"));\n      int numberOfOperations = 0;\n      if (recordcount > 0){\n        if (recordcount > operationcount){\n          numberOfOperations = recordcount;\n        } else {\n          numberOfOperations = operationcount;\n        }\n      } else {\n        numberOfOperations = operationcount;\n      }\n      if (count <= numberOfOperations) {\n        String accessKeyId = null;\n        String secretKey = null;\n        String endPoint = null;\n        String region = null;\n        String maxErrorRetry = null;\n        String maxConnections = null;\n        String protocol = null;\n        BasicAWSCredentials s3Credentials;\n        ClientConfiguration clientConfig;\n        if (s3Client != null) {\n          System.out.println(\"Reusing the same client\");\n          return;\n        }\n        try {\n          InputStream propFile = S3Client.class.getClassLoader()\n              .getResourceAsStream(\"s3.properties\");\n          Properties props = new Properties(System.getProperties());\n          props.load(propFile);\n          accessKeyId = 
props.getProperty(\"s3.accessKeyId\");\n          if (accessKeyId == null){\n            accessKeyId = propsCL.getProperty(\"s3.accessKeyId\");\n          }\n          System.out.println(accessKeyId);\n          secretKey = props.getProperty(\"s3.secretKey\");\n          if (secretKey == null){\n            secretKey = propsCL.getProperty(\"s3.secretKey\");\n          }\n          System.out.println(secretKey);\n          endPoint = props.getProperty(\"s3.endPoint\");\n          if (endPoint == null){\n            endPoint = propsCL.getProperty(\"s3.endPoint\", \"s3.amazonaws.com\");\n          }\n          System.out.println(endPoint);\n          region = props.getProperty(\"s3.region\");\n          if (region == null){\n            region = propsCL.getProperty(\"s3.region\", \"us-east-1\");\n          }\n          System.out.println(region);\n          maxErrorRetry = props.getProperty(\"s3.maxErrorRetry\");\n          if (maxErrorRetry == null){\n            maxErrorRetry = propsCL.getProperty(\"s3.maxErrorRetry\", \"15\");\n          }\n          maxConnections = props.getProperty(\"s3.maxConnections\");\n          if (maxConnections == null){\n            maxConnections = propsCL.getProperty(\"s3.maxConnections\");\n          }\n          protocol = props.getProperty(\"s3.protocol\");\n          if (protocol == null){\n            protocol = propsCL.getProperty(\"s3.protocol\", \"HTTPS\");\n          }\n          sse = props.getProperty(\"s3.sse\");\n          if (sse == null){\n            sse = propsCL.getProperty(\"s3.sse\", \"false\");\n          }\n          String ssec = props.getProperty(\"s3.ssec\");\n          if (ssec == null){\n            ssec = propsCL.getProperty(\"s3.ssec\", null);\n          } else {\n            ssecKey = new SSECustomerKey(ssec);\n          }\n        } catch (Exception e){\n          System.err.println(\"The file properties doesn't exist \"+e.toString());\n          e.printStackTrace();\n        }\n        try {\n          
System.out.println(\"Initializing the S3 connection\");\n          s3Credentials = new BasicAWSCredentials(accessKeyId, secretKey);\n          clientConfig = new ClientConfiguration();\n          clientConfig.setMaxErrorRetry(Integer.parseInt(maxErrorRetry));\n          if(protocol.equals(\"HTTP\")) {\n            clientConfig.setProtocol(Protocol.HTTP);\n          } else {\n            clientConfig.setProtocol(Protocol.HTTPS);\n          }\n          if(maxConnections != null) {\n            clientConfig.setMaxConnections(Integer.parseInt(maxConnections));\n          }\n          s3Client = new AmazonS3Client(s3Credentials, clientConfig);\n          s3Client.setRegion(Region.getRegion(Regions.fromName(region)));\n          s3Client.setEndpoint(endPoint);\n          System.out.println(\"Connection successfully initialized\");\n        } catch (Exception e){\n          System.err.println(\"Could not connect to S3 storage: \"+ e.toString());\n          e.printStackTrace();\n          throw new DBException(e);\n        }\n      } else {\n        System.err.println(\n            \"The number of threads must be less than or equal to the number of operations\");\n        throw new DBException(new Error(\n            \"The number of threads must be less than or equal to the number of operations\"));\n      }\n    }\n  }\n  /**\n  * Create a new file in the bucket. Any field/value pairs in the specified\n  * values HashMap will be written into the file with the specified record\n  * key.\n  *\n  * @param bucket\n  *            The name of the bucket\n  * @param key\n  *            The record key of the file to insert.\n  * @param values\n  *            A HashMap of field/value pairs to insert in the file.\n  *            Only the content of the first field is written to a byteArray\n  *            multiplied by the number of fields. 
In this way the size\n  *            of the file to upload is determined by the fieldlength\n  *            and fieldcount parameters.\n  * @return OK on success, ERROR otherwise. See the\n  *         {@link DB} class's description for a discussion of error codes.\n  */\n  @Override\n  public Status insert(String bucket, String key,\n                       Map<String, ByteIterator> values) {\n    return writeToStorage(bucket, key, values, true, sse, ssecKey);\n  }\n  /**\n  * Read a file from the bucket. Each field/value pair from the result\n  * will be stored in a HashMap.\n  *\n  * @param bucket\n  *            The name of the bucket\n  * @param key\n  *            The record key of the file to read.\n  * @param fields\n  *            The list of fields to read, or null for all of them;\n  *            it is null by default\n  * @param result\n  *            A HashMap of field/value pairs for the result\n  * @return OK on success, ERROR otherwise.\n  */\n  @Override\n  public Status read(String bucket, String key, Set<String> fields,\n                     Map<String, ByteIterator> result) {\n    return readFromStorage(bucket, key, result, ssecKey);\n  }\n  /**\n  * Update a file in the database. Any field/value pairs in the specified\n  * values HashMap will be written into the file with the specified file\n  * key, overwriting any existing values with the same field name.\n  *\n  * @param bucket\n  *            The name of the bucket\n  * @param key\n  *            The file key of the file to write.\n  * @param values\n  *            A HashMap of field/value pairs to update in the record\n  * @return OK on success, ERROR otherwise.\n  */\n  @Override\n  public Status update(String bucket, String key,\n                       Map<String, ByteIterator> values) {\n    return writeToStorage(bucket, key, values, false, sse, ssecKey);\n  }\n  /**\n  * Perform a range scan for a set of files in the bucket. 
Each\n  * field/value pair from the result will be stored in a HashMap.\n  *\n  * @param bucket\n  *            The name of the bucket\n  * @param startkey\n  *            The file key of the first file to read.\n  * @param recordcount\n  *            The number of files to read\n  * @param fields\n  *            The list of fields to read, or null for all of them\n  * @param result\n  *            A Vector of HashMaps, where each HashMap is a set field/value\n  *            pairs for one file\n  * @return OK on success, ERROR otherwise.\n  */\n  @Override\n  public Status scan(String bucket, String startkey, int recordcount,\n        Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    return scanFromStorage(bucket, startkey, recordcount, result, ssecKey);\n  }\n  /**\n  * Upload a new object to S3 or update an object on S3.\n  *\n  * @param bucket\n  *            The name of the bucket\n  * @param key\n  *            The file key of the object to upload/update.\n  * @param values\n  *            The data to be written on the object\n  * @param updateMarker\n  *            A boolean value. If true a new object will be uploaded\n  *            to S3. 
If false an existing object will be re-uploaded\n  *\n  */\n  protected Status writeToStorage(String bucket, String key,\n                                  Map<String, ByteIterator> values, Boolean updateMarker,\n                                  String sseLocal, SSECustomerKey ssecLocal) {\n    int totalSize = 0;\n    int fieldCount = values.size(); //number of fields to concatenate\n    // getting the first field in the values\n    Object keyToSearch = values.keySet().toArray()[0];\n    // getting the content of just one field\n    byte[] sourceArray = values.get(keyToSearch).toArray();\n    int sizeArray = sourceArray.length; //size of each array\n    if (updateMarker){\n      totalSize = sizeArray*fieldCount;\n    } else {\n      try {\n        Map.Entry<S3Object, ObjectMetadata> objectAndMetadata = getS3ObjectAndMetadata(bucket, key, ssecLocal);\n        int sizeOfFile = (int)objectAndMetadata.getValue().getContentLength();\n        fieldCount = sizeOfFile/sizeArray;\n        totalSize = sizeOfFile;\n        objectAndMetadata.getKey().close();\n      } catch (Exception e){\n        System.err.println(\"Not possible to get the object :\"+key);\n        e.printStackTrace();\n        return Status.ERROR;\n      }\n    }\n    byte[] destinationArray = new byte[totalSize];\n    int offset = 0;\n    for (int i = 0; i < fieldCount; i++) {\n      System.arraycopy(sourceArray, 0, destinationArray, offset, sizeArray);\n      offset += sizeArray;\n    }\n    try (InputStream input = new ByteArrayInputStream(destinationArray)) {\n      ObjectMetadata metadata = new ObjectMetadata();\n      metadata.setContentLength(totalSize);\n      PutObjectRequest putObjectRequest = null;\n      if (sseLocal.equals(\"true\")) {\n        metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);\n        putObjectRequest = new PutObjectRequest(bucket, key,\n            input, metadata);\n      } else if (ssecLocal != null) {\n        putObjectRequest = new 
PutObjectRequest(bucket, key,\n            input, metadata).withSSECustomerKey(ssecLocal);\n      } else {\n        putObjectRequest = new PutObjectRequest(bucket, key,\n            input, metadata);\n      }\n\n      try {\n        PutObjectResult res =\n            s3Client.putObject(putObjectRequest);\n        if(res.getETag() == null) {\n          return Status.ERROR;\n        } else {\n          if (sseLocal.equals(\"true\")) {\n            System.out.println(\"Uploaded object encryption status is \" +\n                res.getSSEAlgorithm());\n          } else if (ssecLocal != null) {\n            System.out.println(\"Uploaded object encryption status is \" +\n                res.getSSEAlgorithm());\n          }\n        }\n      } catch (Exception e) {\n        System.err.println(\"Not possible to write object :\"+key);\n        e.printStackTrace();\n        return Status.ERROR;\n      }\n    } catch (Exception e) {\n      System.err.println(\"Error in the creation of the stream :\"+e.toString());\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  /**\n  * Download an object from S3.\n  *\n  * @param bucket\n  *            The name of the bucket\n  * @param key\n  *            The file key of the object to upload/update.\n  * @param result\n  *            The Hash map where data from the object are written\n  *\n  */\n  protected Status readFromStorage(String bucket, String key,\n                                   Map<String, ByteIterator> result, SSECustomerKey ssecLocal) {\n    try {\n      Map.Entry<S3Object, ObjectMetadata> objectAndMetadata = getS3ObjectAndMetadata(bucket, key, ssecLocal);\n      InputStream objectData = objectAndMetadata.getKey().getObjectContent(); //consuming the stream\n      // writing the stream to bytes and to results\n      result.put(key, new ByteArrayByteIterator(IOUtils.toByteArray(objectData)));\n      objectData.close();\n      objectAndMetadata.getKey().close();\n    } catch 
(Exception e){\n      System.err.println(\"Not possible to get the object \"+key);\n      e.printStackTrace();\n      return Status.ERROR;\n    }\n\n    return Status.OK;\n  }\n\n  private Map.Entry<S3Object, ObjectMetadata> getS3ObjectAndMetadata(String bucket,\n                                                                     String key, SSECustomerKey ssecLocal) {\n    GetObjectRequest getObjectRequest;\n    GetObjectMetadataRequest getObjectMetadataRequest;\n    if (ssecLocal != null) {\n      getObjectRequest = new GetObjectRequest(bucket,\n              key).withSSECustomerKey(ssecLocal);\n      getObjectMetadataRequest = new GetObjectMetadataRequest(bucket,\n              key).withSSECustomerKey(ssecLocal);\n    } else {\n      getObjectRequest = new GetObjectRequest(bucket, key);\n      getObjectMetadataRequest = new GetObjectMetadataRequest(bucket,\n              key);\n    }\n\n    return new AbstractMap.SimpleEntry<>(s3Client.getObject(getObjectRequest),\n            s3Client.getObjectMetadata(getObjectMetadataRequest));\n  }\n\n  /**\n  * Perform an emulation of a database scan operation on a S3 bucket.\n  *\n  * @param bucket\n  *            The name of the bucket\n  * @param startkey\n  *            The file key of the first file to read.\n  * @param recordcount\n  *            The number of files to read\n  * @param fields\n  *            The list of fields to read, or null for all of them\n  * @param result\n  *            A Vector of HashMaps, where each HashMap is a set field/value\n  *            pairs for one file\n  *\n  */\n  protected Status scanFromStorage(String bucket, String startkey,\n      int recordcount, Vector<HashMap<String, ByteIterator>> result,\n          SSECustomerKey ssecLocal) {\n\n    int counter = 0;\n    ObjectListing listing = s3Client.listObjects(bucket);\n    List<S3ObjectSummary> summaries = listing.getObjectSummaries();\n    List<String> keyList = new ArrayList();\n    int startkeyNumber = 0;\n    int 
numberOfIteration = 0;\n    // Collect the full list of files in the bucket, following pagination\n    while (listing.isTruncated()) {\n      listing = s3Client.listNextBatchOfObjects(listing);\n      summaries.addAll(listing.getObjectSummaries());\n    }\n    for (S3ObjectSummary summary : summaries) {\n      keyList.add(summary.getKey());\n    }\n    // Sort the list of files in alphabetical order\n    Collections.sort(keyList);\n    // Find the position of the starting file for the scan;\n    // if startkey is absent, the scan begins at the first file\n    for (String key : keyList) {\n      if (key.equals(startkey)) {\n        startkeyNumber = counter;\n        break;\n      }\n      counter = counter + 1;\n    }\n    // Read at most recordcount files, stopping at the end of the list\n    numberOfIteration = Math.min(startkeyNumber + recordcount, keyList.size());\n    for (int i = startkeyNumber; i < numberOfIteration; i++) {\n      HashMap<String, ByteIterator> resultTemp =\n          new HashMap<String, ByteIterator>();\n      readFromStorage(bucket, keyList.get(i), resultTemp, ssecLocal);\n      result.add(resultTemp);\n    }\n    return Status.OK;\n  }\n}\n"
  },
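The scan emulation in the S3 binding above reduces to index arithmetic over the sorted key list: locate the start key, then read at most `recordcount` entries without running past the end of the list. A minimal self-contained sketch of that windowing, with no S3 calls (the `ScanWindow`/`scanWindow` names are illustrative, not part of the binding):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/** Illustrative helper only; not part of the YCSB S3 binding. */
public final class ScanWindow {

  /**
   * Returns at most recordcount keys starting at startkey from the
   * alphabetically sorted key list, without running past the end.
   */
  static List<String> scanWindow(List<String> keys, String startkey, int recordcount) {
    List<String> sorted = new ArrayList<>(keys);
    Collections.sort(sorted); // scan order is alphabetical, as in the binding
    int start = sorted.indexOf(startkey);
    if (start < 0) {
      start = 0; // start key absent: scan from the beginning
    }
    int end = Math.min(start + recordcount, sorted.size());
    return sorted.subList(start, end);
  }

  public static void main(String[] args) {
    List<String> keys = Arrays.asList("user3", "user1", "user5", "user2", "user4");
    System.out.println(scanWindow(keys, "user2", 3)); // prints [user2, user3, user4]
  }
}
```

Because S3 object listings are unordered across pages, the full key list has to be materialized and sorted before the window can be cut, which is why the binding walks every `listNextBatchOfObjects` page first.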
  {
    "path": "s3/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/**\n * Copyright (c) 2015 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n *\n * S3 storage client binding for YCSB.\n */\n\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "solr/README.md",
    "content": "<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on Solr running locally. \n\n### 1. Set Up YCSB\n\nClone the YCSB git repository and compile:\n\n    git clone git://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:solr-binding -am clean package\n\n### 2. Set Up Solr\n\nThere must be a running Solr instance with a core/collection pre-defined and configured. 
\n- See this [API](https://cwiki.apache.org/confluence/display/solr/CoreAdmin+API#CoreAdminAPI-CREATE) reference on how to create a core.\n- See this [API](https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api1) reference on how to create a collection in SolrCloud mode.\n\nThe `conf/schema.xml` configuration file present in the core/collection just created must be configured to handle the expected field names during benchmarking.\nBelow illustrates a sample from a schema config file that matches the default field names used by the ycsb client:\n\n\t<field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\" required=\"true\" multiValued=\"false\"/>\n\t<field name=\"field0\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field1\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field2\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field3\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field4\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field5\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field6\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field7\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field8\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field9\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\nIf running in SolrCloud mode ensure there is an external Zookeeper cluster running.\n- See [here](https://cwiki.apache.org/confluence/display/solr/Setting+Up+an+External+ZooKeeper+Ensemble) for details on how to set up an external Zookeeper cluster.\n- See [here](https://cwiki.apache.org/confluence/display/solr/Using+ZooKeeper+to+Manage+Configuration+Files) for instructions on how to use Zookeeper to manage your core/collection configuration files.\n\n### 3. 
Run YCSB\n    \nNow you are ready to run! First, load the data:\n\n    ./bin/ycsb load solr -s -P workloads/workloada -p table=<core/collection name>\n\nThen, run the workload:\n\n    ./bin/ycsb run solr -s -P workloads/workloada -p table=<core/collection name>\n\nFor further configuration see below:\n\n### Default Configuration Parameters\nThe default settings for the Solr node that is created are as follows:\n\n- `solr.cloud`\n  - A Boolean value indicating whether Solr is running in SolrCloud mode. If so, there must also be an external Zookeeper cluster running.\n  - Default value is `false`, which expects Solr to be running in stand-alone mode.\n\n- `solr.base.url`\n  - The base URL through which to interface with a running Solr instance in stand-alone mode\n  - Default value is `http://localhost:8983/solr`\n\n- `solr.commit.within.time`\n  - The max time in ms to wait for a commit when in batch mode, ignored otherwise\n  - Default value is `1000` ms\n\n- `solr.batch.mode`\n  - Indicates whether inserts/updates/deletes should be committed in batches (frequency controlled by the `solr.commit.within.time` parameter) or one document at a time.\n  - Default value is `false`\n\n- `solr.zookeeper.hosts`\n  - A list of comma-separated host:port pairs of Zookeeper nodes used to manage SolrCloud configurations.\n  - Must be passed when in [SolrCloud](https://cwiki.apache.org/confluence/display/solr/SolrCloud) mode.\n  - Default value is `localhost:2181`\n\n### Custom Configuration\nIf you wish to customize the settings used to create the Solr node,\nyou can create a new property file that contains your desired Solr\nnode settings and pass it in via a parameter to the 'bin/ycsb' script. 
Note that \nthe default properties will be kept if you don't explicitly overwrite them.\n\nAssuming we have a properties file named \"myproperties.data\" that contains\ncustom Solr node configuration, you can execute the following to\npass it into the Solr client:\n\n    ./bin/ycsb run solr -P workloads/workloada -P myproperties.data -s\n\nIf you wish to use SolrCloud mode, ensure a Solr cluster is running with an\nexternal Zookeeper cluster and that an appropriate collection has been created.\nMake sure to pass the following properties as parameters to the 'bin/ycsb' script.\n\n\tsolr.cloud=true\n\tsolr.zookeeper.hosts=<zkHost1>:<zkPort1>,...,<zkHostN>:<zkPortN>\n\n\n"
  },
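Combining the SolrCloud properties from the README above with the standard invocation, a run against a cluster could look like the following sketch (the Zookeeper host:port pairs are hypothetical, and the collection name remains a placeholder):

```shell
./bin/ycsb run solr -s -P workloads/workloada \
    -p table=<collection name> \
    -p solr.cloud=true \
    -p solr.zookeeper.hosts=zk1:2181,zk2:2181,zk3:2181
```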
  {
    "path": "solr/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. \nAll rights reserved. \n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you \nmay not use this file except in compliance with the License. You \nmay obtain a copy of the License at \n\t\nhttp://www.apache.org/licenses/LICENSE-2.0 \n\nUnless required by applicable law or agreed to in writing, software \ndistributed under the License is distributed on an \"AS IS\" BASIS, \nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or \nimplied. See the License for the specific language governing \npermissions and limitations under the License. See accompanying \nLICENSE file. \n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>solr-binding</artifactId>\n  <name>Solr Binding</name>\n  <packaging>jar</packaging>\n\n  <properties>\n    <!-- Tests do not run on jdk9 -->\n    <skipJDK9Tests>true</skipJDK9Tests>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.solr</groupId>\n      <artifactId>solr-solrj</artifactId>\n      <version>${solr.version}</version>\n    </dependency>\n    <!-- commons-codec required for Solr Kerberos support -->\n    <dependency>\n      <groupId>commons-codec</groupId>\n      <artifactId>commons-codec</artifactId>\n      <version>1.10</version>\n    </dependency>\n    
<dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-log4j12</artifactId>\n      <version>1.7.21</version>\n    </dependency>\n\n    <dependency>\n      <groupId>org.apache.solr</groupId>\n      <artifactId>solr-test-framework</artifactId>\n      <version>${solr.version}</version>\n      <scope>test</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>jdk.tools</groupId>\n          <artifactId>jdk.tools</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "solr/src/main/java/com/yahoo/ycsb/db/solr/SolrClient.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.solr;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport org.apache.solr.client.solrj.SolrQuery;\nimport org.apache.solr.client.solrj.impl.CloudSolrClient;\nimport org.apache.solr.client.solrj.impl.HttpClientUtil;\nimport org.apache.solr.client.solrj.impl.HttpSolrClient;\nimport org.apache.solr.client.solrj.impl.Krb5HttpClientConfigurer;\nimport org.apache.solr.client.solrj.response.QueryResponse;\nimport org.apache.solr.client.solrj.response.UpdateResponse;\nimport org.apache.solr.client.solrj.SolrServerException;\nimport org.apache.solr.common.SolrDocument;\nimport org.apache.solr.common.SolrDocumentList;\nimport org.apache.solr.common.SolrInputDocument;\n\nimport java.io.IOException;\nimport java.util.*;\nimport java.util.Map.Entry;\n\n/**\n * Solr client for YCSB framework.\n *\n * <p>\n * Default properties to set:\n * </p>\n * <ul>\n * See README.md\n * </ul>\n *\n */\npublic class SolrClient extends DB {\n\n  public static final String DEFAULT_CLOUD_MODE = \"false\";\n  public static final String DEFAULT_BATCH_MODE = \"false\";\n  public static final String DEFAULT_ZOOKEEPER_HOSTS = \"localhost:2181\";\n  public 
static final String DEFAULT_SOLR_BASE_URL = \"http://localhost:8983/solr\";\n  public static final String DEFAULT_COMMIT_WITHIN_TIME = \"1000\";\n\n  private org.apache.solr.client.solrj.SolrClient client;\n  private Integer commitTime;\n  private Boolean batchMode;\n\n  /**\n   * Initialize any state for this DB. Called once per DB instance; there is one DB instance per\n   * client thread.\n   */\n  @Override\n  public void init() throws DBException {\n    Properties props = getProperties();\n    commitTime = Integer\n        .parseInt(props.getProperty(\"solr.commit.within.time\", DEFAULT_COMMIT_WITHIN_TIME));\n    batchMode = Boolean.parseBoolean(props.getProperty(\"solr.batch.mode\", DEFAULT_BATCH_MODE));\n\n\n    String jaasConfPath = props.getProperty(\"solr.jaas.conf.path\");\n    if(jaasConfPath != null) {\n      System.setProperty(\"java.security.auth.login.config\", jaasConfPath);\n      HttpClientUtil.setConfigurer(new Krb5HttpClientConfigurer());\n    }\n\n    // Check if Solr cluster is running in SolrCloud or Stand-alone mode\n    Boolean cloudMode = Boolean.parseBoolean(props.getProperty(\"solr.cloud\", DEFAULT_CLOUD_MODE));\n    System.err.println(\"Solr Cloud Mode = \" + cloudMode);\n    if (cloudMode) {\n      System.err.println(\"Solr Zookeeper Remote Hosts = \"\n          + props.getProperty(\"solr.zookeeper.hosts\", DEFAULT_ZOOKEEPER_HOSTS));\n      client = new CloudSolrClient(\n          props.getProperty(\"solr.zookeeper.hosts\", DEFAULT_ZOOKEEPER_HOSTS));\n    } else {\n      client = new HttpSolrClient(props.getProperty(\"solr.base.url\", DEFAULT_SOLR_BASE_URL));\n    }\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    try {\n      client.close();\n    } catch (IOException e) {\n      throw new DBException(e);\n    }\n  }\n\n  /**\n   * Insert a record in the database. 
Any field/value pairs in the specified values HashMap will be\n   * written into the record with the specified record key.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to insert.\n   * @param values\n   *          A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error. See this class's description for a\n   *         discussion of error codes.\n   */\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      SolrInputDocument doc = new SolrInputDocument();\n\n      doc.addField(\"id\", key);\n      for (Entry<String, String> entry : StringByteIterator.getStringMap(values).entrySet()) {\n        doc.addField(entry.getKey(), entry.getValue());\n      }\n      UpdateResponse response;\n      if (batchMode) {\n        response = client.add(table, doc, commitTime);\n      } else {\n        response = client.add(table, doc);\n        client.commit(table);\n      }\n      return checkStatus(response.getStatus());\n\n    } catch (IOException | SolrServerException e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error. 
See this class's description for a\n   *         discussion of error codes.\n   */\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      UpdateResponse response;\n      if (batchMode) {\n        response = client.deleteById(table, key, commitTime);\n      } else {\n        response = client.deleteById(table, key);\n        client.commit(table);\n      }\n      return checkStatus(response.getStatus());\n    } catch (IOException | SolrServerException e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Read a record from the database. Each field/value pair from the result will be stored in a\n   * HashMap.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to read.\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error or \"not found\".\n   */\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n      Map<String, ByteIterator> result) {\n    try {\n      Boolean returnFields = false;\n      String[] fieldList = null;\n      if (fields != null) {\n        returnFields = true;\n        fieldList = fields.toArray(new String[fields.size()]);\n      }\n      SolrQuery query = new SolrQuery();\n      query.setQuery(\"id:\" + key);\n      if (returnFields) {\n        query.setFields(fieldList);\n      }\n      final QueryResponse response = client.query(table, query);\n      SolrDocumentList results = response.getResults();\n      if ((results != null) && (results.getNumFound() > 0)) {\n        for (String field : results.get(0).getFieldNames()) {\n          result.put(field,\n              new StringByteIterator(String.valueOf(results.get(0).getFirstValue(field))));\n        }\n      }\n      return checkStatus(response.getStatus());\n    } 
catch (IOException | SolrServerException e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Update a record in the database. Any field/value pairs in the specified values HashMap will be\n   * written into the record with the specified record key, overwriting any existing values with the\n   * same field name.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to write.\n   * @param values\n   *          A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error. See this class's description for a\n   *         discussion of error codes.\n   */\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      SolrInputDocument updatedDoc = new SolrInputDocument();\n      updatedDoc.addField(\"id\", key);\n\n      for (Entry<String, String> entry : StringByteIterator.getStringMap(values).entrySet()) {\n        updatedDoc.addField(entry.getKey(), Collections.singletonMap(\"set\", entry.getValue()));\n      }\n\n      UpdateResponse writeResponse;\n      if (batchMode) {\n        writeResponse = client.add(table, updatedDoc, commitTime);\n      } else {\n        writeResponse = client.add(table, updatedDoc);\n        client.commit(table);\n      }\n      return checkStatus(writeResponse.getStatus());\n    } catch (IOException | SolrServerException e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. 
Each field/value pair from the\n   * result will be stored in a HashMap.\n   *\n   * @param table\n   *          The name of the table\n   * @param startkey\n   *          The record key of the first record to read.\n   * @param recordcount\n   *          The number of records to read\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A Vector of HashMaps, where each HashMap is a set field/value pairs for one record\n   * @return Zero on success, a non-zero error code on error. See this class's description for a\n   *         discussion of error codes.\n   */\n  @Override\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\n      Vector<HashMap<String, ByteIterator>> result) {\n    try {\n      Boolean returnFields = false;\n      String[] fieldList = null;\n      if (fields != null) {\n        returnFields = true;\n        fieldList = fields.toArray(new String[fields.size()]);\n      }\n      SolrQuery query = new SolrQuery();\n      query.setQuery(\"*:*\");\n      query.setParam(\"fq\", \"id:[ \" + startkey + \" TO * ]\");\n      if (returnFields) {\n        query.setFields(fieldList);\n      }\n      query.setRows(recordcount);\n      final QueryResponse response = client.query(table, query);\n      SolrDocumentList results = response.getResults();\n\n      HashMap<String, ByteIterator> entry;\n\n      for (SolrDocument hit : results) {\n        entry = new HashMap<String, ByteIterator>((int) results.getNumFound());\n        for (String field : hit.getFieldNames()) {\n          entry.put(field, new StringByteIterator(String.valueOf(hit.getFirstValue(field))));\n        }\n        result.add(entry);\n      }\n      return checkStatus(response.getStatus());\n    } catch (IOException | SolrServerException e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  private Status checkStatus(int status) {\n    Status responseStatus;\n    
switch (status) {\n    case 0:\n      responseStatus = Status.OK;\n      break;\n    case 400:\n      responseStatus = Status.BAD_REQUEST;\n      break;\n    case 403:\n      responseStatus = Status.FORBIDDEN;\n      break;\n    case 404:\n      responseStatus = Status.NOT_FOUND;\n      break;\n    case 500:\n      responseStatus = Status.ERROR;\n      break;\n    case 503:\n      responseStatus = Status.SERVICE_UNAVAILABLE;\n      break;\n    default:\n      responseStatus = Status.UNEXPECTED_STATE;\n      break;\n    }\n    return responseStatus;\n  }\n\n}\n"
  },
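The scan in `SolrClient` above is emulated with a range filter rather than a cursor: match all documents, restrict ids to those at or after the start key, and cap the rows at `recordcount`. A minimal sketch of the parameters that query carries, as plain string assembly with no SolrJ dependency (the `SolrScanQuery`/`scanParams` names are hypothetical):

```java
/** Illustrative only; not part of the YCSB Solr binding. */
public final class SolrScanQuery {

  /** Assembles the same parameters SolrClient.scan sets via SolrJ, as a query string. */
  static String scanParams(String startkey, int recordcount) {
    // q matches all documents; fq restricts to ids at or after startkey;
    // rows caps the result size at recordcount
    return "q=*:*&fq=id:[ " + startkey + " TO * ]&rows=" + recordcount;
  }

  public static void main(String[] args) {
    System.out.println(scanParams("user100", 10));
    // prints q=*:*&fq=id:[ user100 TO * ]&rows=10
  }
}
```

As in the S3 binding's scan, the resulting order is lexicographic over the string `id` field, since the range bracket operates on the stored string values.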
  {
    "path": "solr/src/main/java/com/yahoo/ycsb/db/solr/package-info.java",
    "content": "/*\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for \n * <a href=\"http://lucene.apache.org/solr/\">Solr</a>.\n */\npackage com.yahoo.ycsb.db.solr;\n\n"
  },
  {
    "path": "solr/src/main/resources/log4j.properties",
    "content": "# Root logger option\nlog4j.rootLogger=INFO, stderr\n\n# Direct log messages to stderr\nlog4j.appender.stderr=org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.Target=System.err\nlog4j.appender.stderr.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n\n"
  },
  {
    "path": "solr/src/test/java/com/yahoo/ycsb/db/solr/SolrClientBaseTest.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n * <p/>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p/>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p/>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.solr;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport org.apache.solr.client.solrj.embedded.JettyConfig;\nimport org.apache.solr.cloud.MiniSolrCloudCluster;\nimport org.apache.solr.common.util.NamedList;\nimport org.junit.*;\n\nimport java.io.File;\nimport java.net.URL;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.HashMap;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNotNull;\n\npublic abstract class SolrClientBaseTest {\n\n  protected static MiniSolrCloudCluster miniSolrCloudCluster;\n  private DB instance;\n  private final static HashMap<String, ByteIterator> MOCK_DATA;\n  protected final static String MOCK_TABLE = \"ycsb\";\n  private final static String MOCK_KEY0 = \"0\";\n  private final static String MOCK_KEY1 = \"1\";\n  private final static int NUM_RECORDS = 10;\n\n  static {\n    MOCK_DATA = new HashMap<>(NUM_RECORDS);\n    for (int i = 0; i < NUM_RECORDS; i++) {\n      MOCK_DATA.put(\"field\" + i, new StringByteIterator(\"value\" + i));\n    }\n  }\n\n  @BeforeClass\n  public static void 
onlyOnce() throws Exception {\n    Path miniSolrCloudClusterTempDirectory = Files.createTempDirectory(\"miniSolrCloudCluster\");\n    miniSolrCloudClusterTempDirectory.toFile().deleteOnExit();\n    miniSolrCloudCluster = new MiniSolrCloudCluster(1, miniSolrCloudClusterTempDirectory, JettyConfig.builder().build());\n\n    // Upload Solr configuration\n    URL configDir = SolrClientBaseTest.class.getClassLoader().getResource(\"solr_config\");\n    assertNotNull(configDir);\n    miniSolrCloudCluster.uploadConfigDir(new File(configDir.toURI()), MOCK_TABLE);\n  }\n\n  @AfterClass\n  public static void destroy() throws Exception {\n    if(miniSolrCloudCluster != null) {\n      miniSolrCloudCluster.shutdown();\n    }\n  }\n\n  @Before\n  public void setup() throws Exception {\n    NamedList<Object> namedList = miniSolrCloudCluster.createCollection(MOCK_TABLE, 1, 1, MOCK_TABLE, null);\n    assertEquals(namedList.indexOf(\"success\", 0), 1);\n    Thread.sleep(1000);\n\n    instance = getDB();\n  }\n\n  @After\n  public void tearDown() throws Exception {\n    if(miniSolrCloudCluster != null) {\n      NamedList<Object> namedList = miniSolrCloudCluster.deleteCollection(MOCK_TABLE);\n      assertEquals(namedList.indexOf(\"success\", 0), 1);\n      Thread.sleep(1000);\n    }\n  }\n\n  @Test\n  public void testInsert() throws Exception {\n    Status result = instance.insert(MOCK_TABLE, MOCK_KEY0, MOCK_DATA);\n    assertEquals(Status.OK, result);\n  }\n\n  @Test\n  public void testDelete() throws Exception {\n    Status result = instance.delete(MOCK_TABLE, MOCK_KEY1);\n    assertEquals(Status.OK, result);\n  }\n\n  @Test\n  public void testRead() throws Exception {\n    Set<String> fields = MOCK_DATA.keySet();\n    HashMap<String, ByteIterator> resultParam = new HashMap<>(NUM_RECORDS);\n    Status result = instance.read(MOCK_TABLE, MOCK_KEY1, fields, resultParam);\n    assertEquals(Status.OK, result);\n  }\n\n  @Test\n  public void testUpdate() throws Exception {\n    
HashMap<String, ByteIterator> newValues = new HashMap<>(NUM_RECORDS);\n\n    for (int i = 0; i < NUM_RECORDS; i++) {\n      newValues.put(\"field\" + i, new StringByteIterator(\"newvalue\" + i));\n    }\n\n    Status result = instance.update(MOCK_TABLE, MOCK_KEY1, newValues);\n    assertEquals(Status.OK, result);\n\n    //validate that the values changed\n    HashMap<String, ByteIterator> resultParam = new HashMap<>(NUM_RECORDS);\n    instance.read(MOCK_TABLE, MOCK_KEY1, MOCK_DATA.keySet(), resultParam);\n\n    for (int i = 0; i < NUM_RECORDS; i++) {\n      assertEquals(\"newvalue\" + i, resultParam.get(\"field\" + i).toString());\n    }\n  }\n\n  @Test\n  public void testScan() throws Exception {\n    Set<String> fields = MOCK_DATA.keySet();\n    Vector<HashMap<String, ByteIterator>> resultParam = new Vector<>(NUM_RECORDS);\n    Status result = instance.scan(MOCK_TABLE, MOCK_KEY1, NUM_RECORDS, fields, resultParam);\n    assertEquals(Status.OK, result);\n  }\n\n  /**\n   * Gets the test DB.\n   *\n   * @return The test DB.\n   */\n  protected DB getDB() {\n    return getDB(new Properties());\n  }\n\n  /**\n   * Gets the test DB.\n   *\n   * @param props\n   *    Properties to pass to the client.\n   * @return The test DB.\n   */\n  protected abstract DB getDB(Properties props);\n}\n"
  },
  {
    "path": "solr/src/test/java/com/yahoo/ycsb/db/solr/SolrClientCloudTest.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n * <p/>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p/>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p/>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db.solr;\n\nimport com.yahoo.ycsb.DB;\nimport org.junit.After;\n\nimport java.util.Properties;\n\nimport static org.junit.Assume.assumeNoException;\n\npublic class SolrClientCloudTest extends SolrClientBaseTest {\n\n  private SolrClient instance;\n\n  @After\n  public void tearDown() throws Exception {\n    try {\n      if(instance != null) {\n        instance.cleanup();\n      }\n    } finally {\n      super.tearDown();\n    }\n  }\n\n  @Override\n  protected DB getDB(Properties props) {\n    instance = new SolrClient();\n\n    props.setProperty(\"solr.cloud\", \"true\");\n    props.setProperty(\"solr.zookeeper.hosts\", miniSolrCloudCluster.getSolrClient().getZkHost());\n\n    instance.setProperties(props);\n    try {\n      instance.init();\n    } catch (Exception error) {\n      assumeNoException(error);\n    }\n    return instance;\n  }\n}\n"
  },
  {
    "path": "solr/src/test/java/com/yahoo/ycsb/db/solr/SolrClientTest.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n * <p/>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p/>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p/>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db.solr;\n\nimport com.yahoo.ycsb.DB;\nimport org.apache.solr.client.solrj.embedded.JettySolrRunner;\nimport org.junit.After;\n\nimport java.util.Properties;\n\nimport static org.junit.Assume.assumeNoException;\n\npublic class SolrClientTest extends SolrClientBaseTest {\n\n  private SolrClient instance;\n\n  @After\n  public void tearDown() throws Exception {\n    try {\n      if(instance != null) {\n        instance.cleanup();\n      }\n    } finally {\n      super.tearDown();\n    }\n  }\n\n  @Override\n  protected DB getDB(Properties props) {\n    instance = new SolrClient();\n\n    // Use the first Solr server in the cluster.\n    // Doesn't matter if there are more since requests will be forwarded properly by Solr.\n    JettySolrRunner jettySolrRunner = miniSolrCloudCluster.getJettySolrRunners().get(0);\n    String solrBaseUrl = String.format(\"http://localhost:%s%s\", jettySolrRunner.getLocalPort(),\n      jettySolrRunner.getBaseUrl());\n\n    props.setProperty(\"solr.base.url\", solrBaseUrl);\n    instance.setProperties(props);\n\n    try {\n      instance.init();\n    } catch (Exception error) {\n      assumeNoException(error);\n    }\n    return instance;\n  }\n}\n"
  },
  {
    "path": "solr/src/test/resources/log4j.properties",
    "content": "# Copyright (c) 2015-2016 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n\n# Root logger option\nlog4j.rootLogger=INFO, stderr\n\nlog4j.appender.stderr=org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.target=System.err\nlog4j.appender.stderr.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.conversionPattern=%d{yyyy/MM/dd HH:mm:ss} %-5p %c %x - %m%n\n\n# Suppress messages from ZooKeeper\nlog4j.logger.org.apache.zookeeper=ERROR\n# Solr classes are too chatty in test at INFO\nlog4j.logger.org.apache.solr=ERROR\nlog4j.logger.org.eclipse.jetty=ERROR\n"
  },
  {
    "path": "solr/src/test/resources/solr_config/schema.xml",
    "content": "<?xml version=\"1.0\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n<!--\n Copied from Apache Solr 5.4.0 solr/solrj/src/test-files/solrj/solr/collection1/conf/schema.xml\n Modified to only required types and fields for YCSB testing.\n-->\n<schema name=\"test\" version=\"1.6\">\n  <types>\n    <fieldType name=\"int\" docValues=\"true\" class=\"solr.TrieIntField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"long\" class=\"solr.TrieLongField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n\n    <fieldtype name=\"text\" class=\"solr.TextField\">\n      <analyzer>\n        <tokenizer class=\"solr.StandardTokenizerFactory\"/>\n        <filter class=\"solr.StandardFilterFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.StopFilterFactory\"/>\n        <filter class=\"solr.PorterStemFilterFactory\"/>\n      </analyzer>\n    </fieldtype>\n  </types>\n\n  <fields>\n    <field name=\"id\" type=\"int\" indexed=\"true\" stored=\"true\" multiValued=\"false\" required=\"false\"/>\n    <field name=\"text\" type=\"text\" indexed=\"true\" stored=\"false\"/>\n\n    <field name=\"_version_\" type=\"long\" 
indexed=\"true\" stored=\"true\"/>\n\n    <field name=\"field0\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field1\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field2\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field3\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field4\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field5\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field6\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field7\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field8\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field9\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n  </fields>\n\n  <defaultSearchField>text</defaultSearchField>\n  <uniqueKey>id</uniqueKey>\n</schema>\n"
  },
  {
    "path": "solr/src/test/resources/solr_config/solrconfig.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n\n<!--\n Copied from an Apache Solr 5.4.0 test solrconfig.xml.\n-->\n\n<!--\n This is a stripped down config file used for a simple example...\n It is *not* a good example to work from. 
\n-->\n<config>\n  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>\n  <indexConfig>\n    <useCompoundFile>${useCompoundFile:false}</useCompoundFile>\n  </indexConfig>\n   <dataDir>${solr.data.dir:}</dataDir>\n  <directoryFactory name=\"DirectoryFactory\" class=\"${solr.directoryFactory:solr.StandardDirectoryFactory}\"/>\n\n  <updateHandler class=\"solr.DirectUpdateHandler2\">\n    <updateLog>\n      <str name=\"dir\">${solr.data.dir:}</str>\n    </updateLog>\n  </updateHandler>\n\n  <requestDispatcher handleSelect=\"true\" >\n    <requestParsers enableRemoteStreaming=\"false\" multipartUploadLimitInKB=\"2048\" />\n  </requestDispatcher>\n\n  <requestHandler name=\"standard\" class=\"solr.StandardRequestHandler\" default=\"true\" />\n\n  <requestHandler name=\"/admin/ping\" class=\"solr.PingRequestHandler\">\n    <lst name=\"invariants\">\n      <str name=\"q\">*:*</str>\n    </lst>\n    <lst name=\"defaults\">\n       <str name=\"echoParams\">all</str>\n    </lst>\n    <str name=\"healthcheckFile\">server-enabled.txt</str>\n  </requestHandler>\n\n  <!-- config for the admin interface --> \n  <admin>\n    <defaultQuery>solr</defaultQuery>\n  </admin>\n\n</config>\n\n"
  },
  {
    "path": "solr6/README.md",
    "content": "<!--\nCopyright (c) 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n## Quick Start\n\nThis section describes how to run YCSB on Solr running locally. \n\n### 1. Set Up YCSB\n\nClone the YCSB git repository and compile:\n\n    git clone git://github.com/brianfrankcooper/YCSB.git\n    cd YCSB\n    mvn -pl com.yahoo.ycsb:solr6-binding -am clean package\n\n### 2. Set Up Solr\n\nThere must be a running Solr instance with a core/collection pre-defined and configured. 
\n- See this [API](https://cwiki.apache.org/confluence/display/solr/CoreAdmin+API#CoreAdminAPI-CREATE) reference on how to create a core.\n- See this [API](https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api1) reference on how to create a collection in SolrCloud mode.\n\nThe `conf/schema.xml` configuration file present in the core/collection just created must be configured to handle the expected field names during benchmarking.\nBelow is a sample from a schema config file that matches the default field names used by the YCSB client:\n\n\t<field name=\"id\" type=\"string\" indexed=\"true\" stored=\"true\" required=\"true\" multiValued=\"false\"/>\n\t<field name=\"field0\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field1\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field2\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field3\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field4\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field5\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field6\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field7\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field8\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\t<field name=\"field9\" type=\"text_general\" indexed=\"true\" stored=\"true\"/>\n\nIf running in SolrCloud mode, ensure there is an external Zookeeper cluster running.\n- See [here](https://cwiki.apache.org/confluence/display/solr/Setting+Up+an+External+ZooKeeper+Ensemble) for details on how to set up an external Zookeeper cluster.\n- See [here](https://cwiki.apache.org/confluence/display/solr/Using+ZooKeeper+to+Manage+Configuration+Files) for instructions on how to use Zookeeper to manage your core/collection configuration files.\n\n### 3. 
Run YCSB\n\nNow you are ready to run! First, load the data:\n\n    ./bin/ycsb load solr6 -s -P workloads/workloada -p table=<core/collection name>\n\nThen, run the workload:\n\n    ./bin/ycsb run solr6 -s -P workloads/workloada -p table=<core/collection name>\n\nFor further configuration, see below.\n\n### Default Configuration Parameters\nThe default settings for the Solr client are as follows:\n\n- `solr.cloud`\n  - A Boolean value indicating if Solr is running in SolrCloud mode. If so, there must also be an external Zookeeper cluster running.\n  - Default value is `false` and therefore expects Solr to be running in stand-alone mode.\n\n- `solr.base.url`\n  - The base URL used to interface with a running Solr instance in stand-alone mode.\n  - Default value is `http://localhost:8983/solr`\n\n- `solr.commit.within.time`\n  - The max time in ms to wait for a commit when in batch mode; ignored otherwise.\n  - Default value is `1000`\n\n- `solr.batch.mode`\n  - Indicates if inserts/updates/deletes should be committed in batches (frequency controlled by the `solr.commit.within.time` parameter) or one document at a time.\n  - Default value is `false`\n\n- `solr.zookeeper.hosts`\n  - A list of comma-separated host:port pairs of Zookeeper nodes used to manage SolrCloud configurations.\n  - Must be passed when in [SolrCloud](https://cwiki.apache.org/confluence/display/solr/SolrCloud) mode.\n  - Default value is `localhost:2181`\n\n### Custom Configuration\nIf you wish to customize the settings used to create the Solr client,\nyou can create a new property file that contains your desired Solr\nclient settings and pass it in via the `-P` parameter to the `bin/ycsb` script. 
Note that \nthe default properties will be kept if you don't explicitly overwrite them.\n\nAssuming that we have a properties file named \"myproperties.data\" that contains \ncustom Solr client configuration, you can execute the following to\npass it into the Solr client:\n\n    ./bin/ycsb run solr6 -P workloads/workloada -P myproperties.data -s\n\nIf you wish to use SolrCloud mode, ensure a Solr cluster is running with an\nexternal Zookeeper cluster and that an appropriate collection has been created.\nMake sure to pass the following properties as parameters to the `bin/ycsb` script:\n\n\tsolr.cloud=true\n\tsolr.zookeeper.hosts=<zkHost1>:<zkPort1>,...,<zkHostN>:<zkPortN>\n"
  },
  {
    "path": "solr6/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. \nAll rights reserved. \n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you \nmay not use this file except in compliance with the License. You \nmay obtain a copy of the License at \n\t\nhttp://www.apache.org/licenses/LICENSE-2.0 \n\nUnless required by applicable law or agreed to in writing, software \ndistributed under the License is distributed on an \"AS IS\" BASIS, \nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or \nimplied. See the License for the specific language governing \npermissions and limitations under the License. See accompanying \nLICENSE file. \n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n  </parent>\n\n  <artifactId>solr6-binding</artifactId>\n  <name>Solr 6 Binding</name>\n  <packaging>jar</packaging>\n\n  <properties>\n    <!-- Skip tests by default. 
will be activated by jdk8 profile -->\n    <skipTests>true</skipTests>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.solr</groupId>\n      <artifactId>solr-solrj</artifactId>\n      <version>${solr6.version}</version>\n    </dependency>\n    <!-- commons-codec required for Solr Kerberos support -->\n    <dependency>\n      <groupId>commons-codec</groupId>\n      <artifactId>commons-codec</artifactId>\n      <version>1.10</version>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-log4j12</artifactId>\n      <version>1.7.21</version>\n    </dependency>\n\n    <dependency>\n      <groupId>org.apache.solr</groupId>\n      <artifactId>solr-test-framework</artifactId>\n      <version>${solr6.version}</version>\n      <scope>test</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>jdk.tools</groupId>\n          <artifactId>jdk.tools</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n  </dependencies>\n\n  <profiles>\n    <!-- Solr 6+ requires JDK8 to run, so none of our tests\n         will work unless we're using jdk8.\n      -->\n    <profile>\n      <id>jdk8-tests</id>\n      <activation>\n        <jdk>1.8</jdk>\n      </activation>\n      <properties>\n        <skipTests>false</skipTests>\n      </properties>\n    </profile>\n  </profiles>\n</project>\n"
  },
  {
    "path": "solr6/src/main/java/com/yahoo/ycsb/db/solr6/SolrClient.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.solr6;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\n\nimport org.apache.solr.client.solrj.SolrQuery;\nimport org.apache.solr.client.solrj.impl.CloudSolrClient;\nimport org.apache.solr.client.solrj.impl.HttpClientUtil;\nimport org.apache.solr.client.solrj.impl.HttpSolrClient;\nimport org.apache.solr.client.solrj.impl.Krb5HttpClientConfigurer;\nimport org.apache.solr.client.solrj.response.QueryResponse;\nimport org.apache.solr.client.solrj.response.UpdateResponse;\nimport org.apache.solr.client.solrj.SolrServerException;\nimport org.apache.solr.common.SolrDocument;\nimport org.apache.solr.common.SolrDocumentList;\nimport org.apache.solr.common.SolrInputDocument;\n\nimport java.io.IOException;\nimport java.util.*;\nimport java.util.Map.Entry;\n\n/**\n * Solr client for YCSB framework.\n *\n * <p>\n * Default properties to set:\n * </p>\n * <ul>\n * See README.md\n * </ul>\n *\n */\npublic class SolrClient extends DB {\n\n  public static final String DEFAULT_CLOUD_MODE = \"false\";\n  public static final String DEFAULT_BATCH_MODE = \"false\";\n  public static final String DEFAULT_ZOOKEEPER_HOSTS = \"localhost:2181\";\n  public 
static final String DEFAULT_SOLR_BASE_URL = \"http://localhost:8983/solr\";\n  public static final String DEFAULT_COMMIT_WITHIN_TIME = \"1000\";\n\n  private org.apache.solr.client.solrj.SolrClient client;\n  private Integer commitTime;\n  private Boolean batchMode;\n\n  /**\n   * Initialize any state for this DB. Called once per DB instance; there is one DB instance per\n   * client thread.\n   */\n  @Override\n  public void init() throws DBException {\n    Properties props = getProperties();\n    commitTime = Integer\n        .parseInt(props.getProperty(\"solr.commit.within.time\", DEFAULT_COMMIT_WITHIN_TIME));\n    batchMode = Boolean.parseBoolean(props.getProperty(\"solr.batch.mode\", DEFAULT_BATCH_MODE));\n\n    String jaasConfPath = props.getProperty(\"solr.jaas.conf.path\");\n    if(jaasConfPath != null) {\n      System.setProperty(\"java.security.auth.login.config\", jaasConfPath);\n      HttpClientUtil.setConfigurer(new Krb5HttpClientConfigurer());\n    }\n\n    // Check if Solr cluster is running in SolrCloud or Stand-alone mode\n    Boolean cloudMode = Boolean.parseBoolean(props.getProperty(\"solr.cloud\", DEFAULT_CLOUD_MODE));\n    System.err.println(\"Solr Cloud Mode = \" + cloudMode);\n    if (cloudMode) {\n      System.err.println(\"Solr Zookeeper Remote Hosts = \"\n          + props.getProperty(\"solr.zookeeper.hosts\", DEFAULT_ZOOKEEPER_HOSTS));\n      client = new CloudSolrClient.Builder().withZkHost(\n        Arrays.asList(props.getProperty(\"solr.zookeeper.hosts\", DEFAULT_ZOOKEEPER_HOSTS).split(\",\"))).build();\n    } else {\n      client = new HttpSolrClient.Builder(props.getProperty(\"solr.base.url\", DEFAULT_SOLR_BASE_URL)).build();\n    }\n  }\n\n  @Override\n  public void cleanup() throws DBException {\n    try {\n      client.close();\n    } catch (IOException e) {\n      throw new DBException(e);\n    }\n  }\n\n  /**\n   * Insert a record in the database. 
Any field/value pairs in the specified values HashMap will be\n   * written into the record with the specified record key.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to insert.\n   * @param values\n   *          A HashMap of field/value pairs to insert in the record\n   * @return Zero on success, a non-zero error code on error. See this class's description for a\n   *         discussion of error codes.\n   */\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      SolrInputDocument doc = new SolrInputDocument();\n\n      doc.addField(\"id\", key);\n      for (Entry<String, String> entry : StringByteIterator.getStringMap(values).entrySet()) {\n        doc.addField(entry.getKey(), entry.getValue());\n      }\n      UpdateResponse response;\n      if (batchMode) {\n        response = client.add(table, doc, commitTime);\n      } else {\n        response = client.add(table, doc);\n        client.commit(table);\n      }\n      return checkStatus(response.getStatus());\n\n    } catch (IOException | SolrServerException e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Delete a record from the database.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to delete.\n   * @return Zero on success, a non-zero error code on error. 
See this class's description for a\n   *         discussion of error codes.\n   */\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      UpdateResponse response;\n      if (batchMode) {\n        response = client.deleteById(table, key, commitTime);\n      } else {\n        response = client.deleteById(table, key);\n        client.commit(table);\n      }\n      return checkStatus(response.getStatus());\n    } catch (IOException | SolrServerException e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Read a record from the database. Each field/value pair from the result will be stored in a\n   * HashMap.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to read.\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A HashMap of field/value pairs for the result\n   * @return Zero on success, a non-zero error code on error or \"not found\".\n   */\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n      Map<String, ByteIterator> result) {\n    try {\n      Boolean returnFields = false;\n      String[] fieldList = null;\n      if (fields != null) {\n        returnFields = true;\n        fieldList = fields.toArray(new String[fields.size()]);\n      }\n      SolrQuery query = new SolrQuery();\n      query.setQuery(\"id:\" + key);\n      if (returnFields) {\n        query.setFields(fieldList);\n      }\n      final QueryResponse response = client.query(table, query);\n      SolrDocumentList results = response.getResults();\n      if ((results != null) && (results.getNumFound() > 0)) {\n        for (String field : results.get(0).getFieldNames()) {\n          result.put(field,\n              new StringByteIterator(String.valueOf(results.get(0).getFirstValue(field))));\n        }\n      }\n      return checkStatus(response.getStatus());\n    } 
catch (IOException | SolrServerException e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Update a record in the database. Any field/value pairs in the specified values HashMap will be\n   * written into the record with the specified record key, overwriting any existing values with the\n   * same field name.\n   *\n   * @param table\n   *          The name of the table\n   * @param key\n   *          The record key of the record to write.\n   * @param values\n   *          A HashMap of field/value pairs to update in the record\n   * @return Zero on success, a non-zero error code on error. See this class's description for a\n   *         discussion of error codes.\n   */\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    try {\n      SolrInputDocument updatedDoc = new SolrInputDocument();\n      updatedDoc.addField(\"id\", key);\n\n      for (Entry<String, String> entry : StringByteIterator.getStringMap(values).entrySet()) {\n        updatedDoc.addField(entry.getKey(), Collections.singletonMap(\"set\", entry.getValue()));\n      }\n\n      UpdateResponse writeResponse;\n      if (batchMode) {\n        writeResponse = client.add(table, updatedDoc, commitTime);\n      } else {\n        writeResponse = client.add(table, updatedDoc);\n        client.commit(table);\n      }\n      return checkStatus(writeResponse.getStatus());\n    } catch (IOException | SolrServerException e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  /**\n   * Perform a range scan for a set of records in the database. 
Each field/value pair from the\n   * result will be stored in a HashMap.\n   *\n   * @param table\n   *          The name of the table\n   * @param startkey\n   *          The record key of the first record to read.\n   * @param recordcount\n   *          The number of records to read\n   * @param fields\n   *          The list of fields to read, or null for all of them\n   * @param result\n   *          A Vector of HashMaps, where each HashMap is a set field/value pairs for one record\n   * @return Zero on success, a non-zero error code on error. See this class's description for a\n   *         discussion of error codes.\n   */\n  @Override\n  public Status scan(String table, String startkey, int recordcount, Set<String> fields,\n      Vector<HashMap<String, ByteIterator>> result) {\n    try {\n      Boolean returnFields = false;\n      String[] fieldList = null;\n      if (fields != null) {\n        returnFields = true;\n        fieldList = fields.toArray(new String[fields.size()]);\n      }\n      SolrQuery query = new SolrQuery();\n      query.setQuery(\"*:*\");\n      query.setParam(\"fq\", \"id:[ \" + startkey + \" TO * ]\");\n      if (returnFields) {\n        query.setFields(fieldList);\n      }\n      query.setRows(recordcount);\n      final QueryResponse response = client.query(table, query);\n      SolrDocumentList results = response.getResults();\n\n      HashMap<String, ByteIterator> entry;\n\n      for (SolrDocument hit : results) {\n        entry = new HashMap<>((int) results.getNumFound());\n        for (String field : hit.getFieldNames()) {\n          entry.put(field, new StringByteIterator(String.valueOf(hit.getFirstValue(field))));\n        }\n        result.add(entry);\n      }\n      return checkStatus(response.getStatus());\n    } catch (IOException | SolrServerException e) {\n      e.printStackTrace();\n    }\n    return Status.ERROR;\n  }\n\n  private Status checkStatus(int status) {\n    Status responseStatus;\n    switch (status) {\n    
case 0:\n      responseStatus = Status.OK;\n      break;\n    case 400:\n      responseStatus = Status.BAD_REQUEST;\n      break;\n    case 403:\n      responseStatus = Status.FORBIDDEN;\n      break;\n    case 404:\n      responseStatus = Status.NOT_FOUND;\n      break;\n    case 500:\n      responseStatus = Status.ERROR;\n      break;\n    case 503:\n      responseStatus = Status.SERVICE_UNAVAILABLE;\n      break;\n    default:\n      responseStatus = Status.UNEXPECTED_STATE;\n      break;\n    }\n    return responseStatus;\n  }\n\n}\n"
  },
  {
    "path": "solr6/src/main/java/com/yahoo/ycsb/db/solr6/package-info.java",
    "content": "/*\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * The YCSB binding for \n * <a href=\"http://lucene.apache.org/solr/\">Solr</a>.\n */\npackage com.yahoo.ycsb.db.solr6;\n\n"
  },
  {
    "path": "solr6/src/main/resources/log4j.properties",
    "content": "# Root logger option\nlog4j.rootLogger=INFO, stderr\n\n# Direct log messages to stderr\nlog4j.appender.stderr=org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.Target=System.err\nlog4j.appender.stderr.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n\n"
  },
  {
    "path": "solr6/src/test/java/com/yahoo/ycsb/db/solr6/SolrClientBaseTest.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n * <p/>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p/>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p/>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db.solr6;\n\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.StringByteIterator;\nimport org.apache.solr.client.solrj.embedded.JettyConfig;\nimport org.apache.solr.cloud.MiniSolrCloudCluster;\nimport org.apache.solr.common.util.NamedList;\nimport org.junit.*;\n\nimport java.io.File;\nimport java.net.URL;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.util.HashMap;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNotNull;\n\npublic abstract class SolrClientBaseTest {\n\n  protected static MiniSolrCloudCluster miniSolrCloudCluster;\n  private DB instance;\n  private final static HashMap<String, ByteIterator> MOCK_DATA;\n  protected final static String MOCK_TABLE = \"ycsb\";\n  private final static String MOCK_KEY0 = \"0\";\n  private final static String MOCK_KEY1 = \"1\";\n  private final static int NUM_RECORDS = 10;\n\n  static {\n    MOCK_DATA = new HashMap<>(NUM_RECORDS);\n    for (int i = 0; i < NUM_RECORDS; i++) {\n      MOCK_DATA.put(\"field\" + i, new StringByteIterator(\"value\" + i));\n    }\n  }\n\n  @BeforeClass\n  public static 
void onlyOnce() throws Exception {\n    Path miniSolrCloudClusterTempDirectory = Files.createTempDirectory(\"miniSolrCloudCluster\");\n    miniSolrCloudClusterTempDirectory.toFile().deleteOnExit();\n    miniSolrCloudCluster = new MiniSolrCloudCluster(1, miniSolrCloudClusterTempDirectory, JettyConfig.builder().build());\n\n    // Upload Solr configuration\n    URL configDir = SolrClientBaseTest.class.getClassLoader().getResource(\"solr_config\");\n    assertNotNull(configDir);\n    miniSolrCloudCluster.uploadConfigDir(new File(configDir.toURI()), MOCK_TABLE);\n  }\n\n  @AfterClass\n  public static void destroy() throws Exception {\n    if(miniSolrCloudCluster != null) {\n      miniSolrCloudCluster.shutdown();\n    }\n  }\n\n  @Before\n  public void setup() throws Exception {\n    NamedList<Object> namedList = miniSolrCloudCluster.createCollection(MOCK_TABLE, 1, 1, MOCK_TABLE, null);\n    assertEquals(namedList.indexOf(\"success\", 0), 1);\n    Thread.sleep(1000);\n\n    instance = getDB();\n  }\n\n  @After\n  public void tearDown() throws Exception {\n    if(miniSolrCloudCluster != null) {\n      NamedList<Object> namedList = miniSolrCloudCluster.deleteCollection(MOCK_TABLE);\n      assertEquals(namedList.indexOf(\"success\", 0), 1);\n      Thread.sleep(1000);\n    }\n  }\n\n  @Test\n  public void testInsert() throws Exception {\n    Status result = instance.insert(MOCK_TABLE, MOCK_KEY0, MOCK_DATA);\n    assertEquals(Status.OK, result);\n  }\n\n  @Test\n  public void testDelete() throws Exception {\n    Status result = instance.delete(MOCK_TABLE, MOCK_KEY1);\n    assertEquals(Status.OK, result);\n  }\n\n  @Test\n  public void testRead() throws Exception {\n    Set<String> fields = MOCK_DATA.keySet();\n    HashMap<String, ByteIterator> resultParam = new HashMap<>(NUM_RECORDS);\n    Status result = instance.read(MOCK_TABLE, MOCK_KEY1, fields, resultParam);\n    assertEquals(Status.OK, result);\n  }\n\n  @Test\n  public void testUpdate() throws Exception {\n    
HashMap<String, ByteIterator> newValues = new HashMap<>(NUM_RECORDS);\n\n    for (int i = 0; i < NUM_RECORDS; i++) {\n      newValues.put(\"field\" + i, new StringByteIterator(\"newvalue\" + i));\n    }\n\n    Status result = instance.update(MOCK_TABLE, MOCK_KEY1, newValues);\n    assertEquals(Status.OK, result);\n\n    //validate that the values changed\n    HashMap<String, ByteIterator> resultParam = new HashMap<>(NUM_RECORDS);\n    instance.read(MOCK_TABLE, MOCK_KEY1, MOCK_DATA.keySet(), resultParam);\n\n    for (int i = 0; i < NUM_RECORDS; i++) {\n      assertEquals(\"newvalue\" + i, resultParam.get(\"field\" + i).toString());\n    }\n  }\n\n  @Test\n  public void testScan() throws Exception {\n    Set<String> fields = MOCK_DATA.keySet();\n    Vector<HashMap<String, ByteIterator>> resultParam = new Vector<>(NUM_RECORDS);\n    Status result = instance.scan(MOCK_TABLE, MOCK_KEY1, NUM_RECORDS, fields, resultParam);\n    assertEquals(Status.OK, result);\n  }\n\n  /**\n   * Gets the test DB.\n   *\n   * @return The test DB.\n   */\n  protected DB getDB() {\n    return getDB(new Properties());\n  }\n\n  /**\n   * Gets the test DB.\n   *\n   * @param props\n   *    Properties to pass to the client.\n   * @return The test DB.\n   */\n  protected abstract DB getDB(Properties props);\n}\n"
  },
  {
    "path": "solr6/src/test/java/com/yahoo/ycsb/db/solr6/SolrClientCloudTest.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n * <p/>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p/>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p/>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db.solr6;\n\nimport com.yahoo.ycsb.DB;\nimport org.junit.After;\n\nimport java.util.Properties;\n\nimport static org.junit.Assume.assumeNoException;\n\npublic class SolrClientCloudTest extends SolrClientBaseTest {\n\n  private SolrClient instance;\n\n  @After\n  public void tearDown() throws Exception {\n    try {\n      if(instance != null) {\n        instance.cleanup();\n      }\n    } finally {\n      super.tearDown();\n    }\n  }\n\n  @Override\n  protected DB getDB(Properties props) {\n    instance = new SolrClient();\n\n    props.setProperty(\"solr.cloud\", \"true\");\n    props.setProperty(\"solr.zookeeper.hosts\", miniSolrCloudCluster.getSolrClient().getZkHost());\n\n    instance.setProperties(props);\n    try {\n      instance.init();\n    } catch (Exception error) {\n      assumeNoException(error);\n    }\n    return instance;\n  }\n}\n"
  },
  {
    "path": "solr6/src/test/java/com/yahoo/ycsb/db/solr6/SolrClientTest.java",
    "content": "/**\n * Copyright (c) 2016 YCSB contributors. All rights reserved.\n * <p/>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p/>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p/>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db.solr6;\n\nimport com.yahoo.ycsb.DB;\nimport org.apache.solr.client.solrj.embedded.JettySolrRunner;\nimport org.junit.After;\n\nimport java.util.Properties;\n\nimport static org.junit.Assume.assumeNoException;\n\npublic class SolrClientTest extends SolrClientBaseTest {\n\n  private SolrClient instance;\n\n  @After\n  public void tearDown() throws Exception {\n    try {\n      if(instance != null) {\n        instance.cleanup();\n      }\n    } finally {\n      super.tearDown();\n    }\n  }\n\n  @Override\n  protected DB getDB(Properties props) {\n    instance = new SolrClient();\n\n    // Use the first Solr server in the cluster.\n    // Doesn't matter if there are more since requests will be forwarded properly by Solr.\n    JettySolrRunner jettySolrRunner = miniSolrCloudCluster.getJettySolrRunners().get(0);\n    String solrBaseUrl = String.format(\"http://localhost:%s%s\", jettySolrRunner.getLocalPort(),\n      jettySolrRunner.getBaseUrl());\n\n    props.setProperty(\"solr.base.url\", solrBaseUrl);\n    instance.setProperties(props);\n\n    try {\n      instance.init();\n    } catch (Exception error) {\n      assumeNoException(error);\n    }\n    return instance;\n  }\n}\n"
  },
  {
    "path": "solr6/src/test/resources/log4j.properties",
    "content": "# Copyright (c) 2015-2016 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n#\n\n# Root logger option\nlog4j.rootLogger=INFO, stderr\n\nlog4j.appender.stderr=org.apache.log4j.ConsoleAppender\nlog4j.appender.stderr.target=System.err\nlog4j.appender.stderr.layout=org.apache.log4j.PatternLayout\nlog4j.appender.stderr.layout.conversionPattern=%d{yyyy/MM/dd HH:mm:ss} %-5p %c %x - %m%n\n\n# Suppress messages from ZooKeeper\nlog4j.logger.org.apache.zookeeper=ERROR\n# Solr classes are too chatty in test at INFO\nlog4j.logger.org.apache.solr=ERROR\nlog4j.logger.org.eclipse.jetty=ERROR\n"
  },
  {
    "path": "solr6/src/test/resources/solr_config/schema.xml",
    "content": "<?xml version=\"1.0\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n<!--\n Copied from Apache Solr 5.4.0 solr/solrj/src/test-files/solrj/solr/collection1/conf/schema.xml\n Modified to only required types and fields for YCSB testing.\n-->\n<schema name=\"test\" version=\"1.6\">\n  <types>\n    <fieldType name=\"int\" docValues=\"true\" class=\"solr.TrieIntField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n    <fieldType name=\"long\" class=\"solr.TrieLongField\" precisionStep=\"0\" omitNorms=\"true\" positionIncrementGap=\"0\"/>\n\n    <fieldtype name=\"text\" class=\"solr.TextField\">\n      <analyzer>\n        <tokenizer class=\"solr.StandardTokenizerFactory\"/>\n        <filter class=\"solr.StandardFilterFactory\"/>\n        <filter class=\"solr.LowerCaseFilterFactory\"/>\n        <filter class=\"solr.StopFilterFactory\"/>\n        <filter class=\"solr.PorterStemFilterFactory\"/>\n      </analyzer>\n    </fieldtype>\n  </types>\n\n  <fields>\n    <field name=\"id\" type=\"int\" indexed=\"true\" stored=\"true\" multiValued=\"false\" required=\"false\"/>\n    <field name=\"text\" type=\"text\" indexed=\"true\" stored=\"false\"/>\n\n    <field name=\"_version_\" type=\"long\" 
indexed=\"true\" stored=\"true\"/>\n\n    <field name=\"field0\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field1\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field2\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field3\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field4\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field5\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field6\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field7\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field8\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n    <field name=\"field9\" type=\"text\" indexed=\"true\" stored=\"true\"/>\n  </fields>\n\n  <defaultSearchField>text</defaultSearchField>\n  <uniqueKey>id</uniqueKey>\n</schema>\n"
  },
  {
    "path": "solr6/src/test/resources/solr_config/solrconfig.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n<!--\n Licensed to the Apache Software Foundation (ASF) under one or more\n contributor license agreements.  See the NOTICE file distributed with\n this work for additional information regarding copyright ownership.\n The ASF licenses this file to You under the Apache License, Version 2.0\n (the \"License\"); you may not use this file except in compliance with\n the License.  You may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n-->\n\n<!--\n Copied from Apache Solr 5.4.0 solr/src/test/resources/solr_config/solrconfig.xml\n-->\n\n<!--\n This is a stripped down config file used for a simple example...  \n It is *not* a good example to work from. 
\n-->\n<config>\n  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>\n  <indexConfig>\n    <useCompoundFile>${useCompoundFile:false}</useCompoundFile>\n  </indexConfig>\n   <dataDir>${solr.data.dir:}</dataDir>\n  <directoryFactory name=\"DirectoryFactory\" class=\"${solr.directoryFactory:solr.StandardDirectoryFactory}\"/>\n\n  <updateHandler class=\"solr.DirectUpdateHandler2\">\n    <updateLog>\n      <str name=\"dir\">${solr.data.dir:}</str>\n    </updateLog>\n  </updateHandler>\n\n  <requestDispatcher handleSelect=\"true\" >\n    <requestParsers enableRemoteStreaming=\"false\" multipartUploadLimitInKB=\"2048\" />\n  </requestDispatcher>\n\n  <requestHandler name=\"standard\" class=\"solr.StandardRequestHandler\" default=\"true\" />\n\n  <requestHandler name=\"/admin/ping\" class=\"solr.PingRequestHandler\">\n    <lst name=\"invariants\">\n      <str name=\"q\">*:*</str>\n    </lst>\n    <lst name=\"defaults\">\n       <str name=\"echoParams\">all</str>\n    </lst>\n    <str name=\"healthcheckFile\">server-enabled.txt</str>\n  </requestHandler>\n\n  <!-- config for the admin interface --> \n  <admin>\n    <defaultQuery>solr</defaultQuery>\n  </admin>\n\n</config>\n\n"
  },
  {
    "path": "tarantool/README.md",
    "content": "<!--\nCopyright (c) 2015 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n# Tarantool\n\n## Introduction\n\nTarantool is a NoSQL In-Memory database.\nIt is distributed under the BSD licence and is hosted on [github][tarantool-github].\n\nTarantool features:\n\n* Different index types with iterators:\n\t- HASH (the fastest)\n\t- TREE (range and ordered retrieval)\n\t- BITSET (bit mask search)\n\t- RTREE (geo search)\n* multipart keys for HASH and TREE indexes\n* Data persistence via a Write Ahead Log (WAL) and snapshots.\n* asynchronous master-master replication, hot standby.\n* coroutines and async. IO are used to implement high-performance lock-free access to data.\n  - socket-io/file-io with yields from Lua\n* stored procedures in Lua (using LuaJIT)\n* supports plugins written in C/C++ (two basic plugins are provided for working with MySQL and PostgreSQL)\n* Authentication and access control\n\n## Quick start\n\nThis section describes how to run YCSB against a local Tarantool instance.\n\n### 1. 
Start Tarantool\n\nFirst, clone Tarantool from its own Git repository and build it (described in our [README.md][tarantool-readme]):\n\n    cp %YCSB%/tarantool/conf/tarantool-tree.lua <vardir>/tarantool.lua\n    cp %TNT%/src/box/tarantool <vardir>\n    cd <vardir>\n    ./tarantool tarantool.lua\n\nOr you can simply download and install a binary package for your GNU/Linux or BSD distro from http://tarantool.org/download.html\n\n### 2. Run YCSB\n\nNow you are ready to run! First, load the data:\n\n    ./bin/ycsb load tarantool -s -P workloads/workloada\n\nThen, run the workload:\n\n    ./bin/ycsb run tarantool -s -P workloads/workloada\n\nSee the next section for the list of configuration parameters for Tarantool.\n\n## Tarantool Configuration Parameters\n\n#### 'tarantool.host' (default : 'localhost')\nWhich host YCSB must use to connect to Tarantool\n#### 'tarantool.port' (default : 3301)\nWhich port YCSB must use to connect to Tarantool\n#### 'tarantool.space' (default : 1024)\n    (possible values: 0 .. 255)\nWhich space YCSB must use to benchmark Tarantool\n\n[tarantool-github]: https://github.com/tarantool/tarantool/\n[tarantool-readme]: https://github.com/tarantool/tarantool/blob/master/README.md\n"
  },
  {
    "path": "tarantool/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\nCopyright (c) 2015-2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent/</relativePath>\n  </parent>\n\n  <artifactId>tarantool-binding</artifactId>\n  <name>Tarantool DB Binding</name>\n  <packaging>jar</packaging>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.tarantool</groupId>\n      <artifactId>connector</artifactId>\n      <version>${tarantool.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.yahoo.ycsb</groupId>\n      <artifactId>core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "tarantool/src/main/conf/tarantool-hash.lua",
    "content": "-- Copyright (c) 2015 YCSB contributors. All rights reserved.\n--\n-- Licensed under the Apache License, Version 2.0 (the \"License\"); you\n-- may not use this file except in compliance with the License. You\n-- may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing, software\n-- distributed under the License is distributed on an \"AS IS\" BASIS,\n-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n-- implied. See the License for the specific language governing\n-- permissions and limitations under the License. See accompanying\n-- LICENSE file.\n\nbox.cfg {\n   listen=3303,\n   logger=\"tarantool.log\",\n   log_level=5,\n   logger_nonblock=true,\n   wal_mode=\"none\",\n   pid_file=\"tarantool.pid\"\n}\n\nbox.schema.space.create(\"ycsb\", {id = 1024})\nbox.space.ycsb:create_index('primary', {type = 'hash', parts = {1, 'STR'}})\nbox.schema.user.grant('guest', 'read,write,execute', 'universe')\n"
  },
  {
    "path": "tarantool/src/main/conf/tarantool-tree.lua",
    "content": "-- Copyright (c) 2015 YCSB contributors. All rights reserved.\n--\n-- Licensed under the Apache License, Version 2.0 (the \"License\"); you\n-- may not use this file except in compliance with the License. You\n-- may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing, software\n-- distributed under the License is distributed on an \"AS IS\" BASIS,\n-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n-- implied. See the License for the specific language governing\n-- permissions and limitations under the License. See accompanying\n-- LICENSE file.\n\nbox.cfg {\n   listen=3303,\n   logger=\"tarantool.log\",\n   log_level=5,\n   logger_nonblock=true,\n   wal_mode=\"none\",\n   pid_file=\"tarantool.pid\"\n}\n\nbox.schema.space.create(\"ycsb\", {id = 1024})\nbox.space.ycsb:create_index('primary', {type = 'tree', parts = {1, 'STR'}})\nbox.schema.user.grant('guest', 'read,write,execute', 'universe')\n"
  },
  {
    "path": "tarantool/src/main/java/com/yahoo/ycsb/db/TarantoolClient.java",
    "content": "/**\n * Copyright (c) 2014 - 2016 YCSB Contributors. All rights reserved.\n * <p>\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n * <p>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\npackage com.yahoo.ycsb.db;\n\nimport com.yahoo.ycsb.*;\nimport org.tarantool.TarantoolConnection16;\nimport org.tarantool.TarantoolConnection16Impl;\nimport org.tarantool.TarantoolException;\n\nimport java.util.*;\nimport java.util.logging.Level;\nimport java.util.logging.Logger;\n\n/**\n * YCSB binding for <a href=\"http://tarantool.org/\">Tarantool</a>.\n */\npublic class TarantoolClient extends DB {\n  private static final Logger LOGGER = Logger.getLogger(TarantoolClient.class.getName());\n\n  private static final String HOST_PROPERTY = \"tarantool.host\";\n  private static final String PORT_PROPERTY = \"tarantool.port\";\n  private static final String SPACE_PROPERTY = \"tarantool.space\";\n  private static final String DEFAULT_HOST = \"localhost\";\n  private static final String DEFAULT_PORT = \"3301\";\n  private static final String DEFAULT_SPACE = \"1024\";\n\n  private TarantoolConnection16 connection;\n  private int spaceNo;\n\n  public void init() throws DBException {\n    Properties props = getProperties();\n\n    int port = Integer.parseInt(props.getProperty(PORT_PROPERTY, DEFAULT_PORT));\n    String host = props.getProperty(HOST_PROPERTY, DEFAULT_HOST);\n    spaceNo = Integer.parseInt(props.getProperty(SPACE_PROPERTY, DEFAULT_SPACE));\n\n    try {\n      
this.connection = new TarantoolConnection16Impl(host, port);\n    } catch (Exception exc) {\n      throw new DBException(\"Can't initialize Tarantool connection\", exc);\n    }\n  }\n\n  public void cleanup() throws DBException {\n    this.connection.close();\n  }\n\n  @Override\n  public Status insert(String table, String key, Map<String, ByteIterator> values) {\n    return replace(key, values, \"Can't insert element\");\n  }\n\n  private HashMap<String, ByteIterator> tupleConvertFilter(List<String> input, Set<String> fields) {\n    HashMap<String, ByteIterator> result = new HashMap<>();\n    if (input == null) {\n      return result;\n    }\n    for (int i = 1; i < input.size(); i += 2) {\n      if (fields == null || fields.contains(input.get(i))) {\n        result.put(input.get(i), new StringByteIterator(input.get(i + 1)));\n      }\n    }\n    return result;\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {\n    try {\n      List<String> response = this.connection.select(this.spaceNo, 0, Arrays.asList(key), 0, 1, 0);\n      // Populate the caller-supplied map; reassigning the parameter would discard the data.\n      result.putAll(tupleConvertFilter(response, fields));\n      return Status.OK;\n    } catch (TarantoolException exc) {\n      LOGGER.log(Level.SEVERE, \"Can't select element\", exc);\n      return Status.ERROR;\n    } catch (NullPointerException exc) {\n      return Status.ERROR;\n    }\n  }\n\n  @Override\n  public Status scan(String table, String startkey,\n                     int recordcount, Set<String> fields,\n                     Vector<HashMap<String, ByteIterator>> result) {\n    List<List<String>> response;\n    try {\n      response = this.connection.select(this.spaceNo, 0, Arrays.asList(startkey), 0, recordcount, 6);\n    } catch (TarantoolException exc) {\n      LOGGER.log(Level.SEVERE, \"Can't select range elements\", exc);\n      return Status.ERROR;\n    } catch (NullPointerException exc) {\n      return Status.ERROR;\n    }\n    for (List<String> i 
: response) {\n      HashMap<String, ByteIterator> temp = tupleConvertFilter(i, fields);\n      if (!temp.isEmpty()) {\n        result.add((HashMap<String, ByteIterator>) temp.clone());\n      }\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    try {\n      this.connection.delete(this.spaceNo, Collections.singletonList(key));\n    } catch (TarantoolException exc) {\n      LOGGER.log(Level.SEVERE, \"Can't delete element\", exc);\n      return Status.ERROR;\n    } catch (NullPointerException e) {\n      return Status.ERROR;\n    }\n    return Status.OK;\n  }\n\n  @Override\n  public Status update(String table, String key, Map<String, ByteIterator> values) {\n    return replace(key, values, \"Can't replace element\");\n  }\n\n  private Status replace(String key, Map<String, ByteIterator> values, String exceptionDescription) {\n    int j = 0;\n    String[] tuple = new String[1 + 2 * values.size()];\n    tuple[0] = key;\n    for (Map.Entry<String, ByteIterator> i : values.entrySet()) {\n      tuple[j + 1] = i.getKey();\n      tuple[j + 2] = i.getValue().toString();\n      j += 2;\n    }\n    try {\n      this.connection.replace(this.spaceNo, tuple);\n    } catch (TarantoolException exc) {\n      LOGGER.log(Level.SEVERE, exceptionDescription, exc);\n      return Status.ERROR;\n    }\n    return Status.OK;\n\n  }\n}\n"
  },
  {
    "path": "tarantool/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2014 - 2016 YCSB Contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\n/**\n * YCSB binding for <a href=\"http://tarantool.org/\">Tarantool</a>.\n */\npackage com.yahoo.ycsb.db;\n\n"
  },
  {
    "path": "voldemort/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- \nCopyright (c) 2012 - 2016 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n\t\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>com.yahoo.ycsb</groupId>\n    <artifactId>binding-parent</artifactId>\n    <version>0.14.0-SNAPSHOT</version>\n    <relativePath>../binding-parent</relativePath>\n   </parent>\n  \n   <artifactId>voldemort-binding</artifactId>\n   <name>Voldemort DB Binding</name>\n   <packaging>jar</packaging>\n\t\n   <dependencies>\n     <dependency>\n       <groupId>voldemort</groupId>\n       <artifactId>voldemort</artifactId>\n       <version>${voldemort.version}</version>\n     </dependency>\n     <dependency>\n       <groupId>log4j</groupId>\n       <artifactId>log4j</artifactId>\n       <version>1.2.16</version>\n     </dependency>\n     <dependency>\n       <groupId>com.yahoo.ycsb</groupId>\n       <artifactId>core</artifactId>\n       <version>${project.version}</version>\n       <scope>provided</scope>\n     </dependency>\n   </dependencies>\n</project>\n"
  },
  {
    "path": "voldemort/src/main/conf/cluster.xml",
    "content": "<!-- \nCopyright (c) 2012 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<cluster>\n        <name>mycluster</name>\n        <server>\n                <id>0</id>\n                <host>localhost</host>\n                <http-port>8081</http-port>\n                <socket-port>6666</socket-port>\n                <partitions>0, 1</partitions>\n        </server>\n</cluster>\n\n"
  },
  {
    "path": "voldemort/src/main/conf/server.properties",
    "content": "# Copyright (c) 2012 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# The ID of *this* particular cluster node\nnode.id=0\n\nmax.threads=100\n\n############### DB options ######################\n\nhttp.enable=true\nsocket.enable=true\n\n# BDB\nbdb.write.transactions=false\nbdb.flush.transactions=false\nbdb.cache.size=1G\n\n# Mysql\nmysql.host=localhost\nmysql.port=1521\nmysql.user=root\nmysql.password=3306\nmysql.database=test\n\n#NIO connector settings.\nenable.nio.connector=true\n\nstorage.configs=voldemort.store.bdb.BdbStorageConfiguration, voldemort.store.readonly.ReadOnlyStorageConfiguration\n"
  },
  {
    "path": "voldemort/src/main/conf/stores.xml",
    "content": "<!-- \nCopyright (c) 2012 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<stores>\n  <store>\n    <name>usertable</name>\n    <persistence>bdb</persistence>\n    <routing>client</routing>\n    <replication-factor>1</replication-factor>\n    <required-reads>1</required-reads>\n    <required-writes>1</required-writes>\n    <key-serializer>\n      <type>string</type>\n    </key-serializer>\n    <value-serializer>\n      <type>java-serialization</type>\n    </value-serializer>\n  </store>\n</stores>\n"
  },
  {
    "path": "voldemort/src/main/java/com/yahoo/ycsb/db/VoldemortClient.java",
    "content": "/**\n * Copyright (c) 2012 YCSB contributors. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you\n * may not use this file except in compliance with the License. You\n * may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n * implied. See the License for the specific language governing\n * permissions and limitations under the License. See accompanying\n * LICENSE file.\n */\n\npackage com.yahoo.ycsb.db;\n\nimport java.util.HashMap;\nimport java.util.Set;\nimport java.util.Vector;\nimport java.util.Map.Entry;\n\nimport org.apache.log4j.Logger;\n\nimport voldemort.client.ClientConfig;\nimport voldemort.client.SocketStoreClientFactory;\nimport voldemort.client.StoreClient;\nimport voldemort.versioning.VectorClock;\nimport voldemort.versioning.Versioned;\n\nimport com.yahoo.ycsb.DB;\nimport com.yahoo.ycsb.DBException;\nimport com.yahoo.ycsb.Status;\nimport com.yahoo.ycsb.ByteIterator;\nimport com.yahoo.ycsb.StringByteIterator;\n\n/**\n * YCSB binding for\n * <a href=\"http://www.project-voldemort.com/voldemort/\">Voldemort</a>.\n */\npublic class VoldemortClient extends DB {\n  private static final Logger LOGGER = Logger.getLogger(VoldemortClient.class);\n\n  private StoreClient<String, HashMap<String, String>> storeClient;\n  private SocketStoreClientFactory socketFactory;\n  private String storeName;\n\n  /**\n   * Initialize the DB layer. This accepts all properties allowed by the\n   * Voldemort client. A store maps to a table. 
Required property: bootstrap_urls.\n   * Optional property: store_name, loaded once at init; it should match the\n   * table name used by the workload. See {@link ClientConfig} for all\n   * supported client properties.\n   */\n  public void init() throws DBException {\n    ClientConfig clientConfig = new ClientConfig(getProperties());\n    socketFactory = new SocketStoreClientFactory(clientConfig);\n\n    // Retrieve store name\n    storeName = getProperties().getProperty(\"store_name\", \"usertable\");\n\n    // Use store name to retrieve client\n    storeClient = socketFactory.getStoreClient(storeName);\n    if (storeClient == null) {\n      throw new DBException(\"Unable to instantiate store client\");\n    }\n  }\n\n  public void cleanup() throws DBException {\n    socketFactory.close();\n  }\n\n  @Override\n  public Status delete(String table, String key) {\n    if (checkStore(table) == Status.ERROR) {\n      return Status.ERROR;\n    }\n\n    if (storeClient.delete(key)) {\n      return Status.OK;\n    }\n    return Status.ERROR;\n  }\n\n  @Override\n  public Status insert(String table, String key,\n      Map<String, ByteIterator> values) {\n    if (checkStore(table) == Status.ERROR) {\n      return Status.ERROR;\n    }\n    storeClient.put(key,\n        (HashMap<String, String>) StringByteIterator.getStringMap(values));\n    return Status.OK;\n  }\n\n  @Override\n  public Status read(String table, String key, Set<String> fields,\n      Map<String, ByteIterator> result) {\n    if (checkStore(table) == Status.ERROR) {\n      return Status.ERROR;\n    }\n\n    Versioned<HashMap<String, String>> versionedValue = storeClient.get(key);\n\n    if (versionedValue == null) {\n      return Status.NOT_FOUND;\n    }\n\n    if (fields != null) {\n      for (String field : fields) {\n        String val = versionedValue.getValue().get(field);\n        if (val != null) {\n          result.put(field, new StringByteIterator(val));\n        }\n      }\n    } else {\n      StringByteIterator.putAllAsByteIterators(result,\n          versionedValue.getValue());\n  
  }\n    return Status.OK;\n  }\n\n  @Override\n  public Status scan(String table, String startkey, int recordcount,\n      Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {\n    LOGGER.warn(\"Voldemort does not support Scan semantics\");\n    return Status.OK;\n  }\n\n  @Override\n  public Status update(String table, String key,\n      Map<String, ByteIterator> values) {\n    if (checkStore(table) == Status.ERROR) {\n      return Status.ERROR;\n    }\n\n    Versioned<HashMap<String, String>> versionedValue = storeClient.get(key);\n    HashMap<String, String> value = new HashMap<String, String>();\n    VectorClock version;\n    if (versionedValue != null) {\n      version = ((VectorClock) versionedValue.getVersion()).incremented(0, 1);\n      value = versionedValue.getValue();\n      for (Entry<String, ByteIterator> entry : values.entrySet()) {\n        value.put(entry.getKey(), entry.getValue().toString());\n      }\n    } else {\n      version = new VectorClock();\n      StringByteIterator.putAllAsStrings(value, values);\n    }\n\n    storeClient.put(key, Versioned.value(value, version));\n    return Status.OK;\n  }\n\n  private Status checkStore(String table) {\n    if (table.compareTo(storeName) != 0) {\n      try {\n        storeClient = socketFactory.getStoreClient(table);\n        if (storeClient == null) {\n          LOGGER.error(\"Could not instantiate storeclient for \" + table);\n          return Status.ERROR;\n        }\n        storeName = table;\n      } catch (Exception e) {\n        return Status.ERROR;\n      }\n    }\n    return Status.OK;\n  }\n\n}\n"
  },
  {
    "path": "voldemort/src/main/java/com/yahoo/ycsb/db/package-info.java",
    "content": "/*\n * Copyright (c) 2014, Yahoo!, Inc. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n * use this file except in compliance with the License. You may obtain a copy of\n * the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n * License for the specific language governing permissions and limitations under\n * the License. See accompanying LICENSE file.\n */\n\n/**\n * The YCSB binding for\n * <a href=\"http://www.project-voldemort.com/voldemort/\">Voldemort</a>.\n */\npackage com.yahoo.ycsb.db;\n"
  },
  {
    "path": "voldemort/src/main/resources/config/cluster.xml",
    "content": "<!-- \nCopyright (c) 2012 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<cluster>\n        <name>mycluster</name>\n        <server>\n                <id>0</id>\n                <host>localhost</host>\n                <http-port>8081</http-port>\n                <socket-port>6666</socket-port>\n                <partitions>0, 1</partitions>\n        </server>\n</cluster>\n\n"
  },
  {
    "path": "voldemort/src/main/resources/config/server.properties",
    "content": "# Copyright (c) 2012 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# The ID of *this* particular cluster node\nnode.id=0\n\nmax.threads=100\n\n############### DB options ######################\n\nhttp.enable=true\nsocket.enable=true\n\n# BDB\nbdb.write.transactions=false\nbdb.flush.transactions=false\nbdb.cache.size=1G\n\n# Mysql\nmysql.host=localhost\nmysql.port=1521\nmysql.user=root\nmysql.password=3306\nmysql.database=test\n\n#NIO connector settings.\nenable.nio.connector=true\n\nstorage.configs=voldemort.store.bdb.BdbStorageConfiguration, voldemort.store.readonly.ReadOnlyStorageConfiguration\n"
  },
  {
    "path": "voldemort/src/main/resources/config/stores.xml",
    "content": "<!-- \nCopyright (c) 2012 YCSB contributors. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); you\nmay not use this file except in compliance with the License. You\nmay obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License. See accompanying\nLICENSE file.\n-->\n\n<stores>\n  <store>\n    <name>usertable</name>\n    <persistence>bdb</persistence>\n    <routing>client</routing>\n    <replication-factor>1</replication-factor>\n    <required-reads>1</required-reads>\n    <required-writes>1</required-writes>\n    <key-serializer>\n      <type>string</type>\n    </key-serializer>\n    <value-serializer>\n      <type>java-serialization</type>\n    </value-serializer>\n  </store>\n</stores>\n"
  },
  {
    "path": "workloads/tsworkload_template",
    "content": "# Copyright (c) 2017 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# Yahoo! Cloud System Benchmark\n# Time Series Workload Template: Default Values\n#\n# File contains all properties that can be set to define a\n# YCSB session. All properties are set to their default\n# value if one exists. If not, the property is commented\n# out. When a property has a finite number of settings,\n# the default is enabled and the alternates are shown in\n# comments below it.\n# \n# Use of each property is explained through comments in Client.java, \n# CoreWorkload.java, TimeSeriesWorkload.java or on the YCSB wiki page:\n# https://github.com/brianfrankcooper/YCSB/wiki/TimeSeriesWorkload\n\n# The name of the workload class to use. Always the following.\nworkload=com.yahoo.ycsb.workloads.TimeSeriesWorkload\n\n# The default is Java's Long.MAX_VALUE.\n# The number of records in the table to be inserted in\n# the load phase or the number of records already in the \n# table before the run phase.\nrecordcount=1000000\n\n# There is no default setting for operationcount but it is\n# required to be set.\n# The number of operations to use during the run phase.\noperationcount=3000000\n\n# The number of insertions to do, if different from recordcount.\n# Used with insertstart to grow an existing table.\n#insertcount=\n\n# ..::NOTE::.. 
This is different from the CoreWorkload!\n# The starting timestamp of a run as a Unix Epoch numeral in the \n# unit set in 'timestampunits'. This is used to determine what \n# the first timestamp should be when writing or querying as well\n# as how many timestamp offsets are available (based on 'timestampinterval').\n#insertstart=\n\n# The units represented by the 'insertstart' timestamp as well as\n# durations such as 'timestampinterval', 'querytimespan', etc.\n# For values, see https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/TimeUnit.html\n# Note that only seconds through nanoseconds are supported.\ntimestampunits=SECONDS\n\n# The amount of time between each value in every time series in\n# the units of 'timestampunits'.\ntimestampinterval=60\n\n# ..::NOTE::.. This is different from the CoreWorkload!\n# Represents the number of unique \"metrics\" or \"keys\" for time series.\n# E.g. \"sys.cpu\" may be a single field or \"metric\" while there may be many\n# time series sharing that key (perhaps a host tag with \"web01\" and \"web02\"\n# as options).\nfieldcount=16\n\n# The number of characters in the \"metric\" or \"key\".\nfieldlength=8\n\n# The distribution used to choose the length of a field\nfieldlengthdistribution=constant\n#fieldlengthdistribution=uniform\n#fieldlengthdistribution=zipfian\n\n# The number of unique tag combinations for each time series. E.g.\n# if this value is 4, each record will have a key and 4 tag combinations\n# such as A=A, B=A, C=A, D=A.\ntagcount=4\n\n# The cardinality (number of unique values) of each tag value for \n# every \"metric\" or field as a comma-separated list. Each value must \n# be a number from 1 to Java's Integer.MAX_VALUE and there must be \n# 'tagcount' values. 
If there are more values than\n# 'tagcount', the extras are ignored; if there are fewer, 1 is substituted\n# for each missing value.\ntagcardinality=1,2,4,8\n\n# The length of each tag key in characters.\ntagkeylength=8\n\n# The length of each tag value in characters.\ntagvaluelength=8\n\n# The character separating tag keys from tag values when reads, deletes\n# or scans are executed against a database. The default is the equals sign\n# so a field passed in a read to a DB may look like 'AA=AB'.\ntagpairdelimiter==\n\n# The delimiter between keys and tags when a delete is passed to the DB.\n# E.g. if there was a key and a field, the request key would look like:\n# 'AA:AA=AB'\ndeletedelimiter=:\n\n# Whether or not to randomize the timestamp order when performing inserts\n# and updates against a DB. By default all writes occur with the \n# timestamps moving linearly forward in time once all time series for a\n# given key have been written.\nrandomwritetimestamporder=false\n\n# Whether or not to randomly shuffle the time series order when writing.\n# This will shuffle the keys, tag keys and tag values.\n# ************************************************************************\n# WARNING - When this is enabled, reads and scans will likely return many\n# empty results as invalid tag combinations will be chosen. Likewise \n# this setting is INCOMPATIBLE with data integrity checks.\n# ************************************************************************\nrandomtimeseriesorder=false\n\n# The type of numerical data generated for each data point. The values are\n# 64 bit signed integers, double precision floating points or a random mix.\n# For data integrity, this setting is ignored and values are switched to\n# 64 bit signed ints.\n#valuetype=integers\nvaluetype=floats\n#valuetype=mixed\n\n# A value from 0 to 0.999999 representing how sparse each time series\n# should be. The higher this value, the greater the time interval between\n# values in a single series. 
For example, if sparsity is 0 and there are\n# 10 time series with a 'timestampinterval' of 60 seconds with a total\n# time range of 10 intervals, you would see 100 values written, one per\n# timestamp interval per time series. If the sparsity is 0.50 then there\n# would be only about 50 values written so some time series would have\n# missing values at each interval.\nsparsity=0.00\n\n# The percentage of time series that are \"lagging\" behind the current\n# timestamp of the writer. This is used to mimic a common behavior where\n# most sources (agents, sensors, etc) are writing data in sync (same timestamp)\n# but a subset are running behind due to buffering, latency issues, etc.\ndelayedSeries=0.10\n\n# The maximum amount of delay for delayed series in interval counts. The \n# actual delay is chosen based on a modulo of the series index.\ndelayedIntervals=5\n\n# The fixed or maximum amount of time added to the start time of a \n# read or scan operation to generate a query over a range of time \n# instead of a single timestamp. Units are shared with 'timestampunits'.\n# For example if the value is set to 3600 seconds (1 hour) then \n# each read would pick a random start timestamp based on the \n#'insertstart' value and number of intervals, then add 3600 seconds\n# to create the end time of the query. If this value is 0 then reads\n# will only provide a single timestamp. \n# WARNING: Cannot be used with 'dataintegrity'.\nquerytimespan=0\n\n# Whether or not reads should choose a random time span (aligned to\n# the 'timestampinterval' value) for each read or scan request starting\n# at 0 and reaching 'querytimespan' as the max.\nqueryrandomtimespan=false\n\n# A delimiter character used to separate the start and end timestamps\n# of a read query when 'querytimespan' is enabled.\nquerytimespandelimiter=,\n\n# A unique key given to read, scan and delete operations when the\n# operation should perform a group-by (multi-series aggregation) on one \n# or more tags. 
If 'groupbyfunction' is set, this key will be given with\n# the configured function.\ngroupbykey=YCSBGB\n\n# A function name (e.g. 'sum', 'max' or 'avg') passed during reads, \n# scans and deletes to cause the database to perform a group-by \n# operation on one or more tags. If this value is empty or null \n# (default), group-by operations are not performed\n#groupbyfunction=\n\n# A comma separated list of 0s or 1s to denote which of the tag keys\n# should be grouped during group-by operations. The number of values\n# must match the number of tags in 'tagcount'.\n#groupbykeys=0,0,1,1\n\n# A unique key given to read and scan operations when the operation\n# should downsample the results of a query into lower resolution\n# data. If 'downsamplingfunction' is set, this key will be given with\n# the configured function.\ndownsamplingkey=YCSBDS\n\n# A function name (e.g. 'sum', 'max' or 'avg') passed during reads and\n# scans to cause the database to perform a downsampling operation\n# returning lower resolution data. If this value is empty or null \n# (default), downsampling is not performed.\n#downsamplingfunction=\n\n# A time interval for which to downsample the raw data into. Shares\n# the same units as 'timestampinterval'. This value must be greater\n# than 'timestampinterval'. E.g. if the timestamp interval for raw\n# data is 60 seconds, the downsampling interval could be 3600 seconds\n# to roll up the data into 1 hour buckets.\n#downsamplinginterval=\n\n# What proportion of operations are reads\nreadproportion=0.10\n\n# What proportion of operations are updates\nupdateproportion=0.00\n\n# What proportion of operations are inserts\ninsertproportion=0.90\n\n# The distribution of requests across the keyspace\nrequestdistribution=zipfian\n#requestdistribution=uniform\n#requestdistribution=latest\n\n# The name of the database table to run queries against\ntable=usertable\n\n# Whether or not data should be validated during writes and reads. 
If\n# set then the data type is always a 64 bit signed integer and is the\n# hash code of the key, timestamp and tags. \ndataintegrity=false\n\n# How the latency measurements are presented\nmeasurementtype=histogram\n#measurementtype=timeseries\n#measurementtype=raw\n# When measurementtype is set to raw, measurements will be output\n# as RAW datapoints in the following csv format:\n# \"operation, timestamp of the measurement, latency in us\"\n#\n# Raw datapoints are collected in-memory while the test is running. Each\n# data point consumes about 50 bytes (including java object overhead).\n# For a typical run of 1 million to 10 million operations, this should\n# fit into memory most of the time. If you plan to do 100s of millions of\n# operations per run, consider provisioning a machine with larger RAM when using\n# the RAW measurement type, or split the run into multiple runs.\n#\n# Optionally, you can specify an output file to save raw datapoints.\n# Otherwise, raw datapoints will be written to stdout.\n# The output file will be appended to if it already exists, otherwise\n# a new output file will be created.\n#measurement.raw.output_file = /tmp/your_output_file_for_this_run\n\n# JVM Reporting.\n#\n# Measure JVM information over time including GC counts, max and min memory\n# used, max and min thread counts, max and min system load and others. This\n# setting must be enabled in conjunction with the \"-s\" flag to run the status\n# thread. Every \"status.interval\", the status thread will capture JVM \n# statistics and record the results. 
At the end of the run, max and mins will\n# be recorded.\n# measurement.trackjvm = false\n\n# The range of latencies to track in the histogram (milliseconds)\nhistogram.buckets=1000\n\n# Granularity for time series (in milliseconds)\ntimeseries.granularity=1000\n\n# Latency reporting.\n#\n# YCSB records latency of failed operations separately from successful ones.\n# Latency of all OK operations will be reported under their operation name,\n# such as [READ], [UPDATE], etc.\n#\n# For failed operations:\n# By default we don't track latency numbers of specific error status.\n# We just report latency of all failed operation under one measurement name\n# such as [READ-FAILED]. But optionally, user can configure to have either:\n# 1. Record and report latency for each and every error status code by\n#    setting reportLatencyForEachError to true, or\n# 2. Record and report latency for a select set of error status codes by\n#    providing a CSV list of Status codes via the \"latencytrackederrors\"\n#    property.\n# reportlatencyforeacherror=false\n# latencytrackederrors=\"<comma separated strings of error codes>\"\n"
  },
  {
    "path": "workloads/tsworkloada",
    "content": "# Copyright (c) 2017 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# Yahoo! Cloud System Benchmark\n# Workload A: Small cardinality consistent data for 2 days\n#   Application example: Typical monitoring of a single compute or small \n#   sensor station where 90% of the load is write and only 10% is read \n#   (it's usually much less). All writes are inserts. No sparsity so \n#   every series will have a value at every timestamp.\n#\n#   Read/insert ratio: 10/90\n#   Cardinality: 16 per key (field), 64 fields for a total of 1,024 \n#                time series.\nworkload=com.yahoo.ycsb.workloads.TimeSeriesWorkload\n\nrecordcount=1474560\noperationcount=2949120\n\nfieldlength=8\nfieldcount=64\ntagcount=4\ntagcardinality=1,2,4,2\n\nsparsity=0.0\ndelayedSeries=0.0\ndelayedIntervals=0\n\ntimestampunits=SECONDS\ntimestampinterval=60\nquerytimespan=3600\n\nreadproportion=0.10\nupdateproportion=0.00\ninsertproportion=0.90\n"
  },
  {
    "path": "workloads/workload_template",
    "content": "# Copyright (c) 2012-2016 YCSB contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you\n# may not use this file except in compliance with the License. You\n# may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied. See the License for the specific language governing\n# permissions and limitations under the License. See accompanying\n# LICENSE file.\n\n# Yahoo! Cloud System Benchmark\n# Workload Template: Default Values\n#\n# File contains all properties that can be set to define a\n# YCSB session. All properties are set to their default\n# value if one exists. If not, the property is commented\n# out. When a property has a finite number of settings,\n# the default is enabled and the alternates are shown in\n# comments below it.\n# \n# Use of most explained through comments in Client.java or \n# CoreWorkload.java or on the YCSB wiki page:\n# https://github.com/brianfrankcooper/YCSB/wiki/Core-Properties\n\n# The name of the workload class to use\nworkload=com.yahoo.ycsb.workloads.CoreWorkload\n\n# There is no default setting for recordcount but it is\n# required to be set.\n# The number of records in the table to be inserted in\n# the load phase or the number of records already in the \n# table before the run phase.\nrecordcount=1000000\n\n# There is no default setting for operationcount but it is\n# required to be set.\n# The number of operations to use during the run phase.\noperationcount=3000000\n\n# The number of insertions to do, if different from recordcount.\n# Used with insertstart to grow an existing table.\n#insertcount=\n\n# The offset of the first insertion\ninsertstart=0\n\n# The number of fields in a record\nfieldcount=10\n\n# The size 
of each field (in bytes)\nfieldlength=100\n\n# Should read all fields\nreadallfields=true\n\n# Should write all fields on update\nwriteallfields=false\n\n# The distribution used to choose the length of a field\nfieldlengthdistribution=constant\n#fieldlengthdistribution=uniform\n#fieldlengthdistribution=zipfian\n\n# What proportion of operations are reads\nreadproportion=0.95\n\n# What proportion of operations are updates\nupdateproportion=0.05\n\n# What proportion of operations are inserts\ninsertproportion=0\n\n# What proportion of operations read then modify a record\nreadmodifywriteproportion=0\n\n# What proportion of operations are scans\nscanproportion=0\n\n# On a single scan, the maximum number of records to access\nmaxscanlength=1000\n\n# The distribution used to choose the number of records to access on a scan\nscanlengthdistribution=uniform\n#scanlengthdistribution=zipfian\n\n# Should records be inserted in order or pseudo-randomly\ninsertorder=hashed\n#insertorder=ordered\n\n# The distribution of requests across the keyspace\nrequestdistribution=zipfian\n#requestdistribution=uniform\n#requestdistribution=latest\n\n# Percentage of data items that constitute the hot set\nhotspotdatafraction=0.2\n\n# Percentage of operations that access the hot set\nhotspotopnfraction=0.8\n\n# Maximum execution time in seconds\n#maxexecutiontime= \n\n# The name of the database table to run queries against\ntable=usertable\n\n# The column family of fields (required by some databases)\n#columnfamily=\n\n# How the latency measurements are presented\nmeasurementtype=histogram\n#measurementtype=timeseries\n#measurementtype=raw\n# When measurementtype is set to raw, measurements will be output\n# as RAW datapoints in the following csv format:\n# \"operation, timestamp of the measurement, latency in us\"\n#\n# Raw datapoints are collected in-memory while the test is running. 
Each\n# data point consumes about 50 bytes (including java object overhead).\n# For a typical run of 1 million to 10 million operations, this should\n# fit into memory most of the time. If you plan to do 100s of millions of\n# operations per run, consider provisioning a machine with larger RAM when using\n# the RAW measurement type, or split the run into multiple runs.\n#\n# Optionally, you can specify an output file to save raw datapoints.\n# Otherwise, raw datapoints will be written to stdout.\n# The output file will be appended to if it already exists, otherwise\n# a new output file will be created.\n#measurement.raw.output_file = /tmp/your_output_file_for_this_run\n\n# JVM Reporting.\n#\n# Measure JVM information over time including GC counts, max and min memory\n# used, max and min thread counts, max and min system load and others. This\n# setting must be enabled in conjunction with the \"-s\" flag to run the status\n# thread. Every \"status.interval\", the status thread will capture JVM \n# statistics and record the results. At the end of the run, max and mins will\n# be recorded.\n# measurement.trackjvm = false\n\n# The range of latencies to track in the histogram (milliseconds)\nhistogram.buckets=1000\n\n# Granularity for time series (in milliseconds)\ntimeseries.granularity=1000\n\n# Latency reporting.\n#\n# YCSB records latency of failed operations separately from successful ones.\n# Latency of all OK operations will be reported under their operation name,\n# such as [READ], [UPDATE], etc.\n#\n# For failed operations:\n# By default we don't track latency numbers of specific error status.\n# We just report latency of all failed operation under one measurement name\n# such as [READ-FAILED]. But optionally, user can configure to have either:\n# 1. Record and report latency for each and every error status code by\n#    setting reportLatencyForEachError to true, or\n# 2. 
Record and report latency for a select set of error status codes by\n#    providing a CSV list of Status codes via the \"latencytrackederrors\"\n#    property.\n# reportlatencyforeacherror=false\n# latencytrackederrors=\"<comma separated strings of error codes>\"\n\n# Insertion error retry for the core workload.\n#\n# By default, the YCSB core workload does not retry any operations.\n# However, during the load process, if any insertion fails, the entire\n# load process is terminated.\n# If a user desires to have more robust behavior during this phase, they can\n# enable retry for insertion by setting the following property to a positive\n# number.\n# core_workload_insertion_retry_limit = 0\n#\n# the following number controls the interval between retries (in seconds):\n# core_workload_insertion_retry_interval = 3\n\n# Distributed Tracing via Apache HTrace (http://htrace.incubator.apache.org/)\n#\n# Defaults to blank / no tracing\n# Below sends to a local file, sampling at 0.1%\n#\n# htrace.sampler.classes=ProbabilitySampler\n# htrace.sampler.fraction=0.001\n# htrace.span.receiver.classes=org.apache.htrace.core.LocalFileSpanReceiver\n# htrace.local.file.span.receiver.path=/some/path/to/local/file\n#\n# To capture all spans, use the AlwaysSampler\n#\n# htrace.sampler.classes=AlwaysSampler\n#\n# To send spans to an HTraced receiver, use the below and ensure\n# your classpath contains the htrace-htraced jar (i.e. when invoking the ycsb\n# command add -cp /path/to/htrace-htraced.jar)\n#\n# htrace.span.receiver.classes=org.apache.htrace.impl.HTracedSpanReceiver\n# htrace.htraced.receiver.address=example.com:9075\n# htrace.htraced.error.log.period.ms=10000\n"
  },
  {
    "path": "workloads/workloada",
    "content": "# Copyright (c) 2010 Yahoo! Inc. All rights reserved.                                                                                                                             \n#                                                                                                                                                                                 \n# Licensed under the Apache License, Version 2.0 (the \"License\"); you                                                                                                             \n# may not use this file except in compliance with the License. You                                                                                                                \n# may obtain a copy of the License at                                                                                                                                             \n#                                                                                                                                                                                 \n# http://www.apache.org/licenses/LICENSE-2.0                                                                                                                                      \n#                                                                                                                                                                                 \n# Unless required by applicable law or agreed to in writing, software                                                                                                             \n# distributed under the License is distributed on an \"AS IS\" BASIS,                                                                                                               \n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or                                                                                                                 
\n# implied. See the License for the specific language governing                                                                                                                    \n# permissions and limitations under the License. See accompanying                                                                                                                 \n# LICENSE file.                                                                                                                                                                   \n\n\n# Yahoo! Cloud System Benchmark\n# Workload A: Update heavy workload\n#   Application example: Session store recording recent actions\n#                        \n#   Read/update ratio: 50/50\n#   Default data size: 1 KB records (10 fields, 100 bytes each, plus key)\n#   Request distribution: zipfian\n\nrecordcount=1000\noperationcount=1000\nworkload=com.yahoo.ycsb.workloads.CoreWorkload\n\nreadallfields=true\n\nreadproportion=0.5\nupdateproportion=0.5\nscanproportion=0\ninsertproportion=0\n\nrequestdistribution=zipfian\n\n"
  },
  {
    "path": "workloads/workloadb",
    "content": "# Copyright (c) 2010 Yahoo! Inc. All rights reserved.                                                                                                                             \n#                                                                                                                                                                                 \n# Licensed under the Apache License, Version 2.0 (the \"License\"); you                                                                                                             \n# may not use this file except in compliance with the License. You                                                                                                                \n# may obtain a copy of the License at                                                                                                                                             \n#                                                                                                                                                                                 \n# http://www.apache.org/licenses/LICENSE-2.0                                                                                                                                      \n#                                                                                                                                                                                 \n# Unless required by applicable law or agreed to in writing, software                                                                                                             \n# distributed under the License is distributed on an \"AS IS\" BASIS,                                                                                                               \n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or                                                                                                                 
\n# implied. See the License for the specific language governing                                                                                                                    \n# permissions and limitations under the License. See accompanying                                                                                                                 \n# LICENSE file.                                                                                                                                                                   \n\n# Yahoo! Cloud System Benchmark\n# Workload B: Read mostly workload\n#   Application example: photo tagging; add a tag is an update, but most operations are to read tags\n#                        \n#   Read/update ratio: 95/5\n#   Default data size: 1 KB records (10 fields, 100 bytes each, plus key)\n#   Request distribution: zipfian\n\nrecordcount=1000\noperationcount=1000\nworkload=com.yahoo.ycsb.workloads.CoreWorkload\n\nreadallfields=true\n\nreadproportion=0.95\nupdateproportion=0.05\nscanproportion=0\ninsertproportion=0\n\nrequestdistribution=zipfian\n\n"
  },
  {
    "path": "workloads/workloadc",
    "content": "# Copyright (c) 2010 Yahoo! Inc. All rights reserved.                                                                                                                             \n#                                                                                                                                                                                 \n# Licensed under the Apache License, Version 2.0 (the \"License\"); you                                                                                                             \n# may not use this file except in compliance with the License. You                                                                                                                \n# may obtain a copy of the License at                                                                                                                                             \n#                                                                                                                                                                                 \n# http://www.apache.org/licenses/LICENSE-2.0                                                                                                                                      \n#                                                                                                                                                                                 \n# Unless required by applicable law or agreed to in writing, software                                                                                                             \n# distributed under the License is distributed on an \"AS IS\" BASIS,                                                                                                               \n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or                                                                                                                 
\n# implied. See the License for the specific language governing                                                                                                                    \n# permissions and limitations under the License. See accompanying                                                                                                                 \n# LICENSE file.                                                                                                                                                                   \n\n# Yahoo! Cloud System Benchmark\n# Workload C: Read only\n#   Application example: user profile cache, where profiles are constructed elsewhere (e.g., Hadoop)\n#                        \n#   Read/update ratio: 100/0\n#   Default data size: 1 KB records (10 fields, 100 bytes each, plus key)\n#   Request distribution: zipfian\n\nrecordcount=1000\noperationcount=1000\nworkload=com.yahoo.ycsb.workloads.CoreWorkload\n\nreadallfields=true\n\nreadproportion=1\nupdateproportion=0\nscanproportion=0\ninsertproportion=0\n\nrequestdistribution=zipfian\n\n\n\n"
  },
  {
    "path": "workloads/workloadd",
    "content": "# Copyright (c) 2010 Yahoo! Inc. All rights reserved.                                                                                                                             \r\n#                                                                                                                                                                                 \r\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you                                                                                                             \r\n# may not use this file except in compliance with the License. You                                                                                                                \r\n# may obtain a copy of the License at                                                                                                                                             \r\n#                                                                                                                                                                                 \r\n# http://www.apache.org/licenses/LICENSE-2.0                                                                                                                                      \r\n#                                                                                                                                                                                 \r\n# Unless required by applicable law or agreed to in writing, software                                                                                                             \r\n# distributed under the License is distributed on an \"AS IS\" BASIS,                                                                                                               \r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or                                                                                               
                  \r\n# implied. See the License for the specific language governing                                                                                                                    \r\n# permissions and limitations under the License. See accompanying                                                                                                                 \r\n# LICENSE file.                                                                                                                                                                   \r\n\r\n# Yahoo! Cloud System Benchmark\r\n# Workload D: Read latest workload\r\n#   Application example: user status updates; people want to read the latest\r\n#                        \r\n#   Read/update/insert ratio: 95/0/5\r\n#   Default data size: 1 KB records (10 fields, 100 bytes each, plus key)\r\n#   Request distribution: latest\r\n\r\n# The insert order for this is hashed, not ordered. The \"latest\" items may be \r\n# scattered around the keyspace if they are keyed by userid.timestamp. A workload\r\n# which orders items purely by time, and demands the latest, is very different from the \r\n# workload here (which we believe is more typical of how people build systems).\r\n\r\nrecordcount=1000\r\noperationcount=1000\r\nworkload=com.yahoo.ycsb.workloads.CoreWorkload\r\n\r\nreadallfields=true\r\n\r\nreadproportion=0.95\r\nupdateproportion=0\r\nscanproportion=0\r\ninsertproportion=0.05\r\n\r\nrequestdistribution=latest\r\n\r\n"
  },
  {
    "path": "workloads/workloade",
    "content": "# Copyright (c) 2010 Yahoo! Inc. All rights reserved.                                                                                                                             \n#                                                                                                                                                                                 \n# Licensed under the Apache License, Version 2.0 (the \"License\"); you                                                                                                             \n# may not use this file except in compliance with the License. You                                                                                                                \n# may obtain a copy of the License at                                                                                                                                             \n#                                                                                                                                                                                 \n# http://www.apache.org/licenses/LICENSE-2.0                                                                                                                                      \n#                                                                                                                                                                                 \n# Unless required by applicable law or agreed to in writing, software                                                                                                             \n# distributed under the License is distributed on an \"AS IS\" BASIS,                                                                                                               \n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or                                                                                                                 
\n# implied. See the License for the specific language governing                                                                                                                    \n# permissions and limitations under the License. See accompanying                                                                                                                 \n# LICENSE file.                                                                                                                                                                   \n\n# Yahoo! Cloud System Benchmark\n# Workload E: Short ranges\n#   Application example: threaded conversations, where each scan is for the posts in a given thread (assumed to be clustered by thread id)\n#                        \n#   Scan/insert ratio: 95/5\n#   Default data size: 1 KB records (10 fields, 100 bytes each, plus key)\n#   Request distribution: zipfian\n\n# The insert order is hashed, not ordered. Although the scans are ordered, it does not necessarily\n# follow that the data is inserted in order. For example, posts for thread 342 may not be inserted contiguously, but\n# instead interspersed with posts from lots of other threads. The way the YCSB client works is that it will pick a start\n# key, and then request a number of records; this works fine even for hashed insertion.\n\nrecordcount=1000\noperationcount=1000\nworkload=com.yahoo.ycsb.workloads.CoreWorkload\n\nreadallfields=true\n\nreadproportion=0\nupdateproportion=0\nscanproportion=0.95\ninsertproportion=0.05\n\nrequestdistribution=zipfian\n\nmaxscanlength=100\n\nscanlengthdistribution=uniform\n\n\n"
  },
  {
    "path": "workloads/workloadf",
    "content": "# Copyright (c) 2010 Yahoo! Inc. All rights reserved.                                                                                                                             \r\n#                                                                                                                                                                                 \r\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you                                                                                                             \r\n# may not use this file except in compliance with the License. You                                                                                                                \r\n# may obtain a copy of the License at                                                                                                                                             \r\n#                                                                                                                                                                                 \r\n# http://www.apache.org/licenses/LICENSE-2.0                                                                                                                                      \r\n#                                                                                                                                                                                 \r\n# Unless required by applicable law or agreed to in writing, software                                                                                                             \r\n# distributed under the License is distributed on an \"AS IS\" BASIS,                                                                                                               \r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or                                                                                               
                  \r\n# implied. See the License for the specific language governing                                                                                                                    \r\n# permissions and limitations under the License. See accompanying                                                                                                                 \r\n# LICENSE file.                                                                                                                                                                   \r\n\r\n# Yahoo! Cloud System Benchmark\r\n# Workload F: Read-modify-write workload\r\n#   Application example: user database, where user records are read and modified by the user or to record user activity.\r\n#                        \r\n#   Read/read-modify-write ratio: 50/50\r\n#   Default data size: 1 KB records (10 fields, 100 bytes each, plus key)\r\n#   Request distribution: zipfian\r\n\r\nrecordcount=1000\r\noperationcount=1000\r\nworkload=com.yahoo.ycsb.workloads.CoreWorkload\r\n\r\nreadallfields=true\r\n\r\nreadproportion=0.5\r\nupdateproportion=0\r\nscanproportion=0\r\ninsertproportion=0\r\nreadmodifywriteproportion=0.5\r\n\r\nrequestdistribution=zipfian\r\n\r\n"
  }
]