Full Code of facebookarchive/linkbench for AI

Repository: facebookarchive/linkbench
Branch: master
Commit: ac67d54bf291
Files: 89
Total size: 521.8 KB

Directory structure:
gitextract_nozznvpx/

├── .arcconfig
├── .gitignore
├── DataModel.md
├── LICENSE
├── NOTICES
├── README.md
├── bin/
│   ├── genswift
│   ├── linkbench
│   └── swift-generator-cli-0.11.0-standalone.jar
├── config/
│   ├── FBWorkload.properties
│   ├── LinkConfigMysql.properties
│   └── LinkConfigRocksDb.properties
├── pom.xml
└── src/
    ├── main/
    │   └── java/
    │       └── com/
    │           └── facebook/
    │               ├── LinkBench/
    │               │   ├── Config.java
    │               │   ├── ConfigUtil.java
    │               │   ├── GraphStore.java
    │               │   ├── InvertibleShuffler.java
    │               │   ├── Link.java
    │               │   ├── LinkBenchConfigError.java
    │               │   ├── LinkBenchDriver.java
    │               │   ├── LinkBenchDriverMR.java
    │               │   ├── LinkBenchLoad.java
    │               │   ├── LinkBenchOp.java
    │               │   ├── LinkBenchRequest.java
    │               │   ├── LinkBenchTask.java
    │               │   ├── LinkCount.java
    │               │   ├── LinkStore.java
    │               │   ├── LinkStoreHBaseGeneralAtomicityTesting.java
    │               │   ├── LinkStoreMysql.java
    │               │   ├── LinkStoreRocksDb.java
    │               │   ├── MemoryLinkStore.java
    │               │   ├── Node.java
    │               │   ├── NodeLoader.java
    │               │   ├── NodeStore.java
    │               │   ├── Phase.java
    │               │   ├── RealDistribution.java
    │               │   ├── Shuffler.java
    │               │   ├── Timer.java
    │               │   ├── distributions/
    │               │   │   ├── AccessDistributions.java
    │               │   │   ├── ApproxHarmonic.java
    │               │   │   ├── GeometricDistribution.java
    │               │   │   ├── Harmonic.java
    │               │   │   ├── ID2Chooser.java
    │               │   │   ├── LinkDistributions.java
    │               │   │   ├── LogNormalDistribution.java
    │               │   │   ├── PiecewiseLinearDistribution.java
    │               │   │   ├── ProbabilityDistribution.java
    │               │   │   ├── UniformDistribution.java
    │               │   │   └── ZipfDistribution.java
    │               │   ├── generators/
    │               │   │   ├── DataGenerator.java
    │               │   │   ├── MotifDataGenerator.java
    │               │   │   └── UniformDataGenerator.java
    │               │   ├── stats/
    │               │   │   ├── LatencyStats.java
    │               │   │   ├── RunningMean.java
    │               │   │   └── SampledStats.java
    │               │   └── util/
    │               │       └── ClassLoadUtil.java
    │               └── rocks/
    │                   └── swift/
    │                       ├── rocks.thrift
    │                       └── rocks_common.thrift
    └── test/
        └── java/
            └── com/
                └── facebook/
                    └── LinkBench/
                        ├── DistributionTestBase.java
                        ├── DummyLinkStore.java
                        ├── DummyLinkStoreTest.java
                        ├── GeneratedDataDump.java
                        ├── GeomDistTest.java
                        ├── GraphStoreTestBase.java
                        ├── HarmonicTest.java
                        ├── ID2ChooserTest.java
                        ├── InvertibleShufflerTest.java
                        ├── LinkStoreTestBase.java
                        ├── LogNormalTest.java
                        ├── MemoryGraphStoreTest.java
                        ├── MemoryLinkStoreTest.java
                        ├── MemoryNodeStoreTest.java
                        ├── MySqlGraphStoreTest.java
                        ├── MySqlLinkStoreTest.java
                        ├── MySqlNodeStoreTest.java
                        ├── MySqlTestConfig.java
                        ├── NodeStoreTestBase.java
                        ├── PiecewiseDistTest.java
                        ├── TestAccessDistribution.java
                        ├── TestDataGen.java
                        ├── TestRealDistribution.java
                        ├── TestStats.java
                        ├── TimerTest.java
                        ├── UniformDistTest.java
                        ├── ZipfDistTest.java
                        └── testtypes/
                            ├── MySqlTest.java
                            ├── ProviderTest.java
                            ├── RocksDbTest.java
                            └── SlowTest.java

================================================
FILE CONTENTS
================================================

================================================
FILE: .arcconfig
================================================
{
  "project_id" : "linkbench",
  "conduit_uri" : "https://reviews.facebook.net/",
  "copyright_holder" : "",
  "lint.engine" : "ArcanistSingleLintEngine",
  "lint.engine.single.linter" : "ArcanistTextLinter"
}



================================================
FILE: .gitignore
================================================
target
out
.project
.classpath
.settings
*.iws
*.ipr
*.iml
.idea
.DS_Store



================================================
FILE: DataModel.md
================================================
LinkBench Data and Queries
==========================
Facebook has various types of object nodes, such as users, pages, etc., and various
types of associations between those objects. The collection of objects and
associations can be viewed as a social graph.

A goal of Facebook's database infrastructure is to store this graph in
a way that achieves good performance and high efficiency.

LinkBench is an attempt to simulate the workload of Facebook's social
graph database. This is done in two ways:

* Using database schema and operations that are similar to Facebook's
    schema and operations.
* Generating data and queries which are broadly similar to the
    real production data and applications.

This document describes the *node* and *link* concepts
and outlines the concrete data representation and
various SQL statements used for operations on links.

Node
----
A node represents an object with arbitrary associated data.

    class Node {
      public long id;       // Unique identifier for node
      public int type;      // Type of node
      public long version;  // Version, incremented each change
      public int time;      // Last modification time
      public byte data[];   // Arbitrary payload data
    }

Link
----
We use the term <i>link</i> to denote an association of some type between two
objects.  The term id1 denotes the source object of an association, and
id2 denotes the destination object.

Here is a simple conceptual representation of a link object coming from an
application.

The key members of a link are type, id1, id2, time (while this could be the
time of link creation, it could also be any other attribute on which we might
want to sort all id2s associated with an id1 through this type), version,
visibility (either VISIBILITY_DEFAULT, i.e. visible, or VISIBILITY_HIDDEN,
i.e. hidden) and some link data (an uninterpreted stream of bytes).

    class Link {
      public long id1;        // id of source node
      public long link_type;  // type of link
      public long id2;        // id of destination node
      public byte visibility; // is link visible?
      public byte[] data;     // arbitrary data (must be short)
      public int version;     // version of link
      public long time;       // client-defined sort key (often timestamp)
    }

SQL schema
----------
Nodes are stored in a straightforward way.  In our production databases
different types can be stored in different tables, but for LinkBench
we use one big table:

    CREATE TABLE `nodetable` (
      `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
      `type` int(10) unsigned NOT NULL,
      `version` bigint(20) unsigned NOT NULL,
      `time` int(10) unsigned NOT NULL,
      `data` mediumtext NOT NULL,
      PRIMARY KEY(`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Links are stored as adjacency lists.  Similarly to nodes, we use
one big table.

    CREATE TABLE `linktable` (
      `id1` bigint(20) unsigned NOT NULL DEFAULT '0',
      `id2` bigint(20) unsigned NOT NULL DEFAULT '0',
      `link_type` bigint(20) unsigned NOT NULL DEFAULT '0',
      `visibility` tinyint(3) NOT NULL DEFAULT '0',
      `data` varchar(255) NOT NULL DEFAULT '',
      `time` bigint(20) unsigned NOT NULL DEFAULT '0',
      `version` int(11) unsigned NOT NULL DEFAULT '0',
      PRIMARY KEY (`id1`,`id2`,`link_type`),
      KEY `id1_type` (`id1`,`link_type`,`visibility`,`time`,`version`,`data`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

We also have a separate table to track link counts, in order
to allow efficient querying of link counts for long adjacency
lists.

    CREATE TABLE `counttable` (
      `id` bigint(20) unsigned NOT NULL DEFAULT '0',
      `link_type` bigint(20) unsigned NOT NULL DEFAULT '0',
      `count` int(10) unsigned NOT NULL DEFAULT '0',
      `time` bigint(20) unsigned NOT NULL DEFAULT '0',
      `version` bigint(20) unsigned NOT NULL DEFAULT '0',
      PRIMARY KEY (`id`,`link_type`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
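The intended invariant is that for each (id, link_type) pair, counttable
stores the number of *visible* rows in linktable. A minimal in-memory sketch
of that invariant (the types and names here are simplified for illustration,
not LinkBench's actual code):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the counttable invariant: for each (id1, link_type) pair the
// stored count equals the number of visible links in linktable.
// LinkRow and all names here are invented for illustration.
final class CountInvariant {
  static final byte VISIBILITY_DEFAULT = 0;

  static final class LinkRow {
    final long id1;
    final long linkType;
    final byte visibility;
    LinkRow(long id1, long linkType, byte visibility) {
      this.id1 = id1;
      this.linkType = linkType;
      this.visibility = visibility;
    }
  }

  // Recompute the counts that counttable is expected to hold,
  // keyed by "id1:linkType"; hidden links are not counted.
  static Map<String, Long> counts(List<LinkRow> links) {
    Map<String, Long> result = new HashMap<>();
    for (LinkRow l : links) {
      if (l.visibility == VISIBILITY_DEFAULT) {
        result.merge(l.id1 + ":" + l.linkType, 1L, Long::sum);
      }
    }
    return result;
  }
}
```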

SQL for ADD_LINK(Link l) operation
----------------------------------

This takes a link object represented by l as argument.
Here we step through the multiple stages of the add link
transaction:

    START TRANSACTION

First we try to insert into link table.

    INSERT INTO linktable (
      id1, id2, link_type,
      visibility, data, time, version
    ) VALUES (
      l.id1, l.id2, l.link_type,
      VISIBILITY_DEFAULT, l.data, l.time, l.version
    ) ON DUPLICATE KEY UPDATE visibility = VISIBILITY_DEFAULT;

Depending upon the number of affected rows reported by MySQL,
we decide whether to update the count and/or data. Here is pseudocode for that:

    if (affectedrows == 0) {
      // nothing changed: a row was found but was already visible
      update remaining link fields
    } else if (affectedrows == 1) {
      // a new row was inserted
      update count table
    } else {
      // affectedrows is 2
      // link switch from hidden to visible
      update remaining link fields
      update count table
    }
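The branching above can be sketched as a small pure function. This is only an
illustration of the decision logic, assuming MySQL's affected-rows reporting
for INSERT ... ON DUPLICATE KEY UPDATE; the action names are invented for this
sketch and do not come from LinkBench's code:

```java
// Minimal sketch of the affected-rows branching for ADD_LINK.
// Action names are invented for illustration.
final class AddLinkDecision {
  enum Action { UPDATE_LINK_FIELDS, UPDATE_COUNT, UPDATE_FIELDS_AND_COUNT }

  // MySQL reports 0 affected rows when the row existed unchanged (already
  // visible), 1 when a new row was inserted, and 2 when ON DUPLICATE KEY
  // UPDATE modified an existing row (hidden -> visible).
  static Action decide(int affectedRows) {
    switch (affectedRows) {
      case 0: return Action.UPDATE_LINK_FIELDS;
      case 1: return Action.UPDATE_COUNT;
      case 2: return Action.UPDATE_FIELDS_AND_COUNT;
      default:
        throw new IllegalStateException("unexpected affected rows: " + affectedRows);
    }
  }
}
```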

Here is the statement used for updating data (the first query only updates
visibility):

    UPDATE linktable SET
      visibility = VISIBILITY_DEFAULT,
      data = l.data,
      time = l.time,
      version = l.version
    WHERE id1 = l.id1 AND
      id2 = l.id2 AND
      link_type = l.link_type;

Updating counttable:

    INSERT INTO counttable(
      id, link_type, count, time, version
    ) VALUES (
      l.id1, l.link_type, 1, l.time, l.version
    ) ON DUPLICATE KEY UPDATE count = count + 1;

And finally we commit:

    COMMIT //commit transaction

SQL for UPDATE_LINK(Link l) operation
-----------------------------------

This also takes a link object as its argument and performs the same steps as
ADD_LINK(l).

SQL for DELETE_LINK (id1, id2, link_type)
-----------------------------------------
We only require l.id1, l.id2 and l.link_type to delete a link.
We have the option of expunging (actually deleting) the link, or just
hiding it.

    START TRANSACTION

Updating linktable:
First do a SELECT to check whether the link is absent, present but hidden,
or present and visible. If the link is visible, we later need to mark it
as hidden (or delete it) and update counttable.

    SELECT visibility FROM linktable
    WHERE id1 = <id1> AND
      id2 = <id2> AND
      link_type = <link_type>;

    if (row does not exist || link is hidden) {
      // do nothing
    } else if (link is visible) {
      // either delete or mark the link as hidden
      if (expunge) {
        DELETE FROM linktable
        WHERE id1 = <id1> AND
          id2 = <id2> AND
          link_type = <link_type>;
      } else {
        UPDATE linktable SET
          visibility = VISIBILITY_HIDDEN
        WHERE id1 = <id1> AND
          id2 = <id2> AND
          link_type = <link_type>;
      }
    }

Then, if needed, we update the count table:

    if (update_count_needed) {

      INSERT INTO counttable (
        id, link_type, count, time, version
      ) VALUES (
        <id1>, <link_type>, 0, getSystemTime(), 0
      ) ON DUPLICATE KEY UPDATE
        count = IF (count = 0, 0, count - 1),
        time = getSystemTime(),
        version = version + 1;

    }

That finishes the transaction:

    COMMIT
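The delete-side bookkeeping can be sketched in the same style. A hedged
illustration with invented names, mirroring the pseudocode above and the
saturating `IF (count = 0, 0, count - 1)` expression in the count update:

```java
// Sketch of the DELETE_LINK decision logic and count decrement.
// Names are invented for illustration.
final class DeleteLinkSketch {
  enum Action { NOTHING, EXPUNGE_AND_DECREMENT, HIDE_AND_DECREMENT }

  // rowExists and visible come from the initial SELECT visibility query;
  // expunge chooses between actually deleting and merely hiding the link.
  static Action decide(boolean rowExists, boolean visible, boolean expunge) {
    if (!rowExists || !visible) {
      return Action.NOTHING; // absent or already hidden: do nothing
    }
    return expunge ? Action.EXPUNGE_AND_DECREMENT : Action.HIDE_AND_DECREMENT;
  }

  // Saturating decrement, matching IF (count = 0, 0, count - 1):
  // the stored count never goes below zero.
  static long decrementCount(long count) {
    return Math.max(0L, count - 1L);
  }
}
```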

SQL for COUNT_LINKS(id1, link_type) operation
---------------------------------------------

    SELECT `count` FROM counttable
    WHERE id = <id1> AND
      link_type = <link_type>;

SQL for MULTIGET_LINK(id1, link_type, id2s) operation
-----------------------------------------------

    SELECT id1, id2, link_type, visibility, data, time, version
    FROM linktable
    WHERE id1 = <id1> AND
      id2 in (<id2s>) AND
      link_type = <link_type>;

SQL for GET_LINK_RANGE(id1, link_type, minTime, maxTime, offset, limit) operation
----------------------------------------------------------------------

    SELECT id1, id2, link_type, visibility, data, time, version FROM linktable
    WHERE id1 = <id1> AND
      link_type = <link_type> AND
      time >= <minTime> AND
      time <= <maxTime> AND
      visibility = VISIBILITY_DEFAULT
    ORDER BY time DESC LIMIT <offset>, <limit>;
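As an illustration of the intended semantics (not LinkBench's actual
implementation), the same range query can be expressed over an in-memory
list of links, with a simplified Link type invented for this sketch:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// In-memory illustration of GET_LINK_RANGE: keep only visible links whose
// time falls in [minTime, maxTime], order newest first, then apply
// offset and limit, mirroring ORDER BY time DESC LIMIT <offset>, <limit>.
final class RangeQuery {
  static final byte VISIBILITY_DEFAULT = 0;

  // Simplified link row for this sketch.
  static final class Link {
    final long id2;
    final byte visibility;
    final long time;
    Link(long id2, byte visibility, long time) {
      this.id2 = id2;
      this.visibility = visibility;
      this.time = time;
    }
  }

  static List<Link> getLinkRange(List<Link> links, long minTime, long maxTime,
                                 int offset, int limit) {
    List<Link> result = new ArrayList<>();
    for (Link l : links) {
      if (l.visibility == VISIBILITY_DEFAULT
          && l.time >= minTime && l.time <= maxTime) {
        result.add(l);
      }
    }
    // ORDER BY time DESC
    result.sort(Comparator.comparingLong((Link l) -> l.time).reversed());
    // LIMIT <offset>, <limit>
    int from = Math.min(offset, result.size());
    int to = Math.min(offset + limit, result.size());
    return result.subList(from, to);
  }
}
```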



================================================
FILE: LICENSE
================================================

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: NOTICES
================================================
Several third party components are included with the LinkBench source
distribution.

Apache Commons CLI, Apache Commons Math, Apache log4j, Apache Hadoop,
and Apache HBase are licensed under the Apache 2.0 license, which is
available at:

    http://www.apache.org/licenses/LICENSE-2.0


JUnit is licensed under the Common Public License v1.0, which is
available at:

   http://junit.sourceforge.net/cpl-v10.html

MySQL Connector/J is licensed under the General Public License v2.0 with
the MySQL FOSS exception.  Further license information is included in
the Connector/J source distribution.  The license can also be viewed at:
   
   http://www.mysql.com/about/legal/licensing/foss-exception/


================================================
FILE: README.md
================================================
- - -

**_This project is not actively maintained. Proceed at your own risk!_**

- - -  

LinkBench Overview
====================
LinkBench is a database benchmark developed to evaluate database
performance for workloads similar to those of Facebook's production MySQL deployment.
LinkBench is highly configurable and extensible.  It can be reconfigured
to simulate a variety of workloads and plugins can be written for
benchmarking additional database systems.

LinkBench is released under the Apache License, Version 2.0.

Background
----------
One way of modeling social network data is as a *social graph*, where
entities or *nodes* such as people, posts, comments and pages are connected
by *links* which model different relationships between the nodes.  Different
types of links can represent friendship between two users, a user liking another object,
ownership of a post, or any relationship you like.  These nodes and links carry metadata such as
their type, timestamps and version numbers, along with arbitrary *payload data*.

Facebook represents much of its data in this way, with the data stored in MySQL
databases.  The goal of LinkBench is to emulate the social graph database workload
and provide a realistic benchmark for database performance on social workloads.
LinkBench's data model is based on the social graph, and LinkBench has the ability to generate
a large synthetic social graph with key properties similar to the real graph.
The workload of database operations is based on Facebook's production workload, and is also
generated in such a way that key properties of the workload match the production workload.

LinkBench Architecture
----------------------
<pre>
++====================================++
||          LinkBench Driver          ||
++====================================++
||   +---------------------------+    ||
||   | Graph      | Workload     |    ||      Open connections  +=======+
||   | Generator  | Generator    |    ||   /------------------> | Graph |
||   +---------------------------+    ||  /-------------------> | Store |
||   |                           |&lt;======---------------------> | Shard |
||   |   Graph Store Adapter     |&lt;======---------------------> |       |
||   |   (e.g. MySQL adapter)    |&lt;======---------------------> | e.g.  |
||   +---------------------------+    ||  \-------------------> | MySQL |
||                                    ||   \------------------> | Server|
||   ~~~~~~~~~~~~    ~~~~~~~~~~~~     ||                        +=======+
||   ~~~~~~~~~~~~    ~~~~~~~~~~~~     ||
||   ~~~~~~~~~~~~    ~~~~~~~~~~~~     ||
||   ~~~~~~~~~~~~    ~~~~~~~~~~~~     ||
||     Requester Threads              ||
++====================================++
</pre>

The main software component of LinkBench is the driver, which acts as the
client to the database being benchmarked.
LinkBench is designed to support benchmarking of any database system that can support
all of the required graph operations through a *Graph Store Adapter*.

The LinkBench benchmark typically proceeds in two phases.

The first is the *load phase*,
where an initial graph is generated using the
*graph generator* and loaded into the graph store in bulk.
On a large benchmark run, this graph might have a billion nodes, and occupy over a terabyte
on disk.  The generated graph is designed to have similar properties to the Facebook
social graph.  For example, the number of links out from each node follows a power-law
distribution, where most nodes have at most a few links, but a few nodes have many
more links.

The second is the *request phase*, where the actual benchmarking occurs.  In
the request phase, the benchmark driver spawns many request threads, which make
concurrent requests to the database.  The *workload generator* is used by each
request thread to generate a series of database operations that mimics the
Facebook production workload in many aspects.  For example, the mix of
different varieties of read and write operations is the same, and the
access patterns create a similar pattern of hot (frequently accessed)
and cold nodes in the graph.  At the end of the request phase LinkBench
will report a range of statistics such as latency and throughput.

Getting Started
===============
In this README we'll walk you through compiling LinkBench and running
a MySQL benchmark.

Prerequisites:
--------------
These instructions assume you are using a UNIX-like system such as a Linux distribution
or Mac OS X.

**Java**: You will need a Java 7+ runtime environment.  LinkBench by default
      uses the version of Java on your path.  You can override this by setting the
      JAVA\_HOME environment variable to the directory of the desired
      Java runtime version.  You will also need a Java JDK to compile from source.

**Maven**: To build LinkBench, you will need the Apache Maven build tool. If
    you do not have it already, it is available from http://maven.apache.org .

**MySQL Connector**:  To benchmark MySQL with LinkBench, you need MySQL
    Connector/J.  A version of the MySQL connector is bundled with
    LinkBench.  If you wish to use a more recent version, replace the
    mysql jar under lib/.  See http://dev.mysql.com/downloads/connector/j/

**MySQL Server**: To benchmark MySQL you will need a running MySQL
    server with free disk space.

Getting and Building LinkBench
----------------------------
First get the source code

    git clone git@github.com:facebook/linkbench.git

Then enter the directory and build LinkBench

    cd linkbench
    mvn clean package

In order to skip slower tests (some run quite long), type

    mvn clean package -P fast-test

To skip all tests

    mvn clean package -DskipTests

If the build is successful, you should get a message like this at the end of the output:

    BUILD SUCCESS
    Total time: 3 seconds

If the build fails while downloading required files, you may need to configure Maven,
for example to use a proxy.  Example Maven proxy configuration is shown here:
http://maven.apache.org/guides/mini/guide-proxies.html

Now you can run the LinkBench command line tool:

    ./bin/linkbench

Running it without arguments will show a brief help message:

    Did not select benchmark mode
    usage: linkbench [-c <file>] [-csvstats <file>] [-csvstream <file>] [-D
           <property=value>] [-L <file>] [-l] [-r]
     -c <file>                       Linkbench config file
     -csvstats,--csvstats <file>     CSV stats output
     -csvstream,--csvstream <file>   CSV streaming stats output
     -D <property=value>             Override a config setting
     -L <file>                       Log to this file
     -l                              Execute loading stage of benchmark
     -r                              Execute request stage of benchmark

Running a Benchmark with MySQL
==============================
In this section we will document the process of setting
up a new MySQL database and running a benchmark with LinkBench.

MySQL Setup
-----------
We need to create a new database and tables on the MySQL server.
We'll create a new database called `linkdb` and
the needed tables to store graph nodes, links and link counts.
Run the following commands in the MySQL console:

    create database linkdb;
    use linkdb;

    CREATE TABLE `linktable` (
      `id1` bigint(20) unsigned NOT NULL DEFAULT '0',
      `id2` bigint(20) unsigned NOT NULL DEFAULT '0',
      `link_type` bigint(20) unsigned NOT NULL DEFAULT '0',
      `visibility` tinyint(3) NOT NULL DEFAULT '0',
      `data` varchar(255) NOT NULL DEFAULT '',
      `time` bigint(20) unsigned NOT NULL DEFAULT '0',
      `version` int(11) unsigned NOT NULL DEFAULT '0',
      PRIMARY KEY (link_type, `id1`,`id2`),
      KEY `id1_type` (`id1`,`link_type`,`visibility`,`time`,`id2`,`version`,`data`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 PARTITION BY key(id1) PARTITIONS 16;

    CREATE TABLE `counttable` (
      `id` bigint(20) unsigned NOT NULL DEFAULT '0',
      `link_type` bigint(20) unsigned NOT NULL DEFAULT '0',
      `count` int(10) unsigned NOT NULL DEFAULT '0',
      `time` bigint(20) unsigned NOT NULL DEFAULT '0',
      `version` bigint(20) unsigned NOT NULL DEFAULT '0',
      PRIMARY KEY (`id`,`link_type`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    CREATE TABLE `nodetable` (
      `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
      `type` int(10) unsigned NOT NULL,
      `version` bigint(20) unsigned NOT NULL,
      `time` int(10) unsigned NOT NULL,
      `data` mediumtext NOT NULL,
      PRIMARY KEY(`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

You may want to set up a special database user account for benchmarking:

    -- Note: replace 'linkbench'@'localhost' with 'linkbench'@'%' to allow remote connections
    CREATE USER 'linkbench'@'localhost' IDENTIFIED BY 'mypassword';
    -- Grant all privileges on linkdb to this user
    GRANT ALL ON linkdb.* TO 'linkbench'@'localhost';

If you want to obtain representative benchmark results, we highly
recommend that you invest some time configuring and tuning MySQL.
MySQL performance tuning can be complex and a comprehensive guide
is beyond the scope of this readme, but here are a few basic guidelines:
* Read the [Optimization section of the MySQL user manual](http://dev.mysql.com/doc/refman/5.6/en/optimization.html).
* Make sure you have a sensible size setting for the [InnoDB buffer pool size](http://dev.mysql.com/doc/refman/5.6/en/optimizing-innodb-diskio.html),
  so as to reduce disk I/O.
* Table partitioning (as shown above) can eliminate some bottlenecks
  that occur with LinkBench where the linktable is heavily accessed.

Configuration Files
-------------------
LinkBench requires several configuration files that specify the
benchmark setup, the parameters of the graph to be generated, etc.
Before benchmarking you will want to make a copy of the example config file:

    cp config/LinkConfigMysql.properties config/MyConfig.properties

Open MyConfig.properties.  At a minimum you will need to fill in the
settings under *MySQL Connection Information* to match the server, user
and database you set up earlier. E.g.

    # MySQL connection information
    host = localhost
    user = linkbench
    password = your_password
    port = 3306
    dbid = linkdb

You can read through the settings in this file.  There are a lot of settings
that control the benchmark itself and the output of the LinkBench
command line tool.  Notice that MyConfig.properties
references another file in this line:

    workload_file = config/FBWorkload.properties

This workload file defines how the social
graph should be generated and what mix of operations should make
up the benchmark.  The included workload file has been tuned to
match our production workload in query mix.  If you want to change
the scale of the benchmark (the default graph is quite small for
benchmarking purposes), you should look at the maxid1 setting.  This
controls the number of nodes in the initial graph created in the load
phase: increase it to get a larger database.

      # start node id (inclusive)
      startid1 = 1

      # end node id for initial load (exclusive)
      # With default config and MySQL/InnoDB, 1M ids ~= 1GB
      maxid1 = 10000001
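
Using the rule of thumb in the comment above (1M ids is roughly 1GB under
MySQL/InnoDB), you can estimate how large a database a given maxid1 will
produce.  A quick sketch; the constant is approximate and workload-dependent:

```python
# Back-of-the-envelope sizing for the initial load, using the
# "1M ids ~= 1GB" rule of thumb quoted above for MySQL/InnoDB.
def estimated_db_size_gb(startid1, maxid1):
    """Approximate on-disk size in GB of the initially loaded graph."""
    node_count = maxid1 - startid1  # maxid1 is exclusive
    return node_count / 1_000_000   # ~1 GB per million ids

# Default config: startid1 = 1, maxid1 = 10000001 -> about 10 GB
print(estimated_db_size_gb(1, 10000001))
```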

Loading Data
------------
First we need to do an initial load of data using our new config file:

    ./bin/linkbench -c config/MyConfig.properties -l

This will take a while to load, and you should get frequent progress updates.
Once loading is finished you should see a notification like:

    LOAD PHASE COMPLETED.  Loaded 10000000 nodes (Expected 10000000).
      Loaded 47423071 links (4.74 links per node).  Took 620.4 seconds.
      Links/second = 76435
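
The summary figures can be cross-checked against each other.  A quick sanity
check of the sample numbers above (the computed rate differs from the
reported 76435 by a few links/second, presumably because the printed elapsed
time is rounded):

```python
# Cross-check the figures in the sample load summary above.
nodes = 10_000_000
links = 47_423_071
seconds = 620.4

links_per_node = links / nodes      # reported as 4.74
links_per_second = links / seconds  # reported as 76435

print(round(links_per_node, 2))
print(round(links_per_second))
```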

At the end LinkBench reports a range of statistics on load time that are
of limited interest at this stage.


You can significantly speed up the LinkBench load phase by making these
temporary changes in the MySQL command shell before loading:

    alter table linktable drop key `id1_type`;
    set global innodb_flush_log_at_trx_commit = 2;
    set global sync_binlog = 0;

After loading you should revert the changes:

    set global innodb_flush_log_at_trx_commit = 1;
    set global sync_binlog = 1;
    alter table linktable add key `id1_type`
      (`id1`,`link_type`,`visibility`,`time`,`id2`,`version`,`data`);

Request Phase
-------------
Now you can do some benchmarking.
Run the request phase using the below command:

    ./bin/linkbench -c config/MyConfig.properties -r

LinkBench will log progress to the console, along with statistics.
Once all requests have been sent, or the time limit has elapsed, LinkBench
will notify you of completion:

    REQUEST PHASE COMPLETED. 25000000 requests done in 2266 seconds.
      Requests/second = 11029

You can also inspect the latency statistics. For example, the following line tells us the mean latency
for link range scan operations, along with latency ranges for median (p50), 99th percentile (p99) and
so on.

    GET_LINKS_LIST count = 12678653  p25 = [0.7,0.8]ms  p50 = [1,2]ms
                   p75 = [1,2]ms  p95 = [10,11]ms  p99 = [15,16]ms
                   max = 2064.476ms  mean = 2.427ms
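
The bracketed values are bucket ranges rather than exact figures: latencies
are tracked in histogram buckets per operation type, so a percentile is
reported as the bucket it falls into.  A rough illustration of the idea (the
bucket boundaries here are hypothetical, not LinkBench's actual scheme):

```python
# Illustrative only: why percentiles are reported as ranges such as
# p50 = [1,2]ms.  Latencies are bucketed, and the percentile is
# reported as the bounds of the bucket containing it.
import bisect

BUCKETS_MS = [0, 0.5, 1, 2, 5, 10, 20, 50, 100]  # hypothetical boundaries

def percentile_bucket(samples_ms, pct):
    """Return the (low, high) bucket bounds containing a percentile."""
    s = sorted(samples_ms)
    value = s[(len(s) - 1) * pct // 100]
    i = bisect.bisect_right(BUCKETS_MS, value) - 1
    return BUCKETS_MS[i], BUCKETS_MS[i + 1]

samples = [0.7, 0.9, 1.2, 1.4, 1.8, 2.5, 6.0, 11.0, 15.0, 80.0]
print(percentile_bucket(samples, 50))  # (1, 2)
print(percentile_bucket(samples, 99))  # (10, 20)
```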

Advanced LinkBench Command Line Usage
-------------------------------------
Here are some further examples of how to use the LinkBench command line utility.

You can override any properties from the configuration file from the
command line with -D key=value.  For example, this runs the benchmark
with a 10 minute warmup before collecting statistics:

    ./bin/linkbench -c config/MyConfig.properties -D warmup_time=600 -r

This runs the benchmark with more detailed logging, and all output going to the file linkbench.log:

    ./bin/linkbench -c config/MyConfig.properties -D debuglevel=DEBUG -L linkbench.log -r

LinkBench supports output of statistics in csv format for easier analysis.
There are two categories of statistic: the final summary and per-thread statistics
output periodically through the benchmark.  -csvstats controls the former and -csvstream the latter:

    ./bin/linkbench -c config/MyConfig.properties -csvstats final-stats.csv -csvstream streaming-stats.csv -r


Benchmark Guidelines
====================
Benchmarks are often controversial and are challenging to do well.
Here are some guidelines for avoiding common pitfalls with LinkBench.

Database Tuning
---------------
There are several steps you can take to remove confounding factors in
database setup and obtain more meaningful results:

* Warm up the databases before collecting statistics.  LinkBench has a
  *warmup_time* setting that sends requests for a period before starting to
  collect statistics.
* Run benchmarks for long periods of time (hours rather than minutes)
    to reduce impact of random variation and to allow the database to
    reach a steady state.
* If at all possible, get expert help tuning the database for your
  hardware and workload.
* Benchmarks where the database fits mostly or entirely in RAM are interesting
  but aren't comparable to benchmarks where
  the database is much larger than RAM.  Typically for MySQL benchmarks our databases
  are 10-15x larger than the buffer pool.
* Databases should be benchmarked in comparable configurations.  We
  always run LinkBench with durable writes (i.e. so that
  after an operation returns, the data is written to persistent storage and can be
  recovered in the event of a system crash).
  Similarly, our LinkBench MySQL implementation provides serializable consistency of
  operations.  Weaker durability or consistency properties should be
  disclosed alongside benchmark results.

Understanding Performance Profile Under Varying Load
----------------------------------------------------
Different systems can behave differently when heavily or lightly loaded.
The default benchmark settings simulate a heavily loaded database,
with 100 concurrent request threads each sending requests as quickly as they can.
Some database systems perform better than others with many concurrent
clients or heavy load, so performance under heavy load does not
give a complete picture of performance.
Typically databases are not fully loaded all of the time,
so latency of requests under moderate load is also an important
measure of database performance.

To get a better understanding of database performance under varying load it
can be helpful to:
* Modify the *requesters* parameter to test database performance with varying
  numbers of clients.
* Modify the *requestrate* config setting so that requests are throttled.
  Request latency vs. throughput curves help with understanding the full
  performance profile of a database system.
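
As noted in the sample config, the *requestrate* throttle spaces requests
with exponentially distributed gaps so that the average rate matches the
target.  A minimal sketch of that idea (an illustration, not LinkBench's
actual scheduling code):

```python
# Sketch of rate throttling with exponentially distributed
# inter-request gaps, as described for the requestrate setting.
import random

def inter_request_gaps(rate_per_sec, n, seed=42):
    """Generate n exponential gaps averaging 1/rate_per_sec seconds."""
    rng = random.Random(seed)
    return [rng.expovariate(rate_per_sec) for _ in range(n)]

gaps = inter_request_gaps(rate_per_sec=100, n=10_000)
mean_gap = sum(gaps) / len(gaps)
print(round(1 / mean_gap))  # achieved rate, close to 100 req/s
```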

Understanding Resource Utilization
-------------------------
If you are doing a benchmark exercise, it is often a good idea to collect
additional information about system resource utilization, particularly
for CPU and I/O.  This can aid a lot in understanding and comparing
benchmark results beyond headline performance numbers.  It is
easiest to make use of collected data if you can match up timestamps
to your benchmark logs, so the examples here will append
timestamps to each line of output.

vmstat reports useful summary information on CPU and memory:

    vmstat 1 | gawk '{now=strftime("%Y-%m-%d %T "); print now $0}' > linkbench.run.1/vmstat.out

iostat reports some useful I/O statistics:

    iostat -d -x 1 |  gawk '{now=strftime("%Y-%m-%d %T "); print now $0}' > linkbench.run.1/iostat.out


Extending/Customizing LinkBench
===============================
You can customize LinkBench in several ways.

Reconfiguring Workload
---------------------
We have already introduced you to the LinkBench configuration files.
All settings in these files are documented and a great deal can be changed
simply through these configuration files.  For example:

* You can experiment with read-intensive or write-intensive workloads by
  modifying the mix of operations.
* You can alter the mix of hot/cold rows by modifying the shape
  parameter for ZipfDistribution.  If you set it close to 1, there will be only
  a few very hot nodes in the database; if you set it close to 0, accesses
  will be spread evenly across all nodes.
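
The shape parameter's effect can be seen directly from the Zipf
probabilities, where node i is accessed with probability proportional to
1/i^shape.  A small sketch using a plain normalized Zipf distribution
(LinkBench's ZipfDistribution class may differ in detail):

```python
# How the Zipf shape parameter skews access: p(i) is proportional
# to 1/i**shape over ids 1..n.  Plain Zipf PMF for illustration only.
def top_node_share(shape, n=100_000):
    """Fraction of all accesses that hit the single hottest node."""
    weights = [1.0 / (i ** shape) for i in range(1, n + 1)]
    return weights[0] / sum(weights)

print(top_node_share(0.99))  # hottest node gets several percent of accesses
print(top_node_share(0.10))  # hottest node only slightly above the 1/n share
```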

Additional Workload Generators
------------------------------
It is possible to further customize the data and workload by providing
new implementations of some key classes:
* ProbabilityDistribution: which can be used to control the distribution of
    out-edges in the graph, or the access patterns for requests.
* DataGenerator: which can be used to generate data in different ways for requests.

Additional Database Systems
---------------------------
You can write plugins to benchmark additional database systems
simply by writing a Java class implementing a small set of graph operations.
Any classes implementing the `com.facebook.LinkBench.LinkStore`
and `com.facebook.LinkBench.NodeStore` interfaces can be loaded
through the *linkstore* and *nodestore* configuration file keys.
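
For example, if your implementation were a class named MyStore (the
fully-qualified name below is a hypothetical placeholder), your config file
would point both keys at it; a single class may implement both interfaces,
as LinkStoreMysql does:

```
# Hypothetical plugin configuration; com.example.MyStore is a
# placeholder for a class implementing both LinkStore and NodeStore
linkstore = com.example.MyStore
nodestore = com.example.MyStore
```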

There are several steps you will have to go through to add a
new plugin.  First you need to choose how you will represent LinkBench
nodes and links.  Several factors play a role in the design, but
speed of range scans and atomicity of updates are particularly important.
The MySQL schema from earlier in this README serves as a reference
implementation.

Next you need to create a new Java class, such as `public class MyStore
extends GraphStore`, and implement all of the required methods of
`LinkStore` and `NodeStore`.  Two reference implementations are provided:
`LinkStoreMysql`, a fully-fledged implementation,  and `MemoryLinkStore`,
a toy in-memory implementation.

LinkBench provides some tests to validate your implementation that you
can use during development.  If you extend any of the test classes
`LinkStoreTestBase`, `NodeStoreTestBase` and `GraphStoreTestBase` with
the required methods that set up your database, then a range of tests
will be run against it.  These tests are sanity checks rather than
comprehensive verification of your implementation.  In particular,
they do not try to verify the atomicity, consistency or durability
properties of the implementation.

Database-specific tests are not run by default.  You can enable them
with Maven profiles.  For example, to run the MySQL tests you can
run:

    mvn test -P mysql-test

The MySQL-related unit tests run against a test database that must be
set up before running them.  The default settings for this test database
are hardcoded in src/test/java/com/facebook/LinkBench/MySqlTestConfig.java:
they connect to localhost:3306 with username "linkbench" and password
"linkbench".  The unit test code creates all the required tables, so you
only need to set up a MySQL database called "linkbench_unittestdb" in
which the linkbench user has permission to create and drop tables.
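
Mirroring the earlier MySQL setup, the test database can be created with
something like the following.  This assumes the default MySqlTestConfig
values described above; CREATE USER IF NOT EXISTS requires MySQL 5.7+:

```sql
-- Set up the unit-test database assumed by the default MySqlTestConfig
-- settings (localhost:3306, user "linkbench", password "linkbench").
CREATE DATABASE linkbench_unittestdb;
CREATE USER IF NOT EXISTS 'linkbench'@'localhost' IDENTIFIED BY 'linkbench';
GRANT ALL ON linkbench_unittestdb.* TO 'linkbench'@'localhost';
```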

**If you implement a plugin for a new database, please consider contributing
it back to the main LinkBench distribution with a pull request.**


================================================
FILE: bin/genswift
================================================
#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# The command script
#
# Environment Variables
#
#   JAVA_HOME        The java implementation to use.  Overrides the java found on PATH.
#

BIN=$(dirname $(readlink -f "$0"))
HOME=`dirname $BIN`
echo "HOME is at $HOME"

if [ "$JAVA_HOME" = "" ]; then
  JAVA=`which java`
  if [ ! -x $JAVA ]; then
    echo "Error: java not found, set JAVA_HOME or add java to PATH."
    exit 1
  fi
else
  JAVA=$JAVA_HOME/bin/java
fi

echo "Using java at: $JAVA"

SWIFT="$BIN/swift-generator-cli-0.11.0-standalone.jar"

# run it
pushd "$HOME/src/main/java/com/facebook/rocks/swift"
rm *.java
rm -rf gen-swift
"$JAVA" -jar "$SWIFT" -tweak ADD_CLOSEABLE_INTERFACE rocks_common.thrift
"$JAVA" -jar "$SWIFT" -tweak ADD_CLOSEABLE_INTERFACE rocks.thrift
mv gen-swift/com/facebook/rocks/swift/* .
rm -rf gen-swift
popd


================================================
FILE: bin/linkbench
================================================
#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# The command script
#
# Environment Variables
#
#   JAVA_HOME        The java implementation to use.  Overrides the java found on PATH.
#
#   HEAPSIZE  The maximum amount of heap to use, in MB.
#                    Default is 1000.
#
#   OPTS      Extra Java runtime options.
#
#   CONF_DIR  Alternate conf dir. Default is ./config.
#
# This script creates the benchmark data and then runs the workload
# on it

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

# Export LINKBENCH_HOME so that LinkBench Java code can access env var
export LINKBENCH_HOME=`dirname $bin`

if [ "$JAVA_HOME" = "" ]; then
  JAVA=`which java`
  if [ ! -x $JAVA ]; then
    echo "Error: java not found, set JAVA_HOME or add java to PATH."
    exit 1
  fi
else
  JAVA=$JAVA_HOME/bin/java
fi

echo "Using java at: $JAVA"

JAVA_HEAP_MAX=-Xmx1000m

# check envvars which might override default args
if [ "$HEAPSIZE" != "" ]; then
  #echo "run with heapsize $HEAPSIZE"
  JAVA_HEAP_MAX="-Xmx""$HEAPSIZE""m"
  #echo $JAVA_HEAP_MAX
fi

# CLASSPATH initially contains $CONF_DIR
CLASSPATH="${CONF_DIR}"
CLASSPATH=${CLASSPATH}:$JAVA_HOME/lib/tools.jar

# so that filenames w/ spaces are handled correctly in loops below
IFS=

# add latest jar to CLASSPATH
CLASSPATH=${CLASSPATH}:target/FacebookLinkBench.jar:

# restore ordinary behaviour
unset IFS

# figure out which class to run
CLASS='com.facebook.LinkBench.LinkBenchDriver'

# run it
exec "$JAVA" $JAVA_HEAP_MAX $OPTS $JMX_OPTS -classpath "$CLASSPATH" $CLASS \
          "$@"


================================================
FILE: bin/swift-generator-cli-0.11.0-standalone.jar
================================================


================================================
FILE: config/FBWorkload.properties
================================================
# LinkBench workload configuration file for Facebook social graph workload
#
#
# Default parameters emulate a scaled-down version of Facebook's real
# social graph workload.  The default parameters generate a benchmark
# database of approximately 10GB, which is appropriate for testing, but
# too small for full-scale benchmarking. To generate a bigger graph,
# increase maxid1.

# Optionally you can change workload parameters to modify benchmark data
# and the request workload.

######################
#    Data Files      #
######################

# Path for file with real distributions for links, accesses, etc.
# Can be absolute path, or relative path from LinkBench home directory
data_file = config/Distribution.dat

#####################################
#                                   #
#  Graph Generation Configuration   #
#                                   #
#####################################

# start node id (inclusive)
startid1 = 1

# end node id for initial load (exclusive)
# With default config and MySQL/InnoDB, 1M ids ~= 1GB
maxid1 = 10000001

# Number of distinct link types (link outdegree is shared among types)
link_type_count = 2

# +----------------------------+
# |Graph outdegree distribution|
# +----------------------------+
# These parameters control how the outdegree of each node in the graph
# is chosen.  

# nlinks_func selects the outdegree distribution function.  Options are:
## REAL: use the empirically observed distribution in the data file
nlinks_func = real

## ProbabilityDistribution class name: use the probability distribution
#     with other parameters for that class with the nlinks_ prefix. E.g.
# nlinks_func = com.facebook.LinkBench.distributions.ZipfDistribution
# nlinks_shape = 1.5
# nlinks_mean = 2000000

## A synthetic distribution
# RECIPROCAL: small id1s tend to get more #links : 
#          #links(id1) = maxid1/(1+id1)
# MULTIPLES: id1s that are multiples of nlinks_config get 
#   nlinks_config links  (rest get nlinks_default)
# PERFECT_POWERS means perfect squares/cubes/etc get more #links 
#   (rest get nlinks_default)
#   the larger a perfect square is, the more #links it gets.
#   nlinks_config controls whether it is squares, cubes, etc
# EXPONENTIAL means exponential i.e powers of nlinks_config get more #links
#nlinks_func = RECIPROCAL
# config param that goes along with nlinks_func
#nlinks_config = 1
# minimum link count: use 0 or 1 for this
#nlinks_default = 0

# +--------------------------+ 
# | Link ID2 selection       |
# +--------------------------+
# These options allow selection of alternative behavior for selecting
# link id2s of edges in graph

# if nonzero, generate id2 uniformly between 0 and this - 1 during load
# and lookups.  Must be < 2^31
# randomid2max = 0

# +----------------------+
# |Node/link payload data|
# +----------------------+
# Median payload data size of links
link_datasize = 8

# Data generator for new links
# Default settings give ~30% compression ratio
link_add_datagen = com.facebook.LinkBench.generators.MotifDataGenerator
link_add_datagen_startbyte = 32
link_add_datagen_endbyte = 100
link_add_datagen_uniqueness = 0.225
link_add_datagen_motif_length = 128

# Data generator for link updates
link_up_datagen = com.facebook.LinkBench.generators.MotifDataGenerator
link_up_datagen_startbyte = 32
link_up_datagen_endbyte = 100
link_up_datagen_uniqueness = 0.225
link_up_datagen_motif_length = 128

# Median payload data size of nodes
node_datasize = 128

# Data generator for new nodes
# Node data generators give ~60% compression ratio
node_add_datagen = com.facebook.LinkBench.generators.MotifDataGenerator
node_add_datagen_startbyte = 50
node_add_datagen_endbyte = 220
node_add_datagen_uniqueness = 0.63

# Data generator for node updates
node_up_datagen = com.facebook.LinkBench.generators.MotifDataGenerator
node_up_datagen_startbyte = 50
node_up_datagen_endbyte = 220
node_up_datagen_uniqueness = 0.63

#####################################
#                                   #
#  Request Workload Configuration   #
#                                   #
#####################################

# configuration for generating id2 in the request phase
# 0 means thread i generates id2 randomly without restriction;
# 1 means thread i generates id2 such that id2 % nrequesters = i,
#   this is to prevent threads from adding/deleting/updating same cells,
#   always use this configuration (1) when using HBaseGeneralAtomicityTesting;
id2gen_config = 0

# Operation mix for request phase
# numbers are percentages and must sum to 100
addlink = 8.9886601
deletelink = 2.9907664
updatelink = 8.0122125
countlink = 4.8863567
getlink = 0.5261142
getlinklist = 50.7119145
getnode = 12.9326683
addnode = 2.5732789
updatenode = 7.366437
deletenode = 1.0115914

# Controls what proportion of linklist queries above will try
# to retrieve more history
getlinklist_history = 0.3

# +-------------------------+
# |Node access distributions|
# +-------------------------+
# These control the access patterns of different classes of operations.
# The following distributions can be configured

# read_* : link reads (dist is correlated with outdegree)
# write_* : link writes (dist is correlated with outdegree)
# read_uncorr_* : optionally, mix in an uncorrelated distribution
# write_uncorr_* : optionally, mix in an uncorrelated distribution
# node_read_* : node reads
# node_update_* : node updates
# node_delete_* : node deletes

# For each of these the *_func parameter selects an access pattern.
# The available options are: 
# * Any ProbabilityDistribution class (e.g. ZipfDistribution)
# * REAL - Real empirical distribution for reads/writes as appropriate
# * ROUND_ROBIN - Cycle through ids
# * RECIPROCAL - Pick id with probability proportional to 1/(1+id)
# * MULTIPLE - Pick a multiple of config parameter
# * POWER - Pick a power of config parameter
# * PERFECT_POWER - Pick a perfect power (square, cube, etc) with exponent
#                    as configured

# read_function controls access patterns for link reads
# shape for Zipf based on pareto parameter of 1.25
read_function = com.facebook.LinkBench.distributions.ZipfDistribution
read_shape = 0.8

# Example of using POWER
#read_function = POWER
#read_param = 2

# read_uncorr_function is alternative to read_function that is
#  uncorrelated to outdegree
# blend: % of link reads to use uncorrelated func for
# Here we have high proportion uncorrelated to keep range scan size down
read_uncorr_blend = 99.5
read_uncorr_function = com.facebook.LinkBench.distributions.ZipfDistribution
read_uncorr_shape = 0.8

# write_function controls access patterns for link writes
# shape for Zipf based on pareto parameter of 1.35
write_function = com.facebook.LinkBench.distributions.ZipfDistribution
write_shape = 0.741

# write_uncorr_function is alternative to write_* that is uncorrelated
#  to outdegree
# 95% uncorrelated give weak correlation with outdegree
write_uncorr_blend = 95
write_uncorr_shape = 0.741
write_uncorr_config = 1

# node_read_function controls reads for graph nodes
# shape for Zipf based on pareto parameter of 1.6
node_read_function = com.facebook.LinkBench.distributions.ZipfDistribution
node_read_shape = 0.625

# node_update_functions controls updates for graph nodes
# shape for Zipf based on pareto parameter of 1.65
node_update_function = com.facebook.LinkBench.distributions.ZipfDistribution
node_update_shape = 0.606

# Use uniform rather than skewed distribution for deletes, because:
# a) we don't want to delete the most frequently read nodes
# b) nodes can only be deleted once
node_delete_function = com.facebook.LinkBench.distributions.UniformDistribution

# Distribution to select how many ids per link get request.  Comment
# out to only get one link at a time
link_multiget_dist = com.facebook.LinkBench.distributions.GeometricDistribution
link_multiget_dist_min = 1
link_multiget_dist_max = 128
# Prob param for geometric distribution approximating real mean of ~2.6
link_multiget_dist_prob = 0.382


================================================
FILE: config/LinkConfigMysql.properties
================================================
# Sample MySQL LinkBench configuration file.
#
# This file contains settings for the data store, as well as controlling
# benchmark output and behavior.  The workload is defined in a separate
# file.
# 
# At a minimum to use this file, you will need to fill in MySQL
# connection information.  

##########################
# Workload Configuration #
##########################

# Path for workload properties file.  Properties in this file will override
# those in workload properties file.
# Can be absolute path, or relative path from LinkBench home directory
workload_file = config/FBWorkload.properties

#################################
#                               #
#   Data Source Configuration   #
#                               #
#################################

# Implementation of LinkStore and NodeStore to use 
linkstore = com.facebook.LinkBench.LinkStoreMysql
nodestore = com.facebook.LinkBench.LinkStoreMysql

# MySQL connection information
host = yourhostname.here
user = MySQLuser
password = MySQLpass
port = 3306
# dbid: the database name to use
dbid = linkdb

# database table names
linktable = linktable
# counttable not required for all databases
counttable = counttable
nodetable = nodetable

###############################
#                             #
#   Logging and Stats Setup   #
#                             #
###############################

# This controls logging output.  Settings are, in order of increasing
# verbosity:
# ERROR: only output serious errors
# WARN: output warnings
# INFO: output additional information such as progress
# DEBUG: output high-level debugging information
# TRACE: output more detailed lower-level debugging information
debuglevel = INFO

# display frequency of per-thread progress in seconds
progressfreq = 300

# display frequency of per-thread stats (latency, etc) in seconds
displayfreq = 1800

# display global load update (% complete, etc) after this many links loaded
load_progress_interval = 50000

# display global update on request phase (% complete, etc) after this many ops
req_progress_interval = 10000

# max number of samples to store for each per-thread statistic
maxsamples = 10000

###############################
#                             #
#  Load Phase Configuration   #
#                             #
###############################

# number of threads to run during load phase
loaders = 10

# whether to generate graph nodes during load process
generate_nodes = true

# partition loading work into chunks of id1s of this size
loader_chunk_size = 2048

# seed for initial data load random number generation (optional)
# load_random_seed = 12345

##################################
#                                #
#  Request Phase Configuration   #
#                                #
##################################

# number of threads to run during request phase
requesters = 100

# read + write requests per thread
requests = 500000

# request rate per thread.  <= 0 means unthrottled requests, > 0 limits
#  the average request rate to that number of requests per second per thread,
#  with the inter-request intervals governed by an exponential distribution
requestrate = 0

# max duration in seconds for request phase of benchmark
maxtime = 100000

# warmup time in seconds.  The benchmark is run for a warmup period
# during which no statistics are recorded. This allows database caches,
# etc to warm up.
warmup_time = 0

# seed for request random number generation (optional)
# request_random_seed = 12345

# maximum number of failures per requester to tolerate before aborting
# negative number means never abort
max_failed_requests = 100

###############################
#                             #
#   MySQL Tuning              #
#                             #
###############################

# Optional tuning parameters

# # of link inserts to batch together when loading
# MySQL_bulk_insert_batch = 1024

# optional tuning - disable binary logging during load phase
# WARNING: do not use unless you know what you are doing, it can
# break replication amongst other things
# MySQL_disable_binlog_load = true



================================================
FILE: config/LinkConfigRocksDb.properties
================================================
# Sample RocksDb LinkBench configuration file.
#
# This file contains settings for the data store, as well as controlling
# benchmark output and behavior.  The workload is defined in a separate
# file.
# 

##########################
# Workload Configuration #
##########################

# Path for the workload properties file.  Properties in this file override
# those in the workload file.
# Can be an absolute path, or a path relative to the LinkBench home directory
workload_file = config/FBWorkload.properties

#################################
#                               #
#   Data Source Configuration   #
#                               #
#################################

# Implementation of LinkStore and NodeStore to use 
linkstore = com.facebook.LinkBench.LinkStoreRocksDb
nodestore = com.facebook.LinkBench.LinkStoreRocksDb

# RocksDb connection information
host = yourhostname.here
port = 9090

# dbid: the database name to use
dbid = linkdb

###############################
#                             #
#   Logging and Stats Setup   #
#                             #
###############################

# This controls logging output.  Settings are, in order of increasing
# verbosity:
# ERROR: only output serious errors
# WARN: output warnings
# INFO: output additional information such as progress
# DEBUG: output high-level debugging information
# TRACE: output more detailed lower-level debugging information
debuglevel = INFO

# display frequency of per-thread progress in seconds
progressfreq = 300

# display frequency of per-thread stats (latency, etc) in seconds
displayfreq = 1800

# display global load update (% complete, etc) after this many links loaded
load_progress_interval = 50000

# display global update on request phase (% complete, etc) after this many ops
req_progress_interval = 10000

# max number of samples to store for each per-thread statistic
maxsamples = 10000

###############################
#                             #
#  Load Phase Configuration   #
#                             #
###############################

# number of threads to run during load phase
loaders = 10

# whether to generate graph nodes during load process
generate_nodes = true

# partition loading work into chunks of id1s of this size
loader_chunk_size = 2048

# seed for initial data load random number generation (optional)
# load_random_seed = 12345

##################################
#                                #
#  Request Phase Configuration   #
#                                #
##################################

# number of threads to run during request phase
requesters = 100

# read + write requests per thread
requests = 500000

# request rate per thread.  <= 0 means unthrottled requests, > 0 limits
#  the average request rate to that number of requests per second per thread,
#  with the inter-request intervals governed by an exponential distribution
requestrate = 0

# max duration in seconds for request phase of benchmark
maxtime = 100000

# warmup time in seconds.  The benchmark is run for a warmup period
# during which no statistics are recorded. This allows database caches,
# etc to warm up.
warmup_time = 0

# seed for request random number generation (optional)
# request_random_seed = 12345

# maximum number of failures per requester to tolerate before aborting
# negative number means never abort
max_failed_requests = 100

# write options
write_options_sync = false
write_options_disableWAL = false
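
Editor's note: as stated at the top of this file, keys defined here take precedence over those in `workload_file`. A hypothetical sketch of that layering using `java.util.Properties` defaults (class and method names are illustrative, not LinkBench's actual loader):

```java
import java.util.Properties;

// Sketch: layer a store config over a workload config so that keys in
// the store config win, mirroring the override rule in these files.
public class PropsLayering {
    static Properties layered(Properties workload, Properties storeConfig) {
        // `workload` becomes the defaults table, consulted only when the
        // main table (filled from storeConfig) misses a key.
        Properties merged = new Properties(workload);
        merged.putAll(storeConfig);
        return merged;
    }

    public static void main(String[] args) {
        Properties workload = new Properties();
        workload.setProperty("requesters", "50");
        workload.setProperty("maxid1", "10001");

        Properties store = new Properties();
        store.setProperty("requesters", "100"); // overrides workload value

        Properties merged = layered(workload, store);
        System.out.println(merged.getProperty("requesters")); // 100
        System.out.println(merged.getProperty("maxid1"));     // 10001
    }
}
```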


================================================
FILE: pom.xml
================================================
<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>com.facebook.linkbench</groupId>
    <artifactId>linkbench</artifactId>
    <version>0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <url>https://github.com/facebook/linkbench</url>
    <inceptionYear>2012</inceptionYear>

    <developers>
        <developer>
            <id>tarmstrong</id>
            <name>Tim Armstrong</name>
            <email>tarmstrong@fb.com</email>
        </developer>
        <developer>
            <id>dhruba</id>
            <name>Dhruba Borthakur</name>
            <email>dhruba@fb.com</email>
        </developer>
        <developer>
            <id>amayank</id>
            <name>Mayank Agarwal</name>
            <email>amayank@fb.com</email>
        </developer>
        <developer>
            <id>andrewcox</id>
            <name>Andrew Cox</name>
            <email>andrewcox@fb.com</email>
        </developer>
    </developers>

    <scm>
        <connection>scm:git:https://github.com/facebook/linkbench.git</connection>
        <developerConnection>scm:git@github.com:facebook/linkbench.git</developerConnection>
        <url>https://github.com/facebook/linkbench</url>
        <tag>HEAD</tag>
    </scm>

    <dependencies>
        <dependency>
            <groupId>commons-cli</groupId>
            <artifactId>commons-cli</artifactId>
            <version>1.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-math3</artifactId>
            <version>3.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>0.20.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase</artifactId>
            <version>0.94.3</version>
            <exclusions>
                <exclusion>
                    <groupId>com.google.guava</groupId>
                    <artifactId>guava</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.jboss.netty</groupId>
                    <artifactId>netty</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>asm</groupId>
                    <artifactId>asm</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>com.facebook.swift</groupId>
            <artifactId>swift-codec</artifactId>
            <version>0.13.2</version>
        </dependency>
        <dependency>
            <groupId>com.facebook.swift</groupId>
            <artifactId>swift-service</artifactId>
            <version>0.13.2</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>1.7.0</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.22</version>
        </dependency>
    </dependencies>

    <profiles>
        <profile>
            <id>nexus</id>
            <!--Enable snapshots for the built in central repo to direct -->
            <!--all requests to nexus via the mirror -->
            <repositories>
                <repository>
                    <id>central</id>
                    <url>http://repo1.maven.org/maven2</url>
                    <releases>
                        <enabled>true</enabled>
                    </releases>
                    <snapshots>
                        <enabled>true</enabled>
                    </snapshots>
                </repository>
            </repositories>
            <pluginRepositories>
                <pluginRepository>
                    <id>central</id>
                    <url>http://repo1.maven.org/maven2</url>
                    <releases>
                        <enabled>true</enabled>
                    </releases>
                    <snapshots>
                        <enabled>true</enabled>
                    </snapshots>
                </pluginRepository>
            </pluginRepositories>
        </profile>
        <profile>
            <id>mysql-tests</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-surefire-plugin</artifactId>
                        <configuration>
                            <includes>
                              <include>**/*MySql*.class</include>
                            </includes>
                            <excludedGroups></excludedGroups>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
        <profile>
            <id>fast-test</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-surefire-plugin</artifactId>
                        <configuration>
                           <excludedGroups>com.facebook.LinkBench.testtypes.ProviderTest,
                                           com.facebook.LinkBench.testtypes.SlowTest</excludedGroups>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.0</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                    <showDeprecation>true</showDeprecation>
                    <showWarnings>true</showWarnings>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-source-plugin</artifactId>
                <version>2.1.2</version>
                <executions>
                    <execution>
                        <id>attach-sources</id>
                        <phase>verify</phase>
                        <goals>
                            <goal>jar-no-fork</goal>
                            <goal>test-jar</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>2.4</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>test-jar</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.4</version>
                <configuration>
                    <appendAssemblyId>false</appendAssemblyId>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                    <finalName>FacebookLinkBench</finalName>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-surefire-plugin</artifactId>
              <version>2.14</version>
              <dependencies>
                  <dependency>
                      <groupId>org.apache.maven.surefire</groupId>
                      <artifactId>surefire-junit47</artifactId>
                      <version>2.14</version>
                  </dependency>
              </dependencies>
              <configuration>
                  <redirectTestOutputToFile>true</redirectTestOutputToFile>
                  <!-- Configure basic set of tests that don't have special
                       provider dependencies and aren't long-running -->
                  <excludedGroups>com.facebook.LinkBench.testtypes.ProviderTest</excludedGroups>
              </configuration>
            </plugin>
            <plugin>
              <groupId>com.facebook.mojo</groupId>
              <artifactId>swift-maven-plugin</artifactId>
              <version>0.13.2</version>
              <executions>
                <execution>
                  <goals>
                    <goal>generate</goal>
                  </goals>
                </execution>
              </executions>
              <configuration>
                <idlFiles>
                  <directory>
                    ${basedir}/src/main/java/com/facebook/rocks/swift/
                  </directory>
                  <includes>
                    <include>*</include>
                  </includes>
                </idlFiles>
                <addCloseableInterface>true</addCloseableInterface>
              </configuration>
            </plugin>
        </plugins>
    </build>
</project>


================================================
FILE: src/main/java/com/facebook/LinkBench/Config.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

/**
 * Consolidate shared config key strings in this file
 * See sample config file for documentation of config properties
 * @author tarmstrong
 *
 */
public class Config {

  public static final String DEBUGLEVEL = "debuglevel";

  /* Control store implementations used */
  public static final String LINKSTORE_CLASS = "linkstore";
  public static final String NODESTORE_CLASS = "nodestore";

  /* Schema and tables used */
  public static final String DBID = "dbid";
  public static final String LINK_TABLE = "linktable";
  public static final String COUNT_TABLE = "counttable";
  public static final String NODE_TABLE = "nodetable";

  /* Control graph structure */
  public static final String LOAD_RANDOM_SEED = "load_random_seed";
  public static final String MIN_ID = "startid1";
  public static final String MAX_ID = "maxid1";
  public static final String GENERATE_NODES = "generate_nodes";
  public static final String RANDOM_ID2_MAX = "randomid2max";
  public static final String NLINKS_PREFIX = "nlinks_";
  public static final String NLINKS_FUNC = "nlinks_func";
  public static final String NLINKS_CONFIG = "nlinks_config";
  public static final String NLINKS_DEFAULT = "nlinks_default";
  public static final String LINK_TYPE_COUNT ="link_type_count";

  /* Data generation */
  public static final String LINK_DATASIZE = "link_datasize";
  public static final String NODE_DATASIZE = "node_datasize";
  public static final String UNIFORM_GEN_STARTBYTE = "startbyte";
  public static final String UNIFORM_GEN_ENDBYTE = "endbyte";
  public static final String MOTIF_GEN_UNIQUENESS = "uniqueness";
  public static final String MOTIF_GEN_LENGTH = "motif_length";
  public static final String LINK_ADD_DATAGEN = "link_add_datagen";
  public static final String LINK_ADD_DATAGEN_PREFIX = "link_add_datagen_";
  public static final String LINK_UP_DATAGEN = "link_up_datagen";
  public static final String LINK_UP_DATAGEN_PREFIX = "link_up_datagen_";
  public static final String NODE_ADD_DATAGEN = "node_add_datagen";
  public static final String NODE_ADD_DATAGEN_PREFIX = "node_add_datagen_";
  public static final String NODE_UP_DATAGEN = "node_up_datagen";
  public static final String NODE_UP_DATAGEN_PREFIX = "node_up_datagen_";
  // Sigma values control the variance of the log-normal data size distribution
  public static final double LINK_DATASIZE_SIGMA = 1.0;
  public static final double NODE_DATASIZE_SIGMA = 1.0;

  /* Loading performance tuning */
  public static final String NUM_LOADERS = "loaders";
  public static final String LOADER_CHUNK_SIZE = "loader_chunk_size";

  /* Request workload */
  public static final String NUM_REQUESTERS = "requesters";
  public static final String REQUEST_RANDOM_SEED = "request_random_seed";

  // Distribution of accesses to IDs
  public static final String READ_CONFIG_PREFIX = "read_";
  public static final String WRITE_CONFIG_PREFIX = "write_";
  public static final String NODE_READ_CONFIG_PREFIX = "node_read_";
  public static final String NODE_UPDATE_CONFIG_PREFIX = "node_update_";
  public static final String NODE_DELETE_CONFIG_PREFIX = "node_delete_";
  public static final String ACCESS_FUNCTION_SUFFIX = "function";
  public static final String ACCESS_CONFIG_SUFFIX = "config";
  public static final String READ_FUNCTION = "read_function";
  public static final String READ_CONFIG = "read_config";
  public static final String WRITE_FUNCTION = "write_function";
  public static final String WRITE_CONFIG = "write_config";
  public static final String READ_UNCORR_CONFIG_PREFIX = "read_uncorr_";
  public static final String WRITE_UNCORR_CONFIG_PREFIX = "write_uncorr_";
  public static final String READ_UNCORR_FUNCTION = READ_UNCORR_CONFIG_PREFIX
                                                    + ACCESS_FUNCTION_SUFFIX;
  public static final String WRITE_UNCORR_FUNCTION = WRITE_UNCORR_CONFIG_PREFIX
                                                    + ACCESS_FUNCTION_SUFFIX;
  public static final String BLEND_SUFFIX = "blend";
  public static final String READ_UNCORR_BLEND =  READ_UNCORR_CONFIG_PREFIX
                                                    + BLEND_SUFFIX;
  public static final String WRITE_UNCORR_BLEND = WRITE_UNCORR_CONFIG_PREFIX
                                                    + BLEND_SUFFIX;

  // Probability of different operations
  public static final String PR_ADD_LINK = "addlink";
  public static final String PR_DELETE_LINK = "deletelink";
  public static final String PR_UPDATE_LINK = "updatelink";
  public static final String PR_COUNT_LINKS = "countlink";
  public static final String PR_GET_LINK = "getlink";
  public static final String PR_GET_LINK_LIST = "getlinklist";
  public static final String PR_ADD_NODE = "addnode";
  public static final String PR_UPDATE_NODE = "updatenode";
  public static final String PR_DELETE_NODE = "deletenode";
  public static final String PR_GET_NODE = "getnode";
  public static final String PR_GETLINKLIST_HISTORY = "getlinklist_history";
  public static final String WARMUP_TIME = "warmup_time";
  public static final String MAX_TIME = "maxtime";
  public static final String REQUEST_RATE = "requestrate";
  public static final String NUM_REQUESTS = "requests";
  public static final String MAX_FAILED_REQUESTS = "max_failed_requests";
  public static final String ID2GEN_CONFIG = "id2gen_config";
  public static final String LINK_MULTIGET_DIST = "link_multiget_dist";
  public static final String LINK_MULTIGET_DIST_MIN = "link_multiget_dist_min";
  public static final String LINK_MULTIGET_DIST_MAX = "link_multiget_dist_max";
  public static final String LINK_MULTIGET_DIST_PREFIX = "link_multiget_dist_";

  /* Probability distribution parameters */
  public static final String PROB_MEAN = "mean";

  /* Statistics collection and reporting */
  public static final String MAX_STAT_SAMPLES = "maxsamples";
  public static final String DISPLAY_FREQ = "displayfreq";
  public static final String MAPRED_REPORT_PROGRESS = "reportprogress";
  public static final String PROGRESS_FREQ = "progressfreq";

  /* Reporting for progress indicators */
  public static final String REQ_PROG_INTERVAL = "req_progress_interval";
  public static final String LOAD_PROG_INTERVAL = "load_progress_interval";

  /* MapReduce specific configuration */
  public static final String TEMPDIR = "tempdir";
  public static final String LOAD_DATA = "loaddata";
  public static final String MAPRED_USE_INPUT_FILES = "useinputfiles";

  /* External data */
  public static final String DISTRIBUTION_DATA_FILE = "data_file";
  public static final String WORKLOAD_CONFIG_FILE = "workload_file";
}
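
Editor's note: several of the access-distribution keys above are composed from a prefix plus a suffix (for example, `READ_UNCORR_FUNCTION` is `READ_UNCORR_CONFIG_PREFIX + ACCESS_FUNCTION_SUFFIX`). A small standalone illustration of the composition pattern, with the relevant constants copied locally:

```java
// Sketch: how LinkBench-style composed config keys resolve to the
// property names that appear in the .properties files.
public class KeyComposition {
    static final String READ_UNCORR_CONFIG_PREFIX = "read_uncorr_";
    static final String ACCESS_FUNCTION_SUFFIX = "function";
    static final String BLEND_SUFFIX = "blend";

    public static void main(String[] args) {
        System.out.println(READ_UNCORR_CONFIG_PREFIX + ACCESS_FUNCTION_SUFFIX); // read_uncorr_function
        System.out.println(READ_UNCORR_CONFIG_PREFIX + BLEND_SUFFIX);           // read_uncorr_blend
    }
}
```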


================================================
FILE: src/main/java/com/facebook/LinkBench/ConfigUtil.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

import java.io.File;
import java.io.IOException;
import java.util.Properties;

import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.EnhancedPatternLayout;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Layout;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class ConfigUtil {
  public static final String linkbenchHomeEnvVar = "LINKBENCH_HOME";
  public static final String LINKBENCH_LOGGER = "com.facebook.linkbench";

  /**
   * @return the LINKBENCH_HOME directory, or null if unset or not a valid path
   */
  public static String findLinkBenchHome() {
    String linkBenchHome = System.getenv(linkbenchHomeEnvVar);
    if (linkBenchHome != null && linkBenchHome.length() > 0) {
      File dir = new File(linkBenchHome);
      if (dir.exists() && dir.isDirectory()) {
        return linkBenchHome;
      }
    }
    return null;
  }

  public static Level getDebugLevel(Properties props)
                                    throws LinkBenchConfigError {
    if (props == null) {
      return Level.DEBUG;
    }
    String levStr = props.getProperty(Config.DEBUGLEVEL);

    if (levStr == null) {
      return Level.DEBUG;
    }

    try {
      int level = Integer.parseInt(levStr);
      if (level <= 0) {
        return Level.INFO;
      } else if (level == 1) {
        return Level.DEBUG;
      } else {
        return Level.TRACE;
      }
    } catch (NumberFormatException e) {
      Level lev = Level.toLevel(levStr, null);
      if (lev != null) {
        return lev;
      } else {
        throw new LinkBenchConfigError("Invalid setting for debug level: " +
                                       levStr);
      }
    }
  }

  /**
   * Setup log4j to log to stderr with a timestamp and thread id
   * Could add in configuration from file later if it was really necessary
   * @param props
   * @param logFile if not null, info logging will be diverted to this file
   * @throws IOException
   * @throws Exception
   */
  public static void setupLogging(Properties props, String logFile)
                                    throws LinkBenchConfigError, IOException {
    Layout fmt = new EnhancedPatternLayout("%p %d [%t]: %m%n%throwable{30}");
    Level logLevel = ConfigUtil.getDebugLevel(props);
    Logger.getRootLogger().removeAllAppenders();
    Logger lbLogger = Logger.getLogger(LINKBENCH_LOGGER);
    lbLogger.setLevel(logLevel);
    ConsoleAppender console = new ConsoleAppender(fmt, "System.err");

    /* If logfile is specified, put full stream in logfile and only
     * print important messages to terminal
     */
    if (logFile != null) {
      console.setThreshold(Level.WARN); // Only print salient messages
      lbLogger.addAppender(new FileAppender(fmt, logFile));
    }
    lbLogger.addAppender(console);
  }

  /**
   * Look up key in props, failing if not present
   * @param props
   * @param key
   * @return
   * @throws LinkBenchConfigError thrown if key not present
   */
  public static String getPropertyRequired(Properties props, String key)
    throws LinkBenchConfigError {
    String v = props.getProperty(key);
    if (v == null) {
      throw new LinkBenchConfigError("Expected configuration key " + key +
                                     " to be defined");
    }
    return v;
  }

  public static int getInt(Properties props, String key)
      throws LinkBenchConfigError {
    return getInt(props, key, null);
  }

  /**
   * Retrieve a config key and convert to integer
   * @param props
   * @param key
   * @param defaultVal default value if key not present
   * @return the integer value
   * @throws LinkBenchConfigError if not present or not integer
   */
  public static int getInt(Properties props, String key, Integer defaultVal)
      throws LinkBenchConfigError {
    if (defaultVal != null && !props.containsKey(key)) {
      return defaultVal;
    }
    String v = getPropertyRequired(props, key);
    try {
      return Integer.parseInt(v);
    } catch (NumberFormatException e) {
      throw new LinkBenchConfigError("Expected configuration key " + key +
                " to be integer, but was '" + v + "'");
    }
  }

  public static long getLong(Properties props, String key)
      throws LinkBenchConfigError {
    return getLong(props, key, null);
  }

  /**
   * Retrieve a config key and convert to long integer
   * @param props
   * @param key
   * @param defaultVal default value if key not present
   * @return
   * @throws LinkBenchConfigError if not present or not integer
   */
  public static long getLong(Properties props, String key, Long defaultVal)
      throws LinkBenchConfigError {
    if (defaultVal != null && !props.containsKey(key)) {
      return defaultVal;
    }
    String v = getPropertyRequired(props, key);
    try {
      return Long.parseLong(v);
    } catch (NumberFormatException e) {
      throw new LinkBenchConfigError("Expected configuration key " + key +
                " to be long integer, but was '" + v + "'");
    }
  }


  public static double getDouble(Properties props, String key)
                throws LinkBenchConfigError {
    return getDouble(props, key, null);
  }

  /**
   * Retrieve a config key and convert to double
   * @param props
   * @param key
   * @param defaultVal default value if key not present
   * @return
   * @throws LinkBenchConfigError if not present or not double
   */
  public static double getDouble(Properties props, String key,
        Double defaultVal) throws LinkBenchConfigError {
    if (defaultVal != null && !props.containsKey(key)) {
      return defaultVal;
    }
    String v = getPropertyRequired(props, key);
    try {
      return Double.parseDouble(v);
    } catch (NumberFormatException e) {
      throw new LinkBenchConfigError("Expected configuration key " + key +
                " to be double, but was '" + v + "'");
    }
  }

  /**
   * Retrieve a config key and convert to boolean.
   * Valid boolean strings are "true" or "false", case insensitive
   * @param props
   * @param key
   * @return
   * @throws LinkBenchConfigError if not present or not boolean
   */
  public static boolean getBool(Properties props, String key)
      throws LinkBenchConfigError {
    return getBool(props, key, null);
  }

  public static boolean getBool(Properties props, String key,
      Boolean defaultVal) throws LinkBenchConfigError {
    if (defaultVal != null && !props.containsKey(key)) {
      return defaultVal;
    }

    String v = getPropertyRequired(props, key).trim().toLowerCase();
    // Parse manually since parseBoolean accepts many things as "false"
    if (v.equals("true")) {
      return true;
    } else if (v.equals("false")) {
      return false;
    } else {
      throw new LinkBenchConfigError("Expected configuration key " + key +
                " to be true or false, but was '" + v + "'");
    }
  }
}
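
Editor's note: `getBool` above parses booleans manually because `Boolean.parseBoolean` silently treats any string other than `"true"` as `false`. A standalone sketch mirroring the same strict check (class and method names are illustrative):

```java
// Sketch: strict boolean parsing in the style of ConfigUtil.getBool,
// rejecting inputs Boolean.parseBoolean would silently treat as false.
public class StrictBool {
    static boolean parseStrict(String raw) {
        String v = raw.trim().toLowerCase();
        if (v.equals("true")) return true;
        if (v.equals("false")) return false;
        throw new IllegalArgumentException(
            "Expected true or false, but was '" + raw + "'");
    }

    public static void main(String[] args) {
        System.out.println(parseStrict(" True "));       // true
        System.out.println(Boolean.parseBoolean("yes")); // false (silently!)
        try {
            parseStrict("yes");                          // rejected loudly
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The strict variant surfaces a typo like `generate_nodes = yes` as a configuration error rather than quietly disabling node generation.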


================================================
FILE: src/main/java/com/facebook/LinkBench/GraphStore.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

import java.util.List;

/**
 * An abstract class for storing both nodes and edges
 * @author tarmstrong
 */
public abstract class GraphStore extends LinkStore implements NodeStore {

  /** Provide generic implementation */
  public long[] bulkAddNodes(String dbid, List<Node> nodes) throws Exception {
    long ids[] = new long[nodes.size()];
    int i = 0;
    for (Node node: nodes) {
      long id = addNode(dbid, node);
      ids[i++] = id;
    }
    return ids;
  }
}


================================================
FILE: src/main/java/com/facebook/LinkBench/InvertibleShuffler.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

import java.util.Random;

/**
 * Shuffler designed to make computing permutation and inverse easy
 */
public class InvertibleShuffler {
  private final long[] params;
  private final int shuffleGroups;
  long n;
  long nRoundedUp; // n rounded up to the next multiple of shuffleGroups
  long nRoundedDown; // n rounded down to a multiple of shuffleGroups
  int minGroupSize;

  public InvertibleShuffler(long seed, int shuffleGroups, long n) {
    this(new Random(seed), shuffleGroups, n);
  }
  public InvertibleShuffler(Random rng, int shuffleGroups, long n) {
    if (shuffleGroups > n) {
      // Can't have more shuffle groups than items
      shuffleGroups = (int)n;
    }
    this.shuffleGroups = shuffleGroups;
    this.n = n;
    this.params = new long[shuffleGroups];
    this.minGroupSize = (int)n / shuffleGroups;

    for (int i = 0; i < shuffleGroups; i++) {
      // Random rotation in [0, minGroupSize); nextInt(bound) is already non-negative
      params[i] = rng.nextInt(minGroupSize);
    }
    this.nRoundedDown = (n / shuffleGroups) * shuffleGroups;
    this.nRoundedUp = n == nRoundedDown ? n : nRoundedDown + shuffleGroups;
  }

  public long permute(long i) {
    return permute(i, false);
  }

  public long invertPermute(long i) {
    return permute(i, true);
  }

  public long permute(long i, boolean inverse) {
    if (i < 0 || i >= n) {
      throw new IllegalArgumentException("Bad index to permute: " + i
          + ": out of range [0:" + (n - 1) + "]");
    }
    // Number of the group
    int group = (int) (i % shuffleGroups);

    // Whether this is a big or small group
    boolean bigGroup = group < n % shuffleGroups;

    // Calculate the (positive) rotation
    long rotate = params[group];
    if (inverse) {
      // Reverse the rotation
      if (bigGroup) {
        rotate = minGroupSize + 1 - rotate;
      } else {
        rotate = minGroupSize - rotate;
      }
      assert(rotate >= 0);
    }

    long j = (i + shuffleGroups * rotate);
    long result;
    if (j < n) {
      result = j;
    } else {
      // Depending on the group there might be different numbers of
      // ids in the ring
      if (bigGroup) {
        result = j % nRoundedUp;
      } else {
        result = j % nRoundedDown;
      }
      if (result >= n) {
        result = group;
      }
    }
    assert(result % shuffleGroups == group);
    return result;
  }

}
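The permute logic above treats the ids sharing a residue modulo shuffleGroups as a ring and rotates each ring by a per-group offset, which is why the inverse is just a rotation by the complementary offset. Below is a minimal standalone sketch of that idea (a simplified re-implementation for illustration, not the repo class; `ShuffleSketch` computes ring sizes directly rather than via nRoundedUp/nRoundedDown) that checks the round-trip property:

```java
import java.util.Random;

// Simplified sketch of the rotate-within-group idea: ids sharing a residue
// mod `groups` form a ring, the permutation rotates each ring by a
// per-group offset, and the inverse rotates by the complementary offset.
public class ShuffleSketch {
  final int groups;
  final long n;
  final long[] rot;

  ShuffleSketch(long seed, int groups, long n) {
    this.groups = groups;
    this.n = n;
    this.rot = new long[groups];
    Random rng = new Random(seed);
    for (int g = 0; g < groups; g++) {
      rot[g] = rng.nextInt((int) (n / groups));  // non-negative offset
    }
  }

  long permute(long i, boolean inverse) {
    int g = (int) (i % groups);                  // which ring i belongs to
    long ringSize = (n - g + groups - 1) / groups;
    long pos = i / groups;                       // index of i within its ring
    long r = rot[g] % ringSize;
    if (inverse) {
      r = (ringSize - r) % ringSize;             // complementary rotation
    }
    return g + ((pos + r) % ringSize) * groups;
  }

  // Every id must map back to itself and stay in its residue class.
  static boolean roundTripOk(long seed, int groups, long n) {
    ShuffleSketch s = new ShuffleSketch(seed, groups, n);
    for (long i = 0; i < n; i++) {
      long p = s.permute(i, false);
      if (p < 0 || p >= n || p % groups != i % groups) return false;
      if (s.permute(p, true) != i) return false;
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(roundTripOk(42, 4, 30));  // true
  }
}
```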


================================================
FILE: src/main/java/com/facebook/LinkBench/Link.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

import java.util.Arrays;


public class Link {

  public Link(long id1, long link_type, long id2,
      byte visibility, byte[] data, int version, long time) {
    this.id1 = id1;
    this.link_type = link_type;
    this.id2 = id2;
    this.visibility = visibility;
    this.data = data;
    this.version = version;
    this.time = time;
  }

  Link() {
    link_type = LinkStore.DEFAULT_LINK_TYPE;
    visibility = LinkStore.VISIBILITY_DEFAULT;
  }

  public boolean equals(Object other) {
    if (other instanceof Link) {
      Link o = (Link) other;
      return id1 == o.id1 && id2 == o.id2 &&
          link_type == o.link_type &&
          visibility == o.visibility &&
          version == o.version && time == o.time &&
          Arrays.equals(data, o.data);
    } else {
      return false;
    }
  }

  public String toString() {
    return String.format("Link(id1=%d, id2=%d, link_type=%d, " +
        "visibility=%d, version=%d, " +
        "time=%d, data=%s)", id1, id2, link_type,
        visibility, version, time, Arrays.toString(data));
  }

  /**
   * Deep-copy this link, including its payload data array.
   */
  public Link clone() {
    Link l = new Link();
    l.id1 = this.id1;
    l.link_type = this.link_type;
    l.id2 = this.id2;
    l.visibility = this.visibility;
    l.data = this.data.clone();
    l.version = this.version;
    l.time = this.time;
    return l;
  }

  /** The node id of the source of directed edge */
  public long id1;

  /** The node id of the target of directed edge */
  public long id2;

  /** Type of link */
  public long link_type;

  /** Visibility mode */
  public byte visibility;

  /** Version of link */
  public int version;

  /** time is the sort key for links.  Often it contains a timestamp,
      but it can be used as a arbitrary user-defined sort key. */
  public long time;

  /** Arbitrary payload data */
  public byte[] data;

}
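Link.equals() above compares the payload with Arrays.equals rather than data.equals(o.data). That distinction matters because Java arrays inherit Object.equals, which compares references, not contents. A small sketch (`PayloadEquals` is an illustrative name, not repo code):

```java
import java.util.Arrays;

// Why Link.equals() uses Arrays.equals on the payload: Java arrays inherit
// Object.equals, which compares references, so two distinct byte[] with
// identical contents are not equal() to each other.
public class PayloadEquals {
  static boolean refEquals(byte[] a, byte[] b) { return a.equals(b); }

  static boolean valEquals(byte[] a, byte[] b) { return Arrays.equals(a, b); }

  public static void main(String[] args) {
    byte[] a = {1, 2, 3};
    byte[] b = {1, 2, 3};
    System.out.println(refEquals(a, b));          // false: different objects
    System.out.println(valEquals(a, b));          // true: same contents
    System.out.println(valEquals(a, a.clone()));  // true: clone() copies contents
  }
}
```

The same reference-vs-value distinction is why Link.clone() above clones the data array instead of assigning it.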


================================================
FILE: src/main/java/com/facebook/LinkBench/LinkBenchConfigError.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

public class LinkBenchConfigError extends RuntimeException {
  private static final long serialVersionUID = 1L;

  public LinkBenchConfigError(String msg) {
    super(msg);
  }
}


================================================
FILE: src/main/java/com/facebook/LinkBench/LinkBenchDriver.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintStream;
import java.nio.ByteBuffer;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Properties;
import java.util.Random;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.CommandLineParser;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;
import org.apache.log4j.Logger;

import com.facebook.LinkBench.LinkBenchLoad.LoadChunk;
import com.facebook.LinkBench.LinkBenchLoad.LoadProgress;
import com.facebook.LinkBench.LinkBenchRequest.RequestProgress;
import com.facebook.LinkBench.stats.LatencyStats;
import com.facebook.LinkBench.stats.SampledStats;
import com.facebook.LinkBench.util.ClassLoadUtil;

/*
 LinkBenchDriver class.
 First loads data using multi-threaded LinkBenchLoad class.
 Then does read and write requests of various types (addlink, deletelink,
 updatelink, getlink, countlinks, getlinklist) using multi-threaded
 LinkBenchRequest class.
 Config options are taken from config file passed as argument.
 */

public class LinkBenchDriver {

  public static final int EXIT_BADARGS = 1;
  public static final int EXIT_BADCONFIG = 2;

  /* Command line arguments */
  private static String configFile = null;
  private static String workloadConfigFile = null;
  private static Properties cmdLineProps = null;
  private static String logFile = null;
  /** File for final statistics */
  private static PrintStream csvStatsFile = null;
  /** File for output of incremental csv data */
  private static PrintStream csvStreamFile = null;
  private static boolean doLoad = false;
  private static boolean doRequest = false;

  private Properties props;

  private final Logger logger = Logger.getLogger(ConfigUtil.LINKBENCH_LOGGER);

  LinkBenchDriver(String configfile, Properties
                  overrideProps, String logFile)
    throws java.io.FileNotFoundException, IOException, LinkBenchConfigError {
    // which link store to use
    props = new Properties();
    props.load(new FileInputStream(configfile));
    for (String key: overrideProps.stringPropertyNames()) {
      props.setProperty(key, overrideProps.getProperty(key));
    }

    loadWorkloadProps();

    ConfigUtil.setupLogging(props, logFile);

    logger.info("Config file: " + configfile);
    logger.info("Workload config file: " + workloadConfigFile);
  }

  /**
   * Load properties from an auxiliary workload properties file, if provided.
   * Properties from the workload file do not override existing properties.
   * @throws IOException
   * @throws FileNotFoundException
   */
  private void loadWorkloadProps() throws IOException, FileNotFoundException {
    if (props.containsKey(Config.WORKLOAD_CONFIG_FILE)) {
      workloadConfigFile = props.getProperty(Config.WORKLOAD_CONFIG_FILE);
      if (!new File(workloadConfigFile).isAbsolute()) {
        String linkBenchHome = ConfigUtil.findLinkBenchHome();
        if (linkBenchHome == null) {
          throw new RuntimeException("Data file config property "
              + Config.WORKLOAD_CONFIG_FILE
              + " was specified using a relative path, but linkbench home"
              + " directory was not specified through environment var "
              + ConfigUtil.linkbenchHomeEnvVar);
        } else {
          workloadConfigFile = linkBenchHome + File.separator + workloadConfigFile;
        }
      }
      Properties workloadProps = new Properties();
      workloadProps.load(new FileInputStream(workloadConfigFile));
      // Add workload properties, but allow other values to override
      for (String key: workloadProps.stringPropertyNames()) {
        if (props.getProperty(key) == null) {
          props.setProperty(key, workloadProps.getProperty(key));
        }
      }
    }
  }
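The merge rule above gives the main config file (and any -D overrides already folded into props) precedence over the workload file: a workload key is adopted only when props has no value for it. A self-contained sketch of that precedence (`PropsMerge` and its `of` builder are illustrative names, not repo code):

```java
import java.util.Properties;

// Sketch of the precedence rule in loadWorkloadProps(): a workload key is
// adopted only when the main config has not already set it, so the main
// config always wins on conflicts.
public class PropsMerge {
  static Properties merge(Properties main, Properties workload) {
    Properties out = new Properties();
    out.putAll(main);
    for (String key : workload.stringPropertyNames()) {
      if (out.getProperty(key) == null) {
        out.setProperty(key, workload.getProperty(key));
      }
    }
    return out;
  }

  // Convenience builder: of("k1", "v1", "k2", "v2", ...)
  static Properties of(String... kv) {
    Properties p = new Properties();
    for (int i = 0; i < kv.length; i += 2) {
      p.setProperty(kv[i], kv[i + 1]);
    }
    return p;
  }

  public static void main(String[] args) {
    Properties merged = merge(of("requesters", "10"),
                              of("requesters", "4", "maxid1", "10001"));
    // "requesters" keeps the main config's value; "maxid1" is adopted
    System.out.println(merged.getProperty("requesters") + " "
                       + merged.getProperty("maxid1"));  // 10 10001
  }
}
```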

  private static class Stores {
    final LinkStore linkStore;
    final NodeStore nodeStore;
    public Stores(LinkStore linkStore, NodeStore nodeStore) {
      super();
      this.linkStore = linkStore;
      this.nodeStore = nodeStore;
    }
  }

  // generate instances of LinkStore and NodeStore
  private Stores initStores()
    throws Exception {
    LinkStore linkStore = createLinkStore();
    NodeStore nodeStore = createNodeStore(linkStore);

    return new Stores(linkStore, nodeStore);
  }

  private LinkStore createLinkStore() throws Exception, IOException {
    // The property "linkstore" defines the class name that will be used to
    // store data in a database. The following class names are pre-packaged
    // for easy access:
    //   LinkStoreMysql :  run benchmark on  mySQL
    //   LinkStoreHBaseGeneralAtomicityTesting : atomicity testing on HBase.

    String linkStoreClassName = ConfigUtil.getPropertyRequired(props,
                                            Config.LINKSTORE_CLASS);

    logger.debug("Using LinkStore implementation: " + linkStoreClassName);

    LinkStore linkStore;
    try {
      linkStore = ClassLoadUtil.newInstance(linkStoreClassName,
                                            LinkStore.class);
    } catch (ClassNotFoundException nfe) {
      throw new IOException("Could not find class for " + linkStoreClassName);
    }

    return linkStore;
  }

  /**
   * @param linkStore a LinkStore instance to be reused if the linkStore
   * and nodeStore classes turn out to be the same
   * @return a NodeStore instance
   * @throws Exception
   * @throws IOException
   */
  private NodeStore createNodeStore(LinkStore linkStore) throws Exception,
      IOException {
    String nodeStoreClassName = props.getProperty(Config.NODESTORE_CLASS);
    if (nodeStoreClassName == null) {
      logger.debug("No NodeStore implementation provided");
    } else {
      logger.debug("Using NodeStore implementation: " + nodeStoreClassName);
    }

    if (linkStore != null && linkStore.getClass().getName().equals(
                                                nodeStoreClassName)) {
      // Same class, reuse object
      if (!NodeStore.class.isAssignableFrom(linkStore.getClass())) {
        throw new Exception("Specified NodeStore class " + nodeStoreClassName
                          + " is not a subclass of NodeStore");
      }
      return (NodeStore)linkStore;
    } else {
      NodeStore nodeStore;
      try {
        nodeStore = ClassLoadUtil.newInstance(nodeStoreClassName,
                                                            NodeStore.class);
        return nodeStore;
      } catch (java.lang.ClassNotFoundException nfe) {
        throw new IOException("Could not find class for " + nodeStoreClassName);
      }
    }
  }
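Both factory methods above load a store implementation by class name and verify the loaded class really is a subtype before using it (createNodeStore does this with isAssignableFrom before reusing the LinkStore object as a NodeStore). A standalone sketch of the same pattern using Class.forName plus asSubclass (`ReflectiveNew`, `Store`, and `MemStore` are illustrative stand-ins for ClassLoadUtil and the store interfaces, not repo code):

```java
// Sketch of name-based store construction: Class.forName loads the class,
// asSubclass performs the same "is it really a subtype?" check the driver
// does with isAssignableFrom, and a no-arg constructor builds the instance.
public class ReflectiveNew {
  interface Store { String name(); }

  public static class MemStore implements Store {
    public String name() { return "mem"; }
  }

  static Store newStore(String className) throws Exception {
    Class<? extends Store> clazz =
        Class.forName(className).asSubclass(Store.class);
    return clazz.getDeclaredConstructor().newInstance();
  }

  // Helper that folds failures into a string, mirroring how the driver
  // wraps ClassNotFoundException in an IOException with a message.
  static String storeName(String className) {
    try {
      return newStore(className).name();
    } catch (Exception e) {
      return "error:" + e.getClass().getSimpleName();
    }
  }

  public static void main(String[] args) {
    System.out.println(storeName("ReflectiveNew$MemStore"));  // mem
    System.out.println(storeName("no.such.Class"));
  }
}
```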

  void load() throws IOException, InterruptedException, Throwable {

    if (!doLoad) {
      logger.info("Skipping load data per the cmdline arg");
      return;
    }
    // load data
    int nLinkLoaders = ConfigUtil.getInt(props, Config.NUM_LOADERS);


    boolean bulkLoad = true;
    BlockingQueue<LoadChunk> chunk_q = new LinkedBlockingQueue<LoadChunk>();

    // max id1 to generate
    long maxid1 = ConfigUtil.getLong(props, Config.MAX_ID);
    // id1 at which to start
    long startid1 = ConfigUtil.getLong(props, Config.MIN_ID);

    // Create loaders
    logger.info("Starting " + nLinkLoaders + " loaders");
    logger.debug("Bulk Load setting: " + bulkLoad);

    Random masterRandom = createMasterRNG(props, Config.LOAD_RANDOM_SEED);


    boolean genNodes = ConfigUtil.getBool(props, Config.GENERATE_NODES);
    int nTotalLoaders = genNodes ? nLinkLoaders + 1 : nLinkLoaders;

    LatencyStats latencyStats = new LatencyStats(nTotalLoaders);
    List<Runnable> loaders = new ArrayList<Runnable>(nTotalLoaders);

    LoadProgress loadTracker = LoadProgress.create(logger, props);
    for (int i = 0; i < nLinkLoaders; i++) {
      LinkStore linkStore = createLinkStore();

      bulkLoad = bulkLoad && linkStore.bulkLoadBatchSize() > 0;
      LinkBenchLoad l = new LinkBenchLoad(linkStore, props, latencyStats,
              csvStreamFile, i, maxid1 == startid1 + 1, chunk_q, loadTracker);
      loaders.add(l);
    }

    if (genNodes) {
      logger.info("Will generate graph nodes during loading");
      int loaderId = nTotalLoaders - 1;
      NodeStore nodeStore = createNodeStore(null);
      Random rng = new Random(masterRandom.nextLong());
      loaders.add(new NodeLoader(props, logger, nodeStore, rng,
          latencyStats, csvStreamFile, loaderId));
    }
    enqueueLoadWork(chunk_q, startid1, maxid1, nLinkLoaders,
                    new Random(masterRandom.nextLong()));
    // run loaders
    loadTracker.startTimer();
    long loadTime = concurrentExec(loaders);

    long expectedNodes = maxid1 - startid1;
    long actualLinks = 0;
    long actualNodes = 0;
    for (final Runnable l:loaders) {
      if (l instanceof LinkBenchLoad) {
        actualLinks += ((LinkBenchLoad)l).getLinksLoaded();
      } else {
        assert(l instanceof NodeLoader);
        actualNodes += ((NodeLoader)l).getNodesLoaded();
      }
    }

    latencyStats.displayLatencyStats();

    if (csvStatsFile != null) {
      latencyStats.printCSVStats(csvStatsFile, true);
    }

    double loadTime_s = (loadTime/1000.0);
    logger.info(String.format("LOAD PHASE COMPLETED. " +
        " Loaded %d nodes (Expected %d)." +
        " Loaded %d links (%.2f links per node). " +
        " Took %.1f seconds.  Links/second = %d",
        actualNodes, expectedNodes, actualLinks,
        actualLinks / (double) actualNodes, loadTime_s,
        (long) Math.round(actualLinks / loadTime_s)));
  }

  /**
   * Create a new random number generator, optionally seeded to a known
   * value from the config file.  If a seed value is not provided, one
   * is chosen.  In either case the seed is logged for later reproducibility.
   * @param props
   * @param configKey config key for the seed value
   * @return
   */
  private Random createMasterRNG(Properties props, String configKey) {
    long seed;
    if (props.containsKey(configKey)) {
      seed = ConfigUtil.getLong(props, configKey);
      logger.info("Using configured random seed " + configKey + "=" + seed);
    } else {
      seed = System.nanoTime() ^ (long)configKey.hashCode();
      logger.info("Using random seed " + seed + " since " + configKey
          + " not specified");
    }

    SecureRandom masterRandom;
    try {
      masterRandom = SecureRandom.getInstance("SHA1PRNG");
    } catch (NoSuchAlgorithmException e) {
      logger.warn("SHA1PRNG not available, defaulting to default SecureRandom" +
          " implementation");
      masterRandom = new SecureRandom();
    }
    masterRandom.setSeed(ByteBuffer.allocate(8).putLong(seed).array());

    // Can be used to check that rng is behaving as expected
    logger.debug("First number generated by master " + configKey +
                 ": " + masterRandom.nextLong());
    return masterRandom;
  }
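SecureRandom.setSeed takes a byte[], so createMasterRNG() above widens the long seed into its eight big-endian bytes with a ByteBuffer. A minimal sketch of just that conversion (`SeedBytes` is an illustrative name):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Sketch of the seed handling in createMasterRNG(): the long seed is
// widened to its 8 big-endian bytes, the form SecureRandom.setSeed(byte[])
// expects.
public class SeedBytes {
  static byte[] toBytes(long seed) {
    // ByteBuffer is big-endian by default, so the low-order byte lands last
    return ByteBuffer.allocate(8).putLong(seed).array();
  }

  public static void main(String[] args) {
    System.out.println(Arrays.toString(toBytes(1L)));
    // [0, 0, 0, 0, 0, 0, 0, 1]
  }
}
```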

  private void enqueueLoadWork(BlockingQueue<LoadChunk> chunk_q, long startid1,
      long maxid1, int nloaders, Random rng) {
    // Enqueue work chunks.  Do it in reverse order as a heuristic to improve
    // load balancing, since queue is FIFO and later chunks tend to be larger

    int chunkSize = ConfigUtil.getInt(props, Config.LOADER_CHUNK_SIZE, 2048);
    long chunk_num = 0;
    ArrayList<LoadChunk> stack = new ArrayList<LoadChunk>();
    for (long id1 = startid1; id1 < maxid1; id1 += chunkSize) {
      stack.add(new LoadChunk(chunk_num, id1,
                    Math.min(id1 + chunkSize, maxid1), rng));
      chunk_num++;
    }

    for (int i = stack.size() - 1; i >= 0; i--) {
      chunk_q.add(stack.get(i));
    }

    for (int i = 0; i < nloaders; i++) {
      // Add a shutdown signal for each loader
      chunk_q.add(LoadChunk.SHUTDOWN);
    }
  }
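enqueueLoadWork() above splits [startid1, maxid1) into fixed-size chunks, enqueues them in reverse as a load-balancing heuristic, and appends one shutdown sentinel per loader so every worker thread draining the FIFO queue eventually sees a poison pill and exits. A plain-Java sketch of that partitioning (`ChunkQueue` is an illustrative name; null stands in for LoadChunk.SHUTDOWN):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of enqueueLoadWork(): cut [start, max) into fixed-size chunks,
// enqueue them in reverse (later, larger chunks first), then append one
// poison pill per loader so every worker thread sees a shutdown signal.
public class ChunkQueue {
  static List<long[]> enqueue(long start, long max, int chunkSize, int nLoaders) {
    List<long[]> stack = new ArrayList<>();
    for (long id = start; id < max; id += chunkSize) {
      stack.add(new long[] { id, Math.min(id + chunkSize, max) });
    }
    List<long[]> queue = new ArrayList<>();
    for (int i = stack.size() - 1; i >= 0; i--) {
      queue.add(stack.get(i));                 // reverse order
    }
    for (int i = 0; i < nLoaders; i++) {
      queue.add(null);                         // null stands in for SHUTDOWN
    }
    return queue;
  }

  public static void main(String[] args) {
    for (long[] c : enqueue(0, 10, 4, 2)) {
      System.out.println(c == null ? "SHUTDOWN" : c[0] + ".." + c[1]);
    }
    // 8..10, 4..8, 0..4, SHUTDOWN, SHUTDOWN
  }
}
```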

  void sendrequests() throws IOException, InterruptedException, Throwable {

    if (!doRequest) {
      logger.info("Skipping request phase per the cmdline arg");
      return;
    }

    // config info for requests
    int nrequesters = ConfigUtil.getInt(props, Config.NUM_REQUESTERS);
    if (nrequesters == 0) {
      logger.info("NO REQUEST PHASE CONFIGURED. ");
      return;
    }
    LatencyStats latencyStats = new LatencyStats(nrequesters);
    List<LinkBenchRequest> requesters = new LinkedList<LinkBenchRequest>();

    RequestProgress progress = LinkBenchRequest.createProgress(logger, props);

    Random masterRandom = createMasterRNG(props, Config.REQUEST_RANDOM_SEED);

    // create requesters
    for (int i = 0; i < nrequesters; i++) {
      Stores stores = initStores();
      LinkBenchRequest l = new LinkBenchRequest(stores.linkStore,
              stores.nodeStore, props, latencyStats, csvStreamFile,
              progress, new Random(masterRandom.nextLong()), i, nrequesters);
      requesters.add(l);
    }
    progress.startTimer();
    // run requesters
    concurrentExec(requesters);
    long finishTime = System.currentTimeMillis();
    // Calculate duration accounting for warmup time
    long benchmarkTime = finishTime - progress.getBenchmarkStartTime();

    long requestsdone = 0;
    int abortedRequesters = 0;
    // wait for requesters
    for (LinkBenchRequest requester: requesters) {
      requestsdone += requester.getRequestsDone();
      if (requester.didAbort()) {
        abortedRequesters++;
      }
    }

    latencyStats.displayLatencyStats();

    if (csvStatsFile != null) {
      latencyStats.printCSVStats(csvStatsFile, true);
    }

    logger.info("REQUEST PHASE COMPLETED. " + requestsdone +
                 " requests done in " + (benchmarkTime/1000) + " seconds." +
                 " Requests/second = " + (1000*requestsdone)/benchmarkTime);
    if (abortedRequesters > 0) {
      logger.error(String.format("Benchmark did not complete cleanly: %d/%d " +
          "request threads aborted.  See error log entries for details.",
          abortedRequesters, nrequesters));
    }
  }

  /**
   * Start all runnables at the same time, then block until all
   * tasks are completed.  Returns the elapsed time (in milliseconds)
   * from the start of the first task to the completion of all tasks.
   */
  static long concurrentExec(final List<? extends Runnable> tasks)
      throws Throwable {
    final CountDownLatch startSignal = new CountDownLatch(tasks.size());
    final CountDownLatch doneSignal = new CountDownLatch(tasks.size());
    final AtomicLong startTime = new AtomicLong(0);
    for (final Runnable task : tasks) {
      new Thread(new Runnable() {
        @Override
        public void run() {
          /*
           * Run a task.  If an uncaught exception occurs, bail
           * out of the benchmark immediately, since any results
           * of the benchmark will no longer be valid anyway
           */
          try {
            startSignal.countDown();
            startSignal.await();
            long now = System.currentTimeMillis();
            startTime.compareAndSet(0, now);
            task.run();
          } catch (Throwable e) {
            Logger threadLog = Logger.getLogger(ConfigUtil.LINKBENCH_LOGGER);
            threadLog.error("Unrecoverable exception in worker thread:", e);
            Runtime.getRuntime().halt(1);
          }
          doneSignal.countDown();
        }
      }).start();
    }
    doneSignal.await(); // wait for all threads to finish
    long endTime = System.currentTimeMillis();
    return endTime - startTime.get();
  }
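concurrentExec() above uses two CountDownLatches: the first acts as a start barrier so no task runs before every thread is ready, and the second lets the caller block until the last task finishes. A simplified standalone sketch of the same pattern (it omits the driver's halt-on-uncaught-exception behavior; `ConcurrentExecSketch` is an illustrative name):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the concurrentExec() pattern: a start latch holds every worker
// until all threads are ready (so tasks begin together), and a done latch
// lets the caller block until the last task completes.
public class ConcurrentExecSketch {
  static long run(List<Runnable> tasks) throws InterruptedException {
    final CountDownLatch start = new CountDownLatch(tasks.size());
    final CountDownLatch done = new CountDownLatch(tasks.size());
    final AtomicLong startTime = new AtomicLong(0);
    for (final Runnable task : tasks) {
      new Thread(() -> {
        try {
          start.countDown();
          start.await();                       // barrier: all threads ready
          startTime.compareAndSet(0, System.currentTimeMillis());
          task.run();
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        } finally {
          done.countDown();
        }
      }).start();
    }
    done.await();                              // wait for every task
    return System.currentTimeMillis() - startTime.get();
  }

  // Run n counter increments concurrently and report the final count.
  static long countAfterRun(int n) {
    AtomicLong counter = new AtomicLong();
    List<Runnable> tasks = new ArrayList<>();
    for (int i = 0; i < n; i++) {
      tasks.add(counter::incrementAndGet);
    }
    try {
      run(tasks);
    } catch (InterruptedException e) {
      return -1;
    }
    return counter.get();
  }

  public static void main(String[] args) {
    System.out.println(countAfterRun(3) + " tasks ran");
  }
}
```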

  void drive() throws IOException, InterruptedException, Throwable {
    load();
    sendrequests();
  }

  public static void main(String[] args)
    throws IOException, InterruptedException, Throwable {
    processArgs(args);
    LinkBenchDriver d = new LinkBenchDriver(configFile,
                                cmdLineProps, logFile);
    try {
      d.drive();
    } catch (LinkBenchConfigError e) {
      System.err.println("Configuration error: " + e.toString());
      System.exit(EXIT_BADCONFIG);
    }
  }

  private static void printUsage(Options options) {
    //PrintWriter writer = new PrintWriter(System.err);
    HelpFormatter fmt = new HelpFormatter();
    fmt.printHelp("linkbench", options, true);
  }

  private static Options initializeOptions() {
    Options options = new Options();
    Option config = new Option("c", true, "Linkbench config file");
    config.setArgName("file");
    options.addOption(config);

    Option log = new Option("L", true, "Log to this file");
    log.setArgName("file");
    options.addOption(log);

    Option csvStats = new Option("csvstats", "csvstats", true,
                                 "CSV stats output");
    csvStats.setArgName("file");
    options.addOption(csvStats);

    Option csvStream = new Option("csvstream", "csvstream", true,
        "CSV streaming stats output");
    csvStream.setArgName("file");
    options.addOption(csvStream);

    options.addOption("l", false,
               "Execute loading stage of benchmark");
    options.addOption("r", false,
               "Execute request stage of benchmark");

    // Java-style properties to override config file
    // -Dkey=value
    Option property = new Option("D", "Override a config setting");
    property.setArgs(2);
    property.setArgName("property=value");
    property.setValueSeparator('=');
    options.addOption(property);

    return options;
  }

  /**
   * Process command line arguments and set static variables.
   * Exits the program if invalid arguments are provided.
   * @param args command line arguments
   * @throws ParseException
   */
  private static void processArgs(String[] args)
              throws ParseException {
    Options options = initializeOptions();

    CommandLine cmd = null;
    try {
      CommandLineParser parser = new GnuParser();
      cmd = parser.parse( options, args);
    } catch (ParseException ex) {
      // Use Apache CLI-provided messages
      System.err.println(ex.getMessage());
      printUsage(options);
      System.exit(EXIT_BADARGS);
    }

    /*
     * Apache CLI validates arguments, so can now assume
     * all required options are present, etc
     */
    if (cmd.getArgs().length > 0) {
      System.err.print("Invalid trailing arguments:");
      for (String arg: cmd.getArgs()) {
        System.err.print(' ');
        System.err.print(arg);
      }
      System.err.println();
      printUsage(options);
      System.exit(EXIT_BADARGS);
    }

    // Set static option variables
    doLoad = cmd.hasOption('l');
    doRequest = cmd.hasOption('r');

    logFile = cmd.getOptionValue('L'); // May be null

    configFile = cmd.getOptionValue('c');
    if (configFile == null) {
      // Try to find in usual location
      String linkBenchHome = ConfigUtil.findLinkBenchHome();
      if (linkBenchHome != null) {
        configFile = linkBenchHome + File.separator +
              "config" + File.separator + "LinkConfigMysql.properties";
      } else {
        System.err.println("Config file not specified through command "
            + "line argument and " + ConfigUtil.linkbenchHomeEnvVar
            + " environment variable not set to valid directory");
        printUsage(options);
        System.exit(EXIT_BADARGS);
      }
    }

    String csvStatsFileName = cmd.getOptionValue("csvstats"); // May be null
    if (csvStatsFileName != null) {
      try {
        csvStatsFile = new PrintStream(new FileOutputStream(csvStatsFileName));
      } catch (FileNotFoundException e) {
        System.err.println("Could not open file " + csvStatsFileName +
                           " for writing");
        printUsage(options);
        System.exit(EXIT_BADARGS);
      }
    }

    String csvStreamFileName = cmd.getOptionValue("csvstream"); // May be null
    if (csvStreamFileName != null) {
      try {
        csvStreamFile = new PrintStream(
                        new FileOutputStream(csvStreamFileName));
        // File is written to by multiple threads, first write header
        SampledStats.writeCSVHeader(csvStreamFile);
      } catch (FileNotFoundException e) {
        System.err.println("Could not open file " + csvStreamFileName +
                           " for writing");
        printUsage(options);
        System.exit(EXIT_BADARGS);
      }
    }

    cmdLineProps = cmd.getOptionProperties("D");

    if (!(doLoad || doRequest)) {
      System.err.println("Did not select benchmark mode");
      printUsage(options);
      System.exit(EXIT_BADARGS);
    }
  }
}




================================================
FILE: src/main/java/com/facebook/LinkBench/LinkBenchDriverMR.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.FileInputStream;
import java.io.IOException;
import java.lang.reflect.Constructor;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Properties;
import java.util.Random;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.CompressionType;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.InputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.log4j.Logger;

import com.facebook.LinkBench.LinkBenchLoad.LoadProgress;
import com.facebook.LinkBench.LinkBenchRequest.RequestProgress;
import com.facebook.LinkBench.stats.LatencyStats;

/**
 * LinkBenchDriverMR class.
 * First loads data using map-reduced LinkBenchLoad class.
 * Then does read and write requests of various types (addlink, deletelink,
 * updatelink, getlink, countlinks, getlinklist) using map-reduced
 * LinkBenchRequest class.
 * Config options are taken from config file passed as argument.
 */

public class LinkBenchDriverMR extends Configured implements Tool {
  public static final int LOAD = 1;
  public static final int REQUEST = 2;
  private static Path TMP_DIR = new Path("TMP_Link_Bench");
  private static boolean REPORT_PROGRESS = false;
  private static boolean USE_INPUT_FILES = false; // use generated input by default

  private static final Logger logger =
              Logger.getLogger(ConfigUtil.LINKBENCH_LOGGER);

  static enum Counters { LINK_LOADED, REQUEST_DONE }

  private static Properties props;
  private static String store;

  private static final Class<?>[] EMPTY_ARRAY = new Class[]{};

  /**
   * generate an instance of LinkStore
   * @param currentphase LOAD or REQUEST
   * @param mapperid id of the mapper 0, 1, ...
   */
  private static LinkStore initStore(Phase currentphase, int mapperid)
    throws IOException {

    LinkStore newstore = null;

    if (store == null) {
      store = ConfigUtil.getPropertyRequired(props, Config.LINKSTORE_CLASS);
      logger.info("Using store class: " + store);
    }

    // The property "store" defines the class name that will be used to
  // store data in a database. The following class names are pre-packaged
    // for easy access:
    //   LinkStoreMysql :  run benchmark on  mySQL
    //   LinkStoreHBase :  run benchmark on  HBase
    //   LinkStoreHBaseGeneralAtomicityTesting : atomicity testing on HBase.
    //   LinkStoreTaoAtomicityTesting:  atomicity testing for Facebook's HBase
    //
    Class<?> clazz = null;
    try {
      clazz = getClassByName(store);
    } catch (java.lang.ClassNotFoundException nfe) {
      throw new IOException("Could not find class for " + store);
    }
    if (clazz == null) {
      throw new IOException("Unknown data store " + store);
    }
    newstore = (LinkStore)newInstance(clazz);

    return newstore;
  }

  /**
   * InputSplit for generated inputs
   */
  private class LinkBenchInputSplit implements InputSplit {
    private int id;  // id of mapper
    private int num; // total number of mappers

    LinkBenchInputSplit() {}
    public LinkBenchInputSplit(int i, int n) {
      this.id = i;
      this.num = n;
    }
    public int getID() {return this.id;}
    public int getNum() {return this.num;}
    public long getLength() {return 1;}

    public String[] getLocations() throws IOException {
      return new String[]{};
    }

    public void readFields(DataInput in) throws IOException {
        this.id = in.readInt();
        this.num = in.readInt();
    }

    public void write(DataOutput out) throws IOException {
        out.writeInt(this.id);
        out.writeInt(this.num);
    }

  }

  /**
   * RecordReader for generated inputs
   */
  private class LinkBenchRecordReader
    implements RecordReader<IntWritable, IntWritable> {
    private int id;
    private int num;
    private boolean done;

    public LinkBenchRecordReader(LinkBenchInputSplit split) {
      this.id = split.getID();
      this.num = split.getNum();
      this.done = false;
    }

    public IntWritable createKey() {return new IntWritable();}
    public IntWritable createValue() {return new IntWritable();}
    public void close() throws IOException { }

    // one loader per split
    public float getProgress() { return 0.5f;}
    // one loader per split
    public long getPos() {return 1;}

    public boolean next(IntWritable key, IntWritable value)
      throws IOException {
      if (this.done) {
        return false;
      } else {
        key.set(this.id);
        value.set(this.num);
        this.done = true;
      }
      return true;
    }
  }

  /**
   * InputFormat for generated inputs
   */
  private class LinkBenchInputFormat
    implements InputFormat<IntWritable, IntWritable> {

    public InputSplit[] getSplits(JobConf conf, int numsplits) {
      InputSplit[] splits = new InputSplit[numsplits];
      for (int i = 0; i < numsplits; ++i) {
        splits[i] = (InputSplit) new LinkBenchInputSplit(i, numsplits);
      }
      return splits;
    }

    public RecordReader<IntWritable, IntWritable> getRecordReader(
      InputSplit split, JobConf conf, Reporter reporter) {
      return (RecordReader)(new LinkBenchRecordReader((LinkBenchInputSplit)split));
    }

    public void validateInput(JobConf conf) {}  // no need to validate
  }

  /**
   * create JobConf for map reduce job
   * @param currentphase LOAD or REQUEST
   * @param nmappers number of mappers (loader or requester)
   */
  private JobConf createJobConf(int currentphase, int nmappers) {
    final JobConf jobconf = new JobConf(getConf(), getClass());
    jobconf.setJobName("LinkBench MapReduce Driver");

    if (USE_INPUT_FILES) {
      jobconf.setInputFormat(SequenceFileInputFormat.class);
    } else {
      jobconf.setInputFormat(LinkBenchInputFormat.class);
    }
    jobconf.setOutputKeyClass(IntWritable.class);
    jobconf.setOutputValueClass(LongWritable.class);
    jobconf.setOutputFormat(SequenceFileOutputFormat.class);
    if(currentphase == LOAD) {
      jobconf.setMapperClass(LoadMapper.class);
    } else { //REQUEST
      jobconf.setMapperClass(RequestMapper.class);
    }
    jobconf.setNumMapTasks(nmappers);
    jobconf.setReducerClass(LoadRequestReducer.class);
    jobconf.setNumReduceTasks(1);

    // turn off speculative execution, because DFS doesn't handle
    // multiple writers to the same file.
    jobconf.setSpeculativeExecution(false);

    return jobconf;
  }

  /**
   * setup input files for map reduce job
   * @param jobconf configuration of the map reduce job
   * @param nmappers number of mappers (loader or requester)
   */
  private static FileSystem setupInputFiles(JobConf jobconf, int nmappers)
    throws IOException, InterruptedException {
    //setup input/output directories
    final Path indir = new Path(TMP_DIR, "in");
    final Path outdir = new Path(TMP_DIR, "out");
    FileInputFormat.setInputPaths(jobconf, indir);
    FileOutputFormat.setOutputPath(jobconf, outdir);

    final FileSystem fs = FileSystem.get(jobconf);
    if (fs.exists(TMP_DIR)) {
      throw new IOException("Tmp directory " + fs.makeQualified(TMP_DIR)
          + " already exists.  Please remove it first.");
    }
    if (!fs.mkdirs(indir)) {
      throw new IOException("Cannot create input directory " + indir);
    }

    //generate an input file for each map task
    if (USE_INPUT_FILES) {
      for(int i=0; i < nmappers; ++i) {
        final Path file = new Path(indir, "part"+i);
        final IntWritable mapperid = new IntWritable(i);
        final IntWritable nummappers = new IntWritable(nmappers);
        final SequenceFile.Writer writer = SequenceFile.createWriter(
          fs, jobconf, file,
          IntWritable.class, IntWritable.class, CompressionType.NONE);
        try {
          writer.append(mapperid, nummappers);
        } finally {
          writer.close();
        }
        logger.info("Wrote input for Map #"+i);
      }
    }
    return fs;
  }

  /**
   * read output from the map reduce job
   * @param fs the DFS FileSystem
   * @param jobconf configuration of the map reduce job
   */
  public static long readOutput(FileSystem fs, JobConf jobconf)
    throws IOException, InterruptedException {
    //read outputs
    final Path outdir = new Path(TMP_DIR, "out");
    Path infile = new Path(outdir, "reduce-out");
    IntWritable nworkers = new IntWritable();
    LongWritable result = new LongWritable();
    long output = 0;
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, infile, jobconf);
    try {
      reader.next(nworkers, result);
      output = result.get();
    } finally {
      reader.close();
    }
    return output;
  }

  /**
   * Mapper for LOAD phase
   * Load data to the store
   * Output the number of loaded links
   */
  public static class LoadMapper extends MapReduceBase
    implements Mapper<IntWritable, IntWritable, IntWritable, LongWritable> {

    public void map(IntWritable loaderid,
                    IntWritable nloaders,
                    OutputCollector<IntWritable, LongWritable> output,
                    Reporter reporter) throws IOException {
      ConfigUtil.setupLogging(props, null);
      LinkStore store = initStore(Phase.LOAD, loaderid.get());
      LatencyStats latencyStats = new LatencyStats(nloaders.get());

      long maxid1 = ConfigUtil.getLong(props, Config.MAX_ID);
      long startid1 = ConfigUtil.getLong(props, Config.MIN_ID);

      LoadProgress prog_tracker = LoadProgress.create(
            Logger.getLogger(ConfigUtil.LINKBENCH_LOGGER), props);

      LinkBenchLoad loader = new LinkBenchLoad(store, props, latencyStats,
                               null,
                               loaderid.get(), maxid1 == startid1 + 1,
                               nloaders.get(), prog_tracker, new Random());

      LinkedList<LinkBenchLoad> tasks = new LinkedList<LinkBenchLoad>();
      tasks.add(loader);
      long linksloaded = 0;
      try {
        LinkBenchDriver.concurrentExec(tasks);
        linksloaded = loader.getLinksLoaded();
      } catch (java.lang.Throwable t) {
        throw new IOException(t);
      }
      output.collect(new IntWritable(nloaders.get()),
                     new LongWritable(linksloaded));
      if (REPORT_PROGRESS) {
        reporter.incrCounter(Counters.LINK_LOADED, linksloaded);
      }
    }
  }

  /**
   * Mapper for REQUEST phase
   * Send requests
   * Output the number of finished requests
   */
  public static class RequestMapper extends MapReduceBase
    implements Mapper<IntWritable, IntWritable, IntWritable, LongWritable> {

    public void map(IntWritable requesterid,
                    IntWritable nrequesters,
                    OutputCollector<IntWritable, LongWritable> output,
                    Reporter reporter) throws IOException {
      ConfigUtil.setupLogging(props, null);
      LinkStore store = initStore(Phase.REQUEST, requesterid.get());
      LatencyStats latencyStats = new LatencyStats(nrequesters.get());
      RequestProgress progress =
                              LinkBenchRequest.createProgress(logger, props);
      progress.startTimer();
      // TODO: Don't support NodeStore yet
      final LinkBenchRequest requester =
        new LinkBenchRequest(store, null, props, latencyStats, null, progress,
                new Random(), requesterid.get(), nrequesters.get());


      // Wrap in runnable to handle error
      Thread t = new Thread(new Runnable() {
        public void run() {
          try {
            requester.run();
          } catch (Throwable t) {
            logger.error("Uncaught error in requester:", t);
          }
        }
      });
      t.start();
      long requestdone = 0;
      try {
        t.join();
        requestdone = requester.getRequestsDone();
      } catch (InterruptedException e) {
        logger.error("Interrupted while waiting for requester thread", e);
      }
      output.collect(new IntWritable(nrequesters.get()),
                     new LongWritable(requestdone));
      if (REPORT_PROGRESS) {
        reporter.incrCounter(Counters.REQUEST_DONE, requestdone);
      }
    }
  }

  /**
   * Reducer for both LOAD and REQUEST
   * Get the sum of "loaded links" or "finished requests"
   */
  public static class LoadRequestReducer extends MapReduceBase
    implements Reducer<IntWritable, LongWritable, IntWritable, LongWritable> {
    private long sum = 0;
    private int nummappers = 0;
    private JobConf conf;

    /** Store job configuration. */
    @Override
    public void configure(JobConf job) {
      conf = job;
    }

    public void reduce(IntWritable nmappers,
                       Iterator<LongWritable> values,
                       OutputCollector<IntWritable, LongWritable> output,
                       Reporter reporter) throws IOException {

      nummappers = nmappers.get();
      while(values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(new IntWritable(nmappers.get()),
                     new LongWritable(sum));
    }

    /**
     * Reduce task done, write output to a file.
     */
    @Override
    public void close() throws IOException {
      //write output to a file
      Path outDir = new Path(TMP_DIR, "out");
      Path outFile = new Path(outDir, "reduce-out");
      FileSystem fileSys = FileSystem.get(conf);
      SequenceFile.Writer writer = SequenceFile.createWriter(fileSys, conf,
          outFile, IntWritable.class, LongWritable.class,
          CompressionType.NONE);
      writer.append(new IntWritable(nummappers), new LongWritable(sum));
      writer.close();
    }
  }

  /**
   * main route of the LOAD phase
   */
  private void load() throws IOException, InterruptedException {
    boolean loaddata = (!props.containsKey(Config.LOAD_DATA)) ||
                        ConfigUtil.getBool(props, Config.LOAD_DATA);
    if (!loaddata) {
      logger.info("Skipping load data per the config");
      return;
    }

    int nloaders = ConfigUtil.getInt(props, Config.NUM_LOADERS);
    final JobConf jobconf = createJobConf(LOAD, nloaders);
    FileSystem fs = setupInputFiles(jobconf, nloaders);

    try {
      logger.info("Starting " + nloaders + " loaders");
      final long starttime = System.currentTimeMillis();
      JobClient.runJob(jobconf);
      long loadtime = (System.currentTimeMillis() - starttime);

      // compute total #links loaded
      long maxid1 = ConfigUtil.getLong(props, Config.MAX_ID);
      long startid1 = ConfigUtil.getLong(props, Config.MIN_ID);
      int nlinks_default = ConfigUtil.getInt(props, Config.NLINKS_DEFAULT);
      long expectedlinks = (1 + nlinks_default) * (maxid1 - startid1);
      long actuallinks = readOutput(fs, jobconf);

      logger.info("LOAD PHASE COMPLETED. Expected to load " +
                         expectedlinks + " links. " +
                         actuallinks + " loaded in " + (loadtime/1000) + " seconds. " +
                         "Links/second = " + ((1000*actuallinks)/loadtime));
    } finally {
      fs.delete(TMP_DIR, true);
    }
  }

  /**
   * main route of the REQUEST phase
   */
  private void sendrequests() throws IOException, InterruptedException {
    // config info for requests
    int nrequesters = ConfigUtil.getInt(props, Config.NUM_REQUESTERS);
    final JobConf jobconf = createJobConf(REQUEST, nrequesters);
    FileSystem fs = setupInputFiles(jobconf, nrequesters);

    try {
      logger.info("Starting " + nrequesters + " requesters");
      final long starttime = System.currentTimeMillis();
      JobClient.runJob(jobconf);
      long endtime = System.currentTimeMillis();

      // request time in millis
      long requesttime = (endtime - starttime);
      long requestsdone = readOutput(fs, jobconf);

      logger.info("REQUEST PHASE COMPLETED. " + requestsdone +
                         " requests done in " + (requesttime/1000) + " seconds. " +
                         "Requests/second = " + (1000*requestsdone)/requesttime);
    } finally {
      fs.delete(TMP_DIR, true);
    }
  }

  /**
   * read in configuration and invoke LOAD and REQUEST
   */
  @Override
  public int run(String[] args) throws Exception {
    if (args.length < 1) {
      System.err.println("Args : LinkBenchDriver configfile");
      ToolRunner.printGenericCommandUsage(System.err);
      return -1;
    }
    props = new Properties();
    props.load(new FileInputStream(args[0]));

    // get name of temporary directory
    String tempdirname = props.getProperty(Config.TEMPDIR);
    if (tempdirname != null) {
      TMP_DIR = new Path(tempdirname);
    }
    // whether report progress through reporter
    REPORT_PROGRESS = (!props.containsKey(Config.MAPRED_REPORT_PROGRESS)) ||
        ConfigUtil.getBool(props, Config.MAPRED_REPORT_PROGRESS);

    // whether store mapper input in files
    USE_INPUT_FILES = (!props.containsKey(Config.MAPRED_USE_INPUT_FILES)) ||
                    ConfigUtil.getBool(props, Config.MAPRED_USE_INPUT_FILES);

    load();
    sendrequests();
    return 0;
  }

  /**
   * Load a class by name.
   * @param name the class name.
   * @return the class object.
   * @throws ClassNotFoundException if the class is not found.
   */
  public static Class<?> getClassByName(String name)
    throws ClassNotFoundException {
    ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
    return Class.forName(name, true, classLoader);
  }

  /** Create an object for the given class and initialize it from conf
   *
   * @param theClass class of which an object is created
   * @param conf Configuration
   * @return a new object
   */
  public static <T> T newInstance(Class<T> theClass) {
    T result;
    try {
      Constructor<T> meth = theClass.getDeclaredConstructor(EMPTY_ARRAY);
      meth.setAccessible(true);
      result = meth.newInstance();
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
    return result;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(null, new LinkBenchDriverMR(), args));
  }
}
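The generated-input mechanism above works because each `LinkBenchInputSplit` serializes nothing but two ints: the mapper's id and the total mapper count. A standalone sketch of that `write()`/`readFields()` round trip, using only `java.io` (no Hadoop dependency; the `SplitRoundTrip` class name is hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class SplitRoundTrip {
  public static void main(String[] args) throws IOException {
    int id = 3, num = 8;

    // Serialize as LinkBenchInputSplit.write() does: two ints, nothing else
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(buf);
    out.writeInt(id);
    out.writeInt(num);

    // Deserialize as readFields() does, in the same order
    DataInputStream in =
        new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
    int readId = in.readInt();
    int readNum = in.readInt();

    System.out.println(readId + "/" + readNum); // prints "3/8"
  }
}
```

Because the split carries so little state, the driver can fabricate one split per mapper in memory (`LinkBenchInputFormat.getSplits`) instead of scanning input files, which is why `USE_INPUT_FILES` is optional.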




================================================
FILE: src/main/java/com/facebook/LinkBench/LinkBenchLoad.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

import java.io.PrintStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Random;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

import com.facebook.LinkBench.distributions.ID2Chooser;
import com.facebook.LinkBench.distributions.LogNormalDistribution;
import com.facebook.LinkBench.generators.DataGenerator;
import com.facebook.LinkBench.stats.LatencyStats;
import com.facebook.LinkBench.stats.SampledStats;
import com.facebook.LinkBench.util.ClassLoadUtil;


/*
 * Multi-threaded loader for loading graph edges (but not nodes) into
 * a LinkStore. The range from startid1 to maxid1 is chunked up into
 * equal-sized disjoint ranges, which are then enqueued to be loaded
 * in parallel by a number of loader threads. The number of links
 * generated for an id1 is based on the configured distribution. The
 * number of link types and the link payload data are also controlled
 * by the configuration file. The actual count of links generated is
 * tracked in nlinks_counts.
 */
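The chunking scheme described in the comment above can be sketched independently of LinkBench: a producer enqueues equal-sized disjoint id ranges onto a BlockingQueue, worker threads drain them, and a sentinel chunk signals shutdown, mirroring LoadChunk.SHUTDOWN. All names here (ChunkedLoadSketch, the long[] chunks) are illustrative, not part of the codebase:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class ChunkedLoadSketch {
  // Sentinel object compared by reference, like LoadChunk.SHUTDOWN
  static final long[] SHUTDOWN = new long[0];

  public static void main(String[] args) throws InterruptedException {
    long startid1 = 1, maxid1 = 101, chunkSize = 25;
    int nloaders = 2;
    BlockingQueue<long[]> q = new ArrayBlockingQueue<>(16);
    AtomicLong loaded = new AtomicLong();

    // Workers: take chunks [start, end) off the queue until shutdown
    Thread[] workers = new Thread[nloaders];
    for (int i = 0; i < nloaders; i++) {
      workers[i] = new Thread(() -> {
        while (true) {
          long[] chunk;
          try { chunk = q.take(); } catch (InterruptedException e) { return; }
          if (chunk == SHUTDOWN) return;
          loaded.addAndGet(chunk[1] - chunk[0]);  // "load" each id in range
        }
      });
      workers[i].start();
    }

    // Producer: equal-sized disjoint ranges, then one marker per worker
    for (long s = startid1; s < maxid1; s += chunkSize) {
      q.put(new long[]{s, Math.min(s + chunkSize, maxid1)});
    }
    for (int i = 0; i < nloaders; i++) q.put(SHUTDOWN);
    for (Thread t : workers) t.join();

    System.out.println(loaded.get()); // prints "100"
  }
}
```

Enqueueing one sentinel per worker guarantees each thread observes exactly one shutdown signal, which is the same reason the convenience constructor below adds `LoadChunk.SHUTDOWN` after its single data chunk.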

public class LinkBenchLoad implements Runnable {

  private final Logger logger = Logger.getLogger(ConfigUtil.LINKBENCH_LOGGER);

  private long maxid1;   // max id1 to generate
  private long startid1; // id1 at which to start
  private int  loaderID; // ID for this loader
  private LinkStore store; // store interface (several possible implementations
                           // such as MySQL, HBase, etc.)

  private final LogNormalDistribution linkDataSize;
  private final DataGenerator linkDataGen; // Generate link data
  private SampledStats stats;
  private LatencyStats latencyStats;

  Level debuglevel;
  String dbid;

  private ID2Chooser id2chooser;

  // Counters for load statistics
  long sameShuffle;
  long diffShuffle;
  long linksloaded;

  /**
   * Special case for the single-hot-row benchmark. If singleAssoc is set,
   * do not print any statistics messages; all statistics
   * are collected at a higher layer. */
  boolean singleAssoc;

  private BlockingQueue<LoadChunk> chunk_q;

  // Track last time stats were updated in ms
  private long lastDisplayTime;
  // How often stats should be reported
  private final long displayFreq_ms;

  private LoadProgress prog_tracker;

  private Properties props;

  /**
   * Convenience constructor that enqueues a single chunk covering
   * the whole id range, followed by the shutdown marker.
   * @param store
   * @param props
   * @param latencyStats
   * @param csvStreamOut
   * @param loaderID
   * @param singleAssoc
   * @param nloaders
   * @param prog_tracker
   * @param rng
   */
  public LinkBenchLoad(LinkStore store, Properties props,
      LatencyStats latencyStats, PrintStream csvStreamOut,
      int loaderID, boolean singleAssoc,
      int nloaders, LoadProgress prog_tracker, Random rng) {
    this(store, props, latencyStats, csvStreamOut, loaderID, singleAssoc,
              new ArrayBlockingQueue<LoadChunk>(2), prog_tracker);

    // Just add a single chunk to the queue
    chunk_q.add(new LoadChunk(loaderID, startid1, maxid1, rng));
    chunk_q.add(LoadChunk.SHUTDOWN);
  }


  public LinkBenchLoad(LinkStore linkStore,
                       Properties props,
                       LatencyStats latencyStats,
                       PrintStream csvStreamOut,
                       int loaderID,
                       boolean singleAssoc,
                       BlockingQueue<LoadChunk> chunk_q,
                       LoadProgress prog_tracker) throws LinkBenchConfigError {
    /*
     * Initialize fields from arguments
     */
    this.store = linkStore;
    this.props = props;
    this.latencyStats = latencyStats;
    this.loaderID = loaderID;
    this.singleAssoc = singleAssoc;
    this.chunk_q = chunk_q;
    this.prog_tracker = prog_tracker;


    /*
     * Load settings from properties
     */
    maxid1 = ConfigUtil.getLong(props, Config.MAX_ID);
    startid1 = ConfigUtil.getLong(props, Config.MIN_ID);

    // math functions may cause problems for id1 = 0. Start at 1.
    if (startid1 <= 0) {
      throw new LinkBenchConfigError("startid1 must be >= 1");
    }

    debuglevel = ConfigUtil.getDebugLevel(props);

    double medianLinkDataSize = ConfigUtil.getDouble(props,
                                              Config.LINK_DATASIZE);
    linkDataSize = new LogNormalDistribution();
    linkDataSize.init(0, LinkStore.MAX_LINK_DATA, medianLinkDataSize,
                                         Config.LINK_DATASIZE_SIGMA);

    try {
      linkDataGen = ClassLoadUtil.newInstance(
          ConfigUtil.getPropertyRequired(props, Config.LINK_ADD_DATAGEN),
          DataGenerator.class);
      linkDataGen.init(props, Config.LINK_ADD_DATAGEN_PREFIX);
    } catch (ClassNotFoundException ex) {
      logger.error(ex);
      throw new LinkBenchConfigError("Error loading data generator class: "
            + ex.getMessage());
    }

    displayFreq_ms = ConfigUtil.getLong(props, Config.DISPLAY_FREQ) * 1000;
    int maxsamples = ConfigUtil.getInt(props, Config.MAX_STAT_SAMPLES);

    dbid = ConfigUtil.getPropertyRequired(props, Config.DBID);

    /*
     * Initialize statistics
     */
    linksloaded = 0;
    sameShuffle = 0;
    diffShuffle = 0;
    stats = new SampledStats(loaderID, maxsamples, csvStreamOut);

    id2chooser = new ID2Chooser(props, startid1, maxid1, 1, 1);
  }

  public long getLinksLoaded() {
    return linksloaded;
  }

  @Override
  public void run() {
    try {
      this.store.initialize(props, Phase.LOAD, loaderID);
    } catch (Exception e) {
      logger.error("Error while initializing store", e);
      throw new RuntimeException(e);
    }

    int bulkLoadBatchSize = store.bulkLoadBatchSize();
    boolean bulkLoad = bulkLoadBatchSize > 0;
    ArrayList<Link> loadBuffer = null;
    ArrayList<LinkCount> countLoadBuffer = null;
    if (bulkLoad) {
      loadBuffer = new ArrayList<Link>(bulkLoadBatchSize);
      countLoadBuffer = new ArrayList<LinkCount>(bulkLoadBatchSize);
    }

    logger.info("Starting loader thread  #" + loaderID + " loading links");
    lastDisplayTime = System.currentTimeMillis();

    while (true) {
      LoadChunk chunk;
      try {
        chunk = chunk_q.take();
        //logger.info("chunk end="+chunk.end);
      } catch (InterruptedException ie) {
        logger.warn("InterruptedException not expected, try again", ie);
        continue;
      }

      // Shutdown signal is received through a special chunk type
      if (chunk.shutdown) {
        break;
      }

      // Load the link range specified in the chunk
      processChunk(chunk, bulkLoad, bulkLoadBatchSize,
                    loadBuffer, countLoadBuffer);
    }

    if (bulkLoad) {
      // Load any remaining links or counts
      loadLinks(loadBuffer);
      loadCounts(countLoadBuffer);
    }

    if (!singleAssoc) {
      logger.debug(" Same shuffle = " + sameShuffle +
                         " Different shuffle = " + diffShuffle);
      displayStats(lastDisplayTime, bulkLoad);
    }

    store.close();
  }


  private void displayStats(long startTime, boolean bulkLoad) {
    long endTime = System.currentTimeMillis();
    if (bulkLoad) {
      stats.displayStats(startTime, endTime,
          Arrays.asList(LinkBenchOp.LOAD_LINKS_BULK,
          LinkBenchOp.LOAD_COUNTS_BULK, LinkBenchOp.LOAD_LINKS_BULK_NLINKS,
          LinkBenchOp.LOAD_COUNTS_BULK_NLINKS));
    } else {
      stats.displayStats(startTime, endTime,
                         Arrays.asList(LinkBenchOp.LOAD_LINK));
    }
  }

  private void processChunk(LoadChunk chunk, boolean bulkLoad,
      int bulkLoadBatchSize, ArrayList<Link> loadBuffer,
      ArrayList<LinkCount> countLoadBuffer) {
    if (Level.DEBUG.isGreaterOrEqual(debuglevel)) {
      logger.debug("Loader thread  #" + loaderID + " processing "
                  + chunk.toString());
    }

    // Counter for total number of links loaded in chunk
    long links_in_chunk = 0;

    Link link = null;
    if (!bulkLoad) {
      // When bulk loading, multiple link objects must be live at once;
      // otherwise a single object can be reused across links
      link = initLink();
    }

    long prevPercentPrinted = 0;
    for (long id1 = chunk.start; id1 < chunk.end; id1 += chunk.step) {
      long added_links = createOutLinks(chunk.rng, link, loadBuffer,
          countLoadBuffer, id1, singleAssoc, bulkLoad, bulkLoadBatchSize);
      links_in_chunk += added_links;

      if (!singleAssoc) {
        long nloaded = (id1 - chunk.start) / chunk.step;
        if (bulkLoad) {
          nloaded -= loadBuffer.size();
        }
        long percent = 100 * nloaded/(chunk.size);
        if ((percent % 10 == 0) && (percent > prevPercentPrinted)) {
          logger.debug(chunk.toString() +  ": Percent done = " + percent);
          prevPercentPrinted = percent;
        }
      }

      // Check if stats should be flushed and reset
      long now = System.currentTimeMillis();
      if (lastDisplayTime + displayFreq_ms <= now) {
        displayStats(lastDisplayTime, bulkLoad);
        stats.resetSamples();
        lastDisplayTime = now;
      }
    }

    // Update progress and maybe print message
    prog_tracker.update(chunk.size, links_in_chunk);
  }

  /**
   * Create the out links for a given id1
   * @param link
   * @param loadBuffer
   * @param id1
   * @param singleAssoc
   * @param bulkLoad
   * @param bulkLoadBatchSize
   * @return total number of links added
   */
  private long createOutLinks(Random rng,
      Link link, ArrayList<Link> loadBuffer,
      ArrayList<LinkCount> countLoadBuffer,
      long id1, boolean singleAssoc, boolean bulkLoad,
      int bulkLoadBatchSize) {
    Map<Long, LinkCount> linkTypeCounts = null;
    if (bulkLoad) {
      linkTypeCounts = new HashMap<Long, LinkCount>();
    }
    long nlinks_total = 0;

    for (long link_type: id2chooser.getLinkTypes()) {
      long nlinks = id2chooser.calcLinkCount(id1, link_type);
      nlinks_total += nlinks;
      if (id2chooser.sameShuffle) {
        sameShuffle++;
      } else {
        diffShuffle++;
      }

      if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
        logger.trace("id1 = " + id1 + " link_type = " + link_type +
                           " nlinks = " + nlinks);
      }
      for (long j = 0; j < nlinks; j++) {
        if (bulkLoad) {
          // Can't reuse link object
          link = initLink();
        }
        constructLink(rng, link, id1, link_type, j, singleAssoc);

        if (bulkLoad) {
          loadBuffer.add(link);
          if (loadBuffer.size() >= bulkLoadBatchSize) {
            loadLinks(loadBuffer);
          }

          // Update link counts for this type
          LinkCount count = linkTypeCounts.get(link.link_type);
          if (count == null) {
            count = new LinkCount(id1, link.link_type,
                                  link.time, link.version, 1);
            linkTypeCounts.put(link.link_type, count);
          } else {
            count.count++;
            count.time = Math.max(count.time, link.time);
            count.version = link.version;
          }
        } else {
          loadLink(link, j, nlinks, singleAssoc);
        }
      }

    }

    // Maintain the counts separately
    if (bulkLoad) {
      for (LinkCount count: linkTypeCounts.values()) {
        countLoadBuffer.add(count);
        if (countLoadBuffer.size() >= bulkLoadBatchSize) {
          loadCounts(countLoadBuffer);
        }
      }
    }
    return nlinks_total;
  }

  private Link initLink() {
    Link link = new Link();
    link.link_type = LinkStore.DEFAULT_LINK_TYPE;
    link.visibility = LinkStore.VISIBILITY_DEFAULT;
    link.version = 0;
    link.data = new byte[0];
    link.time = System.currentTimeMillis();
    return link;
  }

  /**
   * Helper method to fill in link data
   * @param link this link is filled in.  Should have been initialized with
   *            initLink() earlier
   * @param outlink_ix the number of this link out of all outlinks from
   *                    id1
   * @param singleAssoc whether we are in singleAssoc mode
   */
  private void constructLink(Random rng, Link link, long id1,
      long link_type, long outlink_ix, boolean singleAssoc) {
    link.id1 = id1;
    link.link_type = link_type;

    // Using random number generator for id2 means we won't know
    // which id2s exist. So link id1 to
    // maxid1 + id1 + 1 thru maxid1 + id1 + nlinks(id1) UNLESS
    // config randomid2max is nonzero.
    if (singleAssoc) {
      link.id2 = 45; // some constant
    } else {
      link.id2 = id2chooser.chooseForLoad(rng, id1, link_type, outlink_ix);
      int datasize = (int)linkDataSize.choose(rng);
      link.data = linkDataGen.fill(rng, new byte[datasize]);
    }

    if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
      logger.trace("id2 chosen is " + link.id2);
    }

    // Randomize time so that id2 and timestamp aren't closely correlated
    link.time = chooseInitialTimestamp(rng);
  }


  private long chooseInitialTimestamp(Random rng) {
    // Choose something from now back to about 50 days
    return (System.currentTimeMillis() - Integer.MAX_VALUE - 1L)
                                        + rng.nextInt();
  }

  /**
   * Load an individual link into the db.
   *
   * If an error occurs during loading, this method will log it,
   *  add stats, and reset the connection.
   * @param link
   * @param outlink_ix
   * @param nlinks
   * @param singleAssoc
   */
  private void loadLink(Link link, long outlink_ix, long nlinks,
      boolean singleAssoc) {
    long timestart = 0;
    if (!singleAssoc) {
      timestart = System.nanoTime();
    }

    try {
      // no inverses for now
      store.addLink(dbid, link, true);
      linksloaded++;

      if (!singleAssoc && outlink_ix == nlinks - 1) {
        long timetaken = (System.nanoTime() - timestart);

        // convert to microseconds
        stats.addStats(LinkBenchOp.LOAD_LINK, timetaken/1000, false);

        latencyStats.recordLatency(loaderID,
                      LinkBenchOp.LOAD_LINK, timetaken/1000);
      }

    } catch (Throwable e){//Catch exception if any
        long endtime2 = System.nanoTime();
        long timetaken2 = (endtime2 - timestart)/1000;
        logger.error("Error: " + e.getMessage(), e);
        stats.addStats(LinkBenchOp.LOAD_LINK, timetaken2, true);
        store.clearErrors(loaderID);
    }
  }

  private void loadLinks(ArrayList<Link> loadBuffer) {
    long timestart = System.nanoTime();
    try {
      // no inverses for now
      int nlinks = loadBuffer.size();
      store.addBulkLinks(dbid, loadBuffer, true);
      linksloaded += nlinks;
      loadBuffer.clear();

      long timetaken = (System.nanoTime() - timestart);

      // convert to microseconds
      stats.addStats(LinkBenchOp.LOAD_LINKS_BULK, timetaken/1000, false);
      stats.addStats(LinkBenchOp.LOAD_LINKS_BULK_NLINKS, nlinks, false);

      latencyStats.recordLatency(loaderID, LinkBenchOp.LOAD_LINKS_BULK,
                                                             timetaken/1000);
    } catch (Throwable e){//Catch exception if any
        long endtime2 = System.nanoTime();
        long timetaken2 = (endtime2 - timestart)/1000;
        logger.error("Error: " + e.getMessage(), e);
        stats.addStats(LinkBenchOp.LOAD_LINKS_BULK, timetaken2, true);
        store.clearErrors(loaderID);
    }
  }

  private void loadCounts(ArrayList<LinkCount> loadBuffer) {
    long timestart = System.nanoTime();

    try {
      // no inverses for now
      int ncounts = loadBuffer.size();
      store.addBulkCounts(dbid, loadBuffer);
      loadBuffer.clear();

      long timetaken = (System.nanoTime() - timestart);

      // convert to microseconds
      stats.addStats(LinkBenchOp.LOAD_COUNTS_BULK, timetaken/1000, false);
      stats.addStats(LinkBenchOp.LOAD_COUNTS_BULK_NLINKS, ncounts, false);

      latencyStats.recordLatency(loaderID, LinkBenchOp.LOAD_COUNTS_BULK,
                                                             timetaken/1000);
    } catch (Throwable e){//Catch exception if any
        long endtime2 = System.nanoTime();
        long timetaken2 = (endtime2 - timestart)/1000;
        logger.error("Error: " + e.getMessage(), e);
        stats.addStats(LinkBenchOp.LOAD_COUNTS_BULK, timetaken2, true);
        store.clearErrors(loaderID);
    }
  }

  /**
   * Represents a portion of the id space, starting at
   * start and going up to end (non-inclusive) with step
   * size step.
   */
  public static class LoadChunk {
    public static final LoadChunk SHUTDOWN = new LoadChunk(true,
                                              0, 0, 0, 1, null);

    public LoadChunk(long id, long start, long end, Random rng) {
      this(false, id, start, end, 1, rng);
    }
    public LoadChunk(boolean shutdown,
                      long id, long start, long end, long step, Random rng) {
      super();
      this.shutdown = shutdown;
      this.id = id;
      this.start = start;
      this.end = end;
      this.step = step;
      this.size = (end - start) / step;
      this.rng = rng;
    }
    public final boolean shutdown;
    public final long id;
    public final long start;
    public final long end;
    public final long step;
    public final long size;
    public Random rng;

    public String toString() {
      if (shutdown) {
        return "chunk SHUTDOWN";
      }
      String range;
      if (step == 1) {
        range = "[" + start + ":" + end + "]";
      } else {
        range = "[" + start + ":" + step + ":" + end + "]";
      }
      return "chunk " + id + range;
    }
  }
  public static class LoadProgress {
    /** report progress at intervals of progressReportInterval links */
    private final long progressReportInterval;

    public LoadProgress(Logger progressLogger,
                        long id1s_total, long progressReportInterval) {
      super();
      this.progressReportInterval = progressReportInterval;
      this.progressLogger = progressLogger;
      this.id1s_total = id1s_total;
      this.starttime_ms = 0;
      this.id1s_loaded = new AtomicLong();
      this.links_loaded = new AtomicLong();
    }

    public static LoadProgress create(Logger progressLogger, Properties props) {
      long maxid1 = ConfigUtil.getLong(props, Config.MAX_ID);
      long startid1 = ConfigUtil.getLong(props, Config.MIN_ID);
      long nids = maxid1 - startid1;
      long progressReportInterval = ConfigUtil.getLong(props,
                           Config.LOAD_PROG_INTERVAL, 50000L);
      return new LoadProgress(progressLogger, nids, progressReportInterval);
    }

    private final Logger progressLogger;
    private final AtomicLong id1s_loaded; // progress
    private final AtomicLong links_loaded; // progress
    private final long id1s_total; // goal
    private long starttime_ms;

    /** Mark current time as start time for load */
    public void startTimer() {
      starttime_ms = System.currentTimeMillis();
    }

    /**
     * Update progress
     * @param id1_incr number of additional id1s loaded since last call
     * @param links_incr number of links loaded since last call
     */
    public void update(long id1_incr, long links_incr) {
      long curr_id1s = id1s_loaded.addAndGet(id1_incr);

      long curr_links = links_loaded.addAndGet(links_incr);
      long prev_links = curr_links - links_incr;

      if ((curr_links / progressReportInterval) >
          (prev_links / progressReportInterval) || curr_id1s == id1s_total) {
        double percentage = (curr_id1s / (double)id1s_total) * 100.0;

        // Links per second loaded
        long now = System.currentTimeMillis();
        double link_rate = ((curr_links) / ((double) now - starttime_ms))*1000;
        double id1_rate = ((curr_id1s) / ((double) now - starttime_ms))*1000;
        progressLogger.info(String.format(
            "%d/%d id1s loaded (%.1f%% complete) at %.2f id1s/sec avg. " +
            "%d links loaded at %.2f links/sec avg.",
            curr_id1s, id1s_total, percentage, id1_rate,
            curr_links, link_rate));
      }
    }
  }
}
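The strided id-range arithmetic in LoadChunk above can be exercised in isolation. The class below is a hypothetical, self-contained sketch (the names `LoadChunkSketch` and `idsInChunk` are illustrative, not part of LinkBench) that enumerates the ids a chunk would visit. Note that the in-tree `size` field uses truncating integer division, `(end - start) / step`, which can undercount by one when the range length is not a multiple of the step:

```java
// Sketch (not LinkBench code) of LoadChunk-style strided id ranges:
// ids in [start, end) visited with a fixed step.
public class LoadChunkSketch {
  // Return every id a chunk with these parameters would visit.
  static long[] idsInChunk(long start, long end, long step) {
    // Ceiling division gives the exact count of ids visited
    int n = (int) ((end - start + step - 1) / step);
    long[] ids = new long[n];
    for (int i = 0; i < n; i++) {
      ids[i] = start + i * step;
    }
    return ids;
  }

  public static void main(String[] args) {
    long[] ids = idsInChunk(10, 20, 3); // visits 10, 13, 16, 19
    // The in-tree size formula truncates: (20 - 10) / 3 == 3,
    // one less than the four ids actually visited here.
    System.out.println(java.util.Arrays.toString(ids));
  }
}
```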


================================================
FILE: src/main/java/com/facebook/LinkBench/LinkBenchOp.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

// Various operation types for which we want to gather stats
public enum LinkBenchOp {
  ADD_NODE,
  UPDATE_NODE,
  DELETE_NODE,
  GET_NODE,
  ADD_LINK,
  DELETE_LINK,
  UPDATE_LINK,
  COUNT_LINK,
  MULTIGET_LINK,
  GET_LINKS_LIST,
  LOAD_NODE_BULK,
  LOAD_LINK,
  LOAD_LINKS_BULK,
  LOAD_COUNTS_BULK,
  // Although the following are not truly operations, we need stats
  // for them
  RANGE_SIZE,    // how big range scans are
  LOAD_LINKS_BULK_NLINKS, // how many links inserted in bulk
  LOAD_COUNTS_BULK_NLINKS, // how many counts inserted in bulk
  UNKNOWN;

  public String displayName() {
    return name();
  }
}
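LinkBenchRequest (the next file) maps a uniform draw in [0, 100) onto these operation types via cumulative percentages (`pc_addlink`, `pc_deletelink`, and so on). A minimal, hypothetical sketch of that dispatch follows; the class, method names, and the percentages themselves are made up for illustration:

```java
// Sketch (not LinkBench code) of the cumulative-percentage dispatch used by
// LinkBenchRequest.oneRequest(): each PC_* boundary is the running sum of the
// per-op percentages, and a uniform draw r in [0, 100) selects the first
// bucket with r <= boundary.
public class OpPickerSketch {
  static final double PC_ADD_LINK = 12.0;                        // 12% adds
  static final double PC_DELETE_LINK = PC_ADD_LINK + 30.0;       // 30% deletes
  static final double PC_GET_LINKS_LIST = PC_DELETE_LINK + 58.0; // 58% range reads

  static String pick(double r) {
    if (r <= PC_ADD_LINK) return "ADD_LINK";
    if (r <= PC_DELETE_LINK) return "DELETE_LINK";
    if (r <= PC_GET_LINKS_LIST) return "GET_LINKS_LIST";
    return "UNKNOWN"; // reachable only if the percentages sum below 100
  }

  public static void main(String[] args) {
    System.out.println(pick(5.0));   // ADD_LINK
    System.out.println(pick(30.0));  // DELETE_LINK
    System.out.println(pick(99.9));  // GET_LINKS_LIST
  }
}
```

The real code guards against misconfiguration by checking that the final cumulative value is 100 to within 1e-5; the `UNKNOWN` branch here corresponds to its "No-op in requester" error path.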


================================================
FILE: src/main/java/com/facebook/LinkBench/LinkBenchRequest.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

import java.io.PrintStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Properties;
import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

import com.facebook.LinkBench.RealDistribution.DistributionType;
import com.facebook.LinkBench.distributions.AccessDistributions;
import com.facebook.LinkBench.distributions.AccessDistributions.AccessDistribution;
import com.facebook.LinkBench.distributions.ID2Chooser;
import com.facebook.LinkBench.distributions.LogNormalDistribution;
import com.facebook.LinkBench.distributions.ProbabilityDistribution;
import com.facebook.LinkBench.generators.DataGenerator;
import com.facebook.LinkBench.stats.LatencyStats;
import com.facebook.LinkBench.stats.SampledStats;
import com.facebook.LinkBench.util.ClassLoadUtil;


public class LinkBenchRequest implements Runnable {
  private final Logger logger = Logger.getLogger(ConfigUtil.LINKBENCH_LOGGER);
  Properties props;
  LinkStore linkStore;
  NodeStore nodeStore;

  RequestProgress progressTracker;

  long numRequests;
  /** Requests per second: <= 0 for unlimited rate */
  private long requestrate;

  /** Maximum number of failed requests: < 0 for unlimited */
  private long maxFailedRequests;

  /**
   * Time to run benchmark for before collecting stats. Allows
   * caches, etc to warm up.
   */
  private long warmupTime;

  /** Maximum time to run benchmark for, not including warmup time */
  long maxTime;
  int nrequesters;
  int requesterID;
  long maxid1;
  long startid1;
  Level debuglevel;
  long displayFreq_ms;
  long progressFreq_ms;
  String dbid;
  boolean singleAssoc = false;

  // Control data generation settings
  private LogNormalDistribution linkDataSize;
  private DataGenerator linkAddDataGen;
  private DataGenerator linkUpDataGen;
  private LogNormalDistribution nodeDataSize;
  private DataGenerator nodeAddDataGen;
  private DataGenerator nodeUpDataGen;

  // cumulative percentages: each pc_* is the running sum of per-op probabilities
  double pc_addlink;
  double pc_deletelink;
  double pc_updatelink;
  double pc_countlink;
  double pc_getlink;
  double pc_getlinklist;
  double pc_addnode;
  double pc_deletenode;
  double pc_updatenode;
  double pc_getnode;

  // Chance of doing historical range query
  double p_historical_getlinklist;

  private static class HistoryKey {
    public final long id1;
    public final long link_type;
    public HistoryKey(long id1, long link_type) {
      super();
      this.id1 = id1;
      this.link_type = link_type;
    }

    public HistoryKey(Link l) {
      this(l.id1, l.link_type);
    }

    @Override
    public int hashCode() {
      final int prime = 31;
      int result = 1;
      result = prime * result + (int) (id1 ^ (id1 >>> 32));
      result = prime * result + (int) (link_type ^ (link_type >>> 32));
      return result;
    }

    @Override
    public boolean equals(Object obj) {
      if (!(obj instanceof HistoryKey))
        return false;
      HistoryKey other = (HistoryKey) obj;
      return id1 == other.id1 && link_type == other.link_type;
    }

  }

  // Cache of last link in lists where full list wasn't retrieved
  ArrayList<Link> listTailHistory;

  // Index of history to avoid duplicates
  HashMap<HistoryKey, Integer> listTailHistoryIndex;

  // Limit of cache size
  private int listTailHistoryLimit;

  // Probability distribution for ids in multiget
  ProbabilityDistribution multigetDist;

  // Statistics
  SampledStats stats;
  LatencyStats latencyStats;

  // Other informational counters
  long numfound = 0;
  long numnotfound = 0;
  long numHistoryQueries = 0;

  /**
   * Random number generator used for generating the workload.  If
   * initialized with the same seed, it should generate the same sequence
   * of requests, so that tests and benchmarks are repeatable.
   */
  Random rng;

  // Last node id accessed
  long lastNodeId;

  long requestsDone = 0;
  long errors = 0;
  boolean aborted;

  // Access distributions
  private AccessDistribution writeDist; // link writes
  private AccessDistribution writeDistUncorr; // to blend with link writes
  private double writeDistUncorrBlend; // Fraction of writes using writeDistUncorr
  private AccessDistribution readDist; // link reads
  private AccessDistribution readDistUncorr; // to blend with link reads
  private double readDistUncorrBlend; // Fraction of reads using readDistUncorr
  private AccessDistribution nodeReadDist; // node reads
  private AccessDistribution nodeUpdateDist; // node writes
  private AccessDistribution nodeDeleteDist; // node deletes

  private ID2Chooser id2chooser;
  public LinkBenchRequest(LinkStore linkStore,
                          NodeStore nodeStore,
                          Properties props,
                          LatencyStats latencyStats,
                          PrintStream csvStreamOut,
                          RequestProgress progressTracker,
                          Random rng,
                          int requesterID,
                          int nrequesters) {
    assert(linkStore != null);
    if (requesterID < 0 ||  requesterID >= nrequesters) {
      throw new IllegalArgumentException("Bad requester id "
          + requesterID + "/" + nrequesters);
    }

    this.linkStore = linkStore;
    this.nodeStore = nodeStore;
    this.props = props;
    this.latencyStats = latencyStats;
    this.progressTracker = progressTracker;
    this.rng = rng;
    this.nrequesters = nrequesters;
    this.requesterID = requesterID;

    debuglevel = ConfigUtil.getDebugLevel(props);
    dbid = ConfigUtil.getPropertyRequired(props, Config.DBID);
    numRequests = ConfigUtil.getLong(props, Config.NUM_REQUESTS);
    requestrate = ConfigUtil.getLong(props, Config.REQUEST_RATE, 0L);
    maxFailedRequests = ConfigUtil.getLong(props,  Config.MAX_FAILED_REQUESTS, 0L);
    warmupTime = Math.max(0, ConfigUtil.getLong(props, Config.WARMUP_TIME, 0L));
    maxTime = ConfigUtil.getLong(props, Config.MAX_TIME);
    maxid1 = ConfigUtil.getLong(props, Config.MAX_ID);
    startid1 = ConfigUtil.getLong(props, Config.MIN_ID);

    // math functions may cause problems for id1 < 1
    if (startid1 <= 0) {
      throw new LinkBenchConfigError("startid1 must be >= 1");
    }
    if (maxid1 <= startid1) {
      throw new LinkBenchConfigError("maxid1 must be > startid1");
    }

    // is this a single assoc test?
    if (startid1 + 1 == maxid1) {
      singleAssoc = true;
      logger.info("Testing single row assoc read.");
    }

    initRequestProbabilities(props);
    initLinkDataGeneration(props);
    initLinkRequestDistributions(props, requesterID, nrequesters);
    // pc_* values are cumulative, so pc_getnode > pc_getlinklist means
    // some node operation has non-zero probability
    if (pc_getnode > pc_getlinklist) {
      if (nodeStore == null) {
        throw new IllegalArgumentException("nodeStore not provided but non-zero " +
                                         "probability of node operation");
      }
      initNodeDataGeneration(props);
      initNodeRequestDistributions(props);
    }

    displayFreq_ms = ConfigUtil.getLong(props, Config.DISPLAY_FREQ, 60L) * 1000;
    progressFreq_ms = ConfigUtil.getLong(props, Config.PROGRESS_FREQ, 6L) * 1000;
    int maxsamples = ConfigUtil.getInt(props, Config.MAX_STAT_SAMPLES);
    stats = new SampledStats(requesterID, maxsamples, csvStreamOut);

    listTailHistoryLimit = 2048; // Hardcoded limit for now
    listTailHistory = new ArrayList<Link>(listTailHistoryLimit);
    listTailHistoryIndex = new HashMap<HistoryKey, Integer>();
    p_historical_getlinklist = ConfigUtil.getDouble(props,
                        Config.PR_GETLINKLIST_HISTORY, 0.0) / 100;

    lastNodeId = startid1;
  }

  private void initRequestProbabilities(Properties props) {
    pc_addlink = ConfigUtil.getDouble(props, Config.PR_ADD_LINK);
    pc_deletelink = pc_addlink + ConfigUtil.getDouble(props, Config.PR_DELETE_LINK);
    pc_updatelink = pc_deletelink + ConfigUtil.getDouble(props, Config.PR_UPDATE_LINK);
    pc_countlink = pc_updatelink + ConfigUtil.getDouble(props, Config.PR_COUNT_LINKS);
    pc_getlink = pc_countlink + ConfigUtil.getDouble(props, Config.PR_GET_LINK);
    pc_getlinklist = pc_getlink + ConfigUtil.getDouble(props, Config.PR_GET_LINK_LIST);

    pc_addnode = pc_getlinklist + ConfigUtil.getDouble(props, Config.PR_ADD_NODE, 0.0);
    pc_updatenode = pc_addnode + ConfigUtil.getDouble(props, Config.PR_UPDATE_NODE, 0.0);
    pc_deletenode = pc_updatenode + ConfigUtil.getDouble(props, Config.PR_DELETE_NODE, 0.0);
    pc_getnode = pc_deletenode + ConfigUtil.getDouble(props, Config.PR_GET_NODE, 0.0);

    if (Math.abs(pc_getnode - 100.0) > 1e-5) { // floating-point comparison with tolerance
      throw new LinkBenchConfigError("Percentages of request types do not " +
                  "add up to 100, only " + pc_getnode + "!");
    }
  }

  private void initLinkRequestDistributions(Properties props, int requesterID,
      int nrequesters) {
    writeDist = AccessDistributions.loadAccessDistribution(props,
            startid1, maxid1, DistributionType.LINK_WRITES);
    readDist = AccessDistributions.loadAccessDistribution(props,
        startid1, maxid1, DistributionType.LINK_READS);

    // Load uncorrelated distributions for blending if needed
    writeDistUncorr = null;
    if (props.containsKey(Config.WRITE_UNCORR_BLEND)) {
      // Ratio of queries to use uncorrelated.  Convert from percentage
      writeDistUncorrBlend = ConfigUtil.getDouble(props,
                Config.WRITE_UNCORR_BLEND) / 100.0;
      if (writeDistUncorrBlend > 0.0) {
        writeDistUncorr = AccessDistributions.loadAccessDistribution(props,
            startid1, maxid1, DistributionType.LINK_WRITES_UNCORR);
      }
    }

    readDistUncorr = null;
    if (props.containsKey(Config.READ_UNCORR_BLEND)) {
      // Ratio of queries to use uncorrelated.  Convert from percentage
      readDistUncorrBlend = ConfigUtil.getDouble(props,
                Config.READ_UNCORR_BLEND) / 100.0;
      if (readDistUncorrBlend > 0.0) {
        readDistUncorr = AccessDistributions.loadAccessDistribution(props,
            startid1, maxid1, DistributionType.LINK_READS_UNCORR);
      }
    }

    id2chooser = new ID2Chooser(props, startid1, maxid1,
                                nrequesters, requesterID);

    // Distribution of #id2s per multiget
    String multigetDistClass = props.getProperty(Config.LINK_MULTIGET_DIST);
    if (multigetDistClass != null && multigetDistClass.trim().length() != 0) {
      int multigetMin = ConfigUtil.getInt(props, Config.LINK_MULTIGET_DIST_MIN);
      int multigetMax = ConfigUtil.getInt(props, Config.LINK_MULTIGET_DIST_MAX);
      try {
        multigetDist = ClassLoadUtil.newInstance(multigetDistClass,
                                            ProbabilityDistribution.class);
        multigetDist.init(multigetMin, multigetMax, props,
                                             Config.LINK_MULTIGET_DIST_PREFIX);
      } catch (ClassNotFoundException e) {
        logger.error(e);
        throw new LinkBenchConfigError("Class " + multigetDistClass +
            " could not be loaded as ProbabilityDistribution");
      }
    } else {
      multigetDist = null;
    }
  }

  private void initLinkDataGeneration(Properties props) {
    try {
      double medLinkDataSize = ConfigUtil.getDouble(props,
                                            Config.LINK_DATASIZE);
      linkDataSize = new LogNormalDistribution();
      linkDataSize.init(0, LinkStore.MAX_LINK_DATA, medLinkDataSize,
                           Config.LINK_DATASIZE_SIGMA);
      linkAddDataGen = ClassLoadUtil.newInstance(
          ConfigUtil.getPropertyRequired(props, Config.LINK_ADD_DATAGEN),
          DataGenerator.class);
      linkAddDataGen.init(props, Config.LINK_ADD_DATAGEN_PREFIX);

      linkUpDataGen = ClassLoadUtil.newInstance(
          ConfigUtil.getPropertyRequired(props, Config.LINK_UP_DATAGEN),
          DataGenerator.class);
      linkUpDataGen.init(props, Config.LINK_UP_DATAGEN_PREFIX);
    } catch (ClassNotFoundException ex) {
      logger.error(ex);
      throw new LinkBenchConfigError("Error loading data generator class: "
            + ex.getMessage());
    }
  }

  private void initNodeRequestDistributions(Properties props) {
    try {
      nodeReadDist  = AccessDistributions.loadAccessDistribution(props,
        startid1, maxid1, DistributionType.NODE_READS);
    } catch (LinkBenchConfigError e) {
      // Not defined
      logger.info("Node access distribution not configured: " +
          e.getMessage());
      throw new LinkBenchConfigError("Node read distribution not " +
            "configured but node read operations have non-zero probability");
    }

    try {
      nodeUpdateDist  = AccessDistributions.loadAccessDistribution(props,
        startid1, maxid1, DistributionType.NODE_UPDATES);
    } catch (LinkBenchConfigError e) {
      // Not defined
      logger.info("Node access distribution not configured: " +
              e.getMessage());
      throw new LinkBenchConfigError("Node write distribution not " +
            "configured but node write operations have non-zero probability");
    }

    try {
      nodeDeleteDist = AccessDistributions.loadAccessDistribution(props,
        startid1, maxid1, DistributionType.NODE_DELETES);
    } catch (LinkBenchConfigError e) {
      // Not defined
      logger.info("Node delete distribution not configured: " +
              e.getMessage());
      throw new LinkBenchConfigError("Node delete distribution not " +
            "configured but node delete operations have non-zero probability");
    }
  }

  private void initNodeDataGeneration(Properties props) {
    try {
      double medNodeDataSize = ConfigUtil.getDouble(props,
                                              Config.NODE_DATASIZE);
      nodeDataSize = new LogNormalDistribution();
      nodeDataSize.init(0, NodeStore.MAX_NODE_DATA, medNodeDataSize,
                        Config.NODE_DATASIZE_SIGMA);

      String dataGenClass = ConfigUtil.getPropertyRequired(props,
                                         Config.NODE_ADD_DATAGEN);
      nodeAddDataGen = ClassLoadUtil.newInstance(dataGenClass,
                                                 DataGenerator.class);
      nodeAddDataGen.init(props, Config.NODE_ADD_DATAGEN_PREFIX);

      dataGenClass = ConfigUtil.getPropertyRequired(props,
                        Config.NODE_UP_DATAGEN);
      nodeUpDataGen = ClassLoadUtil.newInstance(dataGenClass,
                                                 DataGenerator.class);
      nodeUpDataGen.init(props, Config.NODE_UP_DATAGEN_PREFIX);
    } catch (ClassNotFoundException ex) {
      logger.error(ex);
      throw new LinkBenchConfigError("Error loading data generator class: "
            + ex.getMessage());
    }
  }

  public long getRequestsDone() {
    return requestsDone;
  }

  public boolean didAbort() {
    return aborted;
  }

  // gets id1 for the request based on desired distribution
  private long chooseRequestID(DistributionType type, long previousId1) {
    AccessDistribution dist;
    switch (type) {
    case LINK_READS:
      // Blend between distributions if needed
      if (readDistUncorr == null || rng.nextDouble() >= readDistUncorrBlend) {
        dist = readDist;
      } else {
        dist = readDistUncorr;
      }
      break;
    case LINK_WRITES:
      // Blend between distributions if needed
      if (writeDistUncorr == null || rng.nextDouble() >= writeDistUncorrBlend) {
        dist = writeDist;
      } else {
        dist = writeDistUncorr;
      }
      break;
    case LINK_WRITES_UNCORR:
      dist = writeDistUncorr;
      break;
    case NODE_READS:
      dist = nodeReadDist;
      break;
    case NODE_UPDATES:
      dist = nodeUpdateDist;
      break;
    case NODE_DELETES:
      dist = nodeDeleteDist;
      break;
    default:
      throw new RuntimeException("Unknown value for type: " + type);
    }
    long newid1 = dist.nextID(rng, previousId1);
    // Distribution responsible for generating number in range
    assert((newid1 >= startid1) && (newid1 < maxid1));
    if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
      logger.trace("id1 generated = " + newid1 +
         " for access distribution: " + dist.getClass().getName() + ": " +
         dist.toString());
    }

    if (dist.getShuffler() != null) {
      // Shuffle to go from position in space ranked from most to least accessed,
      // to the real id space
      newid1 = startid1 + dist.getShuffler().permute(newid1 - startid1);
    }
    return newid1;
  }

  /**
   * Randomly choose a single request and execute it, updating statistics
   * @param recordStats If true, record latency and other stats.
   * @return true if successful, false on error
   */
  private boolean oneRequest(boolean recordStats) {

    double r = rng.nextDouble() * 100.0;

    long starttime = 0;
    long endtime = 0;

    LinkBenchOp type = LinkBenchOp.UNKNOWN; // initialize to invalid value
    Link link = new Link();
    try {

      if (r <= pc_addlink) {
        // generate add request
        type = LinkBenchOp.ADD_LINK;
        link.id1 = chooseRequestID(DistributionType.LINK_WRITES, link.id1);
        link.link_type = id2chooser.chooseRandomLinkType(rng);
        link.id2 = id2chooser.chooseForOp(rng, link.id1, link.link_type,
                                                ID2Chooser.P_ADD_EXIST);
        link.visibility = LinkStore.VISIBILITY_DEFAULT;
        link.version = 0;
        link.time = System.currentTimeMillis();
        link.data = linkAddDataGen.fill(rng,
                                      new byte[(int)linkDataSize.choose(rng)]);

        starttime = System.nanoTime();
        // no inverses for now
        boolean alreadyExists = linkStore.addLink(dbid, link, true);
        boolean added = !alreadyExists;
        endtime = System.nanoTime();
        if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
          logger.trace("addLink id1=" + link.id1 + " link_type="
                    + link.link_type + " id2=" + link.id2 + " added=" + added);
        }
      } else if (r <= pc_deletelink) {
        type = LinkBenchOp.DELETE_LINK;
        long id1 = chooseRequestID(DistributionType.LINK_WRITES, link.id1);
        long link_type = id2chooser.chooseRandomLinkType(rng);
        long id2 = id2chooser.chooseForOp(rng, id1, link_type,
                                          ID2Chooser.P_DELETE_EXIST);
        starttime = System.nanoTime();
        linkStore.deleteLink(dbid, id1, link_type, id2, true, // no inverse
            false);
        endtime = System.nanoTime();
        if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
          logger.trace("deleteLink id1=" + id1 + " link_type=" + link_type
                     + " id2=" + id2);
        }
      } else if (r <= pc_updatelink) {
        type = LinkBenchOp.UPDATE_LINK;
        link.id1 = chooseRequestID(DistributionType.LINK_WRITES, link.id1);
        link.link_type = id2chooser.chooseRandomLinkType(rng);
        // Update one of the existing links
        link.id2 = id2chooser.chooseForOp(rng, link.id1, link.link_type,
                                              ID2Chooser.P_UPDATE_EXIST);
        link.visibility = LinkStore.VISIBILITY_DEFAULT;
        link.version = 0;
        link.time = System.currentTimeMillis();
        link.data = linkUpDataGen.fill(rng,
                            new byte[(int)linkDataSize.choose(rng)]);

        starttime = System.nanoTime();
        // no inverses for now
        boolean found = linkStore.addLink(dbid, link, true);
        endtime = System.nanoTime();
        if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
          logger.trace("updateLink id1=" + link.id1 + " link_type="
                + link.link_type + " id2=" + link.id2 + " found=" + found);
        }
      } else if (r <= pc_countlink) {

        type = LinkBenchOp.COUNT_LINK;

        long id1 = chooseRequestID(DistributionType.LINK_READS, link.id1);
        long link_type = id2chooser.chooseRandomLinkType(rng);
        starttime = System.nanoTime();
        long count = linkStore.countLinks(dbid, id1, link_type);
        endtime = System.nanoTime();
        if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
          logger.trace("countLink id1=" + id1 + " link_type=" + link_type
                     + " count=" + count);
        }
      } else if (r <= pc_getlink) {

        type = LinkBenchOp.MULTIGET_LINK;

        long id1 = chooseRequestID(DistributionType.LINK_READS, link.id1);
        long link_type = id2chooser.chooseRandomLinkType(rng);
        int nid2s = 1;
        if (multigetDist != null) {
          nid2s = (int)multigetDist.choose(rng);
        }
        long id2s[] = id2chooser.chooseMultipleForOp(rng, id1, link_type, nid2s,
                                                 ID2Chooser.P_GET_EXIST);

        starttime = System.nanoTime();
        int found = getLink(id1, link_type, id2s);
        assert(found >= 0 && found <= nid2s);
        endtime = System.nanoTime();

        if (found > 0) {
          numfound += found;
        } else {
          numnotfound += nid2s - found;
        }

      } else if (r <= pc_getlinklist) {

        type = LinkBenchOp.GET_LINKS_LIST;
        Link links[];

        if (rng.nextDouble() < p_historical_getlinklist &&
                    !this.listTailHistory.isEmpty()) {
          links = getLinkListTail();
        } else {
          long id1 = chooseRequestID(DistributionType.LINK_READS, link.id1);
          long link_type = id2chooser.chooseRandomLinkType(rng);
          starttime = System.nanoTime();
          links = getLinkList(id1, link_type);
          endtime = System.nanoTime();
        }

        int count = ((links == null) ? 0 : links.length);
        if (recordStats) {
          stats.addStats(LinkBenchOp.RANGE_SIZE, count, false);
        }
      } else if (r <= pc_addnode) {
        type = LinkBenchOp.ADD_NODE;
        Node newNode = createAddNode();
        starttime = System.nanoTime();
        lastNodeId = nodeStore.addNode(dbid, newNode);
        endtime = System.nanoTime();
        if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
          logger.trace("addNode " + newNode);
        }
      } else if (r <= pc_updatenode) {
        type = LinkBenchOp.UPDATE_NODE;
        // Choose an id that has previously been created (but might have
        // been since deleted)
        long upId = chooseRequestID(DistributionType.NODE_UPDATES,
                                     lastNodeId);
        // Generate new data randomly
        Node newNode = createUpdateNode(upId);

        starttime = System.nanoTime();
        boolean changed = nodeStore.updateNode(dbid, newNode);
        endtime = System.nanoTime();
        lastNodeId = upId;
        if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
          logger.trace("updateNode " + newNode + " changed=" + changed);
        }
      } else if (r <= pc_deletenode) {
        type = LinkBenchOp.DELETE_NODE;
        long idToDelete = chooseRequestID(DistributionType.NODE_DELETES,
                                          lastNodeId);
        starttime = System.nanoTime();
        boolean deleted = nodeStore.deleteNode(dbid, LinkStore.DEFAULT_NODE_TYPE,
                                                     idToDelete);
        endtime = System.nanoTime();
        lastNodeId = idToDelete;
        if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
          logger.trace("deleteNode " + idToDelete + " deleted=" + deleted);
        }
      } else if (r <= pc_getnode) {
        type = LinkBenchOp.GET_NODE;
        starttime = System.nanoTime();
        long idToFetch = chooseRequestID(DistributionType.NODE_READS,
                                         lastNodeId);
        Node fetched = nodeStore.getNode(dbid, LinkStore.DEFAULT_NODE_TYPE, idToFetch);
        endtime = System.nanoTime();
        lastNodeId = idToFetch;
        if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
          if (fetched == null) {
            logger.trace("getNode " + idToFetch + " not found");
          } else {
            logger.trace("getNode " + fetched);
          }
        }
      } else {
        logger.error("No-op in requester: cumulative request percentages sum to less than 100");
        return false;
      }


      // convert to microseconds
      long timetaken = (endtime - starttime)/1000;

      if (recordStats) {
        // record statistics
        stats.addStats(type, timetaken, false);
        latencyStats.recordLatency(requesterID, type, timetaken);
      }

      return true;
    } catch (Throwable e) { // catch any exception from the request

      long endtime2 = System.nanoTime();

      long timetaken2 = (endtime2 - starttime)/1000;

      logger.error(type.displayName() + " error " +
                         e.getMessage(), e);
      if (recordStats) {
        stats.addStats(type, timetaken2, true);
      }
      linkStore.clearErrors(requesterID);
      return false;
    }
  }

  /**
   * Create a new node for adding to the database
   * @return a new node populated with freshly generated data
   */
  private Node createAddNode() {
    byte data[] = nodeAddDataGen.fill(rng, new byte[(int)nodeDataSize.choose(rng)]);
    return new Node(-1, LinkStore.DEFAULT_NODE_TYPE, 1,
                    (int)(System.currentTimeMillis()/1000), data);
  }

  /**
   * Create a new node for updating in the database
   */
  private Node createUpdateNode(long id) {
    byte data[] = nodeUpDataGen.fill(rng, new byte[(int)nodeDataSize.choose(rng)]);
    return new Node(id, LinkStore.DEFAULT_NODE_TYPE, 2,
                    (int)(System.currentTimeMillis()/1000), data);
  }

  @Override
  public void run() {
    logger.info("Requester thread #" + requesterID + " started: will do "
        + numRequests + " ops after " + warmupTime + " second warmup");
    logger.debug("Requester thread #" + requesterID + " first random number "
                  + rng.nextLong());

    try {
      this.linkStore.initialize(props, Phase.REQUEST, requesterID);
      if (this.nodeStore != null && this.nodeStore != this.linkStore) {
        this.nodeStore.initialize(props, Phase.REQUEST, requesterID);
      }
    } catch (Exception e) {
      logger.error("Error while initializing store", e);
      throw new RuntimeException(e);
    }

    long warmupStartTime = System.currentTimeMillis();
    boolean warmupDone = warmupTime <= 0;
    long benchmarkStartTime;
    if (!warmupDone) {
      benchmarkStartTime = warmupStartTime + warmupTime * 1000;
    } else {
      benchmarkStartTime = warmupStartTime;
    }
    long endTime = benchmarkStartTime + maxTime * 1000;
    long lastUpdate = warmupStartTime;
    long curTime = warmupStartTime;

    long i;

    if (singleAssoc) {
      LinkBenchOp type = LinkBenchOp.UNKNOWN;
      try {
        Link link = new Link();
        // add a single assoc to the database
        link.id1 = 45;
        link.id2 = 46;
        type = LinkBenchOp.ADD_LINK;
        // no inverses for now
        linkStore.addLink(dbid, link, true);

        // read this assoc from the database over and over again
        type = LinkBenchOp.MULTIGET_LINK;
        for (i = 0; i < numRequests; i++) {
          int found = getLink(link.id1, link.link_type,
                                  new long[]{link.id2});
          if (found == 1) {
            requestsDone++;
          } else {
            logger.warn("ThreadID = " + requesterID +
                               " link not found for id1=" + link.id1);
          }
        }
      } catch (Throwable e) {
        logger.error(type.displayName() + " error: " +
                         e.getMessage(), e);
        aborted = true;
      }
      closeStores();
      return;
    }

    long warmupRequests = 0;
    long requestsSinceLastUpdate = 0;
    long lastStatDisplay_ms = curTime;
    long reqTime_ns = System.nanoTime();
    double requestrate_ns = ((double)requestrate)/1e9;
    while (requestsDone < numRequests) {
      if (requestrate > 0) {
        reqTime_ns = Timer.waitExpInterval(rng, reqTime_ns, requestrate_ns);
      }
      boolean success = oneRequest(warmupDone);
      if (!success) {
        errors++;
        if (maxFailedRequests >= 0 && errors > maxFailedRequests) {
          logger.error(String.format("Requester #%d aborting: %d failed requests" +
              " (out of %d total) ", requesterID, errors, requestsDone));
          aborted = true;
          break;
        }
      }

      curTime = System.currentTimeMillis();

      // Track requests done
      if (warmupDone) {
        requestsDone++;
        requestsSinceLastUpdate++;
        if (requestsSinceLastUpdate >= RequestProgress.THREAD_REPORT_INTERVAL) {
          progressTracker.update(requestsSinceLastUpdate);
          requestsSinceLastUpdate = 0;
        }
      } else {
        warmupRequests++;
      }

      // Per-thread periodic progress updates
      if (curTime > lastUpdate + progressFreq_ms) {
        if (warmupDone) {
          logger.info(String.format("Requester #%d %d/%d requests done",
              requesterID, requestsDone, numRequests));
          lastUpdate = curTime;
        } else {
          logger.info(String.format("Requester #%d warming up. " +
              "%d warmup requests done. %d/%d seconds of warmup done",
              requesterID, warmupRequests, (curTime - warmupStartTime) / 1000,
              warmupTime));
          lastUpdate = curTime;
        }
      }

      // Per-thread periodic stat dumps after warmup done
      if (warmupDone && (lastStatDisplay_ms + displayFreq_ms) <= curTime) {
        displayStats(lastStatDisplay_ms, curTime);
        stats.resetSamples();
        lastStatDisplay_ms = curTime;
      }

      // Check if warmup completed
      if (!warmupDone && curTime >= benchmarkStartTime) {
        warmupDone = true;
        lastUpdate = curTime;
        lastStatDisplay_ms = curTime;
        requestsSinceLastUpdate = 0;
        logger.info(String.format("Requester #%d warmup finished " +
            "after %d warmup requests. 0/%d requests done",
            requesterID, warmupRequests, numRequests));
      }

      // Enforce time limit
      if (curTime > endTime) {
        logger.info(String.format("Requester #%d: time limit of %ds elapsed" +
              ", shutting down.", requesterID, maxTime));
        break;
      }
    }

    // Do final update of statistics
    progressTracker.update(requestsSinceLastUpdate);
    displayStats(lastStatDisplay_ms, System.currentTimeMillis());

    // Report final stats
    logger.info("ThreadID = " + requesterID +
                       " total requests = " + requestsDone +
                       " requests/second = " + ((1000 * requestsDone)/
                                                Math.max(1, (curTime - benchmarkStartTime))) +
                       " found = " + numfound +
                       " not found = " + numnotfound +
                       " history queries = " + numHistoryQueries + "/" +
                                   stats.getCount(LinkBenchOp.GET_LINKS_LIST));
    closeStores();
  }

  /**
   * Close datastores before finishing
   */
  private void closeStores() {
    linkStore.close();
    if (nodeStore != null && nodeStore != linkStore) {
      nodeStore.close();
    }
  }

  private void displayStats(long lastStatDisplay_ms, long now_ms) {
    stats.displayStats(lastStatDisplay_ms, now_ms,
        Arrays.asList(
        LinkBenchOp.MULTIGET_LINK, LinkBenchOp.GET_LINKS_LIST,
        LinkBenchOp.COUNT_LINK,
        LinkBenchOp.UPDATE_LINK, LinkBenchOp.ADD_LINK,
        LinkBenchOp.RANGE_SIZE, LinkBenchOp.ADD_NODE,
        LinkBenchOp.UPDATE_NODE, LinkBenchOp.DELETE_NODE,
        LinkBenchOp.GET_NODE));
  }

  int getLink(long id1, long link_type, long id2s[]) throws Exception {
    Link links[] = linkStore.multigetLinks(dbid, id1, link_type, id2s);
    return links == null ? 0 : links.length;
  }

  Link[] getLinkList(long id1, long link_type) throws Exception {
    Link links[] = linkStore.getLinkList(dbid, id1, link_type);
    if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
       logger.trace("getLinkList(id1=" + id1 + ", link_type="  + link_type
                     + ") => count=" + (links == null ? 0 : links.length));
    }
    // If the range limit was reached, there may be more history to page through
    if (links != null && links.length >= linkStore.getRangeLimit()) {
      Link lastLink = links[links.length-1];
      if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
        logger.trace("Maybe more history for (" + id1 + "," +
                      link_type + ") older than " + lastLink.time);
      }

      addTailCacheEntry(lastLink);
    }
    return links;
  }

  Link[] getLinkListTail() throws Exception {
    assert(!listTailHistoryIndex.isEmpty());
    assert(!listTailHistory.isEmpty());
    int choice = rng.nextInt(listTailHistory.size());
    Link prevLast = listTailHistory.get(choice);

    // Get links past the oldest last retrieved
    Link links[] = linkStore.getLinkList(dbid, prevLast.id1,
        prevLast.link_type, 0, prevLast.time, 1, linkStore.getRangeLimit());

    if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
      logger.trace("getLinkListTail(id1=" + prevLast.id1 + ", link_type="
                + prevLast.link_type + ", max_time=" + prevLast.time
                + ") => count=" + (links == null ? 0 : links.length));
    }

    if (links != null && links.length == linkStore.getRangeLimit()) {
      // There might be yet more history
      Link last = links[links.length-1];
      if (Level.TRACE.isGreaterOrEqual(debuglevel)) {
        logger.trace("might be yet more history for (" + last.id1 + "," +
                      last.link_type + ") older than " + last.time);
      }
      // Update in place
      listTailHistory.set(choice, last.clone());
    } else {
      // No more history after this, remove from cache
      removeTailCacheEntry(choice, null);
    }
    numHistoryQueries++;
    return links;
  }

  /**
   * Add a new link to the history cache, unless already present
   * @param lastLink the last (i.e. lowest timestamp) link retrieved
   */
  private void addTailCacheEntry(Link lastLink) {
    HistoryKey key = new HistoryKey(lastLink);
    if (listTailHistoryIndex.containsKey(key)) {
      // Already present
      return;
    }

    if (listTailHistory.size() < listTailHistoryLimit) {
      listTailHistory.add(lastLink.clone());
      listTailHistoryIndex.put(key, listTailHistory.size() - 1);
    } else {
      // Need to evict entry
      int choice = rng.nextInt(listTailHistory.size());
      removeTailCacheEntry(choice, lastLink.clone());
    }
  }

  /**
   * Remove or replace entry in listTailHistory and update index
   * @param pos index of entry in listTailHistory
   * @param repl replace with this if not null
   */
  private void removeTailCacheEntry(int pos, Link repl) {
    Link entry = listTailHistory.get(pos);
    if (pos == listTailHistory.size() - 1) {
      // removing from last position, don't need to fill gap
      listTailHistoryIndex.remove(new HistoryKey(entry));
      int lastIx = listTailHistory.size() - 1;
      if (repl == null) {
        listTailHistory.remove(lastIx);
      } else {
        listTailHistory.set(lastIx, repl);
        listTailHistoryIndex.put(new HistoryKey(repl), lastIx);
      }
    } else {
      if (repl == null) {
        // Replace with last entry in cache to fill gap
        repl = listTailHistory.get(listTailHistory.size() - 1);
        listTailHistory.remove(listTailHistory.size() - 1);
      }
      listTailHistory.set(pos, repl);
      listTailHistoryIndex.put(new HistoryKey(repl), pos);
    }
  }

  public static class RequestProgress {
    // How many ops before a thread should register its progress
    static final int THREAD_REPORT_INTERVAL = 250;
    /** How many ops before a progress update should be printed to console */
    private final long interval;

    private final Logger progressLogger;

    private long totalRequests;
    private final AtomicLong requestsDone;

    private long benchmarkStartTime;
    private long warmupTime_s;
    private long timeLimit_s;

    public RequestProgress(Logger progressLogger, long totalRequests,
                            long timeLimit_s, long warmupTime_s, long interval) {
      this.interval = interval;
      this.progressLogger = progressLogger;
      this.totalRequests = totalRequests;
      this.requestsDone = new AtomicLong();
      this.timeLimit_s = timeLimit_s;
      this.warmupTime_s = warmupTime_s;
    }

    public void startTimer() {
      benchmarkStartTime = System.currentTimeMillis() + warmupTime_s * 1000;
    }

    public long getBenchmarkStartTime() {
      return benchmarkStartTime;
    }

    public void update(long requestIncr) {
      long curr = requestsDone.addAndGet(requestIncr);
      long prev = curr - requestIncr;

      if ((curr / interval) > (prev / interval) || curr == totalRequests) {
        float progressPercent = ((float) curr) / totalRequests * 100;
        long now = System.currentTimeMillis();
        long elapsed = now - benchmarkStartTime;
        float elapsed_s = ((float) elapsed) / 1000;
        float limitPercent = (elapsed_s / ((float) timeLimit_s)) * 100;
        float rate = curr / ((float)elapsed_s);
        progressLogger.info(String.format(
            "%d/%d requests finished: %.1f%% complete at %.1f ops/sec" +
            " %.1f/%d secs elapsed: %.1f%% of time limit used",
            curr, totalRequests, progressPercent, rate,
            elapsed_s, timeLimit_s, limitPercent));

      }
    }
  }

  public static RequestProgress createProgress(Logger logger,
       Properties props) {
    long total_requests = ConfigUtil.getLong(props, Config.NUM_REQUESTS)
                      * ConfigUtil.getLong(props, Config.NUM_REQUESTERS);
    long progressInterval = ConfigUtil.getLong(props, Config.REQ_PROG_INTERVAL,
                                               10000L);
    long warmupTime = ConfigUtil.getLong(props, Config.WARMUP_TIME, 0L);
    long maxTime = ConfigUtil.getLong(props, Config.MAX_TIME);
    return new RequestProgress(logger, total_requests,
              maxTime, warmupTime, progressInterval);
  }
}
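The `listTailHistory` cache above pairs an `ArrayList` with a key-to-index `HashMap`, and fills gaps on removal by moving the last entry into the vacated slot. A minimal standalone sketch of that "swap with last" pattern (class and method names here are illustrative, not LinkBench API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// O(1) membership test and O(1) removal at any position, at the cost of not
// preserving insertion order: the same bookkeeping removeTailCacheEntry does.
public class SwapRemoveCache {
  private final List<String> entries = new ArrayList<>();
  private final Map<String, Integer> index = new HashMap<>();

  public boolean add(String key) {
    if (index.containsKey(key)) return false;  // already cached
    entries.add(key);
    index.put(key, entries.size() - 1);
    return true;
  }

  public boolean remove(String key) {
    Integer pos = index.remove(key);
    if (pos == null) return false;
    int lastIx = entries.size() - 1;
    if (pos != lastIx) {
      // Move the last entry into the vacated slot so the list stays dense
      String moved = entries.get(lastIx);
      entries.set(pos, moved);
      index.put(moved, pos);
    }
    entries.remove(lastIx);  // remove by index: drops the (now stale) tail
    return true;
  }

  public int size() { return entries.size(); }
  public boolean contains(String key) { return index.containsKey(key); }
}
```

Removing by index from the tail avoids the element shift that `ArrayList.remove` would otherwise do for interior positions.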

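When a target request rate is configured, the request loop above advances its next send time via `Timer.waitExpInterval`, i.e. by exponentially distributed gaps, which makes request arrivals Poisson rather than evenly spaced. A hedged sketch of just the scheduling math (this standalone `ExpPacer` is illustrative and only computes the schedule; it does not reproduce LinkBench's `Timer`):

```java
import java.util.Random;

public class ExpPacer {
  /** Sample an exponential inter-arrival gap in nanoseconds for ratePerNs. */
  static double expGapNs(Random rng, double ratePerNs) {
    // Inverse-CDF sampling: -ln(U) / rate, with U uniform in (0, 1]
    double u = 1.0 - rng.nextDouble();  // avoid ln(0)
    return -Math.log(u) / ratePerNs;
  }

  /** Advance the scheduled send time; the caller sleeps until it is reached. */
  static long nextSendTimeNs(Random rng, long prevSendNs, double ratePerNs) {
    return prevSendNs + (long) expGapNs(rng, ratePerNs);
  }
}
```

The mean gap is 1/rate, so a rate of 1e-6 per nanosecond averages one request per millisecond while still producing the bursty arrivals typical of production traffic.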


================================================
FILE: src/main/java/com/facebook/LinkBench/LinkBenchTask.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.LinkBench;

/**
 * The same as Runnable, except that run() can throw exceptions
 * to be handled by the caller.
 */
public interface LinkBenchTask {
  public abstract void run() throws Exception;
}
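The interface above is just `Runnable` with a throws clause, which lets task bodies propagate checked exceptions to the coordinating thread instead of wrapping them. A minimal usage sketch under that assumption (`TaskRunner` and its nested `Task` stand-in are illustrative, not LinkBench API):

```java
import java.util.ArrayList;
import java.util.List;

public class TaskRunner {
  interface Task {  // stand-in for LinkBenchTask
    void run() throws Exception;
  }

  /** Run tasks in order, collecting failures instead of aborting early. */
  static List<Exception> runAll(List<Task> tasks) {
    List<Exception> failures = new ArrayList<>();
    for (Task t : tasks) {
      try {
        t.run();
      } catch (Exception e) {
        failures.add(e);  // caller decides whether any failure is fatal
      }
    }
    return failures;
  }
}
```

Because the interface has a single abstract method, task bodies can be written as lambdas even though `run()` declares `throws Exception`.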


================================================
FILE: src/main/java/com/facebook/LinkBench/LinkCount.java
================================================
/*
 * Copyright 2012, Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance
SYMBOL INDEX (749 symbols across 74 files)

FILE: src/main/java/com/facebook/LinkBench/Config.java
  class Config (line 24) | public class Config {

FILE: src/main/java/com/facebook/LinkBench/ConfigUtil.java
  class ConfigUtil (line 29) | public class ConfigUtil {
    method findLinkBenchHome (line 36) | public static String findLinkBenchHome() {
    method getDebugLevel (line 47) | public static Level getDebugLevel(Properties props)
    method setupLogging (line 86) | public static void setupLogging(Properties props, String logFile)
    method getPropertyRequired (line 112) | public static String getPropertyRequired(Properties props, String key)
    method getInt (line 122) | public static int getInt(Properties props, String key)
    method getInt (line 134) | public static int getInt(Properties props, String key, Integer default...
    method getLong (line 148) | public static long getLong(Properties props, String key)
    method getLong (line 161) | public static long getLong(Properties props, String key, Long defaultVal)
    method getDouble (line 176) | public static double getDouble(Properties props, String key)
    method getDouble (line 189) | public static double getDouble(Properties props, String key,
    method getBool (line 211) | public static boolean getBool(Properties props, String key)
    method getBool (line 216) | public static boolean getBool(Properties props, String key,

FILE: src/main/java/com/facebook/LinkBench/GraphStore.java
  class GraphStore (line 24) | public abstract class GraphStore extends LinkStore implements NodeStore {
    method bulkAddNodes (line 27) | public long[] bulkAddNodes(String dbid, List<Node> nodes) throws Excep...

FILE: src/main/java/com/facebook/LinkBench/InvertibleShuffler.java
  class InvertibleShuffler (line 23) | public class InvertibleShuffler {
    method InvertibleShuffler (line 31) | public InvertibleShuffler(long seed, int shuffleGroups, long n) {
    method InvertibleShuffler (line 34) | public InvertibleShuffler(Random rng, int shuffleGroups, long n) {
    method permute (line 52) | public long permute(long i) {
    method invertPermute (line 56) | public long invertPermute(long i) {
    method permute (line 60) | public long permute(long i, boolean inverse) {

FILE: src/main/java/com/facebook/LinkBench/Link.java
  class Link (line 21) | public class Link {
    method Link (line 23) | public Link(long id1, long link_type, long id2,
    method Link (line 34) | Link() {
    method equals (line 39) | public boolean equals(Object other) {
    method toString (line 52) | public String toString() {
    method clone (line 63) | public Link clone() {

FILE: src/main/java/com/facebook/LinkBench/LinkBenchConfigError.java
  class LinkBenchConfigError (line 18) | public class LinkBenchConfigError extends RuntimeException {
    method LinkBenchConfigError (line 21) | public LinkBenchConfigError(String msg) {

FILE: src/main/java/com/facebook/LinkBench/LinkBenchDriver.java
  class LinkBenchDriver (line 62) | public class LinkBenchDriver {
    method LinkBenchDriver (line 83) | LinkBenchDriver(String configfile, Properties
    method loadWorkloadProps (line 108) | private void loadWorkloadProps() throws IOException, FileNotFoundExcep...
    class Stores (line 134) | private static class Stores {
      method Stores (line 137) | public Stores(LinkStore linkStore, NodeStore nodeStore) {
    method initStores (line 145) | private Stores initStores()
    method createLinkStore (line 153) | private LinkStore createLinkStore() throws Exception, IOException {
    method createNodeStore (line 183) | private NodeStore createNodeStore(LinkStore linkStore) throws Exception,
    method load (line 212) | void load() throws IOException, InterruptedException, Throwable {
    method createMasterRNG (line 303) | private Random createMasterRNG(Properties props, String configKey) {
    method enqueueLoadWork (line 330) | private void enqueueLoadWork(BlockingQueue<LoadChunk> chunk_q, long st...
    method sendrequests (line 354) | void sendrequests() throws IOException, InterruptedException, Throwable {
    method concurrentExec (line 420) | static long concurrentExec(final List<? extends Runnable> tasks)
    method drive (line 454) | void drive() throws IOException, InterruptedException, Throwable {
    method main (line 459) | public static void main(String[] args)
    method printUsage (line 472) | private static void printUsage(Options options) {
    method initializeOptions (line 478) | private static Options initializeOptions() {
    method processArgs (line 521) | private static void processArgs(String[] args)

FILE: src/main/java/com/facebook/LinkBench/LinkBenchDriverMR.java
  class LinkBenchDriverMR (line 66) | public class LinkBenchDriverMR extends Configured implements Tool {
    type Counters (line 76) | static enum Counters { LINK_LOADED, REQUEST_DONE }
    method initStore (line 88) | private static LinkStore initStore(Phase currentphase, int mapperid)
    class LinkBenchInputSplit (line 123) | private class LinkBenchInputSplit implements InputSplit {
      method LinkBenchInputSplit (line 127) | LinkBenchInputSplit() {}
      method LinkBenchInputSplit (line 128) | public LinkBenchInputSplit(int i, int n) {
      method getID (line 132) | public int getID() {return this.id;}
      method getNum (line 133) | public int getNum() {return this.num;}
      method getLength (line 134) | public long getLength() {return 1;}
      method getLocations (line 136) | public String[] getLocations() throws IOException {
      method readFields (line 140) | public void readFields(DataInput in) throws IOException {
      method write (line 145) | public void write(DataOutput out) throws IOException {
    class LinkBenchRecordReader (line 155) | private class LinkBenchRecordReader
      method LinkBenchRecordReader (line 161) | public LinkBenchRecordReader(LinkBenchInputSplit split) {
      method createKey (line 167) | public IntWritable createKey() {return new IntWritable();}
      method createValue (line 168) | public IntWritable createValue() {return new IntWritable();}
      method close (line 169) | public void close() throws IOException { }
      method getProgress (line 172) | public float getProgress() { return 0.5f;}
      method getPos (line 174) | public long getPos() {return 1;}
      method next (line 176) | public boolean next(IntWritable key, IntWritable value)
    class LinkBenchInputFormat (line 192) | private class LinkBenchInputFormat
      method getSplits (line 195) | public InputSplit[] getSplits(JobConf conf, int numsplits) {
      method getRecordReader (line 203) | public RecordReader<IntWritable, IntWritable> getRecordReader(
      method validateInput (line 208) | public void validateInput(JobConf conf) {}
    method createJobConf (line 216) | private JobConf createJobConf(int currentphase, int nmappers) {
    method setupInputFiles (line 249) | private static FileSystem setupInputFiles(JobConf jobconf, int nmappers)
    method readOutput (line 291) | public static long readOutput(FileSystem fs, JobConf jobconf)
    class LoadMapper (line 314) | public static class LoadMapper extends MapReduceBase
      method map (line 317) | public void map(IntWritable loaderid,
    class RequestMapper (line 358) | public static class RequestMapper extends MapReduceBase
      method map (line 361) | public void map(IntWritable requesterid,
    class LoadRequestReducer (line 406) | public static class LoadRequestReducer extends MapReduceBase
      method configure (line 413) | @Override
      method reduce (line 418) | public void reduce(IntWritable nmappers,
      method close (line 434) | @Override
    method load (line 451) | private void load() throws IOException, InterruptedException {
    method sendrequests (line 488) | private void sendrequests() throws IOException, InterruptedException {
    method run (line 515) | @Override
    method getClassByName (line 549) | public static Class<?> getClassByName(String name)
    method newInstance (line 561) | public static <T> T newInstance(Class<T> theClass) {
    method main (line 573) | public static void main(String[] args) throws Exception {

FILE: src/main/java/com/facebook/LinkBench/LinkBenchLoad.java
  class LinkBenchLoad (line 50) | public class LinkBenchLoad implements Runnable {
    method LinkBenchLoad (line 100) | public LinkBenchLoad(LinkStore store, Properties props,
    method LinkBenchLoad (line 113) | public LinkBenchLoad(LinkStore linkStore,
    method getLinksLoaded (line 179) | public long getLinksLoaded() {
    method run (line 183) | @Override
    method displayStats (line 240) | private void displayStats(long startTime, boolean bulkLoad) {
    method processChunk (line 253) | private void processChunk(LoadChunk chunk, boolean bulkLoad,
    method createOutLinks (line 312) | private long createOutLinks(Random rng,
    method initLink (line 379) | private Link initLink() {
    method constructLink (line 397) | private void constructLink(Random rng, Link link, long id1,
    method chooseInitialTimestamp (line 423) | private long chooseInitialTimestamp(Random rng) {
    method loadLink (line 439) | private void loadLink(Link link, long outlink_ix, long nlinks,
    method loadLinks (line 470) | private void loadLinks(ArrayList<Link> loadBuffer) {
    method loadCounts (line 496) | private void loadCounts(ArrayList<LinkCount> loadBuffer) {
    class LoadChunk (line 528) | public static class LoadChunk {
      method LoadChunk (line 532) | public LoadChunk(long id, long start, long end, Random rng) {
      method LoadChunk (line 535) | public LoadChunk(boolean shutdown,
      method toString (line 554) | public String toString() {
    class LoadProgress (line 567) | public static class LoadProgress {
      method LoadProgress (line 571) | public LoadProgress(Logger progressLogger,
      method create (line 582) | public static LoadProgress create(Logger progressLogger, Properties ...
      method startTimer (line 598) | public void startTimer() {
      method update (line 607) | public void update(long id1_incr, long links_incr) {

FILE: src/main/java/com/facebook/LinkBench/LinkBenchOp.java
  type LinkBenchOp (line 19) | public enum LinkBenchOp {
    method displayName (line 41) | public String displayName() {

FILE: src/main/java/com/facebook/LinkBench/LinkBenchRequest.java
  class LinkBenchRequest (line 41) | public class LinkBenchRequest implements Runnable {
    class HistoryKey (line 97) | private static class HistoryKey {
      method HistoryKey (line 100) | public HistoryKey(long id1, long link_type) {
      method HistoryKey (line 106) | public HistoryKey(Link l) {
      method hashCode (line 110) | @Override
      method equals (line 119) | @Override
    method LinkBenchRequest (line 176) | public LinkBenchRequest(LinkStore linkStore,
    method initRequestProbabilities (line 251) | private void initRequestProbabilities(Properties props) {
    method initLinkRequestDistributions (line 270) | private void initLinkRequestDistributions(Properties props, int reques...
    method initLinkDataGeneration (line 323) | private void initLinkDataGeneration(Properties props) {
    method initNodeRequestDistributions (line 346) | private void initNodeRequestDistributions(Properties props) {
    method initNodeDataGeneration (line 381) | private void initNodeDataGeneration(Properties props) {
    method getRequestsDone (line 407) | public long getRequestsDone() {
    method didAbort (line 411) | public boolean didAbort() {
    method chooseRequestID (line 416) | private long chooseRequestID(DistributionType type, long previousId1) {
    method oneRequest (line 472) | private boolean oneRequest(boolean recordStats) {
    method createAddNode (line 686) | private Node createAddNode() {
    method createUpdateNode (line 695) | private Node createUpdateNode(long id) {
    method run (line 701) | @Override
    method closeStores (line 859) | private void closeStores() {
    method displayStats (line 866) | private void displayStats(long lastStatDisplay_ms, long now_ms) {
    method getLink (line 877) | int getLink(long id1, long link_type, long id2s[]) throws Exception {
    method getLinkList (line 882) | Link[] getLinkList(long id1, long link_type) throws Exception {
    method getLinkListTail (line 901) | Link[] getLinkListTail() throws Exception {
    method addTailCacheEntry (line 943) | private void addTailCacheEntry(Link lastLink) {
    method removeTailCacheEntry (line 965) | private void removeTailCacheEntry(int pos, Link repl) {
    class RequestProgress (line 988) | public static class RequestProgress {
      method RequestProgress (line 1003) | public RequestProgress(Logger progressLogger, long totalRequests,
      method startTimer (line 1013) | public void startTimer() {
      method getBenchmarkStartTime (line 1017) | public long getBenchmarkStartTime() {
      method update (line 1021) | public void update(long requestIncr) {
    method createProgress (line 1042) | public static RequestProgress createProgress(Logger logger,

FILE: src/main/java/com/facebook/LinkBench/LinkBenchTask.java
  type LinkBenchTask (line 22) | public interface LinkBenchTask {
    method run (line 23) | public abstract void run() throws Exception;

FILE: src/main/java/com/facebook/LinkBench/LinkCount.java
  class LinkCount (line 18) | public class LinkCount {
    method LinkCount (line 25) | public LinkCount(long id1, long link_type,

FILE: src/main/java/com/facebook/LinkBench/LinkStore.java
  class LinkStore (line 23) | public abstract class LinkStore {
    method LinkStore (line 42) | public LinkStore() {
    method getRangeLimit (line 46) | public int getRangeLimit() {
    method setRangeLimit (line 50) | public void setRangeLimit(int rangeLimit) {
    method initialize (line 55) | public abstract void initialize(Properties p,
    method close (line 61) | public abstract void close();
    method clearErrors (line 65) | public abstract void clearErrors(int threadID);
    method addLink (line 76) | public abstract boolean addLink(String dbid, Link a, boolean noinverse...
    method deleteLink (line 90) | public abstract boolean deleteLink(String dbid, long id1, long link_type,
    method updateLink (line 102) | public abstract boolean updateLink(String dbid, Link a, boolean noinve...
    method getLink (line 115) | public abstract Link getLink(String dbid, long id1, long link_type, lo...
    method multigetLinks (line 123) | public Link[] multigetLinks(String dbid, long id1, long link_type,
    method getLinkList (line 147) | public abstract Link[] getLinkList(String dbid, long id1, long link_type)
    method getLinkList (line 165) | public abstract Link[] getLinkList(String dbid, long id1, long link_type,
    method countLinks (line 171) | public abstract long countLinks(String dbid, long id1, long link_type)...
    method bulkLoadBatchSize (line 177) | public int bulkLoadBatchSize() {
    method addBulkLinks (line 182) | public void addBulkLinks(String dbid, List<Link> a, boolean noinverse)
    method addBulkCounts (line 189) | public void addBulkCounts(String dbid, List<LinkCount> a)
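
The LinkStore API above is keyed on (id1, link_type, id2) triples. As a minimal sketch of that contract — hypothetical names, a flat map instead of the repo's actual storage, and one plausible convention for the boolean returns (the real semantics are in the source):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory sketch of the LinkStore contract: links keyed
// by (id1, link_type, id2), here mapped to a timestamp. Illustrative only.
class TinyLinkStore {
    private final Map<String, Long> links = new HashMap<>();

    private static String key(long id1, long linkType, long id2) {
        return id1 + "/" + linkType + "/" + id2;
    }

    // Returns true when the link already existed (i.e. this add was an update).
    public boolean addLink(long id1, long linkType, long id2, long time) {
        return links.put(key(id1, linkType, id2), time) != null;
    }

    public boolean deleteLink(long id1, long linkType, long id2) {
        return links.remove(key(id1, linkType, id2)) != null;
    }

    public Long getLink(long id1, long linkType, long id2) {
        return links.get(key(id1, linkType, id2));
    }

    // Mirrors countLinks: number of links out of id1 with the given type.
    public long countLinks(long id1, long linkType) {
        String prefix = id1 + "/" + linkType + "/";
        return links.keySet().stream().filter(k -> k.startsWith(prefix)).count();
    }
}
```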

FILE: src/main/java/com/facebook/LinkBench/LinkStoreHBaseGeneralAtomicityTesting.java
  class LinkStoreHBaseGeneralAtomicityTesting (line 55) | public class LinkStoreHBaseGeneralAtomicityTesting extends LinkStore {
    method linkToBytes (line 78) | private byte[] linkToBytes(Link a) {
    method combine (line 91) | private String combine(long id1, long link_type, long id2) {
    method bytesToLink (line 98) | private Link bytesToLink(byte[] blink) throws Exception {
    method bytesToString (line 115) | private String bytesToString(byte[] value) {
    method assertTrue (line 120) | private void assertTrue(boolean expression, String message)
    method LinkStoreHBaseGeneralAtomicityTesting (line 134) | public LinkStoreHBaseGeneralAtomicityTesting(
    method LinkStoreHBaseGeneralAtomicityTesting (line 140) | public LinkStoreHBaseGeneralAtomicityTesting() {
    method initialize (line 144) | @Override
    method close (line 185) | @Override
    method clearErrors (line 193) | @Override
    method addLink (line 206) | @Override
    method deleteLink (line 263) | @Override
    method updateLink (line 305) | @Override
    method getLink (line 313) | @Override
    method getLinkList (line 346) | @Override
    method getLinkList (line 353) | @Override
    method countLinks (line 363) | @Override

FILE: src/main/java/com/facebook/LinkBench/LinkStoreMysql.java
  class LinkStoreMysql (line 37) | public class LinkStoreMysql extends GraphStore {
    method LinkStoreMysql (line 75) | public LinkStoreMysql() {
    method LinkStoreMysql (line 79) | public LinkStoreMysql(Properties props) throws IOException, Exception {
    method initialize (line 84) | public void initialize(Properties props, Phase currentPhase,
    method openConnection (line 133) | private void openConnection() throws Exception {
    method close (line 190) | @Override
    method clearErrors (line 202) | public void clearErrors(int threadID) {
    method populateRetrySQLStates (line 231) | private static HashSet<String> populateRetrySQLStates() {
    method processSQLException (line 243) | private boolean processSQLException(SQLException ex, String op) {
    method testCount (line 261) | private void testCount(Statement stmt, String dbid,
    method addLink (line 293) | @Override
    method addLinkImpl (line 307) | private boolean addLinkImpl(String dbid, Link l, boolean noinverse)
    method addLinksNoCount (line 430) | private int addLinksNoCount(String dbid, List<Link> links)
    method deleteLink (line 465) | @Override
    method deleteLinkImpl (line 480) | private boolean deleteLinkImpl(String dbid, long id1, long link_type, ...
    method updateLink (line 589) | @Override
    method getLink (line 599) | @Override
    method getLinkImpl (line 613) | private Link getLinkImpl(String dbid, long id1, long link_type, long id2)
    method multigetLinks (line 622) | @Override
    method multigetLinksImpl (line 636) | private Link[] multigetLinksImpl(String dbid, long id1, long link_type,
    method getLinkList (line 683) | @Override
    method getLinkList (line 690) | @Override
    method getLinkListImpl (line 707) | private Link[] getLinkListImpl(String dbid, long id1, long link_type,
    method createLinkFromRow (line 755) | private Link createLinkFromRow(ResultSet rs) throws SQLException {
    method countLinks (line 768) | @Override
    method countLinksImpl (line 782) | private long countLinksImpl(String dbid, long id1, long link_type)
    method bulkLoadBatchSize (line 809) | @Override
    method addBulkLinks (line 814) | @Override
    method addBulkLinksImpl (line 829) | private void addBulkLinksImpl(String dbid, List<Link> links, boolean n...
    method addBulkCounts (line 839) | @Override
    method addBulkCountsImpl (line 854) | private void addBulkCountsImpl(String dbid, List<LinkCount> counts)
    method checkNodeTableConfigured (line 888) | private void checkNodeTableConfigured() throws Exception {
    method resetNodeStore (line 895) | @Override
    method addNode (line 905) | @Override
    method addNodeImpl (line 918) | private long addNodeImpl(String dbid, Node node) throws Exception {
    method bulkAddNodes (line 924) | @Override
    method bulkAddNodesImpl (line 937) | private long[] bulkAddNodesImpl(String dbid, List<Node> nodes) throws ...
    method getNode (line 978) | @Override
    method getNodeImpl (line 991) | private Node getNodeImpl(String dbid, int type, long id) throws Except...
    method updateNode (line 1013) | @Override
    method updateNodeImpl (line 1026) | private boolean updateNodeImpl(String dbid, Node node) throws Exception {
    method deleteNode (line 1045) | @Override
    method deleteNodeImpl (line 1058) | private boolean deleteNodeImpl(String dbid, int type, long id) throws ...
    method stringLiteral (line 1082) | private static String stringLiteral(byte arr[]) {
    method hexStringLiteral (line 1129) | private static String hexStringLiteral(byte[] arr) {

FILE: src/main/java/com/facebook/LinkBench/LinkStoreRocksDb.java
  class LinkStoreRocksDb (line 54) | public class LinkStoreRocksDb extends GraphStore {
    method getRocksClient (line 87) | private RocksService getRocksClient() throws Exception {
    method incrThreads (line 104) | static synchronized void incrThreads() {
    method isLastThread (line 108) | static synchronized boolean isLastThread() {
    method close (line 116) | @Override
    method initialize (line 130) | @Override
    method LinkStoreRocksDb (line 143) | public LinkStoreRocksDb() {
    method LinkStoreRocksDb (line 147) | public LinkStoreRocksDb(Properties props) throws IOException, Exception {
    method clearErrors (line 152) | public void clearErrors(int threadID) {
    method addLink (line 165) | @Override
    method addLinkImpl (line 176) | private boolean addLinkImpl(String dbid, Link l, boolean noinverse)
    method addLinksNoCount (line 198) | private boolean addLinksNoCount(String dbid, List<Link> links)
    method deleteLink (line 215) | @Override
    method deleteLinkImpl (line 227) | private boolean deleteLinkImpl(String dbid, long id1, long link_type,
    method updateLink (line 243) | @Override
    method getLink (line 253) | @Override
    method getLinkImpl (line 264) | private Link getLinkImpl(String dbid, long id1, long link_type, long id2)
    method multigetLinks (line 274) | @Override
    method multigetLinksImpl (line 285) | private Link[] multigetLinksImpl(String dbid, long id1, long link_type,
    method getLinkList (line 306) | @Override
    method getLinkList (line 313) | @Override
    method getLinkListImpl (line 326) | private Link[] getLinkListImpl(String dbid, long id1, long link_type,
    method countLinks (line 345) | @Override
    method countLinksImpl (line 356) | private long countLinksImpl(String dbid, long id1, long link_type)
    method bulkLoadBatchSize (line 368) | @Override
    method addBulkLinks (line 373) | @Override
    method addBulkLinksImpl (line 384) | private void addBulkLinksImpl(String dbid, List<Link> links,
    method addBulkCounts (line 392) | @Override
    method addBulkCountsImpl (line 403) | private void addBulkCountsImpl(String dbid, List<LinkCount> counts)
    method resetNodeStore (line 418) | @Override
    method addNode (line 423) | @Override
    method addNodeImpl (line 433) | private long addNodeImpl(String dbid, Node node) throws Exception {
    method bulkAddNodes (line 439) | @Override
    method bulkAddNodesImpl (line 449) | private long[] bulkAddNodesImpl(String dbid, List<Node> nodes)
    method getNode (line 462) | @Override
    method getNodeImpl (line 472) | private Node getNodeImpl(String dbid, int type, long id) throws Except...
    method updateNode (line 486) | @Override
    method updateNodeImpl (line 496) | private boolean updateNodeImpl(String dbid, Node node) throws Exception {
    method deleteNode (line 500) | @Override
    method deleteNodeImpl (line 510) | private boolean deleteNodeImpl(String dbid, int type, long id)

FILE: src/main/java/com/facebook/LinkBench/MemoryLinkStore.java
  class MemoryLinkStore (line 37) | public class MemoryLinkStore extends GraphStore {
    class LinkLookupKey (line 38) | private static class LinkLookupKey {
      method LinkLookupKey (line 42) | public LinkLookupKey(long id1, long link_type) {
      method equals (line 48) | @Override
      method hashCode (line 57) | @Override
    class LinkTimeStampComparator (line 64) | private static class LinkTimeStampComparator implements Comparator<Lin...
      method compare (line 66) | @Override
    class NodeDB (line 99) | private static class NodeDB {
      method NodeDB (line 104) | NodeDB() {
      method NodeDB (line 108) | NodeDB(long startID) {
      method allocateID (line 112) | long allocateID() {
      method reset (line 116) | void reset(long startID) {
    method MemoryLinkStore (line 139) | public MemoryLinkStore() {
    method MemoryLinkStore (line 147) | private MemoryLinkStore(Map<String, Map<LinkLookupKey, SortedSet<Link>...
    method findLinkByKey (line 163) | private SortedSet<Link> findLinkByKey(String dbid, long id1,
    method newSortedLinkSet (line 188) | private TreeSet<Link> newSortedLinkSet() {
    method newHandle (line 195) | public MemoryLinkStore newHandle() {
    method initialize (line 199) | @Override
    method close (line 204) | @Override
    method clearErrors (line 208) | @Override
    method addLink (line 212) | @Override
    method deleteLink (line 236) | @Override
    method updateLink (line 260) | @Override
    method getLink (line 283) | @Override
    method getLinkList (line 299) | @Override
    method getLinkList (line 305) | @Override
    method countLinks (line 357) | @Override
    method getNodeDB (line 387) | private NodeDB getNodeDB(String dbid, boolean autocreate) throws Excep...
    method resetNodeStore (line 402) | @Override
    method addNode (line 414) | @Override
    method getNode (line 431) | @Override
    method updateNode (line 445) | @Override
    method deleteNode (line 461) | @Override

FILE: src/main/java/com/facebook/LinkBench/Node.java
  class Node (line 24) | public class Node {
    method Node (line 40) | public Node(long id, int type, long version, int time,
    method clone (line 50) | public Node clone() {
    method equals (line 53) | @Override
    method toString (line 63) | public String toString() {

FILE: src/main/java/com/facebook/LinkBench/NodeLoader.java
  class NodeLoader (line 40) | public class NodeLoader implements Runnable {
    method NodeLoader (line 73) | public NodeLoader(Properties props, Logger logger,
    method run (line 109) | @Override
    method displayAndResetStats (line 154) | private void displayAndResetStats() {
    method genNode (line 167) | private void genNode(Random rng, long id1, ArrayList<Node> nodeLoadBuf...
    method loadNodes (line 180) | private void loadNodes(ArrayList<Node> nodeLoadBuffer) {
    method getNodesLoaded (line 223) | public long getNodesLoaded() {

FILE: src/main/java/com/facebook/LinkBench/NodeStore.java
  type NodeStore (line 27) | public interface NodeStore {
    method initialize (line 32) | public void initialize(Properties p,
    method resetNodeStore (line 40) | public void resetNodeStore(String dbid, long startID) throws Exception;
    method addNode (line 56) | public long addNode(String dbid, Node node) throws Exception;
    method bulkAddNodes (line 67) | public long[] bulkAddNodes(String dbid, List<Node> nodes) throws Excep...
    method bulkLoadBatchSize (line 73) | public int bulkLoadBatchSize();
    method getNode (line 82) | public Node getNode(String dbid, int type, long id) throws Exception;
    method updateNode (line 90) | public boolean updateNode(String dbid, Node node) throws Exception;
    method deleteNode (line 99) | public boolean deleteNode(String dbid, int type, long id) throws Excep...
    method clearErrors (line 101) | public void clearErrors(int loaderId);
    method close (line 106) | public void close();
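
The NodeStore interface manages nodes whose IDs the store itself assigns, with `resetNodeStore(dbid, startID)` setting the starting point. A hypothetical sketch of that allocation pattern (names and types are illustrative, not the repo's):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the NodeStore contract: the store allocates
// sequential node IDs from a configurable start, as resetNodeStore suggests.
class TinyNodeStore {
    private final Map<Long, byte[]> nodes = new HashMap<>();
    private long nextId;

    public void resetNodeStore(long startID) {
        nodes.clear();
        nextId = startID;
    }

    // Mirrors addNode: the store allocates and returns the new node's id.
    public long addNode(byte[] data) {
        long id = nextId++;
        nodes.put(id, data);
        return id;
    }

    public byte[] getNode(long id) { return nodes.get(id); }

    public boolean deleteNode(long id) { return nodes.remove(id) != null; }
}
```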

FILE: src/main/java/com/facebook/LinkBench/Phase.java
  type Phase (line 22) | public enum Phase {

FILE: src/main/java/com/facebook/LinkBench/RealDistribution.java
  class RealDistribution (line 38) | public class RealDistribution extends PiecewiseLinearDistribution {
    type DistributionType (line 75) | public static enum DistributionType {
    method RealDistribution (line 88) | public RealDistribution() {
    method init (line 92) | @Override
    method init (line 122) | public void init(Properties props, long min, long max,
    method loadOneShot (line 173) | public static synchronized void loadOneShot(Properties props) {
    method getArea (line 213) | static double getArea(DistributionType type) {
    method readCDF (line 223) | private static ArrayList<Point> readCDF(String filePath, Scanner scann...
    method getCDF (line 246) | static NavigableMap<Integer, Double> getCDF(DistributionType dist) {
    method getStatisticalData (line 274) | private static void getStatisticalData(Properties props) throws FileNo...
    method getNlinks (line 358) | static long getNlinks(long id1, long startid1, long maxid1) {
    method choose (line 363) | @Override
    method getShuffler (line 371) | public static InvertibleShuffler getShuffler(DistributionType type, lo...

FILE: src/main/java/com/facebook/LinkBench/Shuffler.java
  class Shuffler (line 23) | class Shuffler {
    method getPermutationValue (line 40) | static long getPermutationValue(long i, long n, long m) {
    method getPermutationValue (line 54) | static long getPermutationValue(long i, long start, long end, long m) {
    method getPermutationValue (line 60) | static long getPermutationValue(long i, long start, long end, long[] m...

FILE: src/main/java/com/facebook/LinkBench/Timer.java
  class Timer (line 20) | public class Timer {
    method waitExpInterval (line 30) | public static long waitExpInterval(Random rng,
    method waitUntil (line 42) | public static void waitUntil(long time_ns) {
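
`Timer.waitExpInterval` points at the standard way to pace a request generator: draw inter-arrival times from an exponential distribution so arrivals form a Poisson process at the target rate. A standalone sketch of that draw (the method name and rate parameter here are assumptions, not the repo's exact signature):

```java
import java.util.Random;

// Illustrative exponential inter-arrival sampling for request pacing.
class ExpInterval {
    // Returns a wait in nanoseconds drawn from Exp(requestsPerSec),
    // i.e. mean wait = 1/requestsPerSec seconds.
    public static long expIntervalNs(Random rng, double requestsPerSec) {
        double u = rng.nextDouble();               // uniform in [0, 1)
        double waitSec = -Math.log(1.0 - u) / requestsPerSec; // inverse CDF
        return (long) (waitSec * 1e9);
    }
}
```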

FILE: src/main/java/com/facebook/LinkBench/distributions/AccessDistributions.java
  class AccessDistributions (line 38) | public class AccessDistributions {
    type AccessDistribution (line 39) | public interface AccessDistribution {
      method nextID (line 46) | public abstract long nextID(Random rng, long previousId);
      method getShuffler (line 53) | public abstract InvertibleShuffler getShuffler();
    class BuiltinAccessDistribution (line 56) | public static class BuiltinAccessDistribution implements AccessDistrib...
      method BuiltinAccessDistribution (line 65) | public BuiltinAccessDistribution(AccessDistMode mode,
      method nextID (line 75) | @Override
      method getShuffler (line 117) | @Override
    class ProbAccessDistribution (line 124) | public static class ProbAccessDistribution implements AccessDistributi...
      method ProbAccessDistribution (line 128) | public ProbAccessDistribution(ProbabilityDistribution dist,
      method nextID (line 135) | @Override
      method getShuffler (line 140) | @Override
    type AccessDistMode (line 147) | public static enum AccessDistMode {
    method loadAccessDistribution (line 157) | public static AccessDistribution loadAccessDistribution(Properties props,
    method tryDynamicLoad (line 223) | private static AccessDistribution tryDynamicLoad(String className,

FILE: src/main/java/com/facebook/LinkBench/distributions/ApproxHarmonic.java
  class ApproxHarmonic (line 21) | public class ApproxHarmonic {
    method generalizedHarmonic (line 37) | public static double generalizedHarmonic(final long n,

FILE: src/main/java/com/facebook/LinkBench/distributions/GeometricDistribution.java
  class GeometricDistribution (line 33) | public class GeometricDistribution implements ProbabilityDistribution {
    method init (line 45) | @Override
    method init (line 57) | public void init(long min, long max, double p, double scale) {
    method pdf (line 64) | @Override
    method expectedCount (line 69) | @Override
    method scaledPdf (line 74) | private double scaledPdf(long id, double scaleFactor) {
    method cdf (line 80) | @Override
    method choose (line 87) | @Override
    method quantile (line 92) | @Override
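
The `quantile`/`choose` pair above suggests inverse-CDF sampling. For a geometric distribution on {0, 1, 2, ...} with success probability p, the quantile has a closed form: the smallest k with 1-(1-p)^(k+1) >= u. A standalone sketch (the repo's GeometricDistribution also applies min/max bounds and a scale factor not modeled here):

```java
// Closed-form inverse CDF for a geometric distribution on {0,1,2,...}.
class GeomQuantile {
    public static long quantile(double u, double p) {
        if (u <= 0) return 0;
        // Solve (1-p)^(k+1) <= 1-u for the smallest integer k.
        long k = (long) Math.ceil(Math.log(1.0 - u) / Math.log(1.0 - p)) - 1;
        return Math.max(k, 0);
    }
}
```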

FILE: src/main/java/com/facebook/LinkBench/distributions/Harmonic.java
  class Harmonic (line 26) | public class Harmonic {
    method generalizedHarmonic (line 36) | public static double generalizedHarmonic(final long n, final double m) {
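
The generalized harmonic number H(n, m) = sum over k=1..n of 1/k^m is the normalization constant for a Zipf distribution, which is why both Harmonic and ApproxHarmonic exist here. A direct sketch of the exact sum (the repo's ApproxHarmonic presumably trades accuracy for speed at large n):

```java
// H(n, m) = sum_{k=1..n} 1/k^m — the Zipf normalization constant.
class HarmonicSketch {
    public static double generalizedHarmonic(long n, double m) {
        double sum = 0.0;
        // Summing the smallest terms first (k = n down to 1) reduces
        // floating-point rounding error, a common trick for this sum.
        for (long k = n; k >= 1; k--) {
            sum += 1.0 / Math.pow(k, m);
        }
        return sum;
    }
}
```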

FILE: src/main/java/com/facebook/LinkBench/distributions/ID2Chooser.java
  class ID2Chooser (line 34) | public class ID2Chooser {
    method ID2Chooser (line 69) | public ID2Chooser(Properties props, long startid1, long maxid1,
    method chooseForLoad (line 97) | public long chooseForLoad(Random rng, long id1, long link_type,
    method chooseForOp (line 114) | public long chooseForOp(Random rng, long id1, long linkType,
    method chooseMultipleForOp (line 122) | public long[] chooseMultipleForOp(Random rng, long id1, long linkType,
    method contains (line 159) | private boolean contains(long[] id2s, int n, long id2) {
    method calcID2Range (line 169) | private long calcID2Range(double pExisting, long nlinks) {
    method chooseForOpInternal (line 183) | private long chooseForOpInternal(Random rng, long id1, long range) {
    method calcTotalLinkCount (line 211) | public long calcTotalLinkCount(long id1) {
    method fixId2 (line 230) | private static long fixId2(long id2, long nrequesters,
    method getLinkTypes (line 238) | public long[] getLinkTypes() {
    method chooseRandomLinkType (line 251) | public long chooseRandomLinkType(Random rng) {
    method calcLinkCount (line 255) | public long calcLinkCount(long id1, long linkType) {

FILE: src/main/java/com/facebook/LinkBench/distributions/LinkDistributions.java
  class LinkDistributions (line 29) | public class LinkDistributions {
    type LinkDistribution (line 30) | public static interface LinkDistribution {
      method getNlinks (line 31) | public abstract long getNlinks(long id1);
      method doShuffle (line 37) | public boolean doShuffle();
    class ProbLinkDistribution (line 40) | public static class ProbLinkDistribution implements LinkDistribution {
      method ProbLinkDistribution (line 43) | public ProbLinkDistribution(ProbabilityDistribution dist) {
      method getNlinks (line 47) | @Override
      method doShuffle (line 53) | @Override
    type LinkDistMode (line 62) | public static enum LinkDistMode {
    class ArithLinkDistribution (line 74) | public static class ArithLinkDistribution implements LinkDistribution {
      method ArithLinkDistribution (line 82) | public ArithLinkDistribution(long minid1, long maxid1, LinkDistMode ...
      method getNlinks (line 95) | @Override
      method doShuffle (line 136) | @Override
    method loadLinkDistribution (line 143) | public static LinkDistribution loadLinkDistribution(Properties props,
    method tryDynamicLoad (line 192) | private static LinkDistribution tryDynamicLoad(String className,

FILE: src/main/java/com/facebook/LinkBench/distributions/LogNormalDistribution.java
  class LogNormalDistribution (line 25) | public class LogNormalDistribution implements ProbabilityDistribution {
    method init (line 34) | @Override
    method init (line 49) | public void init(long min, long max, double median, double sigma) {
    method pdf (line 56) | @Override
    method expectedCount (line 61) | @Override
    method cdf (line 66) | @Override
    method choose (line 75) | @Override
    method quantile (line 86) | @Override

FILE: src/main/java/com/facebook/LinkBench/distributions/PiecewiseLinearDistribution.java
  class PiecewiseLinearDistribution (line 53) | public abstract class PiecewiseLinearDistribution implements Probability...
    class Point (line 56) | public static class Point implements Comparable<Point> {
      method Point (line 60) | public Point(int input_value, double input_probability) {
      method compareTo (line 65) | public int compareTo(Point obj) {
      method toString (line 70) | public String toString() {
    method init (line 75) | protected void init(long min, long max, ArrayList<Point> cdf) {
    method init (line 92) | protected void init(long min, long max, ArrayList<Point> cdf,
    method pdf (line 115) | @Override
    method expectedCount (line 122) | @Override
    method expectedCount (line 127) | public static double expectedCount(long min, long max, long id,
    method cdf (line 143) | @Override
    method quantile (line 151) | @Override
    method choose (line 157) | @Override
    method choose (line 162) | protected static long choose(Random rng, long startid1, long maxid1,
    method expectedValue (line 189) | protected static double expectedValue(ArrayList<Point> cdf) {
    method binarySearch (line 207) | public static int binarySearch(ArrayList<Point> points, double p) {
    method binarySearch (line 224) | public static int binarySearch(double[] a, double p) {
    method getPDF (line 235) | protected static double[] getPDF(ArrayList<Point> cdf) {
    method getCCDF (line 251) | protected static double[] getCCDF(double[] pdf) {
    method getCumulativeSum (line 261) | protected static double[] getCumulativeSum(double[] cdf) {
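
A piecewise-linear distribution built from CDF points depends on a lower-bound binary search: given a sorted array of cumulative probabilities, find the first bucket whose CDF value reaches p. A sketch in the spirit of `binarySearch(double[], double p)` above (the exact return convention in the repo may differ):

```java
// Lower-bound binary search over a sorted cumulative-probability array.
class CdfSearch {
    // Returns the index of the first entry >= p, or cdf.length if none.
    public static int firstAtLeast(double[] cdf, double p) {
        int lo = 0, hi = cdf.length;      // invariant: answer in [lo, hi]
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;    // overflow-safe midpoint
            if (cdf[mid] < p) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }
}
```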

FILE: src/main/java/com/facebook/LinkBench/distributions/ProbabilityDistribution.java
  type ProbabilityDistribution (line 28) | public interface ProbabilityDistribution {
    method init (line 40) | public abstract void init(long min, long max,
    method pdf (line 48) | public abstract double pdf(long id);
    method expectedCount (line 56) | public abstract double expectedCount(long id);
    method cdf (line 65) | public abstract double cdf(long id);
    method choose (line 73) | public abstract long choose(Random rng);
    method quantile (line 80) | public abstract long quantile(double p);

FILE: src/main/java/com/facebook/LinkBench/distributions/UniformDistribution.java
  class UniformDistribution (line 29) | public class UniformDistribution implements ProbabilityDistribution {
    method init (line 35) | public void init(long min, long max, Properties props, String keyPrefi...
    method init (line 50) | public void init(long min, long max, double scale) {
    method pdf (line 56) | @Override
    method expectedCount (line 61) | @Override
    method scaledPDF (line 66) | private double scaledPDF(long id, double scale) {
    method cdf (line 78) | public double cdf(long id) {
    method quantile (line 94) | public long quantile(double p) {
    method choose (line 106) | public long choose(Random rng) {
    method randint2 (line 127) | private long randint2(Random rng, long n) {
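
`randint2(Random rng, long n)` hints at a real problem: `Random.nextLong()` spans all 2^64 values, so a plain `% n` biases toward small remainders when n does not divide the range evenly. The standard fix is rejection sampling, sketched here (this is the well-known `java.util.Random` technique, not necessarily the repo's exact code):

```java
import java.util.Random;

// Unbiased uniform sampling of a long in [0, n) via rejection sampling.
class RandLong {
    public static long nextLong(Random rng, long n) {
        long bits, val;
        do {
            bits = rng.nextLong() >>> 1;        // non-negative 63-bit value
            val = bits % n;
            // If bits fell in the final partial block of size (2^63 mod n),
            // the subtraction overflows negative and we retry.
        } while (bits - val + (n - 1) < 0);
        return val;
    }
}
```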

FILE: src/main/java/com/facebook/LinkBench/distributions/ZipfDistribution.java
  class ZipfDistribution (line 28) | public class ZipfDistribution implements ProbabilityDistribution {
    method init (line 45) | @Override
    class CacheEntry (line 89) | private static class CacheEntry {
    method calcZetan (line 102) | private double calcZetan(long n) {
    method uncachedCalcZetan (line 131) | private double uncachedCalcZetan(long n) {
    method pdf (line 151) | @Override
    method expectedCount (line 156) | @Override
    method scaledPDF (line 161) | private double scaledPDF(long id, double scale) {
    method cdf (line 168) | @Override
    method choose (line 188) | @Override
    method quantile (line 199) | public long quantile(double p) {
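
The Zipf pdf over ranks 1..n is p(i) = (1/i^s) / H(n, s), where H(n, s) is the generalized harmonic number that `calcZetan` caches above. A standalone illustration of that formula (the repo's class additionally supports min/max offsets and scaling not modeled here):

```java
// Standalone Zipf pdf: p(i) = (1/i^s) / H(n, s) over ranks 1..n.
class ZipfSketch {
    // H(n, s): the normalizer that calcZetan corresponds to.
    public static double zetan(long n, double s) {
        double sum = 0;
        for (long k = 1; k <= n; k++) sum += 1.0 / Math.pow(k, s);
        return sum;
    }

    public static double pdf(long rank, long n, double s) {
        return (1.0 / Math.pow(rank, s)) / zetan(n, s);
    }
}
```

By construction the pdf sums to 1 over ranks 1..n, and lower ranks are strictly more probable — the skew that makes Zipf useful for modeling hot keys in a social graph workload.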

FILE: src/main/java/com/facebook/LinkBench/generators/DataGenerator.java
  type DataGenerator (line 21) | public interface DataGenerator {
    method init (line 23) | public void init(Properties props, String keyPrefix);
    method fill (line 31) | public byte[] fill(Random rng, byte data[]);

FILE: src/main/java/com/facebook/LinkBench/generators/MotifDataGenerator.java
  class MotifDataGenerator (line 50) | public class MotifDataGenerator implements DataGenerator {
    method MotifDataGenerator (line 74) | public MotifDataGenerator() {
    method init (line 85) | public void init(int start, int end, double uniqueness) {
    method init (line 89) | public void init(int start, int end, double uniqueness, int motifBytes) {
    method init (line 110) | @Override
    method estMaxCompression (line 132) | public double estMaxCompression() {
    method fill (line 141) | @Override

FILE: src/main/java/com/facebook/LinkBench/generators/UniformDataGenerator.java
  class UniformDataGenerator (line 33) | public class UniformDataGenerator implements DataGenerator {
    method UniformDataGenerator (line 37) | public UniformDataGenerator() {
    method init (line 47) | public void init(int start, int end) {
    method init (line 65) | @Override
    method fill (line 74) | @Override
    method gen (line 79) | public static byte[] gen(Random rng, byte[] data,

FILE: src/main/java/com/facebook/LinkBench/stats/LatencyStats.java
  class LatencyStats (line 36) | public class LatencyStats {
    method LatencyStats (line 51) | public LatencyStats(int maxThreads) {
    method latencyToBucket (line 78) | public static int latencyToBucket(long microTime) {
    method bucketBound (line 105) | public static long[] bucketBound(int bucket) {
    method recordLatency (line 135) | public void recordLatency(int threadid, LinkBenchOp type,
    method displayLatencyStats (line 156) | public void displayLatencyStats() {
    method printCSVStats (line 180) | public void printCSVStats(PrintStream out, boolean header) {
    method printCSVStats (line 184) | public void printCSVStats(PrintStream out, boolean header, LinkBenchOp...
    method calcMeans (line 228) | private void calcMeans() {
    method calcCumulativeBuckets (line 252) | private void calcCumulativeBuckets() {
    method getBucketBounds (line 267) | private long[] getBucketBounds(LinkBenchOp type, long percentile) {
    method percentileString (line 292) | private String percentileString(LinkBenchOp type, long percentile) {
    method boundsToString (line 296) | static String boundsToString(long[] bucketBounds) {
    method getMean (line 304) | private double getMean(LinkBenchOp type) {
    method getMax (line 308) | private double getMax(LinkBenchOp type) {
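
`latencyToBucket`/`bucketBound` imply a fixed histogram over microsecond latencies. One plausible scheme — purely an assumption, the repo's actual bucket layout may differ — is logarithmic bucketing, where bucket b covers [2^b, 2^(b+1)) microseconds:

```java
// Illustrative log2 latency bucketing; bucket b covers [2^b, 2^(b+1)) us.
class LatencyBuckets {
    public static int latencyToBucket(long micros) {
        if (micros <= 1) return 0;
        return 63 - Long.numberOfLeadingZeros(micros); // floor(log2(micros))
    }

    // Returns {lowInclusive, highExclusive} for a bucket, mirroring
    // the long[] return of bucketBound above.
    public static long[] bucketBound(int bucket) {
        return new long[]{1L << bucket, 1L << (bucket + 1)};
    }
}
```

Log buckets keep relative error roughly constant across the range, which is why they are a common choice for latency histograms feeding percentile estimates.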

FILE: src/main/java/com/facebook/LinkBench/stats/RunningMean.java
  class RunningMean (line 23) | public class RunningMean {
    method RunningMean (line 34) | public RunningMean(double v1) {
    method addSample (line 41) | public void addSample(double vi) {
    method mean (line 46) | public double mean() {
    method samples (line 50) | public long samples() {
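
The `RunningMean(v1)`/`addSample`/`mean`/`samples` API above suggests an incremental mean, updated per sample instead of summing everything (which can overflow or lose precision over long runs). A sketch matching that shape:

```java
// Numerically stable running mean: mean_n = mean_{n-1} + (v - mean_{n-1})/n.
class Mean {
    private double mean;
    private long n;

    public Mean(double v1) { mean = v1; n = 1; }  // seeded with first sample

    public void addSample(double v) {
        n++;
        mean += (v - mean) / n;
    }

    public double mean() { return mean; }
    public long samples() { return n; }
}
```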

FILE: src/main/java/com/facebook/LinkBench/stats/SampledStats.java
  class SampledStats (line 43) | public class SampledStats {
    method SampledStats (line 77) | public SampledStats(int input_threadID,
    method addStats (line 94) | public void addStats(LinkBenchOp type, long timetaken, boolean error) {
    method resetSamples (line 126) | public void resetSamples() {
    method displayStats (line 140) | private void displayStats(LinkBenchOp type, int start, int end,
    method writeCSVHeader (line 212) | public static void writeCSVHeader(PrintStream out) {
    method displayStatsAll (line 218) | public void displayStatsAll(long sampleStartTime_ms, long nowTime_ms) {
    method displayStats (line 223) | public void displayStats(long sampleStartTime_ms, long nowTime_ms,
    method getCount (line 234) | public long getCount(LinkBenchOp type) {

FILE: src/main/java/com/facebook/LinkBench/util/ClassLoadUtil.java
  class ClassLoadUtil (line 23) | public class ClassLoadUtil {
    method getClassByName (line 33) | public static Class<?> getClassByName(String name)
    method newInstance (line 45) | public static <T> T newInstance(Class<?> theClass, Class<T> expected) {
    method newInstance (line 62) | public static <T> T newInstance(String className, Class<T> expected)
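
`ClassLoadUtil.newInstance(String, Class<T>)` is the hook that lets config strings name pluggable implementations (LinkStore backends, custom distributions). A sketch of that pattern — resolve by name, check the expected supertype, instantiate via the no-arg constructor; error handling here is my own choice, not the repo's:

```java
import java.util.List;

// Reflective plugin loading: config string -> checked, typed instance.
class Loader {
    public static <T> T newInstance(String className, Class<T> expected) {
        try {
            Class<?> clazz = Class.forName(className);
            if (!expected.isAssignableFrom(clazz)) {
                throw new ClassCastException(
                    className + " is not a " + expected.getName());
            }
            return expected.cast(clazz.getDeclaredConstructor().newInstance());
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("could not load " + className, e);
        }
    }
}
```

For example, `Loader.newInstance("java.util.ArrayList", List.class)` yields an empty `List`, while asking for a class that does not implement the expected type fails fast with a `ClassCastException`.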

FILE: src/test/java/com/facebook/LinkBench/DistributionTestBase.java
  class DistributionTestBase (line 42) | public abstract class DistributionTestBase extends TestCase {
    method getDist (line 44) | protected abstract ProbabilityDistribution getDist();
    method getDistParams (line 46) | protected Properties getDistParams() {
    method cdfChecks (line 51) | protected int cdfChecks() {
    method pdfChecks (line 56) | protected int pdfChecks() {
    method getBucketer (line 60) | protected Bucketer getBucketer() {
    method tolerance (line 65) | protected double tolerance() {
    method initRandom (line 69) | public Random initRandom(String testName) {
    method testCDFSanity (line 78) | @Test
    method testPDFSanity (line 99) | @Test
    method testPDFSum (line 125) | @Test
    method testChooseSanity (line 147) | @Test
    method testCDFChooseConsistency (line 164) | @Test
    method testCDFPDFConsistency (line 207) | @Test
    method testQuantileSanity (line 227) | @Test
    type Bucketer (line 254) | static interface Bucketer {
      method getBucketCount (line 255) | public int getBucketCount();
      method chooseBucket (line 256) | public int chooseBucket(long i, long n);
      method bucketMax (line 257) | public long bucketMax(int bucket, long n);
    class UniformBucketer (line 260) | static class UniformBucketer implements Bucketer {
      method UniformBucketer (line 263) | public UniformBucketer(int bucketCount) {
      method getBucketCount (line 267) | public int getBucketCount() {
      method chooseBucket (line 271) | public int chooseBucket(long i, long n) {
      method bucketMax (line 275) | public long bucketMax(int bucket, long n) {

FILE: src/test/java/com/facebook/LinkBench/DummyLinkStore.java
  class DummyLinkStore (line 32) | public class DummyLinkStore extends GraphStore {
    method DummyLinkStore (line 37) | public DummyLinkStore() {
    method DummyLinkStore (line 41) | public DummyLinkStore(LinkStore wrappedStore) {
    method DummyLinkStore (line 45) | public DummyLinkStore(LinkStore wrappedStore, boolean alreadyInitializ...
    method isRealLinkStore (line 56) | public boolean isRealLinkStore() {
    method isRealGraphStore (line 63) | public boolean isRealGraphStore() {
    method initialize (line 89) | @Override
    method close (line 101) | @Override
    method clearErrors (line 110) | @Override
    method addLink (line 118) | @Override
    method deleteLink (line 129) | @Override
    method updateLink (line 142) | @Override
    method multigetLinks (line 155) | @Override
    method getLink (line 167) | @Override
    method getLinkList (line 179) | @Override
    method getLinkList (line 191) | @Override
    method countLinks (line 206) | @Override
    method checkInitialized (line 218) | private void checkInitialized() {
    method bulkLoadBatchSize (line 224) | @Override
    method addBulkLinks (line 233) | @Override
    method addBulkCounts (line 243) | @Override
    method getRangeLimit (line 252) | @Override
    method setRangeLimit (line 261) | @Override
    method resetNodeStore (line 270) | @Override
    method addNode (line 277) | @Override
    method getNode (line 286) | @Override
    method updateNode (line 295) | @Override
    method deleteNode (line 304) | @Override

FILE: src/test/java/com/facebook/LinkBench/DummyLinkStoreTest.java
  class DummyLinkStoreTest (line 21) | public class DummyLinkStoreTest extends LinkStoreTestBase {
    method initStore (line 25) | @Override
    method getStoreHandle (line 31) | @Override

FILE: src/test/java/com/facebook/LinkBench/GeneratedDataDump.java
  class GeneratedDataDump (line 35) | public class GeneratedDataDump {
    method main (line 38) | public static void main(String args[]) {
    method writeGeneratedDataFile (line 62) | private static void writeGeneratedDataFile(String outFileName,
    method makeUniformObj (line 88) | private static DataGenerator makeUniformObj() {
    method makeMotifObj (line 94) | private static MotifDataGenerator makeMotifObj() {
    method makeUniformAssoc (line 103) | private static DataGenerator makeUniformAssoc() {
    method makeMotifAssoc (line 109) | private static MotifDataGenerator makeMotifAssoc() {

FILE: src/test/java/com/facebook/LinkBench/GeomDistTest.java
  class GeomDistTest (line 25) | public class GeomDistTest extends DistributionTestBase {
    method getDist (line 27) | @Override
    method getDistParams (line 32) | @Override
    method testGeom (line 42) | @Test

FILE: src/test/java/com/facebook/LinkBench/GraphStoreTestBase.java
  class GraphStoreTestBase (line 33) | public abstract class GraphStoreTestBase extends TestCase {
    method initStore (line 44) | protected abstract void initStore(Properties props)
    method getIDCount (line 51) | protected long getIDCount() {
    method getRequestCount (line 58) | protected int getRequestCount() {
    method maxConcurrentThreads (line 65) | protected int maxConcurrentThreads() {
    method getStoreHandle (line 73) | protected abstract DummyLinkStore getStoreHandle(boolean initialized)
    method setUp (line 76) | @Override
    method basicProps (line 86) | protected Properties basicProps() {
    method fillLoadProps (line 93) | public static void fillLoadProps(Properties props, long startId, long ...
    method fillReqProps (line 105) | public static void fillReqProps(Properties props, long startId, long i...
    method testFullWorkload (line 152) | @Test
    method deleteNodeIDRange (line 239) | static void deleteNodeIDRange(String testDB, int type,
    method serialLoadNodes (line 247) | private void serialLoadNodes(Random rng, Logger logger, Properties props,

FILE: src/test/java/com/facebook/LinkBench/HarmonicTest.java
  class HarmonicTest (line 28) | @Category(SlowTest.class)
    method testHarmonic (line 31) | @Test
    method testApproxFast (line 45) | @Test
    method testApproxSlow (line 52) | @Test
    method testApproxHelper (line 61) | private void testApproxHelper(long i) {

FILE: src/test/java/com/facebook/LinkBench/ID2ChooserTest.java
  class ID2ChooserTest (line 30) | public class ID2ChooserTest extends TestCase {
    method setUp (line 33) | @Override
    method testNoLoadCollisions (line 42) | @Test
    method testChooseForOp (line 68) | @Test
    method testMatchPercent (line 92) | @Test
    method testLinkCount (line 135) | @Test

FILE: src/test/java/com/facebook/LinkBench/InvertibleShufflerTest.java
  class InvertibleShufflerTest (line 26) | public class InvertibleShufflerTest extends TestCase {
    method testShuffleSmallRange (line 29) | @Test
    method testShuffleMedRange (line 42) | @Test
    method testShuffleLargeRange (line 48) | @Test
    method testShuffle (line 59) | public static void testShuffle(int minId, int maxId, boolean print,

FILE: src/test/java/com/facebook/LinkBench/LinkStoreTestBase.java
  class LinkStoreTestBase (line 53) | public abstract class LinkStoreTestBase extends TestCase {
    method initStore (line 64) | protected abstract void initStore(Properties props)
    method getIDCount (line 71) | protected long getIDCount() {
    method getRequestCount (line 78) | protected int getRequestCount() {
    method maxConcurrentThreads (line 85) | protected int maxConcurrentThreads() {
    method getStoreHandle (line 93) | protected abstract DummyLinkStore getStoreHandle(boolean initialized)
    method setUp (line 96) | @Override
    method basicProps (line 106) | protected Properties basicProps() {
    method fillLoadProps (line 112) | public static void fillLoadProps(Properties props, long startId, long ...
    method fillReqProps (line 134) | public static void fillReqProps(Properties props, long startId, long i...
    method createRNG (line 194) | static Random createRNG() {
    method testOneLink (line 202) | @Test
    method testMultipleLinks (line 268) | @Test
    method testMultiget (line 340) | @Test
    method testHiding (line 370) | @Test
    method testOverwrite (line 398) | @Test
    method testSqlInjection (line 440) | @Test
    method testAddThenUpdate (line 449) | private void testAddThenUpdate(Link l, byte[] updateData) throws IOExc...
    method testBinary1 (line 470) | @Test
    method testBinary2 (line 476) | @Test
    method testBinary3 (line 483) | @Test
    method binaryDataTest (line 495) | private void binaryDataTest(int startByte, int dataMaxSize)
    method testLoader (line 520) | @Test
    method testRequester (line 570) | @Test
    method testRequesterThrottling (line 635) | @Test
    method testHistoryRequests (line 683) | @Test
    method checkExpectedList (line 744) | private void checkExpectedList(DummyLinkStore store,
    method serialLoad (line 771) | static void serialLoad(Random rng, Logger logger, Properties props,
    method validateLoadedData (line 803) | private void validateLoadedData(Logger logger, DummyLinkStore wrappedS...
    method deleteIDRange (line 834) | static void deleteIDRange(String testDB,

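LinkStoreTestBase (like GraphStoreTestBase and NodeStoreTestBase above) follows a template-method layout: the shared @Test methods run against abstract hooks such as initStore and getStoreHandle, and subclasses like MemoryLinkStoreTest or MySqlLinkStoreTest supply the backend while optionally tuning defaults such as getIDCount/getRequestCount. A hedged condensation of that structure, with hypothetical simplified names (StoreTestBase, MemoryStoreTest):

```java
import java.util.Properties;

// Hypothetical condensation of the LinkStoreTestBase template-method
// layout: shared test logic calls abstract hooks that each backend's
// subclass overrides.
abstract class StoreTestBase {
  // Backend-specific setup, like initStore(Properties) in the real base.
  protected abstract void initStore(Properties props);

  // Backend-specific handle, like getStoreHandle(boolean initialized).
  protected abstract String getStoreHandle(boolean initialized);

  // Tunable default, like getIDCount()/getRequestCount().
  protected long getIDCount() { return 50_000; }

  // Stands in for a shared @Test method that uses the hooks.
  final String runOneTest() {
    initStore(new Properties());
    return getStoreHandle(true) + ":" + getIDCount();
  }
}

// Stands in for MemoryLinkStoreTest: cheap backend, smaller data set.
class MemoryStoreTest extends StoreTestBase {
  @Override protected void initStore(Properties props) { /* in-memory */ }
  @Override protected String getStoreHandle(boolean initialized) {
    return "memory";
  }
  @Override protected long getIDCount() { return 100; }
}
```

In the real suite this is why the MySQL variants can shrink getIDCount and getRequestCount relative to the memory store: the shared tests never hard-code sizes, they always ask the subclass.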
FILE: src/test/java/com/facebook/LinkBench/LogNormalTest.java
  class LogNormalTest (line 23) | public class LogNormalTest extends DistributionTestBase {
    method getDist (line 25) | @Override
    method getDistParams (line 30) | @Override
    method tolerance (line 38) | @Override
    method testLogNormal (line 46) | public void testLogNormal() {
    method testPDFSanity (line 62) | @Override
    method testPDFSum (line 67) | @Override
    method testCDFPDFConsistency (line 72) | @Override
    method testQuantileSanity (line 77) | @Override

FILE: src/test/java/com/facebook/LinkBench/MemoryGraphStoreTest.java
  class MemoryGraphStoreTest (line 21) | public class MemoryGraphStoreTest extends GraphStoreTestBase {
    method initStore (line 24) | @Override
    method getStoreHandle (line 29) | @Override

FILE: src/test/java/com/facebook/LinkBench/MemoryLinkStoreTest.java
  class MemoryLinkStoreTest (line 21) | public class MemoryLinkStoreTest extends LinkStoreTestBase {
    method setUp (line 25) | @Override
    method basicProps (line 30) | @Override
    method initStore (line 37) | @Override
    method getStoreHandle (line 43) | @Override

FILE: src/test/java/com/facebook/LinkBench/MemoryNodeStoreTest.java
  class MemoryNodeStoreTest (line 21) | public class MemoryNodeStoreTest extends NodeStoreTestBase {
    method initNodeStore (line 23) | @Override
    method getNodeStoreHandle (line 29) | @Override

FILE: src/test/java/com/facebook/LinkBench/MySqlGraphStoreTest.java
  class MySqlGraphStoreTest (line 26) | @Category(MySqlTest.class)
    method initStore (line 32) | @Override
    method getIDCount (line 40) | @Override
    method getRequestCount (line 46) | @Override
    method tearDown (line 53) | @Override
    method basicProps (line 59) | @Override
    method getStoreHandle (line 67) | @Override

FILE: src/test/java/com/facebook/LinkBench/MySqlLinkStoreTest.java
  class MySqlLinkStoreTest (line 33) | @Category(MySqlTest.class)
    method getIDCount (line 41) | @Override
    method getRequestCount (line 47) | @Override
    method basicProps (line 53) | protected Properties basicProps() {
    method initStore (line 60) | @Override
    method getStoreHandle (line 73) | @Override
    method tearDown (line 82) | @Override protected void tearDown() throws Exception {

FILE: src/test/java/com/facebook/LinkBench/MySqlNodeStoreTest.java
  class MySqlNodeStoreTest (line 26) | @Category(MySqlTest.class)
    method basicProps (line 32) | @Override
    method initNodeStore (line 39) | @Override
    method getNodeStoreHandle (line 47) | @Override

FILE: src/test/java/com/facebook/LinkBench/MySqlTestConfig.java
  class MySqlTestConfig (line 29) | public class MySqlTestConfig {
    method fillMySqlTestServerProps (line 40) | public static void fillMySqlTestServerProps(Properties props) {
    method createConnection (line 52) | static Connection createConnection(String testDB)
    method createTestTables (line 67) | static void createTestTables(Connection conn, String testDB)
    method dropTestTables (line 104) | static void dropTestTables(Connection conn, String testDB)

FILE: src/test/java/com/facebook/LinkBench/NodeStoreTestBase.java
  class NodeStoreTestBase (line 38) | public abstract class NodeStoreTestBase extends TestCase {
    method initNodeStore (line 42) | protected abstract void initNodeStore(Properties props)
    method getNodeStoreHandle (line 45) | protected abstract NodeStore getNodeStoreHandle(boolean initialized)
    method basicProps (line 48) | protected Properties basicProps() {
    method setUp (line 54) | @Override
    method testIDAlloc (line 64) | @Test
    method testUpdate (line 119) | @Test
    method testBinary (line 134) | @Test

FILE: src/test/java/com/facebook/LinkBench/PiecewiseDistTest.java
  class PiecewiseDistTest (line 25) | public class PiecewiseDistTest extends DistributionTestBase {
    method setUp (line 28) | @Override
    method cdfChecks (line 50) | @Override
    method getDist (line 55) | @Override
    method testCDFSanity (line 66) | @Override
    method testCDFChooseConsistency (line 71) | @Override
    method testCDFPDFConsistency (line 76) | @Override
    method testQuantileSanity (line 81) | @Override

FILE: src/test/java/com/facebook/LinkBench/TestAccessDistribution.java
  class TestAccessDistribution (line 35) | public class TestAccessDistribution extends TestCase {
    method testMultiple (line 37) | @Test
    method testPerfectPower (line 42) | @Test
    method testPower (line 47) | @Test
    method testReciprocal (line 52) | @Test
    method testRoundRobin (line 57) | @Test
    method testUniform (line 62) | @Test
    method testZipf (line 76) | @Test
    method testReal (line 92) | @Test
    method testSanityBuiltinDist (line 110) | public static void testSanityBuiltinDist(AccessDistMode mode, long con...
    method testSanityAccessDist (line 121) | public static void testSanityAccessDist(AccessDistribution dist, long ...

FILE: src/test/java/com/facebook/LinkBench/TestDataGen.java
  class TestDataGen (line 34) | @Category(SlowTest.class)
    method printByteGrid (line 37) | public static void printByteGrid(byte[] data) {
    method testTimingUniform (line 49) | @Test
    method testTimingMotif (line 68) | @Test
    method testTiming (line 83) | private void testTiming(DataGenFactory fact, int bufSize) {
    type DataGenFactory (line 113) | private static interface DataGenFactory {
      method make (line 114) | public abstract DataGenerator make(double param);
    method doTest (line 117) | private long doTest(DataGenFactory fact, byte[] buf, Random rng, int t...
    method testMotif (line 134) | @Test
    method testCompressibility (line 164) | @Test
    method testCompressibility (line 199) | private void testCompressibility(MotifDataGenerator gen, int blockSize...

FILE: src/test/java/com/facebook/LinkBench/TestRealDistribution.java
  class TestRealDistribution (line 38) | public class TestRealDistribution extends TestCase {
    method setUp (line 42) | @Override
    method testBinSearchRegression1 (line 51) | public void testBinSearchRegression1() {
    method testBinSearchRegression2 (line 67) | public void testBinSearchRegression2() {
    method testGetNLinks (line 81) | @Test
    method testGetNextId1 (line 102) | @Test
    method getDistribution (line 136) | private static NavigableMap<Integer, Double> getDistribution(int[] seq,
    method getComparisonError (line 157) | private static double getComparisonError(NavigableMap<Integer, Double>...
    method interpolatedValue (line 182) | private static double interpolatedValue(NavigableMap<Integer, Double> ...
    method testGetNextId1 (line 196) | private static double testGetNextId1(Properties props, Random rng,
    method testGetNlinks (line 221) | private static double testGetNlinks(Properties props,

FILE: src/test/java/com/facebook/LinkBench/TestStats.java
  class TestStats (line 25) | public class TestStats extends TestCase {
    method testBucketing (line 27) | @Test

FILE: src/test/java/com/facebook/LinkBench/TimerTest.java
  class TimerTest (line 24) | public class TimerTest extends TestCase {
    method testTimer1 (line 26) | @Test
    method testTimer2 (line 44) | @Test
    method testExponentialArrivals (line 61) | @Test

FILE: src/test/java/com/facebook/LinkBench/UniformDistTest.java
  class UniformDistTest (line 25) | public class UniformDistTest extends DistributionTestBase {
    method getDist (line 27) | @Override
    method testInRange (line 32) | @Test

FILE: src/test/java/com/facebook/LinkBench/ZipfDistTest.java
  class ZipfDistTest (line 23) | public class ZipfDistTest extends DistributionTestBase {
    method cdfChecks (line 25) | @Override
    method getDist (line 31) | @Override
    method getDistParams (line 36) | @Override
    method getBucketer (line 44) | @Override
    method tolerance (line 49) | @Override
    class ZipfBucketer (line 61) | static class ZipfBucketer implements Bucketer {
      method getBucketCount (line 66) | public int getBucketCount() {
      method chooseBucket (line 70) | public int chooseBucket(long i, long n) {
      method bucketMax (line 79) | public long bucketMax(int bucket, long n) {

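ZipfDistTest's nested ZipfBucketer (and the Bucketer methods chooseBucket/bucketMax at the top of this section) suggest the distribution tests histogram samples into buckets and compare observed against expected frequency per bucket. A sketch of one plausible Bucketer implementation, assuming that contract; the power-of-two bucketing here is illustrative, not the repo's actual scheme:

```java
// Hypothetical Bucketer implementation matching the signatures in the
// outline: chooseBucket(i, n) maps a sample to a bucket index and
// bucketMax(bucket, n) returns the largest value the bucket covers.
// Buckets grow by powers of two, which suits heavy-tailed (Zipf-like)
// data: fine resolution for small values, coarse for the long tail.
class Log2Bucketer {
  public int getBucketCount() { return 63; }

  // Bucket k holds samples i with 2^k <= i + 1 < 2^(k+1).
  public int chooseBucket(long i, long n) {
    return 63 - Long.numberOfLeadingZeros(i + 1);
  }

  // Largest sample in the bucket, clamped to the domain [0, n).
  public long bucketMax(int bucket, long n) {
    return Math.min(n - 1, (1L << (bucket + 1)) - 2);
  }
}
```

With this scheme sample 0 lands alone in bucket 0, samples 1..2 in bucket 1, 3..6 in bucket 2, and so on, so a test can assert that each bucket's observed count is within tolerance of the CDF mass between consecutive bucketMax values.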
FILE: src/test/java/com/facebook/LinkBench/testtypes/MySqlTest.java
  type MySqlTest (line 3) | public interface MySqlTest extends ProviderTest {

FILE: src/test/java/com/facebook/LinkBench/testtypes/ProviderTest.java
  type ProviderTest (line 7) | public interface ProviderTest {

FILE: src/test/java/com/facebook/LinkBench/testtypes/RocksDbTest.java
  type RocksDbTest (line 3) | public interface RocksDbTest extends ProviderTest {

FILE: src/test/java/com/facebook/LinkBench/testtypes/SlowTest.java
  type SlowTest (line 7) | public interface SlowTest {
Condensed preview — 89 files, each showing path, character count, and a content snippet (full structured content: 561K chars).
[
  {
    "path": ".arcconfig",
    "chars": 212,
    "preview": "{\n  \"project_id\" : \"linkbench\",\n  \"conduit_uri\" : \"https://reviews.facebook.net/\",\n  \"copyright_holder\" : \"\",\n  \"lint.en"
  },
  {
    "path": ".gitignore",
    "chars": 76,
    "preview": "target\nout\n.project\n.classpath\n.settings\n*.iws\n*.ipr\n*.iml\n.idea\n.DS_Store\n\n"
  },
  {
    "path": "DataModel.md",
    "chars": 8126,
    "preview": "LinkBench Data and Queries\n==========================\nFacebook has various types of object nodes, such as users, pages e"
  },
  {
    "path": "LICENSE",
    "chars": 11358,
    "preview": "\n                                 Apache License\n                           Version 2.0, January 2004\n                  "
  },
  {
    "path": "NOTICES",
    "chars": 696,
    "preview": "Several third party components are included with the LinkBench source\ndistribution.\n\nApache Commons CLI, Apache Commons "
  },
  {
    "path": "README.md",
    "chars": 21048,
    "preview": "- - -\n\n**_This project is not actively maintained. Proceed at your own risk!_**\n\n- - -  \n\nLinkBench Overview\n==========="
  },
  {
    "path": "bin/genswift",
    "chars": 1586,
    "preview": "#!/usr/bin/env bash\n\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreemen"
  },
  {
    "path": "bin/linkbench",
    "chars": 2281,
    "preview": "#!/usr/bin/env bash\n\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreemen"
  },
  {
    "path": "bin/swift-generator-cli-0.11.0-standalone.jar",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "config/FBWorkload.properties",
    "chars": 8003,
    "preview": "# LinkBench workload configuration file for Facebook social graph workload\n#\n#\n# Default parameters emulate a scaled-dow"
  },
  {
    "path": "config/LinkConfigMysql.properties",
    "chars": 4136,
    "preview": "# Sample MySQL LinkBench configuration file.\n#\n# This file contains settings for the data store, as well as controlling\n"
  },
  {
    "path": "config/LinkConfigRocksDb.properties",
    "chars": 3476,
    "preview": "# Sample RocksDb LinkBench configuration file.\n#\n# This file contains settings for the data store, as well as controllin"
  },
  {
    "path": "pom.xml",
    "chars": 10271,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/Config.java",
    "chars": 7247,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/ConfigUtil.java",
    "chars": 7406,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/GraphStore.java",
    "chars": 1105,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/InvertibleShuffler.java",
    "chars": 2972,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/Link.java",
    "chars": 2510,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkBenchConfigError.java",
    "chars": 811,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkBenchDriver.java",
    "chars": 21481,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkBenchDriverMR.java",
    "chars": 19438,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkBenchLoad.java",
    "chars": 20714,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkBenchOp.java",
    "chars": 1253,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkBenchRequest.java",
    "chars": 39109,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkBenchTask.java",
    "chars": 813,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkCount.java",
    "chars": 1023,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkStore.java",
    "chars": 6078,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkStoreHBaseGeneralAtomicityTesting.java",
    "chars": 11403,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkStoreMysql.java",
    "chars": 34631,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/LinkStoreRocksDb.java",
    "chars": 15114,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/MemoryLinkStore.java",
    "chars": 12928,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/Node.java",
    "chars": 1820,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/NodeLoader.java",
    "chars": 7613,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/NodeStore.java",
    "chars": 3401,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/Phase.java",
    "chars": 720,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/RealDistribution.java",
    "chars": 15916,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/Shuffler.java",
    "chars": 2107,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/Timer.java",
    "chars": 1717,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/distributions/AccessDistributions.java",
    "chars": 8397,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/distributions/ApproxHarmonic.java",
    "chars": 2298,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/distributions/GeometricDistribution.java",
    "chars": 3007,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/distributions/Harmonic.java",
    "chars": 1664,
    "preview": "package com.facebook.LinkBench.distributions;\n/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n "
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/distributions/ID2Chooser.java",
    "chars": 8497,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/distributions/LinkDistributions.java",
    "chars": 7210,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/distributions/LogNormalDistribution.java",
    "chars": 2689,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/distributions/PiecewiseLinearDistribution.java",
    "chars": 8236,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/distributions/ProbabilityDistribution.java",
    "chars": 2552,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/distributions/UniformDistribution.java",
    "chars": 3791,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/distributions/ZipfDistribution.java",
    "chars": 6161,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/generators/DataGenerator.java",
    "chars": 1046,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/generators/MotifDataGenerator.java",
    "chars": 6007,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/generators/UniformDataGenerator.java",
    "chars": 2690,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/stats/LatencyStats.java",
    "chars": 10175,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/stats/RunningMean.java",
    "chars": 1330,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/stats/SampledStats.java",
    "chars": 8047,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/LinkBench/util/ClassLoadUtil.java",
    "chars": 2277,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/main/java/com/facebook/rocks/swift/rocks.thrift",
    "chars": 6804,
    "preview": "// Copyright 2012 Facebook\n\ninclude \"rocks_common.thrift\"\n//include \"common/fb303/if/fb303.thrift\"\n\nnamespace cpp facebo"
  },
  {
    "path": "src/main/java/com/facebook/rocks/swift/rocks_common.thrift",
    "chars": 1597,
    "preview": "// Copyright 2012 Facebook\n\nnamespace cpp facebook.rocks\nnamespace cpp2 facebook.rocks\nnamespace java facebook.rocks\nnam"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/DistributionTestBase.java",
    "chars": 8113,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/DummyLinkStore.java",
    "chars": 7834,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/DummyLinkStoreTest.java",
    "chars": 1178,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/GeneratedDataDump.java",
    "chars": 3704,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/GeomDistTest.java",
    "chars": 1893,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/GraphStoreTestBase.java",
    "chars": 9858,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/HarmonicTest.java",
    "chars": 2787,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/ID2ChooserTest.java",
    "chars": 4876,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/InvertibleShufflerTest.java",
    "chars": 3319,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/LinkStoreTestBase.java",
    "chars": 30692,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/LogNormalTest.java",
    "chars": 2197,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/MemoryGraphStoreTest.java",
    "chars": 1085,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/MemoryLinkStoreTest.java",
    "chars": 1487,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/MemoryNodeStoreTest.java",
    "chars": 1132,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/MySqlGraphStoreTest.java",
    "chars": 2110,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/MySqlLinkStoreTest.java",
    "chars": 2428,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/MySqlNodeStoreTest.java",
    "chars": 1682,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/MySqlTestConfig.java",
    "chars": 4924,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/NodeStoreTestBase.java",
    "chars": 5590,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/PiecewiseDistTest.java",
    "chars": 2627,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/TestAccessDistribution.java",
    "chars": 4801,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/TestDataGen.java",
    "chars": 7091,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/TestRealDistribution.java",
    "chars": 9494,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/TestStats.java",
    "chars": 1401,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/TimerTest.java",
    "chars": 2720,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/UniformDistTest.java",
    "chars": 1467,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/ZipfDistTest.java",
    "chars": 2206,
    "preview": "/*\n * Copyright 2012, Facebook, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may no"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/testtypes/MySqlTest.java",
    "chars": 96,
    "preview": "package com.facebook.LinkBench.testtypes;\n\npublic interface MySqlTest extends ProviderTest {\n\n}\n"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/testtypes/ProviderTest.java",
    "chars": 222,
    "preview": "package com.facebook.LinkBench.testtypes;\n\n/**\n * Marker interface for tests for specific data store providers, particua"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/testtypes/RocksDbTest.java",
    "chars": 98,
    "preview": "package com.facebook.LinkBench.testtypes;\n\npublic interface RocksDbTest extends ProviderTest {\n\n}\n"
  },
  {
    "path": "src/test/java/com/facebook/LinkBench/testtypes/SlowTest.java",
    "chars": 179,
    "preview": "package com.facebook.LinkBench.testtypes;\n\n/**\n * Marker interface for slow unit tests that should not be run as part of"
  }
]
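
The `testtypes` entries at the end of the listing are empty marker interfaces (`ProviderTest`, `MySqlTest`, `RocksDbTest`, `SlowTest`) used to categorize tests, for example so that provider-specific or slow tests can be excluded from a default run. The sketch below is only an illustration of the marker-interface filtering idea in plain Java reflection; the repository itself presumably wires these interfaces up through JUnit's category mechanism, and the test-class names reused here stand in for the real ones.

```java
import java.util.List;
import java.util.stream.Collectors;

public class CategoryFilterSketch {
    // Stand-ins for the repo's marker interfaces (shape inferred from the previews).
    interface ProviderTest {}
    interface MySqlTest extends ProviderTest {}
    interface SlowTest {}

    // Hypothetical test classes, tagged by implementing a marker interface.
    static class MySqlLinkStoreTest implements MySqlTest {}
    static class TimerTest {}
    static class TestRealDistribution implements SlowTest {}

    /** Keep only test classes NOT tagged with the excluded category. */
    static List<Class<?>> exclude(List<Class<?>> tests, Class<?> category) {
        return tests.stream()
                // isAssignableFrom is true when the class implements the marker,
                // directly or via a sub-interface (MySqlTest extends ProviderTest).
                .filter(c -> !category.isAssignableFrom(c))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Class<?>> all = List.of(
                MySqlLinkStoreTest.class, TimerTest.class, TestRealDistribution.class);
        // A default run might exclude slow tests...
        System.out.println(exclude(all, SlowTest.class).size());      // 2
        // ...or all provider-specific tests (MySqlTest counts, since it
        // extends ProviderTest).
        System.out.println(exclude(all, ProviderTest.class).size());  // 2
    }
}
```

The advantage of marker interfaces over naming conventions is that the compiler checks the tags, and a sub-interface hierarchy (here, `MySqlTest extends ProviderTest`) lets one exclusion rule cover a whole family of providers.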

About this extraction

This page contains the full source code of the facebookarchive/linkbench GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 89 files (521.8 KB), approximately 131.7k tokens, and a symbol index with 749 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract, a free GitHub repo to text converter for AI. Built by Nikandr Surkov.