[
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2016 Josh Baker\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the \"Software\"), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of\nthe Software, and to permit persons to whom the Software is furnished to do so,\nsubject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS\nFOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR\nCOPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER\nIN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN\nCONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "**This project has been archived. Please check out [Uhaha](https://github.com/tidwall/uhaha) for a fitter, happier, more productive Raft framework.**\n\n<p align=\"center\">\n<img \n    src=\"logo.jpg\" \n    width=\"314\" height=\"200\" border=\"0\" alt=\"FINN\">\n</p>\n<p align=\"center\">\n<a href=\"https://goreportcard.com/report/github.com/tidwall/finn\"><img src=\"https://goreportcard.com/badge/github.com/tidwall/finn?style=flat-square\" alt=\"Go Report Card\"></a>\n<a href=\"https://godoc.org/github.com/tidwall/finn\"><img src=\"https://img.shields.io/badge/api-reference-blue.svg?style=flat-square\" alt=\"GoDoc\"></a>\n</p>\n\nFinn is a fast and simple framework for building [Raft](https://raft.github.io/) implementations in Go. It uses [Redcon](https://github.com/tidwall/redcon) for the network transport and [Hashicorp Raft](https://github.com/hashicorp/raft). There is also the option to use [LevelDB](https://github.com/syndtr/goleveldb), [BoltDB](https://github.com/boltdb/bolt) or [FastLog](https://github.com/tidwall/raft-fastlog) for log persistence.\n\n\nFeatures\n--------\n\n- Simple API for quickly creating a [fault-tolerant](https://en.wikipedia.org/wiki/Fault_tolerance) cluster\n- Fast network protocol using the [raft-redcon](https://github.com/tidwall/raft-redcon) transport\n- Optional [backends](#log-backends) for log persistence. 
[LevelDB](https://github.com/syndtr/goleveldb), [BoltDB](https://github.com/boltdb/bolt), or [FastLog](https://github.com/tidwall/raft-fastlog)\n- Adjustable [consistency and durability](#consistency-and-durability) levels\n- A [full-featured example](#full-featured-example) to help jumpstart integration\n- [Built-in raft commands](#built-in-raft-commands) for monitoring and managing the cluster\n- Supports the [Redis log format](http://build47.com/redis-log-format-levels/)\n- Works with clients such as [redigo](https://github.com/garyburd/redigo), [redis-py](https://github.com/andymccurdy/redis-py), [node_redis](https://github.com/NodeRedis/node_redis), [jedis](https://github.com/xetorthio/jedis), and [redis-cli](http://redis.io/topics/rediscli)\n\n\nGetting Started\n---------------\n\n### Installing\n\nTo start using Finn, install Go and run `go get`:\n\n```sh\n$ go get -u github.com/tidwall/finn\n```\n\nThis will retrieve the library.\n\n### Example\n\nHere's an example of a Redis clone that accepts the GET, SET, DEL, and KEYS commands.\n\nYou can run a [full-featured version](#full-featured-example) of this example from a terminal:\n\n```\ngo run example/clone.go\n```\n\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"log\"\n\t\"sort\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/tidwall/finn\"\n\t\"github.com/tidwall/match\"\n\t\"github.com/tidwall/redcon\"\n)\n\nfunc main() {\n\tn, err := finn.Open(\"data\", \":7481\", \"\", NewClone(), nil)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tdefer n.Close()\n\tselect {}\n}\n\ntype Clone struct {\n\tmu   sync.RWMutex\n\tkeys map[string][]byte\n}\n\nfunc NewClone() *Clone {\n\treturn &Clone{keys: make(map[string][]byte)}\n}\n\nfunc (kvm *Clone) Command(m finn.Applier, conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tswitch strings.ToLower(string(cmd.Args[0])) {\n\tdefault:\n\t\treturn nil, finn.ErrUnknownCommand\n\tcase \"set\":\n\t\tif len(cmd.Args) != 3 
{\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\treturn m.Apply(conn, cmd,\n\t\t\tfunc() (interface{}, error) {\n\t\t\t\tkvm.mu.Lock()\n\t\t\t\tkvm.keys[string(cmd.Args[1])] = cmd.Args[2]\n\t\t\t\tkvm.mu.Unlock()\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t\tfunc(v interface{}) (interface{}, error) {\n\t\t\t\tconn.WriteString(\"OK\")\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\tcase \"get\":\n\t\tif len(cmd.Args) != 2 {\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\treturn m.Apply(conn, cmd, nil,\n\t\t\tfunc(interface{}) (interface{}, error) {\n\t\t\t\tkvm.mu.RLock()\n\t\t\t\tval, ok := kvm.keys[string(cmd.Args[1])]\n\t\t\t\tkvm.mu.RUnlock()\n\t\t\t\tif !ok {\n\t\t\t\t\tconn.WriteNull()\n\t\t\t\t} else {\n\t\t\t\t\tconn.WriteBulk(val)\n\t\t\t\t}\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\tcase \"del\":\n\t\tif len(cmd.Args) < 2 {\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\treturn m.Apply(conn, cmd,\n\t\t\tfunc() (interface{}, error) {\n\t\t\t\tvar n int\n\t\t\t\tkvm.mu.Lock()\n\t\t\t\tfor i := 1; i < len(cmd.Args); i++ {\n\t\t\t\t\tkey := string(cmd.Args[i])\n\t\t\t\t\tif _, ok := kvm.keys[key]; ok {\n\t\t\t\t\t\tdelete(kvm.keys, key)\n\t\t\t\t\t\tn++\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tkvm.mu.Unlock()\n\t\t\t\treturn n, nil\n\t\t\t},\n\t\t\tfunc(v interface{}) (interface{}, error) {\n\t\t\t\tn := v.(int)\n\t\t\t\tconn.WriteInt(n)\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\tcase \"keys\":\n\t\tif len(cmd.Args) != 2 {\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\tpattern := string(cmd.Args[1])\n\t\treturn m.Apply(conn, cmd, nil,\n\t\t\tfunc(v interface{}) (interface{}, error) {\n\t\t\t\tvar keys []string\n\t\t\t\tkvm.mu.RLock()\n\t\t\t\tfor key := range kvm.keys {\n\t\t\t\t\tif match.Match(key, pattern) {\n\t\t\t\t\t\tkeys = append(keys, key)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tkvm.mu.RUnlock()\n\t\t\t\tsort.Strings(keys)\n\t\t\t\tconn.WriteArray(len(keys))\n\t\t\t\tfor _, key := range keys 
{\n\t\t\t\t\tconn.WriteBulkString(key)\n\t\t\t\t}\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\t}\n}\n\nfunc (kvm *Clone) Restore(rd io.Reader) error {\n\tkvm.mu.Lock()\n\tdefer kvm.mu.Unlock()\n\tdata, err := ioutil.ReadAll(rd)\n\tif err != nil {\n\t\treturn err\n\t}\n\tvar keys map[string][]byte\n\tif err := json.Unmarshal(data, &keys); err != nil {\n\t\treturn err\n\t}\n\tkvm.keys = keys\n\treturn nil\n}\n\nfunc (kvm *Clone) Snapshot(wr io.Writer) error {\n\tkvm.mu.RLock()\n\tdefer kvm.mu.RUnlock()\n\tdata, err := json.Marshal(kvm.keys)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif _, err := wr.Write(data); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n```\n\nThe Applier Type\n----------------\nEvery `Command()` call provides an `Applier` type which is responsible for handling all read and write operations. In the above example you will see one `m.Apply(conn, cmd, ...)` for each command.\n\nThe signature for the `Apply()` function is:\n```go\nfunc Apply(\n\tconn redcon.Conn, \n\tcmd redcon.Command,\n\tmutate func() (interface{}, error),\n\trespond func(interface{}) (interface{}, error),\n) (interface{}, error)\n```\n\n- `conn` is the client connection making the call. It's possible that this value may be `nil` for commands that are being replicated on Follower nodes. \n- `cmd` is the command to process.\n- `mutate` is the function that handles modifying the node's data. \nPassing `nil` indicates that the operation is read-only.\nThe `interface{}` return value will be passed to the `respond` func.\nReturning an error will cancel the operation and the error will be returned to the client.\n- `respond` is used for responding to the client connection. It's also used for read-only operations. The `interface{}` param is what was passed from the `mutate` function and may be `nil`. 
\nReturning an error will cancel the operation and the error will be returned to the client.\n\n*Please note that the `Apply()` command is required for modifying or accessing data that is shared on all of the nodes.\nOptionally you can forgo the call altogether for operations that are unique to the node.*\n\nSnapshots\n---------\nAll Raft commands are stored in one big log file that will continue to grow. The log is stored on disk, in memory, or both. At some point the server will run out of memory or disk space.\nSnapshots allow truncating the log so that it does not consume all of the server's resources.\n\nThe two functions `Snapshot` and `Restore` are used to create a snapshot and restore a snapshot, respectively.\n\nThe `Snapshot()` function is passed a writer that you can write your snapshot to.\nReturn `nil` to indicate that you are done writing. Returning an error will cancel the snapshot. If you want to disable snapshots altogether:\n\n```go\nfunc (kvm *Clone) Snapshot(wr io.Writer) error {\n\treturn finn.ErrDisabled\n}\n```\n\nThe `Restore()` function is passed a reader that you can use to restore your snapshot from.\n\n*Please note that the Raft cluster is active during a snapshot operation. \nIn the example above we use a read-lock that will force the cluster to delay all writes until the snapshot is complete.\nThis may not be ideal for your scenario.*\n\nFull-featured Example\n---------------------\n\nThere's a command-line Redis clone that supports all of Finn's features. 
Print the help options:\n\n```\ngo run example/clone.go -h\n```\n\nFirst start a single-member cluster:\n```\ngo run example/clone.go\n```\n\nThis will start the clone listening on port 7481 for client and server-to-server communication.\n\nNext, let's set a single key, and then retrieve it:\n\n```\n$ redis-cli -p 7481 SET mykey \"my value\"\nOK\n$ redis-cli -p 7481 GET mykey\n\"my value\"\n```\n\nAdding members:\n```\ngo run example/clone.go -p 7482 -dir data2 -join :7481\ngo run example/clone.go -p 7483 -dir data3 -join :7481\n```\n\nThat's it. Now if node1 goes down, node2 and node3 will continue to operate.\n\n\nBuilt-in Raft Commands\n----------------------\nHere are a few commands for monitoring and managing the cluster:\n\n- **RAFTADDPEER addr**  \nAdds a new member to the Raft cluster\n- **RAFTREMOVEPEER addr**  \nRemoves an existing member\n- **RAFTPEERS**  \nLists known peers and their status\n- **RAFTLEADER**  \nReturns the Raft leader, if known\n- **RAFTSNAPSHOT**  \nTriggers a snapshot operation\n- **RAFTSHRINKLOG**  \nShrinks the Raft log\n- **RAFTSTATE**  \nReturns the state of the node\n- **RAFTSTATS**  \nReturns information and statistics for the node and cluster\n\nConsistency and Durability\n--------------------------\n\n### Write Durability\n\nThe `Options.Durability` field has the following options:\n\n- `Low` - fsync is managed by the operating system, less safe\n- `Medium` - fsync every second, fast and safer\n- `High` - fsync after every write, very durable, slower\n\n### Read Consistency\n\nThe `Options.Consistency` field has the following options:\n\n- `Low` - all nodes accept reads, small risk of [stale](http://stackoverflow.com/questions/1563319/what-is-stale-state) data\n- `Medium` - only the leader accepts reads, itty-bitty risk of stale data during a leadership change\n- `High` - only the leader accepts reads, the raft log index is incremented to guarantee no stale data\n\nFor example, setting the following options:\n\n```go\nopts := finn.Options{\n\tConsistency: 
finn.High,\n\tDurability: finn.High,\n}\nn, err := finn.Open(\"data\", \":7481\", \"\", NewClone(), &opts)\n```\n\nThis provides the highest level of durability and consistency.\n\nLog Backends\n------------\nFinn supports the following log databases.\n\n- [FastLog](https://github.com/tidwall/raft-fastlog) - log is stored in memory and persists to disk, very fast reads and writes, log is limited to the amount of server memory.\n- [LevelDB](https://github.com/syndtr/goleveldb) - log is stored only to disk, supports large logs.\n- [Bolt](https://github.com/boltdb/bolt) - log is stored only to disk, supports large logs.\n\nContact\n-------\nJosh Baker [@tidwall](http://twitter.com/tidwall)\n\nLicense\n-------\nFinn source code is available under the MIT [License](/LICENSE).\n"
  },
  {
    "path": "example/clone.go",
    "content": "package main\n\nimport (\n\t\"encoding/json\"\n\t\"flag\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"log\"\n\t\"sort\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/tidwall/finn\"\n\t\"github.com/tidwall/match\"\n\t\"github.com/tidwall/redcon\"\n)\n\nfunc main() {\n\tvar port int\n\tvar backend string\n\tvar durability string\n\tvar consistency string\n\tvar loglevel string\n\tvar join string\n\tvar dir string\n\n\tflag.IntVar(&port, \"p\", 7481, \"Bind port\")\n\tflag.StringVar(&backend, \"backend\", \"fastlog\", \"Raft log backend [fastlog,bolt,inmem]\")\n\tflag.StringVar(&durability, \"durability\", \"medium\", \"Log durability [low,medium,high]\")\n\tflag.StringVar(&consistency, \"consistency\", \"medium\", \"Raft consistency [low,medium,high]\")\n\tflag.StringVar(&loglevel, \"loglevel\", \"notice\", \"Log level [quiet,warning,notice,verbose,debug]\")\n\tflag.StringVar(&dir, \"dir\", \"data\", \"Data directory\")\n\tflag.StringVar(&join, \"join\", \"\", \"Join a cluster by providing an address\")\n\tflag.Parse()\n\n\tvar opts finn.Options\n\n\tswitch strings.ToLower(backend) {\n\tdefault:\n\t\tlog.Fatalf(\"invalid backend '%v'\", backend)\n\tcase \"fastlog\":\n\t\topts.Backend = finn.FastLog\n\tcase \"bolt\":\n\t\topts.Backend = finn.Bolt\n\tcase \"inmem\":\n\t\topts.Backend = finn.InMem\n\t}\n\tswitch strings.ToLower(durability) {\n\tdefault:\n\t\tlog.Fatalf(\"invalid durability '%v'\", durability)\n\tcase \"low\":\n\t\topts.Durability = finn.Low\n\tcase \"medium\":\n\t\topts.Durability = finn.Medium\n\tcase \"high\":\n\t\topts.Durability = finn.High\n\t}\n\tswitch strings.ToLower(consistency) {\n\tdefault:\n\t\tlog.Fatalf(\"invalid consistency '%v'\", consistency)\n\tcase \"low\":\n\t\topts.Consistency = finn.Low\n\tcase \"medium\":\n\t\topts.Consistency = finn.Medium\n\tcase \"high\":\n\t\topts.Consistency = finn.High\n\t}\n\tswitch strings.ToLower(loglevel) {\n\tdefault:\n\t\tlog.Fatalf(\"invalid loglevel '%v'\", loglevel)\n\tcase 
\"quiet\":\n\t\topts.LogOutput = ioutil.Discard\n\tcase \"warning\":\n\t\topts.LogLevel = finn.Warning\n\tcase \"notice\":\n\t\topts.LogLevel = finn.Notice\n\tcase \"verbose\":\n\t\topts.LogLevel = finn.Verbose\n\tcase \"debug\":\n\t\topts.LogLevel = finn.Debug\n\t}\n\tn, err := finn.Open(dir, fmt.Sprintf(\":%d\", port), join, NewClone(), &opts)\n\tif err != nil {\n\t\tif opts.LogOutput == ioutil.Discard {\n\t\t\tlog.Fatal(err)\n\t\t}\n\t}\n\tdefer n.Close()\n\tselect {}\n}\n\n// Clone represent a Redis clone machine\ntype Clone struct {\n\tmu   sync.RWMutex\n\tkeys map[string][]byte\n}\n\n// NewClone create a new clone\nfunc NewClone() *Clone {\n\treturn &Clone{\n\t\tkeys: make(map[string][]byte),\n\t}\n}\n\n// Command processes a command\nfunc (kvm *Clone) Command(m finn.Applier, conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tswitch strings.ToLower(string(cmd.Args[0])) {\n\tdefault:\n\t\treturn nil, finn.ErrUnknownCommand\n\tcase \"set\":\n\t\tif len(cmd.Args) != 3 {\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\treturn m.Apply(conn, cmd,\n\t\t\tfunc() (interface{}, error) {\n\t\t\t\tkvm.mu.Lock()\n\t\t\t\tkvm.keys[string(cmd.Args[1])] = cmd.Args[2]\n\t\t\t\tkvm.mu.Unlock()\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t\tfunc(v interface{}) (interface{}, error) {\n\t\t\t\tconn.WriteString(\"OK\")\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\tcase \"get\":\n\t\tif len(cmd.Args) != 2 {\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\treturn m.Apply(conn, cmd, nil,\n\t\t\tfunc(interface{}) (interface{}, error) {\n\t\t\t\tkvm.mu.RLock()\n\t\t\t\tval, ok := kvm.keys[string(cmd.Args[1])]\n\t\t\t\tkvm.mu.RUnlock()\n\t\t\t\tif !ok {\n\t\t\t\t\tconn.WriteNull()\n\t\t\t\t} else {\n\t\t\t\t\tconn.WriteBulk(val)\n\t\t\t\t}\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\tcase \"del\":\n\t\tif len(cmd.Args) < 2 {\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\treturn m.Apply(conn, cmd,\n\t\t\tfunc() (interface{}, error) 
{\n\t\t\t\tvar n int\n\t\t\t\tkvm.mu.Lock()\n\t\t\t\tfor i := 1; i < len(cmd.Args); i++ {\n\t\t\t\t\tkey := string(cmd.Args[i])\n\t\t\t\t\tif _, ok := kvm.keys[key]; ok {\n\t\t\t\t\t\tdelete(kvm.keys, key)\n\t\t\t\t\t\tn++\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tkvm.mu.Unlock()\n\t\t\t\treturn n, nil\n\t\t\t},\n\t\t\tfunc(v interface{}) (interface{}, error) {\n\t\t\t\tn := v.(int)\n\t\t\t\tconn.WriteInt(n)\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\tcase \"keys\":\n\t\tif len(cmd.Args) != 2 {\n\t\t\treturn nil, finn.ErrWrongNumberOfArguments\n\t\t}\n\t\tpattern := string(cmd.Args[1])\n\t\treturn m.Apply(conn, cmd, nil,\n\t\t\tfunc(v interface{}) (interface{}, error) {\n\t\t\t\tvar keys []string\n\t\t\t\tkvm.mu.RLock()\n\t\t\t\tfor key := range kvm.keys {\n\t\t\t\t\tif match.Match(key, pattern) {\n\t\t\t\t\t\tkeys = append(keys, key)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tkvm.mu.RUnlock()\n\t\t\t\tsort.Strings(keys)\n\t\t\t\tconn.WriteArray(len(keys))\n\t\t\t\tfor _, key := range keys {\n\t\t\t\t\tconn.WriteBulkString(key)\n\t\t\t\t}\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\t}\n}\n\n// Restore restores a snapshot\nfunc (kvm *Clone) Restore(rd io.Reader) error {\n\tkvm.mu.Lock()\n\tdefer kvm.mu.Unlock()\n\tdata, err := ioutil.ReadAll(rd)\n\tif err != nil {\n\t\treturn err\n\t}\n\tvar keys map[string][]byte\n\tif err := json.Unmarshal(data, &keys); err != nil {\n\t\treturn err\n\t}\n\tkvm.keys = keys\n\treturn nil\n}\n\n// Snapshot creates a snapshot\nfunc (kvm *Clone) Snapshot(wr io.Writer) error {\n\tkvm.mu.RLock()\n\tdefer kvm.mu.RUnlock()\n\tdata, err := json.Marshal(kvm.keys)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif _, err := wr.Write(data); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "finn.go",
    "content": "// Package finn provide a fast and simple Raft implementation.\npackage finn\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sort\"\n\t\"strings\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"time\"\n\n\t\"github.com/hashicorp/raft\"\n\traftboltdb \"github.com/tidwall/raft-boltdb\"\n\traftfastlog \"github.com/tidwall/raft-fastlog\"\n\traftleveldb \"github.com/tidwall/raft-leveldb\"\n\traftredcon \"github.com/tidwall/raft-redcon\"\n\t\"github.com/tidwall/redcon\"\n\t\"github.com/tidwall/redlog\"\n)\n\nvar (\n\t// ErrUnknownCommand is returned when the command is not known.\n\tErrUnknownCommand = errors.New(\"unknown command\")\n\t// ErrWrongNumberOfArguments is returned when the number of arguments is wrong.\n\tErrWrongNumberOfArguments = errors.New(\"wrong number of arguments\")\n\t// ErrDisabled is returned when a feature is disabled.\n\tErrDisabled = errors.New(\"disabled\")\n)\n\nvar (\n\terrInvalidCommand          = errors.New(\"invalid command\")\n\terrInvalidConsistencyLevel = errors.New(\"invalid consistency level\")\n\terrSyntaxError             = errors.New(\"syntax error\")\n\terrInvalidResponse         = errors.New(\"invalid response\")\n)\n\nconst (\n\tretainSnapshotCount = 2\n\traftTimeout         = 10 * time.Second\n)\n\n// Level is for defining the raft consistency level.\ntype Level int\n\n// String returns a string representation of Level.\nfunc (l Level) String() string {\n\tswitch l {\n\tdefault:\n\t\treturn \"unknown\"\n\tcase Low:\n\t\treturn \"low\"\n\tcase Medium:\n\t\treturn \"medium\"\n\tcase High:\n\t\treturn \"high\"\n\t}\n}\n\nconst (\n\t// Low is \"low\" consistency. All readonly commands will can processed by\n\t// any node. Very fast but may have stale reads.\n\tLow Level = -1\n\t// Medium is \"medium\" consistency. All readonly commands can only be\n\t// processed by the leader. 
The command is not processed through the\n\t// raft log, therefore a very small (microseconds) chance for a stale\n\t// read is possible when a leader change occurs. Fast but only the leader\n\t// handles all reads and writes.\n\tMedium Level = 0\n\t// High is \"high\" consistency. All commands go through the raft log.\n\t// Not as fast because all commands must pass through the raft log.\n\tHigh Level = 1\n)\n\n// Backend is a raft log database type.\ntype Backend int\n\nconst (\n\t// FastLog is a persistent in-memory raft log.\n\t// This is the default.\n\tFastLog Backend = iota\n\t// Bolt is a persistent disk raft log.\n\tBolt\n\t// InMem is a non-persistent in-memory raft log.\n\tInMem\n\t// LevelDB is a persistent disk raft log.\n\tLevelDB\n)\n\n// String returns a string representation of the Backend.\nfunc (b Backend) String() string {\n\tswitch b {\n\tdefault:\n\t\treturn \"unknown\"\n\tcase FastLog:\n\t\treturn \"fastlog\"\n\tcase Bolt:\n\t\treturn \"bolt\"\n\tcase InMem:\n\t\treturn \"inmem\"\n\tcase LevelDB:\n\t\treturn \"leveldb\"\n\t}\n}\n\n// LogLevel is used to define the verbosity of the log outputs\ntype LogLevel int\n\nconst (\n\t// Debug prints everything\n\tDebug LogLevel = -2\n\t// Verbose prints extra detail\n\tVerbose LogLevel = -1\n\t// Notice is the standard level\n\tNotice LogLevel = 0\n\t// Warning only prints warnings\n\tWarning LogLevel = 1\n)\n\n// Options are used to provide a Node with optional functionality.\ntype Options struct {\n\t// Consistency is the raft consistency level for reads.\n\t// Default is Medium\n\tConsistency Level\n\t// Durability is the fsync durability for disk writes.\n\t// Default is Medium\n\tDurability Level\n\t// Backend is the database backend.\n\t// Default is FastLog\n\tBackend Backend\n\t// LogLevel is the log verbosity\n\t// Default is Notice\n\tLogLevel LogLevel\n\t// LogOutput is the log writer\n\t// Default is os.Stderr\n\tLogOutput io.Writer\n\t// ConnAccept is an optional function that can be used 
to\n\t// accept or deny a connection. It fires when new client\n\t// connections are created.\n\t// Return false to deny the connection.\n\tConnAccept func(redcon.Conn) bool\n\t// ConnClosed is an optional function that fires\n\t// when client connections are closed.\n\t// If there was a network error, then the error will be\n\t// passed in as an argument.\n\tConnClosed func(redcon.Conn, error)\n}\n\n// fillOptions fills in default options\nfunc fillOptions(opts *Options) *Options {\n\tif opts == nil {\n\t\topts = &Options{}\n\t}\n\t// copy and reassign the options\n\tnopts := *opts\n\tif nopts.LogOutput == nil {\n\t\tnopts.LogOutput = os.Stderr\n\t}\n\treturn &nopts\n}\n\n// Logger is the interface for writing leveled log messages.\ntype Logger interface {\n\t// Printf writes notice messages\n\tPrintf(format string, args ...interface{})\n\t// Verbosef writes verbose messages\n\tVerbosef(format string, args ...interface{})\n\t// Noticef writes notice messages\n\tNoticef(format string, args ...interface{})\n\t// Warningf writes warning messages\n\tWarningf(format string, args ...interface{})\n\t// Debugf writes debug messages\n\tDebugf(format string, args ...interface{})\n}\n\n// Applier is used to apply raft commands.\ntype Applier interface {\n\t// Apply applies a command\n\tApply(conn redcon.Conn, cmd redcon.Command,\n\t\tmutate func() (interface{}, error),\n\t\trespond func(interface{}) (interface{}, error),\n\t) (interface{}, error)\n\tLog() Logger\n}\n\n// Machine handles raft commands and raft snapshotting.\ntype Machine interface {\n\t// Command is called by the Node for incoming commands.\n\tCommand(a Applier, conn redcon.Conn, cmd redcon.Command) (interface{}, error)\n\t// Restore is used to restore data from a snapshot.\n\tRestore(rd io.Reader) error\n\t// Snapshot is used to support log compaction. 
This call should write a\n\t// snapshot to the provided writer.\n\tSnapshot(wr io.Writer) error\n}\n\n// Node represents a Raft server node.\ntype Node struct {\n\tmu       sync.RWMutex\n\taddr     string\n\tsnapshot raft.SnapshotStore\n\ttrans    *raftredcon.RedconTransport\n\traft     *raft.Raft\n\tlog      *redlog.Logger // the node logger\n\tmlog     *redlog.Logger // the machine logger\n\tclosed   bool\n\topts     *Options\n\tlevel    Level\n\thandler  Machine\n\tstore    bigStore\n\tpeers    map[string]string\n}\n\n// bigStore represents a raft store that conforms to\n// raft.PeerStore, raft.LogStore, and raft.StableStore.\ntype bigStore interface {\n\tClose() error\n\tFirstIndex() (uint64, error)\n\tLastIndex() (uint64, error)\n\tGetLog(idx uint64, log *raft.Log) error\n\tStoreLog(log *raft.Log) error\n\tStoreLogs(logs []*raft.Log) error\n\tDeleteRange(min, max uint64) error\n\tSet(k, v []byte) error\n\tGet(k []byte) ([]byte, error)\n\tSetUint64(key []byte, val uint64) error\n\tGetUint64(key []byte) (uint64, error)\n\tPeers() ([]string, error)\n\tSetPeers(peers []string) error\n}\n\n// Open opens a Raft node and returns the Node to the caller.\nfunc Open(dir, addr, join string, handler Machine, opts *Options) (node *Node, err error) {\n\topts = fillOptions(opts)\n\tlog := redlog.New(opts.LogOutput).Sub('N')\n\tlog.SetFilter(redlog.HashicorpRaftFilter)\n\tlog.SetIgnoreDups(true)\n\tswitch opts.LogLevel {\n\tcase Debug:\n\t\tlog.SetLevel(0)\n\tcase Verbose:\n\t\tlog.SetLevel(1)\n\tcase Notice:\n\t\tlog.SetLevel(2)\n\tcase Warning:\n\t\tlog.SetLevel(3)\n\t}\n\n\t// if this function fails then write the error to the logger\n\tdefer func() {\n\t\tif err != nil {\n\t\t\tlog.Warningf(\"%v\", err)\n\t\t}\n\t}()\n\n\t// create the directory\n\tif err := os.MkdirAll(dir, 0700); err != nil {\n\t\treturn nil, err\n\t}\n\n\t// create a node and assign it some fields\n\tn := &Node{\n\t\tlog:     log,\n\t\tmlog:    log.Sub('C'),\n\t\topts:    opts,\n\t\tlevel:   
opts.Consistency,\n\t\thandler: handler,\n\t\tpeers:   make(map[string]string),\n\t}\n\n\tvar store bigStore\n\tif opts.Backend == Bolt {\n\t\topts.Durability = High\n\t\tstore, err = raftboltdb.NewBoltStore(filepath.Join(dir, \"raft.db\"))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t} else if opts.Backend == LevelDB {\n\t\tvar dur raftleveldb.Level\n\t\tswitch opts.Durability {\n\t\tdefault:\n\t\t\tdur = raftleveldb.Medium\n\t\t\topts.Durability = Medium\n\t\tcase High:\n\t\t\tdur = raftleveldb.High\n\t\tcase Low:\n\t\t\tdur = raftleveldb.Low\n\t\t}\n\t\tstore, err = raftleveldb.NewLevelDBStore(filepath.Join(dir, \"raft.db\"), dur)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t} else if opts.Backend == InMem {\n\t\topts.Durability = Low\n\t\tstore, err = raftfastlog.NewFastLogStore(\":memory:\", raftfastlog.Low, n.log.Sub('S'))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t} else {\n\t\topts.Backend = FastLog\n\t\tvar dur raftfastlog.Level\n\t\tswitch opts.Durability {\n\t\tdefault:\n\t\t\tdur = raftfastlog.Medium\n\t\t\topts.Durability = Medium\n\t\tcase High:\n\t\t\tdur = raftfastlog.High\n\t\tcase Low:\n\t\t\tdur = raftfastlog.Low\n\t\t}\n\t\tstore, err = raftfastlog.NewFastLogStore(filepath.Join(dir, \"raft.db\"), dur, n.log.Sub('S'))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\tn.store = store\n\n\tn.log.Debugf(\"Consistency: %s, Durability: %s, Backend: %s\", opts.Consistency, opts.Durability, opts.Backend)\n\n\t// get the peer list\n\tpeers, err := n.store.Peers()\n\tif err != nil {\n\t\tn.Close()\n\t\treturn nil, err\n\t}\n\n\t// Setup Raft configuration.\n\tconfig := raft.DefaultConfig()\n\tconfig.LogOutput = n.log\n\n\t// Allow the node to enter single-mode, potentially electing itself, if\n\t// explicitly enabled and there is only 1 node in the cluster already.\n\tif join == \"\" && len(peers) <= 1 {\n\t\tn.log.Noticef(\"Enable single node\")\n\t\tconfig.EnableSingleNode = 
true\n\t\tconfig.DisableBootstrapAfterElect = false\n\t}\n\n\t// create the snapshot store. This allows the Raft to truncate the log.\n\tn.snapshot, err = raft.NewFileSnapshotStore(dir, retainSnapshotCount, n.log)\n\tif err != nil {\n\t\tn.Close()\n\t\treturn nil, err\n\t}\n\n\t// verify the syntax of the address.\n\ttaddr, err := net.ResolveTCPAddr(\"tcp\", addr)\n\tif err != nil {\n\t\tn.Close()\n\t\treturn nil, err\n\t}\n\n\t// Set the atomic flag which indicates that we can accept Redcon commands.\n\tvar doReady uint64\n\n\t// start the raft server\n\tn.addr = taddr.String()\n\tn.trans, err = raftredcon.NewRedconTransport(\n\t\tn.addr,\n\t\tfunc(conn redcon.Conn, cmd redcon.Command) {\n\t\t\tif atomic.LoadUint64(&doReady) != 0 {\n\t\t\t\tn.doCommand(conn, cmd)\n\t\t\t} else {\n\t\t\t\tconn.WriteError(\"ERR raft not ready\")\n\t\t\t}\n\t\t}, opts.ConnAccept, opts.ConnClosed,\n\t\tn.log.Sub('L'),\n\t)\n\tif err != nil {\n\t\tn.Close()\n\t\treturn nil, err\n\t}\n\n\t// Instantiate the Raft systems.\n\tn.raft, err = raft.NewRaft(config, (*nodeFSM)(n),\n\t\tn.store, n.store, n.snapshot, n.store, n.trans)\n\tif err != nil {\n\t\tn.Close()\n\t\treturn nil, err\n\t}\n\n\t// set the atomic flag which indicates that we can accept Redcon commands.\n\tatomic.AddUint64(&doReady, 1)\n\n\t// if --join was specified, make the join request.\n\tfor {\n\t\tif join != \"\" && len(peers) == 0 {\n\t\t\tif err := reqRaftJoin(join, n.addr); err != nil {\n\t\t\t\tif strings.HasPrefix(err.Error(), \"TRY \") {\n\t\t\t\t\t// we received a \"TRY addr\" response. 
let's forward the join to\n\t\t\t\t\t// the specified address\n\t\t\t\t\tjoin = strings.Split(err.Error(), \" \")[1]\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\treturn nil, fmt.Errorf(\"failed to join node at %v: %v\", join, err)\n\t\t\t}\n\t\t}\n\t\tbreak\n\t}\n\tgo n.watchPeers()\n\treturn n, nil\n}\n\n// Close closes the node\nfunc (n *Node) Close() error {\n\tn.mu.Lock()\n\tdefer n.mu.Unlock()\n\t// shutdown the raft, but do not handle the future error.\n\tif n.raft != nil {\n\t\tn.raft.Shutdown().Error()\n\t}\n\tif n.trans != nil {\n\t\tn.trans.Close()\n\t}\n\t// close the raft database\n\tif n.store != nil {\n\t\tn.store.Close()\n\t}\n\tn.closed = true\n\treturn nil\n}\n\n// Store returns the underlying storage object.\nfunc (n *Node) Store() interface{} {\n\treturn n.store\n}\n\nfunc (n *Node) watchPeers() {\n\tbuf := make([]byte, 1024)\n\tfor {\n\t\tvar peers []string\n\t\tvar err error\n\t\tif !func() bool {\n\t\t\tn.mu.Lock()\n\t\t\tdefer n.mu.Unlock()\n\t\t\tif n.closed {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tpeers, err = n.store.Peers()\n\t\t\treturn true\n\t\t}() {\n\t\t\treturn\n\t\t}\n\n\t\tfunc() {\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tpeersState := make(map[string]string)\n\t\t\tfor _, peer := range peers {\n\t\t\t\tstate, err := func() (string, error) {\n\t\t\t\t\tconn, err := net.DialTimeout(\"tcp\", peer, time.Second)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn \"\", err\n\t\t\t\t\t}\n\t\t\t\t\tdefer conn.Close()\n\t\t\t\t\tif err := conn.SetDeadline(time.Now().Add(time.Second)); err != nil {\n\t\t\t\t\t\treturn \"\", err\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := conn.Write([]byte(\"RAFTSTATE\\r\\n\")); err != nil {\n\t\t\t\t\t\treturn \"\", err\n\t\t\t\t\t}\n\t\t\t\t\tn, err := conn.Read(buf)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn \"\", err\n\t\t\t\t\t}\n\t\t\t\t\tparts := strings.Split(string(buf[:n]), \"\\r\\n\")\n\t\t\t\t\tif len(parts) != 3 || buf[0] != '$' {\n\t\t\t\t\t\treturn \"\", 
errInvalidResponse\n\t\t\t\t\t}\n\t\t\t\t\treturn parts[1], nil\n\t\t\t\t}()\n\t\t\t\tif err, ok := err.(net.Error); ok && err.Timeout() {\n\t\t\t\t\tstate = \"Timeout\"\n\t\t\t\t} else if err != nil {\n\t\t\t\t\tstate = \"Invalid\"\n\t\t\t\t}\n\t\t\t\tpeersState[peer] = state\n\t\t\t}\n\t\t\tn.mu.Lock()\n\t\t\tif !n.closed {\n\t\t\t\tn.peers = peersState\n\t\t\t}\n\t\t\tn.mu.Unlock()\n\t\t}()\n\t\ttime.Sleep(time.Second)\n\t}\n}\n\n// Log returns the active logger for printing messages\nfunc (n *Node) Log() Logger {\n\treturn n.mlog\n}\n\n// leader returns the client address for the leader\nfunc (n *Node) leader() string {\n\treturn n.raft.Leader()\n}\n\n// reqRaftJoin does a remote \"RAFTADDPEER\" command at the specified address.\nfunc reqRaftJoin(join, raftAddr string) error {\n\tresp, _, err := raftredcon.Do(join, nil, []byte(\"raftaddpeer\"), []byte(raftAddr))\n\tif err != nil {\n\t\treturn err\n\t}\n\tif string(resp) != \"OK\" {\n\t\treturn errInvalidResponse\n\t}\n\treturn nil\n}\n\n// scanForErrors returns pipeline errors. 
All messages must be errors\nfunc scanForErrors(buf []byte) [][]byte {\n\tvar res [][]byte\n\tfor len(buf) > 0 {\n\t\tif buf[0] != '-' {\n\t\t\treturn nil\n\t\t}\n\t\tbuf = buf[1:]\n\t\tfor i := 0; i < len(buf); i++ {\n\t\t\tif buf[i] == '\\n' && i > 0 && buf[i-1] == '\\r' {\n\t\t\t\tres = append(res, buf[:i-1])\n\t\t\t\tbuf = buf[i+1:]\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\treturn res\n}\n\nfunc (n *Node) translateError(err error, cmd string) string {\n\tif err.Error() == ErrDisabled.Error() || err.Error() == ErrUnknownCommand.Error() {\n\t\treturn \"ERR unknown command '\" + cmd + \"'\"\n\t} else if err.Error() == ErrWrongNumberOfArguments.Error() {\n\t\treturn \"ERR wrong number of arguments for '\" + cmd + \"' command\"\n\t} else if err.Error() == raft.ErrNotLeader.Error() {\n\t\tleader := n.raft.Leader()\n\t\tif leader == \"\" {\n\t\t\treturn \"ERR leader not known\"\n\t\t}\n\t\treturn \"TRY \" + leader\n\t}\n\treturn strings.TrimSpace(strings.Split(err.Error(), \"\\n\")[0])\n}\n\n// doCommand executes a client command which is processed through the raft pipeline.\nfunc (n *Node) doCommand(conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tif len(cmd.Args) == 0 {\n\t\treturn nil, nil\n\t}\n\tvar val interface{}\n\tvar err error\n\tswitch strings.ToLower(string(cmd.Args[0])) {\n\tdefault:\n\t\tval, err = n.handler.Command((*nodeApplier)(n), conn, cmd)\n\t\tif err == ErrDisabled {\n\t\t\terr = ErrUnknownCommand\n\t\t}\n\tcase \"raftaddpeer\":\n\t\tval, err = n.doRaftAddPeer(conn, cmd)\n\tcase \"raftremovepeer\":\n\t\tval, err = n.doRaftRemovePeer(conn, cmd)\n\tcase \"raftleader\":\n\t\tval, err = n.doRaftLeader(conn, cmd)\n\tcase \"raftsnapshot\":\n\t\tval, err = n.doRaftSnapshot(conn, cmd)\n\tcase \"raftshrinklog\":\n\t\tval, err = n.doRaftShrinkLog(conn, cmd)\n\tcase \"raftstate\":\n\t\tval, err = n.doRaftState(conn, cmd)\n\tcase \"raftstats\":\n\t\tval, err = n.doRaftStats(conn, cmd)\n\tcase \"raftpeers\":\n\t\tval, err = n.doRaftPeers(conn, 
cmd)\n\tcase \"quit\":\n\t\tval, err = n.doQuit(conn, cmd)\n\tcase \"ping\":\n\t\tval, err = n.doPing(conn, cmd)\n\t}\n\tif err != nil && conn != nil {\n\t\t// it's possible that this was a pipelined response.\n\t\twr := redcon.BaseWriter(conn)\n\t\tif wr != nil {\n\t\t\tbuf := wr.Buffer()\n\t\t\trerrs := scanForErrors(buf)\n\t\t\tif len(rerrs) > 0 {\n\t\t\t\twr.SetBuffer(nil)\n\t\t\t\tfor _, rerr := range rerrs {\n\t\t\t\t\tconn.WriteError(n.translateError(errors.New(string(rerr)), string(cmd.Args[0])))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tconn.WriteError(n.translateError(err, string(cmd.Args[0])))\n\t}\n\treturn val, err\n}\n\n// doPing handles a \"PING\" client command.\nfunc (n *Node) doPing(conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tswitch len(cmd.Args) {\n\tdefault:\n\t\treturn nil, ErrWrongNumberOfArguments\n\tcase 1:\n\t\tconn.WriteString(\"PONG\")\n\tcase 2:\n\t\tconn.WriteBulk(cmd.Args[1])\n\t}\n\treturn nil, nil\n}\n\n// doRaftLeader handles a \"RAFTLEADER\" client command.\nfunc (n *Node) doRaftLeader(conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tif len(cmd.Args) != 1 {\n\t\treturn nil, ErrWrongNumberOfArguments\n\t}\n\tleader := n.raft.Leader()\n\tif leader == \"\" {\n\t\tconn.WriteNull()\n\t} else {\n\t\tconn.WriteBulkString(leader)\n\t}\n\treturn nil, nil\n}\n\n// doRaftSnapshot handles a \"RAFTSNAPSHOT\" client command.\nfunc (n *Node) doRaftSnapshot(conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tif len(cmd.Args) != 1 {\n\t\treturn nil, ErrWrongNumberOfArguments\n\t}\n\tf := n.raft.Snapshot()\n\terr := f.Error()\n\tif err != nil {\n\t\tconn.WriteError(\"ERR \" + err.Error())\n\t\treturn nil, nil\n\t}\n\tconn.WriteString(\"OK\")\n\treturn nil, nil\n}\n\ntype shrinkable interface {\n\tShrink() error\n}\n\n// doRaftShrinkLog handles a \"RAFTSHRINKLOG\" client command.\nfunc (n *Node) doRaftShrinkLog(conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tif len(cmd.Args) != 1 {\n\t\treturn nil, 
ErrWrongNumberOfArguments\n\t}\n\tif s, ok := n.store.(shrinkable); ok {\n\t\terr := s.Shrink()\n\t\tif err != nil {\n\t\t\tconn.WriteError(\"ERR \" + err.Error())\n\t\t\treturn nil, nil\n\t\t}\n\t\tconn.WriteString(\"OK\")\n\t\treturn nil, nil\n\t}\n\tconn.WriteError(\"ERR log is not shrinkable\")\n\treturn nil, nil\n}\n\n// doRaftState handles a \"RAFTSTATE\" client command.\nfunc (n *Node) doRaftState(conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tif len(cmd.Args) != 1 {\n\t\treturn nil, ErrWrongNumberOfArguments\n\t}\n\tconn.WriteBulkString(n.raft.State().String())\n\treturn nil, nil\n}\n\n// doRaftStats handles a \"RAFTSTATS\" client command.\nfunc (n *Node) doRaftStats(conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tif len(cmd.Args) != 1 {\n\t\treturn nil, ErrWrongNumberOfArguments\n\t}\n\tn.mu.RLock()\n\tdefer n.mu.RUnlock()\n\tstats := n.raft.Stats()\n\tkeys := make([]string, 0, len(stats))\n\tfor key := range stats {\n\t\tkeys = append(keys, key)\n\t}\n\tsort.Strings(keys)\n\tconn.WriteArray(len(keys) * 2)\n\tfor _, key := range keys {\n\t\tconn.WriteBulkString(key)\n\t\tconn.WriteBulkString(stats[key])\n\t}\n\treturn nil, nil\n}\n\n// doRaftPeers handles a \"RAFTPEERS\" client command.\nfunc (n *Node) doRaftPeers(conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tif len(cmd.Args) != 1 {\n\t\treturn nil, ErrWrongNumberOfArguments\n\t}\n\tvar peers []string\n\tpeersState := make(map[string]string)\n\tfunc() {\n\t\tn.mu.RLock()\n\t\tdefer n.mu.RUnlock()\n\t\tfor peer, state := range n.peers {\n\t\t\tpeersState[peer] = state\n\t\t\tpeers = append(peers, peer)\n\t\t}\n\t}()\n\tsort.Strings(peers)\n\n\tconn.WriteArray(len(peers) * 2)\n\tfor _, peer := range peers {\n\t\tconn.WriteBulkString(peer)\n\t\tconn.WriteBulkString(peersState[peer])\n\t}\n\treturn nil, nil\n}\n\n// doQuit handles a \"QUIT\" client command.\nfunc (n *Node) doQuit(conn redcon.Conn, cmd redcon.Command) (interface{}, error) 
{\n\tconn.WriteString(\"OK\")\n\tconn.Close()\n\treturn nil, nil\n}\n\n// doRaftAddPeer handles a \"RAFTADDPEER address\" client command.\nfunc (n *Node) doRaftAddPeer(conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tif len(cmd.Args) != 2 {\n\t\treturn nil, ErrWrongNumberOfArguments\n\t}\n\tn.log.Noticef(\"Received add peer request from %v\", string(cmd.Args[1]))\n\tf := n.raft.AddPeer(string(cmd.Args[1]))\n\tif f.Error() != nil {\n\t\treturn nil, f.Error()\n\t}\n\tn.log.Noticef(\"Node %v added successfully\", string(cmd.Args[1]))\n\tconn.WriteString(\"OK\")\n\treturn nil, nil\n}\n\n// doRaftRemovePeer handles a \"RAFTREMOVEPEER address\" client command.\nfunc (n *Node) doRaftRemovePeer(conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tif len(cmd.Args) != 2 {\n\t\treturn nil, ErrWrongNumberOfArguments\n\t}\n\tn.log.Noticef(\"Received remove peer request from %v\", string(cmd.Args[1]))\n\tf := n.raft.RemovePeer(string(cmd.Args[1]))\n\tif f.Error() != nil {\n\t\treturn nil, f.Error()\n\t}\n\tn.log.Noticef(\"Node %v detached successfully\", string(cmd.Args[1]))\n\tconn.WriteString(\"OK\")\n\treturn nil, nil\n}\n\n// raftApplyCommand encodes a series of args into a raft command and\n// applies it to the index.\nfunc (n *Node) raftApplyCommand(cmd redcon.Command) (interface{}, error) {\n\tf := n.raft.Apply(cmd.Raw, raftTimeout)\n\tif err := f.Error(); err != nil {\n\t\treturn nil, err\n\t}\n\t// we check for the response to be an error and return it as such.\n\tswitch v := f.Response().(type) {\n\tdefault:\n\t\treturn v, nil\n\tcase error:\n\t\treturn nil, v\n\t}\n}\n\n// raftLevelGuard is used to process readonly commands depending on the\n// consistency readonly level.\n// It either:\n// - low consistency: just processes the command without concern about\n//   leadership or cluster state.\n// - medium consistency: makes sure that the node is the leader first.\n// - high consistency: sends a blank command through the raft pipeline to\n// 
ensure that the node is the leader, the raft index is incremented, and\n// that the cluster is sane before processing the readonly command.\nfunc (n *Node) raftLevelGuard() error {\n\tswitch n.level {\n\tdefault:\n\t\t// a valid level is required\n\t\treturn errInvalidConsistencyLevel\n\tcase Low:\n\t\t// anything goes.\n\t\treturn nil\n\tcase Medium:\n\t\t// must be the leader\n\t\tif n.raft.State() != raft.Leader {\n\t\t\treturn raft.ErrNotLeader\n\t\t}\n\t\treturn nil\n\tcase High:\n\t\t// process a blank command. this will update the raft log index\n\t\t// and allow for readonly commands to process in order without\n\t\t// serializing the actual command.\n\t\tf := n.raft.Apply(nil, raftTimeout)\n\t\tif err := f.Error(); err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// the blank command succeeded.\n\t\tv := f.Response()\n\t\t// check if response was an error and return that.\n\t\tswitch v := v.(type) {\n\t\tcase nil:\n\t\t\treturn nil\n\t\tcase error:\n\t\t\treturn v\n\t\t}\n\t\treturn errInvalidResponse\n\t}\n}\n\n// nodeApplier exposes the Applier interface of the Node type\ntype nodeApplier Node\n\n// Apply executes a command through raft.\n// The mutate param should be set to nil for readonly commands.\n// The respond param is required and any response to conn happens here.\n// The return value from mutate will be passed into the respond param.\nfunc (m *nodeApplier) Apply(\n\tconn redcon.Conn,\n\tcmd redcon.Command,\n\tmutate func() (interface{}, error),\n\trespond func(interface{}) (interface{}, error),\n) (interface{}, error) {\n\tvar val interface{}\n\tvar err error\n\tif mutate == nil {\n\t\t// no apply, just do a level guard.\n\t\tif err := (*Node)(m).raftLevelGuard(); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t} else if conn == nil {\n\t\t// this is happening on a follower node.\n\t\treturn mutate()\n\t} else {\n\t\t// this is happening on the leader node.\n\t\t// apply the command to the raft log.\n\t\tval, err = 
(*Node)(m).raftApplyCommand(cmd)\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\t// respond\n\treturn respond(val)\n}\n\n// Log returns the active logger for printing messages\nfunc (m *nodeApplier) Log() Logger {\n\treturn (*Node)(m).Log()\n}\n\n// nodeFSM exposes the raft.FSM interface of the Node type\ntype nodeFSM Node\n\n// Apply applies a Raft log entry to the key-value store.\nfunc (m *nodeFSM) Apply(l *raft.Log) interface{} {\n\tif len(l.Data) == 0 {\n\t\t// blank data\n\t\treturn nil\n\t}\n\tcmd, err := redcon.Parse(l.Data)\n\tif err != nil {\n\t\treturn err\n\t}\n\tval, err := (*Node)(m).doCommand(nil, cmd)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn val\n}\n\n// Restore restores the key-value store to a previous state.\nfunc (m *nodeFSM) Restore(rc io.ReadCloser) error {\n\tdefer rc.Close()\n\treturn (*Node)(m).handler.Restore(rc)\n}\n\n// Persist writes the snapshot to the given sink.\nfunc (m *nodeFSM) Persist(sink raft.SnapshotSink) error {\n\tif err := (*Node)(m).handler.Snapshot(sink); err != nil {\n\t\tsink.Cancel()\n\t\treturn err\n\t}\n\tsink.Close()\n\treturn nil\n}\n\n// Release deletes the temp file\nfunc (m *nodeFSM) Release() {}\n\n// Snapshot returns a snapshot of the key-value store.\nfunc (m *nodeFSM) Snapshot() (raft.FSMSnapshot, error) {\n\treturn m, nil\n}\n"
  },
  {
    "path": "finn_test.go",
    "content": "package finn\n\nimport (\n\t\"bufio\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"net\"\n\t\"os\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/tidwall/raft-redcon\"\n\t\"github.com/tidwall/redcon\"\n)\n\ntype KVM struct {\n\tmu   sync.RWMutex\n\tkeys map[string][]byte\n}\n\nfunc NewKVM() *KVM {\n\treturn &KVM{\n\t\tkeys: make(map[string][]byte),\n\t}\n}\nfunc (kvm *KVM) Command(m Applier, conn redcon.Conn, cmd redcon.Command) (interface{}, error) {\n\tswitch strings.ToLower(string(cmd.Args[0])) {\n\tdefault:\n\t\treturn nil, ErrUnknownCommand\n\tcase \"set\":\n\t\tif len(cmd.Args) != 3 {\n\t\t\treturn nil, ErrWrongNumberOfArguments\n\t\t}\n\t\treturn m.Apply(conn, cmd,\n\t\t\tfunc() (interface{}, error) {\n\t\t\t\tkvm.mu.Lock()\n\t\t\t\tdefer kvm.mu.Unlock()\n\t\t\t\tkvm.keys[string(cmd.Args[1])] = cmd.Args[2]\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t\tfunc(v interface{}) (interface{}, error) {\n\t\t\t\tconn.WriteString(\"OK\")\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\tcase \"get\":\n\t\tif len(cmd.Args) != 2 {\n\t\t\treturn nil, ErrWrongNumberOfArguments\n\t\t}\n\t\treturn m.Apply(conn, cmd,\n\t\t\tnil,\n\t\t\tfunc(interface{}) (interface{}, error) {\n\t\t\t\tkvm.mu.RLock()\n\t\t\t\tdefer kvm.mu.RUnlock()\n\t\t\t\tif val, ok := kvm.keys[string(cmd.Args[1])]; !ok {\n\t\t\t\t\tconn.WriteNull()\n\t\t\t\t} else {\n\t\t\t\t\tconn.WriteBulk(val)\n\t\t\t\t}\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t)\n\t}\n}\n\nfunc (kvm *KVM) Restore(rd io.Reader) error {\n\tkvm.mu.Lock()\n\tdefer kvm.mu.Unlock()\n\tdata, err := ioutil.ReadAll(rd)\n\tif err != nil {\n\t\treturn err\n\t}\n\tvar keys map[string][]byte\n\tif err := json.Unmarshal(data, &keys); err != nil {\n\t\treturn err\n\t}\n\tkvm.keys = keys\n\treturn nil\n}\n\nfunc (kvm *KVM) Snapshot(wr io.Writer) error {\n\tkvm.mu.RLock()\n\tdefer kvm.mu.RUnlock()\n\tdata, err := json.Marshal(kvm.keys)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif _, err 
:= wr.Write(data); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nvar killed = make(map[int]bool)\nvar killCond = sync.NewCond(&sync.Mutex{})\n\nfunc killNodes(basePort int) {\n\tkillCond.L.Lock()\n\tkilled[basePort] = true\n\tkillCond.Broadcast()\n\tkillCond.L.Unlock()\n}\n\nfunc startTestNode(t testing.TB, basePort int, num int, opts *Options) {\n\tnode := fmt.Sprintf(\"%d\", num)\n\n\tif err := os.MkdirAll(\"data/\"+node, 0700); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tjoin := \"\"\n\tif node == \"\" {\n\t\tnode = \"0\"\n\t}\n\taddr := fmt.Sprintf(\":%d\", basePort/10*10+num)\n\tif node != \"0\" {\n\t\tjoin = fmt.Sprintf(\":%d\", basePort)\n\t}\n\tn, err := Open(\"data/\"+node, addr, join, NewKVM(), opts)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer n.Close()\n\tfor {\n\t\tkillCond.L.Lock()\n\t\tif killed[basePort] {\n\t\t\tkillCond.L.Unlock()\n\t\t\treturn\n\t\t}\n\t\tkillCond.Wait()\n\t\tkillCond.L.Unlock()\n\t}\n}\n\nfunc waitFor(t testing.TB, basePort, node int) {\n\ttarget := fmt.Sprintf(\":%d\", basePort/10*10+node)\n\tstart := time.Now()\n\tfor {\n\t\tif time.Now().Sub(start) > time.Second*10 {\n\t\t\tt.Fatal(\"timeout looking for leader\")\n\t\t}\n\t\ttime.Sleep(time.Second / 4)\n\t\tresp, _, err := raftredcon.Do(target, nil, []byte(\"raftleader\"))\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tif len(resp) != 0 {\n\t\t\treturn\n\t\t}\n\t}\n}\n\nfunc testDo(t testing.TB, basePort, node int, expect string, args ...string) string {\n\tvar bargs [][]byte\n\tfor _, arg := range args {\n\t\tbargs = append(bargs, []byte(arg))\n\t}\n\ttarget := fmt.Sprintf(\":%d\", basePort/10*10+node)\n\tresp, _, err := raftredcon.Do(target, nil, bargs...)\n\tif err != nil {\n\t\tif err.Error() == expect {\n\t\t\treturn \"\"\n\t\t}\n\t\tt.Fatalf(\"node %d: %v\", node, err)\n\t}\n\tif expect != \"???\" && string(resp) != expect {\n\t\tt.Fatalf(\"node %d: expected '%v', got '%v'\", node, expect, string(resp))\n\t}\n\treturn string(resp)\n}\n\nfunc TestVarious(t 
*testing.T) {\n\tt.Run(\"Level\", SubTestLevel)\n\tt.Run(\"Backend\", SubTestBackend)\n}\n\nfunc SubTestLevel(t *testing.T) {\n\tvar level Level\n\tlevel = Level(-99)\n\tif level.String() != \"unknown\" {\n\t\tt.Fatalf(\"expecting '%v', got '%v'\", \"unknown\", level.String())\n\t}\n\tlevel = Low\n\tif level.String() != \"low\" {\n\t\tt.Fatalf(\"expecting '%v', got '%v'\", \"low\", level.String())\n\t}\n\tlevel = Medium\n\tif level.String() != \"medium\" {\n\t\tt.Fatalf(\"expecting '%v', got '%v'\", \"medium\", level.String())\n\t}\n\tlevel = High\n\tif level.String() != \"high\" {\n\t\tt.Fatalf(\"expecting '%v', got '%v'\", \"high\", level.String())\n\t}\n}\n\nfunc SubTestBackend(t *testing.T) {\n\tvar backend Backend\n\tbackend = Backend(-99)\n\tif backend.String() != \"unknown\" {\n\t\tt.Fatalf(\"expecting '%v', got '%v'\", \"unknown\", backend.String())\n\t}\n\tbackend = FastLog\n\tif backend.String() != \"fastlog\" {\n\t\tt.Fatalf(\"expecting '%v', got '%v'\", \"fastlog\", backend.String())\n\t}\n\tbackend = Bolt\n\tif backend.String() != \"bolt\" {\n\t\tt.Fatalf(\"expecting '%v', got '%v'\", \"bolt\", backend.String())\n\t}\n\tbackend = LevelDB\n\tif backend.String() != \"leveldb\" {\n\t\tt.Fatalf(\"expecting '%v', got '%v'\", \"leveldb\", backend.String())\n\t}\n\tbackend = InMem\n\tif backend.String() != \"inmem\" {\n\t\tt.Fatalf(\"expecting '%v', got '%v'\", \"inmem\", backend.String())\n\t}\n}\n\nfunc TestCluster(t *testing.T) {\n\n\tvar optsArr []Options\n\tfor _, backend := range []Backend{LevelDB, Bolt, FastLog, InMem} {\n\t\tfor _, consistency := range []Level{Low, Medium, High} {\n\t\t\toptsArr = append(optsArr, Options{\n\t\t\t\tBackend:     backend,\n\t\t\t\tConsistency: consistency,\n\t\t\t})\n\t\t}\n\t}\n\tfor i := 0; i < len(optsArr); i++ {\n\t\tfunc() {\n\t\t\topts := optsArr[i]\n\t\t\tif os.Getenv(\"LOG\") != \"1\" {\n\t\t\t\topts.LogOutput = ioutil.Discard\n\t\t\t}\n\t\t\tbasePort := (7480/10 + i) * 10\n\t\t\ttag := fmt.Sprintf(\"%v-%v-%d\", 
opts.Backend, opts.Consistency, basePort)\n\t\t\tt.Logf(\"%s\", tag)\n\t\t\tt.Run(tag, func(t *testing.T) {\n\t\t\t\tos.RemoveAll(\"data\")\n\t\t\t\tdefer os.RemoveAll(\"data\")\n\t\t\t\tdefer killNodes(basePort)\n\t\t\t\tfor i := 0; i < 3; i++ {\n\t\t\t\t\tgo startTestNode(t, basePort, i, &opts)\n\t\t\t\t\twaitFor(t, basePort, i)\n\t\t\t\t}\n\t\t\t\tt.Run(\"Leader\", func(t *testing.T) { SubTestLeader(t, basePort, &opts) })\n\t\t\t\tt.Run(\"Set\", func(t *testing.T) { SubTestSet(t, basePort, &opts) })\n\t\t\t\tt.Run(\"Get\", func(t *testing.T) { SubTestGet(t, basePort, &opts) })\n\t\t\t\tt.Run(\"Snapshot\", func(t *testing.T) { SubTestSnapshot(t, basePort, &opts) })\n\t\t\t\tt.Run(\"Ping\", func(t *testing.T) { SubTestPing(t, basePort, &opts) })\n\t\t\t\tt.Run(\"RaftShrinkLog\", func(t *testing.T) { SubTestRaftShrinkLog(t, basePort, &opts) })\n\t\t\t\tt.Run(\"RaftStats\", func(t *testing.T) { SubTestRaftStats(t, basePort, &opts) })\n\t\t\t\tt.Run(\"RaftState\", func(t *testing.T) { SubTestRaftState(t, basePort, &opts) })\n\t\t\t\tt.Run(\"AddPeer\", func(t *testing.T) { SubTestAddPeer(t, basePort, &opts) })\n\t\t\t\tt.Run(\"RemovePeer\", func(t *testing.T) { SubTestRemovePeer(t, basePort, &opts) })\n\t\t\t})\n\t\t}()\n\t}\n}\n\nfunc SubTestLeader(t *testing.T, basePort int, opts *Options) {\n\tbaseAddr := fmt.Sprintf(\":%d\", basePort)\n\ttestDo(t, basePort, 0, baseAddr, \"raftleader\")\n\ttestDo(t, basePort, 1, baseAddr, \"raftleader\")\n\ttestDo(t, basePort, 2, baseAddr, \"raftleader\")\n}\n\nfunc SubTestSet(t *testing.T, basePort int, opts *Options) {\n\tbaseAddr := fmt.Sprintf(\":%d\", basePort)\n\ttestDo(t, basePort, 0, \"OK\", \"set\", \"hello\", \"world\")\n\ttestDo(t, basePort, 1, \"TRY \"+baseAddr, \"set\", \"hello\", \"world\")\n\ttestDo(t, basePort, 2, \"TRY \"+baseAddr, \"set\", \"hello\", \"world\")\n}\n\nfunc SubTestGet(t *testing.T, basePort int, opts *Options) {\n\tbaseAddr := fmt.Sprintf(\":%d\", basePort)\n\ttestDo(t, basePort, 0, \"world\", 
\"get\", \"hello\")\n\ttestDo(t, basePort, 1, \"TRY \"+baseAddr, \"set\", \"hello\", \"world\")\n\ttestDo(t, basePort, 2, \"TRY \"+baseAddr, \"set\", \"hello\", \"world\")\n}\n\nfunc SubTestPing(t *testing.T, basePort int, opts *Options) {\n\tfor i := 0; i < 3; i++ {\n\t\ttestDo(t, basePort, i, \"PONG\", \"ping\")\n\t\ttestDo(t, basePort, i, \"HELLO\", \"ping\", \"HELLO\")\n\t\ttestDo(t, basePort, i, \"ERR wrong number of arguments for 'ping' command\", \"ping\", \"HELLO\", \"WORLD\")\n\t}\n}\n\nfunc SubTestRaftShrinkLog(t *testing.T, basePort int, opts *Options) {\n\tfor i := 0; i < 3; i++ {\n\t\tif opts.Backend == Bolt || opts.Backend == LevelDB {\n\t\t\ttestDo(t, basePort, i, \"ERR log is not shrinkable\", \"raftshrinklog\")\n\t\t} else {\n\t\t\ttestDo(t, basePort, i, \"OK\", \"raftshrinklog\")\n\t\t}\n\t\ttestDo(t, basePort, i, \"ERR wrong number of arguments for 'raftshrinklog' command\", \"raftshrinklog\", \"abc\")\n\t}\n}\nfunc SubTestRaftStats(t *testing.T, basePort int, opts *Options) {\n\tfor i := 0; i < 3; i++ {\n\t\tresp := testDo(t, basePort, i, \"???\", \"raftstats\")\n\t\tif !strings.Contains(resp, \"applied_index\") || !strings.Contains(resp, \"num_peers\") {\n\t\t\tt.Fatal(\"expected values\")\n\t\t}\n\t\ttestDo(t, basePort, i, \"ERR wrong number of arguments for 'raftstats' command\", \"raftstats\", \"abc\")\n\t}\n}\nfunc SubTestRaftState(t *testing.T, basePort int, opts *Options) {\n\tfor i := 0; i < 3; i++ {\n\t\tif i == 0 {\n\t\t\ttestDo(t, basePort, i, \"Leader\", \"raftstate\")\n\t\t} else {\n\t\t\ttestDo(t, basePort, i, \"Follower\", \"raftstate\")\n\t\t}\n\t\ttestDo(t, basePort, i, \"ERR wrong number of arguments for 'raftstate' command\", \"raftstate\", \"abc\")\n\t}\n}\nfunc SubTestSnapshot(t *testing.T, basePort int, opts *Options) {\n\t// insert 1000 items\n\tfor i := 0; i < 1000; i++ {\n\t\ttestDo(t, basePort, 0, \"OK\", \"set\", fmt.Sprintf(\"key:%d\", i), fmt.Sprintf(\"val:%d\", i))\n\t}\n\ttestDo(t, basePort, 0, \"OK\", 
\"raftsnapshot\")\n\ttestDo(t, basePort, 1, \"OK\", \"raftsnapshot\")\n\ttestDo(t, basePort, 2, \"OK\", \"raftsnapshot\")\n}\nfunc SubTestAddPeer(t *testing.T, basePort int, opts *Options) {\n\tbaseAddr := fmt.Sprintf(\":%d\", basePort)\n\tgo startTestNode(t, basePort, 3, opts)\n\twaitFor(t, basePort, 3)\n\ttestDo(t, basePort, 3, baseAddr, \"raftleader\")\n\ttestDo(t, basePort, 3, \"TRY \"+baseAddr, \"set\", \"hello\", \"world\")\n\ttestDo(t, basePort, 3, \"OK\", \"raftsnapshot\")\n}\n\nfunc SubTestRemovePeer(t *testing.T, basePort int, opts *Options) {\n\tbaseAddr := fmt.Sprintf(\":%d\", basePort)\n\ttestDo(t, basePort, 1, \"TRY \"+baseAddr, \"raftremovepeer\", fmt.Sprintf(\":%d3\", basePort/10))\n\ttestDo(t, basePort, 0, \"OK\", \"raftremovepeer\", fmt.Sprintf(\":%d3\", basePort/10))\n\ttestDo(t, basePort, 0, \"peer is unknown\", \"raftremovepeer\", fmt.Sprintf(\":%d3\", basePort/10))\n}\n\nfunc BenchmarkCluster(t *testing.B) {\n\tos.RemoveAll(\"data\")\n\tdefer os.RemoveAll(\"data\")\n\tfor i := 0; i < 3; i++ {\n\t\tgo startTestNode(t, 7480, i, &Options{LogOutput: ioutil.Discard})\n\t\twaitFor(t, 7480, i)\n\t}\n\tt.Run(\"PL\", func(t *testing.B) {\n\t\tpl := []int{1, 4, 16, 64}\n\t\tfor i := 0; i < len(pl); i++ {\n\t\t\tfunc(pl int) {\n\t\t\t\tt.Run(fmt.Sprintf(\"%d\", pl), func(t *testing.B) {\n\t\t\t\t\tt.Run(\"Ping\", func(t *testing.B) { SubBenchmarkPing(t, pl) })\n\t\t\t\t\tt.Run(\"Set\", func(t *testing.B) { SubBenchmarkSet(t, pl) })\n\t\t\t\t\tt.Run(\"Get\", func(t *testing.B) { SubBenchmarkGet(t, pl) })\n\t\t\t\t})\n\t\t\t}(pl[i])\n\t\t}\n\t})\n}\nfunc testDial(t testing.TB, node int) (net.Conn, *bufio.ReadWriter) {\n\tconn, err := net.Dial(\"tcp\", fmt.Sprintf(\":748%d\", node))\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\treturn conn, bufio.NewReadWriter(bufio.NewReader(conn), bufio.NewWriter(conn))\n}\nfunc buildCommand(args ...string) []byte {\n\tvar buf []byte\n\tbuf = append(buf, '*')\n\tbuf = append(buf, strconv.FormatInt(int64(len(args)), 
10)...)\n\tbuf = append(buf, '\\r', '\\n')\n\tfor _, arg := range args {\n\t\tbuf = append(buf, '$')\n\t\tbuf = append(buf, strconv.FormatInt(int64(len(arg)), 10)...)\n\t\tbuf = append(buf, '\\r', '\\n')\n\t\tbuf = append(buf, arg...)\n\t\tbuf = append(buf, '\\r', '\\n')\n\t}\n\treturn buf\n}\n\nfunc testConnDo(t testing.TB, rw *bufio.ReadWriter, pl int, expect string, cmd []byte) {\n\tfor i := 0; i < pl; i++ {\n\t\trw.Write(cmd)\n\t}\n\tif err := rw.Flush(); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tbuf := make([]byte, len(expect))\n\tfor i := 0; i < pl; i++ {\n\t\tif _, err := io.ReadFull(rw, buf); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif string(buf) != expect {\n\t\t\tt.Fatalf(\"expected '%v', got '%v'\", expect, string(buf))\n\t\t}\n\t}\n}\n\nfunc SubBenchmarkPing(t *testing.B, pipeline int) {\n\tconn, rw := testDial(t, 0)\n\tdefer conn.Close()\n\tt.ResetTimer()\n\tfor i := 0; i < t.N; i += pipeline {\n\t\tn := pipeline\n\t\tif t.N-i < pipeline {\n\t\t\tn = t.N - i\n\t\t}\n\t\ttestConnDo(t, rw, n, \"+PONG\\r\\n\", []byte(\"*1\\r\\n$4\\r\\nPING\\r\\n\"))\n\t}\n}\n\nfunc SubBenchmarkSet(t *testing.B, pipeline int) {\n\tconn, rw := testDial(t, 0)\n\tdefer conn.Close()\n\tt.ResetTimer()\n\tfor i := 0; i < t.N; i += pipeline {\n\t\tn := pipeline\n\t\tif t.N-i < pipeline {\n\t\t\tn = t.N - i\n\t\t}\n\t\ttestConnDo(t, rw, n, \"+OK\\r\\n\", buildCommand(\"set\", fmt.Sprintf(\"key:%d\", i), fmt.Sprintf(\"val:%d\", i)))\n\t}\n}\n\nfunc SubBenchmarkGet(t *testing.B, pipeline int) {\n\tconn, rw := testDial(t, 0)\n\tdefer conn.Close()\n\tt.ResetTimer()\n\tfor i := 0; i < t.N; i += pipeline {\n\t\tn := pipeline\n\t\tif t.N-i < pipeline {\n\t\t\tn = t.N - i\n\t\t}\n\t\ttestConnDo(t, rw, n, \"$-1\\r\\n\", buildCommand(\"get\", \"key:na\"))\n\t}\n}\n"
  },
  {
    "path": "go.mod",
    "content": "module github.com/tidwall/finn\n\ngo 1.13\n\nrequire (\n\tgithub.com/armon/go-metrics v0.3.0 // indirect\n\tgithub.com/boltdb/bolt v1.3.1 // indirect\n\tgithub.com/garyburd/redigo v1.6.0 // indirect\n\tgithub.com/hashicorp/go-msgpack v0.5.5 // indirect\n\tgithub.com/hashicorp/raft v0.1.0\n\tgithub.com/syndtr/goleveldb v1.0.0 // indirect\n\tgithub.com/tidwall/match v1.0.1 // indirect\n\tgithub.com/tidwall/raft-boltdb v0.0.0-20160909211738-25b87f2c5677\n\tgithub.com/tidwall/raft-fastlog v0.0.0-20160922202426-2f0d0a0ce558\n\tgithub.com/tidwall/raft-leveldb v0.0.0-20170127185243-ada471496dc9\n\tgithub.com/tidwall/raft-redcon v0.1.0\n\tgithub.com/tidwall/redcon v1.0.0\n\tgithub.com/tidwall/redlog v0.0.0-20180507234857-bbed90f29893\n\tgolang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413 // indirect\n)\n"
  },
  {
    "path": "go.sum",
    "content": "github.com/DataDog/datadog-go v2.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=\ngithub.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=\ngithub.com/armon/go-metrics v0.0.0-20190430140413-ec5e00d3c878 h1:EFSB7Zo9Eg91v7MJPVsifUysc/wPdN+NOnVe6bWbdBM=\ngithub.com/armon/go-metrics v0.0.0-20190430140413-ec5e00d3c878/go.mod h1:3AMJUQhVx52RsWOnlkpikZr01T/yAVN2gn0861vByNg=\ngithub.com/armon/go-metrics v0.3.0 h1:B7AQgHi8QSEi4uHu7Sbsga+IJDU+CENgjxoo81vDUqU=\ngithub.com/armon/go-metrics v0.3.0/go.mod h1:zXjbSimjXTd7vOpY8B0/2LpvNvDoXBuplAD+gJD3GYs=\ngithub.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=\ngithub.com/boltdb/bolt v1.3.1 h1:JQmyP4ZBrce+ZQu0dY660FMfatumYDLun9hBCUVIkF4=\ngithub.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps=\ngithub.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=\ngithub.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=\ngithub.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=\ngithub.com/garyburd/redigo v1.6.0 h1:0VruCpn7yAIIu7pWVClQC8wxCJEcG3nyzpMSHKi1PQc=\ngithub.com/garyburd/redigo v1.6.0/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=\ngithub.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/snappy v0.0.0-20180518054509-2e65f85255db h1:woRePGFeVFfLKN/pOkfl+p/TAqKOfFu+7KPlMVpok/w=\ngithub.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=\ngithub.com/hashicorp/go-cleanhttp v0.5.0/go.mod 
h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=\ngithub.com/hashicorp/go-hclog v0.9.1 h1:9PZfAcVEvez4yhLH2TBU64/h/z4xlFI80cWXRrxuKuM=\ngithub.com/hashicorp/go-hclog v0.9.1/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ=\ngithub.com/hashicorp/go-immutable-radix v1.0.0 h1:AKDB1HM5PWEA7i4nhcpwOrO2byshxBjXVn/J/3+z5/0=\ngithub.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=\ngithub.com/hashicorp/go-msgpack v0.5.5 h1:i9R9JSrqIz0QVLz3sz+i3YJdT7TTSLcfLLzJi9aZTuI=\ngithub.com/hashicorp/go-msgpack v0.5.5/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=\ngithub.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=\ngithub.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=\ngithub.com/hashicorp/golang-lru v0.5.0 h1:CL2msUPvZTLb5O648aiLNJw3hnBxN2+1Jq8rCOH9wdo=\ngithub.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=\ngithub.com/hashicorp/raft v0.0.0-20160824023112-5f09c4ffdbcd h1:gN6xm3iAclW5DKJWYiXO8tZN25Zy7UsB6Wh/85OB8Bg=\ngithub.com/hashicorp/raft v0.0.0-20160824023112-5f09c4ffdbcd/go.mod h1:DVSAWItjLjTOkVbSpWQ0j0kUADIvDaCtBxIcbNAQLkI=\ngithub.com/hashicorp/raft v0.1.0 h1:OC+j7LWkv7x8s9c5wnXCEgtP1J0LDw2fKNxUiYCZFNo=\ngithub.com/hashicorp/raft v0.1.0/go.mod h1:DVSAWItjLjTOkVbSpWQ0j0kUADIvDaCtBxIcbNAQLkI=\ngithub.com/hashicorp/raft v1.1.1 h1:HJr7UE1x/JrJSc9Oy6aDBHtNHUUBHjcQjTgvUVihoZs=\ngithub.com/hashicorp/raft v1.1.1/go.mod h1:vPAJM8Asw6u8LxC3eJCUZmRP/E4QmUGE1R7g7k8sG/8=\ngithub.com/hashicorp/raft-boltdb v0.0.0-20171010151810-6e5ba93211ea/go.mod h1:pNv7Wc3ycL6F5oOWn+tPGo2gWD4a5X+yp/ntwdKLjRk=\ngithub.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=\ngithub.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=\ngithub.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=\ngithub.com/onsi/ginkgo 
v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=\ngithub.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=\ngithub.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=\ngithub.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=\ngithub.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM=\ngithub.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=\ngithub.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=\ngithub.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=\ngithub.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=\ngithub.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=\ngithub.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=\ngithub.com/syndtr/goleveldb v1.0.0 h1:fBdIW9lB4Iz0n9khmH8w27SJ3QEJ7+IgjPEwGSZiFdE=\ngithub.com/syndtr/goleveldb v1.0.0/go.mod h1:ZVVdQEZoIme9iO1Ch2Jdy24qqXrMMOU6lpPAyBWyWuQ=\ngithub.com/tidwall/match v1.0.1 h1:PnKP62LPNxHKTwvHHZZzdOAOCtsJTjo6dZLCwpKm5xc=\ngithub.com/tidwall/match v1.0.1/go.mod h1:LujAq0jyVjBy028G1WhWfIzbpQfMO8bBZ6Tyb0+pL9E=\ngithub.com/tidwall/raft v1.0.0 h1:XuXkumePQBVcBYtfC57f7dXRM4WBZei3lZkK06+0D8I=\ngithub.com/tidwall/raft v1.0.0/go.mod h1:uMALL7ToL5LoHWCGwLE1uROh9W0ormbwrdxg2uSo7Oo=\ngithub.com/tidwall/raft-boltdb v0.0.0-20160909211738-25b87f2c5677 h1:8FkXr+GCV4wb8WAct/V1vKB/Ivy11Y+fm919EHgdfWA=\ngithub.com/tidwall/raft-boltdb v0.0.0-20160909211738-25b87f2c5677/go.mod h1:O7b2tvwZmC+IFu8djLOZj0jc/tjssDPiJ8xIt+U2jTU=\ngithub.com/tidwall/raft-boltdb v0.0.0-20180905173017-ae4e25b230d8 
h1:D9uqhiFILz+qx8y2LtX70pDvjCYXEghnUTZ934M+fuY=\ngithub.com/tidwall/raft-boltdb v0.0.0-20180905173017-ae4e25b230d8/go.mod h1:O7b2tvwZmC+IFu8djLOZj0jc/tjssDPiJ8xIt+U2jTU=\ngithub.com/tidwall/raft-fastlog v0.0.0-20160922202426-2f0d0a0ce558 h1:hQYEIfMzrH6LRzjz7Jp5Rv8jrty1bAR5M0DjOYSxxks=\ngithub.com/tidwall/raft-fastlog v0.0.0-20160922202426-2f0d0a0ce558/go.mod h1:KNwBhka/a5Ucw5bfEzKHTEKuCO2Do1tKs+kDdu3Sbb4=\ngithub.com/tidwall/raft-fastlog v0.0.0-20190329194628-f798a12ed2b3 h1:Km24Wbatpk4a0cQlmW1lGvyjzDD2biQlaqtqR1G7Cic=\ngithub.com/tidwall/raft-fastlog v0.0.0-20190329194628-f798a12ed2b3/go.mod h1:KNwBhka/a5Ucw5bfEzKHTEKuCO2Do1tKs+kDdu3Sbb4=\ngithub.com/tidwall/raft-leveldb v0.0.0-20170127185243-ada471496dc9 h1:Z5QMqF/MSuvnrTibHqs/xx+ZE5gypLV02YU8Ry4kJ7A=\ngithub.com/tidwall/raft-leveldb v0.0.0-20170127185243-ada471496dc9/go.mod h1:KNAMyK8s/oUOTbIL/T07fTL6/EgJfHhK8XeeEPq35eU=\ngithub.com/tidwall/raft-leveldb v0.0.0-20180905172604-d81b19dd795a h1:wSOV25XXv0kdoWUEqCYEgaPAgWm5mdi3c1wkisYdQaM=\ngithub.com/tidwall/raft-leveldb v0.0.0-20180905172604-d81b19dd795a/go.mod h1:KNAMyK8s/oUOTbIL/T07fTL6/EgJfHhK8XeeEPq35eU=\ngithub.com/tidwall/raft-leveldb v0.0.0-20190319171839-8607dc18110d h1:DypM2TD6Pdev1QH5WrwLO09jB1oq7KvWpYvnqCS3Vow=\ngithub.com/tidwall/raft-leveldb v0.0.0-20190319171839-8607dc18110d/go.mod h1:KNAMyK8s/oUOTbIL/T07fTL6/EgJfHhK8XeeEPq35eU=\ngithub.com/tidwall/raft-redcon v0.1.0 h1:qwYaFaAVNFleY2EFm0j7UK4vEpoNa19ohH7U4idbg+s=\ngithub.com/tidwall/raft-redcon v0.1.0/go.mod h1:YhoECfJs8MXbrwak9H7wKYDMZ3rMaB7el7zZ7MRw9Xw=\ngithub.com/tidwall/redcon v1.0.0 h1:D4AzzJ81Afeh144fgnj5H0aSVPBBJ5RI9Rzj0zThU+E=\ngithub.com/tidwall/redcon v1.0.0/go.mod h1:bdYBm4rlcWpst2XMwKVzWDF9CoUxEbUmM7CQrKeOZas=\ngithub.com/tidwall/redlog v0.0.0-20180507234857-bbed90f29893 h1:aGyVYs0o1pThR9i+SuYCG/VqWibHkUXl9kIMZGhAXDw=\ngithub.com/tidwall/redlog v0.0.0-20180507234857-bbed90f29893/go.mod h1:NssoNA+Uwqd5WHKkVwAzO7AT6VuG3wiC8r5nBqds3Ao=\ngithub.com/tv42/httpunix 
v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=\ngolang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=\ngolang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413 h1:ULYEB3JvPRE/IfO+9uO7vKV/xzVTO7XPAwm8xbf4w2g=\ngolang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=\ngolang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=\ngolang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=\ngolang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=\ngolang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190412213103-97732733099d h1:+R4KGOnez64A81RvjARKc4UT5/tI9ujCIVX+P5KiHuI=\ngolang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20190523142557-0e01d883c5c5 h1:sM3evRHxE/1RuMe1FYAL3j7C7fUfIjkbE+NiDAYUF8U=\ngolang.org/x/sys v0.0.0-20190523142557-0e01d883c5c5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=\ngopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=\ngopkg.in/yaml.v2 
v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\n"
  }
]