Full Code of orca-zhang/ecache for AI

Repository: orca-zhang/ecache
Branch: master
Commit: e7223905abc2
Files: 18
Total size: 100.3 KB

Directory structure:
gitextract_norfhvbw/

├── .semaphore/
│   └── semaphore.yml
├── LICENSE
├── README.md
├── README_en.md
├── dist/
│   ├── dist.go
│   ├── dist_test.go
│   ├── goredis/
│   │   ├── goredis.go
│   │   ├── goredis_test.go
│   │   └── v7/
│   │       ├── goredis.go
│   │       └── goredis_test.go
│   └── redigo/
│       ├── redigo.go
│       └── redigo_test.go
├── ecache.go
├── ecache_test.go
├── go.mod
├── go.sum
└── stats/
    ├── stats.go
    └── stats_test.go

================================================
FILE CONTENTS
================================================

================================================
FILE: .semaphore/semaphore.yml
================================================
version: v1.0
name: Go
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004
blocks:
  - name: 'Test '
    task:
      jobs:
        - name: go test
          commands:
            - sudo apt-get install redis-server -y
            - sudo service redis-server start
            - sudo netstat -anpt | grep 6379
            - sudo cat /etc/redis/redis.conf
            - sem-version go 1.14
            - export GO111MODULE=on
            - export GOPATH=~/go
            - 'export PATH=/home/semaphore/go/bin:$PATH'
            - checkout
            - go get ./...
            - go build -v .
            - go test -coverprofile=coverage.txt -covermode=atomic -v ./...
            - 'bash <(curl -s https://codecov.io/bash) -t 08147e5d-b7a3-4d7a-9bee-ae4cedb39293 || echo "Codecov did not collect coverage reports"'


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2021 Orca

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
[English README | 英文说明](README_en.md)

# 🦄 ecache
<p align="center">
  <a href="#">
    <img src="https://github.com/orca-zhang/ecache/raw/master/doc/logo.svg">
  </a>
</p>

<p align="center">
  <a href="/go.mod#L3" alt="go version">
    <img src="https://img.shields.io/badge/go%20version-%3E=1.11-brightgreen?style=flat"/>
  </a>
  <a href="https://goreportcard.com/badge/github.com/orca-zhang/ecache" alt="goreport">
    <img src="https://goreportcard.com/badge/github.com/orca-zhang/ecache">
  </a>
  <a href="https://orca-zhang.semaphoreci.com/projects/ecache" alt="building status">
    <img src="https://orca-zhang.semaphoreci.com/badges/ecache.svg?style=shields">
  </a>
  <a href="https://codecov.io/gh/orca-zhang/ecache" alt="codecov">
    <img src="https://codecov.io/gh/orca-zhang/ecache/branch/master/graph/badge.svg?token=F6LQbADKkq"/>
  </a>
  <a href="https://github.com/orca-zhang/ecache/blob/master/LICENSE" alt="license MIT">
    <img src="https://img.shields.io/badge/license-MIT-brightgreen.svg?style=flat">
  </a>
  <a href="https://app.fossa.com/projects/git%2Bgithub.com%2Forca-zhang%2Fcache?ref=badge_shield" alt="FOSSA Status">
    <img src="https://app.fossa.com/api/projects/git%2Bgithub.com%2Forca-zhang%2Fcache.svg?type=shield"/>
  </a>
  <a href="https://benchplus.github.io/gocache/dev/bench/" alt="continuous benchmark">
    <img src="https://img.shields.io/badge/benchmark-click--me-brightgreen.svg?style=flat"/>
  </a>
</p>
<p align="center">A lightweight in-memory cache with a minimalist design: high performance, concurrency-safe, with optional distributed consistency</p>

## Features

- 🤏 Under 300 lines of code; integrate in about 30 seconds
- 🚀 High performance, minimalist design, concurrency-safe
- 🌈 Supports both `LRU` and [`LRU-2`](#lru-2-mode) modes
- 🦖 Extra [plugin](#distributed-consistency-plugin) for distributed consistency

## Benchmarks

> :snail: means very slow, :airplane: means fast, :rocket: means very fast

> [👁️‍🗨️ See the benchmark cases](https://github.com/benchplus/gocache) [👁️‍🗨️ See the results](https://benchplus.github.io/gocache/dev/bench/) (lower is better, except for cache hit rate)

<table style="text-align: center">
   <tr>
      <td></td>
      <td><a href="https://github.com/allegro/bigcache">bigcache</a></td>
      <td><a href="https://github.com/FishGoddess/cachego">cachego</a></td>
      <td><a href="https://github.com/orca-zhang/ecache"><strong>ecache🌟</strong></a></td>
      <td><a href="https://github.com/coocood/freecache">freecache</a></td>
      <td><a href="https://github.com/bluele/gcache">gcache</a></td>
      <td><a href="https://github.com/patrickmn/go-cache">gocache</a></td>
   </tr>
   <tr>
      <td>PutInt</td>
      <td>:airplane:</td>
      <td></td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>GetInt</td>
      <td>:airplane:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>Put1K</td>
      <td>:airplane:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>Put1M</td>
      <td>:snail:</td>
      <td></td>
      <td>:rocket:</td>
      <td>:snail:</td>
      <td>:airplane:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>PutTinyObject</td>
      <td>:airplane:</td>
      <td></td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
      <td></td>
   </tr>
   <tr>
      <td>ChangeOutAllInt</td>
      <td>:airplane:</td>
      <td></td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>HeavyReadInt</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td></td>
      <td></td>
      <td>:rocket:</td>
   </tr>
   <tr>
      <td>HeavyReadIntGC</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>HeavyWriteInt</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>HeavyWriteIntGC</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
      <td>:airplane:</td>
      <td></td>
      <td></td>
   </tr>
   <tr>
      <td>HeavyWrite1K</td>
      <td>:snail:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>HeavyWrite1KGC</td>
      <td>:snail:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>HeavyMixedInt</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
      <td>:rocket:</td>
   </tr>
   <tr>
    <td colspan="7">
      <a href="https://github.com/FishGoddess/cachego"><strong>FishGoddess/cachego</strong></a> and <a href="https://github.com/patrickmn/go-cache"><strong>patrickmn/go-cache</strong></a> are simple map-with-expiration implementations, so they have no hit-rate benchmark
    </td>
   </tr>
   <tr>
    <td colspan="7">
      <a href="https://github.com/kpango/gache"><strong>kpango/gache</strong></a> & <a href="https://github.com/hlts2/gocache"><strong>hlts2/gocache</strong></a> did not perform well, so they were removed from the list
    </td>
   </tr>
   <tr>
    <td colspan="7">
      <a href="https://github.com/patrickmn/go-cache"><strong>patrickmn/go-cache</strong></a> uses FIFO mode; the other libraries use LRU mode
    </td>
   </tr>
</table>

![](https://github.com/orca-zhang/ecache/raw/master/doc/benchmark.png)

> gc pause test results, [code provided by `bigcache`](https://github.com/allegro/bigcache-bench) (lower is better)
![](https://github.com/orca-zhang/ecache/raw/master/doc/gc.png)

### Currently being validated in production under heavy traffic
- [`Verified`] Official-account backend (hundreds of QPS): user info, order info, configuration
- [`Verified`] Push system (tens of thousands of QPS): tunable system configuration, message deduplication, fixed-info cache
- [`Verified`] Comment system (tens of thousands of QPS): user info, distributed consistency plugin

## How To Use

#### Import the package (~5s)
``` go
import (
    "time"

    "github.com/orca-zhang/ecache"
)
```

#### Define an instance (~5s)
> It can live anywhere (global is fine); defining it close to where it is used is recommended
``` go
var c = ecache.NewLRUCache(16, 200, 10 * time.Second)
```

#### Put an item (~5s)
``` go
c.Put("uid1", o) // `o` can be any variable, usually an object pointer holding fixed info, e.g. `*UserInfo`
```

#### Get an item (~5s)
``` go
if v, ok := c.Get("uid1"); ok {
    return v.(*UserInfo) // assert the type directly; we control what was stored
}
// on a cache miss, fall back to redis/db
```

#### Delete an item (~5s)
> Call this wherever the underlying info changes
``` go
c.Del("uid1")
```

#### Download the package (~5s)

> Without go modules:\
> sh>  ```go get -u github.com/orca-zhang/ecache```

> With go modules:\
> sh>  ```go mod tidy && go mod download```

#### Run it
> 🎉 Done. 🚀 Performance boosted X times!\
> sh>  ```go run <your main.go>```

## Parameters

- `NewLRUCache`
  - The first parameter is the number of buckets, used to spread lock contention; each bucket has its own lock, with a maximum value of 65535 (up to 65536 buckets)
    - Don't overthink it: pick any number and `ecache` will round it to a suitable one for the mask calculation used later
  - The second parameter is the maximum number of items each bucket can hold, up to 65535
    - When `ecache` is completely full it holds `first parameter X second parameter` items, i.e. up to about 4.2 billion
  - \[`Optional`\] The third parameter is the expiration time of each item
    - `ecache` uses an internal timer to improve performance: 100ms precision by default, calibrated every second
    - Omit it, or pass `0`, to make items never expire
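The power-of-two rounding mentioned above can be sketched in a few lines (the function name `nextPow2` is hypothetical; ecache's internal rounding may differ in detail):

``` go
package main

import "fmt"

// nextPow2 rounds n up to the nearest power of two, as a cache might do
// so that bucket selection becomes a cheap mask: idx = hash & (n-1).
func nextPow2(n uint32) uint32 {
	n--
	n |= n >> 1
	n |= n >> 2
	n |= n >> 4
	n |= n >> 8
	n |= n >> 16
	return n + 1
}

func main() {
	for _, n := range []uint32{16, 100, 1000} {
		fmt.Printf("%d -> %d buckets\n", n, nextPow2(n))
	}
}
```

With a power-of-two bucket count, the modulo in bucket selection is replaced by a single AND, which is why any requested count is rounded up.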

## Best Practices

- Values of any type are supported
  - Three put methods are provided, `Put`/`PutInt64`/`PutBytes`, to be paired with `Get`/`GetInt64`/`GetBytes` (the latter two have lower GC overhead)
  - Prefer storing pointers for complex objects (⚠️ once stored, never modify the fields, even after reading the item back out; the item may be accessed concurrently by others)
    - If you need to modify it, either copy the fields out one by one, or [make a deep copy with copier and modify the copy](#modifying-part-of-the-data-stored-as-an-object-pointer)
    - You can also store objects by value (slightly slower than storing pointers, because reads copy the value)
    - Cache objects as close to the top of the business stack, and as large, as possible (saves time assembling and organizing data)
- If you don't want scan-like requests to evict hot data, switch to [`LRU-2` mode](#lru-2-mode) at a small cost (💬 [What is LRU-2](#what-is-lru-2))
  - Sizing `LRU2` at 1/4 and `LRU` at 3/4 of the capacity tends to work well
- One instance can store several object types; add a prefix to the key, separated by a colon
- For heavy concurrent access, try `256` or `1024` buckets, or even more
- It can serve as a **buffer queue** to merge updates and reduce disk flushes (when the data can be rebuilt or its loss on power failure is tolerable)
  - To do so, [attach an `Inspector`](#injecting-an-inspector) to listen for eviction events
  - On shutdown, or periodically, call [`Walk`](#walking-all-items) to flush the data to storage
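The buffer-queue idea, merging repeated updates so each key is flushed once, can be sketched with the standard library alone (this is an illustrative stand-in; in practice you would wire ecache's `Inspect` and `Walk` into the same pattern):

``` go
package main

import (
	"fmt"
	"sync"
)

// Buffer coalesces repeated updates to the same key so that a periodic
// Flush writes each key once, instead of once per update.
type Buffer struct {
	mu      sync.Mutex
	pending map[string][]byte
}

func NewBuffer() *Buffer { return &Buffer{pending: map[string][]byte{}} }

// Update records the latest value for key; earlier values are merged away.
func (b *Buffer) Update(key string, val []byte) {
	b.mu.Lock()
	b.pending[key] = val
	b.mu.Unlock()
}

// Flush hands the merged batch to sink (e.g. a disk or db writer),
// resets the buffer, and returns how many keys were written.
func (b *Buffer) Flush(sink func(key string, val []byte)) int {
	b.mu.Lock()
	batch := b.pending
	b.pending = map[string][]byte{}
	b.mu.Unlock()
	for k, v := range batch {
		sink(k, v)
	}
	return len(batch)
}

func main() {
	b := NewBuffer()
	b.Update("uid1", []byte("v1"))
	b.Update("uid1", []byte("v2")) // merged: only the last value is flushed
	b.Update("uid2", []byte("v1"))
	n := b.Flush(func(k string, v []byte) { fmt.Printf("flush %s=%s\n", k, v) })
	fmt.Println("flushed", n, "keys")
}
```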

## Special Scenarios

### Integer keys, integer values and byte slices
``` go
// integer key
c.Put(strconv.FormatInt(d, 10), o) // d is an `int64`

// integer value
c.PutInt64("uid1", int64(1))
if d, ok := c.GetInt64("uid1"); ok {
    // d is the `int64` value 1
}

// byte slice
c.PutBytes("uid1", b) // b is a `[]byte`
if b, ok := c.GetBytes("uid1"); ok {
    // b is a `[]byte`
}
```

### LRU-2 mode

- 💬 [What is LRU-2](#what-is-lru-2)

> Just append `.LRU2(<num>)` to `NewLRUCache()`; `<num>` is the item capacity of the `LRU-2` hot queue (per bucket)
``` go
var c = ecache.NewLRUCache(16, 200, 10 * time.Second).LRU2(1024)
```

### Empty-cache sentinel (skip re-querying objects that don't exist)
``` go
// just put `nil`
c.Put("uid1", nil)
```

``` go
// reading is almost the same as usual
if v, ok := c.Get("uid1"); ok {
  if v == nil { // ⚠️ check for the empty-cache sentinel here
    return nil  // it is a sentinel: return "not found", or keep `uid1` out of the list of keys to re-query
  }
  return v.(*UserInfo)
}
// on a cache miss, fall back to redis/db
```

### Modifying part of the data, stored as an object pointer

> For example, we fetched a `*UserInfo` cache entry `v` from `ecache` and need to change its status field
``` go
import (
    "github.com/jinzhu/copier"
)
```

``` go
o := &UserInfo{}
copier.Copy(o, v) // deep-copy `v` into `o`
o.Status = 1      // modify the copy's field
```

### Injecting an inspector

``` go
// inspector - can be used for statistics, buffer queues, etc.
//   `action`:PUT, `status`: evicted=-1, updated=0, added=1
//   `action`:GET, `status`: miss=0, hit=1
//   `action`:DEL, `status`: miss=0, hit=1
//   `iface`/`bytes` are non-nil only when `status` is non-zero or `action` is PUT
type inspector func(action int, key string, iface *interface{}, bytes []byte, status int)
```

- Usage
``` go
cache.Inspect(func(action int, key string, iface *interface{}, bytes []byte, status int) {
  // TODO: do whatever you need here
  //     inspectors run in the order they were injected
  //     ⚠️ for time-consuming work, hand off to a channel so the current goroutine is not blocked

  // - how to read the value -
  //   - `Put`:      `*iface`
  //   - `PutBytes`: `bytes`
  //   - `PutInt64`: `ecache.ToInt64(bytes)`
})
```

### Walking all items

``` go
  // only visits items that exist and have not expired
  cache.Walk(func(key string, iface *interface{}, bytes []byte, expireAt int64) bool {
    // `key` is the key, `iface`/`bytes` hold the value, `expireAt` is the expiration timestamp

    // - how to read the value -
    //   - `Put`:      `*iface`
    //   - `PutBytes`: `bytes`
    //   - `PutInt64`: `ecache.ToInt64(bytes)`
    return true // return whether to continue walking
  })
```

## Cache Usage Statistics

> The implementation is dead simple: it injects an inspector, adding just one atomic operation per cache operation; see the [code](/stats/stats.go#L34)

#### Import the stats package
``` go
import (
    "github.com/orca-zhang/ecache/stats"
)
```

#### Bind cache instances
> The name is a custom pool name; statistics are aggregated per name\
> ⚠️ bindings can be declared at global scope
``` go
var _ = stats.Bind("user", c)
var _ = stats.Bind("user", c0, c1, c2)
var _ = stats.Bind("token", caches...)
```

#### Read the statistics
``` go
stats.Stats().Range(func(k, v interface{}) bool {
    fmt.Printf("stats: %s %+v\n", k, v) // k is the pool name, v is a (*stats.StatsNode)
    // it counts every kind of event; use the `HitRate` method to get the cache hit rate
    return true
})
```

## Distributed Consistency Plugin

- 💬 [How it works](#how-the-distributed-consistency-plugin-works)

### Import the dist package
``` go
import (
    "github.com/orca-zhang/ecache/dist"
)
```

### Bind cache instances
> The name is a custom pool name; instances are aggregated per name\
> ⚠️ bindings can be declared at global scope and do not depend on initialization order
``` go
var _ = dist.Bind("user", c)
var _ = dist.Bind("user", c0, c1, c2)
var _ = dist.Bind("token", caches...)
```

### Bind a redis client
> redigo and go-redis are supported out of the box; for other libraries, implement the dist.RedisCli interface yourself, or file an issue

#### go-redis v7 and below
``` go
import (
    "github.com/orca-zhang/ecache/dist/goredis/v7"
)

dist.Init(goredis.Take(redisCli)) // redisCli is a *redis.Client
dist.Init(goredis.Take(redisCli, 100000)) // the second parameter is the channel buffer size, default 100
```

#### go-redis v8 and above
``` go
import (
    "github.com/orca-zhang/ecache/dist/goredis"
)

dist.Init(goredis.Take(redisCli)) // redisCli is a *redis.Client
dist.Init(goredis.Take(redisCli, 100000)) // the second parameter is the channel buffer size, default 100
```

#### redigo
> ⚠️ `github.com/gomodule/redigo` requires at least `go 1.14`
``` go
import (
    "github.com/orca-zhang/ecache/dist/redigo"
)

dist.Init(redigo.Take(pool)) // pool is a *redis.Pool
```

#### Actively notify all nodes and all instances to delete (including the local one)
> Call this when the data in the db changes or is deleted\
> On error (e.g. not initialized, or a network failure) it degrades to deleting from all local instances only
``` go
dist.OnDel("user", "uid1") // "user" is the pool name, "uid1" is the key to delete
```

## Upgrade Guide for Users of [`lrucache`](http://github.com/orca-zhang/lrucache)

- Only four steps:
1. Change the import `github.com/orca-zhang/lrucache` to `github.com/orca-zhang/ecache`
2. Change `lrucache.NewSyncCache` to `ecache.NewLRUCache`
3. The third parameter is no longer in seconds by default; multiply by `time.Second`
4. Change the `Delete` method to `Del`

# Don't Leave Empty-Handed

- Since you're here, learn something before you go!
- I want you to understand what `ecache` does, and why it does it that way

## What is a local in-memory cache

---
    L1 cache reference ......................... 0.5 ns
    Branch mispredict ............................ 5 ns
    L2 cache reference ........................... 7 ns
    Mutex lock/unlock ........................... 25 ns
    Main memory reference ...................... 100 ns
    Compress 1K bytes with Zippy ............. 3,000 ns =   3 µs
    Send 2K bytes over 1 Gbps network ....... 20,000 ns =  20 µs
    Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
    Round trip within same datacenter ...... 500,000 ns = 0.5 ms
    Send packet CA <-> Netherlands ..... 150,000,000 ns = 150 ms

- As the table shows, memory access is roughly one thousand to ten thousand times faster than a network access (even within the same datacenter)!
- More than one engineer has told me "Cache? Just use redis". But redis is not a silver bullet; in some ways it can even be a nightmare (cache consistency, of course... 😄)
- Memory operations are so fast that, next to redis/db, their cost is negligible. Say a query API serves 1000 QPS and we cache results for 1 second: within that second no request hits redis/db, so back-end traffic drops to 1/1000 (in the ideal case), a 1000x gain on the redis/db side. Sounds great, doesn't it?
- Keep reading, you'll fall in love with it!

### Use cases: what problems does it solve?

- High-concurrency, high-traffic scenarios
  - Caching hot data (e.g. a popular live-stream room)
  - Absorbing QPS spikes (e.g. breaking news in a feed)
  - Reducing latency and congestion (e.g. pages accessed repeatedly within a short time)
- Cutting costs
  - Single-node setups (raise the QPS ceiling without deploying redis or memcache)
  - Downsizing redis and db instances (most requests are intercepted by the cache)
- Rarely-changing data (read-heavy, write-light)
  - e.g. configuration, which is read in many places and has an amplification effect; hot config keys can even lead to misjudging and over-provisioning redis/db instance sizes
- Data that tolerates brief inconsistency
  - User avatars, nicknames, product stock (the actual order is re-checked against the db), etc.
  - Modified configuration (with a 10-second TTL, changes take effect within 10 seconds at most)
- Buffer queue: merging updates to reduce disk flushes
  - Strong consistency is possible by patching queries (in a distributed setup, the load balancer must route the same user/device to the same node)
  - Applicable when the data can be rebuilt or its loss on power failure is tolerable

## Design

> `ecache` is the upgraded version of the [`lrucache`](http://github.com/orca-zhang/lrucache) library

- The bottom layer is a basic `LRU` (least recently used) built from a native map plus a doubly linked list
  - PS: my other implementations ([go](https://github.com/orca-zhang/lrucache) / [c++](https://github.com/ez8-co/linked_hash) / [js](https://github.com/orca-zhang/ecache.js)) all beat 100% of submissions on leetcode
- The second layer adds bucketing, concurrency control and expiration control (it automatically picks a power-of-two bucket count for fast mask calculation)
- Layer 2.5 implements the `LRU-2` capability in a very simple way, in under 20 lines; read the source directly (search for `LRU-2`)

### What is LRU
- Evict the least recently used item first
- On every access, the item is refreshed to the front of the queue
- When the queue is full and a new item is written, the item at the tail, i.e. the least recently used one, is evicted first
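The three rules above can be sketched with a textbook map-plus-linked-list LRU (this is an illustrative sketch, not ecache's actual code, which avoids the dummy head and pre-allocates its list):

``` go
package main

import (
	"container/list"
	"fmt"
)

// LRU pairs a map (O(1) lookup) with a doubly linked list that orders
// items from most to least recently used.
type LRU struct {
	cap   int
	ll    *list.List               // front = most recently used
	items map[string]*list.Element // key -> list node
}

type entry struct {
	key string
	val interface{}
}

func NewLRU(cap int) *LRU {
	return &LRU{cap: cap, ll: list.New(), items: map[string]*list.Element{}}
}

func (c *LRU) Get(key string) (interface{}, bool) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el) // every access refreshes the item to the front
		return el.Value.(*entry).val, true
	}
	return nil, false
}

func (c *LRU) Put(key string, val interface{}) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).val = val
		c.ll.MoveToFront(el)
		return
	}
	if c.ll.Len() >= c.cap { // full: evict the least recently used (tail)
		tail := c.ll.Back()
		delete(c.items, tail.Value.(*entry).key)
		c.ll.Remove(tail)
	}
	c.items[key] = c.ll.PushFront(&entry{key, val})
}

func main() {
	c := NewLRU(2)
	c.Put("a", 1)
	c.Put("b", 2)
	c.Get("a")    // "a" becomes most recently used
	c.Put("c", 3) // evicts "b", the least recently used
	_, ok := c.Get("b")
	fmt.Println("b present:", ok)
}
```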

### What is LRU-2
- `LRU-K` keeps items accessed fewer than K times in a separate `LRU` queue, and items accessed K or more times in another
- It mainly protects against scan-like queries: a batch cache fill can easily evict items that were genuinely hot
- For simplicity we implement `LRU-2`: an item moves to the hot queue on its second access, and no access count is recorded
- It mainly improves the cache hit rate for hot keys
- Very similar to mysql's [buffer pool LRU algorithm](https://dev.mysql.com/doc/refman/5.7/en/innodb-buffer-pool.html)
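The promote-on-second-access policy can be sketched with two queues (an illustrative sketch of the idea, not ecache's actual implementation; values are omitted to keep the focus on queue movement):

``` go
package main

import (
	"container/list"
	"fmt"
)

// lru2 sketches LRU-2: new items land in a cold LRU queue, and an item
// is promoted to the hot queue on its second access. Scan-like traffic
// only churns the cold queue, so hot items survive.
type lru2 struct {
	cold, hot       *list.List
	coldCap, hotCap int
	where           map[string]*list.Element // key -> node (in cold or hot)
	inHot           map[string]bool
}

func newLRU2(coldCap, hotCap int) *lru2 {
	return &lru2{cold: list.New(), hot: list.New(), coldCap: coldCap,
		hotCap: hotCap, where: map[string]*list.Element{}, inHot: map[string]bool{}}
}

func (c *lru2) evict(q *list.List) {
	tail := q.Back()
	delete(c.where, tail.Value.(string))
	delete(c.inHot, tail.Value.(string))
	q.Remove(tail)
}

func (c *lru2) Touch(key string) {
	if el, ok := c.where[key]; ok {
		if c.inHot[key] {
			c.hot.MoveToFront(el) // already hot: just refresh its position
			return
		}
		c.cold.Remove(el) // second access: promote cold -> hot
		if c.hot.Len() >= c.hotCap {
			c.evict(c.hot)
		}
		c.where[key] = c.hot.PushFront(key)
		c.inHot[key] = true
		return
	}
	if c.cold.Len() >= c.coldCap { // first access: insert into cold
		c.evict(c.cold)
	}
	c.where[key] = c.cold.PushFront(key)
}

func main() {
	c := newLRU2(2, 2)
	c.Touch("hot1")
	c.Touch("hot1") // promoted to the hot queue
	c.Touch("scan1")
	c.Touch("scan2")
	c.Touch("scan3") // scans churn the cold queue only
	_, ok := c.where["hot1"]
	fmt.Println("hot1 survived the scan:", ok)
}
```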

### How the distributed consistency plugin works

- It simply uses redis's pubsub feature
- When cached info is updated, it actively broadcasts the change to all nodes
- Strictly speaking, it only narrows the inconsistency window (there is network latency, and delivery is not guaranteed)
- ⚠️ Caveats:
  - Use it sparingly; it fits write-once-read-many (`WORM`) scenarios
    - redis is slower than memory, and broadcast-style communication amplifies writes
  - The following situations degrade it (the window grows), but the current node at least stays strongly consistent:
    - redis unavailable, network errors
    - the consumer goroutine panics
    - some nodes are not yet live with the plugin (`canary` releases, or mid-deploy), e.g.
      - a service already using `ecache` that adds this plugin for the first time
      - newly cached data, or newly added delete operations

### On performance

- Locks are released without defer
- No asynchronous cleanup (pointless; evicting on write spreads the cost more sensibly and avoids jitter)
- No memory-size accounting (single-item sizes are usually predictable; capping the count is enough)
- Bucketing with an automatically chosen power-of-two bucket count (spreads lock contention; power-of-two masking is faster)
- Keys are of type `string` (extensible; the language's built-in reference semantics save memory)
- No dummy list head (a bit mind-bending to read, but about 20% faster)
- `LRU-2` was chosen to realize `LRU-K` (simple to implement, almost no extra overhead)
- Pointers can be stored directly (no serialization; scenarios forced through `[]byte` lose much of their advantage)
- An internal timer is used for timekeeping (100ms precision by default, calibrated every second; profiling showed time.Now() creates temporary objects that increase GC time)
- The doubly linked list lives in fixed, pre-allocated memory, with deletion marked by zeroing the timestamp, which reduces GC (and uses over 50% less memory than `bigcache` at the same capacity)

#### Failed optimization attempts

- Changing the key from `string` to `reflect.StringHeader`: made things worse
- Changing the mutex to a read-write lock: Get also mutates data, causing access violations; even with Get made read-only, mixed read/write workloads got slower
- Implementing the internal timer with `time.Timer`: it fired unreliably, so the timer now uses `time.Sleep`
- Having the distributed consistency plugin hook an inspector to auto-sync updates and deletes: the performance impact was large, and it needed special handling for call loops

### On GC optimization

- As I said about the [levels of performance optimization](https://github.com/ez8-co/ezpp#性能优化的几个层次) in my C++ profiler, optimizing at a single level is not smart
- "Level three" has a line, "nothing is faster than something that doesn't exist" (a cousin of Occam's razor): if you can cut something, don't try to optimize it
- For example, allocating memory in big blocks to reduce GC while exposing only `[]byte` values implies serialization and copying for the caller (not counted in the library's own benchmarks, but the user still pays for it: GC, memory, CPU)
- If the serialized form could be reused for protocol assembly, achieving `ZeroCopy`, fair enough; but with real-world layering that assembly rarely happens at the protocol layer, whereas `ecache` stores pointers and skips the extra cost entirely
- My point is not that GC optimization is unimportant; it should be weighed against the scenario, including the user's extra cost, rather than advertising gc-free behavior that doesn't hold up in practice
- The "brutalist aesthetics" I believe in is minimalism: defect rates scale with code size, complex things get replaced sooner or later, and `KISS` wins
- `ecache` is under 300 lines in total; with a fixed bugs-per-thousand-lines rate, it cannot have many bugs

## FAQ
> Q: Can one instance store multiple object types?
- A: Sure, just format the key with a prefix (colon-separated, like redis keys); ⚠️ be careful not to mix up the types.

> Q: How do I give different items different expiration times?
- A: Use multiple cache instances. (😄 Bet you didn't see that coming)

> Q: What about extremely hot keys?
- A: A local in-memory cache exists precisely to absorb hot keys; here we mean extremely hot keys (hundreds of thousands of QPS on a single node). Their biggest problem is locking a single bucket too often, which hurts the other data in the same bucket. Two remedies: first, switch to `LRU-2` so scan-like requests don't evict the hot data; second, besides adding buckets, use multiple instances (write the same item into all of them, and read from one chosen e.g. by hashing the visiting user's uid) so the hot key has several replicas. Note that a delete (write-back) must then remove the item from every instance. This fits write-once-read-many (`WORM`) workloads; for write-heavy, read-heavy workloads, split out the changing diff so the rest becomes `WORM` again.

> Q: How do I avoid concurrent cache misses all querying the DB for the same resource?
- A: Use the [sync/singleflight](https://pkg.go.dev/golang.org/x/sync/singleflight) package: concurrent requests for the same resource trigger only one back-end query, protecting the DB from hot-key stampedes.

> Q: Why not use a dummy head for the doubly linked list? So weak!
- A: When [lrucache](http://github.com/orca-zhang/lrucache) surfaced on 2019-04-22, it got flamed for this on V2EX. It's really not that I can't; the current form is harder to read than the pointer-to-pointer style, but it's about 20% faster! (😄 Bet you didn't see that coming)

## Related Articles

- [How to improve Go in-memory cache performance step by step](https://my.oschina.net/u/5577511/blog/5438484)

## Acknowledgements

Thanks to everyone who reviewed the code, caught errors and offered valuable suggestions during development! (in no particular order)

<table>
  <tr>
    <td align="center">
      <a href="https://github.com/askuy">
        <img src="https://avatars.githubusercontent.com/u/14119383?v=4" width="64px;" alt=""/>
        <br />
        <b>askuy</b>
        <br />
        <sub><a href="https://github.com/gotomicro/ego">[ego]</a></sub>
      </a>
    </td>
    <td align="center">
      <a href="https://github.com/auula">
        <img src="https://avatars.githubusercontent.com/u/38412458?v=4" width="64px;" alt=""/>
        <br />
        <b>Leon Ding</b>
        <br />
        <sub><a href="https://mp.weixin.qq.com/mp/profile_ext?action=home&__biz=MzI3MzQwNjcyNg==&scene=124#wechat_redirect">[打码匠]</a></sub>
      </a>
    </td>
    <td align="center">
      <a href="https://github.com/Danceiny">
        <img src="https://avatars.githubusercontent.com/u/9427454?v=4" width="64px;" alt=""/>
        <br />
        <b>黄振</b>
        <br />
        <sub>&nbsp;</sub>
      </a>
    </td>
    <td align="center">
      <a href="https://github.com/IceCream01">
        <img src="https://avatars.githubusercontent.com/u/19547638?v=4" width="64px;" alt=""/>
        <br />
        <b>Ice</b>
        <br />
        <sub>&nbsp;</sub>
      </a>
    </td>
    <td align="center">
      <a href="https://github.com/FishGoddess">
        <img src="https://avatars.githubusercontent.com/u/36259784?v=4" width="64px;" alt=""/>
        <br />
        <b>水不要鱼</b>
        <br />
        <sub><a href="https://github.com/FishGoddess/cachego">[cachego]</a></sub>
      </a>
    </td>
  </tr>
</table>

## Sponsors

Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [[Become a sponsor](https://opencollective.com/ecache#sponsor)]

<a href="https://opencollective.com/ecache/sponsor/0/website" target="_blank"><img src="https://opencollective.com/ecache/sponsor/0/avatar.svg"></a>
<a href="https://opencollective.com/ecache/sponsor/1/website" target="_blank"><img src="https://opencollective.com/ecache/sponsor/1/avatar.svg"></a>
<a href="https://opencollective.com/ecache/sponsor/2/website" target="_blank"><img src="https://opencollective.com/ecache/sponsor/2/avatar.svg"></a>
<a href="https://opencollective.com/ecache/sponsor/3/website" target="_blank"><img src="https://opencollective.com/ecache/sponsor/3/avatar.svg"></a>

## Contributors

This project exists thanks to all the people who contribute.

Please give us a 💖 star 💖 to support us, thank you.

And thank you to all our backers! 🙏

<a href="https://opencollective.com/ecache/backer/0/website?requireActive=false" target="_blank"><img src="https://opencollective.com/ecache/backer/0/avatar.svg?requireActive=false"></a>
<a href="https://opencollective.com/ecache/backer/1/website?requireActive=false" target="_blank"><img src="https://opencollective.com/ecache/backer/1/avatar.svg?requireActive=false"></a>
<a href="https://opencollective.com/ecache/backer/2/website?requireActive=false" target="_blank"><img src="https://opencollective.com/ecache/backer/2/avatar.svg?requireActive=false"></a>
<a href="https://opencollective.com/ecache/backer/3/website?requireActive=false" target="_blank"><img src="https://opencollective.com/ecache/backer/3/avatar.svg?requireActive=false"></a>
<a href="https://opencollective.com/ecache#backers" target="_blank"><img src="https://opencollective.com/ecache/contributors.svg?width=890" /></a>


================================================
FILE: README_en.md
================================================
[Simplified Chinese README | 简体中文说明](README.md)

# 🦄 ecache
<p align="center">
  <a href="#">
    <img src="https://github.com/orca-zhang/ecache/raw/master/doc/logo.svg">
  </a>
</p>

<p align="center">
  <a href="/go.mod#L3" alt="go version">
    <img src="https://img.shields.io/badge/go%20version-%3E=1.11-brightgreen?style=flat"/>
  </a>
  <a href="https://goreportcard.com/badge/github.com/orca-zhang/ecache" alt="goreport">
    <img src="https://goreportcard.com/badge/github.com/orca-zhang/ecache">
  </a>
  <a href="https://orca-zhang.semaphoreci.com/projects/ecache" alt="building status">
    <img src="https://orca-zhang.semaphoreci.com/badges/ecache.svg?style=shields">
  </a>
  <a href="https://codecov.io/gh/orca-zhang/ecache" alt="codecov">
    <img src="https://codecov.io/gh/orca-zhang/ecache/branch/master/graph/badge.svg?token=F6LQbADKkq"/>
  </a>
  <a href="https://github.com/orca-zhang/ecache/blob/master/LICENSE" alt="license MIT">
    <img src="https://img.shields.io/badge/license-MIT-brightgreen.svg?style=flat">
  </a>
  <a href="https://app.fossa.com/projects/git%2Bgithub.com%2Forca-zhang%2Fcache?ref=badge_shield" alt="FOSSA Status">
    <img src="https://app.fossa.com/api/projects/git%2Bgithub.com%2Forca-zhang%2Fcache.svg?type=shield"/>
  </a>
  <a href="https://benchplus.github.io/gocache/dev/bench/" alt="continuous benchmark">
    <img src="https://img.shields.io/badge/benchmark-click--me-brightgreen.svg?style=flat"/>
  </a>
</p>
<p align="center">Extremely easy, ultra fast, concurrency-safe, with support for distributed consistency.</p>

## Features

- 🤏 Less than 300 lines; takes only ~30s to integrate
- 🚀 Extremely easy, ultra fast and concurrency-safe
- 🌈 Supports both `LRU` mode and [`LRU-2`](#LRU-2-mode) mode
- 🦖 [Extra plugin](#Distributed-Consistency-Plugin) that supports distributed consistency

## Benchmarks

> :snail: means very slow, :airplane: means fast, :rocket: means very fast.

> [👁️‍🗨️click me to see cases](https://github.com/benchplus/gocache) [👁️‍🗨️click me to see results](https://benchplus.github.io/gocache/dev/bench/) (the lower the better except cache hit rate)

<table style="text-align: center">
   <tr>
      <td></td>
      <td><a href="https://github.com/allegro/bigcache">bigcache</a></td>
      <td><a href="https://github.com/FishGoddess/cachego">cachego</a></td>
      <td><a href="https://github.com/orca-zhang/ecache"><strong>ecache🌟</strong></a></td>
      <td><a href="https://github.com/coocood/freecache">freecache</a></td>
      <td><a href="https://github.com/bluele/gcache">gcache</a></td>
      <td><a href="https://github.com/patrickmn/go-cache">gocache</a></td>
   </tr>
   <tr>
      <td>PutInt</td>
      <td>:airplane:</td>
      <td></td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>GetInt</td>
      <td>:airplane:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>Put1K</td>
      <td>:airplane:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>Put1M</td>
      <td>:snail:</td>
      <td></td>
      <td>:rocket:</td>
      <td>:snail:</td>
      <td>:airplane:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>PutTinyObject</td>
      <td>:airplane:</td>
      <td></td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
      <td></td>
   </tr>
   <tr>
      <td>ChangeOutAllInt</td>
      <td>:airplane:</td>
      <td></td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>HeavyReadInt</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td></td>
      <td></td>
      <td>:rocket:</td>
   </tr>
   <tr>
      <td>HeavyReadIntGC</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>HeavyWriteInt</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>HeavyWriteIntGC</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
      <td>:airplane:</td>
      <td></td>
      <td></td>
   </tr>
   <tr>
      <td>HeavyWrite1K</td>
      <td>:snail:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>HeavyWrite1KGC</td>
      <td>:snail:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
   </tr>
   <tr>
      <td>HeavyMixedInt</td>
      <td>:rocket:</td>
      <td>:airplane:</td>
      <td>:rocket:</td>
      <td></td>
      <td>:airplane:</td>
      <td>:rocket:</td>
   </tr>
   <tr>
    <td colspan="7">
      <a href="https://github.com/FishGoddess/cachego"><strong>FishGoddess/cachego</strong></a> and <a href="https://github.com/patrickmn/go-cache"><strong>patrickmn/go-cache</strong></a> are simple map-with-expiration implementations, so they have no hit-rate benchmark.
    </td>
   </tr>
   <tr>
    <td colspan="7">
      <a href="https://github.com/kpango/gache"><strong>kpango/gache</strong></a> & <a href="https://github.com/hlts2/gocache"><strong>hlts2/gocache</strong></a> did not perform well, so they were removed from the benchmark list.
    </td>
   </tr>
   <tr>
    <td colspan="7">
      <a href="https://github.com/patrickmn/go-cache"><strong>patrickmn/go-cache</strong></a> uses FIFO mode; the others use LRU mode.
    </td>
   </tr>
</table>

![](https://github.com/orca-zhang/ecache/raw/master/doc/benchmark.png)

> gc pause test result [code provided by `bigcache`](https://github.com/allegro/bigcache-bench) (the lower the better)
![](https://github.com/orca-zhang/ecache/raw/master/doc/gc.png)

### Stability validation in production environments
- [`Confirmed`] Official-account backend (hundreds of QPS): user & order info, configurations.
- [`Confirmed`] Push platform (tens of thousands of QPS): system configurations, deduplication, fixed-info cache such as app info.
- [`Confirmed`] Comment platform (tens of thousands of QPS): user info, and the distributed consistency plugin for user avatars & nicknames.

## How To Use

#### Import Package (almost 5s)
``` go
import (
    "time"

    "github.com/orca-zhang/ecache"
)
```

#### Definition (almost 5s)
> Can be placed in any position (global is also OK), it is recommended to define nearby
``` go
var c = ecache.NewLRUCache(16, 200, 10 * time.Second)
```

#### Put Item (almost 5s)
``` go
c.Put("uid1", o) // `o` can be any variable, generally an object pointer, storing fixed information, such as `*UserInfo`
```

#### Retrieve Item (almost 5s)
``` go
if v, ok := c.Get("uid1"); ok {
    return v.(*UserInfo) // assert the type directly; we control what was stored
}
// if it is not found in the memory cache, fall back to redis/db
```

#### Remove Item (almost 5s)
> when the original info was updated
``` go
c.Del("uid1")
```

#### Download Package (almost 5s)

> non-go modules mode:\
> sh>  ```go get -u github.com/orca-zhang/ecache```

> go modules mode:\
> sh>  ```go mod tidy && go mod download```

#### Fire
> 🎉 Finished. 🚀 Performance boosted X times!\
> sh>  ```go run <your-main.go file>```

## Parameters

- `NewLRUCache`
  - The first parameter is the number of buckets; each bucket uses an independent lock, with a maximum value of 65535 (for 65536 buckets)
    - Don't worry, just set what you want; `ecache` will find a suitable number that is convenient for mask calculation later
  - The second parameter is the number of items each bucket can hold, up to 65535
    - When `ecache` is completely full it holds `first parameter X second parameter` items, up to about 4.2 billion
  - \[`Optional`\] The third parameter is the expiration time of each item
    - `ecache` uses an internal timer to improve performance: 100ms precision by default, calibrated every second
    - Omit it, or pass `0`, to make items never expire

## Best Practices

- Values of any type are supported
  - Three put methods are provided, `Put`/`PutInt64`/`PutBytes`, to be paired with `Get`/`GetInt64`/`GetBytes` (the latter two have lower GC overhead)
  - Prefer storing pointers for complex objects (⚠️ once an item is stored, do not modify its fields, even after reading it back out; it may be accessed concurrently by others)
    - If you need to modify it, either copy the fields out one by one, or [make a deep copy with copier and modify the copy](#need-to-modify-and-store-the-object-pointer)
    - Objects can also be stored by value (slightly slower than storing pointers, because reads copy the value)
    - Cache objects as close to the top of the business stack, and as large, as possible (saves time assembling and organizing data)
- If you don't want scan-like requests to evict hot data, switch to [`LRU-2` mode](#LRU-2-mode) at a small cost (💬 [What Is LRU-2](#What-Is-LRU-2))
  - Sizing `LRU2` at 1/4 and `LRU` at 3/4 of the capacity tends to work well
- One instance can store multiple object types; add a prefix when formatting the key, separated by a colon
- For heavy concurrent access, try `256` or `1024` buckets, or even more
- Can be used as a **buffer queue** to merge updates and reduce disk flushes (when the data can be rebuilt or its loss on power failure is tolerable)
  - [Add an `Inspector`](#inject-an-inspector) to monitor eviction events
  - On shutdown, or periodically, call [`Walk`](#fetch-all-items) to flush the data to storage

## Special Scenarios

### Integer key, integer value, and bytes value
``` go
// integer key
c.Put(strconv.FormatInt(d, 10), o) // d is of type `int64`

// integer value
c.PutInt64("uid1", int64(1))
if d, ok := c.GetInt64("uid1"); ok {
    // d is of type `int64` with value 1
}

// bytes value
c.PutBytes("uid1", b) // b is of type `[]byte`
if b, ok := c.GetBytes("uid1"); ok {
    // b is of type `[]byte`
}
```

### LRU-2 mode

- 💬 [What Is LRU-2](#What-Is-LRU-2)

> Simply chain `.LRU2(<num>)` after `NewLRUCache()`; the parameter `<num>` is the number of items in the `LRU-2` hot queue (per bucket)
``` go
var c = ecache.NewLRUCache(16, 200, 10 * time.Second).LRU2(1024)
```

### Empty cache sentinel (avoid querying the source for non-existent objects)
``` go
// Simply put `nil` as the value
c.Put("uid1", nil)
```

``` go
// Reading is almost the same as usual
if v, ok := c.Get("uid1"); ok {
  if v == nil { // Note: ⚠️ you must check for nil
    return nil  // Return `nil`, or exclude `uid1` from the list of keys to query from the source
  }
  return v.(*UserInfo)
}
// On a cache miss, fall back to querying redis/db
```

### Need to modify, and store the object pointer

> For example, we get the user info cache `v` of type `*UserInfo` from `ecache` and need to modify its status field
``` go
import (
    "github.com/jinzhu/copier"
)
```

``` go
o := &UserInfo{}
copier.Copy(o, v) // Copy from `v` to `o`
o.Status = 1      // Modify the field of the copy
```

### Inject an inspector

``` go
// inspector - can be used to do statistics or buffer queues, etc.
// `action`:PUT, `status`: evicted=-1, updated=0, added=1
// `action`:GET, `status`: miss=0, hit=1
// `action`:DEL, `status`: miss=0, hit=1
// `iface`/`bytes` is not `nil` when `status` is not 0 or `action` is PUT
type inspector func(action int, key string, iface *interface{}, bytes []byte, status int)
```

- How to use
``` go
cache.Inspect(func(action int, key string, iface *interface{}, bytes []byte, status int) {
    // TODO: add what you want to do
    //     Inspectors are executed in their injection order
    //     Note: ⚠️ If an operation takes a long time, hand the job off to another channel so the current goroutine is not blocked

    // - how to fetch the right value -
    //   - `Put`:      `*iface`
    //   - `PutBytes`: `bytes`
    //   - `PutInt64`: `ecache.ToInt64(bytes)`
})
```

### Fetch all items

``` go
  // only valid (not yet expired) items are walked
  cache.Walk(func(key string, iface *interface{}, bytes []byte, expireAt int64) bool {
    // `key` is the item's key, `iface`/`bytes` is its value, `expireAt` is its expiration time

    // - how to fetch the right value -
    //   - `Put`:      `*iface`
    //   - `PutBytes`: `bytes`
    //   - `PutInt64`: `ecache.ToInt64(bytes)`
    return true // return true to keep walking
  })
```

## Cache Usage Statistics

> The implementation is super simple. After the inspector is injected, only one more atomic operation is added to each operation. See [details](/stats/stats.go#L34).

#### Import the `stats` package
``` go
import (
    "github.com/orca-zhang/ecache/stats"
)
```

#### Bind the cache instance
> The name is a custom pool name, which will be aggregated by name internally.\
> Note:⚠️ The binding can be placed in global scope.
``` go
var _ = stats.Bind("user", c)
var _ = stats.Bind("user", c0, c1, c2)
var _ = stats.Bind("token", caches...)
```

#### Get statistics
``` go
stats.Stats().Range(func(k, v interface{}) bool {
    fmt.Printf("stats: %s %+v\n", k, v) // k is the pool name, v is of type `*stats.StatsNode`
    // StatsNode stores event counts; use its `HitRate` method to get the cache hit rate
    return true
})
```

## Distributed Consistency Plugin

- 💬 [Principle Explanation](#Principle-of-Distributed-Consistency-Plugin)

### Import the `dist` package
``` go
import (
    "github.com/orca-zhang/ecache/dist"
)
```

### Bind cache instance
> The name is a custom pool name, which will be aggregated by name internally.\
> Note:⚠️ The binding can be placed in global scope and does not depend on initialization.
``` go
var _ = dist.Bind("user", c)
var _ = dist.Bind("user", c0, c1, c2)
var _ = dist.Bind("token", caches...)
```

### Bind redis client
> Currently `redigo` and `goredis` are supported; for other libraries you can implement the `dist.RedisCli` interface yourself, or file an issue.

#### go-redis v7 and below
``` go
import (
    "github.com/orca-zhang/ecache/dist/goredis/v7"
)

dist.Init(goredis.Take(redisCli)) // redisCli is of type *redis.Client
dist.Init(goredis.Take(redisCli, 100000)) // The second parameter is the channel buffer size, default 100 if not passed
```

#### go-redis v8 and above
``` go
import (
    "github.com/orca-zhang/ecache/dist/goredis"
)

dist.Init(goredis.Take(redisCli)) // redisCli is of type *redis.Client
dist.Init(goredis.Take(redisCli, 100000)) // The second parameter is the channel buffer size, default 100 if not passed
```

#### redigo
> Note:⚠️ `github.com/gomodule/redigo` requires minimum version `go 1.14`
``` go
import (
    "github.com/orca-zhang/ecache/dist/redigo"
)

dist.Init(redigo.Take(pool)) // pool is of *redis.Pool type
```

#### Proactively notify all nodes and all instances to delete an item (including the local machine)
> Call this when the data in the db changes or is deleted.\
> On error (e.g. not initialized, or a network error), it degrades to a local-only operation.
``` go
dist.OnDel("user", "uid1") // "user" is the pool name, "uid1" is the key to delete
```

## Update guide for old [`lrucache`](http://github.com/orca-zhang/lrucache) fans

- Only four steps:
1. Import `github.com/orca-zhang/ecache` instead of `github.com/orca-zhang/lrucache`
2. Use `ecache.NewLRUCache` instead of `lrucache.NewSyncCache`
3. Give the third parameter a unit, e.g. `* time.Second`
4. Replace the `Delete` method with `Del`

# You won't leave empty-handed

- Dear reader, let's learn something before you leave!
- I want to do my best to help you understand what `ecache` does and why.

## What is local memory cache

---
    L1 cache reference ......................... 0.5 ns
    Branch mispredict ............................ 5 ns
    L2 cache reference ........................... 7 ns
    Mutex lock/unlock ........................... 25 ns
    Main memory reference ...................... 100 ns
    Compress 1K bytes with Zippy ............. 3,000 ns  =   3 µs
    Send 2K bytes over 1 Gbps network ....... 20,000 ns  =  20 µs
    Read 1 MB sequentially from memory ..... 250,000 ns  = 250 µs
    Round trip within same datacenter ...... 500,000 ns  = 0.5 ms
    Send packet CA<->Netherlands ....... 150,000,000 ns  = 150 ms

- As the table above shows, the gap between a memory access and a network round trip (within the same data center) is roughly four orders of magnitude!
- More than one engineer has told me: "Cache? Use redis." But redis is no panacea; to some extent it can still be a nightmare (I am, of course, talking about cache consistency issues... 😄)
- Memory operations are so fast that their cost is negligible next to redis/db. For example, take a query API serving 1000 QPS: if we cache the result for 1 second, redis/db is not queried again within that second, so source queries drop to 1/1000 (ideally), and the redis/db portion of the work becomes 1000× cheaper. Doesn't that sound great?
- Keep reading and you will fall in love with her! (Or him, or it, haha)

### Use Scenarios(problems to be solved)

- High concurrency and large traffic scenarios
   - Cache hotspot data (such as live broadcast rooms with high popularity)
   - Sudden QPS peak clipping (such as breaking news in the information stream)
   - Reduce latency and congestion (such as frequently visited pages in a short period of time)
- Cut costs
   - Stand-alone scenarios (quickly raise QPS without deploying redis or memcached)
   - Downgrade protection for redis and db instances (intercepts most requests)
- Persistent or semi-persistent data (write less, read more)
   - For example configuration data (it is used in many places, so there is an amplification effect; such hot config keys can force unnecessary capacity upgrades of redis/db instances)
- Data that tolerates inconsistency
   - Such as user avatars, nicknames, product inventory (actual orders are re-checked in the db), etc.
   - Modified configuration (with a 10-second expiration, changes take effect with at most a 10-second delay)
- Buffer queue: merge updates to reduce disk flushes
   - Strong consistency can be achieved by patching queries with the cache diff (in a distributed deployment, the load balancer must route the same user/device to the same node)
   - Suitable when data can be rebuilt or loss on power failure is tolerable

## Design Ideas

> `ecache` is an upgraded version of the [`lrucache`](http://github.com/orca-zhang/lrucache) library

- The bottom layer is the most basic `LRU`, implemented with a native map and a doubly-linked list (evicting the longest-unvisited item)
   - PS: All the leetcode versions I implemented ([go](https://github.com/orca-zhang/lrucache) / [c++](https://github.com/ez8-co/linked_hash) / [js](https://github.com/orca-zhang/ecache.js)) are solutions that beat 100% of submissions.
- The second layer adds the bucketing strategy, concurrency control, and expiration control (the bucket count automatically adapts to a power of two to enable mask calculation)
- Layer "2.5" implements the `LRU-2` ability in a very simple way; the code is under 20 lines, so just read the source (search for the keyword `LRU-2`)

### What is LRU

- Evict the least recently used item first.
- Each access refreshes the item to the head of the queue.
- When a new item is put into a full queue, the last item in the queue, i.e. the one unvisited for the longest time, is evicted.

### What Is LRU-2

- `LRU-K` keeps items with fewer than K visits in a separate `LRU` queue, and items with K or more visits in an additional queue
- The target scenario: some traversal queries would otherwise evict hot items that we will need later.
- For simplicity, what is implemented here is `LRU-2`: an item moves to the hot queue on its second visit, and visit counts are not recorded.
- It is used to optimize the cache hit rate for hot keys.
- Very similar to [mysql's buffer pool LRU algorithm](https://dev.mysql.com/doc/refman/5.7/en/innodb-buffer-pool.html).

### Principle of Distributed Consistency Plugin

- It simply uses redis's pubsub feature
- When cached information is updated, the change is proactively broadcast to all nodes
- In a sense it only narrows the inconsistency window (network delay exists and delivery is not guaranteed)
- Pay attention: ⚠️
   - Use it sparingly; it suits write-less-read-more `WORM (Write-Once-Read-Many)` scenarios
     - redis is slower than memory after all, and broadcasting means write amplification
   - The following scenarios degrade (the inconsistency window grows), but strong consistency on the current node is still guaranteed:
     - redis is unavailable, or there is a network error
     - The consuming goroutine panics
     - Not all nodes are ready (`canary` deployment, or mid-rollout), for example
       - `ecache` was already in use and this plugin was just added
       - Newly added cached data or a newly added delete operation

### About performance

- No `defer` is needed to release locks.
- No asynchronous cleanup (cleanup is pointless; it is more reasonable to spread eviction across writes, and doing so avoids GC thrashing).
- No memory-size accounting (a single item's size can usually be estimated, so simply limit the count).
- Bucketing strategy with automatic power-of-two bucket counts (less lock contention, and a power-of-two mask operation is faster).
- String keys (highly extensible; the language's built-in reference semantics save memory).
- No virtual head node for the doubly-linked list (a bit more convoluted, but roughly 20% faster).
- `LRU-2` chosen to implement `LRU-K` (simple, with almost no extra overhead).
- Pointers stored directly (no serialization; libraries that store `[]byte` lose much of their advantage in some scenarios).
- Internal counter for timing (100ms accuracy by default, calibrated every second; `pprof` showed `time.Now()` creates temporary objects, increasing GC cost).
- Doubly-linked list stored in fixed pre-allocated memory, with a zero timestamp marking deletion, which reduces GC (and saves over 50% memory versus `bigcache` at the same specification).

#### Failed optimization attempts

- Changing the key from string to `reflect.StringHeader`: negative optimization.
- Changing the mutex to a read-write lock: a Get also modifies data, so it cannot legally take only a read lock even when the data is unchanged; negative optimization for mixed read-write scenarios.
- Implementing the internal counter with `time.Timer`: the trigger was unstable, so `time.Sleep` is used instead.
- Driving the distributed consistency plugin's updates and deletes automatically from the inspector: performance decreased, and the loop-call problem needed special handling.

### About GC optimization

- As I mentioned in [several levels of performance optimization](https://github.com/ez8-co/ezpp#性能优化的几个层次) for the C++ version of the profiler, considering only one level is not good.
- The third level says "nothing is faster than nothing" (akin to Occam's razor): if you can remove something, don't bother optimizing it.
- For example, some libraries reduce GC by allocating large blocks of memory but only store `[]byte` values, which may require extra serialization and copying.
- If the serialized form can be reused at the protocol layer so that `ZeroCopy` is achieved, fine; but very often it cannot be reused there, so `ecache` stores pointers directly and omits that overhead.
- My point is that GC optimization really matters, but it must fit the scenario and account for the extra overhead on the caller side, rather than claiming "gc-free" while reality says otherwise.
- The violent aesthetics I advocate is minimalism: defect rate is proportional to code size, complex things get eliminated sooner or later, and `KISS` is the true king.
- `ecache` is under 300 lines in total, so at a typical bugs-per-thousand-lines rate, there cannot be many bugs in it.

## FAQ
> Q: Can one instance store multiple kinds of objects?
- A: Yes. For example, format the key with a prefix (like redis keys separated by a colon); just be careful ⚠️ not to read back the wrong type.

> Q: How to set different expiration times for different items?
- A: Use several cache instances. (😄did not expect?)

> Q: How to handle a very-very-very hot key?
- A: A [local memory cache] already exists to serve hot keys, so a "very-very-very hot key" here means hundreds of thousands of QPS on a single node; the biggest problem is lock contention on a single bucket, which affects other data in the same bucket. Do this: first, use `LRU-2` so traversal-like requests cannot flush the hot data. Second, besides adding buckets, you can write to multiple instances (writing the same item to each) and read from one of them (e.g. chosen by hashing the visiting user's uid), giving the hot key multiple copies. When deleting (or writing back), be careful to delete from all instances; this fits the write-less-read-more `WORM (Write-Once-Read-Many)` scenario. A write-more-read-more scenario can extract the diff separately and turn itself into a `WORM` scenario.

> Q: How to prevent concurrent requests to the db for the same resource?
- A: Use [singleflight](https://pkg.go.dev/golang.org/x/sync/singleflight).

> Q: Why not implement the doubly-linked list with a virtual head node? The current way is bullshxt!
- A: The leaked code [[lrucache](http://github.com/orca-zhang/lrucache)] was challenged on V2EX on 2019-04-22. It's really not that I don't know about virtual head nodes: although the pointer-to-pointer approach is harder to read, it is about 20% faster! (😄 didn't expect that?)

## Related Docs

- [How to improve performance of `ecache` step by step](https://my.oschina.net/u/5577511/blog/5438484)

## Thanks

Gratitude to everyone who performed code reviews, reported errata, and gave valuable suggestions during development! (names in no particular order)

<table>
  <tr>
    <td align="center">
      <a href="https://github.com/askuy">
        <img src="https://avatars.githubusercontent.com/u/14119383?v=4" width="64px;" alt=""/>
        <br />
        <b>askuy</b>
        <br />
        <sub><a href="https://github.com/gotomicro/ego">[ego]</a></sub>
      </a>
    </td>
    <td align="center">
      <a href="https://github.com/auula">
        <img src="https://avatars.githubusercontent.com/u/38412458?v=4" width="64px;" alt=""/>
        <br />
        <b>auula</b>
        <br />
        <sub><a href="https://mp.weixin.qq.com/mp/profile_ext?action=home&__biz=MzI3MzQwNjcyNg==&scene=124#wechat_redirect">[CodingSauce]</a></sub>
      </a>
    </td>
    <td align="center">
      <a href="https://github.com/Danceiny">
        <img src="https://avatars.githubusercontent.com/u/9427454?v=4" width="64px;" alt=""/>
        <br />
        <b>Danceiny</b>
        <br />
        <sub>&nbsp;</sub>
      </a>
    </td>
    <td align="center">
      <a href="https://github.com/IceCream01">
        <img src="https://avatars.githubusercontent.com/u/19547638?v=4" width="64px;" alt=""/>
        <br />
        <b>Ice</b>
        <br />
        <sub>&nbsp;</sub>
      </a>
    </td>
    <td align="center">
      <a href="https://github.com/FishGoddess">
        <img src="https://avatars.githubusercontent.com/u/36259784?v=4" width="64px;" alt=""/>
        <br />
        <b>FishGoddess</b>
        <br />
        <sub><a href="https://github.com/FishGoddess/cachego">[cachego]</a></sub>
      </a>
    </td>
  </tr>
</table>

## Sponsors

Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [[Become a sponsor](https://opencollective.com/ecache#sponsor)]

<a href="https://opencollective.com/ecache/sponsor/0/website" target="_blank"><img src="https://opencollective.com/ecache/sponsor/0/avatar.svg"></a>
<a href="https://opencollective.com/ecache/sponsor/1/website" target="_blank"><img src="https://opencollective.com/ecache/sponsor/1/avatar.svg"></a>
<a href="https://opencollective.com/ecache/sponsor/2/website" target="_blank"><img src="https://opencollective.com/ecache/sponsor/2/avatar.svg"></a>
<a href="https://opencollective.com/ecache/sponsor/3/website" target="_blank"><img src="https://opencollective.com/ecache/sponsor/3/avatar.svg"></a>

## Contributors

This project exists thanks to all the people who contribute.

Please give us a 💖 star 💖 to support us. Thank you.

And thank you to all our backers! 🙏

<a href="https://opencollective.com/ecache/backer/0/website?requireActive=false" target="_blank"><img src="https://opencollective.com/ecache/backer/0/avatar.svg?requireActive=false"></a>
<a href="https://opencollective.com/ecache/backer/1/website?requireActive=false" target="_blank"><img src="https://opencollective.com/ecache/backer/1/avatar.svg?requireActive=false"></a>
<a href="https://opencollective.com/ecache/backer/2/website?requireActive=false" target="_blank"><img src="https://opencollective.com/ecache/backer/2/avatar.svg?requireActive=false"></a>
<a href="https://opencollective.com/ecache/backer/3/website?requireActive=false" target="_blank"><img src="https://opencollective.com/ecache/backer/3/avatar.svg?requireActive=false"></a>
<a href="https://opencollective.com/ecache#backers" target="_blank"><img src="https://opencollective.com/ecache/contributors.svg?width=890" /></a>


================================================
FILE: dist/dist.go
================================================
package dist

import (
	"log"
	"runtime/debug"
	"strings"
	"sync"
	"time"

	"github.com/orca-zhang/ecache"
)

const topic = "orca-zhang/ecache"

// RedisCli is the redis client interface used by the `dist` component
type RedisCli interface {
	// if the redis client is ready
	OK() bool
	// pub a payload to channel
	Pub(channel, payload string) error
	// sub a payload from channel; the callback will tidy the local cache
	Sub(channel string, callback func(payload string)) error
}

var redisCli RedisCli
var m sync.Map

func delAll(pool, key string) {
	if caches, _ := m.Load(pool); caches != nil {
		for _, c := range *(caches.(*[]*ecache.Cache)) {
			c.Del(key)
		}
	}
}

// Init `dist` component with redis client
func Init(r RedisCli) {
	if redisCli != r {
		redisCli = r
		go func() {
			defer func() {
				if err := recover(); err != nil {
					log.Println(err)
					debug.PrintStack()
				}
			}()

			for {
				for r == nil || !r.OK() {
					time.Sleep(10 * time.Millisecond)
				}
				_ = r.Sub(topic, func(payload string) {
					vs := strings.Split(payload, ":")
					if len(vs) >= 2 {
						delAll(vs[0], vs[1])
					}
				})
			}
		}()
	}
}

// Bind - to enable distributed consistency
// `pool` is optional; it can be used to classify instances that store the same items, and it is more efficient when non-empty
// `caches` are the cache instances to be bound
func Bind(pool string, caches ...*ecache.Cache) error {
	c, _ := m.LoadOrStore(pool, &[]*ecache.Cache{})
	*(c.(*[]*ecache.Cache)) = append(*(c.(*[]*ecache.Cache)), caches...)
	return nil
}

// OnDel - delete `key` in `pool` at distributed scale
func OnDel(pool, key string) error {
	// pub to remote nodes
	r := redisCli
	if r != nil && r.Pub(topic, strings.Join([]string{pool, key}, ":")) == nil {
		return nil
	}
	delAll(pool, key)
	return nil
}


================================================
FILE: dist/dist_test.go
================================================
package dist

import (
	// "sync"
	"testing"
	"time"

	"github.com/orca-zhang/ecache"
)

type DIYCli struct {
	ok *bool
	c  chan string
}

// if the redis client is ready
func (d *DIYCli) OK() bool {
	return d.ok != nil && *d.ok
}

// pub a payload to channel
func (d *DIYCli) Pub(channel, payload string) error {
	d.c <- payload
	return nil
}

// sub a payload from channel; the callback will tidy the local cache
func (d *DIYCli) Sub(channel string, callback func(payload string)) error {
	for {
		if payload, ok := <-d.c; ok {
			callback(payload)
		} else {
			break
		}
	}
	return nil
}

func Take(ok *bool) RedisCli {
	return &DIYCli{
		ok: ok,
		c:  make(chan string, 100),
	}
}

func TestBind(t *testing.T) {
	lc1 := ecache.NewLRUCache(1, 100, 10*time.Second)
	lc2 := ecache.NewLRUCache(1, 100, 10*time.Second)
	lc1.Put("1", "1")
	lc2.Put("1", "1")
	lc1.Put("2", "1")
	lc2.Put("2", "1")
	lc1.Put("3", "1")
	lc2.Put("3", "1")

	// bind them into a pool
	Bind("lc", lc1)
	Bind("lc", lc2)

	time.Sleep(3 * time.Second)

	// try to del an item
	OnDel("lc", "1")

	time.Sleep(3 * time.Second)

	if _, ok := lc1.Get("1"); ok {
		t.Error("case 1 failed")
	}
	if _, ok := lc2.Get("1"); ok {
		t.Error("case 1 failed")
	}
}

func TestInit(t *testing.T) {
	// nil Init
	Init(nil)

	// is OK
	OnDel("lc", "1")
}

func TestDIYClient(t *testing.T) {
	ok := false

	Init(Take(&ok))

	time.Sleep(3 * time.Second)

	// mark ready
	ok = true

	// is OK
	OnDel("lc", "1")

	time.Sleep(3 * time.Second)

	lc1 := ecache.NewLRUCache(1, 100, 10*time.Second)
	lc1.Put("1", "1")

	if _, ok := lc1.Get("1"); !ok {
		t.Error("case 2 failed")
	}

	// bind them into a pool
	Bind("lc", lc1)
	OnDel("lc", "1")

	time.Sleep(3 * time.Second)

	if _, ok := lc1.Get("1"); ok {
		t.Error("case 2 failed")
	}
}

type PanicCli struct {
}

// if the redis client is ready
func (d *PanicCli) OK() bool {
	return true
}

// pub a payload to channel
func (d *PanicCli) Pub(channel, payload string) error {
	return nil
}

// sub a payload from channel; the callback will tidy the local cache
func (d *PanicCli) Sub(channel string, callback func(payload string)) error {
	panic("test panic client")
}

func TestPanicClient(t *testing.T) {
	Init(&PanicCli{})

	time.Sleep(3 * time.Second)
}


================================================
FILE: dist/goredis/goredis.go
================================================
package goredis

import (
	"context"

	"github.com/go-redis/redis/v8"
	"github.com/orca-zhang/ecache/dist"
)

type GoRedisCli struct {
	ctx      context.Context
	redisCli *redis.Client
	chanSize int
}

// if the redis client is ready
func (g *GoRedisCli) OK() bool {
	_, err := g.redisCli.Ping(g.ctx).Result()
	return err == nil
}

// pub a payload to channel
func (g *GoRedisCli) Pub(channel, payload string) error {
	_, err := g.redisCli.Publish(g.ctx, channel, payload).Result()
	return err
}

// sub a payload from channel; the callback will tidy the local cache
func (g *GoRedisCli) Sub(channel string, callback func(payload string)) error {
	msgChan := g.redisCli.Subscribe(g.ctx, channel).ChannelSize(g.chanSize)
	for {
		select {
		case msg, ok := <-msgChan:
			if !ok {
				return nil
			}
			callback(msg.Payload)
		default:
		}
	}
}

func Take(r *redis.Client, size ...int) dist.RedisCli {
	s := 100 // default 100 messages
	if len(size) > 0 {
		s = size[0]
	}
	return &GoRedisCli{
		ctx:      context.TODO(),
		redisCli: r,
		chanSize: s,
	}
}


================================================
FILE: dist/goredis/goredis_test.go
================================================
package goredis

import (
	// "sync"
	"testing"
	"time"

	"github.com/go-redis/redis/v8"
	"github.com/orca-zhang/ecache"
	"github.com/orca-zhang/ecache/dist"
)

var rdb *redis.Client

func init() {
	rdb = redis.NewClient(&redis.Options{
		Addr:         ":6379",
		DialTimeout:  10 * time.Second,
		ReadTimeout:  30 * time.Second,
		WriteTimeout: 30 * time.Second,
		PoolSize:     10,
		PoolTimeout:  30 * time.Second,
	})
}

func TestBind(t *testing.T) {
	dist.Init(Take(rdb, 10000))
	lc1 := ecache.NewLRUCache(1, 100, 10*time.Second)
	lc2 := ecache.NewLRUCache(1, 100, 10*time.Second)
	lc1.Put("1", "1")
	lc2.Put("1", "1")
	lc1.Put("2", "1")
	lc2.Put("2", "1")
	lc1.Put("3", "1")
	lc2.Put("3", "1")

	// bind them into a pool
	dist.Bind("lc", lc1, lc2)

	time.Sleep(3 * time.Second)

	// try to del an item
	dist.OnDel("lc", "1")

	time.Sleep(3 * time.Second)

	if _, ok := lc1.Get("1"); ok {
		t.Error("case 1 failed")
	}
	if _, ok := lc2.Get("1"); ok {
		t.Error("case 1 failed")
	}
}

func TestDisconnect(t *testing.T) {
	dist.Init(Take(rdb, 10000))
	rdb.Close()

	time.Sleep(5 * time.Second)
}

/*
func TestConcurrent(t *testing.T) {
	lc := ecache.NewLRUCache(4, 1, 2*time.Second).LRU2(1)
	dist.Bind("lc", lc)
	var wg sync.WaitGroup
	for index := 0; index < 10000; index++ {
		wg.Add(2)
		go func() {
			lc.Put("1", "2")
			wg.Done()
		}()
		go func() {
			lc.Get("1")
			wg.Done()
		}()
	}
	for index := 0; index < 100; index++ {
		wg.Add(1)
		go func() {
			time.Sleep(50 * time.Millisecond)
			dist.OnDel("lc", "1")
			wg.Done()
		}()
	}
	wg.Wait()
}*/


================================================
FILE: dist/goredis/v7/goredis.go
================================================
package goredis

import (
	"github.com/go-redis/redis/v7"
	"github.com/orca-zhang/ecache/dist"
)

type GoRedisCli struct {
	redisCli *redis.Client
	chanSize int
}

// if the redis client is ready
func (g *GoRedisCli) OK() bool {
	_, err := g.redisCli.Ping().Result()
	return err == nil
}

// pub a payload to channel
func (g *GoRedisCli) Pub(channel, payload string) error {
	_, err := g.redisCli.Publish(channel, payload).Result()
	return err
}

// sub a payload from channel; the callback will tidy the local cache
func (g *GoRedisCli) Sub(channel string, callback func(payload string)) error {
	msgChan := g.redisCli.Subscribe(channel).ChannelSize(g.chanSize)
	for {
		select {
		case msg, ok := <-msgChan:
			if !ok {
				return nil
			}
			callback(msg.Payload)
		default:
		}
	}
}

func Take(r *redis.Client, size ...int) dist.RedisCli {
	s := 100 // default 100 messages
	if len(size) > 0 {
		s = size[0]
	}
	return &GoRedisCli{
		redisCli: r,
		chanSize: s,
	}
}


================================================
FILE: dist/goredis/v7/goredis_test.go
================================================
package goredis

import (
	// "sync"
	"testing"
	"time"

	"github.com/go-redis/redis/v7"
	"github.com/orca-zhang/ecache"
	"github.com/orca-zhang/ecache/dist"
)

var rdb *redis.Client

func init() {
	rdb = redis.NewClient(&redis.Options{
		Addr:         ":6379",
		DialTimeout:  10 * time.Second,
		ReadTimeout:  30 * time.Second,
		WriteTimeout: 30 * time.Second,
		PoolSize:     10,
		PoolTimeout:  30 * time.Second,
	})
}

func TestBind(t *testing.T) {
	dist.Init(Take(rdb, 10000))
	lc1 := ecache.NewLRUCache(1, 100, 10*time.Second)
	lc2 := ecache.NewLRUCache(1, 100, 10*time.Second)
	lc1.Put("1", "1")
	lc2.Put("1", "1")
	lc1.Put("2", "1")
	lc2.Put("2", "1")
	lc1.Put("3", "1")
	lc2.Put("3", "1")

	// bind them into a pool
	dist.Bind("lc", lc1, lc2)

	time.Sleep(3 * time.Second)

	// try to del an item
	dist.OnDel("lc", "1")

	time.Sleep(3 * time.Second)

	if _, ok := lc1.Get("1"); ok {
		t.Error("case 1 failed")
	}
	if _, ok := lc2.Get("1"); ok {
		t.Error("case 1 failed")
	}
}

func TestDisconnect(t *testing.T) {
	dist.Init(Take(rdb, 10000))
	rdb.Close()

	time.Sleep(5 * time.Second)
}

/*
func TestConcurrent(t *testing.T) {
	lc := ecache.NewLRUCache(4, 1, 2*time.Second).LRU2(1)
	dist.Bind("lc", lc)
	var wg sync.WaitGroup
	for index := 0; index < 10000; index++ {
		wg.Add(2)
		go func() {
			lc.Put("1", "2")
			wg.Done()
		}()
		go func() {
			lc.Get("1")
			wg.Done()
		}()
	}
	for index := 0; index < 100; index++ {
		wg.Add(1)
		go func() {
			time.Sleep(50 * time.Millisecond)
			dist.OnDel("lc", "1")
			wg.Done()
		}()
	}
	wg.Wait()
}*/


================================================
FILE: dist/redigo/redigo.go
================================================
package redigo

import (
	"github.com/gomodule/redigo/redis"
	"github.com/orca-zhang/ecache/dist"
)

type RedigoCli struct {
	p *redis.Pool
}

// if the redis client is ready
func (g *RedigoCli) OK() bool {
	conn := g.p.Get()
	defer conn.Close()

	_, err := conn.Do("PING")
	return err == nil
}

// pub a payload to channel
func (g *RedigoCli) Pub(channel, payload string) error {
	conn := g.p.Get()
	defer conn.Close()

	_, err := conn.Do("PUBLISH", channel, payload)
	return err
}

// sub a payload from channel; the callback will tidy the local cache
func (g *RedigoCli) Sub(channel string, callback func(payload string)) error {
	conn := g.p.Get()
	defer conn.Close()

	psc := redis.PubSubConn{Conn: conn}
	_ = psc.Subscribe(channel)

	for {
		switch msg := psc.Receive().(type) {
		case error:
			return msg
		case redis.Message:
			callback(string(msg.Data))
		}
	}
}

func Take(r *redis.Pool) dist.RedisCli {
	return &RedigoCli{p: r}
}


================================================
FILE: dist/redigo/redigo_test.go
================================================
package redigo

import (
	// "sync"
	"testing"
	"time"

	"github.com/gomodule/redigo/redis"
	"github.com/orca-zhang/ecache"
	"github.com/orca-zhang/ecache/dist"
)

var pool *redis.Pool

func init() {
	pool = &redis.Pool{
		// Other pool configuration not shown in this example.
		Dial: func() (redis.Conn, error) {
			c, err := redis.Dial("tcp", ":6379")
			if err != nil {
				return nil, err
			}
			return c, nil
		},
	}
}

func TestBind(t *testing.T) {
	dist.Init(Take(pool))
	lc1 := ecache.NewLRUCache(1, 100, 10*time.Second)
	lc2 := ecache.NewLRUCache(1, 100, 10*time.Second)
	lc1.Put("1", "1")
	lc2.Put("1", "1")
	lc1.Put("2", "1")
	lc2.Put("2", "1")
	lc1.Put("3", "1")
	lc2.Put("3", "1")

	// bind them into a pool
	dist.Bind("lc", lc1)
	dist.Bind("lc", lc2)

	time.Sleep(3 * time.Second)

	// try to delete an item
	dist.OnDel("lc", "1")

	time.Sleep(3 * time.Second)

	if _, ok := lc1.Get("1"); ok {
		t.Error("case 1 failed")
	}
	if _, ok := lc2.Get("1"); ok {
		t.Error("case 1 failed")
	}
}

func TestDisconnect(t *testing.T) {
	dist.Init(Take(pool))
	pool.Close()

	time.Sleep(5 * time.Second)
}

/*
func TestConcurrent(t *testing.T) {
	lc := ecache.NewLRUCache(4, 1, 2*time.Second).LRU2(1)
	dist.Bind("lc", lc)
	var wg sync.WaitGroup
	for index := 0; index < 10000; index++ {
		wg.Add(2)
		go func() {
			lc.Put("1", "2")
			wg.Done()
		}()
		go func() {
			lc.Get("1")
			wg.Done()
		}()
	}
	for index := 0; index < 100; index++ {
		wg.Add(1)
		go func() {
			time.Sleep(50 * time.Millisecond)
			dist.OnDel("lc", "1")
			wg.Done()
		}()
	}
	wg.Wait()
}*/


================================================
FILE: ecache.go
================================================
package ecache

import (
	"encoding/binary"
	"sync"
	"sync/atomic"
	"time"
)

var clock, p, n = time.Now().UnixNano(), uint16(0), uint16(1)

func now() int64 { return atomic.LoadInt64(&clock) }
func init() {
	go func() { // internal counter that reduces GC pressure caused by `time.Now()`
		for {
			atomic.StoreInt64(&clock, time.Now().UnixNano()) // calibration every second
			for i := 0; i < 9; i++ {
				time.Sleep(100 * time.Millisecond)
				atomic.AddInt64(&clock, int64(100*time.Millisecond))
			}
			time.Sleep(100 * time.Millisecond)
		}
	}()
}

func hashBKRD(s string) (hash int32) {
	for i := 0; i < len(s); i++ {
		hash = hash*131 + int32(s[i])
	}
	return hash
}

func maskOfNextPowOf2(cap uint16) uint32 {
	if cap > 0 && cap&(cap-1) == 0 {
		return uint32(cap - 1)
	}
	cap |= (cap >> 1)
	cap |= (cap >> 2)
	cap |= (cap >> 4)
	return uint32(cap | (cap >> 8))
}

type value struct {
	i *interface{} // interface
	b []byte       // bytes
}

type node struct {
	k        string
	v        value
	expireAt int64 // nano timestamp, expireAt=0 if marked as deleted, `createdAt`=`expireAt`-`expiration`
}

type cache struct {
	dlnk [][2]uint16       // double link list, 0 for prev, 1 for next, the first node stands for [tail, head]
	m    []node            // memory pre-allocated
	hmap map[string]uint16 // key -> idx in []node
	last uint16            // last element index when not full
}

func create(cap uint32) *cache {
	return &cache{make([][2]uint16, cap+1), make([]node, cap), make(map[string]uint16, cap), 0}
}

// put a cache item into the lru cache; returns 1 if added, 0 if updated
func (c *cache) put(k string, i *interface{}, b []byte, expireAt int64, on inspector) int {
	if x, ok := c.hmap[k]; ok {
		c.m[x-1].v.i, c.m[x-1].v.b, c.m[x-1].expireAt = i, b, expireAt
		c.adjust(x, p, n) // refresh to head
		return 0
	}

	if c.last == uint16(cap(c.m)) {
		tail := &c.m[c.dlnk[0][p]-1]
		if (*tail).expireAt > 0 { // do not notify for mark-deleted items
			on(PUT, (*tail).k, (*tail).v.i, (*tail).v.b, -1)
		}
		delete(c.hmap, (*tail).k)
		c.hmap[k], (*tail).k, (*tail).v.i, (*tail).v.b, (*tail).expireAt = c.dlnk[0][p], k, i, b, expireAt // reuse to reduce gc
		c.adjust(c.dlnk[0][p], p, n)                                                                       // refresh to head
		return 1
	}

	c.last++
	if len(c.hmap) <= 0 {
		c.dlnk[0][p] = c.last
	} else {
		c.dlnk[c.dlnk[0][n]][p] = c.last
	}
	c.m[c.last-1].k, c.m[c.last-1].v.i, c.m[c.last-1].v.b, c.m[c.last-1].expireAt, c.dlnk[c.last], c.hmap[k], c.dlnk[0][n] = k, i, b, expireAt, [2]uint16{0, c.dlnk[0][n]}, c.last, c.last
	return 1
}

// get value of key from lru cache with result
func (c *cache) get(k string) (*node, int) {
	if x, ok := c.hmap[k]; ok {
		c.adjust(x, p, n) // refresh to head
		return &c.m[x-1], 1
	}
	return nil, 0
}

// delete item by key from lru cache
func (c *cache) del(k string) (_ *node, _ int, e int64) {
	if x, ok := c.hmap[k]; ok && c.m[x-1].expireAt > 0 {
		c.m[x-1].expireAt, e = 0, c.m[x-1].expireAt // mark as deleted
		c.adjust(x, n, p)                           // sink to tail
		return &c.m[x-1], 1, e
	}
	return nil, 0, 0
}

// calls walker sequentially for each valid item in the lru cache
func (c *cache) walk(walker func(key string, iface *interface{}, bytes []byte, expireAt int64) bool) {
	for idx := c.dlnk[0][n]; idx != 0; idx = c.dlnk[idx][n] {
		if c.m[idx-1].expireAt > 0 && !walker(c.m[idx-1].k, c.m[idx-1].v.i, c.m[idx-1].v.b, c.m[idx-1].expireAt) {
			return
		}
	}
}

// when f=0, t=1, move to head, otherwise to tail
func (c *cache) adjust(idx, f, t uint16) {
	if c.dlnk[idx][f] != 0 { // f=0, t=1, not head node, otherwise not tail
		c.dlnk[c.dlnk[idx][t]][f], c.dlnk[c.dlnk[idx][f]][t], c.dlnk[idx][f], c.dlnk[idx][t], c.dlnk[c.dlnk[0][t]][f], c.dlnk[0][t] = c.dlnk[idx][f], c.dlnk[idx][t], 0, c.dlnk[0][t], idx, idx
	}
}

// Cache - concurrent cache structure
type Cache struct {
	locks      []sync.Mutex
	insts      [][2]*cache // level-0 for normal LRU, level-1 for LRU-2
	expiration time.Duration
	on         inspector
	mask       int32
}

// NewLRUCache - create lru cache
// `bucketCnt` is the number of buckets used to shard items and reduce lock contention
// `capPerBkt` is the capacity of each bucket; the Cache can store at most `capPerBkt * bucketCnt` items
// optional `expiration` is the item lifetime (only lazy eviction is used here); the default `0` means permanent
func NewLRUCache(bucketCnt, capPerBkt uint16, expiration ...time.Duration) *Cache {
	mask := maskOfNextPowOf2(bucketCnt)
	c := &Cache{make([]sync.Mutex, mask+1), make([][2]*cache, mask+1), 0, func(int, string, *interface{}, []byte, int) {}, int32(mask)}
	for i := range c.insts {
		c.insts[i][0] = create(uint32(capPerBkt))
	}
	if len(expiration) > 0 {
		c.expiration = expiration[0]
	}
	return c
}

// LRU2 - enable LRU-2 support (an item that is visited twice is promoted to the upper-level cache)
// `capPerBkt` is the capacity of each LRU-2 bucket; the Cache can store up to `capPerBkt * bucketCnt` extra items
func (c *Cache) LRU2(capPerBkt uint16) *Cache {
	for i := range c.insts {
		c.insts[i][1] = create(uint32(capPerBkt))
	}
	return c
}

// put - put an item into the cache
func (c *Cache) put(key string, i *interface{}, b []byte) {
	idx := hashBKRD(key) & c.mask
	c.locks[idx].Lock()
	status := c.insts[idx][0].put(key, i, b, now()+int64(c.expiration), c.on)
	c.locks[idx].Unlock()
	c.on(PUT, key, i, b, status)
}

// ToInt64 - convert bytes to int64
func ToInt64(b []byte) (int64, bool) {
	if len(b) >= 8 {
		return int64(binary.LittleEndian.Uint64(b)), true
	}
	return 0, false
}

// Put - put an item into cache
func (c *Cache) Put(key string, val interface{}) { c.put(key, &val, nil) }

// PutInt64 - put an int64 item into cache
func (c *Cache) PutInt64(key string, d int64) {
	var data [8]byte
	binary.LittleEndian.PutUint64(data[:], uint64(d))
	c.put(key, nil, data[:])
}

// PutBytes - put a bytes item into cache
func (c *Cache) PutBytes(key string, b []byte) { c.put(key, nil, b) }

// Get - get value of key from cache with result
func (c *Cache) Get(key string) (interface{}, bool) {
	if i, _, ok := c.get(key); ok && i != nil {
		return *i, true
	}
	return nil, false
}

// GetBytes - get bytes value of key from cache with result
func (c *Cache) GetBytes(key string) ([]byte, bool) {
	if _, b, ok := c.get(key); ok {
		return b, true
	}
	return nil, false
}

// GetInt64 - get int64 value of key from cache with result
func (c *Cache) GetInt64(key string) (int64, bool) {
	if _, b, ok := c.get(key); ok && len(b) >= 8 {
		return int64(binary.LittleEndian.Uint64(b)), true
	}
	return 0, false
}

func (c *Cache) _get(key string, idx, level int32) (*node, int) {
	if n, s := c.insts[idx][level].get(key); s > 0 && n.expireAt > 0 && (c.expiration <= 0 || now() < n.expireAt) {
		n.expireAt = now() + int64(c.expiration) // refresh expiration
		return n, s                              // not necessary to remove the expired item here; doing so would cause GC thrashing
	}
	return nil, 0
}

func (c *Cache) get(key string) (i *interface{}, b []byte, _ bool) {
	idx := hashBKRD(key) & c.mask
	c.locks[idx].Lock()
	n, s := (*node)(nil), 0
	if c.insts[idx][1] == nil { // (if LRU-2 mode is not enabled, the overhead is negligible)
		n, s = c._get(key, idx, 0) // normal lru mode
	} else { // LRU-2 mode
		e := int64(0)
		if n, s, e = c.insts[idx][0].del(key); s <= 0 {
			n, s = c._get(key, idx, 1) // re-find in level-1
		} else {
			c.insts[idx][1].put(key, n.v.i, n.v.b, e, c.on) // find in level-0, move to level-1
		}
	}
	if s <= 0 {
		c.locks[idx].Unlock()
		c.on(GET, key, nil, nil, 0)
		return
	}
	i, b = n.v.i, n.v.b
	c.locks[idx].Unlock()
	c.on(GET, key, i, b, 1)
	return i, b, true
}

// Del - delete item by key from cache
func (c *Cache) Del(key string) {
	idx := hashBKRD(key) & c.mask
	c.locks[idx].Lock()
	n, s, e := c.insts[idx][0].del(key)
	if c.insts[idx][1] != nil { // (if LRU-2 mode is not enabled, the overhead is negligible)
		if n2, s2, e2 := c.insts[idx][1].del(key); n2 != nil && (n == nil || e < e2) { // callback with the latest-added one if both exist
			n, s = n2, s2
		}
	}
	if s > 0 {
		c.on(DEL, key, n.v.i, n.v.b, 1)
		n.v.i, n.v.b = nil, nil // release now
	} else {
		c.on(DEL, key, nil, nil, 0)
	}
	c.locks[idx].Unlock()
}

// Walk - calls walker sequentially for each valid item in the lru cache; return false from walker to stop iterating the current bucket
func (c *Cache) Walk(walker func(key string, iface *interface{}, bytes []byte, expireAt int64) bool) {
	for i := range c.insts {
		c.locks[i].Lock()
		if c.insts[i][0].walk(walker); c.insts[i][1] != nil {
			c.insts[i][1].walk(walker)
		}
		c.locks[i].Unlock()
	}
}

const (
	PUT = iota + 1
	GET
	DEL
)

// inspector - can be used to collect statistics such as cache hit/miss rate, or for other scenarios like a ringbuf queue
//   more details about every parameter: https://github.com/orca-zhang/ecache/blob/master/README_en.md#inject-an-inspector
type inspector func(action int, key string, iface *interface{}, bytes []byte, status int)

// Inspect - inject an inspector to observe cache actions
func (c *Cache) Inspect(insptr inspector) {
	old := c.on
	c.on = func(action int, key string, iface *interface{}, bytes []byte, status int) {
		old(action, key, iface, bytes, status) // call as the declared order, old first
		insptr(action, key, iface, bytes, status)
	}
}


================================================
FILE: ecache_test.go
================================================
package ecache

import (
	"bytes"
	"container/list"
	"fmt"
	"sync"
	"testing"
	"time"
)

var on = func(int, string, *interface{}, []byte, int) {}

var inst = NewLRUCache(1, 1, time.Second)

func iface(i interface{}) *interface{} { return &i }

type Elem struct {
	key string
	val string
}

func Test_create(t *testing.T) {
	c := create(5)
	if len(c.hmap) != 0 {
		t.Error("case 1 failed")
	}
}

func Test_put(t *testing.T) {
	c := create(5)
	c.put("1", iface("1"), nil, now()+int64(10*time.Second), on)
	c.put("2", iface("2"), nil, now()+int64(10*time.Second), on)
	c.put("1", iface("3"), nil, now()+int64(10*time.Second), on)
	if len(c.hmap) != 2 {
		t.Error("case 2.1 failed")
	}

	l := list.New()
	l.PushBack(&Elem{"1", "3"})
	l.PushBack(&Elem{"2", "2"})

	e := l.Front()
	for idx := c.dlnk[0][n]; idx != 0; idx = c.dlnk[idx][n] {
		v := e.Value.(*Elem)
		el := c.m[idx-1]
		if el.expireAt <= 0 {
			continue
		}
		if el.k != v.key {
			t.Error("case 2.2 failed: ", el.k, v.key)
		}
		if (*(el.v.i)).(string) != v.val {
			t.Error("case 2.3 failed: ", (*(el.v.i)).(string), v.val)
		}
		e = e.Next()
	}

	c.put("3", iface("4"), nil, now()+int64(10*time.Second), on)
	c.put("4", iface("5"), nil, now()+int64(10*time.Second), on)
	c.put("5", iface("6"), nil, now()+int64(10*time.Second), on)
	c.put("2", iface("7"), nil, now()+int64(10*time.Second), on)
	if len(c.hmap) != 5 {
		t.Error("case 3.1 failed")
	}

	l = list.New()
	l.PushBack(&Elem{"2", "7"})
	l.PushBack(&Elem{"5", "6"})
	l.PushBack(&Elem{"4", "5"})
	l.PushBack(&Elem{"3", "4"})
	l.PushBack(&Elem{"1", "3"})

	rl := list.New()
	rl.PushBack(&Elem{"1", "3"})
	rl.PushBack(&Elem{"3", "4"})
	rl.PushBack(&Elem{"4", "5"})
	rl.PushBack(&Elem{"5", "6"})
	rl.PushBack(&Elem{"2", "7"})

	e = l.Front()
	for idx := c.dlnk[0][n]; idx != 0; idx = c.dlnk[idx][n] {
		v := e.Value.(*Elem)
		el := c.m[idx-1]
		if el.expireAt <= 0 {
			continue
		}
		if el.k != v.key {
			t.Error("case 3.2 failed: ", el.k, v.key)
		}
		if (*(el.v.i)).(string) != v.val {
			t.Error("case 3.3 failed: ", (*(el.v.i)).(string), v.val)
		}
		e = e.Next()
	}

	e = rl.Front()
	for idx := c.dlnk[0][p]; idx != 0; idx = c.dlnk[idx][p] {
		v := e.Value.(*Elem)
		el := c.m[idx-1]
		if el.expireAt <= 0 {
			continue
		}
		if el.k != v.key {
			t.Error("case 3.4 failed: ", el.k, v.key)
		}
		if (*(el.v.i)).(string) != v.val {
			t.Error("case 3.5 failed: ", (*(el.v.i)).(string), v.val)
		}
		e = e.Next()
	}

	c.put("6", iface("8"), nil, now()+int64(10*time.Second), on)
	if len(c.hmap) != 5 {
		t.Error("case 4.1 failed")
	}

	l = list.New()
	l.PushBack(&Elem{"6", "8"})
	l.PushBack(&Elem{"2", "7"})
	l.PushBack(&Elem{"5", "6"})
	l.PushBack(&Elem{"4", "5"})
	l.PushBack(&Elem{"3", "4"})

	e = l.Front()
	for idx := c.dlnk[0][n]; idx != 0; idx = c.dlnk[idx][n] {
		v := e.Value.(*Elem)
		el := c.m[idx-1]
		if el.expireAt <= 0 {
			continue
		}
		if el.k != v.key {
			t.Error("case 4.2 failed: ", el.k, v.key)
		}
		if (*(el.v.i)).(string) != v.val {
			t.Error("case 4.3 failed: ", (*(el.v.i)).(string), v.val)
		}
		e = e.Next()
	}
}

func Test_get(t *testing.T) {
	c := create(2)
	c.put("1", iface("1"), nil, now()+int64(10*time.Second), on)
	c.put("2", iface("2"), nil, now()+int64(10*time.Second), on)
	if v, _ := c.get("1"); *(v.v.i) != "1" {
		t.Error("case 1.1 failed")
	}
	c.put("3", iface("3"), nil, now()+int64(10*time.Second), on)
	if len(c.hmap) != 2 {
		t.Error("case 1.2 failed")
	}

	l := list.New()
	l.PushBack(&Elem{"3", "3"})
	l.PushBack(&Elem{"1", "1"})

	e := l.Front()
	for idx := c.dlnk[0][n]; idx != 0; idx = c.dlnk[idx][n] {
		v := e.Value.(*Elem)
		el := c.m[idx-1]
		if el.k != v.key {
			t.Error("case 1.3 failed: ", el.k, v.key)
		}
		if (*(el.v.i)).(string) != v.val {
			t.Error("case 1.4 failed: ", (*(el.v.i)).(string), v.val)
		}
		e = e.Next()
	}
}

func Test_delete(t *testing.T) {
	c := create(5)
	c.put("3", iface("4"), nil, now()+int64(10*time.Second), on)
	c.put("4", iface("5"), nil, now()+int64(10*time.Second), on)
	c.put("5", iface("6"), nil, now()+int64(10*time.Second), on)
	c.put("2", iface("7"), nil, now()+int64(10*time.Second), on)
	c.put("6", iface("8"), nil, now()+int64(10*time.Second), on)
	c.del("5")

	l := list.New()
	l.PushBack(&Elem{"6", "8"})
	l.PushBack(&Elem{"2", "7"})
	l.PushBack(&Elem{"4", "5"})
	l.PushBack(&Elem{"3", "4"})
	/*if len(c.hmap) != 4 {
		t.Error("case 1.1 failed")
	}*/

	e := l.Front()
	for idx := c.dlnk[0][n]; idx != 0; idx = c.dlnk[idx][n] {
		el := c.m[idx-1]
		if el.expireAt <= 0 {
			continue
		}
		v := e.Value.(*Elem)
		if el.k != v.key {
			t.Error("case 1.2 failed: ", el.k, v.key)
		}
		if (*(el.v.i)).(string) != v.val {
			t.Error("case 1.3 failed: ", (*(el.v.i)).(string), v.val)
		}
		e = e.Next()
	}

	c.del("6")

	l = list.New()
	l.PushBack(&Elem{"2", "7"})
	l.PushBack(&Elem{"4", "5"})
	l.PushBack(&Elem{"3", "4"})
	/*if len(c.hmap) != 3 {
		t.Error("case 2.1 failed")
	}*/

	e = l.Front()
	for idx := c.dlnk[0][n]; idx != 0; idx = c.dlnk[idx][n] {
		el := c.m[idx-1]
		if el.expireAt <= 0 {
			continue
		}
		v := e.Value.(*Elem)
		if el.k != v.key {
			t.Error("case 2.2 failed: ", el.k, v.key)
		}
		if (*(el.v.i)).(string) != v.val {
			t.Error("case 2.3 failed: ", (*(el.v.i)).(string), v.val)
		}
		e = e.Next()
	}

	c.del("3")

	l = list.New()
	l.PushBack(&Elem{"2", "7"})
	l.PushBack(&Elem{"4", "5"})
	/*if len(c.hmap) != 2 {
		t.Error("case 3.1 failed")
	}*/

	e = l.Front()
	for idx := c.dlnk[0][n]; idx != 0; idx = c.dlnk[idx][n] {
		el := c.m[idx-1]
		if el.expireAt <= 0 {
			continue
		}
		v := e.Value.(*Elem)
		if el.k != v.key {
			t.Error("case 3.2 failed: ", el.k, v.key)
		}
		if (*(el.v.i)).(string) != v.val {
			t.Error("case 3.3 failed: ", (*(el.v.i)).(string), v.val)
		}
		e = e.Next()
	}
}

func Test_walk(t *testing.T) {
	c := create(5)
	c.put("3", iface(4), nil, now()+int64(10*time.Second), on)
	c.put("4", iface(5), nil, now()+int64(10*time.Second), on)
	c.put("5", iface(6), nil, now()+int64(10*time.Second), on)
	c.put("2", iface(7), nil, now()+int64(10*time.Second), on)
	c.put("6", iface(8), nil, now()+int64(10*time.Second), on)

	l := list.New()
	l.PushBack(&Elem{"6", "8"})
	l.PushBack(&Elem{"2", "7"})
	l.PushBack(&Elem{"5", "6"})
	l.PushBack(&Elem{"4", "5"})
	l.PushBack(&Elem{"3", "4"})

	e := l.Front()
	c.walk(
		func(key string, iface *interface{}, b []byte, expireAt int64) bool {
			v := e.Value.(*Elem)
			if key != v.key {
				t.Error("case 1.1 failed: ", key, v.key)
			}
			if fmt.Sprint(*iface) != v.val {
				t.Error("case 1.2 failed: ", *iface, v.val)
			}
			e = e.Next()
			return true
		})

	if e != nil {
		t.Error("case 1.3 failed: ", e.Value)
	}

	e = l.Front()
	c.walk(
		func(key string, iface *interface{}, b []byte, expireAt int64) bool {
			v := e.Value.(*Elem)
			if key != v.key {
				t.Error("case 1.1 failed: ", key, v.key)
			}
			if fmt.Sprint(*iface) != v.val {
				t.Error("case 1.2 failed: ", iface, v.val)
			}
			return false
		})
}

func TestHashBKRD(t *testing.T) {
	if hashBKRD("12345") != int32(1658880867) {
		t.Error("case 1 failed")
	}
	if hashBKRD("abcdefghijklmnopqrstuvwxyz") != int32(-1761441311) {
		t.Error("case 2 failed")
	}
}

func TestMaskOfNextPowOf2(t *testing.T) {
	if maskOfNextPowOf2(0) != 0 {
		t.Error("case 1 failed")
	}
	if maskOfNextPowOf2(1) != 0 {
		t.Error("case 2 failed")
	}
	if maskOfNextPowOf2(2) != 1 {
		t.Error("case 3 failed")
	}
	if maskOfNextPowOf2(3) != 3 {
		t.Error("case 4 failed")
	}
	if maskOfNextPowOf2(4) != 3 {
		t.Error("case 5 failed")
	}
	if maskOfNextPowOf2(123) != 127 {
		t.Error("case 6 failed")
	}
	if maskOfNextPowOf2(0x7FFF) != 0x7FFF {
		t.Error("case 7 failed")
	}
	if maskOfNextPowOf2(0x8001) != 0xFFFF {
		t.Error("case 8 failed")
	}
}

func TestExpiration(t *testing.T) {
	lc := NewLRUCache(2, 1, time.Second)
	lc.Put("1", "2")
	if v, ok := lc.Get("1"); !ok || v != "2" {
		t.Error("case 1 failed")
	}
	time.Sleep(2 * time.Second)
	if _, ok := lc.Get("1"); ok {
		t.Error("case 2 failed")
	}

	// permanent
	lc2 := NewLRUCache(2, 1, 0)
	lc2.Put("1", "2")
	if v, ok := lc2.Get("1"); !ok || v != "2" {
		t.Error("case 1 failed")
	}
	time.Sleep(time.Second)
	if _, ok := lc2.Get("1"); !ok {
		t.Error("case 2 failed")
	}
}

func TestLRUCache(t *testing.T) {
	lc := NewLRUCache(1, 3, 1*time.Second)
	lc.Put("1", "1")
	lc.Put("2", "2")
	lc.Put("3", "3")
	v, _ := lc.Get("2") // check reuse
	lc.Put("4", "4")
	lc.Put("5", "5")
	lc.Put("6", "6")
	if v != "2" {
		t.Error("case 3 failed")
	}
}

func TestWalk(t *testing.T) {
	m := make(map[string]string, 0)
	lc := NewLRUCache(2, 3, 10*time.Second).LRU2(3)
	lc.Put("1", "1")
	m["1"] = "1"
	lc.Put("2", "2")
	m["2"] = "2"
	lc.Put("3", "3")
	m["3"] = "3"
	lc.Get("2") // l0 -> l1
	lc.Put("4", "4")
	m["4"] = "4"
	lc.Put("5", "5")
	m["5"] = "5"
	lc.Put("6", "6")
	m["6"] = "6"
	lc.Walk(func(key string, iface *interface{}, b []byte, expireAt int64) bool {
		if m[key] != (*iface).(string) {
			t.Error("case failed")
		}
		delete(m, key)
		return true
	})
	if len(m) > 0 {
		fmt.Println(m)
		t.Error("case failed")
	}
}

func TestPutGet(t *testing.T) {
	lc := NewLRUCache(1, 10, time.Second)
	lc.Put("1", "1")
	if v, _ := lc.Get("1"); v != "1" {
		t.Error("case 1 failed")
	}
	lc.Put("1", nil)
	if v, ok := lc.Get("1"); !ok || v != nil {
		t.Error("case 2 failed")
	}
	if _, ok := lc.Get("no1"); ok {
		t.Error("case 3 failed")
	}

	lc.PutInt64("2", int64(1))
	if v, _ := lc.GetInt64("2"); v != int64(1) {
		t.Error("case 4 failed")
	}
	lc.PutInt64("2", int64(0))
	if v, _ := lc.GetInt64("2"); v != int64(0) {
		t.Error("case 5 failed")
	}
	lc.PutInt64("2", int64(123456))
	if v, _ := lc.GetInt64("2"); v != int64(123456) {
		t.Error("case 6 failed")
	}
	lc.PutInt64("2", int64(0x7FFFFFFFFFFFFFFF))
	if v, _ := lc.GetInt64("2"); v != int64(0x7FFFFFFFFFFFFFFF) {
		t.Error("case 7 failed")
	}
	lc.PutInt64("2", int64(^0x7FFFFFFFFFFFFFFF))
	if v, _ := lc.GetInt64("2"); v != int64(^0x7FFFFFFFFFFFFFFF) {
		t.Error("case 8 failed")
	}
	if _, ok := lc.GetInt64("no2"); ok {
		t.Error("case 9 failed")
	}

	b := []byte{1, 2, 3, 4, 5, 6}
	lc.PutBytes("3", b)
	if v, _ := lc.GetBytes("3"); !bytes.Equal(b, v) {
		t.Error("case 10 failed")
	}

	lc.PutBytes("3", nil)
	if v, _ := lc.GetBytes("3"); !bytes.Equal(nil, v) {
		t.Error("case 11 failed")
	}
	if _, ok := lc.GetBytes("no3"); ok {
		t.Error("case 12 failed")
	}

	lc.PutBytes("4", []byte{0})
	if _, ok := lc.GetInt64("4"); ok {
		t.Error("case 13 failed")
	}

	lc.PutBytes("5", []byte{0x88, 0x77, 0x66, 0x55, 0x44, 0x33, 0x22, 0x11})
	if v, ok := lc.GetBytes("5"); ok {
		if i, _ := ToInt64(v); i != 0x1122334455667788 {
			t.Error("case 14 failed")
		}
	} else {
		t.Error("case 15 failed")
	}

	lc.PutInt64("6", 0x1122334455667788)
	if v, ok := lc.GetBytes("6"); ok {
		if !bytes.Equal(v, []byte{0x88, 0x77, 0x66, 0x55, 0x44, 0x33, 0x22, 0x11}) {
			t.Error("case 16 failed")
		}
	} else {
		t.Error("case 17 failed")
	}

	if _, ok := ToInt64([]byte{0}); ok {
		t.Error("case 18 failed")
	}
}

func TestLRU2Cache(t *testing.T) {
	lc := NewLRUCache(1, 3, time.Second).LRU2(1)
	lc.Put("1", "1")
	lc.Put("2", "2")
	lc.Put("3", "3")
	lc.Get("2") // l0 -> l1
	lc.Get("3") // l0 -> l1
	if _, ok := lc.Get("2"); ok {
		t.Error("case 4 failed")
	}
	lc.Put("4", "4")
	lc.Put("5", "5")
	if _, ok := lc.Get("1"); !ok { // l0 -> l1
		t.Error("case 4 failed")
	}

	toCheck := "1"
	lc.Inspect(func(action int, key string, iface *interface{}, b []byte, ok int) {
		if action == DEL && iface != nil && *iface != toCheck {
			t.Error("case 4 failed")
		}
	})

	lc.Del("1")
	// del in l1

	if _, ok := lc.Get("1"); ok {
		t.Error("case 4 failed")
	}
	lc.Put("6", "6")
	lc.Put("7", "7")
	if _, ok := lc.Get("4"); ok {
		t.Error("case 4 failed")
	}

	// l0 -> l1 both exist
	lc.Put("1", "1")
	lc.Get("1") // l0 -> l1

	time.Sleep(time.Second)

	lc.Put("1", "2")

	// both del, return newest one
	toCheck = "2"
	lc.Del("1")

	if _, ok := lc.Get("1"); ok {
		t.Error("case 4 failed")
	}
}

func TestConcurrent(t *testing.T) {
	lc := NewLRUCache(4, 1, 2*time.Second)
	var wg sync.WaitGroup
	for index := 0; index < 1000000; index++ {
		wg.Add(3)
		go func() {
			lc.Put("1", "2")
			wg.Done()
		}()
		go func() {
			lc.Get("1")
			wg.Done()
		}()
		go func() {
			lc.Del("1")
			wg.Done()
		}()
	}
	wg.Wait()
}

func TestConcurrentLRU2(t *testing.T) {
	lc := NewLRUCache(4, 1, 2*time.Second).LRU2(1)
	var wg sync.WaitGroup
	for index := 0; index < 1000000; index++ {
		wg.Add(3)
		go func() {
			lc.Put("1", "2")
			wg.Done()
		}()
		go func() {
			lc.Get("1")
			wg.Done()
		}()
		go func() {
			lc.Del("1")
			wg.Done()
		}()
	}
	wg.Wait()
}

func TestInspect(t *testing.T) {
	lc := NewLRUCache(1, 3, 1*time.Second)
	lc.Inspect(func(action int, key string, iface *interface{}, b []byte, ok int) {
		if iface != nil {
			fmt.Println(action, key, *iface, ok)
		} else {
			fmt.Println(action, key, ok)
		}
	})
	lc.Put("1", "1")
	lc.Put("1", "2")
	lc.Put("2", "2")
	lc.Put("3", "3")
	v, _ := lc.Get("2") // check reuse
	lc.Put("4", "4")
	lc.Put("5", "5")
	lc.Put("6", "6")
	if v != "2" {
		t.Error("case 3 failed")
	}
	lc.Get("10")
	lc.Del("6")
	lc.Del("10")
}

func TestForIssue7(t *testing.T) {
	lc := NewLRUCache(16, 65535, 100*time.Millisecond)
	var wg sync.WaitGroup
	for index := 0; index < 1000000; index++ {
		wg.Add(3)
		go func() {
			lc.Put("1", "2")
			wg.Done()
		}()
		go func() {
			lc.Get("1")
			wg.Done()
		}()
		go func() {
			lc.Del("1")
			wg.Done()
		}()
	}
	wg.Wait()

	lc = NewLRUCache(65535, 16, 100*time.Millisecond)
	for index := 0; index < 1000000; index++ {
		wg.Add(3)
		go func() {
			lc.Put("1", "2")
			wg.Done()
		}()
		go func() {
			lc.Get("1")
			wg.Done()
		}()
		go func() {
			lc.Del("1")
			wg.Done()
		}()
	}
	wg.Wait()
}


================================================
FILE: go.mod
================================================
module github.com/orca-zhang/ecache

go 1.14

require (
	github.com/go-redis/redis/v7 v7.4.1
	github.com/go-redis/redis/v8 v8.11.4
	github.com/gomodule/redigo v1.8.6
)


================================================
FILE: go.sum
================================================
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/go-redis/redis/v7 v7.4.1 h1:PASvf36gyUpr2zdOUS/9Zqc80GbM+9BDyiJSJDDOrTI=
github.com/go-redis/redis/v7 v7.4.1/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v8 v8.11.4 h1:kHoYkfZP6+pe04aFTnhDH6GDROa5yJdHJVNxV3F46Tg=
github.com/go-redis/redis/v8 v8.11.4/go.mod h1:2Z2wHZXdQpCDXEGzqMockDpNyYvi2l4Pxt6RJr792+w=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/gomodule/redigo v1.8.6 h1:h7kHSqUl2kxeaQtVslsfUCPJ1oz2pxcyzLy4zezIzPw=
github.com/gomodule/redigo v1.8.6/go.mod h1:P9dn9mFrCBvWhGE1wpxx6fgq7BAeLBk+UUUzlpkBYO0=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.4 h1:29JGrr5oVBm5ulCWet69zQkzWipVXIol6ygQUe/EzNc=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.16.0 h1:6gjqkI8iiRHMvdccRJM8rVKjCWk6ZIm6FTm3ddIe4/c=
github.com/onsi/gomega v1.16.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781 h1:DzZ89McO9/gWPsQXS/FVKAlG02ZjaQ6AlZRBimEYOd0=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da h1:b3NXsE2LusjYGGjL5bxEVZZORm/YEFFrWFjR8eFrw/c=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=


================================================
FILE: stats/stats.go
================================================
package stats

import (
	"sync"
	"sync/atomic"
	"unsafe"

	"github.com/orca-zhang/ecache"
)

var m sync.Map

type StatsNode struct {
	// don't reorder these fields or add a field between them - Bind relies on their offsets
	Evicted, Updated, Added, GetMiss, GetHit, DelMiss, DelHit uint64
}

// HitRate - ratio of Get hits to all Gets; 0.0 when nothing has been recorded
func (s *StatsNode) HitRate() float64 {
	if s.GetHit == 0 && s.GetMiss == 0 {
		return 0.0
	}
	return float64(s.GetHit) / float64(s.GetHit+s.GetMiss)
}

// Bind - bind cache instances to a stats pool
// `pool` can be used to classify instances that store the same kind of items
// `caches` is the list of cache instances to be bound
func Bind(pool string, caches ...*ecache.Cache) error {
	v, _ := m.LoadOrStore(pool, &StatsNode{})
	for _, c := range caches {
		c.Inspect(func(action int, _ string, _ *interface{}, _ []byte, status int) {
			// Very low-cost stats: map (action, status) onto a field offset in
			// StatsNode and bump that counter with a single atomic add.
			// index = status + action*2 - 1 selects Evicted(0), Updated(1),
			// Added(2), GetMiss(3), GetHit(4), DelMiss(5) or DelHit(6).
			// Note: unsafe.Sizeof(&status) is the pointer size, which matches
			// the 8-byte uint64 fields only on 64-bit platforms.
			atomic.AddUint64((*uint64)(unsafe.Pointer(uintptr(unsafe.Pointer(v.(*StatsNode)))+uintptr(status+action*2-1)*unsafe.Sizeof(&status))), 1)
		})
	}
	return nil
}

// Stats - iterate over the results like so:
//
// `k` is the pool name (category), type string
// `v` is the stats node, type `*stats.StatsNode`
//
// 	stats.Stats().Range(func(k, v interface{}) bool {
//     	fmt.Println("stats:", k, v)
//     	return true
// 	})
func Stats() *sync.Map {
	return &m
}
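
The one-liner in `Bind` packs the whole counter dispatch into pointer arithmetic: with the `StatsNode` fields laid out in the fixed order Evicted, Updated, Added, GetMiss, GetHit, DelMiss, DelHit, the expression `status + action*2 - 1` picks the field index for each (action, status) pair. Below is a minimal self-contained sketch of the same technique; the `counters` type and `bump` function are hypothetical stand-ins mirroring `StatsNode` (using `unsafe.Sizeof(uint64(0))` for the stride, so it is independent of pointer size):

```go
package main

import (
	"fmt"
	"sync/atomic"
	"unsafe"
)

// Same action constants as ecache: PUT = 1, GET = 2, DEL = 3.
const (
	PUT = iota + 1
	GET
	DEL
)

// counters mirrors stats.StatsNode: seven uint64 fields in a fixed order,
// so each counter sits at a predictable offset from the struct start.
type counters struct {
	Evicted, Updated, Added, GetMiss, GetHit, DelMiss, DelHit uint64
}

// bump increments the counter selected by (action, status) without a switch:
// index = status + action*2 - 1 maps PUT with status -1/0/1 to
// Evicted/Updated/Added, GET with 0/1 to GetMiss/GetHit, and
// DEL with 0/1 to DelMiss/DelHit.
func bump(c *counters, action, status int) {
	idx := uintptr(status + action*2 - 1)
	p := (*uint64)(unsafe.Pointer(uintptr(unsafe.Pointer(c)) + idx*unsafe.Sizeof(uint64(0))))
	atomic.AddUint64(p, 1)
}

func main() {
	var c counters
	bump(&c, PUT, 1)  // Added
	bump(&c, PUT, 0)  // Updated
	bump(&c, PUT, -1) // Evicted
	bump(&c, GET, 1)  // GetHit
	bump(&c, GET, 0)  // GetMiss
	bump(&c, DEL, 1)  // DelHit
	bump(&c, DEL, 0)  // DelMiss
	fmt.Printf("%+v\n", c) // every field should now read 1
}
```

The trade-off is readability versus cost: an equivalent `switch` over (action, status) would be clearer but adds branches on every cache operation, while this version is one add against a computed address.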


================================================
FILE: stats/stats_test.go
================================================
package stats

import (
	"fmt"
	"sync"
	"testing"
	"time"

	"github.com/orca-zhang/ecache"
)

func TestLRU2Cache(t *testing.T) {
	lc := ecache.NewLRUCache(1, 3, 10*time.Second).LRU2(1)
	Bind("lc", lc)

	v, _ := Stats().Load("lc")
	node := v.(*StatsNode)
	if node.HitRate() > 1e-6 {
		t.Error("case 1 failed")
	}

	lc.Put("1", "1")              // Added
	lc.Put("2", "2")              // Added
	lc.Put("3", "3")              // Added
	lc.Get("2")                   // l0 -> l1 GetHit
	lc.Get("3")                   // l0 -> l1 GetHit, Evicted
	if _, ok := lc.Get("2"); ok { // GetMiss
		t.Error("case 1 failed")
	}
	lc.Put("4", "4")               // Added
	lc.Put("5", "5")               // Added
	if _, ok := lc.Get("1"); !ok { // l0 -> l1 GetHit, Evicted
		t.Error("case 1 failed")
	}

	Stats().Range(func(k, v interface{}) bool {
		fmt.Printf("stats: %s %+v\n", k, v)
		if k == "lc" {
			node := v.(*StatsNode)
			if node.Evicted != 2 {
				t.Error("case 1 failed")
			}
			if node.Updated != 0 {
				t.Error("case 1 failed")
			}
			if node.Added != 5 {
				t.Error("case 1 failed")
			}
			if node.GetMiss != 1 {
				t.Error("case 1 failed")
			}
			if node.GetHit != 3 {
				t.Error("case 1 failed")
			}
			if node.DelMiss != 0 {
				t.Error("case 1 failed")
			}
			if node.DelHit != 0 {
				t.Error("case 1 failed")
			}
		}
		return true
	})

	v, _ = Stats().Load("lc")
	node = v.(*StatsNode)
	if node.HitRate()-0.75 > 1e-6 {
		t.Error("case 1 failed")
	}

	lc.Put("6", "6")              // Added
	lc.Put("7", "7")              // Added, Evicted
	if _, ok := lc.Get("4"); ok { // GetMiss
		t.Error("case 1 failed")
	}
	lc.Del("7")                   // DelHit
	lc.Del("8")                   // DelMiss
	lc.Put("1", "1")              // Added
	lc.Put("1", "2")              // Updated
	lc.Del("1")                   // DelHit
	if _, ok := lc.Get("1"); ok { // GetMiss
		t.Error("case 1 failed")
	}

	Stats().Range(func(k, v interface{}) bool {
		fmt.Printf("stats: %s %+v\n", k, v)
		if k == "lc" {
			node := v.(*StatsNode)
			if node.Evicted != 3 {
				t.Error("case 1 failed")
			}
			if node.Updated != 1 {
				t.Error("case 1 failed")
			}
			if node.Added != 8 {
				t.Error("case 1 failed")
			}
			if node.GetMiss != 3 {
				t.Error("case 1 failed")
			}
			if node.GetHit != 3 {
				t.Error("case 1 failed")
			}
			if node.DelMiss != 1 {
				t.Error("case 1 failed")
			}
			if node.DelHit != 2 {
				t.Error("case 1 failed")
			}
		}
		return true
	})

	v, _ = Stats().Load("lc")
	node = v.(*StatsNode)
	if node.HitRate()-0.5 > 1e-6 {
		t.Error("case 1 failed")
	}
}

func TestConcurrent(t *testing.T) {
	lc := ecache.NewLRUCache(4, 1, 10*time.Second).LRU2(1)
	Bind("aaaa", lc)
	var wg sync.WaitGroup
	for index := 0; index < 1000000; index++ {
		wg.Add(3)
		go func() {
			lc.Put("1", "2")
			wg.Done()
		}()
		go func() {
			lc.Get("1")
			wg.Done()
		}()
		go func() {
			lc.Del("1")
			wg.Done()
		}()
	}
	wg.Wait()
	Stats().Range(func(k, v interface{}) bool {
		fmt.Printf("stats: %s %+v\n", k, v)
		return true
	})
}

func TestBindToExistPool(t *testing.T) {
	lcOld := ecache.NewLRUCache(1, 3, 1*time.Second).LRU2(1)
	Bind("lc2", lcOld)
	lc := ecache.NewLRUCache(1, 3, 1*time.Second).LRU2(1)
	Bind("lc2", lc)
	lc.Put("1", "1")
	Stats().Range(func(k, v interface{}) bool {
		fmt.Printf("stats: %s %+v\n", k, v)
		if k == "lc2" {
			node := v.(*StatsNode)
			if node.Added != 1 {
				t.Error("case 3 failed")
			}
		}
		return true
	})
}
SYMBOL INDEX (101 symbols across 12 files)

FILE: dist/dist.go
  constant topic (line 13) | topic = "orca-zhang/ecache"
  type RedisCli (line 16) | type RedisCli interface
  function delAll (line 28) | func delAll(pool, key string) {
  function Init (line 37) | func Init(r RedisCli) {
  function Bind (line 66) | func Bind(pool string, caches ...*ecache.Cache) error {
  function OnDel (line 73) | func OnDel(pool, key string) error {

FILE: dist/dist_test.go
  type DIYCli (line 11) | type DIYCli struct
    method OK (line 17) | func (d *DIYCli) OK() bool {
    method Pub (line 22) | func (d *DIYCli) Pub(channel, payload string) error {
    method Sub (line 28) | func (d *DIYCli) Sub(channel string, callback func(payload string)) er...
  function Take (line 39) | func Take(ok *bool) RedisCli {
  function TestBind (line 46) | func TestBind(t *testing.T) {
  function TestInit (line 75) | func TestInit(t *testing.T) {
  function TestDIYClient (line 83) | func TestDIYClient(t *testing.T) {
  type PanicCli (line 116) | type PanicCli struct
    method OK (line 120) | func (d *PanicCli) OK() bool {
    method Pub (line 125) | func (d *PanicCli) Pub(channel, payload string) error {
    method Sub (line 130) | func (d *PanicCli) Sub(channel string, callback func(payload string)) ...
  function TestPanicClient (line 134) | func TestPanicClient(t *testing.T) {

FILE: dist/goredis/goredis.go
  type GoRedisCli (line 10) | type GoRedisCli struct
    method OK (line 17) | func (g *GoRedisCli) OK() bool {
    method Pub (line 23) | func (g *GoRedisCli) Pub(channel, payload string) error {
    method Sub (line 29) | func (g *GoRedisCli) Sub(channel string, callback func(payload string)...
  function Take (line 43) | func Take(r *redis.Client, size ...int) dist.RedisCli {

FILE: dist/goredis/goredis_test.go
  function init (line 15) | func init() {
  function TestBind (line 26) | func TestBind(t *testing.T) {
  function TestDisconnect (line 55) | func TestDisconnect(t *testing.T) {

FILE: dist/goredis/v7/goredis.go
  type GoRedisCli (line 8) | type GoRedisCli struct
    method OK (line 14) | func (g *GoRedisCli) OK() bool {
    method Pub (line 20) | func (g *GoRedisCli) Pub(channel, payload string) error {
    method Sub (line 26) | func (g *GoRedisCli) Sub(channel string, callback func(payload string)...
  function Take (line 40) | func Take(r *redis.Client, size ...int) dist.RedisCli {

FILE: dist/goredis/v7/goredis_test.go
  function init (line 15) | func init() {
  function TestBind (line 26) | func TestBind(t *testing.T) {
  function TestDisconnect (line 55) | func TestDisconnect(t *testing.T) {

FILE: dist/redigo/redigo.go
  type RedigoCli (line 8) | type RedigoCli struct
    method OK (line 13) | func (g *RedigoCli) OK() bool {
    method Pub (line 22) | func (g *RedigoCli) Pub(channel, payload string) error {
    method Sub (line 31) | func (g *RedigoCli) Sub(channel string, callback func(payload string))...
  function Take (line 48) | func Take(r *redis.Pool) dist.RedisCli {

FILE: dist/redigo/redigo_test.go
  function init (line 15) | func init() {
  function TestBind (line 28) | func TestBind(t *testing.T) {
  function TestDisconnect (line 58) | func TestDisconnect(t *testing.T) {

FILE: ecache.go
  function now (line 12) | func now() int64 { return atomic.LoadInt64(&clock) }
  function init (line 13) | func init() {
  function hashBKRD (line 26) | func hashBKRD(s string) (hash int32) {
  function maskOfNextPowOf2 (line 33) | func maskOfNextPowOf2(cap uint16) uint32 {
  type value (line 43) | type value struct
  type node (line 48) | type node struct
  type cache (line 54) | type cache struct
    method put (line 66) | func (c *cache) put(k string, i *interface{}, b []byte, expireAt int64...
    method get (line 95) | func (c *cache) get(k string) (*node, int) {
    method del (line 104) | func (c *cache) del(k string) (_ *node, _ int, e int64) {
    method walk (line 114) | func (c *cache) walk(walker func(key string, iface *interface{}, bytes...
    method adjust (line 123) | func (c *cache) adjust(idx, f, t uint16) {
  function create (line 61) | func create(cap uint32) *cache {
  type Cache (line 130) | type Cache struct
    method LRU2 (line 156) | func (c *Cache) LRU2(capPerBkt uint16) *Cache {
    method put (line 164) | func (c *Cache) put(key string, i *interface{}, b []byte) {
    method Put (line 181) | func (c *Cache) Put(key string, val interface{}) { c.put(key, &val, ni...
    method PutInt64 (line 184) | func (c *Cache) PutInt64(key string, d int64) {
    method PutBytes (line 191) | func (c *Cache) PutBytes(key string, b []byte) { c.put(key, nil, b) }
    method Get (line 194) | func (c *Cache) Get(key string) (interface{}, bool) {
    method GetBytes (line 202) | func (c *Cache) GetBytes(key string) ([]byte, bool) {
    method GetInt64 (line 210) | func (c *Cache) GetInt64(key string) (int64, bool) {
    method _get (line 217) | func (c *Cache) _get(key string, idx, level int32) (*node, int) {
    method get (line 225) | func (c *Cache) get(key string) (i *interface{}, b []byte, _ bool) {
    method Del (line 251) | func (c *Cache) Del(key string) {
    method Walk (line 270) | func (c *Cache) Walk(walker func(key string, iface *interface{}, bytes...
    method Inspect (line 291) | func (c *Cache) Inspect(insptr inspector) {
  function NewLRUCache (line 142) | func NewLRUCache(bucketCnt, capPerBkt uint16, expiration ...time.Duratio...
  function ToInt64 (line 173) | func ToInt64(b []byte) (int64, bool) {
  constant PUT (line 281) | PUT = iota + 1
  constant GET (line 282) | GET
  constant DEL (line 283) | DEL
  type inspector (line 288) | type inspector

FILE: ecache_test.go
  function iface (line 16) | func iface(i interface{}) *interface{} { return &i }
  type Elem (line 18) | type Elem struct
  function Test_create (line 23) | func Test_create(t *testing.T) {
  function Test_put (line 30) | func Test_put(t *testing.T) {
  function Test_get (line 142) | func Test_get(t *testing.T) {
  function Test_delete (line 172) | func Test_delete(t *testing.T) {
  function Test_walk (line 258) | func Test_walk(t *testing.T) {
  function TestHashBKRD (line 305) | func TestHashBKRD(t *testing.T) {
  function TestMaskOfNextPowOf2 (line 314) | func TestMaskOfNextPowOf2(t *testing.T) {
  function TestExpiration (line 341) | func TestExpiration(t *testing.T) {
  function TestLRUCache (line 364) | func TestLRUCache(t *testing.T) {
  function TestWalk (line 378) | func TestWalk(t *testing.T) {
  function TestPutGet (line 407) | func TestPutGet(t *testing.T) {
  function TestLRU2Cache (line 487) | func TestLRU2Cache(t *testing.T) {
  function TestConcurrent (line 539) | func TestConcurrent(t *testing.T) {
  function TestConcurrentLRU2 (line 560) | func TestConcurrentLRU2(t *testing.T) {
  function TestInspect (line 581) | func TestInspect(t *testing.T) {
  function TestForIssue7 (line 606) | func TestForIssue7(t *testing.T) {

FILE: stats/stats.go
  type StatsNode (line 13) | type StatsNode struct
    method HitRate (line 19) | func (s *StatsNode) HitRate() float64 {
  function Bind (line 29) | func Bind(pool string, caches ...*ecache.Cache) error {
  function Stats (line 49) | func Stats() *sync.Map {

FILE: stats/stats_test.go
  function TestLRU2Cache (line 12) | func TestLRU2Cache(t *testing.T) {
  function TestConcurrent (line 121) | func TestConcurrent(t *testing.T) {
  function TestBindToExistPool (line 147) | func TestBindToExistPool(t *testing.T) {
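
The symbol index above shows ecache sizing its buckets with `maskOfNextPowOf2(cap uint16) uint32`: rounding a capacity up to a power of two yields a bit mask, so bucket selection can use the cheap `hash & mask` instead of `hash % cap`. A self-contained sketch of that bit-smearing technique follows; it is an independent re-derivation under the signature shown in the index, not the repository's exact code:

```go
package main

import "fmt"

// nextPowOf2Mask rounds cap up to the next power of two and returns that
// value minus one, i.e. a mask usable as `hash & mask` in place of the
// slower `hash % cap`.
func nextPowOf2Mask(cap uint16) uint32 {
	if cap > 0 && cap&(cap-1) == 0 { // already a power of two
		return uint32(cap) - 1
	}
	n := uint32(cap)
	// Smear the highest set bit into every lower position, so all bits
	// below the next power of two end up set.
	n |= n >> 1
	n |= n >> 2
	n |= n >> 4
	n |= n >> 8
	return n
}

func main() {
	for _, c := range []uint16{1, 3, 16, 100} {
		fmt.Printf("cap=%d mask=%d buckets=%d\n", c, nextPowOf2Mask(c), nextPowOf2Mask(c)+1)
	}
}
```

For example, cap=3 yields mask 3 (4 buckets) and cap=100 yields mask 127 (128 buckets); the mask-and-AND pattern only works because the bucket count is forced to a power of two.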

About this extraction

This page contains the full source code of the orca-zhang/ecache GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 18 files (100.3 KB), approximately 38.4k tokens, and a symbol index with 101 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
