[
  {
    "path": ".github/workflows/go.yml",
    "content": "# This workflow will build a golang project\n# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-go\n\nname: Go\n\non:\n  push:\n    branches: [ \"master\" ]\n  pull_request:\n    branches: [ \"master\" ]\n\njobs:\n\n  build:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v3\n\n    - name: Set up Go\n      uses: actions/setup-go@v3\n      with:\n        go-version: 1.19\n\n    - name: Build\n      run: go build -v ./...\n\n    - name: Test\n      run: go test -v ./...\n"
  },
  {
    "path": ".gitignore",
    "content": "vendor/\n"
  },
  {
    "path": ".travis.yml",
    "content": "language: go\n\ngo:\n  - 1.18\n\nenv:\n  - GO111MODULE=on\n\nscript:\n  - go test -v -race -coverprofile=coverage.txt -covermode=atomic\n\nafter_success:\n  - bash <(curl -s https://codecov.io/bash)\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2017 Nick Randall \n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "MIGRATE.md",
    "content": "## Upgrade from v1 to v2\nThe only difference between v1 and v2 is that we added use of [context](https://golang.org/pkg/context).\n\n```diff\n- loader.Load(key string) Thunk\n+ loader.Load(ctx context.Context, key string) Thunk\n- loader.LoadMany(keys []string) ThunkMany\n+ loader.LoadMany(ctx context.Context, keys []string) ThunkMany\n```\n\n```diff\n- type BatchFunc func([]string) []*Result\n+ type BatchFunc func(context.Context, []string) []*Result\n```\n\n## Upgrade from v2 to v3\n```diff\n// dataloader.Interface as added context.Context to methods\n- loader.Prime(key string, value interface{}) Interface\n+ loader.Prime(ctx context.Context, key string, value interface{}) Interface\n- loader.Clear(key string) Interface\n+ loader.Clear(ctx context.Context, key string) Interface\n```\n\n```diff\n// cache interface as added context.Context to methods\ntype Cache interface {\n-\tGet(string) (Thunk, bool)\n+\tGet(context.Context, string) (Thunk, bool)\n-\tSet(string, Thunk)\n+\tSet(context.Context, string, Thunk)\n-\tDelete(string) bool\n+\tDelete(context.Context, string) bool\n\tClear()\n}\n```\n\n## Upgrade from v3 to v4\n```diff\n// dataloader.Interface as now allows interace{} as key rather than string\n- loader.Load(context.Context, key string) Thunk\n+ loader.Load(ctx context.Context, key interface{}) Thunk\n- loader.LoadMany(context.Context, key []string) ThunkMany\n+ loader.LoadMany(ctx context.Context, keys []interface{}) ThunkMany\n- loader.Prime(context.Context, key string, value interface{}) Interface\n+ loader.Prime(ctx context.Context, key interface{}, value interface{}) Interface\n- loader.Clear(context.Context, key string) Interface\n+ loader.Clear(ctx context.Context, key interface{}) Interface\n```\n\n```diff\n// cache interface now allows interface{} as key instead of string\ntype Cache interface {\n-\tGet(context.Context, string) (Thunk, bool)\n+\tGet(context.Context, interface{}) (Thunk, bool)\n-\tSet(context.Context, string, Thunk)\n+\tSet(context.Context, interface{}, Thunk)\n-\tDelete(context.Context, string) bool\n+\tDelete(context.Context, interface{}) bool\n\tClear()\n}\n```\n\n## Upgrade from v4 to v5\n```diff\n// dataloader.Interface as now allows interace{} as key rather than string\n- loader.Load(context.Context, key interface{}) Thunk\n+ loader.Load(ctx context.Context, key Key) Thunk\n- loader.LoadMany(context.Context, key []interface{}) ThunkMany\n+ loader.LoadMany(ctx context.Context, keys Keys) ThunkMany\n- loader.Prime(context.Context, key interface{}, value interface{}) Interface\n+ loader.Prime(ctx context.Context, key Key, value interface{}) Interface\n- loader.Clear(context.Context, key interface{}) Interface\n+ loader.Clear(ctx context.Context, key Key) Interface\n```\n\n```diff\n// cache interface now allows interface{} as key instead of string\ntype Cache interface {\n-\tGet(context.Context, interface{}) (Thunk, bool)\n+\tGet(context.Context, Key) (Thunk, bool)\n-\tSet(context.Context, interface{}, Thunk)\n+\tSet(context.Context, Key, Thunk)\n-\tDelete(context.Context, interface{}) bool\n+\tDelete(context.Context, Key) bool\n\tClear()\n}\n```\n\n## Upgrade from v5 to v6\n\nWe add major version release because we switched to using Go Modules from dep,\nand drop build tags for older versions of Go (1.9).\n\nThe preferred import method includes the major version tag.\n\n```go\nimport \"github.com/graph-gophers/dataloader/v6\"\n```\n\n## Upgrade from v6 to v7\n\n[Generics](https://go.dev/doc/tutorial/generics) support has been 
added.\nWith this update, you can now write more type-safe code.\n\nUse the major version tag in the import path.\n\n```go\nimport \"github.com/graph-gophers/dataloader/v7\"\n```\n"
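\nAs an illustrative sketch of what the generics buy you (the `User` type and batch function below are hypothetical, not part of the library), a v7 loader is typed end to end:\n\n```go\ntype User struct{ ID int }\n\nbatchFn := func(ctx context.Context, keys []int) []*dataloader.Result[*User] {\n\tresults := make([]*dataloader.Result[*User], len(keys))\n\tfor i, key := range keys {\n\t\tresults[i] = &dataloader.Result[*User]{Data: &User{ID: key}}\n\t}\n\treturn results\n}\n\n// Load accepts an int key and its thunk returns (*User, error),\n// so no type assertions are needed on either side.\nloader := dataloader.NewBatchedLoader(batchFn)\nuser, err := loader.Load(context.TODO(), 42)()\n```\n"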
  },
  {
    "path": "README.md",
    "content": "# DataLoader\n[![GoDoc](https://godoc.org/gopkg.in/graph-gophers/dataloader.v7?status.svg)](https://pkg.go.dev/github.com/graph-gophers/dataloader/v7)\n[![Build Status](https://travis-ci.org/graph-gophers/dataloader.svg?branch=master)](https://travis-ci.org/graph-gophers/dataloader)\n\nThis is an implementation of [Facebook's DataLoader](https://github.com/facebook/dataloader) in Golang.\n\n## Install\n`go get -u github.com/graph-gophers/dataloader/v7`\n\n## Usage\n```go\n// setup batch function - the first Context passed to the Loader's Load\n// function will be provided when the batch function is called.\n// this function is registered with the Loader, and the key and value are fixed using generics.\nbatchFn := func(ctx context.Context, keys []int) []*dataloader.Result[*User] {\n  var results []*dataloader.Result[*User]\n  // do some async work to get data for specified keys\n  // append to this list resolved values\n  return results\n}\n\n// create Loader with an in-memory cache\nloader := dataloader.NewBatchedLoader(batchFn)\n\n/**\n * Use loader\n *\n * A thunk is a function returned from a function that is a\n * closure over a value (in this case an interface value and error).\n * When called, it will block until the value is resolved.\n *\n * loader.Load() may be called multiple times for a given batch window.\n * The first context passed to Load is the object that will be passed\n * to the batch function.\n */\nthunk := loader.Load(context.TODO(), 5)\nresult, err := thunk()\nif err != nil {\n  // handle data error\n}\n\nlog.Printf(\"value: %#v\", result)\n```\n\n### Don't need/want to use context?\nYou're welcome to install the v1 version of this library.\n\n## Cache\nThis implementation contains a very basic cache that is intended only to be used for short lived DataLoaders (i.e. DataLoaders that only exist for the life of an http request). You may use your own implementation if you want.\n\n> it also has a `NoCache` type that implements the cache interface but all methods are noop. If you do not wish to cache anything.\n\n## Examples\nThere are a few basic examples in the example folder.\n\n## See also\n- [TRACE](TRACE.md)\n- [MIGRATE](MIGRATE.md)\n"
  },
  {
    "path": "TRACE.md",
    "content": "# Adding a new trace backend.\n\nIf you want to add a new tracing backend all you need to do is implement the\n`Tracer` interface and pass it as an option to the dataloader on initialization.\n\nAs an example, this is how you could implement it to an OpenCensus backend.\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"strings\"\n\n\t\"github.com/graph-gophers/dataloader/v7\"\n\texp \"go.opencensus.io/examples/exporter\"\n\t\"go.opencensus.io/trace\"\n)\n\ntype User struct {\n\tID string\n}\n\n// OpenCensusTracer Tracer implements a tracer that can be used with the Open Tracing standard.\ntype OpenCensusTracer struct{}\n\n// TraceLoad will trace a call to dataloader.LoadMany with Open Tracing\nfunc (OpenCensusTracer) TraceLoad(ctx context.Context, key string) (context.Context, dataloader.TraceLoadFinishFunc[*User]) {\n\tcCtx, cSpan := trace.StartSpan(ctx, \"Dataloader: load\")\n\tcSpan.AddAttributes(\n\t\ttrace.StringAttribute(\"dataloader.key\", key),\n\t)\n\treturn cCtx, func(thunk dataloader.Thunk[*User]) {\n\t\t// TODO: is there anything we should do with the results?\n\t\tcSpan.End()\n\t}\n}\n\n// TraceLoadMany will trace a call to dataloader.LoadMany with Open Tracing\nfunc (OpenCensusTracer) TraceLoadMany(ctx context.Context, keys []string) (context.Context, dataloader.TraceLoadManyFinishFunc[*User]) {\n\tcCtx, cSpan := trace.StartSpan(ctx, \"Dataloader: loadmany\")\n\tcSpan.AddAttributes(\n\t\ttrace.StringAttribute(\"dataloader.keys\", strings.Join(keys, \",\")),\n\t)\n\treturn cCtx, func(thunk dataloader.ThunkMany[*User]) {\n\t\t// TODO: is there anything we should do with the results?\n\t\tcSpan.End()\n\t}\n}\n\n// TraceBatch will trace a call to dataloader.LoadMany with Open Tracing\nfunc (OpenCensusTracer) TraceBatch(ctx context.Context, keys []string) (context.Context, dataloader.TraceBatchFinishFunc[*User]) {\n\tcCtx, cSpan := trace.StartSpan(ctx, \"Dataloader: batch\")\n\tcSpan.AddAttributes(\n\t\ttrace.StringAttribute(\"dataloader.keys\", strings.Join(keys, \",\")),\n\t)\n\treturn cCtx, func(results []*dataloader.Result[*User]) {\n\t\t// TODO: is there anything we should do with the results?\n\t\tcSpan.End()\n\t}\n}\n\nfunc batchFunc(ctx context.Context, keys []string) []*dataloader.Result[*User] {\n\t// ...loader logic goes here\n}\n\nfunc main() {\n\t//initialize an example exporter that just logs to the console\n\ttrace.ApplyConfig(trace.Config{\n\t\tDefaultSampler: trace.AlwaysSample(),\n\t})\n\ttrace.RegisterExporter(&exp.PrintExporter{})\n\t// initialize the dataloader with your new tracer backend\n\tloader := dataloader.NewBatchedLoader(batchFunc, dataloader.WithTracer[string, *User](OpenCensusTracer{}))\n\t// initialize a context since it's not receiving one from anywhere else.\n\tctx, span := trace.StartSpan(context.TODO(), \"Span Name\")\n\tdefer span.End()\n\t// request from the dataloader as usual\n\tvalue, err := loader.Load(ctx, SomeID)()\n\t// ...\n}\n```\n\nDon't forget to initialize the exporters of your choice and register it with `trace.RegisterExporter(&exporterInstance)`.\n"
  },
  {
    "path": "cache.go",
    "content": "package dataloader\n\nimport \"context\"\n\n// The Cache interface. If a custom cache is provided, it must implement this interface.\ntype Cache[K comparable, V any] interface {\n\tGet(context.Context, K) (Thunk[V], bool)\n\tSet(context.Context, K, Thunk[V])\n\tDelete(context.Context, K) bool\n\tClear()\n}\n\n// NoCache implements Cache interface where all methods are noops.\n// This is useful for when you don't want to cache items but still\n// want to use a data loader\ntype NoCache[K comparable, V any] struct{}\n\n// Get is a NOOP\nfunc (c *NoCache[K, V]) Get(context.Context, K) (Thunk[V], bool) { return nil, false }\n\n// Set is a NOOP\nfunc (c *NoCache[K, V]) Set(context.Context, K, Thunk[V]) { return }\n\n// Delete is a NOOP\nfunc (c *NoCache[K, V]) Delete(context.Context, K) bool { return false }\n\n// Clear is a NOOP\nfunc (c *NoCache[K, V]) Clear() { return }\n"
  },
  {
    "path": "codecov.yml",
    "content": "codecov:\n  notify:\n    require_ci_to_pass: true\ncomment:\n  behavior: default\n  layout: header, diff\n  require_changes: false\ncoverage:\n  precision: 2\n  range:\n  - 70.0\n  - 100.0\n  round: down\n  status:\n    changes: false\n    patch: true\n    project: true\nparsers:\n  gcov:\n    branch_detection:\n      conditional: true\n      loop: true\n      macro: false\n      method: false\n  javascript:\n    enable_partials: false\n"
  },
  {
    "path": "dataloader.go",
    "content": "// Package dataloader is an implementation of facebook's dataloader in go.\n// See https://github.com/facebook/dataloader for more information\npackage dataloader\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"log\"\n\t\"runtime\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"time\"\n)\n\n// Interface is a `DataLoader` Interface which defines a public API for loading data from a particular\n// data back-end with unique keys such as the `id` column of a SQL table or\n// document name in a MongoDB database, given a batch loading function.\n//\n// Each `DataLoader` instance should contain a unique memoized cache. Use caution when\n// used in long-lived applications or those which serve many users with\n// different access permissions and consider creating a new instance per\n// web request.\ntype Interface[K comparable, V any] interface {\n\tLoad(context.Context, K) Thunk[V]\n\tLoadMany(context.Context, []K) ThunkMany[V]\n\tClear(context.Context, K) Interface[K, V]\n\tClearAll() Interface[K, V]\n\tPrime(ctx context.Context, key K, value V) Interface[K, V]\n\tFlush()\n}\n\nvar ErrNoResultProvided = errors.New(\"no result provided\")\n\n// BatchFunc is a function, which when given a slice of keys (string), returns a slice of `results`.\n// It's important that the length of the input keys matches the length of the output results.\n// Should the batch function return nil for a result, it will be treated as return an error\n// of `ErrNoResultProvided` for that key.\n//\n// The keys passed to this function are guaranteed to be unique\ntype BatchFunc[K comparable, V any] func(context.Context, []K) []*Result[V]\n\n// Result is the data structure that a BatchFunc returns.\n// It contains the resolved data, and any errors that may have occurred while fetching the data.\ntype Result[V any] struct {\n\tData  V\n\tError error\n}\n\n// ResultMany is used by the LoadMany method.\n// It contains a list of resolved data and a list of errors.\n// The lengths of the data list and error list will match, and elements at each index correspond to each other.\ntype ResultMany[V any] struct {\n\tData  []V\n\tError []error\n}\n\n// PanicErrorWrapper wraps the error interface.\n// This is used to check if the error is a panic error.\n// We should not cache panic errors.\ntype PanicErrorWrapper struct {\n\tpanicError error\n}\n\nfunc (p *PanicErrorWrapper) Error() string {\n\treturn p.panicError.Error()\n}\n\n// SkipCacheError wraps the error interface.\n// The cache should not store SkipCacheErrors.\ntype SkipCacheError struct {\n\terr error\n}\n\nfunc (s *SkipCacheError) Error() string {\n\treturn s.err.Error()\n}\n\nfunc (s *SkipCacheError) Unwrap() error {\n\treturn s.err\n}\n\nfunc NewSkipCacheError(err error) *SkipCacheError {\n\treturn &SkipCacheError{err: err}\n}\n\n// Loader implements the dataloader.Interface.\ntype Loader[K comparable, V any] struct {\n\t// the batch function to be used by this loader\n\tbatchFn BatchFunc[K, V]\n\n\t// the maximum batch size. Set to 0 if you want it to be unbounded.\n\tbatchCap int\n\n\t// the internal cache. This packages contains a basic cache implementation but any custom cache\n\t// implementation could be used as long as it implements the `Cache` interface.\n\tcacheLock sync.Mutex\n\tcache     Cache[K, V]\n\t// should we clear the cache on each batch?\n\t// this would allow batching but no long term caching\n\tclearCacheOnBatch bool\n\n\t// count of queued up items\n\tcount int\n\n\t// the maximum input queue size. 
Set to 0 if you want it to be unbounded.\n\tinputCap int\n\n\t// the amount of time to wait before triggering a batch\n\twait time.Duration\n\n\t// lock to protect the batching operations\n\tbatchLock sync.Mutex\n\n\t// current batcher\n\tcurBatcher *batcher[K, V]\n\n\t// used to close the sleeper of the current batcher\n\tendSleeper chan bool\n\n\t// used by tests to prevent logs\n\tsilent bool\n\n\t// can be set to trace calls to dataloader\n\ttracer Tracer[K, V]\n}\n\n// Thunk is a function that will block until the value (*Result) it contains is resolved.\n// After the value it contains is resolved, this function will return the result.\n// This function can be called many times, much like a Promise is other languages.\n// The value will only need to be resolved once so subsequent calls will return immediately.\ntype Thunk[V any] func() (V, error)\n\n// ThunkMany is much like the Thunk func type but it contains a list of results.\ntype ThunkMany[V any] func() ([]V, []error)\n\n// type used to on input channel\ntype batchRequest[K comparable, V any] struct {\n\tkey    K\n\tresult atomic.Pointer[Result[V]]\n\tdone   chan struct{}\n}\n\n// Option allows for configuration of Loader fields.\ntype Option[K comparable, V any] func(*Loader[K, V])\n\n// WithCache sets the BatchedLoader cache. Defaults to InMemoryCache if a Cache is not set.\nfunc WithCache[K comparable, V any](c Cache[K, V]) Option[K, V] {\n\treturn func(l *Loader[K, V]) {\n\t\tl.cache = c\n\t}\n}\n\n// WithBatchCapacity sets the batch capacity. Default is 0 (unbounded).\nfunc WithBatchCapacity[K comparable, V any](c int) Option[K, V] {\n\treturn func(l *Loader[K, V]) {\n\t\tl.batchCap = c\n\t}\n}\n\n// WithInputCapacity sets the input capacity. Default is 1000.\nfunc WithInputCapacity[K comparable, V any](c int) Option[K, V] {\n\treturn func(l *Loader[K, V]) {\n\t\tl.inputCap = c\n\t}\n}\n\n// WithWait sets the amount of time to wait before triggering a batch.\n// Default duration is 16 milliseconds.\nfunc WithWait[K comparable, V any](d time.Duration) Option[K, V] {\n\treturn func(l *Loader[K, V]) {\n\t\tl.wait = d\n\t}\n}\n\n// WithClearCacheOnBatch allows batching of items but no long term caching.\n// It accomplishes this by clearing the cache after each batch operation.\nfunc WithClearCacheOnBatch[K comparable, V any]() Option[K, V] {\n\treturn func(l *Loader[K, V]) {\n\t\tl.cacheLock.Lock()\n\t\tl.clearCacheOnBatch = true\n\t\tl.cacheLock.Unlock()\n\t}\n}\n\n// withSilentLogger turns of log messages. 
It's used by the tests\nfunc withSilentLogger[K comparable, V any]() Option[K, V] {\n\treturn func(l *Loader[K, V]) {\n\t\tl.silent = true\n\t}\n}\n\n// WithTracer allows tracing of calls to Load and LoadMany\nfunc WithTracer[K comparable, V any](tracer Tracer[K, V]) Option[K, V] {\n\treturn func(l *Loader[K, V]) {\n\t\tl.tracer = tracer\n\t}\n}\n\n// NewBatchedLoader constructs a new Loader with given options.\nfunc NewBatchedLoader[K comparable, V any](batchFn BatchFunc[K, V], opts ...Option[K, V]) *Loader[K, V] {\n\tloader := &Loader[K, V]{\n\t\tbatchFn:  batchFn,\n\t\tinputCap: 1000,\n\t\twait:     16 * time.Millisecond,\n\t}\n\n\t// Apply options\n\tfor _, apply := range opts {\n\t\tapply(loader)\n\t}\n\n\t// Set defaults\n\tif loader.cache == nil {\n\t\tloader.cache = NewCache[K, V]()\n\t}\n\n\tif loader.tracer == nil {\n\t\tloader.tracer = NoopTracer[K, V]{}\n\t}\n\n\treturn loader\n}\n\n// Load load/resolves the given key, returning a channel that will contain the value and error.\n// The first context passed to this function within a given batch window will be provided to\n// the registered BatchFunc.\nfunc (l *Loader[K, V]) Load(originalContext context.Context, key K) Thunk[V] {\n\tctx, finish := l.tracer.TraceLoad(originalContext, key)\n\treq := &batchRequest[K, V]{\n\t\tkey:  key,\n\t\tdone: make(chan struct{}),\n\t}\n\n\t// We need to lock both the batchLock and cacheLock because the batcher can\n\t// reset the cache when either the batchCap or the wait time is reached.\n\t//\n\t// When we would only lock the cacheLock while doing l.cache.Get and/or\n\t// l.cache.Set, it could be that the batcher resets the cache after those\n\t// operations have finished but before the new request (if any) is send to the\n\t// batcher.\n\t//\n\t// In that case it is no longer guaranteed that the keys passed to the BatchFunc\n\t// function are unique as the cache has been reset so if the same key is\n\t// requested again before the new batcher is started, the same key will be\n\t// send to the batcher again causing unexpected behavior in the BatchFunc.\n\tl.batchLock.Lock()\n\tl.cacheLock.Lock()\n\n\tif v, ok := l.cache.Get(ctx, key); ok {\n\t\tl.cacheLock.Unlock()\n\t\tl.batchLock.Unlock()\n\t\tdefer finish(v)\n\t\treturn v\n\t}\n\n\tthunk := func() (V, error) {\n\t\t<-req.done\n\t\tresult := req.result.Load()\n\t\tvar ev *PanicErrorWrapper\n\t\tvar es *SkipCacheError\n\t\tif result.Error != nil && (errors.As(result.Error, &ev) || errors.As(result.Error, &es)) {\n\t\t\tl.Clear(ctx, key)\n\t\t}\n\t\treturn result.Data, result.Error\n\t}\n\tdefer finish(thunk)\n\n\tl.cache.Set(ctx, key, thunk)\n\n\t// start the batch window if it hasn't already started.\n\tif l.curBatcher == nil {\n\t\tl.curBatcher = l.newBatcher(l.silent, l.tracer)\n\t\t// start the current batcher batch function\n\t\tgo l.curBatcher.batch(originalContext)\n\t\t// start a sleeper for the current batcher\n\t\tl.endSleeper = make(chan bool)\n\t\tgo l.sleeper(l.curBatcher, l.endSleeper)\n\t}\n\n\tl.curBatcher.input <- req\n\n\t// if we need to keep track of the count (max batch), then do so.\n\tif l.batchCap > 0 {\n\t\tl.count++\n\t\t// if we hit our limit, force the batch to start\n\t\tif l.count == l.batchCap {\n\t\t\t// end/flush the batcher synchronously here because another call to Load\n\t\t\t// may concurrently happen and needs to go to a new batcher.\n\t\t\tl.flush()\n\t\t}\n\t}\n\n\t// NOTE: It is intended that these are not unlocked with a `defer`. 
This is due to the `defer finish(thunk)` above.\n\t// There is a locking bug where, if you have a tracer that calls the thunk to read the results, the dataloader runs\n\t// into a deadlock scenario, as `finish` is called before these mutexes are free'd on the same goroutine.\n\tl.batchLock.Unlock()\n\tl.cacheLock.Unlock()\n\n\treturn thunk\n}\n\n// flush() is a helper that runs whatever batched items there are immediately.\n// it must be called by code protected by a l.batchLock.Lock()\nfunc (l *Loader[K, V]) flush() {\n\tl.curBatcher.end()\n\n\t// end the sleeper for the current batcher.\n\t// this is to stop the goroutine without waiting for the\n\t// sleeper timeout.\n\tclose(l.endSleeper)\n\tl.reset()\n}\n\n// Flush will load the items in the current batch immediately without waiting for the timer.\nfunc (l *Loader[K, V]) Flush() {\n\tl.batchLock.Lock()\n\tdefer l.batchLock.Unlock()\n\tif l.curBatcher == nil {\n\t\treturn\n\t}\n\tl.flush()\n}\n\n// LoadMany loads multiple keys, returning a thunk (type: ThunkMany) that will resolve the keys passed in.\nfunc (l *Loader[K, V]) LoadMany(originalContext context.Context, keys []K) ThunkMany[V] {\n\tctx, finish := l.tracer.TraceLoadMany(originalContext, keys)\n\n\tvar (\n\t\tlength = len(keys)\n\t\tdata   = make([]V, length)\n\t\terrors = make([]error, length)\n\t\tresult atomic.Pointer[ResultMany[V]]\n\t\twg     sync.WaitGroup\n\t\tdone   = make(chan struct{})\n\t)\n\n\tresolve := func(ctx context.Context, i int) {\n\t\tdefer wg.Done()\n\t\tthunk := l.Load(ctx, keys[i])\n\t\tresult, err := thunk()\n\t\tdata[i] = result\n\t\terrors[i] = err\n\t}\n\n\twg.Add(length)\n\tfor i := range keys {\n\t\tgo resolve(ctx, i)\n\t}\n\n\tgo func() {\n\t\tdefer close(done)\n\t\twg.Wait()\n\n\t\t// errs is nil unless there exists a non-nil error.\n\t\t// This prevents dataloader from returning a slice of all-nil errors.\n\t\tvar errs []error\n\t\tfor _, e := range errors {\n\t\t\tif e != nil {\n\t\t\t\terrs = errors\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\n\t\tresult.Store(&ResultMany[V]{Data: data, Error: errs})\n\t}()\n\n\tthunkMany := func() ([]V, []error) {\n\t\t<-done\n\t\tr := result.Load()\n\t\treturn r.Data, r.Error\n\t}\n\n\tdefer finish(thunkMany)\n\treturn thunkMany\n}\n\n// Clear clears the value at `key` from the cache, it it exists. Returns self for method chaining\nfunc (l *Loader[K, V]) Clear(ctx context.Context, key K) Interface[K, V] {\n\tl.cacheLock.Lock()\n\tl.cache.Delete(ctx, key)\n\tl.cacheLock.Unlock()\n\treturn l\n}\n\n// ClearAll clears the entire cache. To be used when some event results in unknown invalidations.\n// Returns self for method chaining.\nfunc (l *Loader[K, V]) ClearAll() Interface[K, V] {\n\tl.cacheLock.Lock()\n\tl.cache.Clear()\n\tl.cacheLock.Unlock()\n\treturn l\n}\n\n// Prime adds the provided key and value to the cache. 
If the key already exists, no change is made.\n// Returns self for method chaining\nfunc (l *Loader[K, V]) Prime(ctx context.Context, key K, value V) Interface[K, V] {\n\tif _, ok := l.cache.Get(ctx, key); !ok {\n\t\tthunk := func() (V, error) {\n\t\t\treturn value, nil\n\t\t}\n\t\tl.cache.Set(ctx, key, thunk)\n\t}\n\treturn l\n}\n\nfunc (l *Loader[K, V]) reset() {\n\tl.count = 0\n\tl.curBatcher = nil\n\n\tif l.clearCacheOnBatch {\n\t\tl.cache.Clear()\n\t}\n}\n\ntype batcher[K comparable, V any] struct {\n\tinput    chan *batchRequest[K, V]\n\tbatchFn  BatchFunc[K, V]\n\tfinished bool\n\tsilent   bool\n\ttracer   Tracer[K, V]\n}\n\n// newBatcher returns a batcher for the current requests\n// all the batcher methods must be protected by a global batchLock\nfunc (l *Loader[K, V]) newBatcher(silent bool, tracer Tracer[K, V]) *batcher[K, V] {\n\treturn &batcher[K, V]{\n\t\tinput:   make(chan *batchRequest[K, V], l.inputCap),\n\t\tbatchFn: l.batchFn,\n\t\tsilent:  silent,\n\t\ttracer:  tracer,\n\t}\n}\n\n// stop receiving input and process batch function\nfunc (b *batcher[K, V]) end() {\n\tif !b.finished {\n\t\tclose(b.input)\n\t\tb.finished = true\n\t}\n}\n\n// execute the batch of all items in queue\nfunc (b *batcher[K, V]) batch(originalContext context.Context) {\n\tvar (\n\t\tkeys     = make([]K, 0)\n\t\treqs     = make([]*batchRequest[K, V], 0)\n\t\titems    = make([]*Result[V], 0)\n\t\tpanicErr interface{}\n\t)\n\n\tfor item := range b.input {\n\t\tkeys = append(keys, item.key)\n\t\treqs = append(reqs, item)\n\t}\n\n\tctx, finish := b.tracer.TraceBatch(originalContext, keys)\n\tdefer finish(items)\n\n\tfunc() {\n\t\tdefer func() {\n\t\t\tif r := recover(); r != nil {\n\t\t\t\tpanicErr = r\n\t\t\t\tif b.silent {\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tconst size = 64 << 10\n\t\t\t\tbuf := make([]byte, size)\n\t\t\t\tbuf = buf[:runtime.Stack(buf, false)]\n\t\t\t\tlog.Printf(\"Dataloader: Panic received in batch function: %v\\n%s\", panicErr, buf)\n\t\t\t}\n\t\t}()\n\t\titems = b.batchFn(ctx, keys)\n\t}()\n\n\tif panicErr != nil {\n\t\tfor _, req := range reqs {\n\t\t\treq.result.Store(&Result[V]{Error: &PanicErrorWrapper{panicError: fmt.Errorf(\"Panic received in batch function: %v\", panicErr)}})\n\t\t\tclose(req.done)\n\t\t}\n\t\treturn\n\t}\n\n\tif len(items) != len(keys) {\n\t\terr := &Result[V]{Error: fmt.Errorf(`\n\t\t\tThe batch function supplied did not return an array of responses\n\t\t\tthe same length as the array of keys.\n\n\t\t\tKeys:\n\t\t\t%v\n\n\t\t\tValues:\n\t\t\t%v\n\t\t`, keys, items)}\n\n\t\tfor _, req := range reqs {\n\t\t\treq.result.Store(err)\n\t\t\tclose(req.done)\n\t\t}\n\n\t\treturn\n\t}\n\n\tvar notSetResult *Result[V] // don't allocate unless we need it\n\tfor i, req := range reqs {\n\t\tif items[i] == nil {\n\t\t\tif notSetResult == nil {\n\t\t\t\tnotSetResult = &Result[V]{Error: ErrNoResultProvided}\n\t\t\t}\n\t\t\treq.result.Store(notSetResult)\n\t\t} else {\n\t\t\treq.result.Store(items[i])\n\t\t}\n\t\tclose(req.done)\n\t}\n}\n\n// wait the appropriate amount of time for the provided batcher\nfunc (l *Loader[K, V]) sleeper(b *batcher[K, V], close chan bool) {\n\tselect {\n\t// used by batch to close early. 
usually triggered by max batch size\n\tcase <-close:\n\t\treturn\n\t// otherwise, wait out the batch window before ending the batcher\n\tcase <-time.After(l.wait):\n\t}\n\n\t// reset\n\t// this is protected by the batchLock to avoid closing the batcher input\n\t// channel while Load is inserting a request\n\tl.batchLock.Lock()\n\tb.end()\n\n\t// We can also end up here if the batcher has already been closed and a\n\t// new one has been created. So reset the loader state only if the batcher\n\t// is the current one\n\tif l.curBatcher == b {\n\t\tl.reset()\n\t}\n\tl.batchLock.Unlock()\n}\n"
  },
  {
    "path": "dataloader_test.go",
    "content": "package dataloader\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"log\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n)\n\n/*\nTests\n*/\nfunc TestLoader(t *testing.T) {\n\tt.Run(\"test Load method\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, _ := IDLoader[string](0)\n\t\tctx := context.Background()\n\t\tfuture := identityLoader.Load(ctx, \"1\")\n\t\tvalue, err := future()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\tif value != \"1\" {\n\t\t\tt.Error(\"load didn't return the right value\")\n\t\t}\n\t})\n\n\tt.Run(\"test thunk does not contain race conditions\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, _ := IDLoader[string](0)\n\t\tctx := context.Background()\n\t\tfuture := identityLoader.Load(ctx, \"1\")\n\t\tgo future()\n\t\tgo future()\n\t})\n\n\tt.Run(\"test Load Method Panic Safety\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tdefer func() {\n\t\t\tr := recover()\n\t\t\tif r != nil {\n\t\t\t\tt.Error(\"Panic Loader's panic should have been handled'\")\n\t\t\t}\n\t\t}()\n\t\tpanicLoader, _ := PanicLoader[string](0)\n\t\tctx := context.Background()\n\t\tfuture := panicLoader.Load(ctx, \"1\")\n\t\t_, err := future()\n\t\tif err == nil || err.Error() != \"Panic received in batch function: Programming error\" {\n\t\t\tt.Error(\"Panic was not propagated as an error.\")\n\t\t}\n\t})\n\n\tt.Run(\"test Load Method cache error\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\terrorCacheLoader, _ := ErrorCacheLoader[string](0)\n\t\tctx := context.Background()\n\t\tfutures := []Thunk[string]{}\n\t\tfor i := 0; i < 2; i++ {\n\t\t\tfutures = append(futures, errorCacheLoader.Load(ctx, strconv.Itoa(i)))\n\t\t}\n\n\t\tfor _, f := range futures {\n\t\t\t_, err := f()\n\t\t\tif err == nil {\n\t\t\t\tt.Error(\"Error was not propagated\")\n\t\t\t}\n\t\t}\n\t\tnextFuture := errorCacheLoader.Load(ctx, \"1\")\n\t\t_, err := nextFuture()\n\n\t\t// Normal errors should be cached.\n\t\tif err == nil {\n\t\t\tt.Error(\"Error from batch function was not cached\")\n\t\t}\n\t})\n\n\tt.Run(\"test Load Method not caching results with errors of type SkipCacheError\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tskipCacheLoader, loadCalls := SkipCacheErrorLoader(3, \"1\")\n\t\tctx := context.Background()\n\t\tfutures1 := skipCacheLoader.LoadMany(ctx, []string{\"1\", \"2\", \"3\"})\n\t\t_, errs1 := futures1()\n\t\tvar errCount int = 0\n\t\tvar nilCount int = 0\n\t\tfor _, err := range errs1 {\n\t\t\tif err == nil {\n\t\t\t\tnilCount++\n\t\t\t} else {\n\t\t\t\terrCount++\n\t\t\t}\n\t\t}\n\t\tif errCount != 1 {\n\t\t\tt.Error(\"Expected an error on only key \\\"1\\\"\")\n\t\t}\n\n\t\tif nilCount != 2 {\n\t\t\tt.Error(\"Expected the other errors to be nil\")\n\t\t}\n\n\t\tfutures2 := skipCacheLoader.LoadMany(ctx, []string{\"2\", \"3\", \"1\"})\n\t\t_, errs2 := futures2()\n\t\t// There should be no errors in the second batch, as the only key that was not cached\n\t\t// this time around will not throw an error\n\t\tif errs2 != nil {\n\t\t\tt.Error(\"Expected LoadMany() to return nil error slice when no errors occurred\")\n\t\t}\n\n\t\tcalls := (*loadCalls)[1]\n\t\texpected := []string{\"1\"}\n\n\t\tif !reflect.DeepEqual(calls, expected) {\n\t\t\tt.Errorf(\"Expected load calls %#v, got %#v\", expected, calls)\n\t\t}\n\t})\n\n\tt.Run(\"test Load Method Panic Safety in multiple keys\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tdefer func() {\n\t\t\tr := recover()\n\t\t\tif r != nil {\n\t\t\t\tt.Error(\"Panic Loader's panic 
should have been handled'\")\n\t\t\t}\n\t\t}()\n\t\tpanicLoader, _ := PanicCacheLoader[string](0)\n\t\tfutures := []Thunk[string]{}\n\t\tctx := context.Background()\n\t\tfor i := 0; i < 3; i++ {\n\t\t\tfutures = append(futures, panicLoader.Load(ctx, strconv.Itoa(i)))\n\t\t}\n\t\tfor _, f := range futures {\n\t\t\t_, err := f()\n\t\t\tif err == nil || err.Error() != \"Panic received in batch function: Programming error\" {\n\t\t\t\tt.Error(\"Panic was not propagated as an error.\")\n\t\t\t}\n\t\t}\n\n\t\tfutures = []Thunk[string]{}\n\t\tfor i := 0; i < 3; i++ {\n\t\t\tfutures = append(futures, panicLoader.Load(ctx, strconv.Itoa(1)))\n\t\t}\n\n\t\tfor _, f := range futures {\n\t\t\t_, err := f()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Panic error from batch function was cached\")\n\t\t\t}\n\t\t}\n\t})\n\n\tt.Run(\"test Load method does not create a deadlock mutex condition\", func(t *testing.T) {\n\t\tt.Parallel()\n\n\t\tloader, _ := IDLoader(1, WithTracer[string, string](&TracerWithThunkReading[string, string]{}))\n\n\t\tvalue, err := loader.Load(context.Background(), \"1\")()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\tif value != \"1\" {\n\t\t\tt.Error(\"load didn't return the right value\")\n\t\t}\n\n\t\t// By this function completing, we confirm that there is not a deadlock condition, else the test will hang\n\t})\n\n\tt.Run(\"test LoadMany returns errors\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\terrorLoader, _ := ErrorLoader[string](0)\n\t\tctx := context.Background()\n\t\tfuture := errorLoader.LoadMany(ctx, []string{\"1\", \"2\", \"3\"})\n\t\t_, err := future()\n\t\tif len(err) != 3 {\n\t\t\tt.Error(\"LoadMany didn't return right number of errors\")\n\t\t}\n\t})\n\n\tt.Run(\"test LoadMany returns len(errors) == len(keys)\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tloader, _ := OneErrorLoader[string](3)\n\t\tctx := context.Background()\n\t\tfuture := loader.LoadMany(ctx, []string{\"1\", \"2\", \"3\"})\n\t\t_, errs := future()\n\t\tif len(errs) != 3 {\n\t\t\tt.Errorf(\"LoadMany didn't return right number of errors (should match size of input)\")\n\t\t}\n\n\t\tvar errCount int = 0\n\t\tvar nilCount int = 0\n\t\tfor _, err := range errs {\n\t\t\tif err == nil {\n\t\t\t\tnilCount++\n\t\t\t} else {\n\t\t\t\terrCount++\n\t\t\t}\n\t\t}\n\t\tif errCount != 1 {\n\t\t\tt.Error(\"Expected an error on only one of the items loaded\")\n\t\t}\n\n\t\tif nilCount != 2 {\n\t\t\tt.Error(\"Expected second and third errors to be nil\")\n\t\t}\n\t})\n\n\tt.Run(\"test LoadMany returns nil []error when no errors occurred\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tloader, _ := IDLoader[string](0)\n\t\tctx := context.Background()\n\t\t_, err := loader.LoadMany(ctx, []string{\"1\", \"2\", \"3\"})()\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Expected LoadMany() to return nil error slice when no errors occurred\")\n\t\t}\n\t})\n\n\tt.Run(\"test LoadMany method does not create a deadlock mutex condition\", func(t *testing.T) {\n\t\tt.Parallel()\n\n\t\tloader, _ := IDLoader(1, WithTracer[string, string](&TracerWithThunkReading[string, string]{}))\n\n\t\tvalues, errs := loader.LoadMany(context.Background(), []string{\"1\", \"2\", \"3\"})()\n\t\tfor _, err := range errs {\n\t\t\tif err != nil {\n\t\t\t\tt.Error(err.Error())\n\t\t\t}\n\t\t}\n\t\tfor _, value := range values {\n\t\t\tif value == \"\" {\n\t\t\t\tt.Error(\"unexpected empty value in LoadMany returned\")\n\t\t\t}\n\t\t}\n\n\t\t// By this function completing, we confirm that there is not a deadlock condition, else the test will 
hang\n\t})\n\n\tt.Run(\"test thunkmany does not contain race conditions\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, _ := IDLoader[string](0)\n\t\tctx := context.Background()\n\t\tfuture := identityLoader.LoadMany(ctx, []string{\"1\", \"2\", \"3\"})\n\t\tgo future()\n\t\tgo future()\n\t})\n\n\tt.Run(\"test Load Many Method Panic Safety\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tdefer func() {\n\t\t\tr := recover()\n\t\t\tif r != nil {\n\t\t\t\tt.Error(\"Panic Loader's panic should have been handled'\")\n\t\t\t}\n\t\t}()\n\t\tpanicLoader, _ := PanicCacheLoader[string](0)\n\t\tctx := context.Background()\n\t\tfuture := panicLoader.LoadMany(ctx, []string{\"1\", \"2\"})\n\t\t_, errs := future()\n\t\tif len(errs) < 2 || errs[0].Error() != \"Panic received in batch function: Programming error\" {\n\t\t\tt.Error(\"Panic was not propagated as an error.\")\n\t\t}\n\n\t\tfuture = panicLoader.LoadMany(ctx, []string{\"1\"})\n\t\t_, errs = future()\n\n\t\tif len(errs) > 0 {\n\t\t\tt.Error(\"Panic error from batch function was cached\")\n\t\t}\n\n\t})\n\n\tt.Run(\"test LoadMany method\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, _ := IDLoader[string](0)\n\t\tctx := context.Background()\n\t\tfuture := identityLoader.LoadMany(ctx, []string{\"1\", \"2\", \"3\"})\n\t\tresults, _ := future()\n\t\tif results[0] != \"1\" || results[1] != \"2\" || results[2] != \"3\" {\n\t\t\tt.Error(\"loadmany didn't return the right value\")\n\t\t}\n\t})\n\n\tt.Run(\"batches many requests\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, loadCalls := IDLoader[string](0)\n\t\tctx := context.Background()\n\t\tfuture1 := identityLoader.Load(ctx, \"1\")\n\t\tfuture2 := identityLoader.Load(ctx, \"2\")\n\n\t\t_, err := future1()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future2()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\n\t\tcalls := *loadCalls\n\t\tinner := []string{\"1\", \"2\"}\n\t\texpected := [][]string{inner}\n\t\tif !reflect.DeepEqual(calls, expected) {\n\t\t\tt.Errorf(\"did not call batchFn in right order. 
Expected %#v, got %#v\", expected, calls)\n\t\t}\n\t})\n\n\tt.Run(\"number of results matches number of keys\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tfaultyLoader, _ := FaultyLoader[string]()\n\t\tctx := context.Background()\n\n\t\tn := 10\n\t\treqs := []Thunk[string]{}\n\t\tvar keys []string\n\t\tfor i := 0; i < n; i++ {\n\t\t\tkey := strconv.Itoa(i)\n\t\t\treqs = append(reqs, faultyLoader.Load(ctx, key))\n\t\t\tkeys = append(keys, key)\n\t\t}\n\n\t\tfor _, future := range reqs {\n\t\t\t_, err := future()\n\t\t\tif err == nil {\n\t\t\t\tt.Error(\"if number of results doesn't match keys, all keys should contain error\")\n\t\t\t}\n\t\t}\n\n\t\t// TODO: expect to get some kind of warning\n\t})\n\n\tt.Run(\"responds to max batch size\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, loadCalls := IDLoader[string](2)\n\t\tctx := context.Background()\n\t\tfuture1 := identityLoader.Load(ctx, \"1\")\n\t\tfuture2 := identityLoader.Load(ctx, \"2\")\n\t\tfuture3 := identityLoader.Load(ctx, \"3\")\n\n\t\t_, err := future1()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future2()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future3()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\n\t\tcalls := *loadCalls\n\t\tinner1 := []string{\"1\", \"2\"}\n\t\tinner2 := []string{\"3\"}\n\t\texpected := [][]string{inner1, inner2}\n\t\tif !reflect.DeepEqual(calls, expected) {\n\t\t\tt.Errorf(\"did not respect max batch size. Expected %#v, got %#v\", expected, calls)\n\t\t}\n\t})\n\n\tt.Run(\"caches repeated requests\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, loadCalls := IDLoader[string](0)\n\t\tctx := context.Background()\n\t\tstart := time.Now()\n\t\tfuture1 := identityLoader.Load(ctx, \"1\")\n\t\tfuture2 := identityLoader.Load(ctx, \"1\")\n\n\t\t_, err := future1()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future2()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\n\t\t// also check that it took the full timeout to return\n\t\tvar duration = time.Since(start)\n\t\tif duration < 16*time.Millisecond {\n\t\t\tt.Errorf(\"took %v when expected it to take more than 16 ms because of wait\", duration)\n\t\t}\n\n\t\tcalls := *loadCalls\n\t\tinner := []string{\"1\"}\n\t\texpected := [][]string{inner}\n\t\tif !reflect.DeepEqual(calls, expected) {\n\t\t\tt.Errorf(\"did not respect max batch size. Expected %#v, got %#v\", expected, calls)\n\t\t}\n\t})\n\n\tt.Run(\"doesn't wait for timeout if Flush() is called\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, loadCalls := IDLoader[string](0)\n\t\tctx := context.Background()\n\t\tstart := time.Now()\n\t\tfuture1 := identityLoader.Load(ctx, \"1\")\n\t\tfuture2 := identityLoader.Load(ctx, \"2\")\n\n\t\t// trigger them to be fetched immediately vs waiting for the 16 ms timer\n\t\tidentityLoader.Flush()\n\n\t\t_, err := future1()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future2()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\n\t\tvar duration = time.Since(start)\n\t\tif duration > 2*time.Millisecond {\n\t\t\tt.Errorf(\"took %v when expected it to take less than 2 ms b/c we called Flush()\", duration)\n\t\t}\n\n\t\tcalls := *loadCalls\n\t\tinner := []string{\"1\", \"2\"}\n\t\texpected := [][]string{inner}\n\t\tif !reflect.DeepEqual(calls, expected) {\n\t\t\tt.Errorf(\"did not respect max batch size. 
Expected %#v, got %#v\", expected, calls)\n\t\t}\n\t})\n\n\tt.Run(\"Nothing for Flush() to do on empty loader with current batch\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, _ := IDLoader[string](0)\n\t\tidentityLoader.Flush()\n\t})\n\n\tt.Run(\"allows primed cache\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, loadCalls := IDLoader[string](0)\n\t\tctx := context.Background()\n\t\tidentityLoader.Prime(ctx, \"A\", \"Cached\")\n\t\tfuture1 := identityLoader.Load(ctx, \"1\")\n\t\tfuture2 := identityLoader.Load(ctx, \"A\")\n\n\t\t_, err := future1()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\tvalue, err := future2()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\n\t\tcalls := *loadCalls\n\t\tinner := []string{\"1\"}\n\t\texpected := [][]string{inner}\n\t\tif !reflect.DeepEqual(calls, expected) {\n\t\t\tt.Errorf(\"did not respect max batch size. Expected %#v, got %#v\", expected, calls)\n\t\t}\n\n\t\tif value != \"Cached\" {\n\t\t\tt.Errorf(\"did not use primed cache value. Expected '%#v', got '%#v'\", \"Cached\", value)\n\t\t}\n\t})\n\n\tt.Run(\"allows clear value in cache\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, loadCalls := IDLoader[string](0)\n\t\tctx := context.Background()\n\t\tidentityLoader.Prime(ctx, \"A\", \"Cached\")\n\t\tidentityLoader.Prime(ctx, \"B\", \"B\")\n\t\tfuture1 := identityLoader.Load(ctx, \"1\")\n\t\tfuture2 := identityLoader.Clear(ctx, \"A\").Load(ctx, \"A\")\n\t\tfuture3 := identityLoader.Load(ctx, \"B\")\n\n\t\t_, err := future1()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\tvalue, err := future2()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future3()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\n\t\tcalls := *loadCalls\n\t\tinner := []string{\"1\", \"A\"}\n\t\texpected := [][]string{inner}\n\t\tif !reflect.DeepEqual(calls, expected) {\n\t\t\tt.Errorf(\"did not respect max batch size. Expected %#v, got %#v\", expected, calls)\n\t\t}\n\n\t\tif value != \"A\" {\n\t\t\tt.Errorf(\"did not use primed cache value. Expected '%#v', got '%#v'\", \"A\", value)\n\t\t}\n\t})\n\n\tt.Run(\"clears cache on batch with WithClearCacheOnBatch\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tbatchOnlyLoader, loadCalls := BatchOnlyLoader[string](0)\n\t\tctx := context.Background()\n\t\tfuture1 := batchOnlyLoader.Load(ctx, \"1\")\n\t\tfuture2 := batchOnlyLoader.Load(ctx, \"1\")\n\n\t\t_, err := future1()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future2()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\n\t\tcalls := *loadCalls\n\t\tinner := []string{\"1\"}\n\t\texpected := [][]string{inner}\n\t\tif !reflect.DeepEqual(calls, expected) {\n\t\t\tt.Errorf(\"did not batch queries. Expected %#v, got %#v\", expected, calls)\n\t\t}\n\n\t\tif _, found := batchOnlyLoader.cache.Get(ctx, \"1\"); found {\n\t\t\tt.Errorf(\"did not clear cache after batch. 
Expected %#v, got %#v\", false, found)\n\t\t}\n\t})\n\n\tt.Run(\"allows clearAll values in cache\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, loadCalls := IDLoader[string](0)\n\t\tctx := context.Background()\n\t\tidentityLoader.Prime(ctx, \"A\", \"Cached\")\n\t\tidentityLoader.Prime(ctx, \"B\", \"B\")\n\n\t\tidentityLoader.ClearAll()\n\n\t\tfuture1 := identityLoader.Load(ctx, \"1\")\n\t\tfuture2 := identityLoader.Load(ctx, \"A\")\n\t\tfuture3 := identityLoader.Load(ctx, \"B\")\n\n\t\t_, err := future1()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future2()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future3()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\n\t\tcalls := *loadCalls\n\t\tinner := []string{\"1\", \"A\", \"B\"}\n\t\texpected := [][]string{inner}\n\t\tif !reflect.DeepEqual(calls, expected) {\n\t\t\tt.Errorf(\"did not respect max batch size. Expected %#v, got %#v\", expected, calls)\n\t\t}\n\t})\n\n\tt.Run(\"all methods on NoCache are Noops\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, loadCalls := NoCacheLoader[string](0)\n\t\tctx := context.Background()\n\t\tidentityLoader.Prime(ctx, \"A\", \"Cached\")\n\t\tidentityLoader.Prime(ctx, \"B\", \"B\")\n\n\t\tidentityLoader.ClearAll()\n\n\t\tfuture1 := identityLoader.Clear(ctx, \"1\").Load(ctx, \"1\")\n\t\tfuture2 := identityLoader.Load(ctx, \"A\")\n\t\tfuture3 := identityLoader.Load(ctx, \"B\")\n\n\t\t_, err := future1()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future2()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future3()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\n\t\tcalls := *loadCalls\n\t\tinner := []string{\"1\", \"A\", \"B\"}\n\t\texpected := [][]string{inner}\n\t\tif !reflect.DeepEqual(calls, expected) {\n\t\t\tt.Errorf(\"did not respect max batch size. Expected %#v, got %#v\", expected, calls)\n\t\t}\n\t})\n\n\tt.Run(\"no cache does not cache anything\", func(t *testing.T) {\n\t\tt.Parallel()\n\t\tidentityLoader, loadCalls := NoCacheLoader[string](0)\n\t\tctx := context.Background()\n\t\tidentityLoader.Prime(ctx, \"A\", \"Cached\")\n\t\tidentityLoader.Prime(ctx, \"B\", \"B\")\n\n\t\tfuture1 := identityLoader.Load(ctx, \"1\")\n\t\tfuture2 := identityLoader.Load(ctx, \"A\")\n\t\tfuture3 := identityLoader.Load(ctx, \"B\")\n\n\t\t_, err := future1()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future2()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\t\t_, err = future3()\n\t\tif err != nil {\n\t\t\tt.Error(err.Error())\n\t\t}\n\n\t\tcalls := *loadCalls\n\t\tinner := []string{\"1\", \"A\", \"B\"}\n\t\texpected := [][]string{inner}\n\t\tif !reflect.DeepEqual(calls, expected) {\n\t\t\tt.Errorf(\"did not respect max batch size. 
Expected %#v, got %#v\", expected, calls)\n\t\t}\n\t})\n\n}\n\n// test helpers\nfunc IDLoader[K comparable](max int, options ...Option[K, K]) (*Loader[K, K], *[][]K) {\n\tvar mu sync.Mutex\n\tvar loadCalls [][]K\n\tidentityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {\n\t\tvar results []*Result[K]\n\t\tmu.Lock()\n\t\tloadCalls = append(loadCalls, keys)\n\t\tmu.Unlock()\n\t\tfor _, key := range keys {\n\t\t\tresults = append(results, &Result[K]{key, nil})\n\t\t}\n\t\treturn results\n\t}, append([]Option[K, K]{WithBatchCapacity[K, K](max)}, options...)...)\n\treturn identityLoader, &loadCalls\n}\nfunc BatchOnlyLoader[K comparable](max int) (*Loader[K, K], *[][]K) {\n\tvar mu sync.Mutex\n\tvar loadCalls [][]K\n\tidentityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {\n\t\tvar results []*Result[K]\n\t\tmu.Lock()\n\t\tloadCalls = append(loadCalls, keys)\n\t\tmu.Unlock()\n\t\tfor _, key := range keys {\n\t\t\tresults = append(results, &Result[K]{key, nil})\n\t\t}\n\t\treturn results\n\t}, WithBatchCapacity[K, K](max), WithClearCacheOnBatch[K, K]())\n\treturn identityLoader, &loadCalls\n}\nfunc ErrorLoader[K comparable](max int) (*Loader[K, K], *[][]K) {\n\tvar mu sync.Mutex\n\tvar loadCalls [][]K\n\tidentityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {\n\t\tvar results []*Result[K]\n\t\tmu.Lock()\n\t\tloadCalls = append(loadCalls, keys)\n\t\tmu.Unlock()\n\t\tfor _, key := range keys {\n\t\t\tresults = append(results, &Result[K]{key, fmt.Errorf(\"this is a test error\")})\n\t\t}\n\t\treturn results\n\t}, WithBatchCapacity[K, K](max))\n\treturn identityLoader, &loadCalls\n}\nfunc OneErrorLoader[K comparable](max int) (*Loader[K, K], *[][]K) {\n\tvar mu sync.Mutex\n\tvar loadCalls [][]K\n\tidentityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {\n\t\tresults := make([]*Result[K], max)\n\t\tmu.Lock()\n\t\tloadCalls = append(loadCalls, keys)\n\t\tmu.Unlock()\n\t\tfor i := range keys {\n\t\t\tvar err error\n\t\t\tif i == 0 {\n\t\t\t\terr = errors.New(\"always error on the first key\")\n\t\t\t}\n\t\t\tresults[i] = &Result[K]{keys[i], err}\n\t\t}\n\t\treturn results\n\t}, WithBatchCapacity[K, K](max))\n\treturn identityLoader, &loadCalls\n}\nfunc PanicLoader[K comparable](max int) (*Loader[K, K], *[][]K) {\n\tvar loadCalls [][]K\n\tpanicLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {\n\t\tpanic(\"Programming error\")\n\t}, WithBatchCapacity[K, K](max), withSilentLogger[K, K]())\n\treturn panicLoader, &loadCalls\n}\n\nfunc PanicCacheLoader[K comparable](max int) (*Loader[K, K], *[][]K) {\n\tvar loadCalls [][]K\n\tpanicCacheLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {\n\t\tif len(keys) > 1 {\n\t\t\tpanic(\"Programming error\")\n\t\t}\n\n\t\treturnResult := make([]*Result[K], len(keys))\n\t\tfor idx := range returnResult {\n\t\t\treturnResult[idx] = &Result[K]{\n\t\t\t\tkeys[0],\n\t\t\t\tnil,\n\t\t\t}\n\t\t}\n\n\t\treturn returnResult\n\n\t}, WithBatchCapacity[K, K](max), withSilentLogger[K, K]())\n\treturn panicCacheLoader, &loadCalls\n}\n\nfunc ErrorCacheLoader[K comparable](max int) (*Loader[K, K], *[][]K) {\n\tvar loadCalls [][]K\n\terrorCacheLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {\n\t\tif len(keys) > 1 {\n\t\t\tvar results []*Result[K]\n\t\t\tfor _, key := range keys {\n\t\t\t\tresults = append(results, &Result[K]{key, fmt.Errorf(\"this is a test error\")})\n\t\t\t}\n\t\t\treturn 
results\n\t\t}\n\n\t\treturnResult := make([]*Result[K], len(keys))\n\t\tfor idx := range returnResult {\n\t\t\treturnResult[idx] = &Result[K]{\n\t\t\t\tkeys[0],\n\t\t\t\tnil,\n\t\t\t}\n\t\t}\n\n\t\treturn returnResult\n\n\t}, WithBatchCapacity[K, K](max), withSilentLogger[K, K]())\n\treturn errorCacheLoader, &loadCalls\n}\n\nfunc SkipCacheErrorLoader[K comparable](max int, onceErrorKey K) (*Loader[K, K], *[][]K) {\n\tvar mu sync.Mutex\n\tvar loadCalls [][]K\n\terrorThrown := false\n\tskipCacheErrorLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {\n\t\tvar results []*Result[K]\n\t\tmu.Lock()\n\t\tloadCalls = append(loadCalls, keys)\n\t\tmu.Unlock()\n\t\t// return a non-cacheable error for the first occurrence of onceErrorKey\n\t\tfor _, k := range keys {\n\t\t\tif !errorThrown && k == onceErrorKey {\n\t\t\t\tresults = append(results, &Result[K]{k, NewSkipCacheError(fmt.Errorf(\"non cacheable error\"))})\n\t\t\t\terrorThrown = true\n\t\t\t} else {\n\t\t\t\tresults = append(results, &Result[K]{k, nil})\n\t\t\t}\n\t\t}\n\n\t\treturn results\n\t}, WithBatchCapacity[K, K](max))\n\treturn skipCacheErrorLoader, &loadCalls\n}\n\nfunc BadLoader[K comparable](max int) (*Loader[K, K], *[][]K) {\n\tvar mu sync.Mutex\n\tvar loadCalls [][]K\n\tidentityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {\n\t\tvar results []*Result[K]\n\t\tmu.Lock()\n\t\tloadCalls = append(loadCalls, keys)\n\t\tmu.Unlock()\n\t\tresults = append(results, &Result[K]{keys[0], nil})\n\t\treturn results\n\t}, WithBatchCapacity[K, K](max))\n\treturn identityLoader, &loadCalls\n}\n\nfunc NoCacheLoader[K comparable](max int) (*Loader[K, K], *[][]K) {\n\tvar mu sync.Mutex\n\tvar loadCalls [][]K\n\tcache := &NoCache[K, K]{}\n\tidentityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {\n\t\tvar results []*Result[K]\n\t\tmu.Lock()\n\t\tloadCalls = append(loadCalls, keys)\n\t\tmu.Unlock()\n\t\tfor _, key := range keys {\n\t\t\tresults = append(results, &Result[K]{key, nil})\n\t\t}\n\t\treturn results\n\t}, WithCache[K, K](cache), WithBatchCapacity[K, K](max))\n\treturn identityLoader, &loadCalls\n}\n\n// FaultyLoader gives len(keys)-1 results.\nfunc FaultyLoader[K comparable]() (*Loader[K, K], *[][]K) {\n\tvar mu sync.Mutex\n\tvar loadCalls [][]K\n\n\tloader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {\n\t\tvar results []*Result[K]\n\t\tmu.Lock()\n\t\tloadCalls = append(loadCalls, keys)\n\t\tmu.Unlock()\n\n\t\tlastKeyIndex := len(keys) - 1\n\t\tfor i, key := range keys {\n\t\t\tif i == lastKeyIndex {\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\tresults = append(results, &Result[K]{key, nil})\n\t\t}\n\t\treturn results\n\t})\n\n\treturn loader, &loadCalls\n}\n\ntype TracerWithThunkReading[K comparable, V any] struct{}\n\nvar _ Tracer[string, struct{}] = (*TracerWithThunkReading[string, struct{}])(nil)\n\nfunc (_ *TracerWithThunkReading[K, V]) TraceLoad(ctx context.Context, key K) (context.Context, TraceLoadFinishFunc[V]) {\n\treturn ctx, func(thunk Thunk[V]) {\n\t\t_, _ = thunk()\n\t}\n}\n\nfunc (_ *TracerWithThunkReading[K, V]) TraceLoadMany(ctx context.Context, keys []K) (context.Context, TraceLoadManyFinishFunc[V]) {\n\treturn ctx, func(thunks ThunkMany[V]) {\n\t\t_, _ = thunks()\n\t}\n}\n\nfunc (_ *TracerWithThunkReading[K, V]) TraceBatch(ctx context.Context, keys []K) (context.Context, TraceBatchFinishFunc[V]) {\n\treturn ctx, func(thunks []*Result[V]) {\n\t\t//\n\t}\n}\n\n/*\nBenchmarks\n*/\nvar a = &Avg{}\n\nfunc batchIdentity[K comparable](_ context.Context, keys []K) (results []*Result[K]) {\n\ta.Add(len(keys))\n\tfor _, key := range keys {\n\t\tresults = append(results, &Result[K]{key, nil})\n\t}\n\treturn\n}\n\nvar _ctx = context.Background()\n\nfunc BenchmarkLoader(b *testing.B) {\n\tUserLoader := NewBatchedLoader(batchIdentity[string])\n\tb.ResetTimer()\n\tfor i := 0; i < b.N; i++ {\n\t\tUserLoader.Load(_ctx, strconv.Itoa(i))\n\t}\n\tlog.Printf(\"avg: %f\", a.Avg())\n}\n\ntype Avg struct {\n\ttotal  float64\n\tlength float64\n\tlock   sync.RWMutex\n}\n\nfunc (a *Avg) Add(v int) {\n\ta.lock.Lock()\n\ta.total += float64(v)\n\ta.length++\n\ta.lock.Unlock()\n}\n\nfunc (a *Avg) Avg() float64 {\n\ta.lock.RLock()\n\tdefer a.lock.RUnlock()\n\tif a.total == 0 {\n\t\treturn 0\n\t} else if a.length == 0 {\n\t\treturn 0\n\t}\n\treturn a.total / a.length\n}\n"
  },
  {
    "path": "example/lru_cache/golang_lru_test.go",
    "content": "// package lru_cache_test contains an exmaple of using go-cache as a long term cache solution for dataloader.\npackage lru_cache_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\tdataloader \"github.com/graph-gophers/dataloader/v7\"\n\n\tlru \"github.com/hashicorp/golang-lru\"\n)\n\n// Cache implements the dataloader.Cache interface\ntype cache[K comparable, V any] struct {\n\t*lru.ARCCache\n}\n\n// Get gets an item from the cache\nfunc (c *cache[K, V]) Get(_ context.Context, key K) (dataloader.Thunk[V], bool) {\n\tv, ok := c.ARCCache.Get(key)\n\tif ok {\n\t\treturn v.(dataloader.Thunk[V]), ok\n\t}\n\treturn nil, ok\n}\n\n// Set sets an item in the cache\nfunc (c *cache[K, V]) Set(_ context.Context, key K, value dataloader.Thunk[V]) {\n\tc.ARCCache.Add(key, value)\n}\n\n// Delete deletes an item in the cache\nfunc (c *cache[K, V]) Delete(_ context.Context, key K) bool {\n\tif c.ARCCache.Contains(key) {\n\t\tc.ARCCache.Remove(key)\n\t\treturn true\n\t}\n\treturn false\n}\n\n// Clear clears the cache\nfunc (c *cache[K, V]) Clear() {\n\tc.ARCCache.Purge()\n}\n\nfunc ExampleGolangLRU() {\n\ttype User struct {\n\t\tID        int\n\t\tEmail     string\n\t\tFirstName string\n\t\tLastName  string\n\t}\n\n\tm := map[int]*User{\n\t\t5: {ID: 5, FirstName: \"John\", LastName: \"Smith\", Email: \"john@example.com\"},\n\t}\n\n\tbatchFunc := func(_ context.Context, keys []int) []*dataloader.Result[*User] {\n\t\tvar results []*dataloader.Result[*User]\n\t\t// do some pretend work to resolve keys\n\t\tfor _, k := range keys {\n\t\t\tresults = append(results, &dataloader.Result[*User]{Data: m[k]})\n\t\t}\n\t\treturn results\n\t}\n\n\t// go-cache will automatically cleanup expired items on given duration.\n\tc, _ := lru.NewARC(100)\n\tcache := &cache[int, *User]{ARCCache: c}\n\tloader := dataloader.NewBatchedLoader(batchFunc, dataloader.WithCache[int, *User](cache))\n\n\t// immediately call the future function from loader\n\tresult, err := loader.Load(context.TODO(), 5)()\n\tif err != nil {\n\t\t// handle error\n\t}\n\n\tfmt.Printf(\"result: %+v\", result)\n\t// Output: result: &{ID:5 Email:john@example.com FirstName:John LastName:Smith}\n}\n"
  },
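  {
    "path": "example/max_batch/max_batch_test.go",
    "content": "// package max_batch_test is a hypothetical usage sketch, not a file from the upstream repository: it shows the WithBatchCapacity option (also used by the package's own tests) and assumes the WithWait option carries over from earlier versions to bound the batching window.\npackage max_batch_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"time\"\n\n\tdataloader \"github.com/graph-gophers/dataloader/v7\"\n)\n\nfunc ExampleWithBatchCapacity() {\n\tbatchFunc := func(_ context.Context, keys []int) []*dataloader.Result[int] {\n\t\tresults := make([]*dataloader.Result[int], 0, len(keys))\n\t\t// echo every key back as its own value\n\t\tfor _, k := range keys {\n\t\t\tresults = append(results, &dataloader.Result[int]{Data: k})\n\t\t}\n\t\treturn results\n\t}\n\n\t// dispatch a batch once 2 keys are queued, or after 5ms, whichever comes first\n\tloader := dataloader.NewBatchedLoader(\n\t\tbatchFunc,\n\t\tdataloader.WithBatchCapacity[int, int](2),\n\t\tdataloader.WithWait[int, int](5*time.Millisecond),\n\t)\n\n\tctx := context.Background()\n\ta := loader.Load(ctx, 1) // queued, waiting for the batch to fill\n\tb := loader.Load(ctx, 2) // fills the batch and triggers dispatch\n\n\tav, _ := a()\n\tbv, _ := b()\n\tfmt.Printf(\"%d %d\", av, bv)\n\t// Output: 1 2\n}\n"
  },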
  {
    "path": "example/no_cache/no_cache_test.go",
    "content": "package no_cache_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\tdataloader \"github.com/graph-gophers/dataloader/v7\"\n)\n\nfunc ExampleNoCache() {\n\ttype User struct {\n\t\tID        int\n\t\tEmail     string\n\t\tFirstName string\n\t\tLastName  string\n\t}\n\n\tm := map[int]*User{\n\t\t5: {ID: 5, FirstName: \"John\", LastName: \"Smith\", Email: \"john@example.com\"},\n\t}\n\n\tbatchFunc := func(_ context.Context, keys []int) []*dataloader.Result[*User] {\n\t\tvar results []*dataloader.Result[*User]\n\t\t// do some pretend work to resolve keys\n\t\tfor _, k := range keys {\n\t\t\tresults = append(results, &dataloader.Result[*User]{Data: m[k]})\n\t\t}\n\t\treturn results\n\t}\n\n\t// go-cache will automatically cleanup expired items on given duration\n\tcache := &dataloader.NoCache[int, *User]{}\n\tloader := dataloader.NewBatchedLoader(batchFunc, dataloader.WithCache[int, *User](cache))\n\n\tresult, err := loader.Load(context.Background(), 5)()\n\tif err != nil {\n\t\t// handle error\n\t}\n\n\tfmt.Printf(\"result: %+v\", result)\n\t// Output: result: &{ID:5 Email:john@example.com FirstName:John LastName:Smith}\n}\n"
  },
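  {
    "path": "example/skip_cache_error/skip_cache_error_test.go",
    "content": "// package skip_cache_error_test is a hypothetical usage sketch, not a file from the upstream repository: it shows NewSkipCacheError, which the package's own tests use to return an error result without caching it, so that a later Load of the same key reaches the batch function again.\npackage skip_cache_error_test\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\n\tdataloader \"github.com/graph-gophers/dataloader/v7\"\n)\n\nfunc ExampleNewSkipCacheError() {\n\tattempts := 0\n\tbatchFunc := func(_ context.Context, keys []int) []*dataloader.Result[string] {\n\t\tattempts++\n\t\tvar results []*dataloader.Result[string]\n\t\tfor _, k := range keys {\n\t\t\tif attempts == 1 {\n\t\t\t\t// transient failure: wrap the error so the loader does not cache it\n\t\t\t\tresults = append(results, &dataloader.Result[string]{\n\t\t\t\t\tError: dataloader.NewSkipCacheError(errors.New(\"temporarily unavailable\")),\n\t\t\t\t})\n\t\t\t} else {\n\t\t\t\tresults = append(results, &dataloader.Result[string]{Data: fmt.Sprintf(\"value-%d\", k)})\n\t\t\t}\n\t\t}\n\t\treturn results\n\t}\n\n\tloader := dataloader.NewBatchedLoader(batchFunc)\n\tctx := context.Background()\n\n\t_, err := loader.Load(ctx, 1)() // the first attempt fails, and the error is not cached\n\tfmt.Println(\"first load failed:\", err != nil)\n\n\tv, _ := loader.Load(ctx, 1)() // retried, because the previous error was skipped by the cache\n\tfmt.Println(\"second load value:\", v)\n\t// Output:\n\t// first load failed: true\n\t// second load value: value-1\n}\n"
  },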
  {
    "path": "example/ttl_cache/go_cache_test.go",
    "content": "// package ttl_cache_test contains an example of using go-cache as a long term cache solution for dataloader.\npackage ttl_cache_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"time\"\n\n\tdataloader \"github.com/graph-gophers/dataloader/v7\"\n\n\tcache \"github.com/patrickmn/go-cache\"\n)\n\n// Cache implements the dataloader.Cache interface\ntype Cache[K comparable, V any] struct {\n\tc *cache.Cache\n}\n\n// Get gets a value from the cache\nfunc (c *Cache[K, V]) Get(_ context.Context, key K) (dataloader.Thunk[V], bool) {\n\tk := fmt.Sprintf(\"%v\", key) // convert the key to string because the underlying library doesn't support Generics yet\n\tv, ok := c.c.Get(k)\n\tif ok {\n\t\treturn v.(dataloader.Thunk[V]), ok\n\t}\n\treturn nil, ok\n}\n\n// Set sets a value in the cache\nfunc (c *Cache[K, V]) Set(_ context.Context, key K, value dataloader.Thunk[V]) {\n\tk := fmt.Sprintf(\"%v\", key) // convert the key to string because the underlying library doesn't support Generics yet\n\tc.c.Set(k, value, 0)\n}\n\n// Delete deletes and item in the cache\nfunc (c *Cache[K, V]) Delete(_ context.Context, key K) bool {\n\tk := fmt.Sprintf(\"%v\", key) // convert the key to string because the underlying library doesn't support Generics yet\n\tif _, found := c.c.Get(k); found {\n\t\tc.c.Delete(k)\n\t\treturn true\n\t}\n\treturn false\n}\n\n// Clear clears the cache\nfunc (c *Cache[K, V]) Clear() {\n\tc.c.Flush()\n}\n\nfunc ExampleTTLCache() {\n\ttype User struct {\n\t\tID        int\n\t\tEmail     string\n\t\tFirstName string\n\t\tLastName  string\n\t}\n\n\tm := map[int]*User{\n\t\t5: {ID: 5, FirstName: \"John\", LastName: \"Smith\", Email: \"john@example.com\"},\n\t}\n\n\tbatchFunc := func(_ context.Context, keys []int) []*dataloader.Result[*User] {\n\t\tvar results []*dataloader.Result[*User]\n\t\t// do some pretend work to resolve keys\n\t\tfor _, k := range keys {\n\t\t\tresults = append(results, &dataloader.Result[*User]{Data: m[k]})\n\t\t}\n\t\treturn results\n\t}\n\n\t// go-cache will automatically cleanup expired items on given duration\n\tc := cache.New(15*time.Minute, 15*time.Minute)\n\tcache := &Cache[int, *User]{c}\n\tloader := dataloader.NewBatchedLoader(batchFunc, dataloader.WithCache[int, *User](cache))\n\n\t// immediately call the future function from loader\n\tresult, err := loader.Load(context.Background(), 5)()\n\tif err != nil {\n\t\t// handle error\n\t}\n\n\tfmt.Printf(\"result: %+v\", result)\n\t// Output: result: &{ID:5 Email:john@example.com FirstName:John LastName:Smith}\n}\n"
  },
  {
    "path": "go.mod",
    "content": "module github.com/graph-gophers/dataloader/v7\n\ngo 1.19\n\nrequire (\n\tgithub.com/hashicorp/golang-lru v0.5.4\n\tgithub.com/opentracing/opentracing-go v1.2.0\n\tgithub.com/patrickmn/go-cache v2.1.0+incompatible\n\tgo.opentelemetry.io/otel v1.6.3\n\tgo.opentelemetry.io/otel/trace v1.6.3\n)\n\nrequire (\n\tgithub.com/go-logr/logr v1.2.3 // indirect\n\tgithub.com/go-logr/stdr v1.2.2 // indirect\n)\n"
  },
  {
    "path": "go.sum",
    "content": "github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=\ngithub.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=\ngithub.com/go-logr/logr v1.2.3 h1:2DntVwHkVopvECVRSlL5PSo9eG+cAkDCuckLubN+rq0=\ngithub.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=\ngithub.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=\ngithub.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=\ngithub.com/google/go-cmp v0.5.7 h1:81/ik6ipDQS2aGcBfIN5dHDB36BwrStyeAQquSYCV4o=\ngithub.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=\ngithub.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=\ngithub.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=\ngithub.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs=\ngithub.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=\ngithub.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=\ngithub.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=\ngithub.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=\ngithub.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=\ngithub.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=\ngithub.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY=\ngithub.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=\ngo.opentelemetry.io/otel v1.6.3 h1:FLOfo8f9JzFVFVyU+MSRJc2HdEAXQgm7pIv2uFKRSZE=\ngo.opentelemetry.io/otel v1.6.3/go.mod h1:7BgNga5fNlF/iZjG06hM3yofffp0ofKCDwSXx1GC4dI=\ngo.opentelemetry.io/otel/trace v1.6.3 h1:IqN4L+5b0mPNjdXIiZ90Ni4Bl5BRkDQywePLWemd9bc=\ngo.opentelemetry.io/otel/trace v1.6.3/go.mod h1:GNJQusJlUgZl9/TQBPKU/Y/ty+0iVB5fjhKeJGZPGFs=\ngolang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=\ngopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\n"
  },
  {
    "path": "in_memory_cache.go",
    "content": "package dataloader\n\nimport (\n\t\"context\"\n\t\"sync\"\n)\n\n// InMemoryCache is an in memory implementation of Cache interface.\n// This simple implementation is well suited for\n// a \"per-request\" dataloader (i.e. one that only lives\n// for the life of an http request) but it's not well suited\n// for long lived cached items.\ntype InMemoryCache[K comparable, V any] struct {\n\titems map[K]Thunk[V]\n\tmu    sync.RWMutex\n}\n\n// NewCache constructs a new InMemoryCache\nfunc NewCache[K comparable, V any]() *InMemoryCache[K, V] {\n\titems := make(map[K]Thunk[V])\n\treturn &InMemoryCache[K, V]{\n\t\titems: items,\n\t}\n}\n\n// Set sets the `value` at `key` in the cache\nfunc (c *InMemoryCache[K, V]) Set(_ context.Context, key K, value Thunk[V]) {\n\tc.mu.Lock()\n\tc.items[key] = value\n\tc.mu.Unlock()\n}\n\n// Get gets the value at `key` if it exists, returns value (or nil) and bool\n// indicating of value was found\nfunc (c *InMemoryCache[K, V]) Get(_ context.Context, key K) (Thunk[V], bool) {\n\tc.mu.RLock()\n\tdefer c.mu.RUnlock()\n\n\titem, found := c.items[key]\n\tif !found {\n\t\treturn nil, false\n\t}\n\n\treturn item, true\n}\n\n// Delete deletes item at `key` from cache\nfunc (c *InMemoryCache[K, V]) Delete(ctx context.Context, key K) bool {\n\tif _, found := c.Get(ctx, key); found {\n\t\tc.mu.Lock()\n\t\tdefer c.mu.Unlock()\n\t\tdelete(c.items, key)\n\t\treturn true\n\t}\n\treturn false\n}\n\n// Clear clears the entire cache\nfunc (c *InMemoryCache[K, V]) Clear() {\n\tc.mu.Lock()\n\tc.items = map[K]Thunk[V]{}\n\tc.mu.Unlock()\n}\n"
  },
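  {
    "path": "in_memory_cache_example_test.go",
    "content": "// This file is a hypothetical usage sketch, not a file from the upstream repository: it shows that the default InMemoryCache (constructed with NewCache) deduplicates repeated loads of the same key for the lifetime of the loader.\npackage dataloader_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\tdataloader \"github.com/graph-gophers/dataloader/v7\"\n)\n\nfunc ExampleNewCache() {\n\tcalls := 0\n\tbatchFunc := func(_ context.Context, keys []string) []*dataloader.Result[string] {\n\t\tcalls++\n\t\tvar results []*dataloader.Result[string]\n\t\tfor _, k := range keys {\n\t\t\tresults = append(results, &dataloader.Result[string]{Data: \"value for \" + k})\n\t\t}\n\t\treturn results\n\t}\n\n\t// NewCache builds the default in-memory cache, so passing it explicitly is equivalent to omitting WithCache\n\tcache := dataloader.NewCache[string, string]()\n\tloader := dataloader.NewBatchedLoader(batchFunc, dataloader.WithCache[string, string](cache))\n\n\tctx := context.Background()\n\tfirst, _ := loader.Load(ctx, \"a\")()\n\tsecond, _ := loader.Load(ctx, \"a\")() // served from the cache; no second batch call\n\n\tfmt.Println(first, second, \"batch calls:\", calls)\n\t// Output: value for a value for a batch calls: 1\n}\n"
  },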
  {
    "path": "trace/opentracing/trace.go",
    "content": "package opentracing\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/graph-gophers/dataloader/v7\"\n\n\t\"github.com/opentracing/opentracing-go\"\n)\n\n// Tracer implements a tracer that can be used with the Open Tracing standard.\ntype Tracer[K comparable, V any] struct{}\n\n// TraceLoad will trace a call to dataloader.LoadMany with Open Tracing.\nfunc (Tracer[K, V]) TraceLoad(ctx context.Context, key K) (context.Context, dataloader.TraceLoadFinishFunc[V]) {\n\tspan, spanCtx := opentracing.StartSpanFromContext(ctx, \"Dataloader: load\")\n\n\tspan.SetTag(\"dataloader.key\", fmt.Sprintf(\"%v\", key))\n\n\treturn spanCtx, func(thunk dataloader.Thunk[V]) {\n\t\tspan.Finish()\n\t}\n}\n\n// TraceLoadMany will trace a call to dataloader.LoadMany with Open Tracing.\nfunc (Tracer[K, V]) TraceLoadMany(ctx context.Context, keys []K) (context.Context, dataloader.TraceLoadManyFinishFunc[V]) {\n\tspan, spanCtx := opentracing.StartSpanFromContext(ctx, \"Dataloader: loadmany\")\n\n\tspan.SetTag(\"dataloader.keys\", fmt.Sprintf(\"%v\", keys))\n\n\treturn spanCtx, func(thunk dataloader.ThunkMany[V]) {\n\t\tspan.Finish()\n\t}\n}\n\n// TraceBatch will trace a call to dataloader.LoadMany with Open Tracing.\nfunc (Tracer[K, V]) TraceBatch(ctx context.Context, keys []K) (context.Context, dataloader.TraceBatchFinishFunc[V]) {\n\tspan, spanCtx := opentracing.StartSpanFromContext(ctx, \"Dataloader: batch\")\n\n\tspan.SetTag(\"dataloader.keys\", fmt.Sprintf(\"%v\", keys))\n\n\treturn spanCtx, func(results []*dataloader.Result[V]) {\n\t\tspan.Finish()\n\t}\n}\n"
  },
  {
    "path": "trace/opentracing/trace_test.go",
    "content": "package opentracing_test\n\nimport (\n\t\"testing\"\n\n\t\"github.com/graph-gophers/dataloader/v7\"\n\t\"github.com/graph-gophers/dataloader/v7/trace/opentracing\"\n)\n\nfunc TestInterfaceImplementation(t *testing.T) {\n\ttype User struct {\n\t\tID        uint\n\t\tFirstName string\n\t\tLastName  string\n\t\tEmail     string\n\t}\n\tvar _ dataloader.Tracer[string, int] = opentracing.Tracer[string, int]{}\n\tvar _ dataloader.Tracer[string, string] = opentracing.Tracer[string, string]{}\n\tvar _ dataloader.Tracer[uint, User] = opentracing.Tracer[uint, User]{}\n\t// check compatibility with loader options\n\tdataloader.WithTracer[uint, User](&opentracing.Tracer[uint, User]{})\n}\n"
  },
  {
    "path": "trace/otel/trace.go",
    "content": "package otel\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/graph-gophers/dataloader/v7\"\n\n\t\"go.opentelemetry.io/otel\"\n\t\"go.opentelemetry.io/otel/attribute\"\n\t\"go.opentelemetry.io/otel/trace\"\n)\n\n// Tracer implements a tracer that can be used with the Open Tracing standard.\ntype Tracer[K comparable, V any] struct {\n\ttr trace.Tracer\n}\n\nfunc NewTracer[K comparable, V any](tr trace.Tracer) *Tracer[K, V] {\n\treturn &Tracer[K, V]{tr: tr}\n}\n\nfunc (t *Tracer[K, V]) Tracer() trace.Tracer {\n\tif t.tr != nil {\n\t\treturn t.tr\n\t}\n\treturn otel.Tracer(\"graph-gophers/dataloader\")\n}\n\n// TraceLoad will trace a call to dataloader.LoadMany with Open Tracing.\nfunc (t Tracer[K, V]) TraceLoad(ctx context.Context, key K) (context.Context, dataloader.TraceLoadFinishFunc[V]) {\n\tspanCtx, span := t.Tracer().Start(ctx, \"Dataloader: load\")\n\n\tspan.SetAttributes(attribute.String(\"dataloader.key\", fmt.Sprintf(\"%v\", key)))\n\n\treturn spanCtx, func(thunk dataloader.Thunk[V]) {\n\t\tspan.End()\n\t}\n}\n\n// TraceLoadMany will trace a call to dataloader.LoadMany with Open Tracing.\nfunc (t Tracer[K, V]) TraceLoadMany(ctx context.Context, keys []K) (context.Context, dataloader.TraceLoadManyFinishFunc[V]) {\n\tspanCtx, span := t.Tracer().Start(ctx, \"Dataloader: loadmany\")\n\n\tspan.SetAttributes(attribute.String(\"dataloader.keys\", fmt.Sprintf(\"%v\", keys)))\n\n\treturn spanCtx, func(thunk dataloader.ThunkMany[V]) {\n\t\tspan.End()\n\t}\n}\n\n// TraceBatch will trace a call to dataloader.LoadMany with Open Tracing.\nfunc (t Tracer[K, V]) TraceBatch(ctx context.Context, keys []K) (context.Context, dataloader.TraceBatchFinishFunc[V]) {\n\tspanCtx, span := t.Tracer().Start(ctx, \"Dataloader: batch\")\n\n\tspan.SetAttributes(attribute.String(\"dataloader.keys\", fmt.Sprintf(\"%v\", keys)))\n\n\treturn spanCtx, func(results []*dataloader.Result[V]) {\n\t\tspan.End()\n\t}\n}\n"
  },
  {
    "path": "trace/otel/trace_test.go",
    "content": "package otel_test\n\nimport (\n\t\"testing\"\n\n\t\"github.com/graph-gophers/dataloader/v7\"\n\t\"github.com/graph-gophers/dataloader/v7/trace/otel\"\n)\n\nfunc TestInterfaceImplementation(t *testing.T) {\n\ttype User struct {\n\t\tID        uint\n\t\tFirstName string\n\t\tLastName  string\n\t\tEmail     string\n\t}\n\tvar _ dataloader.Tracer[string, int] = otel.Tracer[string, int]{}\n\tvar _ dataloader.Tracer[string, string] = otel.Tracer[string, string]{}\n\tvar _ dataloader.Tracer[uint, User] = otel.Tracer[uint, User]{}\n\t// check compatibility with loader options\n\tdataloader.WithTracer[uint, User](&otel.Tracer[uint, User]{})\n}\n"
  },
  {
    "path": "trace.go",
    "content": "package dataloader\n\nimport (\n\t\"context\"\n)\n\ntype TraceLoadFinishFunc[V any] func(Thunk[V])\ntype TraceLoadManyFinishFunc[V any] func(ThunkMany[V])\ntype TraceBatchFinishFunc[V any] func([]*Result[V])\n\n// Tracer is an interface that may be used to implement tracing.\ntype Tracer[K comparable, V any] interface {\n\t// TraceLoad will trace the calls to Load.\n\tTraceLoad(ctx context.Context, key K) (context.Context, TraceLoadFinishFunc[V])\n\t// TraceLoadMany will trace the calls to LoadMany.\n\tTraceLoadMany(ctx context.Context, keys []K) (context.Context, TraceLoadManyFinishFunc[V])\n\t// TraceBatch will trace data loader batches.\n\tTraceBatch(ctx context.Context, keys []K) (context.Context, TraceBatchFinishFunc[V])\n}\n\n// NoopTracer is the default (noop) tracer\ntype NoopTracer[K comparable, V any] struct{}\n\n// TraceLoad is a noop function\nfunc (NoopTracer[K, V]) TraceLoad(ctx context.Context, key K) (context.Context, TraceLoadFinishFunc[V]) {\n\treturn ctx, func(Thunk[V]) {}\n}\n\n// TraceLoadMany is a noop function\nfunc (NoopTracer[K, V]) TraceLoadMany(ctx context.Context, keys []K) (context.Context, TraceLoadManyFinishFunc[V]) {\n\treturn ctx, func(ThunkMany[V]) {}\n}\n\n// TraceBatch is a noop function\nfunc (NoopTracer[K, V]) TraceBatch(ctx context.Context, keys []K) (context.Context, TraceBatchFinishFunc[V]) {\n\treturn ctx, func(result []*Result[V]) {}\n}\n"
  }
]