Full Code of nicksrandall/dataloader for AI

Repository: nicksrandall/dataloader
Branch: master
Commit: b77c904dd9fc
Files: 22
Total size: 64.5 KB

Directory structure:
gitextract_ieg74ypr/

├── .github/
│   └── workflows/
│       └── go.yml
├── .gitignore
├── .travis.yml
├── LICENSE
├── MIGRATE.md
├── README.md
├── TRACE.md
├── cache.go
├── codecov.yml
├── dataloader.go
├── dataloader_test.go
├── example/
│   ├── lru_cache/
│   │   └── golang_lru_test.go
│   ├── no_cache/
│   │   └── no_cache_test.go
│   └── ttl_cache/
│       └── go_cache_test.go
├── go.mod
├── go.sum
├── in_memory_cache.go
├── trace/
│   ├── opentracing/
│   │   ├── trace.go
│   │   └── trace_test.go
│   └── otel/
│       ├── trace.go
│       └── trace_test.go
└── trace.go

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/workflows/go.yml
================================================
# This workflow will build a golang project
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-go

name: Go

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

jobs:

  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3

    - name: Set up Go
      uses: actions/setup-go@v3
      with:
        go-version: 1.19

    - name: Build
      run: go build -v ./...

    - name: Test
      run: go test -v ./...


================================================
FILE: .gitignore
================================================
vendor/


================================================
FILE: .travis.yml
================================================
language: go

go:
  - 1.18

env:
  - GO111MODULE=on

script:
  - go test -v -race -coverprofile=coverage.txt -covermode=atomic

after_success:
  - bash <(curl -s https://codecov.io/bash)


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2017 Nick Randall 

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: MIGRATE.md
================================================
## Upgrade from v1 to v2
The only difference between v1 and v2 is that we added use of [context](https://golang.org/pkg/context).

```diff
- loader.Load(key string) Thunk
+ loader.Load(ctx context.Context, key string) Thunk
- loader.LoadMany(keys []string) ThunkMany
+ loader.LoadMany(ctx context.Context, keys []string) ThunkMany
```

```diff
- type BatchFunc func([]string) []*Result
+ type BatchFunc func(context.Context, []string) []*Result
```

## Upgrade from v2 to v3
```diff
// dataloader.Interface has added context.Context to its methods
- loader.Prime(key string, value interface{}) Interface
+ loader.Prime(ctx context.Context, key string, value interface{}) Interface
- loader.Clear(key string) Interface
+ loader.Clear(ctx context.Context, key string) Interface
```

```diff
// the Cache interface has added context.Context to its methods
type Cache interface {
-	Get(string) (Thunk, bool)
+	Get(context.Context, string) (Thunk, bool)
-	Set(string, Thunk)
+	Set(context.Context, string, Thunk)
-	Delete(string) bool
+	Delete(context.Context, string) bool
	Clear()
}
```

## Upgrade from v3 to v4
```diff
// dataloader.Interface now allows interface{} as the key rather than string
- loader.Load(context.Context, key string) Thunk
+ loader.Load(ctx context.Context, key interface{}) Thunk
- loader.LoadMany(context.Context, key []string) ThunkMany
+ loader.LoadMany(ctx context.Context, keys []interface{}) ThunkMany
- loader.Prime(context.Context, key string, value interface{}) Interface
+ loader.Prime(ctx context.Context, key interface{}, value interface{}) Interface
- loader.Clear(context.Context, key string) Interface
+ loader.Clear(ctx context.Context, key interface{}) Interface
```

```diff
// cache interface now allows interface{} as key instead of string
type Cache interface {
-	Get(context.Context, string) (Thunk, bool)
+	Get(context.Context, interface{}) (Thunk, bool)
-	Set(context.Context, string, Thunk)
+	Set(context.Context, interface{}, Thunk)
-	Delete(context.Context, string) bool
+	Delete(context.Context, interface{}) bool
	Clear()
}
```

## Upgrade from v4 to v5
```diff
// dataloader.Interface now uses the Key type as the key rather than interface{}
- loader.Load(context.Context, key interface{}) Thunk
+ loader.Load(ctx context.Context, key Key) Thunk
- loader.LoadMany(context.Context, key []interface{}) ThunkMany
+ loader.LoadMany(ctx context.Context, keys Keys) ThunkMany
- loader.Prime(context.Context, key interface{}, value interface{}) Interface
+ loader.Prime(ctx context.Context, key Key, value interface{}) Interface
- loader.Clear(context.Context, key interface{}) Interface
+ loader.Clear(ctx context.Context, key Key) Interface
```

```diff
// cache interface now uses Key as the key instead of interface{}
type Cache interface {
-	Get(context.Context, interface{}) (Thunk, bool)
+	Get(context.Context, Key) (Thunk, bool)
-	Set(context.Context, interface{}, Thunk)
+	Set(context.Context, Key, Thunk)
-	Delete(context.Context, interface{}) bool
+	Delete(context.Context, Key) bool
	Clear()
}
```

## Upgrade from v5 to v6

This major version release reflects the switch from dep to Go Modules,
and drops build tags for older versions of Go (1.9).

The preferred import method includes the major version tag.

```go
import "github.com/graph-gophers/dataloader/v6"
```

## Upgrade from v6 to v7

[Generics](https://go.dev/doc/tutorial/generics) support has been added.
With this update, you can now write more type-safe code.

Use the major version tag in the import path.

```go
import "github.com/graph-gophers/dataloader/v7"
```


================================================
FILE: README.md
================================================
# DataLoader
[![GoDoc](https://godoc.org/gopkg.in/graph-gophers/dataloader.v7?status.svg)](https://pkg.go.dev/github.com/graph-gophers/dataloader/v7)
[![Build Status](https://travis-ci.org/graph-gophers/dataloader.svg?branch=master)](https://travis-ci.org/graph-gophers/dataloader)

This is an implementation of [Facebook's DataLoader](https://github.com/facebook/dataloader) in Golang.

## Install
`go get -u github.com/graph-gophers/dataloader/v7`

## Usage
```go
// setup batch function - the first Context passed to the Loader's Load
// function will be provided when the batch function is called.
// this function is registered with the Loader, and the key and value types are fixed using generics.
batchFn := func(ctx context.Context, keys []int) []*dataloader.Result[*User] {
  var results []*dataloader.Result[*User]
  // do some async work to get data for the specified keys
  // and append the resolved values to results
  return results
}

// create Loader with an in-memory cache
loader := dataloader.NewBatchedLoader(batchFn)

/**
 * Use loader
 *
 * A thunk is a function returned from a function that is a
 * closure over a value (in this case a *User value and an error).
 * When called, it will block until the value is resolved.
 *
 * loader.Load() may be called multiple times for a given batch window.
 * The first context passed to Load is the object that will be passed
 * to the batch function.
 */
thunk := loader.Load(context.TODO(), 5)
result, err := thunk()
if err != nil {
  // handle data error
}

log.Printf("value: %#v", result)
```

### Don't need/want to use context?
You're welcome to install the v1 version of this library.

## Cache
This implementation contains a very basic cache that is intended only for short-lived DataLoaders (i.e. DataLoaders that only exist for the life of an HTTP request). You may provide your own implementation if you want.

> The package also provides a `NoCache` type that implements the cache interface with no-op methods, for when you do not wish to cache anything.

## Examples
There are a few basic examples in the example folder.

## See also
- [TRACE](TRACE.md)
- [MIGRATE](MIGRATE.md)


================================================
FILE: TRACE.md
================================================
# Adding a new trace backend

If you want to add a new tracing backend all you need to do is implement the
`Tracer` interface and pass it as an option to the dataloader on initialization.

As an example, this is how you could implement it for an OpenCensus backend.

```go
package main

import (
	"context"
	"strings"

	"github.com/graph-gophers/dataloader/v7"
	exp "go.opencensus.io/examples/exporter"
	"go.opencensus.io/trace"
)

type User struct {
	ID string
}

// OpenCensusTracer implements the Tracer interface using OpenCensus.
type OpenCensusTracer struct{}

// TraceLoad will trace a call to dataloader.Load with OpenCensus
func (OpenCensusTracer) TraceLoad(ctx context.Context, key string) (context.Context, dataloader.TraceLoadFinishFunc[*User]) {
	cCtx, cSpan := trace.StartSpan(ctx, "Dataloader: load")
	cSpan.AddAttributes(
		trace.StringAttribute("dataloader.key", key),
	)
	return cCtx, func(thunk dataloader.Thunk[*User]) {
		// TODO: is there anything we should do with the results?
		cSpan.End()
	}
}

// TraceLoadMany will trace a call to dataloader.LoadMany with OpenCensus
func (OpenCensusTracer) TraceLoadMany(ctx context.Context, keys []string) (context.Context, dataloader.TraceLoadManyFinishFunc[*User]) {
	cCtx, cSpan := trace.StartSpan(ctx, "Dataloader: loadmany")
	cSpan.AddAttributes(
		trace.StringAttribute("dataloader.keys", strings.Join(keys, ",")),
	)
	return cCtx, func(thunk dataloader.ThunkMany[*User]) {
		// TODO: is there anything we should do with the results?
		cSpan.End()
	}
}

// TraceBatch will trace the execution of the batch function with OpenCensus
func (OpenCensusTracer) TraceBatch(ctx context.Context, keys []string) (context.Context, dataloader.TraceBatchFinishFunc[*User]) {
	cCtx, cSpan := trace.StartSpan(ctx, "Dataloader: batch")
	cSpan.AddAttributes(
		trace.StringAttribute("dataloader.keys", strings.Join(keys, ",")),
	)
	return cCtx, func(results []*dataloader.Result[*User]) {
		// TODO: is there anything we should do with the results?
		cSpan.End()
	}
}

func batchFunc(ctx context.Context, keys []string) []*dataloader.Result[*User] {
	// ...loader logic goes here
}

func main() {
	//initialize an example exporter that just logs to the console
	trace.ApplyConfig(trace.Config{
		DefaultSampler: trace.AlwaysSample(),
	})
	trace.RegisterExporter(&exp.PrintExporter{})
	// initialize the dataloader with your new tracer backend
	loader := dataloader.NewBatchedLoader(batchFunc, dataloader.WithTracer[string, *User](OpenCensusTracer{}))
	// initialize a context since it's not receiving one from anywhere else.
	ctx, span := trace.StartSpan(context.TODO(), "Span Name")
	defer span.End()
	// request from the dataloader as usual
	value, err := loader.Load(ctx, SomeID)()
	// ...
}
```

Don't forget to initialize the exporters of your choice and register them with `trace.RegisterExporter(&exporterInstance)`.


================================================
FILE: cache.go
================================================
package dataloader

import "context"

// The Cache interface. If a custom cache is provided, it must implement this interface.
type Cache[K comparable, V any] interface {
	Get(context.Context, K) (Thunk[V], bool)
	Set(context.Context, K, Thunk[V])
	Delete(context.Context, K) bool
	Clear()
}

// NoCache implements Cache interface where all methods are noops.
// This is useful for when you don't want to cache items but still
// want to use a data loader
type NoCache[K comparable, V any] struct{}

// Get is a NOOP
func (c *NoCache[K, V]) Get(context.Context, K) (Thunk[V], bool) { return nil, false }

// Set is a NOOP
func (c *NoCache[K, V]) Set(context.Context, K, Thunk[V]) { return }

// Delete is a NOOP
func (c *NoCache[K, V]) Delete(context.Context, K) bool { return false }

// Clear is a NOOP
func (c *NoCache[K, V]) Clear() { return }


================================================
FILE: codecov.yml
================================================
codecov:
  notify:
    require_ci_to_pass: true
comment:
  behavior: default
  layout: header, diff
  require_changes: false
coverage:
  precision: 2
  range:
  - 70.0
  - 100.0
  round: down
  status:
    changes: false
    patch: true
    project: true
parsers:
  gcov:
    branch_detection:
      conditional: true
      loop: true
      macro: false
      method: false
  javascript:
    enable_partials: false


================================================
FILE: dataloader.go
================================================
// Package dataloader is an implementation of facebook's dataloader in go.
// See https://github.com/facebook/dataloader for more information
package dataloader

import (
	"context"
	"errors"
	"fmt"
	"log"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

// Interface is a `DataLoader` Interface which defines a public API for loading data from a particular
// data back-end with unique keys such as the `id` column of a SQL table or
// document name in a MongoDB database, given a batch loading function.
//
// Each `DataLoader` instance should contain a unique memoized cache. Use caution when
// used in long-lived applications or those which serve many users with
// different access permissions and consider creating a new instance per
// web request.
type Interface[K comparable, V any] interface {
	Load(context.Context, K) Thunk[V]
	LoadMany(context.Context, []K) ThunkMany[V]
	Clear(context.Context, K) Interface[K, V]
	ClearAll() Interface[K, V]
	Prime(ctx context.Context, key K, value V) Interface[K, V]
	Flush()
}

var ErrNoResultProvided = errors.New("no result provided")

// BatchFunc is a function which, when given a slice of keys, returns a slice of `results`.
// It's important that the length of the input keys matches the length of the output results.
// Should the batch function return nil for a result, it will be treated as returning an
// error of `ErrNoResultProvided` for that key.
//
// The keys passed to this function are guaranteed to be unique
type BatchFunc[K comparable, V any] func(context.Context, []K) []*Result[V]

// Result is the data structure that a BatchFunc returns.
// It contains the resolved data, and any errors that may have occurred while fetching the data.
type Result[V any] struct {
	Data  V
	Error error
}

// ResultMany is used by the LoadMany method.
// It contains a list of resolved data and a list of errors.
// The lengths of the data list and error list will match, and elements at each index correspond to each other.
type ResultMany[V any] struct {
	Data  []V
	Error []error
}

// PanicErrorWrapper wraps the error interface.
// This is used to check if the error is a panic error.
// We should not cache panic errors.
type PanicErrorWrapper struct {
	panicError error
}

func (p *PanicErrorWrapper) Error() string {
	return p.panicError.Error()
}

// SkipCacheError wraps the error interface.
// The cache should not store SkipCacheErrors.
type SkipCacheError struct {
	err error
}

func (s *SkipCacheError) Error() string {
	return s.err.Error()
}

func (s *SkipCacheError) Unwrap() error {
	return s.err
}

func NewSkipCacheError(err error) *SkipCacheError {
	return &SkipCacheError{err: err}
}

// Loader implements the dataloader.Interface.
type Loader[K comparable, V any] struct {
	// the batch function to be used by this loader
	batchFn BatchFunc[K, V]

	// the maximum batch size. Set to 0 if you want it to be unbounded.
	batchCap int

	// the internal cache. This package contains a basic cache implementation but any custom cache
	// implementation could be used as long as it implements the `Cache` interface.
	cacheLock sync.Mutex
	cache     Cache[K, V]
	// should we clear the cache on each batch?
	// this would allow batching but no long term caching
	clearCacheOnBatch bool

	// count of queued up items
	count int

	// the maximum input queue size. Set to 0 if you want it to be unbounded.
	inputCap int

	// the amount of time to wait before triggering a batch
	wait time.Duration

	// lock to protect the batching operations
	batchLock sync.Mutex

	// current batcher
	curBatcher *batcher[K, V]

	// used to close the sleeper of the current batcher
	endSleeper chan bool

	// used by tests to prevent logs
	silent bool

	// can be set to trace calls to dataloader
	tracer Tracer[K, V]
}

// Thunk is a function that will block until the value (*Result) it contains is resolved.
// After the value it contains is resolved, this function will return the result.
// This function can be called many times, much like a Promise in other languages.
// The value only needs to be resolved once, so subsequent calls will return immediately.
type Thunk[V any] func() (V, error)

// ThunkMany is much like the Thunk func type but it contains a list of results.
type ThunkMany[V any] func() ([]V, []error)

// type used on the input channel
type batchRequest[K comparable, V any] struct {
	key    K
	result atomic.Pointer[Result[V]]
	done   chan struct{}
}

// Option allows for configuration of Loader fields.
type Option[K comparable, V any] func(*Loader[K, V])

// WithCache sets the BatchedLoader cache. Defaults to InMemoryCache if a Cache is not set.
func WithCache[K comparable, V any](c Cache[K, V]) Option[K, V] {
	return func(l *Loader[K, V]) {
		l.cache = c
	}
}

// WithBatchCapacity sets the batch capacity. Default is 0 (unbounded).
func WithBatchCapacity[K comparable, V any](c int) Option[K, V] {
	return func(l *Loader[K, V]) {
		l.batchCap = c
	}
}

// WithInputCapacity sets the input capacity. Default is 1000.
func WithInputCapacity[K comparable, V any](c int) Option[K, V] {
	return func(l *Loader[K, V]) {
		l.inputCap = c
	}
}

// WithWait sets the amount of time to wait before triggering a batch.
// Default duration is 16 milliseconds.
func WithWait[K comparable, V any](d time.Duration) Option[K, V] {
	return func(l *Loader[K, V]) {
		l.wait = d
	}
}

// WithClearCacheOnBatch allows batching of items but no long term caching.
// It accomplishes this by clearing the cache after each batch operation.
func WithClearCacheOnBatch[K comparable, V any]() Option[K, V] {
	return func(l *Loader[K, V]) {
		l.cacheLock.Lock()
		l.clearCacheOnBatch = true
		l.cacheLock.Unlock()
	}
}

// withSilentLogger turns off log messages. It's used by the tests
func withSilentLogger[K comparable, V any]() Option[K, V] {
	return func(l *Loader[K, V]) {
		l.silent = true
	}
}

// WithTracer allows tracing of calls to Load and LoadMany
func WithTracer[K comparable, V any](tracer Tracer[K, V]) Option[K, V] {
	return func(l *Loader[K, V]) {
		l.tracer = tracer
	}
}

// NewBatchedLoader constructs a new Loader with given options.
func NewBatchedLoader[K comparable, V any](batchFn BatchFunc[K, V], opts ...Option[K, V]) *Loader[K, V] {
	loader := &Loader[K, V]{
		batchFn:  batchFn,
		inputCap: 1000,
		wait:     16 * time.Millisecond,
	}

	// Apply options
	for _, apply := range opts {
		apply(loader)
	}

	// Set defaults
	if loader.cache == nil {
		loader.cache = NewCache[K, V]()
	}

	if loader.tracer == nil {
		loader.tracer = NoopTracer[K, V]{}
	}

	return loader
}

// Load loads/resolves the given key, returning a Thunk that will resolve to the value and error.
// The first context passed to this function within a given batch window will be provided to
// the registered BatchFunc.
func (l *Loader[K, V]) Load(originalContext context.Context, key K) Thunk[V] {
	ctx, finish := l.tracer.TraceLoad(originalContext, key)
	req := &batchRequest[K, V]{
		key:  key,
		done: make(chan struct{}),
	}

	// We need to lock both the batchLock and cacheLock because the batcher can
	// reset the cache when either the batchCap or the wait time is reached.
	//
	// If we were to lock only the cacheLock while doing l.cache.Get and/or
	// l.cache.Set, the batcher could reset the cache after those
	// operations have finished but before the new request (if any) is sent to the
	// batcher.
	//
	// In that case it is no longer guaranteed that the keys passed to the BatchFunc
	// are unique: the cache has been reset, so if the same key is
	// requested again before the new batcher is started, the same key will be
	// sent to the batcher again, causing unexpected behavior in the BatchFunc.
	l.batchLock.Lock()
	l.cacheLock.Lock()

	if v, ok := l.cache.Get(ctx, key); ok {
		l.cacheLock.Unlock()
		l.batchLock.Unlock()
		defer finish(v)
		return v
	}

	thunk := func() (V, error) {
		<-req.done
		result := req.result.Load()
		var ev *PanicErrorWrapper
		var es *SkipCacheError
		if result.Error != nil && (errors.As(result.Error, &ev) || errors.As(result.Error, &es)) {
			l.Clear(ctx, key)
		}
		return result.Data, result.Error
	}
	defer finish(thunk)

	l.cache.Set(ctx, key, thunk)

	// start the batch window if it hasn't already started.
	if l.curBatcher == nil {
		l.curBatcher = l.newBatcher(l.silent, l.tracer)
		// start the current batcher batch function
		go l.curBatcher.batch(originalContext)
		// start a sleeper for the current batcher
		l.endSleeper = make(chan bool)
		go l.sleeper(l.curBatcher, l.endSleeper)
	}

	l.curBatcher.input <- req

	// if we need to keep track of the count (max batch), then do so.
	if l.batchCap > 0 {
		l.count++
		// if we hit our limit, force the batch to start
		if l.count == l.batchCap {
			// end/flush the batcher synchronously here because another call to Load
			// may concurrently happen and needs to go to a new batcher.
			l.flush()
		}
	}

	// NOTE: It is intended that these are not unlocked with a `defer`. This is due to the `defer finish(thunk)` above.
	// There is a locking bug where, if you have a tracer that calls the thunk to read the results, the dataloader runs
	// into a deadlock scenario, as `finish` is called before these mutexes are freed on the same goroutine.
	l.batchLock.Unlock()
	l.cacheLock.Unlock()

	return thunk
}

// flush() is a helper that runs whatever batched items there are immediately.
// it must be called while holding l.batchLock
func (l *Loader[K, V]) flush() {
	l.curBatcher.end()

	// end the sleeper for the current batcher.
	// this is to stop the goroutine without waiting for the
	// sleeper timeout.
	close(l.endSleeper)
	l.reset()
}

// Flush will load the items in the current batch immediately without waiting for the timer.
func (l *Loader[K, V]) Flush() {
	l.batchLock.Lock()
	defer l.batchLock.Unlock()
	if l.curBatcher == nil {
		return
	}
	l.flush()
}

// LoadMany loads multiple keys, returning a thunk (type: ThunkMany) that will resolve the keys passed in.
func (l *Loader[K, V]) LoadMany(originalContext context.Context, keys []K) ThunkMany[V] {
	ctx, finish := l.tracer.TraceLoadMany(originalContext, keys)

	var (
		length = len(keys)
		data   = make([]V, length)
		errors = make([]error, length)
		result atomic.Pointer[ResultMany[V]]
		wg     sync.WaitGroup
		done   = make(chan struct{})
	)

	resolve := func(ctx context.Context, i int) {
		defer wg.Done()
		thunk := l.Load(ctx, keys[i])
		result, err := thunk()
		data[i] = result
		errors[i] = err
	}

	wg.Add(length)
	for i := range keys {
		go resolve(ctx, i)
	}

	go func() {
		defer close(done)
		wg.Wait()

		// errs is nil unless there exists a non-nil error.
		// This prevents dataloader from returning a slice of all-nil errors.
		var errs []error
		for _, e := range errors {
			if e != nil {
				errs = errors
				break
			}
		}

		result.Store(&ResultMany[V]{Data: data, Error: errs})
	}()

	thunkMany := func() ([]V, []error) {
		<-done
		r := result.Load()
		return r.Data, r.Error
	}

	defer finish(thunkMany)
	return thunkMany
}

// Clear clears the value at `key` from the cache, if it exists. Returns self for method chaining
func (l *Loader[K, V]) Clear(ctx context.Context, key K) Interface[K, V] {
	l.cacheLock.Lock()
	l.cache.Delete(ctx, key)
	l.cacheLock.Unlock()
	return l
}

// ClearAll clears the entire cache. To be used when some event results in unknown invalidations.
// Returns self for method chaining.
func (l *Loader[K, V]) ClearAll() Interface[K, V] {
	l.cacheLock.Lock()
	l.cache.Clear()
	l.cacheLock.Unlock()
	return l
}

// Prime adds the provided key and value to the cache. If the key already exists, no change is made.
// Returns self for method chaining
func (l *Loader[K, V]) Prime(ctx context.Context, key K, value V) Interface[K, V] {
	if _, ok := l.cache.Get(ctx, key); !ok {
		thunk := func() (V, error) {
			return value, nil
		}
		l.cache.Set(ctx, key, thunk)
	}
	return l
}

func (l *Loader[K, V]) reset() {
	l.count = 0
	l.curBatcher = nil

	if l.clearCacheOnBatch {
		l.cache.Clear()
	}
}

type batcher[K comparable, V any] struct {
	input    chan *batchRequest[K, V]
	batchFn  BatchFunc[K, V]
	finished bool
	silent   bool
	tracer   Tracer[K, V]
}

// newBatcher returns a batcher for the current requests
// all the batcher methods must be protected by a global batchLock
func (l *Loader[K, V]) newBatcher(silent bool, tracer Tracer[K, V]) *batcher[K, V] {
	return &batcher[K, V]{
		input:   make(chan *batchRequest[K, V], l.inputCap),
		batchFn: l.batchFn,
		silent:  silent,
		tracer:  tracer,
	}
}

// stop receiving input and process batch function
func (b *batcher[K, V]) end() {
	if !b.finished {
		close(b.input)
		b.finished = true
	}
}

// execute the batch of all items in queue
func (b *batcher[K, V]) batch(originalContext context.Context) {
	var (
		keys     = make([]K, 0)
		reqs     = make([]*batchRequest[K, V], 0)
		items    = make([]*Result[V], 0)
		panicErr interface{}
	)

	for item := range b.input {
		keys = append(keys, item.key)
		reqs = append(reqs, item)
	}

	ctx, finish := b.tracer.TraceBatch(originalContext, keys)
	defer finish(items)

	func() {
		defer func() {
			if r := recover(); r != nil {
				panicErr = r
				if b.silent {
					return
				}
				const size = 64 << 10
				buf := make([]byte, size)
				buf = buf[:runtime.Stack(buf, false)]
				log.Printf("Dataloader: Panic received in batch function: %v\n%s", panicErr, buf)
			}
		}()
		items = b.batchFn(ctx, keys)
	}()

	if panicErr != nil {
		for _, req := range reqs {
			req.result.Store(&Result[V]{Error: &PanicErrorWrapper{panicError: fmt.Errorf("Panic received in batch function: %v", panicErr)}})
			close(req.done)
		}
		return
	}

	if len(items) != len(keys) {
		err := &Result[V]{Error: fmt.Errorf(`
			The batch function supplied did not return an array of responses
			the same length as the array of keys.

			Keys:
			%v

			Values:
			%v
		`, keys, items)}

		for _, req := range reqs {
			req.result.Store(err)
			close(req.done)
		}

		return
	}

	var notSetResult *Result[V] // don't allocate unless we need it
	for i, req := range reqs {
		if items[i] == nil {
			if notSetResult == nil {
				notSetResult = &Result[V]{Error: ErrNoResultProvided}
			}
			req.result.Store(notSetResult)
		} else {
			req.result.Store(items[i])
		}
		close(req.done)
	}
}

// wait the appropriate amount of time for the provided batcher
func (l *Loader[K, V]) sleeper(b *batcher[K, V], close chan bool) {
	select {
	// used by batch to close early. usually triggered by max batch size
	case <-close:
		return
	// wait for the batch window to elapse
	case <-time.After(l.wait):
	}

	// reset
	// this is protected by the batchLock to avoid closing the batcher input
	// channel while Load is inserting a request
	l.batchLock.Lock()
	b.end()

	// We can end here also if the batcher has already been closed and a
	// new one has been created. So reset the loader state only if the batcher
	// is the current one
	if l.curBatcher == b {
		l.reset()
	}
	l.batchLock.Unlock()
}


================================================
FILE: dataloader_test.go
================================================
package dataloader

import (
	"context"
	"errors"
	"fmt"
	"log"
	"reflect"
	"strconv"
	"sync"
	"testing"
	"time"
)

/*
Tests
*/
func TestLoader(t *testing.T) {
	t.Run("test Load method", func(t *testing.T) {
		t.Parallel()
		identityLoader, _ := IDLoader[string](0)
		ctx := context.Background()
		future := identityLoader.Load(ctx, "1")
		value, err := future()
		if err != nil {
			t.Error(err.Error())
		}
		if value != "1" {
			t.Error("load didn't return the right value")
		}
	})

	t.Run("test thunk does not contain race conditions", func(t *testing.T) {
		t.Parallel()
		identityLoader, _ := IDLoader[string](0)
		ctx := context.Background()
		future := identityLoader.Load(ctx, "1")
		go future()
		go future()
	})

	t.Run("test Load Method Panic Safety", func(t *testing.T) {
		t.Parallel()
		defer func() {
			r := recover()
			if r != nil {
				t.Error("Panic Loader's panic should have been handled")
			}
		}()
		panicLoader, _ := PanicLoader[string](0)
		ctx := context.Background()
		future := panicLoader.Load(ctx, "1")
		_, err := future()
		if err == nil || err.Error() != "Panic received in batch function: Programming error" {
			t.Error("Panic was not propagated as an error.")
		}
	})

	t.Run("test Load Method cache error", func(t *testing.T) {
		t.Parallel()
		errorCacheLoader, _ := ErrorCacheLoader[string](0)
		ctx := context.Background()
		futures := []Thunk[string]{}
		for i := 0; i < 2; i++ {
			futures = append(futures, errorCacheLoader.Load(ctx, strconv.Itoa(i)))
		}

		for _, f := range futures {
			_, err := f()
			if err == nil {
				t.Error("Error was not propagated")
			}
		}
		nextFuture := errorCacheLoader.Load(ctx, "1")
		_, err := nextFuture()

		// Normal errors should be cached.
		if err == nil {
			t.Error("Error from batch function was not cached")
		}
	})

	t.Run("test Load Method not caching results with errors of type SkipCacheError", func(t *testing.T) {
		t.Parallel()
		skipCacheLoader, loadCalls := SkipCacheErrorLoader(3, "1")
		ctx := context.Background()
		futures1 := skipCacheLoader.LoadMany(ctx, []string{"1", "2", "3"})
		_, errs1 := futures1()
		var errCount int = 0
		var nilCount int = 0
		for _, err := range errs1 {
			if err == nil {
				nilCount++
			} else {
				errCount++
			}
		}
		if errCount != 1 {
			t.Error("Expected an error on only key \"1\"")
		}

		if nilCount != 2 {
			t.Error("Expected the other errors to be nil")
		}

		futures2 := skipCacheLoader.LoadMany(ctx, []string{"2", "3", "1"})
		_, errs2 := futures2()
		// There should be no errors in the second batch, as the only key that was not cached
		// this time around will not throw an error
		if errs2 != nil {
			t.Error("Expected LoadMany() to return nil error slice when no errors occurred")
		}

		calls := (*loadCalls)[1]
		expected := []string{"1"}

		if !reflect.DeepEqual(calls, expected) {
			t.Errorf("Expected load calls %#v, got %#v", expected, calls)
		}
	})

	t.Run("test Load Method Panic Safety in multiple keys", func(t *testing.T) {
		t.Parallel()
		defer func() {
			r := recover()
			if r != nil {
				t.Error("Panic Loader's panic should have been handled")
			}
		}()
		panicLoader, _ := PanicCacheLoader[string](0)
		futures := []Thunk[string]{}
		ctx := context.Background()
		for i := 0; i < 3; i++ {
			futures = append(futures, panicLoader.Load(ctx, strconv.Itoa(i)))
		}
		for _, f := range futures {
			_, err := f()
			if err == nil || err.Error() != "Panic received in batch function: Programming error" {
				t.Error("Panic was not propagated as an error.")
			}
		}

		futures = []Thunk[string]{}
		for i := 0; i < 3; i++ {
			// load the same key each time so the batch holds a single key and the batch function does not panic
			futures = append(futures, panicLoader.Load(ctx, "1"))
		}

		for _, f := range futures {
			_, err := f()
			if err != nil {
				t.Error("Panic error from batch function was cached")
			}
		}
	})

	t.Run("test Load method does not create a deadlock mutex condition", func(t *testing.T) {
		t.Parallel()

		loader, _ := IDLoader(1, WithTracer[string, string](&TracerWithThunkReading[string, string]{}))

		value, err := loader.Load(context.Background(), "1")()
		if err != nil {
			t.Error(err.Error())
		}
		if value != "1" {
			t.Error("load didn't return the right value")
		}

		// If this function completes, there is no deadlock; otherwise the test would hang.
	})

	t.Run("test LoadMany returns errors", func(t *testing.T) {
		t.Parallel()
		errorLoader, _ := ErrorLoader[string](0)
		ctx := context.Background()
		future := errorLoader.LoadMany(ctx, []string{"1", "2", "3"})
		_, err := future()
		if len(err) != 3 {
			t.Error("LoadMany didn't return right number of errors")
		}
	})

	t.Run("test LoadMany returns len(errors) == len(keys)", func(t *testing.T) {
		t.Parallel()
		loader, _ := OneErrorLoader[string](3)
		ctx := context.Background()
		future := loader.LoadMany(ctx, []string{"1", "2", "3"})
		_, errs := future()
		if len(errs) != 3 {
			t.Errorf("LoadMany didn't return right number of errors (should match size of input)")
		}

		errCount := 0
		nilCount := 0
		for _, err := range errs {
			if err == nil {
				nilCount++
			} else {
				errCount++
			}
		}
		if errCount != 1 {
			t.Error("Expected an error on only one of the items loaded")
		}

		if nilCount != 2 {
			t.Error("Expected second and third errors to be nil")
		}
	})

	t.Run("test LoadMany returns nil []error when no errors occurred", func(t *testing.T) {
		t.Parallel()
		loader, _ := IDLoader[string](0)
		ctx := context.Background()
		_, err := loader.LoadMany(ctx, []string{"1", "2", "3"})()
		if err != nil {
			t.Errorf("Expected LoadMany() to return nil error slice when no errors occurred")
		}
	})

	t.Run("test LoadMany method does not create a deadlock mutex condition", func(t *testing.T) {
		t.Parallel()

		loader, _ := IDLoader(1, WithTracer[string, string](&TracerWithThunkReading[string, string]{}))

		values, errs := loader.LoadMany(context.Background(), []string{"1", "2", "3"})()
		for _, err := range errs {
			if err != nil {
				t.Error(err.Error())
			}
		}
		for _, value := range values {
			if value == "" {
				t.Error("unexpected empty value in LoadMany returned")
			}
		}

		// If this function completes, there is no deadlock; otherwise the test would hang.
	})

	t.Run("test thunkmany does not contain race conditions", func(t *testing.T) {
		t.Parallel()
		identityLoader, _ := IDLoader[string](0)
		ctx := context.Background()
		future := identityLoader.LoadMany(ctx, []string{"1", "2", "3"})
		go future()
		go future()
	})

	t.Run("test Load Many Method Panic Safety", func(t *testing.T) {
		t.Parallel()
		defer func() {
			r := recover()
			if r != nil {
				t.Error("Panic Loader's panic should have been handled")
			}
		}()
		panicLoader, _ := PanicCacheLoader[string](0)
		ctx := context.Background()
		future := panicLoader.LoadMany(ctx, []string{"1", "2"})
		_, errs := future()
		if len(errs) < 2 || errs[0].Error() != "Panic received in batch function: Programming error" {
			t.Error("Panic was not propagated as an error.")
		}

		future = panicLoader.LoadMany(ctx, []string{"1"})
		_, errs = future()

		if len(errs) > 0 {
			t.Error("Panic error from batch function was cached")
		}

	})

	t.Run("test LoadMany method", func(t *testing.T) {
		t.Parallel()
		identityLoader, _ := IDLoader[string](0)
		ctx := context.Background()
		future := identityLoader.LoadMany(ctx, []string{"1", "2", "3"})
		results, _ := future()
		if results[0] != "1" || results[1] != "2" || results[2] != "3" {
			t.Error("loadmany didn't return the right value")
		}
	})

	t.Run("batches many requests", func(t *testing.T) {
		t.Parallel()
		identityLoader, loadCalls := IDLoader[string](0)
		ctx := context.Background()
		future1 := identityLoader.Load(ctx, "1")
		future2 := identityLoader.Load(ctx, "2")

		_, err := future1()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future2()
		if err != nil {
			t.Error(err.Error())
		}

		calls := *loadCalls
		inner := []string{"1", "2"}
		expected := [][]string{inner}
		if !reflect.DeepEqual(calls, expected) {
			t.Errorf("did not call batchFn in right order. Expected %#v, got %#v", expected, calls)
		}
	})

	t.Run("number of results matches number of keys", func(t *testing.T) {
		t.Parallel()
		faultyLoader, _ := FaultyLoader[string]()
		ctx := context.Background()

		n := 10
		reqs := []Thunk[string]{}
		var keys []string
		for i := 0; i < n; i++ {
			key := strconv.Itoa(i)
			reqs = append(reqs, faultyLoader.Load(ctx, key))
			keys = append(keys, key)
		}

		for _, future := range reqs {
			_, err := future()
			if err == nil {
				t.Error("if number of results doesn't match keys, all keys should contain error")
			}
		}

		// TODO: expect to get some kind of warning
	})

	t.Run("responds to max batch size", func(t *testing.T) {
		t.Parallel()
		identityLoader, loadCalls := IDLoader[string](2)
		ctx := context.Background()
		future1 := identityLoader.Load(ctx, "1")
		future2 := identityLoader.Load(ctx, "2")
		future3 := identityLoader.Load(ctx, "3")

		_, err := future1()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future2()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future3()
		if err != nil {
			t.Error(err.Error())
		}

		calls := *loadCalls
		inner1 := []string{"1", "2"}
		inner2 := []string{"3"}
		expected := [][]string{inner1, inner2}
		if !reflect.DeepEqual(calls, expected) {
			t.Errorf("did not respect max batch size. Expected %#v, got %#v", expected, calls)
		}
	})

	t.Run("caches repeated requests", func(t *testing.T) {
		t.Parallel()
		identityLoader, loadCalls := IDLoader[string](0)
		ctx := context.Background()
		start := time.Now()
		future1 := identityLoader.Load(ctx, "1")
		future2 := identityLoader.Load(ctx, "1")

		_, err := future1()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future2()
		if err != nil {
			t.Error(err.Error())
		}

		// also check that it took the full timeout to return
		var duration = time.Since(start)
		if duration < 16*time.Millisecond {
			t.Errorf("took %v when expected it to take more than 16 ms because of wait", duration)
		}

		calls := *loadCalls
		inner := []string{"1"}
		expected := [][]string{inner}
		if !reflect.DeepEqual(calls, expected) {
			t.Errorf("did not cache repeated requests. Expected %#v, got %#v", expected, calls)
		}
	})

	t.Run("doesn't wait for timeout if Flush() is called", func(t *testing.T) {
		t.Parallel()
		identityLoader, loadCalls := IDLoader[string](0)
		ctx := context.Background()
		start := time.Now()
		future1 := identityLoader.Load(ctx, "1")
		future2 := identityLoader.Load(ctx, "2")

		// trigger them to be fetched immediately vs waiting for the 16 ms timer
		identityLoader.Flush()

		_, err := future1()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future2()
		if err != nil {
			t.Error(err.Error())
		}

		var duration = time.Since(start)
		if duration > 2*time.Millisecond {
			t.Errorf("took %v when expected it to take less than 2 ms b/c we called Flush()", duration)
		}

		calls := *loadCalls
		inner := []string{"1", "2"}
		expected := [][]string{inner}
		if !reflect.DeepEqual(calls, expected) {
			t.Errorf("did not batch requests. Expected %#v, got %#v", expected, calls)
		}
	})

	t.Run("Nothing for Flush() to do on empty loader with current batch", func(t *testing.T) {
		t.Parallel()
		identityLoader, _ := IDLoader[string](0)
		identityLoader.Flush()
	})

	t.Run("allows primed cache", func(t *testing.T) {
		t.Parallel()
		identityLoader, loadCalls := IDLoader[string](0)
		ctx := context.Background()
		identityLoader.Prime(ctx, "A", "Cached")
		future1 := identityLoader.Load(ctx, "1")
		future2 := identityLoader.Load(ctx, "A")

		_, err := future1()
		if err != nil {
			t.Error(err.Error())
		}
		value, err := future2()
		if err != nil {
			t.Error(err.Error())
		}

		calls := *loadCalls
		inner := []string{"1"}
		expected := [][]string{inner}
		if !reflect.DeepEqual(calls, expected) {
			t.Errorf("called batch function for primed key. Expected %#v, got %#v", expected, calls)
		}

		if value != "Cached" {
			t.Errorf("did not use primed cache value. Expected '%#v', got '%#v'", "Cached", value)
		}
	})

	t.Run("allows clear value in cache", func(t *testing.T) {
		t.Parallel()
		identityLoader, loadCalls := IDLoader[string](0)
		ctx := context.Background()
		identityLoader.Prime(ctx, "A", "Cached")
		identityLoader.Prime(ctx, "B", "B")
		future1 := identityLoader.Load(ctx, "1")
		future2 := identityLoader.Clear(ctx, "A").Load(ctx, "A")
		future3 := identityLoader.Load(ctx, "B")

		_, err := future1()
		if err != nil {
			t.Error(err.Error())
		}
		value, err := future2()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future3()
		if err != nil {
			t.Error(err.Error())
		}

		calls := *loadCalls
		inner := []string{"1", "A"}
		expected := [][]string{inner}
		if !reflect.DeepEqual(calls, expected) {
			t.Errorf("did not refetch cleared key. Expected %#v, got %#v", expected, calls)
		}

		if value != "A" {
			t.Errorf("did not refetch value after clear. Expected '%#v', got '%#v'", "A", value)
		}
	})

	t.Run("clears cache on batch with WithClearCacheOnBatch", func(t *testing.T) {
		t.Parallel()
		batchOnlyLoader, loadCalls := BatchOnlyLoader[string](0)
		ctx := context.Background()
		future1 := batchOnlyLoader.Load(ctx, "1")
		future2 := batchOnlyLoader.Load(ctx, "1")

		_, err := future1()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future2()
		if err != nil {
			t.Error(err.Error())
		}

		calls := *loadCalls
		inner := []string{"1"}
		expected := [][]string{inner}
		if !reflect.DeepEqual(calls, expected) {
			t.Errorf("did not batch queries. Expected %#v, got %#v", expected, calls)
		}

		if _, found := batchOnlyLoader.cache.Get(ctx, "1"); found {
			t.Errorf("did not clear cache after batch. Expected %#v, got %#v", false, found)
		}
	})

	t.Run("allows clearAll values in cache", func(t *testing.T) {
		t.Parallel()
		identityLoader, loadCalls := IDLoader[string](0)
		ctx := context.Background()
		identityLoader.Prime(ctx, "A", "Cached")
		identityLoader.Prime(ctx, "B", "B")

		identityLoader.ClearAll()

		future1 := identityLoader.Load(ctx, "1")
		future2 := identityLoader.Load(ctx, "A")
		future3 := identityLoader.Load(ctx, "B")

		_, err := future1()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future2()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future3()
		if err != nil {
			t.Error(err.Error())
		}

		calls := *loadCalls
		inner := []string{"1", "A", "B"}
		expected := [][]string{inner}
		if !reflect.DeepEqual(calls, expected) {
			t.Errorf("did not refetch keys after ClearAll. Expected %#v, got %#v", expected, calls)
		}
	})

	t.Run("all methods on NoCache are Noops", func(t *testing.T) {
		t.Parallel()
		identityLoader, loadCalls := NoCacheLoader[string](0)
		ctx := context.Background()
		identityLoader.Prime(ctx, "A", "Cached")
		identityLoader.Prime(ctx, "B", "B")

		identityLoader.ClearAll()

		future1 := identityLoader.Clear(ctx, "1").Load(ctx, "1")
		future2 := identityLoader.Load(ctx, "A")
		future3 := identityLoader.Load(ctx, "B")

		_, err := future1()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future2()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future3()
		if err != nil {
			t.Error(err.Error())
		}

		calls := *loadCalls
		inner := []string{"1", "A", "B"}
		expected := [][]string{inner}
		if !reflect.DeepEqual(calls, expected) {
			t.Errorf("did not pass all keys to batch function. Expected %#v, got %#v", expected, calls)
		}
	})

	t.Run("no cache does not cache anything", func(t *testing.T) {
		t.Parallel()
		identityLoader, loadCalls := NoCacheLoader[string](0)
		ctx := context.Background()
		identityLoader.Prime(ctx, "A", "Cached")
		identityLoader.Prime(ctx, "B", "B")

		future1 := identityLoader.Load(ctx, "1")
		future2 := identityLoader.Load(ctx, "A")
		future3 := identityLoader.Load(ctx, "B")

		_, err := future1()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future2()
		if err != nil {
			t.Error(err.Error())
		}
		_, err = future3()
		if err != nil {
			t.Error(err.Error())
		}

		calls := *loadCalls
		inner := []string{"1", "A", "B"}
		expected := [][]string{inner}
		if !reflect.DeepEqual(calls, expected) {
			t.Errorf("did not pass all keys to batch function. Expected %#v, got %#v", expected, calls)
		}
	})

}

// test helpers
func IDLoader[K comparable](max int, options ...Option[K, K]) (*Loader[K, K], *[][]K) {
	var mu sync.Mutex
	var loadCalls [][]K
	identityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {
		var results []*Result[K]
		mu.Lock()
		loadCalls = append(loadCalls, keys)
		mu.Unlock()
		for _, key := range keys {
			results = append(results, &Result[K]{key, nil})
		}
		return results
	}, append([]Option[K, K]{WithBatchCapacity[K, K](max)}, options...)...)
	return identityLoader, &loadCalls
}
func BatchOnlyLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
	var mu sync.Mutex
	var loadCalls [][]K
	identityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {
		var results []*Result[K]
		mu.Lock()
		loadCalls = append(loadCalls, keys)
		mu.Unlock()
		for _, key := range keys {
			results = append(results, &Result[K]{key, nil})
		}
		return results
	}, WithBatchCapacity[K, K](max), WithClearCacheOnBatch[K, K]())
	return identityLoader, &loadCalls
}
func ErrorLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
	var mu sync.Mutex
	var loadCalls [][]K
	identityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {
		var results []*Result[K]
		mu.Lock()
		loadCalls = append(loadCalls, keys)
		mu.Unlock()
		for _, key := range keys {
			results = append(results, &Result[K]{key, fmt.Errorf("this is a test error")})
		}
		return results
	}, WithBatchCapacity[K, K](max))
	return identityLoader, &loadCalls
}
func OneErrorLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
	var mu sync.Mutex
	var loadCalls [][]K
	identityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {
		results := make([]*Result[K], len(keys))
		mu.Lock()
		loadCalls = append(loadCalls, keys)
		mu.Unlock()
		for i := range keys {
			var err error
			if i == 0 {
				err = errors.New("always error on the first key")
			}
			results[i] = &Result[K]{keys[i], err}
		}
		return results
	}, WithBatchCapacity[K, K](max))
	return identityLoader, &loadCalls
}
func PanicLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
	var loadCalls [][]K
	panicLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {
		panic("Programming error")
	}, WithBatchCapacity[K, K](max), withSilentLogger[K, K]())
	return panicLoader, &loadCalls
}

func PanicCacheLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
	var loadCalls [][]K
	panicCacheLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {
		if len(keys) > 1 {
			panic("Programming error")
		}

		returnResult := make([]*Result[K], len(keys))
		for idx := range returnResult {
			returnResult[idx] = &Result[K]{
				keys[0],
				nil,
			}
		}

		return returnResult

	}, WithBatchCapacity[K, K](max), withSilentLogger[K, K]())
	return panicCacheLoader, &loadCalls
}

func ErrorCacheLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
	var loadCalls [][]K
	errorCacheLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {
		if len(keys) > 1 {
			var results []*Result[K]
			for _, key := range keys {
				results = append(results, &Result[K]{key, fmt.Errorf("this is a test error")})
			}
			return results
		}

		returnResult := make([]*Result[K], len(keys))
		for idx := range returnResult {
			returnResult[idx] = &Result[K]{
				keys[0],
				nil,
			}
		}

		return returnResult

	}, WithBatchCapacity[K, K](max), withSilentLogger[K, K]())
	return errorCacheLoader, &loadCalls
}

func SkipCacheErrorLoader[K comparable](max int, onceErrorKey K) (*Loader[K, K], *[][]K) {
	var mu sync.Mutex
	var loadCalls [][]K
	errorThrown := false
	skipCacheErrorLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {
		var results []*Result[K]
		mu.Lock()
		loadCalls = append(loadCalls, keys)
		mu.Unlock()
		// return a non-cacheable error for the first occurrence of onceErrorKey
		for _, k := range keys {
			if !errorThrown && k == onceErrorKey {
				results = append(results, &Result[K]{k, NewSkipCacheError(fmt.Errorf("non cacheable error"))})
				errorThrown = true
			} else {
				results = append(results, &Result[K]{k, nil})
			}
		}

		return results
	}, WithBatchCapacity[K, K](max))
	return skipCacheErrorLoader, &loadCalls
}

func BadLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
	var mu sync.Mutex
	var loadCalls [][]K
	identityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {
		var results []*Result[K]
		mu.Lock()
		loadCalls = append(loadCalls, keys)
		mu.Unlock()
		results = append(results, &Result[K]{keys[0], nil})
		return results
	}, WithBatchCapacity[K, K](max))
	return identityLoader, &loadCalls
}

func NoCacheLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
	var mu sync.Mutex
	var loadCalls [][]K
	cache := &NoCache[K, K]{}
	identityLoader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {
		var results []*Result[K]
		mu.Lock()
		loadCalls = append(loadCalls, keys)
		mu.Unlock()
		for _, key := range keys {
			results = append(results, &Result[K]{key, nil})
		}
		return results
	}, WithCache[K, K](cache), WithBatchCapacity[K, K](max))
	return identityLoader, &loadCalls
}

// FaultyLoader gives len(keys)-1 results.
func FaultyLoader[K comparable]() (*Loader[K, K], *[][]K) {
	var mu sync.Mutex
	var loadCalls [][]K

	loader := NewBatchedLoader(func(_ context.Context, keys []K) []*Result[K] {
		var results []*Result[K]
		mu.Lock()
		loadCalls = append(loadCalls, keys)
		mu.Unlock()

		lastKeyIndex := len(keys) - 1
		for i, key := range keys {
			if i == lastKeyIndex {
				break
			}

			results = append(results, &Result[K]{key, nil})
		}
		return results
	})

	return loader, &loadCalls
}

type TracerWithThunkReading[K comparable, V any] struct{}

var _ Tracer[string, struct{}] = (*TracerWithThunkReading[string, struct{}])(nil)

func (_ *TracerWithThunkReading[K, V]) TraceLoad(ctx context.Context, key K) (context.Context, TraceLoadFinishFunc[V]) {
	return ctx, func(thunk Thunk[V]) {
		_, _ = thunk()
	}
}

func (_ *TracerWithThunkReading[K, V]) TraceLoadMany(ctx context.Context, keys []K) (context.Context, TraceLoadManyFinishFunc[V]) {
	return ctx, func(thunks ThunkMany[V]) {
		_, _ = thunks()
	}
}

func (_ *TracerWithThunkReading[K, V]) TraceBatch(ctx context.Context, keys []K) (context.Context, TraceBatchFinishFunc[V]) {
	return ctx, func(thunks []*Result[V]) {
		//
	}
}

/*
Benchmarks
*/
var a = &Avg{}

func batchIdentity[K comparable](_ context.Context, keys []K) (results []*Result[K]) {
	a.Add(len(keys))
	for _, key := range keys {
		results = append(results, &Result[K]{key, nil})
	}
	return
}

var _ctx = context.Background()

func BenchmarkLoader(b *testing.B) {
	UserLoader := NewBatchedLoader(batchIdentity[string])
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		UserLoader.Load(_ctx, (strconv.Itoa(i)))
	}
	log.Printf("avg: %f", a.Avg())
}

type Avg struct {
	total  float64
	length float64
	lock   sync.RWMutex
}

func (a *Avg) Add(v int) {
	a.lock.Lock()
	a.total += float64(v)
	a.length++
	a.lock.Unlock()
}

func (a *Avg) Avg() float64 {
	a.lock.RLock()
	defer a.lock.RUnlock()
	if a.total == 0 {
		return 0
	} else if a.length == 0 {
		return 0
	}
	return a.total / a.length
}


================================================
FILE: example/lru_cache/golang_lru_test.go
================================================
// package lru_cache_test contains an example of using golang-lru as a long-term cache solution for dataloader.
package lru_cache_test

import (
	"context"
	"fmt"

	dataloader "github.com/graph-gophers/dataloader/v7"

	lru "github.com/hashicorp/golang-lru"
)

// Cache implements the dataloader.Cache interface
type cache[K comparable, V any] struct {
	*lru.ARCCache
}

// Get gets an item from the cache
func (c *cache[K, V]) Get(_ context.Context, key K) (dataloader.Thunk[V], bool) {
	v, ok := c.ARCCache.Get(key)
	if ok {
		return v.(dataloader.Thunk[V]), ok
	}
	return nil, ok
}

// Set sets an item in the cache
func (c *cache[K, V]) Set(_ context.Context, key K, value dataloader.Thunk[V]) {
	c.ARCCache.Add(key, value)
}

// Delete deletes an item in the cache
func (c *cache[K, V]) Delete(_ context.Context, key K) bool {
	if c.ARCCache.Contains(key) {
		c.ARCCache.Remove(key)
		return true
	}
	return false
}

// Clear clears the cache
func (c *cache[K, V]) Clear() {
	c.ARCCache.Purge()
}

func ExampleGolangLRU() {
	type User struct {
		ID        int
		Email     string
		FirstName string
		LastName  string
	}

	m := map[int]*User{
		5: {ID: 5, FirstName: "John", LastName: "Smith", Email: "john@example.com"},
	}

	batchFunc := func(_ context.Context, keys []int) []*dataloader.Result[*User] {
		var results []*dataloader.Result[*User]
		// do some pretend work to resolve keys
		for _, k := range keys {
			results = append(results, &dataloader.Result[*User]{Data: m[k]})
		}
		return results
	}

	// create an LRU (ARC) cache that holds at most 100 entries.
	c, _ := lru.NewARC(100)
	cache := &cache[int, *User]{ARCCache: c}
	loader := dataloader.NewBatchedLoader(batchFunc, dataloader.WithCache[int, *User](cache))

	// immediately call the future function from loader
	result, err := loader.Load(context.TODO(), 5)()
	if err != nil {
		// handle error
	}

	fmt.Printf("result: %+v", result)
	// Output: result: &{ID:5 Email:john@example.com FirstName:John LastName:Smith}
}


================================================
FILE: example/no_cache/no_cache_test.go
================================================
package no_cache_test

import (
	"context"
	"fmt"

	dataloader "github.com/graph-gophers/dataloader/v7"
)

func ExampleNoCache() {
	type User struct {
		ID        int
		Email     string
		FirstName string
		LastName  string
	}

	m := map[int]*User{
		5: {ID: 5, FirstName: "John", LastName: "Smith", Email: "john@example.com"},
	}

	batchFunc := func(_ context.Context, keys []int) []*dataloader.Result[*User] {
		var results []*dataloader.Result[*User]
		// do some pretend work to resolve keys
		for _, k := range keys {
			results = append(results, &dataloader.Result[*User]{Data: m[k]})
		}
		return results
	}

	// NoCache disables caching entirely; every Load call reaches the batch function.
	cache := &dataloader.NoCache[int, *User]{}
	loader := dataloader.NewBatchedLoader(batchFunc, dataloader.WithCache[int, *User](cache))

	result, err := loader.Load(context.Background(), 5)()
	if err != nil {
		// handle error
	}

	fmt.Printf("result: %+v", result)
	// Output: result: &{ID:5 Email:john@example.com FirstName:John LastName:Smith}
}


================================================
FILE: example/ttl_cache/go_cache_test.go
================================================
// package ttl_cache_test contains an example of using go-cache as a long term cache solution for dataloader.
package ttl_cache_test

import (
	"context"
	"fmt"
	"time"

	dataloader "github.com/graph-gophers/dataloader/v7"

	cache "github.com/patrickmn/go-cache"
)

// Cache implements the dataloader.Cache interface
type Cache[K comparable, V any] struct {
	c *cache.Cache
}

// Get gets a value from the cache
func (c *Cache[K, V]) Get(_ context.Context, key K) (dataloader.Thunk[V], bool) {
	k := fmt.Sprintf("%v", key) // convert the key to string because the underlying library doesn't support Generics yet
	v, ok := c.c.Get(k)
	if ok {
		return v.(dataloader.Thunk[V]), ok
	}
	return nil, ok
}

// Set sets a value in the cache
func (c *Cache[K, V]) Set(_ context.Context, key K, value dataloader.Thunk[V]) {
	k := fmt.Sprintf("%v", key) // convert the key to string because the underlying library doesn't support Generics yet
	c.c.Set(k, value, 0)
}

// Delete deletes an item in the cache
func (c *Cache[K, V]) Delete(_ context.Context, key K) bool {
	k := fmt.Sprintf("%v", key) // convert the key to string because the underlying library doesn't support Generics yet
	if _, found := c.c.Get(k); found {
		c.c.Delete(k)
		return true
	}
	return false
}

// Clear clears the cache
func (c *Cache[K, V]) Clear() {
	c.c.Flush()
}

func ExampleTTLCache() {
	type User struct {
		ID        int
		Email     string
		FirstName string
		LastName  string
	}

	m := map[int]*User{
		5: {ID: 5, FirstName: "John", LastName: "Smith", Email: "john@example.com"},
	}

	batchFunc := func(_ context.Context, keys []int) []*dataloader.Result[*User] {
		var results []*dataloader.Result[*User]
		// do some pretend work to resolve keys
		for _, k := range keys {
			results = append(results, &dataloader.Result[*User]{Data: m[k]})
		}
		return results
	}

	// go-cache automatically cleans up expired items at the given interval
	c := cache.New(15*time.Minute, 15*time.Minute)
	cache := &Cache[int, *User]{c}
	loader := dataloader.NewBatchedLoader(batchFunc, dataloader.WithCache[int, *User](cache))

	// immediately call the future function from loader
	result, err := loader.Load(context.Background(), 5)()
	if err != nil {
		// handle error
	}

	fmt.Printf("result: %+v", result)
	// Output: result: &{ID:5 Email:john@example.com FirstName:John LastName:Smith}
}


================================================
FILE: go.mod
================================================
module github.com/graph-gophers/dataloader/v7

go 1.19

require (
	github.com/hashicorp/golang-lru v0.5.4
	github.com/opentracing/opentracing-go v1.2.0
	github.com/patrickmn/go-cache v2.1.0+incompatible
	go.opentelemetry.io/otel v1.6.3
	go.opentelemetry.io/otel/trace v1.6.3
)

require (
	github.com/go-logr/logr v1.2.3 // indirect
	github.com/go-logr/stdr v1.2.2 // indirect
)


================================================
FILE: go.sum
================================================
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.3 h1:2DntVwHkVopvECVRSlL5PSo9eG+cAkDCuckLubN+rq0=
github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/google/go-cmp v0.5.7 h1:81/ik6ipDQS2aGcBfIN5dHDB36BwrStyeAQquSYCV4o=
github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=
github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs=
github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
go.opentelemetry.io/otel v1.6.3 h1:FLOfo8f9JzFVFVyU+MSRJc2HdEAXQgm7pIv2uFKRSZE=
go.opentelemetry.io/otel v1.6.3/go.mod h1:7BgNga5fNlF/iZjG06hM3yofffp0ofKCDwSXx1GC4dI=
go.opentelemetry.io/otel/trace v1.6.3 h1:IqN4L+5b0mPNjdXIiZ90Ni4Bl5BRkDQywePLWemd9bc=
go.opentelemetry.io/otel/trace v1.6.3/go.mod h1:GNJQusJlUgZl9/TQBPKU/Y/ty+0iVB5fjhKeJGZPGFs=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


================================================
FILE: in_memory_cache.go
================================================
package dataloader

import (
	"context"
	"sync"
)

// InMemoryCache is an in memory implementation of Cache interface.
// This simple implementation is well suited for
// a "per-request" dataloader (i.e. one that only lives
// for the life of an http request) but it's not well suited
// for long lived cached items.
type InMemoryCache[K comparable, V any] struct {
	items map[K]Thunk[V]
	mu    sync.RWMutex
}

// NewCache constructs a new InMemoryCache
func NewCache[K comparable, V any]() *InMemoryCache[K, V] {
	items := make(map[K]Thunk[V])
	return &InMemoryCache[K, V]{
		items: items,
	}
}

// Set sets the `value` at `key` in the cache
func (c *InMemoryCache[K, V]) Set(_ context.Context, key K, value Thunk[V]) {
	c.mu.Lock()
	c.items[key] = value
	c.mu.Unlock()
}

// Get gets the value at `key` if it exists; it returns the value (or nil) and a bool
// indicating whether the value was found
func (c *InMemoryCache[K, V]) Get(_ context.Context, key K) (Thunk[V], bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()

	item, found := c.items[key]
	if !found {
		return nil, false
	}

	return item, true
}

// Delete deletes the item at `key` from the cache
func (c *InMemoryCache[K, V]) Delete(ctx context.Context, key K) bool {
	if _, found := c.Get(ctx, key); found {
		c.mu.Lock()
		defer c.mu.Unlock()
		delete(c.items, key)
		return true
	}
	return false
}

// Clear clears the entire cache
func (c *InMemoryCache[K, V]) Clear() {
	c.mu.Lock()
	c.items = map[K]Thunk[V]{}
	c.mu.Unlock()
}


================================================
FILE: trace/opentracing/trace.go
================================================
package opentracing

import (
	"context"
	"fmt"

	"github.com/graph-gophers/dataloader/v7"

	"github.com/opentracing/opentracing-go"
)

// Tracer implements a tracer that can be used with the OpenTracing standard.
type Tracer[K comparable, V any] struct{}

// TraceLoad will trace a call to dataloader.Load with OpenTracing.
func (Tracer[K, V]) TraceLoad(ctx context.Context, key K) (context.Context, dataloader.TraceLoadFinishFunc[V]) {
	span, spanCtx := opentracing.StartSpanFromContext(ctx, "Dataloader: load")

	span.SetTag("dataloader.key", fmt.Sprintf("%v", key))

	return spanCtx, func(thunk dataloader.Thunk[V]) {
		span.Finish()
	}
}

// TraceLoadMany will trace a call to dataloader.LoadMany with OpenTracing.
func (Tracer[K, V]) TraceLoadMany(ctx context.Context, keys []K) (context.Context, dataloader.TraceLoadManyFinishFunc[V]) {
	span, spanCtx := opentracing.StartSpanFromContext(ctx, "Dataloader: loadmany")

	span.SetTag("dataloader.keys", fmt.Sprintf("%v", keys))

	return spanCtx, func(thunk dataloader.ThunkMany[V]) {
		span.Finish()
	}
}

// TraceBatch will trace the execution of a batch with OpenTracing.
func (Tracer[K, V]) TraceBatch(ctx context.Context, keys []K) (context.Context, dataloader.TraceBatchFinishFunc[V]) {
	span, spanCtx := opentracing.StartSpanFromContext(ctx, "Dataloader: batch")

	span.SetTag("dataloader.keys", fmt.Sprintf("%v", keys))

	return spanCtx, func(results []*dataloader.Result[V]) {
		span.Finish()
	}
}


================================================
FILE: trace/opentracing/trace_test.go
================================================
package opentracing_test

import (
	"testing"

	"github.com/graph-gophers/dataloader/v7"
	"github.com/graph-gophers/dataloader/v7/trace/opentracing"
)

func TestInterfaceImplementation(t *testing.T) {
	type User struct {
		ID        uint
		FirstName string
		LastName  string
		Email     string
	}
	var _ dataloader.Tracer[string, int] = opentracing.Tracer[string, int]{}
	var _ dataloader.Tracer[string, string] = opentracing.Tracer[string, string]{}
	var _ dataloader.Tracer[uint, User] = opentracing.Tracer[uint, User]{}
	// check compatibility with loader options
	dataloader.WithTracer[uint, User](&opentracing.Tracer[uint, User]{})
}


================================================
FILE: trace/otel/trace.go
================================================
package otel

import (
	"context"
	"fmt"

	"github.com/graph-gophers/dataloader/v7"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

// Tracer implements a tracer that can be used with OpenTelemetry.
type Tracer[K comparable, V any] struct {
	tr trace.Tracer
}

func NewTracer[K comparable, V any](tr trace.Tracer) *Tracer[K, V] {
	return &Tracer[K, V]{tr: tr}
}

func (t *Tracer[K, V]) Tracer() trace.Tracer {
	if t.tr != nil {
		return t.tr
	}
	return otel.Tracer("graph-gophers/dataloader")
}

// TraceLoad will trace a call to dataloader.Load with OpenTelemetry.
func (t Tracer[K, V]) TraceLoad(ctx context.Context, key K) (context.Context, dataloader.TraceLoadFinishFunc[V]) {
	spanCtx, span := t.Tracer().Start(ctx, "Dataloader: load")

	span.SetAttributes(attribute.String("dataloader.key", fmt.Sprintf("%v", key)))

	return spanCtx, func(thunk dataloader.Thunk[V]) {
		span.End()
	}
}

// TraceLoadMany will trace a call to dataloader.LoadMany with OpenTelemetry.
func (t Tracer[K, V]) TraceLoadMany(ctx context.Context, keys []K) (context.Context, dataloader.TraceLoadManyFinishFunc[V]) {
	spanCtx, span := t.Tracer().Start(ctx, "Dataloader: loadmany")

	span.SetAttributes(attribute.String("dataloader.keys", fmt.Sprintf("%v", keys)))

	return spanCtx, func(thunk dataloader.ThunkMany[V]) {
		span.End()
	}
}

// TraceBatch will trace the execution of a batch with OpenTelemetry.
func (t Tracer[K, V]) TraceBatch(ctx context.Context, keys []K) (context.Context, dataloader.TraceBatchFinishFunc[V]) {
	spanCtx, span := t.Tracer().Start(ctx, "Dataloader: batch")

	span.SetAttributes(attribute.String("dataloader.keys", fmt.Sprintf("%v", keys)))

	return spanCtx, func(results []*dataloader.Result[V]) {
		span.End()
	}
}


================================================
FILE: trace/otel/trace_test.go
================================================
package otel_test

import (
	"testing"

	"github.com/graph-gophers/dataloader/v7"
	"github.com/graph-gophers/dataloader/v7/trace/otel"
)

func TestInterfaceImplementation(t *testing.T) {
	type User struct {
		ID        uint
		FirstName string
		LastName  string
		Email     string
	}
	var _ dataloader.Tracer[string, int] = otel.Tracer[string, int]{}
	var _ dataloader.Tracer[string, string] = otel.Tracer[string, string]{}
	var _ dataloader.Tracer[uint, User] = otel.Tracer[uint, User]{}
	// check compatibility with loader options
	dataloader.WithTracer[uint, User](&otel.Tracer[uint, User]{})
}


================================================
FILE: trace.go
================================================
package dataloader

import (
	"context"
)

type TraceLoadFinishFunc[V any] func(Thunk[V])
type TraceLoadManyFinishFunc[V any] func(ThunkMany[V])
type TraceBatchFinishFunc[V any] func([]*Result[V])

// Tracer is an interface that may be used to implement tracing.
type Tracer[K comparable, V any] interface {
	// TraceLoad will trace the calls to Load.
	TraceLoad(ctx context.Context, key K) (context.Context, TraceLoadFinishFunc[V])
	// TraceLoadMany will trace the calls to LoadMany.
	TraceLoadMany(ctx context.Context, keys []K) (context.Context, TraceLoadManyFinishFunc[V])
	// TraceBatch will trace data loader batches.
	TraceBatch(ctx context.Context, keys []K) (context.Context, TraceBatchFinishFunc[V])
}

// NoopTracer is the default (noop) tracer
type NoopTracer[K comparable, V any] struct{}

// TraceLoad is a noop function
func (NoopTracer[K, V]) TraceLoad(ctx context.Context, key K) (context.Context, TraceLoadFinishFunc[V]) {
	return ctx, func(Thunk[V]) {}
}

// TraceLoadMany is a noop function
func (NoopTracer[K, V]) TraceLoadMany(ctx context.Context, keys []K) (context.Context, TraceLoadManyFinishFunc[V]) {
	return ctx, func(ThunkMany[V]) {}
}

// TraceBatch is a noop function
func (NoopTracer[K, V]) TraceBatch(ctx context.Context, keys []K) (context.Context, TraceBatchFinishFunc[V]) {
	return ctx, func(result []*Result[V]) {}
}
SYMBOL INDEX (102 symbols across 12 files)

FILE: cache.go
  type Cache (line 6) | type Cache interface
  type NoCache (line 16) | type NoCache struct
  method Get (line 19) | func (c *NoCache[K, V]) Get(context.Context, K) (Thunk[V], bool) { retur...
  method Set (line 22) | func (c *NoCache[K, V]) Set(context.Context, K, Thunk[V]) { return }
  method Delete (line 25) | func (c *NoCache[K, V]) Delete(context.Context, K) bool { return false }
  method Clear (line 28) | func (c *NoCache[K, V]) Clear() { return }

FILE: dataloader.go
  type Interface (line 24) | type Interface interface
  type BatchFunc (line 41) | type BatchFunc
  type Result (line 45) | type Result struct
  type ResultMany (line 53) | type ResultMany struct
  type PanicErrorWrapper (line 61) | type PanicErrorWrapper struct
    method Error (line 65) | func (p *PanicErrorWrapper) Error() string {
  type SkipCacheError (line 71) | type SkipCacheError struct
    method Error (line 75) | func (s *SkipCacheError) Error() string {
    method Unwrap (line 79) | func (s *SkipCacheError) Unwrap() error {
  function NewSkipCacheError (line 83) | func NewSkipCacheError(err error) *SkipCacheError {
  type Loader (line 88) | type Loader struct
  type Thunk (line 132) | type Thunk
  type ThunkMany (line 135) | type ThunkMany
  type batchRequest (line 138) | type batchRequest struct
  type Option (line 145) | type Option
  function WithCache (line 148) | func WithCache[K comparable, V any](c Cache[K, V]) Option[K, V] {
  function WithBatchCapacity (line 155) | func WithBatchCapacity[K comparable, V any](c int) Option[K, V] {
  function WithInputCapacity (line 162) | func WithInputCapacity[K comparable, V any](c int) Option[K, V] {
  function WithWait (line 170) | func WithWait[K comparable, V any](d time.Duration) Option[K, V] {
  function WithClearCacheOnBatch (line 178) | func WithClearCacheOnBatch[K comparable, V any]() Option[K, V] {
  function withSilentLogger (line 187) | func withSilentLogger[K comparable, V any]() Option[K, V] {
  function WithTracer (line 194) | func WithTracer[K comparable, V any](tracer Tracer[K, V]) Option[K, V] {
  function NewBatchedLoader (line 201) | func NewBatchedLoader[K comparable, V any](batchFn BatchFunc[K, V], opts...
  method Load (line 228) | func (l *Loader[K, V]) Load(originalContext context.Context, key K) Thun...
  method flush (line 305) | func (l *Loader[K, V]) flush() {
  method Flush (line 316) | func (l *Loader[K, V]) Flush() {
  method LoadMany (line 326) | func (l *Loader[K, V]) LoadMany(originalContext context.Context, keys []...
  method Clear (line 379) | func (l *Loader[K, V]) Clear(ctx context.Context, key K) Interface[K, V] {
  method ClearAll (line 388) | func (l *Loader[K, V]) ClearAll() Interface[K, V] {
  method Prime (line 397) | func (l *Loader[K, V]) Prime(ctx context.Context, key K, value V) Interf...
  method reset (line 407) | func (l *Loader[K, V]) reset() {
  type batcher (line 416) | type batcher struct
  method newBatcher (line 426) | func (l *Loader[K, V]) newBatcher(silent bool, tracer Tracer[K, V]) *bat...
  method end (line 436) | func (b *batcher[K, V]) end() {
  method batch (line 444) | func (b *batcher[K, V]) batch(originalContext context.Context) {
  method sleeper (line 519) | func (l *Loader[K, V]) sleeper(b *batcher[K, V], close chan bool) {

FILE: dataloader_test.go
  function TestLoader (line 18) | func TestLoader(t *testing.T) {
  function IDLoader (line 629) | func IDLoader[K comparable](max int, options ...Option[K, K]) (*Loader[K...
  function BatchOnlyLoader (line 644) | func BatchOnlyLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
  function ErrorLoader (line 659) | func ErrorLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
  function OneErrorLoader (line 674) | func OneErrorLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
  function PanicLoader (line 693) | func PanicLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
  function PanicCacheLoader (line 701) | func PanicCacheLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
  function ErrorCacheLoader (line 722) | func ErrorCacheLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
  function SkipCacheErrorLoader (line 747) | func SkipCacheErrorLoader[K comparable](max int, onceErrorKey K) (*Loade...
  function BadLoader (line 771) | func BadLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
  function NoCacheLoader (line 785) | func NoCacheLoader[K comparable](max int) (*Loader[K, K], *[][]K) {
  function FaultyLoader (line 803) | func FaultyLoader[K comparable]() (*Loader[K, K], *[][]K) {
  type TracerWithThunkReading (line 827) | type TracerWithThunkReading struct
  method TraceLoad (line 831) | func (_ *TracerWithThunkReading[K, V]) TraceLoad(ctx context.Context, ke...
  method TraceLoadMany (line 837) | func (_ *TracerWithThunkReading[K, V]) TraceLoadMany(ctx context.Context...
  method TraceBatch (line 843) | func (_ *TracerWithThunkReading[K, V]) TraceBatch(ctx context.Context, k...
  function batchIdentity (line 854) | func batchIdentity[K comparable](_ context.Context, keys []K) (results [...
  function BenchmarkLoader (line 864) | func BenchmarkLoader(b *testing.B) {
  type Avg (line 873) | type Avg struct
    method Add (line 879) | func (a *Avg) Add(v int) {
    method Avg (line 886) | func (a *Avg) Avg() float64 {

FILE: example/lru_cache/golang_lru_test.go
  type cache (line 14) | type cache struct
  method Get (line 19) | func (c *cache[K, V]) Get(_ context.Context, key K) (dataloader.Thunk[V]...
  method Set (line 28) | func (c *cache[K, V]) Set(_ context.Context, key K, value dataloader.Thu...
  method Delete (line 33) | func (c *cache[K, V]) Delete(_ context.Context, key K) bool {
  method Clear (line 42) | func (c *cache[K, V]) Clear() {
  function ExampleGolangLRU (line 46) | func ExampleGolangLRU() {

FILE: example/no_cache/no_cache_test.go
  function ExampleNoCache (line 10) | func ExampleNoCache() {

FILE: example/ttl_cache/go_cache_test.go
  type Cache (line 15) | type Cache struct
  method Get (line 20) | func (c *Cache[K, V]) Get(_ context.Context, key K) (dataloader.Thunk[V]...
  method Set (line 30) | func (c *Cache[K, V]) Set(_ context.Context, key K, value dataloader.Thu...
  method Delete (line 36) | func (c *Cache[K, V]) Delete(_ context.Context, key K) bool {
  method Clear (line 46) | func (c *Cache[K, V]) Clear() {
  function ExampleTTLCache (line 50) | func ExampleTTLCache() {

FILE: in_memory_cache.go
  type InMemoryCache (line 13) | type InMemoryCache struct
  function NewCache (line 19) | func NewCache[K comparable, V any]() *InMemoryCache[K, V] {
  method Set (line 27) | func (c *InMemoryCache[K, V]) Set(_ context.Context, key K, value Thunk[...
  method Get (line 35) | func (c *InMemoryCache[K, V]) Get(_ context.Context, key K) (Thunk[V], b...
  method Delete (line 48) | func (c *InMemoryCache[K, V]) Delete(ctx context.Context, key K) bool {
  method Clear (line 59) | func (c *InMemoryCache[K, V]) Clear() {

FILE: trace.go
  type TraceLoadFinishFunc (line 7) | type TraceLoadFinishFunc
  type TraceLoadManyFinishFunc (line 8) | type TraceLoadManyFinishFunc
  type TraceBatchFinishFunc (line 9) | type TraceBatchFinishFunc
  type Tracer (line 12) | type Tracer interface
  type NoopTracer (line 22) | type NoopTracer struct
  method TraceLoad (line 25) | func (NoopTracer[K, V]) TraceLoad(ctx context.Context, key K) (context.C...
  method TraceLoadMany (line 30) | func (NoopTracer[K, V]) TraceLoadMany(ctx context.Context, keys []K) (co...
  method TraceBatch (line 35) | func (NoopTracer[K, V]) TraceBatch(ctx context.Context, keys []K) (conte...

FILE: trace/opentracing/trace.go
  type Tracer (line 13) | type Tracer struct
  method TraceLoad (line 16) | func (Tracer[K, V]) TraceLoad(ctx context.Context, key K) (context.Conte...
  method TraceLoadMany (line 27) | func (Tracer[K, V]) TraceLoadMany(ctx context.Context, keys []K) (contex...
  method TraceBatch (line 38) | func (Tracer[K, V]) TraceBatch(ctx context.Context, keys []K) (context.C...

FILE: trace/opentracing/trace_test.go
  function TestInterfaceImplementation (line 10) | func TestInterfaceImplementation(t *testing.T) {

FILE: trace/otel/trace.go
  type Tracer (line 15) | type Tracer struct
  function NewTracer (line 19) | func NewTracer[K comparable, V any](tr trace.Tracer) *Tracer[K, V] {
  method Tracer (line 23) | func (t *Tracer[K, V]) Tracer() trace.Tracer {
  method TraceLoad (line 31) | func (t Tracer[K, V]) TraceLoad(ctx context.Context, key K) (context.Con...
  method TraceLoadMany (line 42) | func (t Tracer[K, V]) TraceLoadMany(ctx context.Context, keys []K) (cont...
  method TraceBatch (line 53) | func (t Tracer[K, V]) TraceBatch(ctx context.Context, keys []K) (context...

FILE: trace/otel/trace_test.go
  function TestInterfaceImplementation (line 10) | func TestInterfaceImplementation(t *testing.T) {
