Full Code of ExaScience/pargo for AI

Repository: ExaScience/pargo
Branch: master
Commit: b66d9800f997
Files: 27
Total size: 206.9 KB

Directory structure:
pargo/

├── .gitignore
├── LICENSE
├── README.md
├── doc.go
├── go.mod
├── go.sum
├── internal/
│   └── internal.go
├── parallel/
│   ├── example_heatdistribution_test.go
│   ├── parallel.go
│   └── parallel_test.go
├── pipeline/
│   ├── example_wordcount_test.go
│   ├── filter.go
│   ├── filters.go
│   ├── lparnode.go
│   ├── parnode.go
│   ├── pipeline.go
│   ├── seqnode.go
│   ├── source.go
│   └── strictordnode.go
├── sequential/
│   └── sequential.go
├── sort/
│   ├── example_interface_test.go
│   ├── mergesort.go
│   ├── quicksort.go
│   ├── sort.go
│   └── sort_test.go
├── speculative/
│   └── speculative.go
└── sync/
    └── map.go

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Binaries for programs and plugins
*.exe
*.dll
*.so
*.dylib

# Test binary, build with `go test -c`
*.test

# Output of the go coverage tool, specifically when used with LiteIDE
*.out

# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
.glide/


================================================
FILE: LICENSE
================================================
BSD 3-Clause License

Copyright (c) 2017, Imec
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its
  contributors may be used to endorse or promote products derived from
  this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


================================================
FILE: README.md
================================================
# pargo
## A library for parallel programming in Go

Package pargo provides functions and data structures for expressing
parallel algorithms. While Go is primarily designed for concurrent
programming, it is also usable to some extent for parallel
programming, and this library provides convenience functionality to
turn otherwise sequential algorithms into parallel algorithms, with
the goal of improving performance.

Documentation: [http://godoc.org/github.com/ExaScience/pargo](http://godoc.org/github.com/ExaScience/pargo)
and [http://github.com/ExaScience/pargo/wiki](http://github.com/ExaScience/pargo/wiki)
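
The kind of transformation the library aims at can be illustrated with a stdlib-only sketch (a hypothetical `parallelRange` helper, not pargo's API; the real `parallel.Range` additionally handles explicit batch counts and panic propagation):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelRange is a hand-rolled sketch of the pattern that
// pargo's parallel.Range packages up: split [low, high) into
// batches and run f on each batch in its own goroutine.
func parallelRange(low, high int, f func(low, high int)) {
	if high <= low {
		return
	}
	n := runtime.GOMAXPROCS(0)
	if n > high-low {
		n = high - low
	}
	batch := (high - low + n - 1) / n
	var wg sync.WaitGroup
	for b := low; b < high; b += batch {
		end := b + batch
		if end > high {
			end = high
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			f(lo, hi)
		}(b, end)
	}
	wg.Wait()
}

func main() {
	data := make([]int, 1000)
	parallelRange(0, len(data), func(lo, hi int) {
		for i := lo; i < hi; i++ {
			data[i] = i * i // independent per-element work
		}
	})
	fmt.Println(data[10]) // 100
}
```

The per-element work here is embarrassingly parallel, which is the case the library targets; the packages below add panic handling, reductions, and speculative early termination on top of this basic pattern.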


================================================
FILE: doc.go
================================================
// Package pargo provides functions and data structures for expressing parallel
// algorithms. While Go is primarily designed for concurrent programming, it is
// also usable to some extent for parallel programming, and this library
// provides convenience functionality to turn otherwise sequential algorithms
// into parallel algorithms, with the goal of improving performance.
//
// For documentation that provides a more structured overview than is possible
// with Godoc, see the wiki at https://github.com/exascience/pargo/wiki
//
// Pargo provides the following subpackages:
//
// pargo/parallel provides simple functions for executing a series of thunks or
// predicates in parallel, as well as thunks, predicates, or reducers over
// integer ranges. See also https://github.com/ExaScience/pargo/wiki/TaskParallelism
//
// pargo/speculative provides speculative implementations of most of the
// functions from pargo/parallel. These implementations not only execute in
// parallel, but also attempt to terminate early as soon as the final result is
// known. See also https://github.com/ExaScience/pargo/wiki/TaskParallelism
//
// pargo/sequential provides sequential implementations of all functions from
// pargo/parallel, for testing and debugging purposes.
//
// pargo/sort provides parallel sorting algorithms.
//
// pargo/sync provides an efficient parallel map implementation.
//
// pargo/pipeline provides functions and data structures to construct and
// execute parallel pipelines.
//
// Pargo has been influenced to various extents by ideas from Cilk, Threading
// Building Blocks, and Java's java.util.concurrent and java.util.stream
// packages. See http://supertech.csail.mit.edu/papers/steal.pdf for some
// theoretical background, and the sample chapter at
// https://mitpress.mit.edu/books/introduction-algorithms for a more practical
// overview of the underlying concepts.
package pargo


================================================
FILE: go.mod
================================================
module github.com/exascience/pargo

go 1.14

require gonum.org/v1/gonum v0.7.0


================================================
FILE: go.sum
================================================
github.com/ajstarks/svgo v0.0.0-20180226025133-644b8db467af/go.mod h1:K08gAheRH3/J6wwsYMMT4xOr94bZjxIelGM0+d/wbFw=
github.com/fogleman/gg v1.2.1-0.20190220221249-0403632d5b90/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k=
github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0/go.mod h1:E/TSTwGwJL78qG/PmXZO1EjYhfJinVAhrmmHX6Z8B9k=
github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190125153040-c74c464bbbf2 h1:y102fOLFqhV41b+4GPiJoa0k/x+pJcEi2/HB1Y5T6fU=
golang.org/x/exp v0.0.0-20190125153040-c74c464bbbf2/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs=
golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190206041539-40960b6deb8e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
gonum.org/v1/gonum v0.0.0-20180816165407-929014505bf4/go.mod h1:Y+Yx5eoAFn32cQvJDxZx5Dpnq+c3wtXuadVZAcxbbBo=
gonum.org/v1/gonum v0.7.0 h1:Hdks0L0hgznZLG9nzXb8vZ0rRvqNvAcgAp84y7Mwkgw=
gonum.org/v1/gonum v0.7.0/go.mod h1:L02bwd0sqlsvRv41G7wGWFCsVNZFv/k1xzGIxeANHGM=
gonum.org/v1/netlib v0.0.0-20190313105609-8cb42192e0e0 h1:OE9mWmgKkjJyEmDAAtGMPjXu+YNeGvK9VTSHY6+Qihc=
gonum.org/v1/netlib v0.0.0-20190313105609-8cb42192e0e0/go.mod h1:wa6Ws7BG/ESfp6dHfk7C6KdzKA7wR7u/rKwOGE66zvw=
gonum.org/v1/plot v0.0.0-20190515093506-e2840ee46a6b/go.mod h1:Wt8AAjI+ypCyYX3nZBvf6cAIx93T+c/OS2HFAYskSZc=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=


================================================
FILE: internal/internal.go
================================================
package internal

import (
	"errors"
	"fmt"
	"runtime"
	"runtime/debug"
)

// ComputeNofBatches divides the size of the range (high - low) by n. If n is 0,
// a default is used that takes runtime.GOMAXPROCS(0) into account.
func ComputeNofBatches(low, high, n int) (batches int) {
	switch size := high - low; {
	case size > 0:
		switch {
		case n == 0:
			batches = 2 * runtime.GOMAXPROCS(0)
		case n > 0:
			batches = n
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
		if batches > size {
			batches = size
		}
	case size == 0:
		batches = 1
	default:
		panic(fmt.Sprintf("invalid range: %v:%v", low, high))
	}
	return
}

type runtimeError struct{ error }

func (runtimeError) RuntimeError() {}

// WrapPanic adds stack trace information to a recovered panic.
func WrapPanic(p interface{}) interface{} {
	if p != nil {
		s := fmt.Sprintf("%v\n%s\nrethrown at", p, debug.Stack())
		if _, isError := p.(error); isError {
			r := errors.New(s)
			if _, isRuntimeError := p.(runtime.Error); isRuntimeError {
				return runtimeError{r}
			}
			return r
		}
		return s
	}
	return nil
}
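
The batching rule above can be checked against a standalone sketch (a re-implementation for demonstration only; the real function lives in this internal package):

```go
package main

import (
	"fmt"
	"runtime"
)

// computeNofBatches mirrors internal.ComputeNofBatches for
// illustration: n == 0 selects a GOMAXPROCS-based default, a
// negative n or inverted range panics, and the batch count is
// capped at the range size so no batch is ever empty.
func computeNofBatches(low, high, n int) int {
	size := high - low
	switch {
	case size > 0:
		switch {
		case n == 0:
			n = 2 * runtime.GOMAXPROCS(0)
		case n < 0:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
		if n > size {
			n = size
		}
		return n
	case size == 0:
		return 1
	default:
		panic(fmt.Sprintf("invalid range: %v:%v", low, high))
	}
}

func main() {
	fmt.Println(computeNofBatches(0, 100, 4)) // 4: explicit batch count
	fmt.Println(computeNofBatches(0, 3, 16))  // 3: capped at range size
	fmt.Println(computeNofBatches(5, 5, 0))   // 1: empty range
}
```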


================================================
FILE: parallel/example_heatdistribution_test.go
================================================
package parallel_test

// This is a simplified version of a heat distribution simulation, based on an
// implementation by Wilfried Verachtert.
//
// See https://en.wikipedia.org/wiki/Heat_equation for some theoretical
// background.

import (
	"fmt"
	"math"

	"gonum.org/v1/gonum/mat"

	"github.com/exascience/pargo/parallel"
)

const ε = 0.001

func maxDiff(m1, m2 *mat.Dense) (result float64) {
	rows, cols := m1.Dims()
	result = parallel.RangeReduceFloat64(
		1, rows-1, 0,
		func(low, high int) (result float64) {
			for row := low; row < high; row++ {
				r1 := m1.RawRowView(row)
				r2 := m2.RawRowView(row)
				for col := 1; col < cols-1; col++ {
					result = math.Max(result, math.Abs(r1[col]-r2[col]))
				}
			}
			return
		},
		math.Max,
	)
	return
}

func HeatDistributionStep(u, v *mat.Dense) {
	rows, cols := u.Dims()
	parallel.Range(1, rows-1, 0,
		func(low, high int) {
			for row := low; row < high; row++ {
				uRow := u.RawRowView(row)
				vRow := v.RawRowView(row)
				vRowUp := v.RawRowView(row - 1)
				vRowDn := v.RawRowView(row + 1)
				for col := 1; col < cols-1; col++ {
					uRow[col] = (vRowUp[col] + vRowDn[col] + vRow[col-1] + vRow[col+1]) / 4.0
				}
			}
		},
	)
}

func HeatDistributionSimulation(M, N int, init, t, r, b, l float64) {
	// ensure a border
	M += 2
	N += 2

	// set up the input matrix
	data := make([]float64, M*N)
	for i := range data {
		data[i] = init
	}
	u := mat.NewDense(M, N, data)

	// set up the border for the input matrix
	for i := 0; i < N; i++ {
		u.Set(0, i, t)
		u.Set(M-1, i, b)
	}
	for i := 0; i < M; i++ {
		u.Set(i, 0, l)
		u.Set(i, N-1, r)
	}

	// create a secondary working matrix
	v := mat.NewDense(M, N, nil)
	v.Copy(u)

	// run the simulation
	for δ, iterations := ε+1.0, 0; δ >= ε; {
		for step := 0; step < 1000; step++ {
			HeatDistributionStep(v, u)
			HeatDistributionStep(u, v)
		}
		iterations += 2000
		δ = maxDiff(u, v)
		fmt.Printf("iterations: %6d, δ: %08.6f, u[8][8]: %10.8f\n", iterations, δ, u.At(8, 8))
	}
}

func Example_heatDistributionSimulation() {
	HeatDistributionSimulation(1024, 1024, 75, 0, 100, 100, 100)

	// Output:
	// iterations:   2000, δ: 0.009073, u[8][8]: 50.99678108
	// iterations:   4000, δ: 0.004537, u[8][8]: 50.50380048
	// iterations:   6000, δ: 0.003025, u[8][8]: 50.33708179
	// iterations:   8000, δ: 0.002268, u[8][8]: 50.25326869
	// iterations:  10000, δ: 0.001815, u[8][8]: 50.20283493
	// iterations:  12000, δ: 0.001512, u[8][8]: 50.16915148
	// iterations:  14000, δ: 0.001296, u[8][8]: 50.14506197
	// iterations:  16000, δ: 0.001134, u[8][8]: 50.12697847
	// iterations:  18000, δ: 0.001008, u[8][8]: 50.11290381
	// iterations:  20000, δ: 0.000907, u[8][8]: 50.10163797
}


================================================
FILE: parallel/parallel.go
================================================
// Package parallel provides functions for expressing parallel algorithms.
//
// See https://github.com/ExaScience/pargo/wiki/TaskParallelism for a general
// overview.
package parallel

import (
	"fmt"
	"sync"

	"github.com/exascience/pargo/internal"
)

// Reduce receives one or more functions, executes them in parallel, and
// combines their results with the join function in parallel.
//
// Each function is invoked in its own goroutine, and Reduce returns only when
// all functions have terminated.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and Reduce eventually panics with the left-most recovered panic
// value.
func Reduce(
	join func(x, y interface{}) interface{},
	firstFunction func() interface{},
	moreFunctions ...func() interface{},
) interface{} {
	if len(moreFunctions) == 0 {
		return firstFunction()
	}
	var left, right interface{}
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	if len(moreFunctions) == 1 {
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = moreFunctions[0]()
		}()
		left = firstFunction()
	} else {
		half := (len(moreFunctions) + 1) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = Reduce(join, moreFunctions[half], moreFunctions[half+1:]...)
		}()
		left = Reduce(join, firstFunction, moreFunctions[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return join(left, right)
}
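
Every function in this file uses the same fork/join and recover-and-rethrow idiom; it can be sketched in isolation for the two-function case (a hypothetical `join2` helper, stdlib only, not part of pargo):

```go
package main

import "fmt"

// join2 runs f and g the way Reduce does for two functions: g in
// a fresh goroutine, f in the current one. A panic in g is
// recovered there and re-panicked in the caller after both have
// finished, so the panic crosses the goroutine boundary.
func join2(join func(x, y int) int, f, g func() int) int {
	var left, right int
	var p interface{}
	done := make(chan struct{})
	go func() {
		defer func() {
			p = recover() // captured, re-thrown by the caller below
			close(done)
		}()
		right = g()
	}()
	left = f()
	<-done
	if p != nil {
		panic(p)
	}
	return join(left, right)
}

func main() {
	maxJoin := func(x, y int) int {
		if x > y {
			return x
		}
		return y
	}
	fmt.Println(join2(maxJoin, func() int { return 3 }, func() int { return 7 })) // 7
}
```

The real code additionally wraps the recovered value with internal.WrapPanic to attach a stack trace, and recurses over halves of the function list instead of handling only pairs.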

// ReduceFloat64 receives one or more functions, executes them in parallel, and
// combines their results with the join function in parallel.
//
// Each function is invoked in its own goroutine, and ReduceFloat64 returns only
// when all functions have terminated.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and ReduceFloat64 eventually panics with the left-most recovered
// panic value.
func ReduceFloat64(
	join func(x, y float64) float64,
	firstFunction func() float64,
	moreFunctions ...func() float64,
) float64 {
	if len(moreFunctions) == 0 {
		return firstFunction()
	}
	var left, right float64
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	if len(moreFunctions) == 1 {
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = moreFunctions[0]()
		}()
		left = firstFunction()
	} else {
		half := (len(moreFunctions) + 1) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = ReduceFloat64(join, moreFunctions[half], moreFunctions[half+1:]...)
		}()
		left = ReduceFloat64(join, firstFunction, moreFunctions[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return join(left, right)
}

// ReduceFloat64Sum receives zero or more functions, executes them in parallel,
// and adds their results in parallel.
//
// Each function is invoked in its own goroutine, and ReduceFloat64Sum returns
// only when all functions have terminated.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and ReduceFloat64Sum eventually panics with the left-most recovered
// panic value.
func ReduceFloat64Sum(functions ...func() float64) float64 {
	switch len(functions) {
	case 0:
		return 0
	case 1:
		return functions[0]()
	}
	var left, right float64
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	switch len(functions) {
	case 2:
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = functions[1]()
		}()
		left = functions[0]()
	default:
		half := len(functions) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = ReduceFloat64Sum(functions[half:]...)
		}()
		left = ReduceFloat64Sum(functions[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return left + right
}

// ReduceFloat64Product receives zero or more functions, executes them in
// parallel, and multiplies their results in parallel.
//
// Each function is invoked in its own goroutine, and ReduceFloat64Product
// returns only when all functions have terminated.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and ReduceFloat64Product eventually panics with the left-most
// recovered panic value.
func ReduceFloat64Product(functions ...func() float64) float64 {
	switch len(functions) {
	case 0:
		return 1
	case 1:
		return functions[0]()
	}
	var left, right float64
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	switch len(functions) {
	case 2:
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = functions[1]()
		}()
		left = functions[0]()
	default:
		half := len(functions) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = ReduceFloat64Product(functions[half:]...)
		}()
		left = ReduceFloat64Product(functions[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return left * right
}

// ReduceInt receives one or more functions, executes them in parallel, and
// combines their results with the join function in parallel.
//
// Each function is invoked in its own goroutine, and ReduceInt returns only
// when all functions have terminated.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and ReduceInt eventually panics with the left-most recovered panic
// value.
func ReduceInt(
	join func(x, y int) int,
	firstFunction func() int,
	moreFunctions ...func() int,
) int {
	if len(moreFunctions) == 0 {
		return firstFunction()
	}
	var left, right int
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	if len(moreFunctions) == 1 {
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = moreFunctions[0]()
		}()
		left = firstFunction()
	} else {
		half := (len(moreFunctions) + 1) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = ReduceInt(join, moreFunctions[half], moreFunctions[half+1:]...)
		}()
		left = ReduceInt(join, firstFunction, moreFunctions[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return join(left, right)
}

// ReduceIntSum receives zero or more functions, executes them in parallel, and
// adds their results in parallel.
//
// Each function is invoked in its own goroutine, and ReduceIntSum returns only
// when all functions have terminated.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and ReduceIntSum eventually panics with the left-most recovered panic
// value.
func ReduceIntSum(functions ...func() int) int {
	switch len(functions) {
	case 0:
		return 0
	case 1:
		return functions[0]()
	}
	var left, right int
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	switch len(functions) {
	case 2:
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = functions[1]()
		}()
		left = functions[0]()
	default:
		half := len(functions) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = ReduceIntSum(functions[half:]...)
		}()
		left = ReduceIntSum(functions[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return left + right
}

// ReduceIntProduct receives zero or more functions, executes them in parallel,
// and multiplies their results in parallel.
//
// Each function is invoked in its own goroutine, and ReduceIntProduct returns
// only when all functions have terminated.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and ReduceIntProduct eventually panics with the left-most recovered
// panic value.
func ReduceIntProduct(functions ...func() int) int {
	switch len(functions) {
	case 0:
		return 1
	case 1:
		return functions[0]()
	}
	var left, right int
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	switch len(functions) {
	case 2:
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = functions[1]()
		}()
		left = functions[0]()
	default:
		half := len(functions) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = ReduceIntProduct(functions[half:]...)
		}()
		left = ReduceIntProduct(functions[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return left * right
}

// ReduceString receives one or more functions, executes them in parallel, and
// combines their results with the join function in parallel.
//
// Each function is invoked in its own goroutine, and ReduceString returns only
// when all functions have terminated.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and ReduceString eventually panics with the left-most recovered panic
// value.
func ReduceString(
	join func(x, y string) string,
	firstFunction func() string,
	moreFunctions ...func() string,
) string {
	if len(moreFunctions) == 0 {
		return firstFunction()
	}
	var left, right string
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	if len(moreFunctions) == 1 {
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = moreFunctions[0]()
		}()
		left = firstFunction()
	} else {
		half := (len(moreFunctions) + 1) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = ReduceString(join, moreFunctions[half], moreFunctions[half+1:]...)
		}()
		left = ReduceString(join, firstFunction, moreFunctions[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return join(left, right)
}

// ReduceStringSum receives zero or more functions, executes them in parallel,
// and concatenates their results in parallel.
//
// Each function is invoked in its own goroutine, and ReduceStringSum returns
// only when all functions have terminated.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and ReduceStringSum eventually panics with the left-most recovered
// panic value.
func ReduceStringSum(functions ...func() string) string {
	switch len(functions) {
	case 0:
		return ""
	case 1:
		return functions[0]()
	}
	var left, right string
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	switch len(functions) {
	case 2:
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = functions[1]()
		}()
		left = functions[0]()
	default:
		half := len(functions) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right = ReduceStringSum(functions[half:]...)
		}()
		left = ReduceStringSum(functions[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return left + right
}

// Do receives zero or more thunks and executes them in parallel.
//
// Each thunk is invoked in its own goroutine, and Do returns only when all
// thunks have terminated.
//
// If one or more thunks panic, the corresponding goroutines recover the panics,
// and Do eventually panics with the left-most recovered panic value.
func Do(thunks ...func()) {
	switch len(thunks) {
	case 0:
		return
	case 1:
		thunks[0]()
		return
	}
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	switch len(thunks) {
	case 2:
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			thunks[1]()
		}()
		thunks[0]()
	default:
		half := len(thunks) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			Do(thunks[half:]...)
		}()
		Do(thunks[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
}
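
Do's contract (run every thunk, return only when all have finished) can be modeled with a flat stdlib sketch (a hypothetical `doAll`; the real Do uses recursive halving so that half of the work stays in the calling goroutine, and it propagates panics):

```go
package main

import (
	"fmt"
	"sync"
)

// doAll mirrors Do's observable behavior in the simplest way:
// every thunk runs, and doAll returns only after all of them
// have finished.
func doAll(thunks ...func()) {
	var wg sync.WaitGroup
	for _, t := range thunks {
		wg.Add(1)
		go func(t func()) {
			defer wg.Done()
			t()
		}(t)
	}
	wg.Wait()
}

func main() {
	var mu sync.Mutex
	sum := 0
	add := func(n int) func() {
		return func() {
			mu.Lock()
			sum += n
			mu.Unlock()
		}
	}
	doAll(add(1), add(2), add(3))
	fmt.Println(sum) // 6
}
```

Spawning one goroutine per thunk is simpler but gives the scheduler more to juggle; Do's halving keeps the first thunk in the caller's goroutine and creates only len(thunks)-1 goroutines along the recursion.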

// And receives zero or more predicate functions and executes them in parallel.
//
// Each predicate is invoked in its own goroutine, and And returns only when all
// predicates have terminated, combining all return values with the && operator,
// with true as the default return value.
//
// If one or more predicates panic, the corresponding goroutines recover the
// panics, and And eventually panics with the left-most recovered panic value.
func And(predicates ...func() bool) bool {
	switch len(predicates) {
	case 0:
		return true
	case 1:
		return predicates[0]()
	}
	var b0, b1 bool
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	switch len(predicates) {
	case 2:
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			b1 = predicates[1]()
		}()
		b0 = predicates[0]()
	default:
		half := len(predicates) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			b1 = And(predicates[half:]...)
		}()
		b0 = And(predicates[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return b0 && b1
}

// Or receives zero or more predicate functions and executes them in parallel.
//
// Each predicate is invoked in its own goroutine, and Or returns only when all
// predicates have terminated, combining all return values with the || operator,
// with false as the default return value.
//
// If one or more predicates panic, the corresponding goroutines recover the
// panics, and Or eventually panics with the left-most recovered panic value.
func Or(predicates ...func() bool) bool {
	switch len(predicates) {
	case 0:
		return false
	case 1:
		return predicates[0]()
	}
	var b0, b1 bool
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	switch len(predicates) {
	case 2:
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			b1 = predicates[1]()
		}()
		b0 = predicates[0]()
	default:
		half := len(predicates) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			b1 = Or(predicates[half:]...)
		}()
		b0 = Or(predicates[:half]...)
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return b0 || b1
}

// Range receives a range, a batch count n, and a range function f, divides the
// range into batches, and invokes the range function for each of these batches
// in parallel, covering the half-open interval from low to high, including low
// but excluding high.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range function is invoked for each batch in its own goroutine, with low
// <= batch low <= batch high <= high, and Range returns only when all range
// functions have terminated.
//
// Range panics if high < low, or if n < 0.
//
// If one or more range function invocations panic, the corresponding goroutines
// recover the panics, and Range eventually panics with the left-most recovered
// panic value.
func Range(
	low, high, n int,
	f func(low, high int),
) {
	var recur func(int, int, int)
	recur = func(low, high, n int) {
		switch {
		case n == 1:
			f(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				f(low, high)
				return
			}
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				recur(mid, high, n-half)
			}()
			recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	recur(low, high, internal.ComputeNofBatches(low, high, n))
}
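
The batch boundaries this recursive splitting produces can be inspected with a sequential sketch that reuses the same batchSize formula (a hypothetical `batches` helper, for illustration only):

```go
package main

import "fmt"

// batches lists the [low, high) sub-ranges that Range's recursive
// splitting visits for a given batch count, using the same
// batchSize arithmetic as the parallel code, but sequentially.
func batches(low, high, n int) [][2]int {
	var out [][2]int
	var recur func(low, high, n int)
	recur = func(low, high, n int) {
		if n == 1 {
			out = append(out, [2]int{low, high})
			return
		}
		batchSize := ((high - low - 1) / n) + 1
		half := n / 2
		mid := low + batchSize*half
		if mid >= high {
			out = append(out, [2]int{low, high})
			return
		}
		recur(low, mid, half)
		recur(mid, high, n-half)
	}
	recur(low, high, n)
	return out
}

func main() {
	fmt.Println(batches(0, 10, 4)) // [[0 3] [3 6] [6 8] [8 10]]
}
```

Note that the batches tile the range contiguously but need not be equal in size: the ceiling division rounds batch sizes up, so later batches can be smaller.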

// RangeAnd receives a range, a batch count n, and a range predicate function f,
// divides the range into batches, and invokes the range predicate for each of
// these batches in parallel, covering the half-open interval from low to high,
// including low but excluding high.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range predicate is invoked for each batch in its own goroutine, with low
// <= batch low <= batch high <= high, and RangeAnd returns only when all range
// predicates have terminated, combining all return values with the && operator.
//
// RangeAnd panics if high < low, or if n < 0.
//
// If one or more range predicate invocations panic, the corresponding
// goroutines recover the panics, and RangeAnd eventually panics with the
// left-most recovered panic value.
func RangeAnd(
	low, high, n int,
	f func(low, high int) bool,
) bool {
	var recur func(int, int, int) bool
	recur = func(low, high, n int) bool {
		switch {
		case n == 1:
			return f(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return f(low, high)
			}
			var b0, b1 bool
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				b1 = recur(mid, high, n-half)
			}()
			b0 = recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return b0 && b1
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeOr receives a range, a batch count n, and a range predicate function f,
// divides the range into batches, and invokes the range predicate for each of
// these batches in parallel, covering the half-open interval from low to high,
// including low but excluding high.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range predicate is invoked for each batch in its own goroutine, with low
// <= batch low <= batch high <= high, and RangeOr returns only when all range
// predicates have terminated, combining all return values with the || operator.
//
// RangeOr panics if high < low, or if n < 0.
//
// If one or more range predicate invocations panic, the corresponding
// goroutines recover the panics, and RangeOr eventually panics with the
// left-most recovered panic value.
func RangeOr(
	low, high, n int,
	f func(low, high int) bool,
) bool {
	var recur func(int, int, int) bool
	recur = func(low, high, n int) bool {
		switch {
		case n == 1:
			return f(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return f(low, high)
			}
			var b0, b1 bool
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				b1 = recur(mid, high, n-half)
			}()
			b0 = recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return b0 || b1
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduce receives a range, a batch count, a range reduce function, and a
// join function, divides the range into batches, and invokes the range reducer
// for each of these batches in parallel, covering the half-open interval from
// low to high, including low but excluding high. The results of the range
// reducer invocations are then combined by repeated invocations of join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with low
// <= batch low <= batch high <= high, and RangeReduce returns only when all
// range reducers and join invocations have terminated.
//
// RangeReduce panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduce eventually panics with the left-most
// recovered panic value.
func RangeReduce(
	low, high, n int,
	reduce func(low, high int) interface{},
	join func(x, y interface{}) interface{},
) interface{} {
	var recur func(int, int, int) interface{}
	recur = func(low, high, n int) interface{} {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right interface{}
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(mid, high, n-half)
			}()
			left = recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceInt receives a range, a batch count n, a range reducer function,
// and a join function, divides the range into batches, and invokes the range
// reducer for each of these batches in parallel, covering the half-open
// interval from low to high, including low but excluding high. The results of
// the range reducer invocations are then combined by repeated invocations of
// join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeReduceInt returns only when all range reducers and pair
// reducers have terminated.
//
// RangeReduceInt panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduceInt eventually panics with the left-most
// recovered panic value.
func RangeReduceInt(
	low, high, n int,
	reduce func(low, high int) int,
	join func(x, y int) int,
) int {
	var recur func(int, int, int) int
	recur = func(low, high, n int) int {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right int
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(mid, high, n-half)
			}()
			left = recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceIntSum receives a range, a batch count n, and a range reducer
// function, divides the range into batches, and invokes the range reducer for
// each of these batches in parallel, covering the half-open interval from low
// to high, including low but excluding high. The results of the range reducer
// invocations are then added together.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeReduceIntSum returns only when all range reducers and
// pair reducers have terminated.
//
// RangeReduceIntSum panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduceIntSum eventually panics with the
// left-most recovered panic value.
func RangeReduceIntSum(
	low, high, n int,
	reduce func(low, high int) int,
) int {
	var recur func(int, int, int) int
	recur = func(low, high, n int) int {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right int
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(mid, high, n-half)
			}()
			left = recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return left + right
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceIntProduct receives a range, a batch count n, and a range reducer
// function, divides the range into batches, and invokes the range reducer for
// each of these batches in parallel, covering the half-open interval from low
// to high, including low but excluding high. The results of the range reducer
// invocations are then multiplied with each other.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeReduceIntProduct returns only when all range reducers
// and pair reducers have terminated.
//
// RangeReduceIntProduct panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduceIntProduct eventually panics with the
// left-most recovered panic value.
func RangeReduceIntProduct(
	low, high, n int,
	reduce func(low, high int) int,
) int {
	var recur func(int, int, int) int
	recur = func(low, high, n int) int {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right int
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(mid, high, n-half)
			}()
			left = recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return left * right
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceFloat64 receives a range, a batch count n, a range reducer
// function, and a join function, divides the range into batches, and invokes
// the range reducer for each of these batches in parallel, covering the
// half-open interval from low to high, including low but excluding high. The
// results of the range reducer invocations are then combined by repeated
// invocations of join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeReduceFloat64 returns only when all range reducers and
// pair reducers have terminated.
//
// RangeReduceFloat64 panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduceFloat64 eventually panics with the
// left-most recovered panic value.
func RangeReduceFloat64(
	low, high, n int,
	reduce func(low, high int) float64,
	join func(x, y float64) float64,
) float64 {
	var recur func(int, int, int) float64
	recur = func(low, high, n int) float64 {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right float64
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(mid, high, n-half)
			}()
			left = recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceFloat64Sum receives a range, a batch count n, and a range reducer
// function, divides the range into batches, and invokes the range reducer for
// each of these batches in parallel, covering the half-open interval from low
// to high, including low but excluding high. The results of the range reducer
// invocations are then added together.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeReduceFloat64Sum returns only when all range reducers
// and pair reducers have terminated.
//
// RangeReduceFloat64Sum panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduceFloat64Sum eventually panics with the
// left-most recovered panic value.
func RangeReduceFloat64Sum(
	low, high, n int,
	reduce func(low, high int) float64,
) float64 {
	var recur func(int, int, int) float64
	recur = func(low, high, n int) float64 {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right float64
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(mid, high, n-half)
			}()
			left = recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return left + right
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceFloat64Product receives a range, a batch count n, and a range
// reducer function, divides the range into batches, and invokes the range
// reducer for each of these batches in parallel, covering the half-open
// interval from low to high, including low but excluding high. The results of
// the range reducer invocations are then multiplied with each other.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeReduceFloat64Product returns only when all range
// reducers and pair reducers have terminated.
//
// RangeReduceFloat64Product panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduceFloat64Product eventually panics with the
// left-most recovered panic value.
func RangeReduceFloat64Product(
	low, high, n int,
	reduce func(low, high int) float64,
) float64 {
	var recur func(int, int, int) float64
	recur = func(low, high, n int) float64 {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right float64
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(mid, high, n-half)
			}()
			left = recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return left * right
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceString receives a range, a batch count n, a range reducer
// function, and a join function, divides the range into batches, and invokes
// the range reducer for each of these batches in parallel, covering the
// half-open interval from low to high, including low but excluding high. The
// results of the range reducer invocations are then combined by repeated
// invocations of join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeReduceString returns only when all range reducers and
// pair reducers have terminated.
//
// RangeReduceString panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduceString eventually panics with the
// left-most recovered panic value.
func RangeReduceString(
	low, high, n int,
	reduce func(low, high int) string,
	join func(x, y string) string,
) string {
	var recur func(int, int, int) string
	recur = func(low, high, n int) string {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right string
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(mid, high, n-half)
			}()
			left = recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceStringSum receives a range, a batch count n, and a range reducer
// function, divides the range into batches, and invokes the range reducer for
// each of these batches in parallel, covering the half-open interval from low
// to high, including low but excluding high. The results of the range reducer
// invocations are then concatenated together.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeReduceStringSum returns only when all range reducers
// and pair reducers have terminated.
//
// RangeReduceStringSum panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduceStringSum eventually panics with the
// left-most recovered panic value.
func RangeReduceStringSum(
	low, high, n int,
	reduce func(low, high int) string,
) string {
	var recur func(int, int, int) string
	recur = func(low, high, n int) string {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right string
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(mid, high, n-half)
			}()
			left = recur(low, mid, half)
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return left + right
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}


================================================
FILE: parallel/parallel_test.go
================================================
package parallel_test

import (
	"errors"
	"fmt"
	"runtime"

	"github.com/exascience/pargo/parallel"
)

func ExampleDo() {
	var fib func(int) (int, error)

	fib = func(n int) (result int, err error) {
		if n < 0 {
			err = errors.New("invalid argument")
		} else if n < 2 {
			result = n
		} else {
			var n1, n2 int
			n1, err = fib(n - 1)
			if err != nil {
				return
			}
			n2, err = fib(n - 2)
			result = n1 + n2
		}
		return
	}

	type intErr struct {
		n   int
		err error
	}

	var parallelFib func(int) intErr

	parallelFib = func(n int) (result intErr) {
		if n < 0 {
			result.err = errors.New("invalid argument")
		} else if n < 20 {
			result.n, result.err = fib(n)
		} else {
			var n1, n2 intErr
			parallel.Do(
				func() { n1 = parallelFib(n - 1) },
				func() { n2 = parallelFib(n - 2) },
			)
			result.n = n1.n + n2.n
			if n1.err != nil {
				result.err = n1.err
			} else {
				result.err = n2.err
			}
		}
		return
	}

	if result := parallelFib(-1); result.err != nil {
		fmt.Println(result.err)
	} else {
		fmt.Println(result.n)
	}

	// Output:
	// invalid argument
}

func ExampleRangeReduceIntSum() {
	numDivisors := func(n int) int {
		return parallel.RangeReduceIntSum(
			1, n+1, runtime.GOMAXPROCS(0),
			func(low, high int) int {
				var sum int
				for i := low; i < high; i++ {
					if (n % i) == 0 {
						sum++
					}
				}
				return sum
			},
		)
	}

	fmt.Println(numDivisors(12))

	// Output:
	// 6
}

func numDivisors(n int) int {
	return parallel.RangeReduceIntSum(
		1, n+1, runtime.GOMAXPROCS(0),
		func(low, high int) int {
			var sum int
			for i := low; i < high; i++ {
				if (n % i) == 0 {
					sum++
				}
			}
			return sum
		},
	)
}

func ExampleRangeReduce() {
	findPrimes := func(n int) []int {
		result := parallel.RangeReduce(
			2, n, 4*runtime.GOMAXPROCS(0),
			func(low, high int) interface{} {
				var slice []int
				for i := low; i < high; i++ {
					if numDivisors(i) == 2 { // see RangeReduceInt example
						slice = append(slice, i)
					}
				}
				return slice
			},
			func(x, y interface{}) interface{} {
				return append(x.([]int), y.([]int)...)
			},
		)
		return result.([]int)
	}

	fmt.Println(findPrimes(20))

	// Output:
	// [2 3 5 7 11 13 17 19]
}

func ExampleRangeReduceFloat64Sum() {
	sumFloat64s := func(f []float64) float64 {
		result := parallel.RangeReduceFloat64Sum(
			0, len(f), runtime.GOMAXPROCS(0),
			func(low, high int) float64 {
				var sum float64
				for i := low; i < high; i++ {
					sum += f[i]
				}
				return sum
			},
		)
		return result
	}

	fmt.Println(sumFloat64s([]float64{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}))

	// Output:
	// 55
}


================================================
FILE: pipeline/example_wordcount_test.go
================================================
package pipeline_test

import (
	"bufio"
	"fmt"
	"io"
	"runtime"
	"strings"

	"github.com/exascience/pargo/pipeline"
	"github.com/exascience/pargo/sort"
	"github.com/exascience/pargo/sync"
)

type Word string

func (w Word) Hash() (hash uint64) {
	// DJBX33A
	hash = 5381
	for _, b := range w {
		hash = ((hash << 5) + hash) + uint64(b)
	}
	return
}

func WordCount(r io.Reader) *sync.Map {
	result := sync.NewMap(16 * runtime.GOMAXPROCS(0))
	scanner := pipeline.NewScanner(r)
	scanner.Split(bufio.ScanWords)
	var p pipeline.Pipeline
	p.Source(scanner)
	p.Add(
		pipeline.Par(pipeline.Receive(
			func(_ int, data interface{}) interface{} {
				var uniqueWords []string
				for _, s := range data.([]string) {
					newValue, _ := result.Modify(Word(s), func(value interface{}, ok bool) (newValue interface{}, store bool) {
						if ok {
							newValue = value.(int) + 1
						} else {
							newValue = 1
						}
						store = true
						return
					})
					if newValue.(int) == 1 {
						uniqueWords = append(uniqueWords, s)
					}
				}
				return uniqueWords
			},
		)),
		pipeline.Ord(pipeline.ReceiveAndFinalize(
			func(_ int, data interface{}) interface{} {
				// print unique words as encountered first at the source
				for _, s := range data.([]string) {
					fmt.Print(s, " ")
				}
				return data
			},
			func() { fmt.Println(".") },
		)),
	)
	p.Run()
	return result
}

func Example_wordCount() {
	r := strings.NewReader("The big black bug bit the big black bear but the big black bear bit the big black bug back")
	counts := WordCount(r)
	words := make(sort.StringSlice, 0)
	counts.Range(func(key, _ interface{}) bool {
		words = append(words, string(key.(Word)))
		return true
	})
	sort.Sort(words)
	for _, word := range words {
		count, _ := counts.Load(Word(word))
		fmt.Println(word, count.(int))
	}

	// Output:
	// The big black bug bit the bear but back .
	// The 1
	// back 1
	// bear 2
	// big 4
	// bit 2
	// black 4
	// bug 2
	// but 1
	// the 3
}


================================================
FILE: pipeline/filter.go
================================================
package pipeline

// A NodeKind represents the different kinds of nodes.
type NodeKind int

const (
	// Ordered nodes receive batches in encounter order.
	Ordered NodeKind = iota

	// Sequential nodes receive batches in arbitrary sequential order.
	Sequential

	// Parallel nodes receive batches in parallel.
	Parallel
)

// A Filter is a function that returns a Receiver and a Finalizer to be added to
// a node. It receives a pipeline, the kind of node it will be added to, and the
// expected total data size that the receiver will be asked to process.
//
// The dataSize parameter is either positive, in which case it indicates the
// expected total size of all batches that will eventually be passed to this
// filter's receiver, or it is negative, in which case the expected size is
// either unknown or too difficult to determine. The dataSize parameter is a
// pointer whose contents can be modified by the filter, for example if this
// filter increases or decreases the total size for subsequent filters, or if
// this filter changes dataSize from an unknown to a known value or, vice
// versa, from a known to an unknown value.
//
// Either the receiver or the finalizer or both can be nil, in which case they
// will not be added to the current node.
type Filter func(pipeline *Pipeline, kind NodeKind, dataSize *int) (Receiver, Finalizer)

// A Receiver is called for every data batch, and returns a potentially modified
// data batch. The seqNo parameter indicates the order in which the data batch
// was encountered at the current pipeline's data source.
type Receiver func(seqNo int, data interface{}) (filteredData interface{})

// A Finalizer is called once after the corresponding receiver has been called
// for all data batches in the current pipeline.
type Finalizer func()

// ComposeFilters takes a number of filters, calls them with the given pipeline,
// kind, and dataSize parameters in order, and appends the returned receivers
// and finalizers (except for nil values) to the result slices.
//
// ComposeFilters is used in Node implementations. User programs typically do
// not call ComposeFilters.
func ComposeFilters(pipeline *Pipeline, kind NodeKind, dataSize *int, filters []Filter) (receivers []Receiver, finalizers []Finalizer) {
	for _, filter := range filters {
		receiver, finalizer := filter(pipeline, kind, dataSize)
		if receiver != nil {
			receivers = append(receivers, receiver)
		}
		if finalizer != nil {
			finalizers = append(finalizers, finalizer)
		}
	}
	return
}

func feed(p *Pipeline, receivers []Receiver, index int, seqNo int, data interface{}) {
	for _, receive := range receivers {
		data = receive(seqNo, data)
	}
	p.FeedForward(index, seqNo, data)
}


================================================
FILE: pipeline/filters.go
================================================
package pipeline

import (
	"errors"
	"reflect"
	"sync"
	"sync/atomic"
)

// NewNode creates a node of the given kind, with the given filters.
//
// It is often more convenient to use one of the Ord, Seq, or Par methods.
func NewNode(kind NodeKind, filters ...Filter) Node {
	switch kind {
	case Ordered, Sequential:
		return &seqnode{kind: kind, filters: filters}
	case Parallel:
		return &parnode{filters: filters}
	default:
		panic("Invalid NodeKind in pipeline.NewNode.")
	}
}

// Identity is a filter that passes data batches through unmodified.
// This filter will be optimized away in a pipeline, so it
// does not hurt to add it.
func Identity(_ *Pipeline, _ NodeKind, _ *int) (_ Receiver, _ Finalizer) {
	return
}

// Receive creates a Filter that returns the given receiver and a nil finalizer.
func Receive(receive Receiver) Filter {
	return func(_ *Pipeline, _ NodeKind, _ *int) (receiver Receiver, _ Finalizer) {
		receiver = receive
		return
	}
}

// Finalize creates a filter that returns a nil receiver and the given
// finalizer.
func Finalize(finalize Finalizer) Filter {
	return func(_ *Pipeline, _ NodeKind, _ *int) (_ Receiver, finalizer Finalizer) {
		finalizer = finalize
		return
	}
}

// ReceiveAndFinalize creates a filter that returns the given receiver and
// finalizer.
func ReceiveAndFinalize(receive Receiver, finalize Finalizer) Filter {
	return func(_ *Pipeline, _ NodeKind, _ *int) (receiver Receiver, finalizer Finalizer) {
		receiver = receive
		finalizer = finalize
		return
	}
}

// A Predicate is a function that is passed a data batch and returns a boolean
// value.
//
// In most cases, it will cast the data parameter to a specific slice type and
// check a predicate on each element of the slice.
type Predicate func(data interface{}) bool

// Every creates a filter that sets the result pointer to true if the given
// predicate returns true for every data batch. If cancelWhenKnown is true, this
// filter cancels the pipeline as soon as the predicate returns false on a data
// batch.
func Every(result *bool, cancelWhenKnown bool, predicate Predicate) Filter {
	*result = true
	return func(pipeline *Pipeline, kind NodeKind, _ *int) (receiver Receiver, finalizer Finalizer) {
		switch kind {
		case Parallel:
			res := int32(1)
			receiver = func(_ int, data interface{}) interface{} {
				if !predicate(data) {
					atomic.StoreInt32(&res, 0)
					if cancelWhenKnown {
						pipeline.Cancel()
					}
				}
				return data
			}
			finalizer = func() {
				if atomic.LoadInt32(&res) == 0 {
					*result = false
				}
			}
		default:
			receiver = func(_ int, data interface{}) interface{} {
				if !predicate(data) {
					*result = false
					if cancelWhenKnown {
						pipeline.Cancel()
					}
				}
				return data
			}
		}
		return
	}
}

// NotEvery creates a filter that sets the result pointer to true if the given
// predicate returns false for at least one of the data batches it is passed. If
// cancelWhenKnown is true, this filter cancels the pipeline as soon as the
// predicate returns false on a data batch.
func NotEvery(result *bool, cancelWhenKnown bool, predicate Predicate) Filter {
	*result = false
	return func(pipeline *Pipeline, kind NodeKind, _ *int) (receiver Receiver, finalizer Finalizer) {
		switch kind {
		case Parallel:
			res := int32(0)
			receiver = func(_ int, data interface{}) interface{} {
				if !predicate(data) {
					atomic.StoreInt32(&res, 1)
					if cancelWhenKnown {
						pipeline.Cancel()
					}
				}
				return data
			}
			finalizer = func() {
				if atomic.LoadInt32(&res) == 1 {
					*result = true
				}
			}
		default:
			receiver = func(_ int, data interface{}) interface{} {
				if !predicate(data) {
					*result = true
					if cancelWhenKnown {
						pipeline.Cancel()
					}
				}
				return data
			}
		}
		return
	}
}

// Some creates a filter that sets the result pointer to true if the given
// predicate returns true for at least one of the data batches it is passed. If
// cancelWhenKnown is true, this filter cancels the pipeline as soon as the
// predicate returns true on a data batch.
func Some(result *bool, cancelWhenKnown bool, predicate Predicate) Filter {
	*result = false
	return func(pipeline *Pipeline, kind NodeKind, _ *int) (receiver Receiver, finalizer Finalizer) {
		switch kind {
		case Parallel:
			res := int32(0)
			receiver = func(_ int, data interface{}) interface{} {
				if predicate(data) {
					atomic.StoreInt32(&res, 1)
					if cancelWhenKnown {
						pipeline.Cancel()
					}
				}
				return data
			}
			finalizer = func() {
				if atomic.LoadInt32(&res) == 1 {
					*result = true
				}
			}
		default:
			receiver = func(_ int, data interface{}) interface{} {
				if predicate(data) {
					*result = true
					if cancelWhenKnown {
						pipeline.Cancel()
					}
				}
				return data
			}
		}
		return
	}
}

// NotAny creates a filter that sets the result pointer to true if the given
// predicate returns false for every data batch. If cancelWhenKnown is true,
// this filter cancels the pipeline as soon as the predicate returns true on a
// data batch.
func NotAny(result *bool, cancelWhenKnown bool, predicate Predicate) Filter {
	*result = true
	return func(pipeline *Pipeline, kind NodeKind, _ *int) (receiver Receiver, finalizer Finalizer) {
		switch kind {
		case Parallel:
			res := int32(1)
			receiver = func(_ int, data interface{}) interface{} {
				if predicate(data) {
					atomic.StoreInt32(&res, 0)
					if cancelWhenKnown {
						pipeline.Cancel()
					}
				}
				return data
			}
			finalizer = func() {
				if atomic.LoadInt32(&res) == 0 {
					*result = false
				}
			}
		default:
			receiver = func(_ int, data interface{}) interface{} {
				if predicate(data) {
					*result = false
					if cancelWhenKnown {
						pipeline.Cancel()
					}
				}
				return data
			}
		}
		return
	}
}

// Slice creates a filter that appends all the data batches it sees to the
// result. The result must represent a settable slice, for example by using the
// address operator & on a given slice.
func Slice(result interface{}) Filter {
	res := reflect.ValueOf(result).Elem()
	return func(pipeline *Pipeline, kind NodeKind, _ *int) (receiver Receiver, finalizer Finalizer) {
		if (res.Kind() != reflect.Slice) || !res.CanSet() {
			pipeline.SetErr(errors.New("result is not a settable slice in pipeline.Slice"))
			return
		}
		switch kind {
		case Parallel:
			var m sync.Mutex
			receiver = func(_ int, data interface{}) interface{} {
				if data != nil {
					d := reflect.ValueOf(data)
					m.Lock()
					defer m.Unlock()
					res.Set(reflect.AppendSlice(res, d))
				}
				return data
			}
		default:
			receiver = func(_ int, data interface{}) interface{} {
				if data != nil {
					res.Set(reflect.AppendSlice(res, reflect.ValueOf(data)))
				}
				return data
			}
		}
		return
	}
}

// Count creates a filter that sets the result pointer to the total size of all
// data batches it sees.
func Count(result *int) Filter {
	return func(pipeline *Pipeline, kind NodeKind, size *int) (receiver Receiver, finalizer Finalizer) {
		switch {
		case *size >= 0:
			*result = *size
		case kind == Parallel:
			var res = int64(0)
			receiver = func(_ int, data interface{}) interface{} {
				if data != nil {
					d := reflect.ValueOf(data)
					atomic.AddInt64(&res, int64(d.Len()))
				}
				return data
			}
			finalizer = func() {
				*result = int(atomic.LoadInt64(&res))
			}
		default:
			receiver = func(_ int, data interface{}) interface{} {
				if data != nil {
					d := reflect.ValueOf(data)
					*result += d.Len()
				}
				return data
			}
		}
		return
	}
}

// Limit creates an ordered node with a filter that caps the total size of all
// data batches it passes to the next filter in the pipeline to the given limit.
// If cancelWhenReached is true, this filter cancels the pipeline as soon as the
// limit is reached. If limit is negative, all data is passed through
// unmodified.
func Limit(limit int, cancelWhenReached bool) Node {
	return Ord(func(pipeline *Pipeline, _ NodeKind, size *int) (receiver Receiver, _ Finalizer) {
		switch {
		case limit < 0: // unlimited
		case limit == 0:
			*size = 0
			if cancelWhenReached {
				pipeline.Cancel()
			}
			receiver = func(_ int, _ interface{}) interface{} { return nil }
		case (*size < 0) || (*size > limit):
			if *size > 0 {
				*size = limit
			}
			seen := 0
			receiver = func(_ int, data interface{}) (result interface{}) {
				if seen >= limit {
					return
				}
				d := reflect.ValueOf(data)
				l := d.Len()
				if (seen + l) > limit {
					result = d.Slice(0, limit-seen).Interface()
					seen = limit
				} else {
					result = data
					seen += l
				}
				if cancelWhenReached && (seen == limit) {
					pipeline.Cancel()
				}
				return
			}
		}
		return
	})
}

// Skip creates an ordered node with a filter that skips the first n elements
// from the data batches it passes to the next filter in the pipeline. If n is
// negative, no data is passed through, and the error value of the pipeline is
// set to a non-nil value.
func Skip(n int) Node {
	return Ord(func(pipeline *Pipeline, _ NodeKind, size *int) (receiver Receiver, _ Finalizer) {
		switch {
		case n < 0: // skip everything
			*size = 0
			pipeline.SetErr(errors.New("skip filter with negative n"))
			receiver = func(_ int, _ interface{}) interface{} { return nil }
		case n == 0: // nothing to skip
		case (*size < 0) || (*size > n):
			if *size > 0 {
				*size -= n
			}
			seen := 0
			receiver = func(_ int, data interface{}) (result interface{}) {
				if seen >= n {
					result = data
					return
				}
				d := reflect.ValueOf(data)
				l := d.Len()
				if (seen + l) > n {
					result = d.Slice(n-seen, l).Interface()
					seen = n
				} else {
					seen += l
				}
				return
			}
		case *size <= n:
			*size = 0
			receiver = func(_ int, _ interface{}) interface{} { return nil }
		}
		return
	})
}


================================================
FILE: pipeline/lparnode.go
================================================
package pipeline

import (
	"runtime"
	"sync"
)

type lparnode struct {
	limit      int
	ordered    bool
	cond       *sync.Cond
	channel    chan dataBatch
	waitGroup  sync.WaitGroup
	run        int
	filters    []Filter
	receivers  []Receiver
	finalizers []Finalizer
}

// LimitedPar creates a parallel node with the given filters. The node uses at
// most limit goroutines at the same time. If limit is 0 or negative, a
// reasonable default based on runtime.GOMAXPROCS(0) is used instead. Even
// then, the node is still limited. For unlimited nodes, use Par instead.
func LimitedPar(limit int, filters ...Filter) Node {
	if limit <= 0 {
		limit = runtime.GOMAXPROCS(0)
	}
	if limit == 1 {
		return &seqnode{kind: Sequential, filters: filters}
	}
	return &lparnode{limit: limit, filters: filters}
}

func (node *lparnode) makeOrdered() {
	node.ordered = true
	node.cond = sync.NewCond(&sync.Mutex{})
}

// Implements the TryMerge method of the Node interface.
func (node *lparnode) TryMerge(next Node) bool {
	if nxt, merge := next.(*lparnode); merge && (nxt.limit == node.limit) {
		node.filters = append(node.filters, nxt.filters...)
		node.receivers = append(node.receivers, nxt.receivers...)
		node.finalizers = append(node.finalizers, nxt.finalizers...)
		return true
	}
	return false
}

// Implements the Begin method of the Node interface.
func (node *lparnode) Begin(p *Pipeline, index int, dataSize *int) (keep bool) {
	node.receivers, node.finalizers = ComposeFilters(p, Parallel, dataSize, node.filters)
	node.filters = nil
	if keep = (len(node.receivers) > 0) || (len(node.finalizers) > 0); keep {
		node.channel = make(chan dataBatch)
		node.waitGroup.Add(node.limit)
		for i := 0; i < node.limit; i++ {
			go func() {
				defer node.waitGroup.Done()
				for {
					select {
					case <-p.ctx.Done():
						if node.ordered {
							node.cond.Broadcast()
						}
						return
					case batch, ok := <-node.channel:
						if !ok {
							return
						}
						if node.ordered {
							node.cond.L.Lock()
							if batch.seqNo != node.run {
								panic("Invalid receive order in an ordered limited parallel pipeline node.")
							}
							node.run++
							node.cond.L.Unlock()
							node.cond.Broadcast()
						}
						feed(p, node.receivers, index, batch.seqNo, batch.data)
					}
				}
			}()
		}
	}
	return
}

// Implements the Feed method of the Node interface.
func (node *lparnode) Feed(p *Pipeline, _ int, seqNo int, data interface{}) {
	if node.ordered {
		node.cond.L.Lock()
		defer node.cond.L.Unlock()
		for {
			if node.run == seqNo {
				select {
				case <-p.ctx.Done():
					return
				case node.channel <- dataBatch{seqNo, data}:
					return
				}
			}
			select {
			case <-p.ctx.Done():
				return
			default:
				node.cond.Wait()
			}
		}
	}
	select {
	case <-p.ctx.Done():
		return
	case node.channel <- dataBatch{seqNo, data}:
		return
	}
}

// Implements the End method of the Node interface.
func (node *lparnode) End() {
	close(node.channel)
	node.waitGroup.Wait()
	for _, finalize := range node.finalizers {
		finalize()
	}
	node.receivers = nil
	node.finalizers = nil
}


================================================
FILE: pipeline/parnode.go
================================================
package pipeline

import (
	"sync"
)

type parnode struct {
	waitGroup  sync.WaitGroup
	filters    []Filter
	receivers  []Receiver
	finalizers []Finalizer
}

// Par creates a parallel node with the given filters.
func Par(filters ...Filter) Node {
	return &parnode{filters: filters}
}

// Implements the TryMerge method of the Node interface.
func (node *parnode) TryMerge(next Node) bool {
	if nxt, merge := next.(*parnode); merge {
		node.filters = append(node.filters, nxt.filters...)
		node.receivers = append(node.receivers, nxt.receivers...)
		node.finalizers = append(node.finalizers, nxt.finalizers...)
		return true
	}
	return false
}

// Implements the Begin method of the Node interface.
func (node *parnode) Begin(p *Pipeline, _ int, dataSize *int) (keep bool) {
	node.receivers, node.finalizers = ComposeFilters(p, Parallel, dataSize, node.filters)
	node.filters = nil
	keep = (len(node.receivers) > 0) || (len(node.finalizers) > 0)
	return
}

// Implements the Feed method of the Node interface.
func (node *parnode) Feed(p *Pipeline, index int, seqNo int, data interface{}) {
	node.waitGroup.Add(1)
	go func() {
		defer node.waitGroup.Done()
		select {
		case <-p.ctx.Done():
			return
		default:
			feed(p, node.receivers, index, seqNo, data)
		}
	}()
}

// Implements the End method of the Node interface.
func (node *parnode) End() {
	node.waitGroup.Wait()
	for _, finalize := range node.finalizers {
		finalize()
	}
	node.receivers = nil
	node.finalizers = nil
}


================================================
FILE: pipeline/pipeline.go
================================================
// Package pipeline provides means to construct and execute parallel pipelines.
//
// A Pipeline feeds batches of data through several functions that can be
// specified to be executed in encounter order, in arbitrary sequential order,
// or in parallel.  Ordered, sequential, or parallel stages can arbitrarily
// alternate.
//
// A Pipeline consists of a Source object, and several Node objects.
//
// Source objects that are supported by this implementation are arrays, slices,
// strings, channels, and bufio.Scanner objects, but other kinds of Source
// objects can be added by user programs.
//
// Node objects can be specified to receive batches from the input source either
// sequentially in encounter order, which is always the same order in which they
// were originally encountered at the source; sequentially, but in arbitrary
// order; or in parallel. Ordered nodes always receive batches in encounter
// order even if they are preceded by arbitrary sequential, or even parallel
// nodes.
//
// Node objects consist of filters, which are pairs of receiver and finalizer
// functions. Each batch is passed to each receiver function, which can
// transform and modify the batch for the next receiver function in the
// pipeline. Each finalizer function is called once when all batches have been
// passed through all receiver functions.
//
// Pipelines do not have an explicit representation for sinks. Instead, filters
// can use side effects to generate results.
//
// Pipelines also support cancelation by way of the context package of Go's
// standard library.
//
// An application of pipelines can be found in the elPrep tool at
// https://github.com/exascience/elprep - specifically in
// https://github.com/ExaScience/elprep/blob/master/sam/filter-pipeline.go
package pipeline

import (
	"context"
	"runtime"
	"sync"
)

// A Node object represents a sequence of filters which are together executed
// either in encounter order, in arbitrary sequential order, or in parallel.
//
// The methods of this interface are typically not called by user programs, but
// rather implemented by specific node types and called by pipelines. Ordered,
// sequential, and parallel nodes are also implemented in this package, so that
// user programs are typically not concerned with Node methods at all.
type Node interface {

	// TryMerge tries to merge node with the current node by appending its
	// filters to the filters of the current node, which succeeds if both nodes
	// are either sequential or parallel. The return value merged indicates
	// whether merging succeeded.
	TryMerge(node Node) (merged bool)

	// Begin informs this node that the pipeline is going to start to feed
	// batches of data to this node. The pipeline, the index of this node among
	// all the nodes in the pipeline, and the expected total size of all batches
	// combined are passed as parameters.
	//
	// The dataSize parameter is either positive, in which case it indicates the
	// expected total size of all batches that will eventually be passed to this
	// node's Feed method, or it is negative, in which case the expected size is
	// either unknown or too difficult to determine. The dataSize parameter is a
	// pointer whose contents can be modified by Begin, for example if this node
	// increases or decreases the total size for subsequent nodes, changes
	// dataSize from an unknown to a known value, or, vice versa, must change it
	// from a known to an unknown value.
	//
	// A node may decide that, based on the given information, it will actually
	// not need to see any of the batches that are normally going to be passed
	// to it. In that case, it can return false as a result, and its Feed and
	// End method will not be called anymore.  Otherwise, it should return true
	// by default.
	Begin(p *Pipeline, index int, dataSize *int) (keep bool)

	// Feed is called for each batch of data. The pipeline, the index of this
	// node among all the nodes in the pipeline (which may be different from the
	// index number seen by Begin), the sequence number of the batch (according
	// to the encounter order), and the actual batch of data are passed as
	// parameters.
	//
	// The data parameter contains the batch of data, which is usually a slice
	// of a particular type. After the data has been processed by all filters of
	// this node, the node must call p.FeedForward with exactly the same index
	// and sequence numbers, but a potentially modified batch of data.
	// FeedForward must be called even when the data batch is or becomes empty,
	// to ensure that all sequence numbers are seen by subsequent nodes.
	Feed(p *Pipeline, index int, seqNo int, data interface{})

	// End is called after all batches have been passed to Feed. This allows the
	// node to release resources and call the finalizers of its filters.
	End()
}

// A Pipeline is a parallel pipeline that can feed batches of data fetched from
// a source through several nodes that are ordered, sequential, or parallel.
//
// The zero Pipeline is valid and empty.
//
// A Pipeline must not be copied after first use.
type Pipeline struct {
	mutex        sync.RWMutex
	err          error
	ctx          context.Context
	cancel       context.CancelFunc
	source       Source
	nodes        []Node
	nofBatches   int
	batchInc     int
	maxBatchSize int
}

// Err returns the current error value for this pipeline, which may be nil if no
// error has occurred so far.
//
// Err and SetErr are safe to be concurrently invoked.
func (p *Pipeline) Err() (err error) {
	p.mutex.RLock()
	err = p.err
	p.mutex.RUnlock()
	return err
}

// SetErr attempts to set a new error value for this pipeline, unless it already
// has a non-nil error value. If the attempt is successful, SetErr also cancels
// the pipeline, and returns true. If the attempt is not successful, SetErr
// returns false.
//
// SetErr and Err are safe to be concurrently invoked, for example from the
// different goroutines executing filters of parallel nodes in this pipeline.
func (p *Pipeline) SetErr(err error) bool {
	p.mutex.Lock()
	if p.err == nil {
		p.err = err
		p.mutex.Unlock()
		p.cancel()
		return true
	}
	p.mutex.Unlock()
	return false
}

// Context returns this pipeline's context.
func (p *Pipeline) Context() context.Context {
	return p.ctx
}

// Cancel calls the cancel function of this pipeline's context.
func (p *Pipeline) Cancel() {
	p.cancel()
}

// Source sets the data source for this pipeline.
//
// If source does not implement the Source interface, the pipeline uses
// reflection to create a proper source for arrays, slices, strings, or
// channels.
//
// It is safe to call Source multiple times before Run or RunWithContext is
// called, in which case only the last call to Source is effective.
func (p *Pipeline) Source(source interface{}) {
	switch src := source.(type) {
	case Source:
		p.source = src
	default:
		p.source = reflectSource(source)
	}
}

// Add appends nodes to the end of this pipeline.
func (p *Pipeline) Add(nodes ...Node) {
	for _, node := range nodes {
		if l := len(p.nodes); (l == 0) || !p.nodes[l-1].TryMerge(node) {
			p.nodes = append(p.nodes, node)
		}
	}
}

// NofBatches sets or gets the number of batches that are created from the data
// source for this pipeline, if the expected total size for this pipeline's data
// source is known or can be determined easily.
//
// NofBatches can be called safely by user programs before Run or RunWithContext
// is called.
//
// If user programs do not call NofBatches, or call it with a value < 1, then
// the pipeline will choose a reasonable default value that takes
// runtime.GOMAXPROCS(0) into account.
//
// If the expected total size for this pipeline's data source is unknown, or is
// difficult to determine, use SetVariableBatchSize to influence batch sizes.
func (p *Pipeline) NofBatches(n int) (nofBatches int) {
	if n < 1 {
		nofBatches = p.nofBatches
		if nofBatches < 1 {
			nofBatches = 2 * runtime.GOMAXPROCS(0)
			p.nofBatches = nofBatches
		}
	} else {
		nofBatches = n
		p.nofBatches = n
	}
	return
}

const (
	defaultBatchInc     = 1024
	defaultMaxBatchSize = 0x2000000
)

// SetVariableBatchSize sets the batch size(s) for the batches that are created
// from the data source for this pipeline, if the expected total size for this
// pipeline's data source is unknown or difficult to determine.
//
// SetVariableBatchSize can be called safely by user programs before Run or
// RunWithContext is called.
//
// If user programs do not call SetVariableBatchSize, or pass a value < 1 to any
// of the two parameters, then the pipeline will choose a reasonable default
// value for that respective parameter.
//
// The pipeline will start with batchInc as a batch size, and increase the batch
// size for every subsequent batch by batchInc to accommodate data sources of
// different total sizes. The batch size will never be larger than maxBatchSize,
// though.
//
// If the expected total size for this pipeline's data source is known, or can
// be determined easily, use NofBatches to influence the batch size.
func (p *Pipeline) SetVariableBatchSize(batchInc, maxBatchSize int) {
	p.batchInc = batchInc
	p.maxBatchSize = maxBatchSize
}

func (p *Pipeline) finalizeVariableBatchSize() {
	if p.batchInc < 1 {
		p.batchInc = defaultBatchInc
	}
	if p.maxBatchSize < 1 {
		p.maxBatchSize = defaultMaxBatchSize
	}
}

func (p *Pipeline) nextBatchSize(batchSize int) (result int) {
	result = batchSize + p.batchInc
	if result > p.maxBatchSize {
		result = p.maxBatchSize
	}
	return
}

// RunWithContext initiates pipeline execution.
//
// It expects a context and a cancel function as parameters, for example from
// context.WithCancel(context.Background()). It does not ensure that the cancel
// function is called at least once, so this must be ensured by the function
// calling RunWithContext.
//
// RunWithContext should only be called after a data source has been set using
// the Source method, and one or more Node objects have been added to the
// pipeline using the Add method. NofBatches can be called before RunWithContext
// to deal with load imbalance, but this is not necessary since RunWithContext
// chooses a reasonable default value.
//
// RunWithContext prepares the data source, tells each node that batches are
// going to be sent to them by calling Begin, and then fetches batches from the
// data source and sends them to the nodes. Once the data source is depleted,
// the nodes are informed that the end of the data source has been reached.
func (p *Pipeline) RunWithContext(ctx context.Context, cancel context.CancelFunc) {
	if p.err != nil {
		return
	}
	p.ctx, p.cancel = ctx, cancel
	dataSize := p.source.Prepare(p.ctx)
	filteredSize := dataSize
	for index := 0; index < len(p.nodes); {
		if p.nodes[index].Begin(p, index, &filteredSize) {
			index++
		} else {
			p.nodes = append(p.nodes[:index], p.nodes[index+1:]...)
		}
	}
	if p.err != nil {
		return
	}
	if len(p.nodes) > 0 {
		for index := 0; index < len(p.nodes)-1; {
			if p.nodes[index].TryMerge(p.nodes[index+1]) {
				p.nodes = append(p.nodes[:index+1], p.nodes[index+2:]...)
			} else {
				index++
			}
		}
		for index := len(p.nodes) - 1; index >= 0; index-- {
			if _, ok := p.nodes[index].(*strictordnode); ok {
				for index = index - 1; index >= 0; index-- {
					switch node := p.nodes[index].(type) {
					case *seqnode:
						node.kind = Ordered
					case *lparnode:
						node.makeOrdered()
					}
				}
				break
			}
		}
		if dataSize < 0 {
			p.finalizeVariableBatchSize()
			for seqNo, batchSize := 0, p.batchInc; p.source.Fetch(batchSize) > 0; seqNo, batchSize = seqNo+1, p.nextBatchSize(batchSize) {
				p.nodes[0].Feed(p, 0, seqNo, p.source.Data())
				if err := p.source.Err(); err != nil {
					p.SetErr(err)
					return
				} else if p.Err() != nil {
					return
				}
			}
		} else {
			batchSize := ((dataSize - 1) / p.NofBatches(0)) + 1
			if batchSize == 0 {
				batchSize = 1
			}
			for seqNo := 0; p.source.Fetch(batchSize) > 0; seqNo++ {
				p.nodes[0].Feed(p, 0, seqNo, p.source.Data())
				if err := p.source.Err(); err != nil {
					p.SetErr(err)
					return
				} else if p.Err() != nil {
					return
				}
			}
		}
	}
	for _, node := range p.nodes {
		node.End()
	}
	if p.err == nil {
		p.err = p.source.Err()
	}
}

// Run initiates pipeline execution by calling
// RunWithContext(context.WithCancel(context.Background())), and ensures that
// the cancel function is called at least once when the pipeline is done.
//
// Run should only be called after a data source has been set using the Source
// method, and one or more Node objects have been added to the pipeline using
// the Add method. NofBatches can be called before Run to deal with load
// imbalance, but this is not necessary since Run chooses a reasonable default
// value.
//
// Run prepares the data source, tells each node that batches are going to be
// sent to them by calling Begin, and then fetches batches from the data source
// and sends them to the nodes. Once the data source is depleted, the nodes are
// informed that the end of the data source has been reached.
func (p *Pipeline) Run() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	p.RunWithContext(ctx, cancel)
}

// FeedForward must be called in the Feed method of a node to forward a
// potentially modified data batch to the next node in the current pipeline.
//
// FeedForward is used in Node implementations. User programs typically do not
// call FeedForward.
//
// FeedForward must be called with the pipeline received as a parameter by Feed,
// and must pass the same index and seqNo received by Feed. The data parameter
// can be either a modified or an unmodified data batch. FeedForward must always
// be called, even if the data batch is unmodified, and even if the data batch
// is or becomes empty.
func (p *Pipeline) FeedForward(index int, seqNo int, data interface{}) {
	if index++; index < len(p.nodes) {
		p.nodes[index].Feed(p, index, seqNo, data)
	}
}


================================================
FILE: pipeline/seqnode.go
================================================
package pipeline

import (
	"sync"
)

type (
	dataBatch struct {
		seqNo int
		data  interface{}
	}

	seqnode struct {
		kind       NodeKind
		channel    chan dataBatch
		waitGroup  sync.WaitGroup
		filters    []Filter
		receivers  []Receiver
		finalizers []Finalizer
	}
)

// Ord creates an ordered node with the given filters.
func Ord(filters ...Filter) Node {
	return &seqnode{kind: Ordered, filters: filters}
}

// Seq creates a sequential node with the given filters.
func Seq(filters ...Filter) Node {
	return &seqnode{kind: Sequential, filters: filters}
}

// Implements the TryMerge method of the Node interface.
func (node *seqnode) TryMerge(next Node) bool {
	if nxt, merge := next.(*seqnode); merge && (len(nxt.filters) > 0) {
		if nxt.kind == Ordered {
			node.kind = Ordered
		}
		node.filters = append(node.filters, nxt.filters...)
		node.receivers = append(node.receivers, nxt.receivers...)
		node.finalizers = append(node.finalizers, nxt.finalizers...)
		return true
	}
	return false
}

// Implements the Begin method of the Node interface.
func (node *seqnode) Begin(p *Pipeline, index int, dataSize *int) (keep bool) {
	node.receivers, node.finalizers = ComposeFilters(p, node.kind, dataSize, node.filters)
	node.filters = nil
	if keep = (len(node.receivers) > 0) || (len(node.finalizers) > 0); keep {
		node.channel = make(chan dataBatch)
		node.waitGroup.Add(1)
		switch node.kind {
		case Sequential:
			go func() {
				defer node.waitGroup.Done()
				for {
					select {
					case <-p.ctx.Done():
						return
					case batch, ok := <-node.channel:
						if !ok {
							return
						}
						feed(p, node.receivers, index, batch.seqNo, batch.data)
					}
				}
			}()
		case Ordered:
			go func() {
				defer node.waitGroup.Done()
				stash := make(map[int]interface{})
				run := 0
				for {
					select {
					case <-p.ctx.Done():
						return
					case batch, ok := <-node.channel:
						switch {
						case !ok:
							return
						case batch.seqNo < run:
							panic("Invalid receive order in an ordered pipeline node.")
						case batch.seqNo > run:
							stash[batch.seqNo] = batch.data
						default:
							feed(p, node.receivers, index, batch.seqNo, batch.data)
						checkStash:
							for {
								select {
								case <-p.ctx.Done():
									return
								default:
									run++
									data, ok := stash[run]
									if !ok {
										break checkStash
									}
									delete(stash, run)
									feed(p, node.receivers, index, run, data)
								}
							}
						}
					}
				}
			}()
		default:
			panic("Invalid NodeKind in a sequential pipeline node.")
		}
	}
	return
}

// Implements the Feed method of the Node interface.
func (node *seqnode) Feed(p *Pipeline, _ int, seqNo int, data interface{}) {
	select {
	case <-p.ctx.Done():
		return
	case node.channel <- dataBatch{seqNo, data}:
		return
	}
}

// Implements the End method of the Node interface.
func (node *seqnode) End() {
	close(node.channel)
	node.waitGroup.Wait()
	for _, finalize := range node.finalizers {
		finalize()
	}
	node.receivers = nil
	node.finalizers = nil
}


================================================
FILE: pipeline/source.go
================================================
package pipeline

import (
	"bufio"
	"context"
	"io"
	"reflect"
)

// A Source represents an object that can generate data batches for pipelines.
type Source interface {
	// Err returns an error value or nil.
	Err() error

	// Prepare receives a pipeline context and informs the pipeline what the
	// total expected size of all data batches is. The return value is -1 if the
	// total size is unknown or difficult to determine.
	Prepare(ctx context.Context) (size int)

	// Fetch gets a data batch of the requested size from the source. It returns
	// the size of the data batch that it was actually able to fetch. It returns
	// 0 if there is no more data to be fetched from the source; the pipeline
	// will then make no further attempts to fetch more elements.
	Fetch(size int) (fetched int)

	// Data returns the last fetched data batch.
	Data() interface{}
}

type sliceSource struct {
	value reflect.Value
	size  int
	data  interface{}
}

func newSliceSource(value reflect.Value) *sliceSource {
	size := value.Len()
	return &sliceSource{value: value.Slice(0, size), size: size}
}

func (src *sliceSource) Err() error {
	return nil
}

func (src *sliceSource) Prepare(_ context.Context) int {
	return src.size
}

func (src *sliceSource) Fetch(n int) (fetched int) {
	switch {
	case src.size == 0:
		src.data = nil
	case n >= src.size:
		fetched = src.size
		src.data = src.value.Interface()
		src.value = reflect.ValueOf(nil)
		src.size = 0
	default:
		fetched = n
		src.data = src.value.Slice(0, n).Interface()
		src.value = src.value.Slice(n, src.size)
		src.size -= n
	}
	return
}

func (src *sliceSource) Data() interface{} {
	return src.data
}

type chanSource struct {
	cases []reflect.SelectCase
	zero  reflect.Value
	data  interface{}
}

func newChanSource(value reflect.Value) *chanSource {
	zeroElem := value.Type().Elem()
	return &chanSource{
		cases: []reflect.SelectCase{{Dir: reflect.SelectRecv, Chan: value}},
		zero:  reflect.Zero(reflect.SliceOf(zeroElem)),
	}
}

func (src *chanSource) Err() error {
	return nil
}

func (src *chanSource) Prepare(ctx context.Context) (size int) {
	src.cases = append(src.cases, reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(ctx.Done())})
	return -1
}

func (src *chanSource) Fetch(n int) (fetched int) {
	data := src.zero
	for fetched = 0; fetched < n; fetched++ {
		if chosen, element, ok := reflect.Select(src.cases); (chosen == 0) && ok {
			data = reflect.Append(data, element)
		} else {
			break
		}
	}
	src.data = data.Interface()
	return
}

func (src *chanSource) Data() interface{} {
	return src.data
}

func reflectSource(source interface{}) Source {
	switch value := reflect.ValueOf(source); value.Kind() {
	case reflect.Array, reflect.Slice, reflect.String:
		return newSliceSource(value)
	case reflect.Chan:
		return newChanSource(value)
	default:
		panic("A default pipeline source is not of kind Array, Slice, String, or Chan.")
	}
}

// Scanner is a wrapper around bufio.Scanner so it can act as a data source for
// pipelines. It fetches strings.
type Scanner struct {
	*bufio.Scanner
	data interface{}
}

// NewScanner returns a new Scanner to read from r. The split function defaults
// to bufio.ScanLines.
func NewScanner(r io.Reader) *Scanner {
	return &Scanner{Scanner: bufio.NewScanner(r)}
}

// Prepare implements the method of the Source interface.
func (src *Scanner) Prepare(_ context.Context) (size int) {
	return -1
}

// Fetch implements the method of the Source interface.
func (src *Scanner) Fetch(n int) (fetched int) {
	var data []string
	for fetched = 0; fetched < n; fetched++ {
		if src.Scan() {
			data = append(data, src.Text())
		} else {
			break
		}
	}
	src.data = data
	return
}

// Data implements the method of the Source interface.
func (src *Scanner) Data() interface{} {
	return src.data
}

// BytesScanner is a wrapper around bufio.Scanner so it can act as a data source
// for pipelines. It fetches slices of bytes.
type BytesScanner struct {
	*bufio.Scanner
	data interface{}
}

// NewBytesScanner returns a new BytesScanner to read from r. The split function
// defaults to bufio.ScanLines.
func NewBytesScanner(r io.Reader) *BytesScanner {
	return &BytesScanner{Scanner: bufio.NewScanner(r)}
}

// Prepare implements the method of the Source interface.
func (src *BytesScanner) Prepare(_ context.Context) (size int) {
	return -1
}

// Fetch implements the method of the Source interface.
func (src *BytesScanner) Fetch(n int) (fetched int) {
	var data [][]byte
	for fetched = 0; fetched < n; fetched++ {
		if src.Scan() {
			data = append(data, append([]byte(nil), src.Bytes()...))
		} else {
			break
		}
	}
	src.data = data
	return
}

// Data implements the method of the Source interface.
func (src *BytesScanner) Data() interface{} {
	return src.data
}

// Func is a generic source that generates data batches
// by repeatedly calling a function.
type Func struct {
	data  interface{}
	err   error
	size  int
	fetch func(size int) (data interface{}, fetched int, err error)
}

// NewFunc returns a new Func to generate data batches
// by repeatedly calling fetch.
//
// The size parameter informs the pipeline what the total
// expected size of all data batches is. Pass -1 if the
// total size is unknown or difficult to determine.
//
// The fetch function returns a data batch of the requested
// size. It returns the size of the data batch that it was
// actually able to fetch. It returns 0 if there is no more
// data to be fetched from the source; the pipeline will
// then make no further attempts to fetch more elements.
//
// The fetch function can also return an error if necessary.
func NewFunc(size int, fetch func(size int) (data interface{}, fetched int, err error)) *Func {
	return &Func{size: size, fetch: fetch}
}

// Err implements the method of the Source interface.
func (f *Func) Err() error {
	return f.err
}

// Prepare implements the method of the Source interface.
func (f *Func) Prepare(_ context.Context) int {
	return f.size
}

// Fetch implements the method of the Source interface.
func (f *Func) Fetch(size int) (fetched int) {
	f.data, fetched, f.err = f.fetch(size)
	return
}

// Data implements the method of the Source interface.
func (f *Func) Data() interface{} {
	return f.data
}

// SingletonChan is similar to a regular chan source,
// except it only accepts and passes through single
// elements instead of creating slices of elements
// from the input channel.
type SingletonChan struct {
	cases []reflect.SelectCase
	zero  reflect.Value
	data  interface{}
}

// NewSingletonChan returns a new SingletonChan to read from
// the given channel.
func NewSingletonChan(channel interface{}) *SingletonChan {
	value := reflect.ValueOf(channel)
	if value.Kind() != reflect.Chan {
		panic("parameter for pargo.pipeline.NewSingletonChan is not a channel")
	}
	return &SingletonChan{
		cases: []reflect.SelectCase{{Dir: reflect.SelectRecv, Chan: value}},
		zero:  reflect.Zero(value.Type().Elem()),
	}
}

// Err implements the method of the Source interface.
func (src *SingletonChan) Err() error {
	return nil
}

// Prepare implements the method of the Source interface.
func (src *SingletonChan) Prepare(ctx context.Context) (size int) {
	src.cases = append(src.cases, reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(ctx.Done())})
	return -1
}

// Fetch implements the method of the Source interface.
func (src *SingletonChan) Fetch(n int) (fetched int) {
	if chosen, element, ok := reflect.Select(src.cases); (chosen == 0) && ok {
		src.data = element.Interface()
		return 1
	}
	src.data = src.zero
	return 0
}

// Data implements the method of the Source interface.
func (src *SingletonChan) Data() interface{} {
	return src.data
}


================================================
FILE: pipeline/strictordnode.go
================================================
package pipeline

import (
	"sync"
)

type strictordnode struct {
	cond       *sync.Cond
	channel    chan dataBatch
	waitGroup  sync.WaitGroup
	run        int
	filters    []Filter
	receivers  []Receiver
	finalizers []Finalizer
}

// StrictOrd creates an ordered node with the given filters.
func StrictOrd(filters ...Filter) Node {
	return &strictordnode{filters: filters}
}

// Implements the TryMerge method of the Node interface.
func (node *strictordnode) TryMerge(next Node) bool {
	switch nxt := next.(type) {
	case *seqnode:
		node.filters = append(node.filters, nxt.filters...)
		node.receivers = append(node.receivers, nxt.receivers...)
		node.finalizers = append(node.finalizers, nxt.finalizers...)
		return true
	case *strictordnode:
		node.filters = append(node.filters, nxt.filters...)
		node.receivers = append(node.receivers, nxt.receivers...)
		node.finalizers = append(node.finalizers, nxt.finalizers...)
		return true
	default:
		return false
	}
}

// Implements the Begin method of the Node interface.
func (node *strictordnode) Begin(p *Pipeline, index int, dataSize *int) (keep bool) {
	node.receivers, node.finalizers = ComposeFilters(p, Ordered, dataSize, node.filters)
	node.filters = nil
	if keep = (len(node.receivers) > 0) || (len(node.finalizers) > 0); keep {
		node.cond = sync.NewCond(&sync.Mutex{})
		node.channel = make(chan dataBatch)
		node.waitGroup.Add(1)
		go func() {
			defer node.waitGroup.Done()
			for {
				select {
				case <-p.ctx.Done():
					node.cond.Broadcast()
					return
				case batch, ok := <-node.channel:
					if !ok {
						return
					}
					node.cond.L.Lock()
					if batch.seqNo != node.run {
						panic("Invalid receive order in a strictly ordered pipeline node.")
					}
					node.run++
					node.cond.L.Unlock()
					node.cond.Broadcast()
					feed(p, node.receivers, index, batch.seqNo, batch.data)
				}
			}
		}()
	}
	return
}

// Implements the Feed method of the Node interface.
func (node *strictordnode) Feed(p *Pipeline, _ int, seqNo int, data interface{}) {
	node.cond.L.Lock()
	defer node.cond.L.Unlock()
	for {
		if node.run == seqNo {
			select {
			case <-p.ctx.Done():
				return
			case node.channel <- dataBatch{seqNo, data}:
				return
			}
		}
		select {
		case <-p.ctx.Done():
			return
		default:
			node.cond.Wait()
		}
	}
}

// Implements the End method of the Node interface.
func (node *strictordnode) End() {
	close(node.channel)
	node.waitGroup.Wait()
	for _, finalize := range node.finalizers {
		finalize()
	}
	node.receivers = nil
	node.finalizers = nil
}


================================================
FILE: sequential/sequential.go
================================================
// Package sequential provides sequential implementations of the functions
// provided by the parallel and speculative packages. This is useful for testing
// and debugging.
//
// It is not recommended to use the implementations of this package for any
// other purpose, because they are almost certainly too inefficient for regular
// sequential programs.
package sequential

import (
	"fmt"

	"github.com/exascience/pargo/internal"
)

// Reduce receives one or more functions, executes them sequentially, and
// combines their results sequentially.
//
// Partial results are combined with the join function.
func Reduce(
	join func(x, y interface{}) interface{},
	firstFunction func() interface{},
	moreFunctions ...func() interface{},
) interface{} {
	result := firstFunction()
	for _, f := range moreFunctions {
		result = join(result, f())
	}
	return result
}

// ReduceFloat64 receives one or more functions, executes them sequentially, and
// combines their results sequentially.
//
// Partial results are combined with the join function.
func ReduceFloat64(
	join func(x, y float64) float64,
	firstFunction func() float64,
	moreFunctions ...func() float64,
) float64 {
	result := firstFunction()
	for _, f := range moreFunctions {
		result = join(result, f())
	}
	return result
}

// ReduceFloat64Sum receives zero or more functions, executes them sequentially,
// and adds their results sequentially.
func ReduceFloat64Sum(functions ...func() float64) float64 {
	result := float64(0)
	for _, f := range functions {
		result += f()
	}
	return result
}

// ReduceFloat64Product receives zero or more functions, executes them
// sequentially, and multiplies their results sequentially.
func ReduceFloat64Product(functions ...func() float64) float64 {
	result := float64(1)
	for _, f := range functions {
		result *= f()
	}
	return result
}

// ReduceInt receives one or more functions, executes them sequentially, and
// combines their results sequentially.
//
// Partial results are combined with the join function.
func ReduceInt(
	join func(x, y int) int,
	firstFunction func() int,
	moreFunctions ...func() int,
) int {
	result := firstFunction()
	for _, f := range moreFunctions {
		result = join(result, f())
	}
	return result
}

// ReduceIntSum receives zero or more functions, executes them sequentially, and
// adds their results sequentially.
func ReduceIntSum(functions ...func() int) int {
	result := 0
	for _, f := range functions {
		result += f()
	}
	return result
}

// ReduceIntProduct receives zero or more functions, executes them sequentially,
// and multiplies their results sequentially.
func ReduceIntProduct(functions ...func() int) int {
	result := 1
	for _, f := range functions {
		result *= f()
	}
	return result
}

// ReduceString receives one or more functions, executes them sequentially, and
// combines their results sequentially.
//
// Partial results are combined with the join function.
func ReduceString(
	join func(x, y string) string,
	firstFunction func() string,
	moreFunctions ...func() string,
) string {
	result := firstFunction()
	for _, f := range moreFunctions {
		result = join(result, f())
	}
	return result
}

// ReduceStringSum receives zero or more functions, executes them sequentially,
// and concatenates their results sequentially.
func ReduceStringSum(functions ...func() string) string {
	result := ""
	for _, f := range functions {
		result += f()
	}
	return result
}

// Do receives zero or more thunks and executes them sequentially.
func Do(thunks ...func()) {
	for _, thunk := range thunks {
		thunk()
	}
}

// And receives zero or more predicate functions and executes them sequentially,
// combining all return values with the && operator, with true as the default
// return value.
func And(predicates ...func() bool) bool {
	result := true
	for _, predicate := range predicates {
		result = result && predicate()
	}
	return result
}

// Or receives zero or more predicate functions and executes them sequentially,
// combining all return values with the || operator, with false as the default
// return value.
func Or(predicates ...func() bool) bool {
	result := false
	for _, predicate := range predicates {
		result = result || predicate()
	}
	return result
}

// Range receives a range, a batch count n, and a range function f, divides the
// range into batches, and invokes the range function for each of these batches
// sequentially, covering the half-open interval from low to high, including low
// but excluding high.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// Range panics if high < low, or if n < 0.
func Range(
	low, high, n int,
	f func(low, high int),
) {
	var recur func(int, int, int)
	recur = func(low, high, n int) {
		switch {
		case n == 1:
			f(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				f(low, high)
				return
			}
			recur(low, mid, half)
			recur(mid, high, n-half)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeAnd receives a range, a batch count n, and a range predicate function f,
// divides the range into batches, and invokes the range predicate for each of
// these batches sequentially, covering the half-open interval from low to high,
// including low but excluding high.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// RangeAnd returns by combining all return values with the && operator.
//
// RangeAnd panics if high < low, or if n < 0.
func RangeAnd(
	low, high, n int,
	f func(low, high int) bool,
) bool {
	var recur func(int, int, int) bool
	recur = func(low, high, n int) bool {
		switch {
		case n == 1:
			return f(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return f(low, high)
			}
			b0 := recur(low, mid, half)
			b1 := recur(mid, high, n-half)
			return b0 && b1
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeOr receives a range, a batch count n, and a range predicate function f,
// divides the range into batches, and invokes the range predicate for each of
// these batches sequentially, covering the half-open interval from low to high,
// including low but excluding high.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// RangeOr returns by combining all return values with the || operator.
//
// RangeOr panics if high < low, or if n < 0.
func RangeOr(
	low, high, n int,
	f func(low, high int) bool,
) bool {
	var recur func(int, int, int) bool
	recur = func(low, high, n int) bool {
		switch {
		case n == 1:
			return f(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return f(low, high)
			}
			b0 := recur(low, mid, half)
			b1 := recur(mid, high, n-half)
			return b0 || b1
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduce receives a range, a batch count, a range reduce function, and a
// join function, divides the range into batches, and invokes the range reducer
// for each of these batches sequentially, covering the half-open interval from
// low to high, including low but excluding high. The results of the range
// reducer invocations are then combined by repeated invocations of join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// RangeReduce panics if high < low, or if n < 0.
func RangeReduce(
	low, high, n int,
	reduce func(low, high int) interface{},
	join func(x, y interface{}) interface{},
) interface{} {
	var recur func(int, int, int) interface{}
	recur = func(low, high, n int) interface{} {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			left := recur(low, mid, half)
			right := recur(mid, high, n-half)
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceInt receives a range, a batch count n, a range reducer function,
// and a join function, divides the range into batches, and invokes the range
// reducer for each of these batches sequentially, covering the half-open
// interval from low to high, including low but excluding high. The results of
// the range reducer invocations are then combined by repeated invocations of
// join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// RangeReduceInt panics if high < low, or if n < 0.
func RangeReduceInt(
	low, high, n int,
	reduce func(low, high int) int,
	join func(x, y int) int,
) int {
	var recur func(int, int, int) int
	recur = func(low, high, n int) int {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			left := recur(low, mid, half)
			right := recur(mid, high, n-half)
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceIntSum receives a range, a batch count n, and a range reducer
// function, divides the range into batches, and invokes the range reducer for
// each of these batches sequentially, covering the half-open interval from low
// to high, including low but excluding high. The results of the range reducer
// invocations are then added together.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// RangeReduceIntSum panics if high < low, or if n < 0.
func RangeReduceIntSum(
	low, high, n int,
	reduce func(low, high int) int,
) int {
	var recur func(int, int, int) int
	recur = func(low, high, n int) int {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			left := recur(low, mid, half)
			right := recur(mid, high, n-half)
			return left + right
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceIntProduct receives a range, a batch count n, and a range reducer
// function, divides the range into batches, and invokes the range reducer for
// each of these batches sequentially, covering the half-open interval from low
// to high, including low but excluding high. The results of the range reducer
// invocations are then multiplied with each other.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// RangeReduceIntProduct panics if high < low, or if n < 0.
func RangeReduceIntProduct(
	low, high, n int,
	reduce func(low, high int) int,
) int {
	var recur func(int, int, int) int
	recur = func(low, high, n int) int {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			left := recur(low, mid, half)
			right := recur(mid, high, n-half)
			return left * right
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceFloat64 receives a range, a batch count n, a range reducer
// function, and a join function, divides the range into batches, and invokes
// the range reducer for each of these batches sequentially, covering the
// half-open interval from low to high, including low but excluding high. The
// results of the range reducer invocations are then combined by repeated
// invocations of join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// RangeReduceFloat64 panics if high < low, or if n < 0.
func RangeReduceFloat64(
	low, high, n int,
	reduce func(low, high int) float64,
	join func(x, y float64) float64,
) float64 {
	var recur func(int, int, int) float64
	recur = func(low, high, n int) float64 {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			left := recur(low, mid, half)
			right := recur(mid, high, n-half)
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceFloat64Sum receives a range, a batch count n, and a range reducer
// function, divides the range into batches, and invokes the range reducer for
// each of these batches sequentially, covering the half-open interval from low
// to high, including low but excluding high. The results of the range reducer
// invocations are then added together.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// RangeReduceFloat64Sum panics if high < low, or if n < 0.
func RangeReduceFloat64Sum(
	low, high, n int,
	reduce func(low, high int) float64,
) float64 {
	var recur func(int, int, int) float64
	recur = func(low, high, n int) float64 {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			left := recur(low, mid, half)
			right := recur(mid, high, n-half)
			return left + right
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceFloat64Product receives a range, a batch count n, and a range
// reducer function, divides the range into batches, and invokes the range
// reducer for each of these batches sequentially, covering the half-open
// interval from low to high, including low but excluding high. The results of
// the range reducer invocations are then multiplied with each other.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// RangeReduceFloat64Product panics if high < low, or if n < 0.
func RangeReduceFloat64Product(
	low, high, n int,
	reduce func(low, high int) float64,
) float64 {
	var recur func(int, int, int) float64
	recur = func(low, high, n int) float64 {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			left := recur(low, mid, half)
			right := recur(mid, high, n-half)
			return left * right
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceString receives a range, a batch count n, a range reducer
// function, and a join function, divides the range into batches, and invokes
// the range reducer for each of these batches sequentially, covering the
// half-open interval from low to high, including low but excluding high. The
// results of the range reducer invocations are then combined by repeated
// invocations of join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// RangeReduceString panics if high < low, or if n < 0.
func RangeReduceString(
	low, high, n int,
	reduce func(low, high int) string,
	join func(x, y string) string,
) string {
	var recur func(int, int, int) string
	recur = func(low, high, n int) string {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			left := recur(low, mid, half)
			right := recur(mid, high, n-half)
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceStringSum receives a range, a batch count n, and a range reducer
// function, divides the range into batches, and invokes the range reducer for
// each of these batches sequentially, covering the half-open interval from low
// to high, including low but excluding high. The results of the range reducer
// invocations are then concatenated together.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// RangeReduceStringSum panics if high < low, or if n < 0.
func RangeReduceStringSum(
	low, high, n int,
	reduce func(low, high int) string,
) string {
	var recur func(int, int, int) string
	recur = func(low, high, n int) string {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			left := recur(low, mid, half)
			right := recur(mid, high, n-half)
			return left + right
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}


================================================
FILE: sort/example_interface_test.go
================================================
// Copyright 2011 The Go Authors. All rights reserved. Use of this source code
// is governed by a BSD-style license that can be found in the LICENSE file.

// Adapted by Pascal Costanza for the Pargo package.

package sort_test

import (
	"fmt"
	stdsort "sort"

	sort "github.com/exascience/pargo/sort"
)

type Person struct {
	Name string
	Age  int
}

func (p Person) String() string {
	return fmt.Sprintf("%s: %d", p.Name, p.Age)
}

// ByAge implements sort.SequentialSorter, sort.Sorter, and sort.StableSorter
// for []Person based on the Age field.
type ByAge []Person

func (a ByAge) SequentialSort(i, j int) {
	stdsort.SliceStable(a, func(i, j int) bool {
		return a[i].Age < a[j].Age
	})
}

func (a ByAge) Len() int           { return len(a) }
func (a ByAge) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
func (a ByAge) Less(i, j int) bool { return a[i].Age < a[j].Age }

func (a ByAge) NewTemp() sort.StableSorter { return make(ByAge, len(a)) }

func (this ByAge) Assign(that sort.StableSorter) func(i, j, len int) {
	dst, src := this, that.(ByAge)
	return func(i, j, len int) {
		for k := 0; k < len; k++ {
			dst[i+k] = src[j+k]
		}
	}
}

func Example() {
	people := []Person{
		{"Bob", 31},
		{"John", 42},
		{"Michael", 17},
		{"Jenny", 26},
	}

	fmt.Println(people)
	sort.Sort(ByAge(people))
	fmt.Println(people)

	people = []Person{
		{"Bob", 31},
		{"John", 42},
		{"Michael", 17},
		{"Jenny", 26},
	}

	fmt.Println(people)
	sort.StableSort(ByAge(people))
	fmt.Println(people)

	// Output:
	// [Bob: 31 John: 42 Michael: 17 Jenny: 26]
	// [Michael: 17 Jenny: 26 Bob: 31 John: 42]
	// [Bob: 31 John: 42 Michael: 17 Jenny: 26]
	// [Michael: 17 Jenny: 26 Bob: 31 John: 42]
}


================================================
FILE: sort/mergesort.go
================================================
package sort

import (
	"sync"

	"github.com/exascience/pargo/parallel"
)

const msortGrainSize = 0x3000

// StableSorter is a type, typically a collection, that can be sorted by
// StableSort in this package. The methods require that ranges of elements of
// the collection can be enumerated by integer indices.
type StableSorter interface {
	SequentialSorter

	// NewTemp creates a new collection that can hold as many elements as the
	// original collection. This is temporary memory that StableSort needs
	// during sorting but no longer afterwards. The temporary collection does
	// not need to be initialized.
	NewTemp() StableSorter

	// Len is the number of elements in the collection.
	Len() int

	// Less reports whether the element with index i should sort before the
	// element with index j.
	Less(i, j int) bool

	// Assign returns a function that assigns ranges from source to the receiver
	// collection. The element with index i is the first element in the receiver
	// to assign to, and the element with index j is the first element in the
	// source collection to assign from, with len determining the number of
	// elements to assign. The effect should be the same as receiver[i:i+len] =
	// source[j:j+len].
	Assign(source StableSorter) func(i, j, len int)
}

type sorter struct {
	less   func(i, j int) bool
	assign func(i, j, len int)
}

func binarySearchEq(x int, T *sorter, p, r int) int {
	low, high := p, r+1
	if low > high {
		return low
	}
	for low < high {
		mid := (low + high) / 2
		if !T.less(mid, x) {
			high = mid
		} else {
			low = mid + 1
		}
	}
	return high
}

func binarySearchNeq(x int, T *sorter, p, r int) int {
	low, high := p, r+1
	if low > high {
		return low
	}
	for low < high {
		mid := (low + high) / 2
		if T.less(x, mid) {
			high = mid
		} else {
			low = mid + 1
		}
	}
	return high
}

func sMerge(T *sorter, p1, r1, p2, r2 int, A *sorter, p3 int) {
	for {
		if p2 > r2 {
			A.assign(p3, p1, r1+1-p1)
			return
		}

		q1 := p1
		for (p1 <= r1) && !T.less(p2, p1) {
			p1++
		}
		n1 := p1 - q1
		A.assign(p3, q1, n1)
		p3 += n1

		if p1 > r1 {
			A.assign(p3, p2, r2+1-p2)
			return
		}

		q2 := p2
		for (p2 <= r2) && T.less(p2, p1) {
			p2++
		}
		n2 := p2 - q2
		A.assign(p3, q2, n2)
		p3 += n2
	}
}

func pMerge(T *sorter, p1, r1, p2, r2 int, A *sorter, p3 int) {
	n1 := r1 - p1 + 1
	n2 := r2 - p2 + 1
	if (n1 + n2) < msortGrainSize {
		sMerge(T, p1, r1, p2, r2, A, p3)
		return
	}
	if n1 > n2 {
		if n1 == 0 {
			return
		}
		q1 := (p1 + r1) / 2
		q2 := binarySearchEq(q1, T, p2, r2)
		q3 := p3 + (q1 - p1) + (q2 - p2)
		A.assign(q3, q1, 1)
		parallel.Do(
			func() { pMerge(T, p1, q1-1, p2, q2-1, A, p3) },
			func() { pMerge(T, q1+1, r1, q2, r2, A, q3+1) },
		)
	} else {
		if n2 == 0 {
			return
		}
		q2 := (p2 + r2) / 2
		q1 := binarySearchNeq(q2, T, p1, r1)
		q3 := p3 + (q1 - p1) + (q2 - p2)
		A.assign(q3, q2, 1)
		parallel.Do(
			func() { pMerge(T, p1, q1-1, p2, q2-1, A, p3) },
			func() { pMerge(T, q1, r1, q2+1, r2, A, q3+1) },
		)
	}
}

// StableSort uses a parallel implementation of merge sort, also known as
// cilksort.
//
// StableSort is only stable if data's SequentialSort method is stable.
//
// StableSort is good for large core counts and large collection sizes, but
// needs a shallow copy of the data collection as additional temporary memory.
func StableSort(data StableSorter) {
	// See https://en.wikipedia.org/wiki/Introduction_to_Algorithms and
	// https://www.clear.rice.edu/comp422/lecture-notes/ for details on the algorithm.
	size := data.Len()
	sSort := data.SequentialSort
	if size < msortGrainSize {
		sSort(0, size)
		return
	}
	var T, A *sorter
	var temp sync.WaitGroup
	temp.Add(1)
	go func() {
		defer temp.Done()
		a := data.NewTemp()
		T = &sorter{data.Less, data.Assign(a)}
		A = &sorter{a.Less, a.Assign(data)}
	}()
	var pSort func(int, int)
	pSort = func(index, size int) {
		if size < msortGrainSize {
			sSort(index, index+size)
		} else {
			q1 := size / 4
			q2 := q1 + q1
			q3 := q2 + q1
			parallel.Do(
				func() { pSort(index, q1) },
				func() { pSort(index+q1, q1) },
				func() { pSort(index+q2, q1) },
				func() { pSort(index+q3, size-q3) },
			)
			temp.Wait()
			parallel.Do(
				func() { pMerge(T, index, index+q1-1, index+q1, index+q2-1, A, index) },
				func() { pMerge(T, index+q2, index+q3-1, index+q3, index+size-1, A, index+q2) },
			)
			pMerge(A, index, index+q2-1, index+q2, index+size-1, T, index)
		}
	}
	pSort(0, size)
}


================================================
FILE: sort/quicksort.go
================================================
package sort

import (
	"sort"

	"github.com/exascience/pargo/parallel"
)

const qsortGrainSize = 0x500

// Sorter is a type, typically a collection, that can be sorted by Sort in this
// package. The methods require that (ranges of) elements of the collection can
// be enumerated by integer indices.
type Sorter interface {
	SequentialSorter
	sort.Interface
}

func medianOfThree(data sort.Interface, l, m, r int) int {
	if data.Less(l, m) {
		if data.Less(m, r) {
			return m
		} else if data.Less(l, r) {
			return r
		}
	} else if data.Less(r, m) {
		return m
	} else if data.Less(r, l) {
		return r
	}
	return l
}

func pseudoMedianOfNine(data sort.Interface, index, size int) int {
	offset := size / 8
	return medianOfThree(data,
		medianOfThree(data, index, index+offset, index+offset*2),
		medianOfThree(data, index+offset*3, index+offset*4, index+offset*5),
		medianOfThree(data, index+offset*6, index+offset*7, index+size-1),
	)
}

// Sort uses a parallel quicksort implementation.
//
// It is good for small core counts and small collection sizes.
func Sort(data Sorter) {
	size := data.Len()
	sSort := data.SequentialSort
	if size < qsortGrainSize {
		sSort(0, size)
		return
	}
	var pSort func(int, int)
	pSort = func(index, size int) {
		if size < qsortGrainSize {
			sSort(index, index+size)
		} else {
			m := pseudoMedianOfNine(data, index, size)
			if m > index {
				data.Swap(index, m)
			}
			i, j := index, index+size
		outer:
			for {
				for {
					j--
					if !data.Less(index, j) {
						break
					}
				}
				for {
					if i == j {
						break outer
					}
					i++
					if !data.Less(i, index) {
						break
					}
				}
				if i == j {
					break outer
				}
				data.Swap(i, j)
			}
			data.Swap(j, index)
			i = j + 1
			parallel.Do(
				func() { pSort(index, j-index) },
				func() { pSort(i, index+size-i) },
			)
		}
	}
	if !IsSorted(data) {
		pSort(0, size)
	}
}


================================================
FILE: sort/sort.go
================================================
// Package sort provides implementations of parallel sorting algorithms.
package sort

import (
	"sort"
	"sync/atomic"

	"github.com/exascience/pargo/speculative"
)

// SequentialSorter is a type, typically a collection, that can be sequentially
// sorted. This is needed as a base case for the parallel sorting algorithms in
// this package. It is recommended to implement this interface by using the
// functions in the sort package of Go's standard library.
type SequentialSorter interface {
	// Sort the range that starts at index i and ends at index j. If the
	// collection that is represented by this interface is a slice, then the
	// slice expression collection[i:j] returns the correct slice to be sorted.
	SequentialSort(i, j int)
}
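A typical SequentialSort implementation delegates the subrange to the standard library, as the doc comment recommends. A minimal sketch for a made-up slice type (`byLen` is illustrative, not part of pargo):

```go
package main

import (
	"fmt"
	"sort"
)

// byLen is a hypothetical collection type that sorts strings by length.
type byLen []string

// SequentialSort sorts the half-open range [i, j) using the standard
// library's sort.Slice on the subslice, the recommended base case for the
// parallel sorting algorithms in this package.
func (s byLen) SequentialSort(i, j int) {
	sub := s[i:j]
	sort.Slice(sub, func(a, b int) bool { return len(sub[a]) < len(sub[b]) })
}

func main() {
	s := byLen{"ccc", "a", "bb", "dddd"}
	s.SequentialSort(0, len(s))
	fmt.Println(s) // prints [a bb ccc dddd]
}
```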

const serialCutoff = 10

// IsSorted determines in parallel whether data is already sorted. It attempts
// to terminate early when the return value is false.
func IsSorted(data sort.Interface) bool {
	size := data.Len()
	if size < qsortGrainSize {
		return sort.IsSorted(data)
	}
	for i := 1; i < serialCutoff; i++ {
		if data.Less(i, i-1) {
			return false
		}
	}
	var done int32
	defer atomic.StoreInt32(&done, 1)
	var pTest func(int, int) bool
	pTest = func(index, size int) bool {
		if size < qsortGrainSize {
			for i := index; i < index+size; i++ {
				if ((i % 1024) == 0) && (atomic.LoadInt32(&done) != 0) {
					return false
				}
				if data.Less(i, i-1) {
					return false
				}
			}
			return true
		}
		half := size / 2
		result := speculative.And(
			func() bool { return pTest(index, half) },
			func() bool { return pTest(index+half, size-half) },
		)
		return result
	}
	return pTest(serialCutoff, size-serialCutoff)
}
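The early termination in IsSorted hinges on a shared flag, set by whichever goroutine finds an out-of-order pair and polled by the others (every 1024 elements above). A condensed, stdlib-only sketch of the same idea with a fixed two-way split (simplified: the real code recurses and uses speculative.And):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// parallelSorted checks the two halves of s concurrently; either half can
// abort the other through the shared done flag, as in IsSorted above.
func parallelSorted(s []int) bool {
	var done int32
	check := func(lo, hi int) bool { // verifies s[i-1] <= s[i] for i in [lo, hi)
		for i := lo; i < hi; i++ {
			if atomic.LoadInt32(&done) != 0 {
				return false // the other half already found a violation
			}
			if s[i] < s[i-1] {
				atomic.StoreInt32(&done, 1)
				return false
			}
		}
		return true
	}
	half := len(s) / 2
	var right bool
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		right = check(half, len(s))
	}()
	left := check(1, half)
	wg.Wait()
	return left && right
}

func main() {
	fmt.Println(parallelSorted([]int{1, 2, 3, 4}), parallelSorted([]int{2, 1}))
}
```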

// IntSlice attaches the methods of sort.Interface, SequentialSorter, Sorter,
// and StableSorter to []int, sorting in increasing order.
type IntSlice []int

// SequentialSort implements the method of the SequentialSorter interface.
func (s IntSlice) SequentialSort(i, j int) {
	sort.Stable(sort.IntSlice(s[i:j]))
}

func (s IntSlice) Len() int {
	return len(s)
}

func (s IntSlice) Less(i, j int) bool {
	return s[i] < s[j]
}

func (s IntSlice) Swap(i, j int) {
	s[i], s[j] = s[j], s[i]
}

// NewTemp implements the method of the StableSorter interface.
func (s IntSlice) NewTemp() StableSorter {
	return IntSlice(make([]int, len(s)))
}

// Assign implements the method of the StableSorter interface.
func (s IntSlice) Assign(source StableSorter) func(i, j, len int) {
	dst, src := s, source.(IntSlice)
	return func(i, j, len int) {
		copy(dst[i:i+len], src[j:j+len])
	}
}

// IntsAreSorted determines in parallel whether a slice of ints is already
// sorted in increasing order. It attempts to terminate early when the return
// value is false.
func IntsAreSorted(a []int) bool {
	return IsSorted(IntSlice(a))
}

// Float64Slice attaches the methods of sort.Interface, SequentialSorter,
// Sorter, and StableSorter to []float64, sorting in increasing order.
type Float64Slice []float64

// SequentialSort implements the method of the SequentialSorter interface.
func (s Float64Slice) SequentialSort(i, j int) {
	sort.Stable(sort.Float64Slice(s[i:j]))
}

func (s Float64Slice) Len() int {
	return len(s)
}

func (s Float64Slice) Less(i, j int) bool {
	return s[i] < s[j]
}

func (s Float64Slice) Swap(i, j int) {
	s[i], s[j] = s[j], s[i]
}

// NewTemp implements the method of the StableSorter interface.
func (s Float64Slice) NewTemp() StableSorter {
	return Float64Slice(make([]float64, len(s)))
}

// Assign implements the method of the StableSorter interface.
func (s Float64Slice) Assign(source StableSorter) func(i, j, len int) {
	dst, src := s, source.(Float64Slice)
	return func(i, j, len int) {
		copy(dst[i:i+len], src[j:j+len])
	}
}

// Float64sAreSorted determines in parallel whether a slice of float64s is
// already sorted in increasing order. It attempts to terminate early when the
// return value is false.
func Float64sAreSorted(a []float64) bool {
	return IsSorted(Float64Slice(a))
}

// StringSlice attaches the methods of sort.Interface, SequentialSorter, Sorter,
// and StableSorter to []string, sorting in increasing order.
type StringSlice []string

// SequentialSort implements the method of the SequentialSorter interface.
func (s StringSlice) SequentialSort(i, j int) {
	sort.Stable(sort.StringSlice(s[i:j]))
}

func (s StringSlice) Len() int {
	return len(s)
}

func (s StringSlice) Less(i, j int) bool {
	return s[i] < s[j]
}

func (s StringSlice) Swap(i, j int) {
	s[i], s[j] = s[j], s[i]
}

// NewTemp implements the method of the StableSorter interface.
func (s StringSlice) NewTemp() StableSorter {
	return StringSlice(make([]string, len(s)))
}

// Assign implements the method of the StableSorter interface.
func (s StringSlice) Assign(source StableSorter) func(i, j, len int) {
	dst, src := s, source.(StringSlice)
	return func(i, j, len int) {
		copy(dst[i:i+len], src[j:j+len])
	}
}

// StringsAreSorted determines in parallel whether a slice of strings is already
// sorted in increasing order. It attempts to terminate early when the return
// value is false.
func StringsAreSorted(a []string) bool {
	return IsSorted(StringSlice(a))
}


================================================
FILE: sort/sort_test.go
================================================
package sort

import (
	"bytes"
	"math/rand"
	"sort"
	"testing"
)

type (
	By func(i, j int) bool

	IntSliceSorter struct {
		slice []int
		by    By
	}
)

func (s IntSliceSorter) NewTemp() StableSorter {
	return IntSliceSorter{make([]int, len(s.slice)), s.by}
}

func (s IntSliceSorter) Len() int {
	return len(s.slice)
}

func (s IntSliceSorter) Less(i, j int) bool {
	return s.by(s.slice[i], s.slice[j])
}

func (s IntSliceSorter) Swap(i, j int) {
	s.slice[i], s.slice[j] = s.slice[j], s.slice[i]
}

func (s IntSliceSorter) Assign(t StableSorter) func(i, j, len int) {
	dst, src := s.slice, t.(IntSliceSorter).slice
	return func(i, j, len int) {
		for k := 0; k < len; k++ {
			dst[i+k] = src[j+k]
		}
	}
}

func (s IntSliceSorter) SequentialSort(i, j int) {
	slice, by := s.slice[i:j], s.by
	sort.Slice(slice, func(i, j int) bool {
		return by(slice[i], slice[j])
	})
}

func (by By) SequentialSort(slice []int) {
	sort.Sort(IntSliceSorter{slice, by})
}

func (by By) ParallelStableSort(slice []int) {
	StableSort(IntSliceSorter{slice, by})
}

func (by By) ParallelSort(slice []int) {
	Sort(IntSliceSorter{slice, by})
}

func (by By) IsSorted(slice []int) bool {
	return sort.IsSorted(IntSliceSorter{slice, by})
}

func makeRandomSlice(size, limit int) []int {
	result := make([]int, size)
	for i := 0; i < size; i++ {
		result[i] = rand.Intn(limit)
	}
	return result
}

func TestSort(t *testing.T) {
	orgSlice := makeRandomSlice(100*0x6000, 100*100*0x6000)
	s1 := make([]int, len(orgSlice))
	s2 := make([]int, len(orgSlice))
	copy(s1, orgSlice)
	copy(s2, orgSlice)

	t.Run("ParallelStableSort", func(t *testing.T) {
		By(func(i, j int) bool { return i < j }).ParallelStableSort(s1)
		if !By(func(i, j int) bool { return i < j }).IsSorted(s1) {
			t.Errorf("parallel stable sort incorrect")
		}
	})

	t.Run("ParallelSort", func(t *testing.T) {
		By(func(i, j int) bool { return i < j }).ParallelSort(s2)
		if !By(func(i, j int) bool { return i < j }).IsSorted(s2) {
			t.Errorf("parallel sort incorrect")
		}
	})
}

func TestIntSort(t *testing.T) {
	orgSlice := makeRandomSlice(100*0x6000, 100*100*0x6000)
	s1 := make([]int, len(orgSlice))
	s2 := make([]int, len(orgSlice))
	copy(s1, orgSlice)
	copy(s2, orgSlice)

	t.Run("ParallelStableSort IntSlice", func(t *testing.T) {
		StableSort(IntSlice(s1))
		if !sort.IntsAreSorted(s1) {
			t.Errorf("parallel stable sort on IntSlice incorrect")
		}
		if !IntsAreSorted(s1) {
			t.Errorf("parallel IntsAreSorted incorrect")
		}
	})

	t.Run("ParallelSort IntSlice", func(t *testing.T) {
		Sort(IntSlice(s2))
		if !sort.IntsAreSorted(s2) {
			t.Errorf("parallel sort on IntSlice incorrect")
		}
		if !IntsAreSorted(s2) {
			t.Errorf("parallel IntsAreSorted incorrect")
		}
	})
}

func makeRandomFloat64Slice(size int) []float64 {
	result := make([]float64, size)
	for i := 0; i < size; i++ {
		result[i] = rand.NormFloat64()
	}
	return result
}

func TestFloat64Sort(t *testing.T) {
	orgSlice := makeRandomFloat64Slice(100 * 0x6000)
	s1 := make([]float64, len(orgSlice))
	s2 := make([]float64, len(orgSlice))
	copy(s1, orgSlice)
	copy(s2, orgSlice)

	t.Run("ParallelStableSort Float64Slice", func(t *testing.T) {
		StableSort(Float64Slice(s1))
		if !sort.Float64sAreSorted(s1) {
			t.Errorf("parallel stable sort on Float64Slice incorrect")
		}
		if !Float64sAreSorted(s1) {
			t.Errorf("parallel Float64sAreSorted incorrect")
		}
	})

	t.Run("ParallelSort Float64Slice", func(t *testing.T) {
		Sort(Float64Slice(s2))
		if !sort.Float64sAreSorted(s2) {
			t.Errorf("parallel sort on Float64Slice incorrect")
		}
		if !Float64sAreSorted(s2) {
			t.Errorf("parallel Float64sAreSorted incorrect")
		}
	})
}

func makeRandomStringSlice(size, lenlimit int, limit int32) []string {
	result := make([]string, size)
	for i := 0; i < size; i++ {
		var buf bytes.Buffer
		len := rand.Intn(lenlimit)
		for j := 0; j < len; j++ {
			buf.WriteRune(rand.Int31n(limit))
		}
		result[i] = buf.String()
	}
	return result
}

func TestStringSort(t *testing.T) {
	orgSlice := makeRandomStringSlice(100*0x6000, 256, 16384)
	s1 := make([]string, len(orgSlice))
	s2 := make([]string, len(orgSlice))
	copy(s1, orgSlice)
	copy(s2, orgSlice)

	t.Run("ParallelStableSort StringSlice", func(t *testing.T) {
		StableSort(StringSlice(s1))
		if !sort.StringsAreSorted(s1) {
			t.Errorf("parallel stable sort on StringSlice incorrect")
		}
		if !StringsAreSorted(s1) {
			t.Errorf("parallel StringsAreSorted incorrect")
		}
	})

	t.Run("ParallelSort StringSlice", func(t *testing.T) {
		Sort(StringSlice(s2))
		if !sort.StringsAreSorted(s2) {
			t.Errorf("parallel sort on StringSlice incorrect")
		}
		if !StringsAreSorted(s2) {
			t.Errorf("parallel StringsAreSorted incorrect")
		}
	})
}

type (
	box struct {
		primary, secondary int
	}

	boxSlice []box
)

func makeRandomBoxSlice(size int) boxSlice {
	result := make([]box, size)
	half := ((size - 1) / 2) + 1
	for i := 0; i < size; i++ {
		result[i].primary = rand.Intn(half)
		result[i].secondary = i + 1
	}
	return result
}

func (s boxSlice) NewTemp() StableSorter {
	return boxSlice(make([]box, len(s)))
}

func (s boxSlice) Len() int {
	return len(s)
}

func (s boxSlice) Less(i, j int) bool {
	return s[i].primary < s[j].primary
}

func (s boxSlice) Assign(source StableSorter) func(i, j, len int) {
	dst, src := s, source.(boxSlice)
	return func(i, j, len int) {
		for k := 0; k < len; k++ {
			dst[i+k] = src[j+k]
		}
	}
}

func (s boxSlice) SequentialSort(i, j int) {
	slice := s[i:j]
	sort.SliceStable(slice, func(i, j int) bool {
		return slice[i].primary < slice[j].primary
	})
}

func checkStable(b boxSlice) bool {
	m := make(map[int]int)
	for _, el := range b {
		if m[el.primary] < el.secondary {
			m[el.primary] = el.secondary
		} else {
			return false
		}
	}
	return true
}

func TestStableSort(t *testing.T) {
	orgSlice := makeRandomBoxSlice(100 * 0x6000)
	s1 := make(boxSlice, len(orgSlice))
	copy(s1, orgSlice)

	t.Run("ParallelStableSort boxSlice", func(t *testing.T) {
		StableSort(s1)
		if !sort.SliceIsSorted(s1, func(i, j int) bool {
			return s1[i].primary < s1[j].primary
		}) {
			t.Errorf("parallel stable sort on boxSlice incorrect")
		}
	})

	t.Run("CheckStable ParallelStableSort boxSlice", func(t *testing.T) {
		if !checkStable(s1) {
			t.Errorf("parallel stable sort on boxSlice not stable")
		}
	})
}

func BenchmarkSort(b *testing.B) {
	orgSlice := makeRandomSlice(100*0x6000, 100*100*0x6000)
	s1 := make([]int, len(orgSlice))
	s2 := make([]int, len(orgSlice))
	s3 := make([]int, len(orgSlice))

	b.Run("SequentialSort", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			b.StopTimer()
			copy(s1, orgSlice)
			b.StartTimer()
			By(func(i, j int) bool { return i < j }).SequentialSort(s1)
		}
	})

	b.Run("ParallelStableSort", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			b.StopTimer()
			copy(s2, orgSlice)
			b.StartTimer()
			By(func(i, j int) bool { return i < j }).ParallelStableSort(s2)
		}
	})

	b.Run("ParallelSort", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			b.StopTimer()
			copy(s3, orgSlice)
			b.StartTimer()
			By(func(i, j int) bool { return i < j }).ParallelSort(s3)
		}
	})
}


================================================
FILE: speculative/speculative.go
================================================
// Package speculative provides functions for expressing parallel algorithms,
// similar to the functions in package parallel, except that the implementations
// here terminate early when they can.
//
// See https://github.com/ExaScience/pargo/wiki/TaskParallelism for a general
// overview.
package speculative

import (
	"fmt"
	"sync"

	"github.com/exascience/pargo/internal"
)

// Reduce receives one or more functions, executes them in parallel, and
// combines their results with the join function in parallel.
//
// Each function is invoked in its own goroutine. Reduce returns either when all
// functions have terminated with a second return value of false; or when one or
// more functions return a second return value of true. In the latter case, the
// first return value of the left-most function that returned true as a second
// return value becomes the final result, without waiting for the other
// functions to terminate.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and Reduce eventually panics with the left-most recovered panic
// value.
func Reduce(
	join func(x, y interface{}) (interface{}, bool),
	firstFunction func() (interface{}, bool),
	moreFunctions ...func() (interface{}, bool),
) (interface{}, bool) {
	if len(moreFunctions) == 0 {
		return firstFunction()
	}
	var left, right interface{}
	var b0, b1 bool
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	if len(moreFunctions) == 1 {
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right, b1 = moreFunctions[0]()
		}()
		left, b0 = firstFunction()
	} else {
		half := (len(moreFunctions) + 1) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right, b1 = Reduce(join, moreFunctions[half], moreFunctions[half+1:]...)
		}()
		left, b0 = Reduce(join, firstFunction, moreFunctions[:half]...)
	}
	if b0 {
		return left, true
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	if b1 {
		return right, true
	}
	return join(left, right)
}
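The core pattern behind Reduce, stripped down to its two-function case, can be sketched with only the standard library. This is illustrative, not the package's implementation: panic propagation via internal.WrapPanic and the recursive splitting are elided, and the sketch uses int instead of interface{}:

```go
package main

import (
	"fmt"
	"sync"
)

// speculate runs f and g in parallel. If f reports success (second return
// value true), its value wins immediately without waiting for g; otherwise
// it waits for g and either takes g's successful value or joins the two.
func speculate(
	join func(x, y int) (int, bool),
	f, g func() (int, bool),
) (int, bool) {
	var right int
	var b1 bool
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		right, b1 = g()
	}()
	left, b0 := f()
	if b0 {
		return left, true // left-most success: return without waiting for g
	}
	wg.Wait()
	if b1 {
		return right, true
	}
	return join(left, right)
}

func main() {
	sum, found := speculate(
		func(x, y int) (int, bool) { return x + y, false },
		func() (int, bool) { return 1, false },
		func() (int, bool) { return 2, false },
	)
	fmt.Println(sum, found) // prints 3 false: neither side succeeded, so join ran
}
```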

// ReduceFloat64 receives one or more functions, executes them in parallel, and
// combines their results with the join function in parallel.
//
// Each function is invoked in its own goroutine. ReduceFloat64 returns either
// when all functions have terminated with a second return value of false; or
// when one or more functions return a second return value of true. In the
// latter case, the first return value of the left-most function that returned
// true as a second return value becomes the final result, without waiting for
// the other functions to terminate.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and ReduceFloat64 eventually panics with the left-most recovered
// panic value.
func ReduceFloat64(
	join func(x, y float64) (float64, bool),
	firstFunction func() (float64, bool),
	moreFunctions ...func() (float64, bool),
) (float64, bool) {
	if len(moreFunctions) == 0 {
		return firstFunction()
	}
	var left, right float64
	var b0, b1 bool
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	if len(moreFunctions) == 1 {
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right, b1 = moreFunctions[0]()
		}()
		left, b0 = firstFunction()
	} else {
		half := (len(moreFunctions) + 1) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right, b1 = ReduceFloat64(join, moreFunctions[half], moreFunctions[half+1:]...)
		}()
		left, b0 = ReduceFloat64(join, firstFunction, moreFunctions[:half]...)
	}
	if b0 {
		return left, true
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	if b1 {
		return right, true
	}
	return join(left, right)
}

// ReduceInt receives one or more functions, executes them in parallel, and
// combines their results with the join function in parallel.
//
// Each function is invoked in its own goroutine. ReduceInt returns either when
// all functions have terminated with a second return value of false; or when
// one or more functions return a second return value of true. In the latter
// case, the first return value of the left-most function that returned true as
// a second return value becomes the final result, without waiting for the other
// functions to terminate.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and ReduceInt eventually panics with the left-most recovered panic
// value.
func ReduceInt(
	join func(x, y int) (int, bool),
	firstFunction func() (int, bool),
	moreFunctions ...func() (int, bool),
) (int, bool) {
	if len(moreFunctions) == 0 {
		return firstFunction()
	}
	var left, right int
	var b0, b1 bool
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	if len(moreFunctions) == 1 {
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right, b1 = moreFunctions[0]()
		}()
		left, b0 = firstFunction()
	} else {
		half := (len(moreFunctions) + 1) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right, b1 = ReduceInt(join, moreFunctions[half], moreFunctions[half+1:]...)
		}()
		left, b0 = ReduceInt(join, firstFunction, moreFunctions[:half]...)
	}
	if b0 {
		return left, true
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	if b1 {
		return right, true
	}
	return join(left, right)
}

// ReduceString receives one or more functions, executes them in parallel, and
// combines their results with the join function in parallel.
//
// Each function is invoked in its own goroutine. ReduceString returns either
// when all functions have terminated with a second return value of false; or
// when one or more functions return a second return value of true. In the
// latter case, the first return value of the left-most function that returned
// true as a second return value becomes the final result, without waiting for
// the other functions to terminate.
//
// If one or more functions panic, the corresponding goroutines recover the
// panics, and ReduceString eventually panics with the left-most recovered panic
// value.
func ReduceString(
	join func(x, y string) (string, bool),
	firstFunction func() (string, bool),
	moreFunctions ...func() (string, bool),
) (string, bool) {
	if len(moreFunctions) == 0 {
		return firstFunction()
	}
	var left, right string
	var b0, b1 bool
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	if len(moreFunctions) == 1 {
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right, b1 = moreFunctions[0]()
		}()
		left, b0 = firstFunction()
	} else {
		half := (len(moreFunctions) + 1) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			right, b1 = ReduceString(join, moreFunctions[half], moreFunctions[half+1:]...)
		}()
		left, b0 = ReduceString(join, firstFunction, moreFunctions[:half]...)
	}
	if b0 {
		return left, true
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	if b1 {
		return right, true
	}
	return join(left, right)
}

// Do receives zero or more thunks and executes them in parallel.
//
// Each function is invoked in its own goroutine. Do returns either when all
// functions have terminated with a return value of false; or when one or more
// functions return true, without waiting for the other functions to terminate.
//
// If one or more thunks panic, the corresponding goroutines recover the panics,
// and Do may eventually panic with the left-most recovered panic value.
func Do(thunks ...func() bool) bool {
	switch len(thunks) {
	case 0:
		return false
	case 1:
		return thunks[0]()
	}
	var b0, b1 bool
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	switch len(thunks) {
	case 2:
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			b1 = thunks[1]()
		}()
		b0 = thunks[0]()
	default:
		half := len(thunks) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			b1 = Do(thunks[half:]...)
		}()
		b0 = Do(thunks[:half]...)
	}
	if b0 {
		return true
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return b1
}

// And receives zero or more predicate functions and executes them in parallel.
//
// Each predicate is invoked in its own goroutine, and And returns true if all
// of them return true; or And returns false when at least one of them returns
// false, without waiting for the other predicates to terminate.
//
// If one or more predicates panic, the corresponding goroutines recover the
// panics, and And may eventually panic with the left-most recovered panic
// value.
func And(predicates ...func() bool) bool {
	switch len(predicates) {
	case 0:
		return true
	case 1:
		return predicates[0]()
	}
	var b0, b1 bool
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	switch len(predicates) {
	case 2:
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			b1 = predicates[1]()
		}()
		b0 = predicates[0]()
	default:
		half := len(predicates) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			b1 = And(predicates[half:]...)
		}()
		b0 = And(predicates[:half]...)
	}
	if !b0 {
		return false
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return b1
}

// Or receives zero or more predicate functions and executes them in parallel.
//
// Each predicate is invoked in its own goroutine, and Or returns false if all
// of them return false; or Or returns true when at least one of them returns
// true, without waiting for the other predicates to terminate.
//
// If one or more predicates panic, the corresponding goroutines recover the
// panics, and Or may eventually panic with the left-most recovered panic value.
func Or(predicates ...func() bool) bool {
	switch len(predicates) {
	case 0:
		return false
	case 1:
		return predicates[0]()
	}
	var b0, b1 bool
	var p interface{}
	var wg sync.WaitGroup
	wg.Add(1)
	switch len(predicates) {
	case 2:
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			b1 = predicates[1]()
		}()
		b0 = predicates[0]()
	default:
		half := len(predicates) / 2
		go func() {
			defer func() {
				p = internal.WrapPanic(recover())
				wg.Done()
			}()
			b1 = Or(predicates[half:]...)
		}()
		b0 = Or(predicates[:half]...)
	}
	if b0 {
		return true
	}
	wg.Wait()
	if p != nil {
		panic(p)
	}
	return b1
}

// Range receives a range, a batch count n, and a range function f, divides the
// range into batches, and invokes the range function for each of these batches
// in parallel, covering the half-open interval from low to high, including low
// but excluding high.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range function is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and Range returns either when all range functions have
// terminated with a return value of false; or when one or more range functions
// return true, without waiting for the other range functions to terminate.
//
// Range panics if high < low, or if n < 0.
//
// If one or more range functions panic, the corresponding goroutines recover
// the panics, and Range may eventually panic with the left-most recovered panic
// value.
func Range(
	low, high, n int,
	f func(low, high int) bool,
) bool {
	var recur func(int, int, int) bool
	recur = func(low, high, n int) bool {
		switch {
		case n == 1:
			return f(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return f(low, high)
			}
			var b1 bool
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				b1 = recur(mid, high, n-half)
			}()
			if recur(low, mid, half) {
				return true
			}
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return b1
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}
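A typical use of this pattern is a parallel membership test that stops as soon as any batch finds the needle. A stdlib-only sketch with a fixed two-way split (illustrative; internal.ComputeNofBatches, recursive batching, and panic propagation are elided):

```go
package main

import (
	"fmt"
	"sync"
)

// containsPar reports whether needle occurs in s, scanning the two halves of
// the index range in parallel and, like speculative.Range, returning early
// when the left half succeeds without waiting for the right half.
func containsPar(s []int, needle int) bool {
	scan := func(low, high int) bool {
		for i := low; i < high; i++ {
			if s[i] == needle {
				return true
			}
		}
		return false
	}
	mid := len(s) / 2
	var b1 bool
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		b1 = scan(mid, len(s))
	}()
	if scan(0, mid) {
		return true // early exit: do not wait for the right half
	}
	wg.Wait()
	return b1
}

func main() {
	fmt.Println(containsPar([]int{5, 8, 13, 21}, 13), containsPar([]int{5, 8}, 7))
}
```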

// RangeAnd receives a range, a batch count n, and a range predicate function f,
// divides the range into batches, and invokes the range predicate for each of
// these batches in parallel, covering the half-open interval from low to high,
// including low but excluding high.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range predicate is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeAnd returns true if all of them return true; or
// RangeAnd returns false when at least one of them returns false, without
// waiting for the other range predicates to terminate.
//
// RangeAnd panics if high < low, or if n < 0.
//
// If one or more range predicates panic, the corresponding goroutines recover
// the panics, and RangeAnd may eventually panic with the left-most recovered
// panic value.
func RangeAnd(
	low, high, n int,
	f func(low, high int) bool,
) bool {
	var recur func(int, int, int) bool
	recur = func(low, high, n int) bool {
		switch {
		case n == 1:
			return f(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return f(low, high)
			}
			var b1 bool
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				b1 = recur(mid, high, n-half)
			}()
			if !recur(low, mid, half) {
				return false
			}
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return b1
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeOr receives a range, a batch count n, and a range predicate function f,
// divides the range into batches, and invokes the range predicate for each of
// these batches in parallel, covering the half-open interval from low to high,
// including low but excluding high.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range predicate is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeOr returns false if all of them return false; or
// RangeOr returns true when at least one of them returns true, without waiting
// for the other range predicates to terminate.
//
// RangeOr panics if high < low, or if n < 0.
//
// If one or more range predicates panic, the corresponding goroutines recover
// the panics, and RangeOr may eventually panic with the left-most recovered
// panic value.
func RangeOr(
	low, high, n int,
	f func(low, high int) bool,
) bool {
	var recur func(int, int, int) bool
	recur = func(low, high, n int) bool {
		switch {
		case n == 1:
			return f(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return f(low, high)
			}
			var b1 bool
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				b1 = recur(mid, high, n-half)
			}()
			if recur(low, mid, half) {
				return true
			}
			wg.Wait()
			if p != nil {
				panic(p)
			}
			return b1
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduce receives a range, a batch count n, a range reducer function, and
// a join function, divides the range into batches, and invokes the range
// reducer for each of these batches in parallel, covering the half-open
// interval from low to high, including low but excluding high. The results of
// the range reducer invocations are then combined by repeated invocations of
// join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeReduce returns either when all range reducers and joins
// have terminated with a second return value of false; or when one or more
// range or join functions return a second return value of true. In the latter
// case, the first return value of the left-most function that returned true as
// a second return value becomes the final result, without waiting for the other
// range reducers and joins to terminate.
//
// RangeReduce panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduce eventually panics with the left-most
// recovered panic value.
func RangeReduce(
	low, high, n int,
	reduce func(low, high int) (interface{}, bool),
	join func(x, y interface{}) (interface{}, bool),
) (interface{}, bool) {
	var recur func(int, int, int) (interface{}, bool)
	recur = func(low, high, n int) (interface{}, bool) {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right interface{}
			var b0, b1 bool
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right, b1 = recur(mid, high, n-half)
			}()
			left, b0 = recur(low, mid, half)
			if b0 {
				return left, true
			}
			wg.Wait()
			if p != nil {
				panic(p)
			}
			if b1 {
				return right, true
			}
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}
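The batch geometry shared by all the Range* functions above is worth making explicit: batchSize is the ceiling of (high - low) / n, and mid is where the first floor(n / 2) batches end. A small standalone sketch (the helper name `splitRange` is made up here):

```go
package main

import "fmt"

// splitRange reproduces the batch arithmetic used by the Range* functions:
// ((high - low - 1) / n) + 1 is integer ceiling division of the range size
// by n, and mid = low + batchSize*(n/2) is the boundary between the batches
// handed to the left recursion and those handed to the right recursion.
func splitRange(low, high, n int) (batchSize, mid int) {
	batchSize = ((high - low - 1) / n) + 1
	mid = low + batchSize*(n/2)
	return
}

func main() {
	bs, mid := splitRange(0, 10, 4)
	fmt.Println(bs, mid) // prints 3 6: batches of size 3, left half covers [0, 6)
}
```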

// RangeReduceInt receives a range, a batch count n, a range reducer function,
// and a join function, divides the range into batches, and invokes the range
// reducer for each of these batches in parallel, covering the half-open
// interval from low to high, including low but excluding high. The results of
// the range reducer invocations are then combined by repeated invocations of
// join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with 0 <=
// low <= high, and RangeReduceInt returns either when all range reducers and
// joins have terminated with a second return value of false; or when one or
// more range or join functions return a second return value of true. In the
// latter case, the first return value of the left-most function that returned
// true as a second return value becomes the final result, without waiting for
// the other range reducers and joins to terminate.
//
// RangeReduceInt panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduceInt eventually panics with the left-most
// recovered panic value.
func RangeReduceInt(
	low, high, n int,
	reduce func(low, high int) (int, bool),
	join func(x, y int) (int, bool),
) (int, bool) {
	var recur func(int, int, int) (int, bool)
	recur = func(low, high, n int) (int, bool) {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right int
			var b0, b1 bool
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right, b1 = recur(mid, high, n-half)
			}()
			left, b0 = recur(low, mid, half)
			if b0 {
				return left, true
			}
			wg.Wait()
			if p != nil {
				panic(p)
			}
			if b1 {
				return right, true
			}
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}
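The (value, bool) protocol is the heart of these speculative reducers: any reduce or join that returns true as its second value short-circuits the whole computation. The following standalone sketch illustrates only that protocol with a sequential stand-in (this is not the pargo implementation, which runs batches in parallel as shown above), using it to find the index of any negative element:

```go
package main

import "fmt"

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

// rangeReduceInt is a sequential stand-in illustrating the protocol of
// speculative.RangeReduceInt: a reduce or join returning true as its
// second value short-circuits, and its first value becomes the result.
func rangeReduceInt(low, high, n int,
	reduce func(low, high int) (int, bool),
	join func(x, y int) (int, bool),
) (int, bool) {
	batchSize := ((high - low - 1) / n) + 1
	acc, done := reduce(low, minInt(low+batchSize, high))
	if done {
		return acc, true
	}
	for b := low + batchSize; b < high; b += batchSize {
		v, vdone := reduce(b, minInt(b+batchSize, high))
		if vdone {
			return v, true
		}
		if acc, done = join(acc, v); done {
			return acc, true
		}
	}
	return acc, false
}

func main() {
	data := []int{5, 3, -7, 8, 1, -2, 9, 4}
	// search for the index of any negative element; the batch that
	// finds one returns (index, true), which stops the search.
	idx, found := rangeReduceInt(0, len(data), 4,
		func(low, high int) (int, bool) {
			for i := low; i < high; i++ {
				if data[i] < 0 {
					return i, true
				}
			}
			return -1, false
		},
		func(x, y int) (int, bool) { return -1, false },
	)
	fmt.Println(idx, found) // 2 true
}
```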

// RangeReduceFloat64 receives a range, a batch count n, a range reducer
// function, and a join function, divides the range into batches, and invokes
// the range reducer for each of these batches in parallel, covering the
// half-open interval from low to high, including low but excluding high. The
// results of the range reducer invocations are then combined by repeated
// invocations of join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with
// low <= high, and RangeReduceFloat64 returns either when all range reducers
// and joins have terminated with a second return value of false; or when one or
// more range or join functions return a second return value of true. In the
// latter case, the first return value of the left-most function that returned
// true as a second return value becomes the final result, without waiting for
// the other range reducers and joins to terminate.
//
// RangeReduceFloat64 panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduceFloat64 eventually panics with the
// left-most recovered panic value.
func RangeReduceFloat64(
	low, high, n int,
	reduce func(low, high int) (float64, bool),
	join func(x, y float64) (float64, bool),
) (float64, bool) {
	var recur func(int, int, int) (float64, bool)
	recur = func(low, high, n int) (float64, bool) {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right float64
			var b0, b1 bool
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right, b1 = recur(mid, high, n-half)
			}()
			left, b0 = recur(low, mid, half)
			if b0 {
				return left, true
			}
			wg.Wait()
			if p != nil {
				panic(p)
			}
			if b1 {
				return right, true
			}
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}

// RangeReduceString receives a range, a batch count n, a range reducer
// function, and a join function, divides the range into batches, and invokes
// the range reducer for each of these batches in parallel, covering the
// half-open interval from low to high, including low but excluding high. The
// results of the range reducer invocations are then combined by repeated
// invocations of join.
//
// The range is specified by a low and high integer, with low <= high. The
// batches are determined by dividing up the size of the range (high - low) by
// n. If n is 0, a reasonable default is used that takes runtime.GOMAXPROCS(0)
// into account.
//
// The range reducer is invoked for each batch in its own goroutine, with
// low <= high, and RangeReduceString returns either when all range reducers and
// joins have terminated with a second return value of false; or when one or
// more range or join functions return a second return value of true. In the
// latter case, the first return value of the left-most function that returned
// true as a second return value becomes the final result, without waiting for
// the other range reducers and joins to terminate.
//
// RangeReduceString panics if high < low, or if n < 0.
//
// If one or more reducer invocations panic, the corresponding goroutines
// recover the panics, and RangeReduceString eventually panics with the
// left-most recovered panic value.
func RangeReduceString(
	low, high, n int,
	reduce func(low, high int) (string, bool),
	join func(x, y string) (string, bool),
) (string, bool) {
	var recur func(int, int, int) (string, bool)
	recur = func(low, high, n int) (string, bool) {
		switch {
		case n == 1:
			return reduce(low, high)
		case n > 1:
			batchSize := ((high - low - 1) / n) + 1
			half := n / 2
			mid := low + batchSize*half
			if mid >= high {
				return reduce(low, high)
			}
			var left, right string
			var b0, b1 bool
			var p interface{}
			var wg sync.WaitGroup
			wg.Add(1)
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right, b1 = recur(mid, high, n-half)
			}()
			left, b0 = recur(low, mid, half)
			if b0 {
				return left, true
			}
			wg.Wait()
			if p != nil {
				panic(p)
			}
			if b1 {
				return right, true
			}
			return join(left, right)
		default:
			panic(fmt.Sprintf("invalid number of batches: %v", n))
		}
	}
	return recur(low, high, internal.ComputeNofBatches(low, high, n))
}


================================================
FILE: sync/map.go
================================================
// Package sync provides synchronization primitives similar to the sync package
// of Go's standard library, however here with a focus on parallel performance
// rather than concurrency. So far, this package only provides support for a
// parallel map that can be used to some extent as a drop-in replacement for the
// concurrent map of the standard library. For other synchronization
// primitives, such as condition variables, mutual exclusion locks, object
// pools, or atomic memory primitives, please use the standard library.
package sync

import (
	"runtime"
	"sync"

	"github.com/exascience/pargo/internal"

	"github.com/exascience/pargo/parallel"
	"github.com/exascience/pargo/speculative"
)

// A Hasher represents an object that has a hash value, which is needed by Map.
//
// If Go would allow access to the predefined hash functions for Go types, this
// interface would not be needed.
type Hasher interface {
	Hash() uint64
}
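Any key type for this map must therefore supply its own hash. As a sketch, a string-based key can implement Hasher with the standard library's FNV-1a hash (StringKey is a hypothetical type, not part of pargo):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Hasher mirrors the interface required for keys of pargo's sync.Map.
type Hasher interface {
	Hash() uint64
}

// StringKey is a hypothetical key type that satisfies Hasher by
// hashing the string with FNV-1a from the standard library.
type StringKey string

func (s StringKey) Hash() uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	return h.Sum64()
}

func main() {
	k := StringKey("hello")
	// The split a key maps to is its hash modulo the split count.
	fmt.Println(k.Hash() % 8)
}
```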

// A Split is a partial map that belongs to a larger Map, which can be
// individually locked. Its enclosed map can then be individually accessed
// without blocking accesses to other splits.
type Split struct {
	sync.RWMutex
	Map map[interface{}]interface{}
}

// A Map is a parallel map that consists of several split maps that can be
// individually locked and accessed.
//
// The zero Map is not valid.
type Map struct {
	splits []Split
}

// NewMap returns a map with size splits.
//
// If size is <= 0, runtime.GOMAXPROCS(0) is used instead.
func NewMap(size int) *Map {
	if size <= 0 {
		size = runtime.GOMAXPROCS(0)
	}
	splits := make([]Split, size)
	for i := range splits {
		splits[i].Map = make(map[interface{}]interface{})
	}
	return &Map{splits}
}

// Split retrieves the split for a particular key.
//
// The split must be locked/unlocked properly by user programs to safely access
// its contents. In many cases, it is easier to use one of the high-level
// methods, like Load, LoadOrStore, LoadOrCompute, Delete, DeleteOrStore,
// DeleteOrCompute, and Modify, which implicitly take care of proper locking.
func (m *Map) Split(key Hasher) *Split {
	splits := m.splits
	return &splits[key.Hash()%uint64(len(splits))]
}

// Delete deletes the value for a key.
func (m *Map) Delete(key Hasher) {
	split := m.Split(key)
	split.Lock()
	delete(split.Map, key)
	split.Unlock()
}

// Load returns the value stored in the map for a key, or nil if no value is
// present. The ok result indicates whether value was found in the map.
func (m *Map) Load(key Hasher) (value interface{}, ok bool) {
	split := m.Split(key)
	split.RLock()
	value, ok = split.Map[key]
	split.RUnlock()
	return
}

// LoadOrStore returns the existing value for the key if present. Otherwise, it
// stores and returns the given value. The loaded result is true if the value
// was loaded, false if stored.
func (m *Map) LoadOrStore(key Hasher, value interface{}) (actual interface{}, loaded bool) {
	split := m.Split(key)
	split.RLock()
	actual, loaded = split.Map[key]
	split.RUnlock()
	if loaded {
		return
	}
	split.Lock()
	if actual, loaded = split.Map[key]; !loaded {
		actual = value
		split.Map[key] = value
	}
	split.Unlock()
	return
}
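The pattern above (check under the read lock, then re-check under the write lock, because another goroutine may have stored a value in between) can be exercised in a standalone sketch. The `split` type here is a simplified stand-in, not the library's Split:

```go
package main

import (
	"fmt"
	"sync"
)

// split is a simplified stand-in demonstrating the double-checked
// read-lock-then-write-lock pattern used by LoadOrStore.
type split struct {
	sync.RWMutex
	m map[string]int
}

func (s *split) loadOrStore(key string, value int) (actual int, loaded bool) {
	// Fast path: a cheap read-locked lookup.
	s.RLock()
	actual, loaded = s.m[key]
	s.RUnlock()
	if loaded {
		return
	}
	// Slow path: re-check under the write lock, since another
	// goroutine may have stored a value between the two lookups.
	s.Lock()
	if actual, loaded = s.m[key]; !loaded {
		actual = value
		s.m[key] = value
	}
	s.Unlock()
	return
}

func main() {
	s := &split{m: map[string]int{}}
	v, loaded := s.loadOrStore("x", 1)
	fmt.Println(v, loaded) // 1 false
	v, loaded = s.loadOrStore("x", 2)
	fmt.Println(v, loaded) // 1 true
}
```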

// LoadOrCompute returns the existing value for the key if present. Otherwise,
// it calls computer, and then stores and returns the computed value. The loaded
// result is true if the value was loaded, false if stored.
//
// The computer function is invoked either zero times or once. While computer is
// executing no locks related to this map are being held.
//
// The computed value might not be the one stored and returned, since a
// parallel thread may have successfully stored a value for the key in the
// meantime. In that case, the value stored by the parallel thread is returned
// instead.
func (m *Map) LoadOrCompute(key Hasher, computer func() interface{}) (actual interface{}, loaded bool) {
	split := m.Split(key)
	split.RLock()
	actual, loaded = split.Map[key]
	split.RUnlock()
	if loaded {
		return
	}
	value := computer()
	split.Lock()
	if actual, loaded = split.Map[key]; !loaded {
		actual = value
		split.Map[key] = actual
	}
	split.Unlock()
	return
}

// DeleteOrStore deletes and returns the existing value for the key if present.
// Otherwise, it stores and returns the given value. The deleted result is true
// if the value was deleted, false if stored.
func (m *Map) DeleteOrStore(key Hasher, value interface{}) (actual interface{}, deleted bool) {
	split := m.Split(key)
	split.Lock()
	if actual, deleted = split.Map[key]; deleted {
		delete(split.Map, key)
	} else {
		actual = value
		split.Map[key] = value
	}
	split.Unlock()
	return
}

// DeleteOrCompute deletes and returns the existing value for the key if
// present. Otherwise, it calls computer, and then stores and returns the
// computed value. The deleted result is true if the value was deleted, false if
// stored.
//
// The computer function is invoked either zero times or once. While computer is
// executing, a lock is being held on a portion of the map, so the function
// should be brief.
func (m *Map) DeleteOrCompute(key Hasher, computer func() interface{}) (actual interface{}, deleted bool) {
	split := m.Split(key)
	split.Lock()
	if actual, deleted = split.Map[key]; deleted {
		delete(split.Map, key)
	} else {
		actual = computer()
		split.Map[key] = actual
	}
	split.Unlock()
	return
}

// Modify looks up a value for the key if present and passes it to the modifier.
// The ok parameter indicates whether value was found in the map. The
// replacement returned by the modifier is then stored as a value for key in the
// map if storeNotDelete is true, otherwise the value is deleted from the map.
// Modify returns the same results as modifier.
//
// The modifier is invoked exactly once. While modifier is executing, a lock is
// being held on a portion of the map, so the function should be brief.
//
// This is the most general modification function for parallel maps. Other
// functions that modify the map are potentially more efficient, so it is better
// to be more specific if possible.
func (m *Map) Modify(
	key Hasher,
	modifier func(value interface{}, ok bool) (replacement interface{}, storeNotDelete bool),
) (replacement interface{}, storeNotDelete bool) {
	split := m.Split(key)
	split.Lock()
	value, ok := split.Map[key]
	if replacement, storeNotDelete = modifier(value, ok); storeNotDelete {
		split.Map[key] = replacement
	} else {
		delete(split.Map, key)
	}
	split.Unlock()
	return
}
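A common use of the modifier contract is a counter increment: initialize to 1 when the key is absent, otherwise add 1. The sketch below illustrates this on a standalone stand-in type (lockedMap is hypothetical, not the library's Map, and uses a plain Mutex instead of a split):

```go
package main

import (
	"fmt"
	"sync"
)

// lockedMap is a standalone stand-in used only to illustrate the
// modifier contract of Modify.
type lockedMap struct {
	sync.Mutex
	m map[string]int
}

// modify follows the same contract as Map.Modify: the modifier sees
// the current value (and whether it was present) and returns the
// replacement plus a store-not-delete flag, all under the lock.
func (l *lockedMap) modify(key string, modifier func(value int, ok bool) (int, bool)) (int, bool) {
	l.Lock()
	defer l.Unlock()
	value, ok := l.m[key]
	replacement, store := modifier(value, ok)
	if store {
		l.m[key] = replacement
	} else {
		delete(l.m, key)
	}
	return replacement, store
}

func main() {
	l := &lockedMap{m: map[string]int{}}
	// Counter-increment idiom: 1 when absent, value+1 otherwise.
	inc := func(value int, ok bool) (int, bool) {
		if ok {
			return value + 1, true
		}
		return 1, true
	}
	l.modify("hits", inc)
	l.modify("hits", inc)
	fmt.Println(l.m["hits"]) // 2
}
```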

func (split *Split) srange(f func(key, value interface{}) bool) bool {
	split.Lock()
	defer split.Unlock()
	for key, value := range split.Map {
		if !f(key, value) {
			return false
		}
	}
	return true
}

// Range calls f sequentially for each key and value present in the map. If f
// returns false, Range stops the iteration.
//
// While iterating through a split of m, Range holds the corresponding lock.
//
// Range does not necessarily correspond to any consistent snapshot of the Map's
// contents: no key will be visited more than once, but if the value for any key
// is stored or deleted concurrently, Range may reflect any mapping for that key
// from any point during the Range call.
func (m *Map) Range(f func(key, value interface{}) bool) {
	splits := m.splits
	for i := 0; i < len(splits); i++ {
		if !splits[i].srange(f) {
			return
		}
	}
}

func (split *Split) parallelRange(f func(key, value interface{})) {
	split.Lock()
	defer split.Unlock()
	for key, value := range split.Map {
		f(key, value)
	}
}

// ParallelRange calls f in parallel for each key and value present in the map.
//
// While iterating through a split of m, ParallelRange holds the corresponding
// lock.
//
// ParallelRange does not necessarily correspond to any consistent snapshot of
// the Map's contents: no key will be visited more than once, but if the value
// for any key is stored or deleted concurrently, ParallelRange may reflect any
// mapping for that key from any point during the ParallelRange call.
func (m *Map) ParallelRange(f func(key, value interface{})) {
	splits := m.splits
	parallel.Range(0, len(splits), 0, func(low, high int) {
		for i := low; i < high; i++ {
			splits[i].parallelRange(f)
		}
	})
}

// SpeculativeRange calls f in parallel for each key and value present in the
// map. If f returns false, SpeculativeRange stops the iteration, and returns
// without waiting for the other goroutines that it invoked to terminate.
//
// While iterating through a split of m, SpeculativeRange holds the
// corresponding lock.
//
// SpeculativeRange is useful as an alternative to ParallelRange in cases where
// ParallelRange tends to use computational resources for too long when false is
// a common and/or early return value for f. On the other hand, SpeculativeRange
// adds overhead, so for cases where false is an uncommon and/or late return
// value for f, it may be more efficient to use ParallelRange.
//
// SpeculativeRange does not necessarily correspond to any consistent snapshot
// of the Map's contents: no key will be visited more than once, but if the
// value for any key is stored or deleted concurrently, SpeculativeRange may
// reflect any mapping for that key from any point during the SpeculativeRange
// call.
func (m *Map) SpeculativeRange(f func(key, value interface{}) bool) {
	splits := m.splits
	speculative.Range(0, len(splits), 0, func(low, high int) bool {
		for i := low; i < high; i++ {
			if !splits[i].srange(f) {
				return false
			}
		}
		return true
	})
}

func (split *Split) predicate(p func(map[interface{}]interface{}) bool) bool {
	split.Lock()
	defer split.Unlock()
	return p(split.Map)
}

// And calls predicate for every split of m sequentially. If any predicate
// invocation returns false, And immediately terminates and also returns false.
// Otherwise, And returns true.
//
// While predicate is executed on a split of m, And holds the corresponding
// lock.
//
// And does not necessarily correspond to any consistent snapshot of the Map's
// contents: no split will be visited more than once, but if the value for any
// key is stored or deleted concurrently, And may reflect any mapping for that
// key from any point during the And call.
func (m *Map) And(predicate func(map[interface{}]interface{}) bool) bool {
	splits := m.splits
	for i := 0; i < len(splits); i++ {
		if ok := splits[i].predicate(predicate); !ok {
			return false
		}
	}
	return true
}
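The short-circuiting shape of And (and, dually, Or) is easy to see in a standalone sketch over plain split maps (allSplits is a hypothetical helper, not part of pargo, and omits the per-split locking):

```go
package main

import "fmt"

// allSplits sketches the And pattern: evaluate a predicate per split,
// short-circuiting on the first false. The real And additionally locks
// each split while its predicate runs.
func allSplits(splits []map[string]int, pred func(map[string]int) bool) bool {
	for _, s := range splits {
		if !pred(s) {
			return false
		}
	}
	return true
}

func main() {
	splits := []map[string]int{{}, {"x": 1}, {}}
	isEmpty := func(m map[string]int) bool { return len(m) == 0 }
	// The second split is non-empty, so the conjunction fails there.
	fmt.Println(allSplits(splits, isEmpty)) // false
}
```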

// ParallelAnd calls predicate for every split of m in parallel. The results of
// the predicate invocations are then combined with the && operator.
//
// ParallelAnd returns only when all goroutines it spawns have terminated.
//
// While predicate is executed on a split of m, ParallelAnd holds the
// corresponding lock.
//
// If one or more predicate invocations panic, the corresponding goroutines
// recover the panics, and ParallelAnd eventually panics with the left-most
// recovered panic value.
//
// ParallelAnd does not necessarily correspond to any consistent snapshot of the
// Map's contents: no split will be visited more than once, but if the value for
// any key is stored or deleted concurrently, ParallelAnd may reflect any
// mapping for that key from any point during the ParallelAnd call.
func (m *Map) ParallelAnd(predicate func(map[interface{}]interface{}) bool) bool {
	splits := m.splits
	return parallel.RangeAnd(0, len(splits), 0, func(low, high int) bool {
		for i := low; i < high; i++ {
			if ok := splits[i].predicate(predicate); !ok {
				return false
			}
		}
		return true
	})
}

// SpeculativeAnd calls predicate for every split of m in parallel.
// SpeculativeAnd returns true if all predicate invocations return true; or
// SpeculativeAnd returns false when at least one of them returns false, without
// waiting for the other predicates to terminate.
//
// While predicate is executed on a split of m, SpeculativeAnd holds the
// corresponding lock.
//
// If one or more predicate invocations panic, the corresponding goroutines
// recover the panics, and SpeculativeAnd eventually panics with the left-most
// recovered panic value.
//
// SpeculativeAnd does not necessarily correspond to any consistent snapshot of
// the Map's contents: no split will be visited more than once, but if the value
// for any key is stored or deleted concurrently, SpeculativeAnd may reflect any
// mapping for that key from any point during the SpeculativeAnd call.
func (m *Map) SpeculativeAnd(predicate func(map[interface{}]interface{}) bool) bool {
	splits := m.splits
	return speculative.RangeAnd(0, len(splits), 0, func(low, high int) bool {
		for i := low; i < high; i++ {
			if ok := splits[i].predicate(predicate); !ok {
				return false
			}
		}
		return true
	})
}

// Or calls predicate for every split of m sequentially. If any predicate
// invocation returns true, Or immediately terminates and also returns true.
// Otherwise, Or returns false.
//
// While predicate is executed on a split of m, Or holds the corresponding lock.
//
// Or does not necessarily correspond to any consistent snapshot of the Map's
// contents: no split will be visited more than once, but if the value for any
// key is stored or deleted concurrently, Or may reflect any mapping for that
// key from any point during the Or call.
func (m *Map) Or(predicate func(map[interface{}]interface{}) bool) bool {
	splits := m.splits
	for i := 0; i < len(splits); i++ {
		if ok := splits[i].predicate(predicate); ok {
			return true
		}
	}
	return false
}

// ParallelOr calls predicate for every split of m in parallel. The results of
// the predicate invocations are then combined with the || operator.
//
// ParallelOr returns only when all goroutines it spawns have terminated.
//
// While predicate is executed on a split of m, ParallelOr holds the
// corresponding lock.
//
// If one or more predicate invocations panic, the corresponding goroutines
// recover the panics, and ParallelOr eventually panics with the left-most
// recovered panic value.
//
// ParallelOr does not necessarily correspond to any consistent snapshot of the
// Map's contents: no split will be visited more than once, but if the value for
// any key is stored or deleted concurrently, ParallelOr may reflect any mapping
// for that key from any point during the ParallelOr call.
func (m *Map) ParallelOr(predicate func(map[interface{}]interface{}) bool) bool {
	splits := m.splits
	return parallel.RangeOr(0, len(splits), 0, func(low, high int) bool {
		for i := low; i < high; i++ {
			if ok := splits[i].predicate(predicate); ok {
				return true
			}
		}
		return false
	})
}

// SpeculativeOr calls predicate for every split of m in parallel. SpeculativeOr
// returns false if all predicate invocations return false; or SpeculativeOr
// returns true when at least one of them returns true, without waiting for the
// other predicates to terminate.
//
// While predicate is executed on a split of m, SpeculativeOr holds the
// corresponding lock.
//
// If one or more predicate invocations panic, the corresponding goroutines
// recover the panics, and SpeculativeOr eventually panics with the left-most
// recovered panic value.
//
// SpeculativeOr does not necessarily correspond to any consistent snapshot of
// the Map's contents: no split will be visited more than once, but if the value
// for any key is stored or deleted concurrently, SpeculativeOr may reflect any
// mapping for that key from any point during the SpeculativeOr call.
func (m *Map) SpeculativeOr(predicate func(map[interface{}]interface{}) bool) bool {
	splits := m.splits
	return speculative.RangeOr(0, len(splits), 0, func(low, high int) bool {
		for i := low; i < high; i++ {
			if ok := splits[i].predicate(predicate); ok {
				return true
			}
		}
		return false
	})
}

func (split *Split) reduce(r func(map[interface{}]interface{}) interface{}) interface{} {
	split.Lock()
	defer split.Unlock()
	return r(split.Map)
}

// Reduce calls reduce for every split of m sequentially. The results of the
// reduce invocations are then combined by repeated invocations of the join
// function.
//
// While reduce is executed on a split of m, Reduce holds the corresponding
// lock.
//
// Reduce does not necessarily correspond to any consistent snapshot of the
// Map's contents: no split will be visited more than once, but if the value for
// any key is stored or deleted concurrently, Reduce may reflect any mapping for
// that key from any point during the Reduce call.
func (m *Map) Reduce(
	reduce func(map[interface{}]interface{}) interface{},
	join func(x, y interface{}) interface{},
) interface{} {
	splits := m.splits
	// NewMap ensures that len(splits) > 0
	result := splits[0].reduce(reduce)
	for i := 1; i < len(splits); i++ {
		result = join(result, splits[i].reduce(reduce))
	}
	return result
}
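A typical reduce/join pair is counting the total number of entries across all splits: reduce each split to its length, join by addition. The sketch below shows that shape over plain maps (totalSize is a hypothetical helper, not part of pargo, and omits the per-split locking):

```go
package main

import "fmt"

// totalSize applies the sequential Reduce pattern over a slice of
// split maps: reduce each split to its length, then join the partial
// results by addition.
func totalSize(splits []map[string]int) int {
	reduce := func(m map[string]int) int { return len(m) }
	join := func(x, y int) int { return x + y }
	result := reduce(splits[0])
	for _, s := range splits[1:] {
		result = join(result, reduce(s))
	}
	return result
}

func main() {
	splits := []map[string]int{{"a": 1, "b": 2}, {"c": 3}, {}}
	fmt.Println(totalSize(splits)) // 3
}
```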

func (split *Split) reduceFloat64(r func(map[interface{}]interface{}) float64) float64 {
	split.Lock()
	defer split.Unlock()
	return r(split.Map)
}

// ReduceFloat64 calls reduce for every split of m sequentially. The results of
// the reduce invocations are then combined by repeated invocations of the join
// function.
//
// While reduce is executed on a split of m, ReduceFloat64 holds the
// corresponding lock.
//
// ReduceFloat64 does not necessarily correspond to any consistent snapshot of
// the Map's contents: no split will be visited more than once, but if the value
// for any key is stored or deleted concurrently, ReduceFloat64 may reflect any
// mapping for that key from any point during the ReduceFloat64 call.
func (m *Map) ReduceFloat64(
	reduce func(map[interface{}]interface{}) float64,
	join func(x, y float64) float64,
) float64 {
	splits := m.splits
	// NewMap ensures that len(splits) > 0
	result := splits[0].reduceFloat64(reduce)
	for i := 1; i < len(splits); i++ {
		result = join(result, splits[i].reduceFloat64(reduce))
	}
	return result
}

// ReduceFloat64Sum calls reduce for every split of m sequentially. The results
// of the reduce invocations are then added together.
//
// While reduce is executed on a split of m, ReduceFloat64Sum holds the
// corresponding lock.
//
// ReduceFloat64Sum does not necessarily correspond to any consistent snapshot
// of the Map's contents: no split will be visited more than once, but if the
// value for any key is stored or deleted concurrently, ReduceFloat64Sum may
// reflect any mapping for that key from any point during the ReduceFloat64Sum
// call.
func (m *Map) ReduceFloat64Sum(reduce func(map[interface{}]interface{}) float64) float64 {
	result := float64(0)
	splits := m.splits
	for i := 0; i < len(splits); i++ {
		result += splits[i].reduceFloat64(reduce)
	}
	return result
}

// ReduceFloat64Product calls reduce for every split of m sequentially. The
// results of the reduce invocations are then multiplied with each other.
//
// While reduce is executed on a split of m, ReduceFloat64Product holds the
// corresponding lock.
//
// ReduceFloat64Product does not necessarily correspond to any consistent
// snapshot of the Map's contents: no split will be visited more than once, but
// if the value for any key is stored or deleted concurrently,
// ReduceFloat64Product may reflect any mapping for that key from any point
// during the ReduceFloat64Product call.
func (m *Map) ReduceFloat64Product(reduce func(map[interface{}]interface{}) float64) float64 {
	result := float64(1)
	splits := m.splits
	for i := 0; i < len(splits); i++ {
		result *= splits[i].reduceFloat64(reduce)
	}
	return result
}

func (split *Split) reduceInt(r func(map[interface{}]interface{}) int) int {
	split.Lock()
	defer split.Unlock()
	return r(split.Map)
}

// ReduceInt calls reduce for every split of m sequentially. The results of the
// reduce invocations are then combined by repeated invocations of the join
// function.
//
// While reduce is executed on a split of m, ReduceInt holds the corresponding
// lock.
//
// ReduceInt does not necessarily correspond to any consistent snapshot of the
// Map's contents: no split will be visited more than once, but if the value for
// any key is stored or deleted concurrently, ReduceInt may reflect any mapping
// for that key from any point during the ReduceInt call.
func (m *Map) ReduceInt(
	reduce func(map[interface{}]interface{}) int,
	join func(x, y int) int,
) int {
	splits := m.splits
	// NewMap ensures that len(splits) > 0
	result := splits[0].reduceInt(reduce)
	for i := 1; i < len(splits); i++ {
		result = join(result, splits[i].reduceInt(reduce))
	}
	return result
}

// ReduceIntSum calls reduce for every split of m sequentially. The results of
// the reduce invocations are then added together.
//
// While reduce is executed on a split of m, ReduceIntSum holds the
// corresponding lock.
//
// ReduceIntSum does not necessarily correspond to any consistent snapshot of
// the Map's contents: no split will be visited more than once, but if the value
// for any key is stored or deleted concurrently, ReduceIntSum may reflect any
// mapping for that key from any point during the ReduceIntSum call.
func (m *Map) ReduceIntSum(reduce func(map[interface{}]interface{}) int) int {
	result := 0
	splits := m.splits
	for i := 0; i < len(splits); i++ {
		result += splits[i].reduceInt(reduce)
	}
	return result
}

// ReduceIntProduct calls reduce for every split of m sequentially. The results
// of the reduce invocations are then multiplied with each other.
//
// While reduce is executed on a split of m, ReduceIntProduct holds the
// corresponding lock.
//
// ReduceIntProduct does not necessarily correspond to any consistent snapshot
// of the Map's contents: no split will be visited more than once, but if the
// value for any key is stored or deleted concurrently, ReduceIntProduct may
// reflect any mapping for that key from any point during the ReduceIntProduct
// call.
func (m *Map) ReduceIntProduct(reduce func(map[interface{}]interface{}) int) int {
	result := 1
	splits := m.splits
	for i := 0; i < len(splits); i++ {
		result *= splits[i].reduceInt(reduce)
	}
	return result
}

func (split *Split) reduceString(r func(map[interface{}]interface{}) string) string {
	split.Lock()
	defer split.Unlock()
	return r(split.Map)
}

// ReduceString calls reduce for every split of m sequentially. The results of
// the reduce invocations are then combined by repeated invocations of the join
// function.
//
// While reduce is executed on a split of m, ReduceString holds the
// corresponding lock.
//
// ReduceString does not necessarily correspond to any consistent snapshot of
// the Map's contents: no split will be visited more than once, but if the value
// for any key is stored or deleted concurrently, ReduceString may reflect any
// mapping for that key from any point during the ReduceString call.
func (m *Map) ReduceString(
	reduce func(map[interface{}]interface{}) string,
	join func(x, y string) string,
) string {
	splits := m.splits
	// NewMap ensures that len(splits) > 0
	result := splits[0].reduceString(reduce)
	for i := 1; i < len(splits); i++ {
		result = join(result, splits[i].reduceString(reduce))
	}
	return result
}

// ReduceStringSum calls reduce for every split of m sequentially. The results
// of the reduce invocations are then concatenated with each other.
//
// While reduce is executed on a split of m, ReduceStringSum holds the
// corresponding lock.
//
// ReduceStringSum does not necessarily correspond to any consistent snapshot of
// the Map's contents: no split will be visited more than once, but if the value
// for any key is stored or deleted concurrently, ReduceStringSum may reflect
// any mapping for that key from any point during the ReduceStringSum call.
func (m *Map) ReduceStringSum(reduce func(map[interface{}]interface{}) string) string {
	result := ""
	splits := m.splits
	for i := 0; i < len(splits); i++ {
		result += splits[i].reduceString(reduce)
	}
	return result
}

// ParallelReduce calls reduce for every split of m in parallel. The results of
// the reduce invocations are then combined by repeated invocations of the join
// function.
//
// ParallelReduce returns only when all goroutines it spawns have terminated.
//
// While reduce is executed on a split of m, ParallelReduce holds the
// corresponding lock.
//
// If one or more reduce invocations panic, the corresponding goroutines recover
// the panics, and ParallelReduce eventually panics with the left-most recovered
// panic value.
//
// ParallelReduce does not necessarily correspond to any consistent snapshot of
// the Map's contents: no split will be visited more than once, but if the value
// for any key is stored or deleted concurrently, ParallelReduce may reflect any
// mapping for that key from any point during the ParallelReduce call.
func (m *Map) ParallelReduce(
	reduce func(map[interface{}]interface{}) interface{},
	join func(x, y interface{}) interface{},
) interface{} {
	var recur func(splits []Split) interface{}
	recur = func(splits []Split) interface{} {
		if len(splits) < 2 {
			// NewMap and case 2 below ensure that len(splits) > 0
			return splits[0].reduce(reduce)
		}
		var left, right interface{}
		var p interface{}
		var wg sync.WaitGroup
		wg.Add(1)
		switch len(splits) {
		case 2:
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = splits[1].reduce(reduce)
			}()
			left = splits[0].reduce(reduce)
		default:
			half := len(splits) / 2
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(splits[half:])
			}()
			left = recur(splits[:half])
		}
		wg.Wait()
		if p != nil {
			panic(p)
		}
		return join(left, right)
	}
	return recur(m.splits)
}

// ParallelReduceFloat64 calls reduce for every split of m in parallel. The
// results of the reduce invocations are then combined by repeated invocations
// of the join function.
//
// ParallelReduceFloat64 returns only when all goroutines it spawns have
// terminated.
//
// While reduce is executed on a split of m, ParallelReduceFloat64 holds the
// corresponding lock.
//
// If one or more reduce invocations panic, the corresponding goroutines recover
// the panics, and ParallelReduceFloat64 eventually panics with the left-most
// recovered panic value.
//
// ParallelReduceFloat64 does not necessarily correspond to any consistent
// snapshot of the Map's contents: no split will be visited more than once, but
// if the value for any key is stored or deleted concurrently,
// ParallelReduceFloat64 may reflect any mapping for that key from any point
// during the ParallelReduceFloat64 call.
func (m *Map) ParallelReduceFloat64(
	reduce func(map[interface{}]interface{}) float64,
	join func(x, y float64) float64,
) float64 {
	var recur func(splits []Split) float64
	recur = func(splits []Split) float64 {
		if len(splits) < 2 {
			// NewMap and case 2 below ensure that len(splits) > 0
			return splits[0].reduceFloat64(reduce)
		}
		var left, right float64
		var p interface{}
		var wg sync.WaitGroup
		wg.Add(1)
		switch len(splits) {
		case 2:
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = splits[1].reduceFloat64(reduce)
			}()
			left = splits[0].reduceFloat64(reduce)
		default:
			half := len(splits) / 2
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(splits[half:])
			}()
			left = recur(splits[:half])
		}
		wg.Wait()
		if p != nil {
			panic(p)
		}
		return join(left, right)
	}
	return recur(m.splits)
}
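Because the halving is tree-shaped, join is applied in a fixed nesting that generally differs from a flat left-to-right fold, so join should be associative. A standalone sketch (stand-in values playing the role of already-reduced splits, not pargo API) makes the nesting visible with a deliberately non-associative join:

```go
package main

import (
	"fmt"
	"sync"
)

// treeJoin combines stand-in split results the way ParallelReduceFloat64's
// recur does: right half in a fresh goroutine, then join(left, right) at
// every level. The tree shape is deterministic for a given length.
func treeJoin(splits []float64, join func(x, y float64) float64) float64 {
	if len(splits) < 2 {
		return splits[0]
	}
	var left, right float64
	var wg sync.WaitGroup
	wg.Add(1)
	half := len(splits) / 2
	go func() {
		defer wg.Done()
		right = treeJoin(splits[half:], join)
	}()
	left = treeJoin(splits[:half], join)
	wg.Wait()
	return join(left, right)
}

func main() {
	sub := func(x, y float64) float64 { return x - y }
	// Three splits nest as 1 - (2 - 3) = 2, not the flat fold (1 - 2) - 3 = -4:
	fmt.Println(treeJoin([]float64{1, 2, 3}, sub)) // prints 2
}
```

With an associative join such as `+` or `math.Max`, the nesting is irrelevant and the result matches a sequential reduction.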

// ParallelReduceFloat64Sum calls reduce for every split of m in parallel. The
// results of the reduce invocations are then added together.
//
// ParallelReduceFloat64Sum returns only when all goroutines it spawns have
// terminated.
//
// While reduce is executed on a split of m, ParallelReduceFloat64Sum holds the
// corresponding lock.
//
// If one or more reduce invocations panic, the corresponding goroutines recover
// the panics, and ParallelReduceFloat64Sum eventually panics with the left-most
// recovered panic value.
//
// ParallelReduceFloat64Sum does not necessarily correspond to any consistent
// snapshot of the Map's contents: no split will be visited more than once, but
// if the value for any key is stored or deleted concurrently,
// ParallelReduceFloat64Sum may reflect any mapping for that key from any point
// during the ParallelReduceFloat64Sum call.
func (m *Map) ParallelReduceFloat64Sum(reduce func(map[interface{}]interface{}) float64) float64 {
	var recur func(splits []Split) float64
	recur = func(splits []Split) float64 {
		if len(splits) < 2 {
			// NewMap and case 2 below ensure that len(splits) > 0
			return splits[0].reduceFloat64(reduce)
		}
		var left, right float64
		var p interface{}
		var wg sync.WaitGroup
		wg.Add(1)
		switch len(splits) {
		case 2:
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = splits[1].reduceFloat64(reduce)
			}()
			left = splits[0].reduceFloat64(reduce)
		default:
			half := len(splits) / 2
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(splits[half:])
			}()
			left = recur(splits[:half])
		}
		wg.Wait()
		if p != nil {
			panic(p)
		}
		return left + right
	}
	return recur(m.splits)
}

// ParallelReduceFloat64Product calls reduce for every split of m in parallel.
// The results of the reduce invocations are then multiplied together.
//
// ParallelReduceFloat64Product returns only when all goroutines it spawns have
// terminated.
//
// While reduce is executed on a split of m, ParallelReduceFloat64Product holds
// the corresponding lock.
//
// If one or more reduce invocations panic, the corresponding goroutines recover
// the panics, and ParallelReduceFloat64Product eventually panics with the
// left-most recovered panic value.
//
// ParallelReduceFloat64Product does not necessarily correspond to any
// consistent snapshot of the Map's contents: no split will be visited more than
// once, but if the value for any key is stored or deleted concurrently,
// ParallelReduceFloat64Product may reflect any mapping for that key from any
// point during the ParallelReduceFloat64Product call.
func (m *Map) ParallelReduceFloat64Product(reduce func(map[interface{}]interface{}) float64) float64 {
	var recur func(splits []Split) float64
	recur = func(splits []Split) float64 {
		if len(splits) < 2 {
			// NewMap and case 2 below ensure that len(splits) > 0
			return splits[0].reduceFloat64(reduce)
		}
		var left, right float64
		var p interface{}
		var wg sync.WaitGroup
		wg.Add(1)
		switch len(splits) {
		case 2:
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = splits[1].reduceFloat64(reduce)
			}()
			left = splits[0].reduceFloat64(reduce)
		default:
			half := len(splits) / 2
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(splits[half:])
			}()
			left = recur(splits[:half])
		}
		wg.Wait()
		if p != nil {
			panic(p)
		}
		return left * right
	}
	return recur(m.splits)
}

// ParallelReduceInt calls reduce for every split of m in parallel. The results
// of the reduce invocations are then combined by repeated invocations of the
// join function.
//
// ParallelReduceInt returns only when all goroutines it spawns have
// terminated.
//
// While reduce is executed on a split of m, ParallelReduceInt holds the
// corresponding lock.
//
// If one or more reduce invocations panic, the corresponding goroutines recover
// the panics, and ParallelReduceInt eventually panics with the left-most
// recovered panic value.
//
// ParallelReduceInt does not necessarily correspond to any consistent snapshot
// of the Map's contents: no split will be visited more than once, but if the
// value for any key is stored or deleted concurrently, ParallelReduceInt may
// reflect any mapping for that key from any point during the ParallelReduceInt
// call.
func (m *Map) ParallelReduceInt(
	reduce func(map[interface{}]interface{}) int,
	join func(x, y int) int,
) int {
	var recur func(splits []Split) int
	recur = func(splits []Split) int {
		if len(splits) < 2 {
			// NewMap and case 2 below ensure that len(splits) > 0
			return splits[0].reduceInt(reduce)
		}
		var left, right int
		var p interface{}
		var wg sync.WaitGroup
		wg.Add(1)
		switch len(splits) {
		case 2:
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = splits[1].reduceInt(reduce)
			}()
			left = splits[0].reduceInt(reduce)
		default:
			half := len(splits) / 2
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(splits[half:])
			}()
			left = recur(splits[:half])
		}
		wg.Wait()
		if p != nil {
			panic(p)
		}
		return join(left, right)
	}
	return recur(m.splits)
}

// ParallelReduceIntSum calls reduce for every split of m in parallel. The
// results of the reduce invocations are then added together.
//
// ParallelReduceIntSum returns only when all goroutines it spawns have
// terminated.
//
// While reduce is executed on a split of m, ParallelReduceIntSum holds the
// corresponding lock.
//
// If one or more reduce invocations panic, the corresponding goroutines recover
// the panics, and ParallelReduceIntSum eventually panics with the left-most
// recovered panic value.
//
// ParallelReduceIntSum does not necessarily correspond to any consistent
// snapshot of the Map's contents: no split will be visited more than once, but
// if the value for any key is stored or deleted concurrently,
// ParallelReduceIntSum may reflect any mapping for that key from any point
// during the ParallelReduceIntSum call.
func (m *Map) ParallelReduceIntSum(reduce func(map[interface{}]interface{}) int) int {
	var recur func(splits []Split) int
	recur = func(splits []Split) int {
		if len(splits) < 2 {
			// NewMap and case 2 below ensure that len(splits) > 0
			return splits[0].reduceInt(reduce)
		}
		var left, right int
		var p interface{}
		var wg sync.WaitGroup
		wg.Add(1)
		switch len(splits) {
		case 2:
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = splits[1].reduceInt(reduce)
			}()
			left = splits[0].reduceInt(reduce)
		default:
			half := len(splits) / 2
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(splits[half:])
			}()
			left = recur(splits[:half])
		}
		wg.Wait()
		if p != nil {
			panic(p)
		}
		return left + right
	}
	return recur(m.splits)
}
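One natural use of ParallelReduceIntSum is counting all entries, with a reduce function along the lines of `func(s map[interface{}]interface{}) int { return len(s) }`. A self-contained analog of that pattern, with plain maps standing in for the Map's splits (not the pargo API itself):

```go
package main

import (
	"fmt"
	"sync"
)

// countEntries mirrors ParallelReduceIntSum with a reduce function that
// returns len(split): each recursion level counts its right half in a
// goroutine and the partial counts are added together. Like the original,
// it assumes len(splits) > 0.
func countEntries(splits []map[string]int) int {
	if len(splits) < 2 {
		return len(splits[0])
	}
	var left, right int
	var wg sync.WaitGroup
	wg.Add(1)
	half := len(splits) / 2
	go func() {
		defer wg.Done()
		right = countEntries(splits[half:])
	}()
	left = countEntries(splits[:half])
	wg.Wait()
	return left + right
}

func main() {
	splits := []map[string]int{
		{"a": 1, "b": 2},
		{"c": 3},
		{"d": 4, "e": 5, "f": 6},
	}
	fmt.Println(countEntries(splits)) // prints 6
}
```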

// ParallelReduceIntProduct calls reduce for every split of m in parallel. The
// results of the reduce invocations are then multiplied together.
//
// ParallelReduceIntProduct returns only when all goroutines it spawns have
// terminated.
//
// While reduce is executed on a split of m, ParallelReduceIntProduct holds the
// corresponding lock.
//
// If one or more reduce invocations panic, the corresponding goroutines recover
// the panics, and ParallelReduceIntProduct eventually panics with the left-most
// recovered panic value.
//
// ParallelReduceIntProduct does not necessarily correspond to any consistent
// snapshot of the Map's contents: no split will be visited more than once, but
// if the value for any key is stored or deleted concurrently,
// ParallelReduceIntProduct may reflect any mapping for that key from any point
// during the ParallelReduceIntProduct call.
func (m *Map) ParallelReduceIntProduct(reduce func(map[interface{}]interface{}) int) int {
	var recur func(splits []Split) int
	recur = func(splits []Split) int {
		if len(splits) < 2 {
			// NewMap and case 2 below ensure that len(splits) > 0
			return splits[0].reduceInt(reduce)
		}
		var left, right int
		var p interface{}
		var wg sync.WaitGroup
		wg.Add(1)
		switch len(splits) {
		case 2:
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = splits[1].reduceInt(reduce)
			}()
			left = splits[0].reduceInt(reduce)
		default:
			half := len(splits) / 2
			go func() {
				defer func() {
					p = internal.WrapPanic(recover())
					wg.Done()
				}()
				right = recur(splits[half:])
			}()
			left = recur(splits[:half])
		}
		wg.Wait()
		if p != nil {
			panic(p)
		}
		return left * right
	}
	return recur(m.splits)
}
SYMBOL INDEX (317 symbols across 21 files)

FILE: internal/internal.go
  function ComputeNofBatches (line 12) | func ComputeNofBatches(low, high, n int) (batches int) {
  type runtimeError (line 34) | type runtimeError struct
    method RuntimeError (line 36) | func (runtimeError) RuntimeError() {}
  function WrapPanic (line 39) | func WrapPanic(p interface{}) interface{} {

FILE: parallel/example_heatdistribution_test.go
  constant ε (line 18) | ε = 0.001
  function maxDiff (line 20) | func maxDiff(m1, m2 *mat.Dense) (result float64) {
  function HeatDistributionStep (line 39) | func HeatDistributionStep(u, v *mat.Dense) {
  function HeatDistributionSimulation (line 56) | func HeatDistributionSimulation(M, N int, init, t, r, b, l float64) {
  function Example_heatDistributionSimulation (line 94) | func Example_heatDistributionSimulation() {

FILE: parallel/parallel.go
  function Reduce (line 23) | func Reduce(
  function ReduceFloat64 (line 71) | func ReduceFloat64(
  function ReduceFloat64Sum (line 119) | func ReduceFloat64Sum(functions ...func() float64) float64 {
  function ReduceFloat64Product (line 167) | func ReduceFloat64Product(functions ...func() float64) float64 {
  function ReduceInt (line 215) | func ReduceInt(
  function ReduceIntSum (line 263) | func ReduceIntSum(functions ...func() int) int {
  function ReduceIntProduct (line 311) | func ReduceIntProduct(functions ...func() int) int {
  function ReduceString (line 359) | func ReduceString(
  function ReduceStringSum (line 407) | func ReduceStringSum(functions ...func() string) string {
  function Do (line 453) | func Do(thunks ...func()) {
  function And (line 499) | func And(predicates ...func() bool) bool {
  function Or (line 546) | func Or(predicates ...func() bool) bool {
  function Range (line 603) | func Range(
  function RangeAnd (line 662) | func RangeAnd(
  function RangeOr (line 721) | func RangeOr(
  function RangeReduce (line 781) | func RangeReduce(
  function RangeReduceInt (line 843) | func RangeReduceInt(
  function RangeReduceIntSum (line 904) | func RangeReduceIntSum(
  function RangeReduceIntProduct (line 964) | func RangeReduceIntProduct(
  function RangeReduceFloat64 (line 1025) | func RangeReduceFloat64(
  function RangeReduceFloat64Sum (line 1086) | func RangeReduceFloat64Sum(
  function RangeReduceFloat64Product (line 1146) | func RangeReduceFloat64Product(
  function RangeReduceString (line 1207) | func RangeReduceString(
  function RangeReduceStringSum (line 1268) | func RangeReduceStringSum(

FILE: parallel/parallel_test.go
  function ExampleDo (line 11) | func ExampleDo() {
  function ExampleRangeReduceIntSum (line 69) | func ExampleRangeReduceIntSum() {
  function numDivisors (line 91) | func numDivisors(n int) int {
  function ExampleRangeReduce (line 106) | func ExampleRangeReduce() {
  function ExampleRangeReduceFloat64Sum (line 132) | func ExampleRangeReduceFloat64Sum() {

FILE: pipeline/example_wordcount_test.go
  type Word (line 15) | type Word
    method Hash (line 17) | func (w Word) Hash() (hash uint64) {
  function WordCount (line 26) | func WordCount(r io.Reader) *sync.Map {
  function Example_wordCount (line 68) | func Example_wordCount() {

FILE: pipeline/filter.go
  type NodeKind (line 4) | type NodeKind
  constant Ordered (line 8) | Ordered NodeKind = iota
  constant Sequential (line 11) | Sequential
  constant Parallel (line 14) | Parallel
  type Filter (line 32) | type Filter
  type Receiver (line 37) | type Receiver
  type Finalizer (line 41) | type Finalizer
  function ComposeFilters (line 49) | func ComposeFilters(pipeline *Pipeline, kind NodeKind, dataSize *int, fi...
  function feed (line 62) | func feed(p *Pipeline, receivers []Receiver, index int, seqNo int, data ...

FILE: pipeline/filters.go
  function NewNode (line 13) | func NewNode(kind NodeKind, filters ...Filter) Node {
  function Identity (line 27) | func Identity(_ *Pipeline, _ NodeKind, _ *int) (_ Receiver, _ Finalizer) {
  function Receive (line 32) | func Receive(receive Receiver) Filter {
  function Finalize (line 41) | func Finalize(finalize Finalizer) Filter {
  function ReceiveAndFinalize (line 50) | func ReceiveAndFinalize(receive Receiver, finalize Finalizer) Filter {
  type Predicate (line 63) | type Predicate
  function Every (line 69) | func Every(result *bool, cancelWhenKnown bool, predicate Predicate) Filt...
  function NotEvery (line 108) | func NotEvery(result *bool, cancelWhenKnown bool, predicate Predicate) F...
  function Some (line 147) | func Some(result *bool, cancelWhenKnown bool, predicate Predicate) Filter {
  function NotAny (line 186) | func NotAny(result *bool, cancelWhenKnown bool, predicate Predicate) Fil...
  function Slice (line 224) | func Slice(result interface{}) Filter {
  function Count (line 257) | func Count(result *int) Filter {
  function Limit (line 292) | func Limit(limit int, cancelWhenReached bool) Node {
  function Skip (line 334) | func Skip(n int) Node {

FILE: pipeline/lparnode.go
  type lparnode (line 8) | type lparnode struct
    method makeOrdered (line 34) | func (node *lparnode) makeOrdered() {
    method TryMerge (line 40) | func (node *lparnode) TryMerge(next Node) bool {
    method Begin (line 51) | func (node *lparnode) Begin(p *Pipeline, index int, dataSize *int) (ke...
    method Feed (line 90) | func (node *lparnode) Feed(p *Pipeline, _ int, seqNo int, data interfa...
    method End (line 120) | func (node *lparnode) End() {
  function LimitedPar (line 24) | func LimitedPar(limit int, filters ...Filter) Node {

FILE: pipeline/parnode.go
  type parnode (line 7) | type parnode struct
    method TryMerge (line 20) | func (node *parnode) TryMerge(next Node) bool {
    method Begin (line 31) | func (node *parnode) Begin(p *Pipeline, _ int, dataSize *int) (keep bo...
    method Feed (line 39) | func (node *parnode) Feed(p *Pipeline, index int, seqNo int, data inte...
    method End (line 53) | func (node *parnode) End() {
  function Par (line 15) | func Par(filters ...Filter) Node {

FILE: pipeline/pipeline.go
  type Node (line 51) | type Node interface
  type Pipeline (line 105) | type Pipeline struct
    method Err (line 121) | func (p *Pipeline) Err() (err error) {
    method SetErr (line 135) | func (p *Pipeline) SetErr(err error) bool {
    method Context (line 148) | func (p *Pipeline) Context() context.Context {
    method Cancel (line 153) | func (p *Pipeline) Cancel() {
    method Source (line 165) | func (p *Pipeline) Source(source interface{}) {
    method Add (line 175) | func (p *Pipeline) Add(nodes ...Node) {
    method NofBatches (line 196) | func (p *Pipeline) NofBatches(n int) (nofBatches int) {
    method SetVariableBatchSize (line 233) | func (p *Pipeline) SetVariableBatchSize(batchInc, maxBatchSize int) {
    method finalizeVariableBatchSize (line 238) | func (p *Pipeline) finalizeVariableBatchSize() {
    method nextBatchSize (line 247) | func (p *Pipeline) nextBatchSize(batchSize int) (result int) {
    method RunWithContext (line 272) | func (p *Pipeline) RunWithContext(ctx context.Context, cancel context....
    method Run (line 359) | func (p *Pipeline) Run() {
    method FeedForward (line 376) | func (p *Pipeline) FeedForward(index int, seqNo int, data interface{}) {
  constant defaultBatchInc (line 211) | defaultBatchInc     = 1024
  constant defaultMaxBatchSize (line 212) | defaultMaxBatchSize = 0x2000000

FILE: pipeline/seqnode.go
  type dataBatch (line 8) | type dataBatch struct
  type seqnode (line 13) | type seqnode struct
    method TryMerge (line 34) | func (node *seqnode) TryMerge(next Node) bool {
    method Begin (line 48) | func (node *seqnode) Begin(p *Pipeline, index int, dataSize *int) (kee...
    method Feed (line 116) | func (node *seqnode) Feed(p *Pipeline, _ int, seqNo int, data interfac...
    method End (line 126) | func (node *seqnode) End() {
  function Ord (line 24) | func Ord(filters ...Filter) Node {
  function Seq (line 29) | func Seq(filters ...Filter) Node {

FILE: pipeline/source.go
  type Source (line 11) | type Source interface
  type sliceSource (line 30) | type sliceSource struct
    method Err (line 41) | func (src *sliceSource) Err() error {
    method Prepare (line 45) | func (src *sliceSource) Prepare(_ context.Context) int {
    method Fetch (line 49) | func (src *sliceSource) Fetch(n int) (fetched int) {
    method Data (line 67) | func (src *sliceSource) Data() interface{} {
  function newSliceSource (line 36) | func newSliceSource(value reflect.Value) *sliceSource {
  type chanSource (line 71) | type chanSource struct
    method Err (line 85) | func (src *chanSource) Err() error {
    method Prepare (line 89) | func (src *chanSource) Prepare(ctx context.Context) (size int) {
    method Fetch (line 94) | func (src *chanSource) Fetch(n int) (fetched int) {
    method Data (line 107) | func (src *chanSource) Data() interface{} {
  function newChanSource (line 77) | func newChanSource(value reflect.Value) *chanSource {
  function reflectSource (line 111) | func reflectSource(source interface{}) Source {
  type Scanner (line 124) | type Scanner struct
    method Prepare (line 136) | func (src *Scanner) Prepare(_ context.Context) (size int) {
    method Fetch (line 141) | func (src *Scanner) Fetch(n int) (fetched int) {
    method Data (line 155) | func (src *Scanner) Data() interface{} {
  function NewScanner (line 131) | func NewScanner(r io.Reader) *Scanner {
  type BytesScanner (line 161) | type BytesScanner struct
    method Prepare (line 173) | func (src *BytesScanner) Prepare(_ context.Context) (size int) {
    method Fetch (line 178) | func (src *BytesScanner) Fetch(n int) (fetched int) {
    method Data (line 192) | func (src *BytesScanner) Data() interface{} {
  function NewBytesScanner (line 168) | func NewBytesScanner(r io.Reader) *BytesScanner {
  type Func (line 198) | type Func struct
    method Err (line 224) | func (f *Func) Err() error {
    method Prepare (line 229) | func (f *Func) Prepare(_ context.Context) int {
    method Fetch (line 234) | func (f *Func) Fetch(size int) (fetched int) {
    method Data (line 240) | func (f *Func) Data() interface{} {
  function NewFunc (line 219) | func NewFunc(size int, fetch func(size int) (data interface{}, fetched i...
  type SingletonChan (line 248) | type SingletonChan struct
    method Err (line 268) | func (src *SingletonChan) Err() error {
    method Prepare (line 273) | func (src *SingletonChan) Prepare(ctx context.Context) (size int) {
    method Fetch (line 279) | func (src *SingletonChan) Fetch(n int) (fetched int) {
    method Data (line 289) | func (src *SingletonChan) Data() interface{} {
  function NewSingletonChan (line 256) | func NewSingletonChan(channel interface{}) *SingletonChan {

FILE: pipeline/strictordnode.go
  type strictordnode (line 7) | type strictordnode struct
    method TryMerge (line 23) | func (node *strictordnode) TryMerge(next Node) bool {
    method Begin (line 41) | func (node *strictordnode) Begin(p *Pipeline, index int, dataSize *int...
    method Feed (line 75) | func (node *strictordnode) Feed(p *Pipeline, _ int, seqNo int, data in...
    method End (line 97) | func (node *strictordnode) End() {
  function StrictOrd (line 18) | func StrictOrd(filters ...Filter) Node {

FILE: sequential/sequential.go
  function Reduce (line 20) | func Reduce(
  function ReduceFloat64 (line 36) | func ReduceFloat64(
  function ReduceFloat64Sum (line 50) | func ReduceFloat64Sum(functions ...func() float64) float64 {
  function ReduceFloat64Product (line 60) | func ReduceFloat64Product(functions ...func() float64) float64 {
  function ReduceInt (line 72) | func ReduceInt(
  function ReduceIntSum (line 86) | func ReduceIntSum(functions ...func() int) int {
  function ReduceIntProduct (line 96) | func ReduceIntProduct(functions ...func() int) int {
  function ReduceString (line 108) | func ReduceString(
  function ReduceStringSum (line 122) | func ReduceStringSum(functions ...func() string) string {
  function Do (line 131) | func Do(thunks ...func()) {
  function And (line 140) | func And(predicates ...func() bool) bool {
  function Or (line 151) | func Or(predicates ...func() bool) bool {
  function Range (line 170) | func Range(
  function RangeAnd (line 209) | func RangeAnd(
  function RangeOr (line 248) | func RangeOr(
  function RangeReduce (line 286) | func RangeReduce(
  function RangeReduceInt (line 326) | func RangeReduceInt(
  function RangeReduceIntSum (line 365) | func RangeReduceIntSum(
  function RangeReduceIntProduct (line 403) | func RangeReduceIntProduct(
  function RangeReduceFloat64 (line 442) | func RangeReduceFloat64(
  function RangeReduceFloat64Sum (line 481) | func RangeReduceFloat64Sum(
  function RangeReduceFloat64Product (line 519) | func RangeReduceFloat64Product(
  function RangeReduceString (line 558) | func RangeReduceString(
  function RangeReduceStringSum (line 597) | func RangeReduceStringSum(

FILE: sort/example_interface_test.go
  type Person (line 15) | type Person struct
    method String (line 20) | func (p Person) String() string {
  type ByAge (line 26) | type ByAge
    method SequentialSort (line 28) | func (a ByAge) SequentialSort(i, j int) {
    method Len (line 34) | func (a ByAge) Len() int           { return len(a) }
    method Swap (line 35) | func (a ByAge) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
    method Less (line 36) | func (a ByAge) Less(i, j int) bool { return a[i].Age < a[j].Age }
    method NewTemp (line 38) | func (a ByAge) NewTemp() sort.StableSorter { return make(ByAge, len(a)) }
    method Assign (line 40) | func (this ByAge) Assign(that sort.StableSorter) func(i, j, len int) {
  function Example (line 49) | func Example() {

FILE: sort/mergesort.go
  constant msortGrainSize (line 9) | msortGrainSize = 0x3000
  type StableSorter (line 14) | type StableSorter interface
  type sorter (line 39) | type sorter struct
  function binarySearchEq (line 44) | func binarySearchEq(x int, T *sorter, p, r int) int {
  function binarySearchNeq (line 60) | func binarySearchNeq(x int, T *sorter, p, r int) int {
  function sMerge (line 76) | func sMerge(T *sorter, p1, r1, p2, r2 int, A *sorter, p3 int) {
  function pMerge (line 106) | func pMerge(T *sorter, p1, r1, p2, r2 int, A *sorter, p3 int) {
  function StableSort (line 147) | func StableSort(data StableSorter) {

FILE: sort/quicksort.go
  constant qsortGrainSize (line 9) | qsortGrainSize = 0x500
  type Sorter (line 14) | type Sorter interface
  function medianOfThree (line 19) | func medianOfThree(data sort.Interface, l, m, r int) int {
  function pseudoMedianOfNine (line 34) | func pseudoMedianOfNine(data sort.Interface, index, size int) int {
  function Sort (line 46) | func Sort(data Sorter) {

FILE: sort/sort.go
  type SequentialSorter (line 15) | type SequentialSorter interface
  constant serialCutoff (line 22) | serialCutoff = 10
  function IsSorted (line 26) | func IsSorted(data sort.Interface) bool {
  type IntSlice (line 63) | type IntSlice
    method SequentialSort (line 66) | func (s IntSlice) SequentialSort(i, j int) {
    method Len (line 70) | func (s IntSlice) Len() int {
    method Less (line 74) | func (s IntSlice) Less(i, j int) bool {
    method Swap (line 78) | func (s IntSlice) Swap(i, j int) {
    method NewTemp (line 83) | func (s IntSlice) NewTemp() StableSorter {
    method Assign (line 88) | func (s IntSlice) Assign(source StableSorter) func(i, j, len int) {
  function IntsAreSorted (line 98) | func IntsAreSorted(a []int) bool {
  type Float64Slice (line 104) | type Float64Slice
    method SequentialSort (line 107) | func (s Float64Slice) SequentialSort(i, j int) {
    method Len (line 111) | func (s Float64Slice) Len() int {
    method Less (line 115) | func (s Float64Slice) Less(i, j int) bool {
    method Swap (line 119) | func (s Float64Slice) Swap(i, j int) {
    method NewTemp (line 124) | func (s Float64Slice) NewTemp() StableSorter {
    method Assign (line 129) | func (s Float64Slice) Assign(source StableSorter) func(i, j, len int) {
  function Float64sAreSorted (line 139) | func Float64sAreSorted(a []float64) bool {
  type StringSlice (line 145) | type StringSlice
    method SequentialSort (line 148) | func (s StringSlice) SequentialSort(i, j int) {
    method Len (line 152) | func (s StringSlice) Len() int {
    method Less (line 156) | func (s StringSlice) Less(i, j int) bool {
    method Swap (line 160) | func (s StringSlice) Swap(i, j int) {
    method NewTemp (line 165) | func (s StringSlice) NewTemp() StableSorter {
    method Assign (line 170) | func (s StringSlice) Assign(source StableSorter) func(i, j, len int) {
  function StringsAreSorted (line 180) | func StringsAreSorted(a []string) bool {

FILE: sort/sort_test.go
  type By (line 11) | type By
    method SequentialSort (line 51) | func (by By) SequentialSort(slice []int) {
    method ParallelStableSort (line 55) | func (by By) ParallelStableSort(slice []int) {
    method ParallelSort (line 59) | func (by By) ParallelSort(slice []int) {
    method IsSorted (line 63) | func (by By) IsSorted(slice []int) bool {
  type IntSliceSorter (line 13) | type IntSliceSorter struct
    method NewTemp (line 19) | func (s IntSliceSorter) NewTemp() StableSorter {
    method Len (line 23) | func (s IntSliceSorter) Len() int {
    method Less (line 27) | func (s IntSliceSorter) Less(i, j int) bool {
    method Swap (line 31) | func (s IntSliceSorter) Swap(i, j int) {
    method Assign (line 35) | func (s IntSliceSorter) Assign(t StableSorter) func(i, j, len int) {
    method SequentialSort (line 44) | func (s IntSliceSorter) SequentialSort(i, j int) {
  function makeRandomSlice (line 67) | func makeRandomSlice(size, limit int) []int {
  function TestSort (line 75) | func TestSort(t *testing.T) {
  function TestIntSort (line 97) | func TestIntSort(t *testing.T) {
  function makeRandomFloat64Slice (line 125) | func makeRandomFloat64Slice(size int) []float64 {
  function TestFloat64Sort (line 133) | func TestFloat64Sort(t *testing.T) {
  function makeRandomStringSlice (line 161) | func makeRandomStringSlice(size, lenlimit int, limit int32) []string {
  function TestStringSort (line 174) | func TestStringSort(t *testing.T) {
  type box (line 203) | type box struct
  type boxSlice (line 207) | type boxSlice
    method NewTemp (line 220) | func (s boxSlice) NewTemp() StableSorter {
    method Len (line 224) | func (s boxSlice) Len() int {
    method Less (line 228) | func (s boxSlice) Less(i, j int) bool {
    method Assign (line 232) | func (s boxSlice) Assign(source StableSorter) func(i, j, len int) {
    method SequentialSort (line 241) | func (s boxSlice) SequentialSort(i, j int) {
  function makeRandomBoxSlice (line 210) | func makeRandomBoxSlice(size int) boxSlice {
  function checkStable (line 248) | func checkStable(b boxSlice) bool {
  function TestStableSort (line 260) | func TestStableSort(t *testing.T) {
  function BenchmarkSort (line 281) | func BenchmarkSort(b *testing.B) {

FILE: speculative/speculative.go
  function Reduce (line 29) | func Reduce(
  function ReduceFloat64 (line 88) | func ReduceFloat64(
  function ReduceInt (line 147) | func ReduceInt(
  function ReduceString (line 206) | func ReduceString(
  function Do (line 260) | func Do(thunks ...func() bool) bool {
  function And (line 311) | func And(predicates ...func() bool) bool {
  function Or (line 361) | func Or(predicates ...func() bool) bool {
  function Range (line 424) | func Range(
  function RangeAnd (line 486) | func RangeAnd(
  function RangeOr (line 548) | func RangeOr(
  function RangeReduce (line 615) | func RangeReduce(
  function RangeReduceInt (line 688) | func RangeReduceInt(
  function RangeReduceFloat64 (line 761) | func RangeReduceFloat64(
  function RangeReduceString (line 834) | func RangeReduceString(

FILE: sync/map.go
  type Hasher (line 24) | type Hasher interface
  type Split (line 31) | type Split struct
    method srange (line 200) | func (split *Split) srange(f func(key, value interface{}) bool) bool {
    method parallelRange (line 229) | func (split *Split) parallelRange(f func(key, value interface{})) {
    method predicate (line 284) | func (split *Split) predicate(p func(map[interface{}]interface{}) bool...
    method reduce (line 443) | func (split *Split) reduce(r func(map[interface{}]interface{}) interfa...
    method reduceFloat64 (line 473) | func (split *Split) reduceFloat64(r func(map[interface{}]interface{}) ...
    method reduceInt (line 543) | func (split *Split) reduceInt(r func(map[interface{}]interface{}) int)...
    method reduceString (line 612) | func (split *Split) reduceString(r func(map[interface{}]interface{}) s...
    method speculativeReduce (line 1203) | func (split *Split) speculativeReduce(r func(map[interface{}]interface...
    method speculativeReduceFloat64 (line 1283) | func (split *Split) speculativeReduceFloat64(r func(map[interface{}]in...
    method speculativeReduceInt (line 1363) | func (split *Split) speculativeReduceInt(r func(map[interface{}]interf...
    method speculativeReduceString (line 1443) | func (split *Split) speculativeReduceString(r func(map[interface{}]int...
  type Map (line 40) | type Map struct
    method Split (line 64) | func (m *Map) Split(key Hasher) *Split {
    method Delete (line 70) | func (m *Map) Delete(key Hasher) {
    method Load (line 79) | func (m *Map) Load(key Hasher) (value interface{}, ok bool) {
    method LoadOrStore (line 90) | func (m *Map) LoadOrStore(key Hasher, value interface{}) (actual inter...
    method LoadOrCompute (line 117) | func (m *Map) LoadOrCompute(key Hasher, computer func() interface{}) (...
    method DeleteOrStore (line 138) | func (m *Map) DeleteOrStore(key Hasher, value interface{}) (actual int...
    method DeleteOrCompute (line 159) | func (m *Map) DeleteOrCompute(key Hasher, computer func() interface{})...
    method Modify (line 184) | func (m *Map) Modify(
    method Range (line 220) | func (m *Map) Range(f func(key, value interface{}) bool) {
    method ParallelRange (line 246) | func (m *Map) ParallelRange(f func(key, value interface{})) {
    method SpeculativeRange (line 272) | func (m *Map) SpeculativeRange(f func(key, value interface{}) bool) {
    method And (line 301) | func (m *Map) And(predicate func(map[interface{}]interface{}) bool) bo...
    method ParallelAnd (line 327) | func (m *Map) ParallelAnd(predicate func(map[interface{}]interface{}) ...
    method SpeculativeAnd (line 355) | func (m *Map) SpeculativeAnd(predicate func(map[interface{}]interface{...
    method Or (line 377) | func (m *Map) Or(predicate func(map[interface{}]interface{}) bool) bool {
    method ParallelOr (line 403) | func (m *Map) ParallelOr(predicate func(map[interface{}]interface{}) b...
    method SpeculativeOr (line 431) | func (m *Map) SpeculativeOr(predicate func(map[interface{}]interface{}...
    method Reduce (line 460) | func (m *Map) Reduce(
    method ReduceFloat64 (line 490) | func (m *Map) ReduceFloat64(
    method ReduceFloat64Sum (line 514) | func (m *Map) ReduceFloat64Sum(reduce func(map[interface{}]interface{}...
    method ReduceFloat64Product (line 534) | func (m *Map) ReduceFloat64Product(reduce func(map[interface{}]interfa...
    method ReduceInt (line 560) | func (m *Map) ReduceInt(
    method ReduceIntSum (line 583) | func (m *Map) ReduceIntSum(reduce func(map[interface{}]interface{}) in...
    method ReduceIntProduct (line 603) | func (m *Map) ReduceIntProduct(reduce func(map[interface{}]interface{}...
    method ReduceString (line 629) | func (m *Map) ReduceString(
    method ReduceStringSum (line 652) | func (m *Map) ReduceStringSum(reduce func(map[interface{}]interface{})...
    method ParallelReduce (line 678) | func (m *Map) ParallelReduce(
    method ParallelReduceFloat64 (line 741) | func (m *Map) ParallelReduceFloat64(
    method ParallelReduceFloat64Sum (line 803) | func (m *Map) ParallelReduceFloat64Sum(reduce func(map[interface{}]int...
    method ParallelReduceFloat64Product (line 862) | func (m *Map) ParallelReduceFloat64Product(reduce func(map[interface{}...
    method ParallelReduceInt (line 919) | func (m *Map) ParallelReduceInt(
    method ParallelReduceIntSum (line 981) | func (m *Map) ParallelReduceIntSum(reduce func(map[interface{}]interfa...
    method ParallelReduceIntProduct (line 1040) | func (m *Map) ParallelReduceIntProduct(reduce func(map[interface{}]int...
    method ParallelReduceString (line 1100) | func (m *Map) ParallelReduceString(
    method ParallelReduceStringSum (line 1162) | func (m *Map) ParallelReduceStringSum(reduce func(map[interface{}]inte...
    method SpeculativeReduce (line 1232) | func (m *Map) SpeculativeReduce(
    method SpeculativeReduceFloat64 (line 1312) | func (m *Map) SpeculativeReduceFloat64(
    method SpeculativeReduceInt (line 1392) | func (m *Map) SpeculativeReduceInt(
    method SpeculativeReduceString (line 1472) | func (m *Map) SpeculativeReduceString(
  function NewMap (line 47) | func NewMap(size int) *Map {
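The index above suggests how pargo's sync.Map works: NewMap(size) allocates a fixed number of splits, a key's hash selects its split, and each split is an independently locked plain map, so goroutines working on different splits do not contend. Below is a minimal, stdlib-only sketch of that split-map idea with a LoadOrCompute and a ReduceIntSum in the spirit of the indexed methods. All names here are hypothetical illustrations, not pargo's actual internals; pargo's real Map uses a Hasher interface and interface{} values.

```go
package main

import (
	"fmt"
	"sync"
)

// split is one shard: a small map guarded by its own lock.
type split struct {
	mu sync.Mutex
	m  map[string]int
}

// ShardedMap distributes keys over independently locked splits,
// sketching the structure the symbol index above describes.
type ShardedMap struct {
	splits []*split
}

// NewShardedMap allocates size splits (cf. NewMap(size) in the index).
func NewShardedMap(size int) *ShardedMap {
	s := &ShardedMap{splits: make([]*split, size)}
	for i := range s.splits {
		s.splits[i] = &split{m: make(map[string]int)}
	}
	return s
}

// splitFor hashes the key (FNV-1a) to pick its split.
func (s *ShardedMap) splitFor(key string) *split {
	var h uint32 = 2166136261
	for i := 0; i < len(key); i++ {
		h ^= uint32(key[i])
		h *= 16777619
	}
	return s.splits[h%uint32(len(s.splits))]
}

// LoadOrCompute returns the stored value for key, computing and storing
// it under the split's lock if absent (cf. Map.LoadOrCompute above).
func (s *ShardedMap) LoadOrCompute(key string, compute func() int) int {
	sp := s.splitFor(key)
	sp.mu.Lock()
	defer sp.mu.Unlock()
	if v, ok := sp.m[key]; ok {
		return v
	}
	v := compute()
	sp.m[key] = v
	return v
}

// ReduceIntSum folds every split's map into one int sum, mirroring the
// sequential ReduceIntSum shape from the index.
func (s *ShardedMap) ReduceIntSum(reduce func(map[string]int) int) int {
	sum := 0
	for _, sp := range s.splits {
		sp.mu.Lock()
		sum += reduce(sp.m)
		sp.mu.Unlock()
	}
	return sum
}

func main() {
	m := NewShardedMap(16)
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// 10 distinct keys, each stored once with value 1.
			m.LoadOrCompute(fmt.Sprintf("k%d", i%10), func() int { return 1 })
		}(i)
	}
	wg.Wait()
	total := m.ReduceIntSum(func(mm map[string]int) int {
		t := 0
		for _, v := range mm {
			t += v
		}
		return t
	})
	fmt.Println(total) // prints 10
}
```

The per-split lock is the design point: unlike a single global mutex, only goroutines hashing to the same split serialize, which is why the indexed Reduce variants can also come in Parallel and Speculative flavors that visit splits concurrently.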
Condensed preview — 27 files, each entry showing path, character count, and a content snippet (the full structured content is 231K chars).
[
  {
    "path": ".gitignore",
    "chars": 275,
    "preview": "# Binaries for programs and plugins\n*.exe\n*.dll\n*.so\n*.dylib\n\n# Test binary, build with `go test -c`\n*.test\n\n# Output of"
  },
  {
    "path": "LICENSE",
    "chars": 1504,
    "preview": "BSD 3-Clause License\n\nCopyright (c) 2017, Imec\nAll rights reserved.\n\nRedistribution and use in source and binary forms, "
  },
  {
    "path": "README.md",
    "chars": 612,
    "preview": "# pargo\n## A library for parallel programming in Go\n\nPackage pargo provides functions and data structures for expressing"
  },
  {
    "path": "doc.go",
    "chars": 1902,
    "preview": "// Package pargo provides functions and data structures for expressing parallel\n// algorithms. While Go is primarily des"
  },
  {
    "path": "go.mod",
    "chars": 79,
    "preview": "module github.com/exascience/pargo\n\ngo 1.14\n\nrequire gonum.org/v1/gonum v0.7.0\n"
  },
  {
    "path": "go.sum",
    "chars": 1873,
    "preview": "github.com/ajstarks/svgo v0.0.0-20180226025133-644b8db467af/go.mod h1:K08gAheRH3/J6wwsYMMT4xOr94bZjxIelGM0+d/wbFw=\ngithu"
  },
  {
    "path": "internal/internal.go",
    "chars": 1107,
    "preview": "package internal\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"runtime\"\n\t\"runtime/debug\"\n)\n\n// ComputeNofBatches divides the size of the "
  },
  {
    "path": "parallel/example_heatdistribution_test.go",
    "chars": 2700,
    "preview": "package parallel_test\n\n// This is a simplified version of a heat distribution simulation, based on an\n// implementation "
  },
  {
    "path": "parallel/parallel.go",
    "chars": 36559,
    "preview": "// Package parallel provides functions for expressing parallel algorithms.\n//\n// See https://github.com/ExaScience/pargo"
  },
  {
    "path": "parallel/parallel_test.go",
    "chars": 2633,
    "preview": "package parallel_test\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"runtime\"\n\n\t\"github.com/exascience/pargo/parallel\"\n)\n\nfunc ExampleDo()"
  },
  {
    "path": "pipeline/example_wordcount_test.go",
    "chars": 1976,
    "preview": "package pipeline_test\n\nimport (\n\t\"bufio\"\n\t\"fmt\"\n\t\"io\"\n\t\"runtime\"\n\t\"strings\"\n\n\t\"github.com/exascience/pargo/pipeline\"\n\t\"g"
  },
  {
    "path": "pipeline/filter.go",
    "chars": 2731,
    "preview": "package pipeline\n\n// A NodeKind reperesents the different kinds of nodes.\ntype NodeKind int\n\nconst (\n\t// Ordered nodes r"
  },
  {
    "path": "pipeline/filters.go",
    "chars": 9912,
    "preview": "package pipeline\n\nimport (\n\t\"errors\"\n\t\"reflect\"\n\t\"sync\"\n\t\"sync/atomic\"\n)\n\n// NewNode creates a node of the given kind, w"
  },
  {
    "path": "pipeline/lparnode.go",
    "chars": 3077,
    "preview": "package pipeline\n\nimport (\n\t\"runtime\"\n\t\"sync\"\n)\n\ntype lparnode struct {\n\tlimit      int\n\tordered    bool\n\tcond       *sy"
  },
  {
    "path": "pipeline/parnode.go",
    "chars": 1482,
    "preview": "package pipeline\n\nimport (\n\t\"sync\"\n)\n\ntype parnode struct {\n\twaitGroup  sync.WaitGroup\n\tfilters    []Filter\n\treceivers  "
  },
  {
    "path": "pipeline/pipeline.go",
    "chars": 14025,
    "preview": "// Package pipeline provides means to construct and execute parallel pipelines.\n//\n// A Pipeline feeds batches of data t"
  },
  {
    "path": "pipeline/seqnode.go",
    "chars": 3089,
    "preview": "package pipeline\n\nimport (\n\t\"sync\"\n)\n\ntype (\n\tdataBatch struct {\n\t\tseqNo int\n\t\tdata  interface{}\n\t}\n\n\tseqnode struct {\n\t"
  },
  {
    "path": "pipeline/source.go",
    "chars": 7705,
    "preview": "package pipeline\n\nimport (\n\t\"bufio\"\n\t\"context\"\n\t\"io\"\n\t\"reflect\"\n)\n\n// A Source represents an object that can generate da"
  },
  {
    "path": "pipeline/strictordnode.go",
    "chars": 2548,
    "preview": "package pipeline\n\nimport (\n\t\"sync\"\n)\n\ntype strictordnode struct {\n\tcond       *sync.Cond\n\tchannel    chan dataBatch\n\twai"
  },
  {
    "path": "sequential/sequential.go",
    "chars": 19755,
    "preview": "// Package sequential provides sequential implementations of the functions\n// provided by the parallel and speculative p"
  },
  {
    "path": "sort/example_interface_test.go",
    "chars": 1690,
    "preview": "// Copyright 2011 The Go Authors. All rights reserved. Use of this source code\n// is governed by a BSD-style license tha"
  },
  {
    "path": "sort/mergesort.go",
    "chars": 4450,
    "preview": "package sort\n\nimport (\n\t\"sync\"\n\n\t\"github.com/exascience/pargo/parallel\"\n)\n\nconst msortGrainSize = 0x3000\n\n// StableSorte"
  },
  {
    "path": "sort/quicksort.go",
    "chars": 1900,
    "preview": "package sort\n\nimport (\n\t\"sort\"\n\n\t\"github.com/exascience/pargo/parallel\"\n)\n\nconst qsortGrainSize = 0x500\n\n// Sorter is a "
  },
  {
    "path": "sort/sort.go",
    "chars": 5163,
    "preview": "// Package sort provides implementations of parallel sorting algorithms.\npackage sort\n\nimport (\n\t\"sort\"\n\t\"sync/atomic\"\n\n"
  },
  {
    "path": "sort/sort_test.go",
    "chars": 7122,
    "preview": "package sort\n\nimport (\n\t\"bytes\"\n\t\"math/rand\"\n\t\"sort\"\n\t\"testing\"\n)\n\ntype (\n\tBy func(i, j int) bool\n\n\tIntSliceSorter struc"
  },
  {
    "path": "speculative/speculative.go",
    "chars": 25922,
    "preview": "// Package speculative provides functions for expressing parallel algorithms,\n// similar to the functions in package par"
  },
  {
    "path": "sync/map.go",
    "chars": 50034,
    "preview": "// Package sync provides synchronization primitives similar to the sync package\n// of Go's standard library, however her"
  }
]
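The previews above describe the parallel package as "functions for expressing parallel algorithms" and show the sort package tuning grain-size constants (msortGrainSize, qsortGrainSize). A minimal, stdlib-only sketch of that recursive fork-join pattern follows: halve the work until it is below the grain size, fork one half into a goroutine, and join with a WaitGroup. This is a hedged illustration of the general technique, not pargo's actual implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum recursively splits the slice until it is at or below the
// grain size, sums the left half in a new goroutine while the current
// goroutine sums the right half, and joins the two partial results.
func parallelSum(xs []int, grain int) int {
	if len(xs) <= grain {
		// Base case: small enough to sum sequentially.
		s := 0
		for _, x := range xs {
			s += x
		}
		return s
	}
	mid := len(xs) / 2
	var left int
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		left = parallelSum(xs[:mid], grain)
	}()
	right := parallelSum(xs[mid:], grain)
	wg.Wait()
	return left + right
}

func main() {
	xs := make([]int, 1000)
	for i := range xs {
		xs[i] = i + 1
	}
	fmt.Println(parallelSum(xs, 64)) // prints 500500 (= 1+2+...+1000)
}
```

The grain-size cutoff is the same trade-off the sort package's constants encode: below it, goroutine spawn and synchronization overhead outweighs the parallelism gained.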

About this extraction

This page contains the full source code of the ExaScience/pargo GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 27 files (206.9 KB), approximately 57.4k tokens, and a symbol index of 317 extracted functions, classes, methods, constants, and types, suitable for any AI tool that accepts text input.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.