Full Code of cnuernber/libpython-clj for AI

Repository: cnuernber/libpython-clj
Branch: master
Commit: 97dff5cdf75c
Files: 101
Total size: 1.6 MB

Directory structure:
gitextract_0j37h__8/

├── .github/
│   ├── FUNDING.yml
│   └── workflows/
│       └── test.yml
├── .gitignore
├── .gitmodules
├── .travis.yml
├── CHANGELOG.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── codegen-test/
│   ├── README.md
│   ├── deps.edn
│   ├── scripts/
│   │   ├── compile
│   │   └── run
│   └── src/
│       ├── code/
│       │   ├── codegen.clj
│       │   └── main.clj
│       └── python/
│           └── numpy.clj
├── deps.edn
├── dockerfiles/
│   ├── CondaDockerfile
│   ├── Py38Dockerfile
│   └── Py39Dockerfile
├── docs/
│   ├── Usage.html
│   ├── css/
│   │   └── default.css
│   ├── embedded.html
│   ├── environments.html
│   ├── highlight/
│   │   └── solarized-light.css
│   ├── index.html
│   ├── js/
│   │   └── page_effects.js
│   ├── libpython-clj.python.html
│   ├── libpython-clj.python.np-array.html
│   ├── libpython-clj.require.html
│   ├── libpython-clj2.codegen.html
│   ├── libpython-clj2.embedded.html
│   ├── libpython-clj2.java-api.html
│   ├── libpython-clj2.python.class.html
│   ├── libpython-clj2.python.html
│   ├── libpython-clj2.python.np-array.html
│   ├── libpython-clj2.require.html
│   ├── new-to-clojure.html
│   ├── scopes-and-gc.html
│   └── slicing.html
├── questions/
│   ├── 32bit.txt
│   ├── 64bit.txt
│   ├── compile32.sh
│   ├── compile64.sh
│   ├── ssizet_size.cpp
│   └── typeobject.cpp
├── resources/
│   └── clj-kondo.exports/
│       └── clj-python/
│           └── libpython-clj/
│               ├── config.edn
│               └── hooks/
│                   └── libpython_clj/
│                       ├── jna/
│                       │   └── base/
│                       │       └── def_pylib_fn.clj
│                       └── require/
│                           └── import_python.clj
├── scripts/
│   ├── build-conda-docker
│   ├── build-py38-docker
│   ├── build-py39-docker
│   ├── conda-repl
│   ├── deploy
│   ├── install
│   ├── py38-repl
│   ├── run-conda-docker
│   ├── run-py38-docker
│   ├── run-py39-docker
│   └── run-tests
├── src/
│   └── libpython_clj2/
│       ├── codegen.clj
│       ├── embedded.clj
│       ├── java_api.clj
│       ├── metadata.clj
│       ├── python/
│       │   ├── base.clj
│       │   ├── bridge_as_jvm.clj
│       │   ├── bridge_as_python.clj
│       │   ├── class.clj
│       │   ├── copy.clj
│       │   ├── dechunk_map.clj
│       │   ├── ffi.clj
│       │   ├── fn.clj
│       │   ├── gc.clj
│       │   ├── info.clj
│       │   ├── io_redirect.clj
│       │   ├── jvm_handle.clj
│       │   ├── np_array.clj
│       │   ├── protocols.clj
│       │   ├── windows.clj
│       │   └── with.clj
│       ├── python.clj
│       ├── require.clj
│       └── sugar.clj
├── test/
│   └── libpython_clj2/
│       ├── classes_test.clj
│       ├── codegen_test.clj
│       ├── ffi_test.clj
│       ├── fncall_test.clj
│       ├── iter_gen_seq_test.clj
│       ├── java_api_test.clj
│       ├── numpy_test.clj
│       ├── python_test.clj
│       ├── require_python_test.clj
│       ├── stress_test.clj
│       └── sugar_test.clj
├── testcode/
│   └── __init__.py
└── topics/
    ├── Usage.md
    ├── embedded.md
    ├── environments.md
    ├── new-to-clojure.md
    ├── scopes-and-gc.md
    └── slicing.md

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/FUNDING.yml
================================================
github: cnuernber


================================================
FILE: .github/workflows/test.yml
================================================
name: test

on:
  push:
    branches:
      - master
    paths-ignore:
      - '**/README.md'
      - '**/CHANGELOG.md'
  pull_request:

jobs:
  unit-test:
    runs-on: ${{matrix.os}}
    strategy:
      fail-fast: false
      matrix:
        os: [macos-latest, ubuntu-latest, windows-latest]
        jdk: [19, 21, 25]
        python-version: ["3.9", "3.11","3.12","3.13","3.14"]

    steps:
      - uses: actions/checkout@v3

      - name: Set up JDK ${{ matrix.jdk }}
        uses: actions/setup-java@v4
        with:
          java-version: ${{ matrix.jdk }}
          distribution: temurin

      - name: Install Clojure
        uses: DeLaGuardo/setup-clojure@11.0
        with:
          cli: 1.11.1.1347

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v6
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install numpy --no-cache-dir

      - name: Run tests (jdk<17)
        if: ${{ matrix.jdk < 17 }}
        run: |
          clojure -M:test
      - name: Run tests (jdk>=17)
        if: ${{ matrix.jdk >= 17 }}
        run: |
          clojure -M:jdk-${{matrix.jdk}}:test


================================================
FILE: .gitignore
================================================
target
/classes
/checkouts
profiles.clj
pom.xml
pom.xml.asc
*.jar
*.class
/.lein-*
.nrepl-port
.hgignore
.hg/
__pycache__
.cpcache/
cpython
a.out
*.pom
*.pom.asc
/.idea
*.iml
.lsp
.clj-kondo
pregen-ffi-test

================================================
FILE: .gitmodules
================================================


================================================
FILE: .travis.yml
================================================
language: clojure
dist: bionic
lein: 2.9.1
sudo: required
before_install:
  - sudo apt-get -y install python3 python3-pip
  - sudo python3 -mpip install setuptools
  - sudo python3 -mpip install numpy
  - curl -O https://download.clojure.org/install/linux-install-1.10.2.796.sh
  - chmod +x linux-install-1.10.2.796.sh
  - sudo ./linux-install-1.10.2.796.sh
  - clojure -Sdescribe
addons:
  apt:
    update: true
install: clojure -Sdescribe
script: clojure -M:test
cache:
  directories:
    - "$HOME/.m2"


================================================
FILE: CHANGELOG.md
================================================
# Time for a ChangeLog!

## 2.026
 * Faster string conversion from c->jvm for large strings.
 * Fix for https://github.com/techascent/tech.ml.dataset/issues/437

## 2.025
 * [Stop using nio-buffer conversion for ->string](https://github.com/clj-python/libpython-clj/blob/master/src/libpython_clj2/python/ffi.clj#L795)
 * [PR - Handle conversion of more types in ->jvm](https://github.com/clj-python/libpython-clj/pull/251)
 * Fix issue with locating python dll on MS-Windows
   [#246](https://github.com/clj-python/libpython-clj/issues/246).

## 2.024
 * large dtype-next/hamf upgrade.
 * fix for call-attr (it was broken when using keywords).

## 2.023
 * pre/post hooks for python initialization

## 2.022
 * Support for JDK-19.  Still uses JNA as the FFI provider but libpython-clj now works
   out of the box when using JDK-19.

## 2.021
 * Add support for a `python.edn` file, see `libpython-clj2.python/initialize!` documentation

## 2.020
 * Better support for running with `-Dlibpython_clj.manual_gil=true` - `with-manual-gil`.
   Addresses [issue 221](https://github.com/clj-python/libpython-clj/issues/221).

## 2.019
 * Upgrade to clojure 1.11 as development version.
 * Upgrade dtype-next to get rid of 1.11 warnings (and for unary min,max).
 * Upgrade jna to latest version for greater x-platform support.

## 2.017
 * Tested AOT and codegen of numpy namespace to ensure it is loadable and works as expected.
 * This fixed an issue with the codegen system.

## 2.016
 * Upgrade dtype-next to latest version.  This version of dtype-next comes with its own
   java-api so you can integrate deeper into the zero-copy pathways.

## 2.015
 * java API has stabilized.
 * [GILLocker](https://clj-python.github.io/libpython-clj/libpython-clj2.java-api.html#var--GILLocker) -
 an auto-closeable pathway so you can use Java's try-with-resources or Clojure's with-open.

## 2.013
 * Lots of changes and releases ironing out a java api with a user.  This is close but not
   completely there yet so expect the java api to change a bit more.
 * [make-fastcallable](https://clj-python.github.io/libpython-clj/libpython-clj2.python.html#var-make-fastcallable)

## 2.000
 * You can now create instance functions that accept both positional and keyword arguments -
   libpython-clj2.python/make-kw-instance-fn.  See test at
   test/libpython-clj2.python.classes-test.
 * latest dtype-next - see documentation for tech.v3.datatype/set-value!.
 * Fix for using ffi on arm64 platforms.


## 2.00-beta-22
 * Fix to PR-166 - strings in docs weren't properly escaped.

## 2.00-beta-20
 * [Issue 167] - `py.` fails in a go-loop.
 * [PR 166](https://github.com/clj-python/libpython-clj/pull/166) - Fix codegen for special double values such
   as NaN and infinity.
 * [PR 165](https://github.com/clj-python/libpython-clj/pull/165) - remove codegen builtins and numpy due to
   codegen pathway not being stable and potential version conflicts between e.g. numpy versions.

## 2.00-beta-19
 * [issue 126](https://github.com/clj-python/libpython-clj/issues/126) - Python maps sometimes fail to destructure.

## 2.00-beta-18
 * Additional fix for 162 to allow booleans to be considered primitives and be output inline.
 * Experimental fix for [issue 164](https://github.com/clj-python/libpython-clj/issues/164) - Unsigned int64 fails with
   `OverflowError: int too big to convert`.

## 2.00-beta-17
 * Almost a fix for [issue 163](https://github.com/clj-python/libpython-clj/issues/163) - codegen fails with JVM primitives.

## 2.00-beta-16
 * Fix for [issue 162](https://github.com/clj-python/libpython-clj/issues/162) - many things failing when using JDK-16.
   Note there is now a :jdk-16 alias in deps.edn that shows how to use libpython-clj with JDK-16.

## 2.00-beta-15
 * Fix for [issue 160](https://github.com/clj-python/libpython-clj/issues/160) - ->python has to realize lazy seqs.
 * Remove camel-snake-kebab as direct dependency; it comes in via dtype-next.

## 2.00-beta-13
 * Update to dtype-next to support graal native static ffi and library generation.
 * Update to dtype-next to have correct definitions of ptr-t, offset-t, and size-t types.

## 2.00-beta-12
 * Small refinements around embedding and documentation.

## 2.00-beta-11
 * Major improvements to get libpython-clj v2 to work as robustly as v1 did across
   several operating systems.
 * First release that includes embedded functionality (!!).  See the embedded topic
   for more details.

## 2.00-beta-8
 * Add back in dynamically searching for libraries based on `java.library.path`.

## 2.00-beta-7
 * Upgrade to dtype-next to ensure that JNA is the preferred FFI framework.
 * Re-introduce java.library.path shenanigans in order to support pyenv type systems.

## 2.00-beta-6
 * Revert initialize! back to v1 optional argument style.

## 2.00-beta-5
 * Fix codox document generation issue.
 * Added a document on environments.

## 2.00-beta-4
 * [static namespace generation](https://clj-python.github.io/libpython-clj/libpython-clj2.codegen.html#var-write-namespace.21) - Generate AOT-safe clojure code that will
 dynamically find the symbol and cache it upon first call.

## 2.00-beta-3
 * Fixes to allow Conda to work - [cnuernber/facial-rec](https://github.com/cnuernber/facial-rec) lives again!!

## 2.00-beta-2
 * Fix for [numpy.all returns false](https://github.com/clj-python/libpython-clj/issues/148)

## 2.00-beta-1
* Major upgrade of tech system - moved to dtype-next.  This release is only compatible with
  tech.ml.dataset versions 5.+.  There are many changes but there should be very few user visible
  ones.  Please see [cnuernber/dtype-next](https://cnuernber.github.io/dtype-next/) for more
  information.  This brings a total refactor removing about half the code, plus 32-bit
  and JDK-16 support.

## 1.45
 * tech.datatype - 5.0 release

## 1.43
 * `tech.datatype` - latest version to work with `tech.ml.dataset`.

## 1.42
 * `tech.datatype` - latest version to work with `tech.ml.dataset`.

## 1.41
 * `tech.datatype` - latest version to work with `tech.ml.dataset`.

## 1.40
* `tech.datatype` fixed bug in argsort.

## 1.39
* `tech.datatype` upgrade to version that supports datetime types.

## 1.38-SNAPSHOT

* `tech.datatype` is upgraded to 5.0 (!!).

## 1.37

* Update to tech.datatype 4.88 - much faster group-by, lots of small improvements.

## 1.37

* Fix for metadata generation of sys module that was failing.  This needs a deeper fix.
* Race condition and stability fix.
* `deps.edn` now supported in parallel with `project.clj`


## 1.36

* clojure.core.async upgrade


## 1.35


* [Examples are now done by gigasquid](https://github.com/gigasquid/libpython-clj-examples)

* [datafy/nav](https://clojure.github.io/clojure/branch-master/clojure.datafy-api.html) are now extensible for custom Python objects.
  Extend `libpython-clj.require/pydafy` and `libpython-clj.require/pynav`
  respectively with the symbol of class you want to extend. See
  respective docstrings for details.

* bugfix -- python.str now loaded by `import-python`


## 1.34
 * Skipped due to build system issues.

## 1.33

* Better [windows anaconda support](https://github.com/cnuernber/libpython-clj/pull/67)
  thanks to [orolle](https://github.com/orolle).

* Moved to PyGILState* functions for GIL management.  This was mainly due to
  [FongHou](https://github.com/cnuernber/libpython-clj/commits?author=FongHou) in
  PRs [here](https://github.com/cnuernber/libpython-clj/pull/64) and
  [here](https://github.com/cnuernber/libpython-clj/pull/65).

* **BREAKING CHANGE** `require-python` now respects prefix lists --
  unfortunately, the previous syntax was incorrect.
  ```clojure
  ;; WRONG (syntax version < 1.33)
  (require-python '(os math))
  ```
  would be equivalent to
  ```clojure
  ;; (do (require-python 'os) (require-python 'math))
  ```
  the correct syntax for this SHOULD have been
  ```clojure
  (require-python 'os 'math)
  ```

  1.33 fixes this mistake, and provides support for prefix lists,
  for example:

  ```clojure
  (require-python
   '[builtins :as python]
   '(builtins
     [list :as python.list]
     [dict :as python.dict]
     [tuple :as python.tuple]
     [set :as python.set]
     [frozenset :as python.frozenset]))
  ```
  (**Note**: this is done for you by the function `libpython-clj.require/import-python`)

  This fix brought to you by [jjtolton](https://github.com/jjtolton).


## 1.32
* DecRef now happens cooperatively in the python thread.  We used to use separate threads
  in order to decrement the refcount on objects that are no longer reachable.  Now
  it happens at the end of the `with-gil` macro and thus it is possible to have all
  python access confined to a single thread if this is desired for stability.  It is
  also quite a bit faster as the GIL is captured once and all decrefs happen after
  that.

* Major performance and stability enhancements.
  1.  Doubled down on single-interpreter design.  This simplified some important aspects
      and led to a bit of perf gain.
  2.  Implemented JNA [DirectMapping](https://github.com/java-native-access/jna/blob/master/www/DirectMapping.md) for quite a few hotspots found via profiling some simple
      examples.  Lots of people helped out with this (John Collins, Tom Poole (joinr)).

* Python executables can now be specified directly using the syntax
  ```clojure
  (py/initialize! :python-executable <executable>)
  ```
  where **executable** can be a system installation of Python
  such as `"python3"`, `"python3.7"`; it can also be a fully qualified
  path such as `"/usr/bin/python3.7"`; or any Python executable along
  your discoverable system path.

* Python virtual environments can now be used instead of system
  installations! This has been tested on Linux/Ubuntu variants
  with virtual environments installed with
  ```bash
  virtualenv -p $(which <python-version>) env
  ```
  and then invoked using
  ```clojure
  (py/initialize! :python-executable "/abs/path/to/env/bin/python")
  ```

  Tested on Python 3.6.8 and Python 3.7.

  **WARNING**: This is suitable for casual hacking and exploratory
  development -- however, at this time, we still strongly recommend
  using Docker and a system installation of Python in production
  environments.

* **breaking change** (and remediation): `require-python` no longer
  automatically binds the Python module to the Clojure namespace
  symbol.  If you wish to bind the module to the namespace symbol,
  you need to use the `:bind-ns` flag.  Example:

  ```clojure
  (require-python 'requests) ;;=> nil
  requests ;;=> throws Exception

  (require-python '[requests :bind-ns]) ;;=> nil
  (py.. requests
        (get "https://www.google.com")
        -content
        (decode "latin-1")) ;; works
  ```

* Python method helper syntax for programmatic passing of maps
  to satisfy `*args`, `**kwargs` situations on the `py.` family of
  macros. Two new macros have been introduced to address this

  ```clojure
  (py* obj method args)
  (py* obj method args kwargs)
  (py** obj method kwargs)
  (py** obj method arg1 arg2 arg3 ... argN kwargs)
  ```
  and the `py..` syntax has been extended to accommodate these
  conventions as well.

  ```clojure
  (py.. obj (*method args))
  (py.. obj (*method args kwargs))
  (py.. obj (**method kwargs))
  (py.. obj (**method arg1 arg2 arg3 ... argN kwargs))
  ```

### Bugs Fixed:

* [attribute calls with argument given in map](https://github.com/cnuernber/libpython-clj/issues/46)
* [allow specification of python executable](https://github.com/cnuernber/libpython-clj/issues/52)
* [difference in calling conventions leads to strange behavior in pandas](https://github.com/cnuernber/libpython-clj/issues/50) with [screencast of fix](https://drive.google.com/file/d/1PTXzWqNaRAiIDDZWqkeffIK2KESRWSRh/view?usp=sharing)
* [Allow single threaded use of Python](https://github.com/cnuernber/libpython-clj/issues/48)
* [Simplify interpreter design for only one interpreter](https://github.com/cnuernber/libpython-clj/issues/47)


## 1.31

* Python objects are now datafy-able and nav-igable.  `require-python`
  is now rebuilt using datafy.

* `py.`, `py.-`, and `py..` added to the `libpython-clj` APIs
  to allow method/attribute access more consistent with idiomatic
  Clojure forms.


## 1.30

This release is a big one.  With finalizing `require-python` we have a clear way
to use Python in daily use and make it look good in normal Clojure usage.  There
is a demo of [facial recognition](https://github.com/cnuernber/facial-rec) using some
of the best open systems for doing this; this demo would absolutely not be possible
without this library due to the extensive use of numpy and cython to implement the
face detection.  We can now interact with even very complex Python systems with
roughly the same performance as a pure Python system.

#### Finalized `require-python`

Lots of work put in to make the `require-python` pathway work with
classes and some serious refactoring overall.

#### Better Numpy Support

* Most of the datatype library's math operators (+, -, etc.) are supported by numpy objects.
* Numpy objects can be used in datatype library functions (like `copy`, `make-container`)
  and work in optimized ways.

```clojure
libpython-clj.python.numpy-test> (def test-ary (py/$a np-mod array (->> (range 9)
                                                                        (partition 3)
                                                                        (mapv vec))))
#'libpython-clj.python.numpy-test/test-ary
libpython-clj.python.numpy-test> test-ary
[[0 1 2]
 [3 4 5]
 [6 7 8]]
libpython-clj.python.numpy-test> (dfn/+ test-ary 2)
[[ 2  3  4]
 [ 5  6  7]
 [ 8  9 10]]
libpython-clj.python.numpy-test> (dfn/> test-ary 4)
[[False False False]
 [False False  True]
 [ True  True  True]]
```

#### Bugs Fixed
* Support for java character <-> py string
* Fixed potential crash related to use of delay mechanism and stack based gc.
* Added logging to complain loudly if refcounts appear to be bad.


## 1.29

* Found/fixed issue with `->jvm` and large Python dictionaries.


## 1.28

* `(range 5)` - Clojure ranges <-> Python ranges when possible.
* bridged types derive from `collections.abc.*` so that they pass instance checks in
  libraries that are checking for generic types.
* Really interesting unit test for
  [generators, ranges and sequences](test/libpython_clj/iter_gen_seq_test.clj).


## 1.27

* Fixed bug where `(as-python {:is_train false})` results in a dictionary with a none
  value instead of a false value.  This was found through hours of debugging why
  mxnet's forward function call was returning different values in Clojure than in
  Python.


## 1.26

* [python startup work](https://github.com/cnuernber/libpython-clj/commit/16da3d885f29bde59ea219c9438b9d3654387971)
* [python generators & clojure transducers](https://github.com/cnuernber/libpython-clj/pull/27)
* [require-python reload fix](https://github.com/cnuernber/libpython-clj/pull/24)
* Bugfix with `require-python` :reload semantics.


## 1.25

Fixed (with tests) major issue with `require-python`.


## 1.24

Clojure's range is now respected in two different ways:
* `(range)` - bridges to a Python iterable
* `(range 5)` - copies to a Python list


## 1.23

Equals, hashcode, nice default `.toString` of Python types:

```clojure
user> (require '[libpython-clj.python :as py])
nil
user> (def test-tuple (py/->py-tuple [1 2]))
#'user/test-tuple
user> (require '[libpython-clj.require :refer [require-python]])
nil
user> (require-python '[builtins :as bt])
nil
user> (bt/type test-tuple)
builtins.tuple
user> test-tuple
(1, 2)
user> (def new-tuple (py/->py-tuple [3 4]))
#'user/new-tuple
user> (= test-tuple new-tuple)
false
user> (= test-tuple (py/->py-tuple [1 2]))
true
user> (.hashCode test-tuple)
2130570162
user> (.hashCode (py/->py-tuple [1 2]))
2130570162
user> (require-python '[numpy :as np])
nil
user> (def np-ary (np/array [1 2 3]))
#'user/np-ary
user> np-ary
[1 2 3]
user> (bt/type np-ary)
numpy.ndarray
user> (py/python-type *1)
:type
```


## 1.22

Working to make more Python environments work out of the box.  We currently have a
test case for conda working in a clean install of a docker container.  There is now a
new method, `libpython-clj.python.interpreter/detect-startup-info`, that attempts to
call `python3-config --prefix` and `python3 --version` in order to automagically
configure the Python library.


## 1.21

Bugfix release.  Passing infinite sequences to Python functions was
causing a hang as libpython-clj attempted to copy the sequence.  The
current calling convention does a shallow copy of things that are list-like
or map-like, while bridging things that are iterable or don't fall into
the above categories.

This exposed a bug that caused reference counting to be subtly wrong when
Python iterated through a bridged object.  And that was my life for a day.

## 1.20

With too many huge things we had to skip a few versions!

#### require-python

`require-python` works like `require` but operates on Python modules.
`require-python` dynamically loads the module and exports its symbols into
a Clojure namespace.  There are many options available for this pathway.


This implements a big step towards embedding Python in Clojure in a simple,
clear, and easy to use way.  One important thing to consider is that `require-python`
has a `:reload` option to allow you to actively develop a Python module and
test it via Clojure.
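
As an illustrative sketch of the workflow described above (the `math` module and the
`pymath` alias are arbitrary choices here, and `:reload` is the option mentioned above):

```clojure
(require '[libpython-clj.python :as py]
         '[libpython-clj.require :refer [require-python]])

;; Start the embedded Python interpreter
(py/initialize!)

;; Load a Python module and expose its symbols as Clojure vars
(require-python '[math :as pymath])
(pymath/sqrt 4.0) ;;=> 2.0

;; While actively developing your own Python module, re-require with :reload
;; to pick up changes without restarting the REPL.
(require-python '[math :reload])
```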


This excellent work was in large part done by [James Tolton](https://github.com/jjtolton).


* [require-python-test](test/libpython_clj/require_python_test.clj)


#### Clojure-defined Python classes

You can now extend a tuple of Python classes (or implement a new one).  This system
allows, among many things, us to use frameworks that use derivation as part of their
public API.  Please see [classes-test](test/libpython_clj/classes_test.clj) for a documented
example of a simple pathway through the new API.  Note that if you use vanilla
`->py-fn` functions as part of the class definition you won't get access to the `self`
object.


#### Bugfixes

A general stability bugfix was made involving the interoperation of
Clojure functions within Python.  Clojure functions weren't adding
a refcount to their return values.


## 1.16

Fixed a bug where the system would load multiple Python libraries, not stopping
after the first valid library loaded.  There are two ways to control the system's
Python library loading mechanism:

1. Pass in a library name in `initialize!`
2. `alter-var-root` the list of libraries in `libpython-clj.jna.base` before
   calling `initialize!`.
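
A minimal sketch of option 2 (the library name `"python3.7m"` is illustrative and
platform-specific; use whatever matches your Python installation):

```clojure
(require '[libpython-clj.jna.base])

;; Restrict the candidate library list to a single name before initializing,
;; so only this library is tried.
(alter-var-root #'libpython-clj.jna.base/*python-library-names*
                (constantly ["python3.7m"]))

(require '[libpython-clj.python :as py])
(py/initialize!)
```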


## 1.15

Moar syntax sugar --
```clojure
user> (py/$. numpy linspace)
<function linspace at 0x7fa6642766a8>
user> (py/$.. numpy random shuffle)
<built-in method shuffle of numpy.random.mtrand.RandomState object at 0x7fa66410cca8>
```


## 1.14

libpython-clj now searches for several shared libraries instead of being hardcoded
to just one of them.  Because of this, there is now:

```clojure
libpython-clj.jna.base/*python-library-names*
```

This is a sequence of library names that will be tried in order.

You can also pass in the desired library name as part of the `initialize!` call and
only this name will be tried.


================================================
FILE: CONTRIBUTING.md
================================================
# Contributing

Contributions are welcome and come in many forms.  Let's be polite, professional, and
keep this a place we want to be.


## Issues


Issues are best if they come with a test.  The exact version of Python you are running
and your operating system, Clojure version, Java version, etc., if relevant, will save
time for everyone who reviews your issue.  Consider some form of:

1.  What I did
1.  What I expected to happen
1.  What actually happened


## Failing Tests


A public branch with failing tests for things that you would like to see work is also a
great idea.  Or just more tests.  Things like:

> are java lists and maps actually substitutable for python lists and dicts in python code and vice versa

are most likely ripe areas for tests and will greatly help with expectations and code portability.


## PR's


The shorter the PR, the less back and forth there will be and the more likely it is to be accepted.


## Documentation


Documentation is key but needs to be accurate and brief.  Incorrect documentation is
worse than no documentation.


================================================
FILE: LICENSE
================================================
Eclipse Public License - v 2.0

    THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE
    PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION
    OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.

1. DEFINITIONS

"Contribution" means:

  a) in the case of the initial Contributor, the initial content
     Distributed under this Agreement, and

  b) in the case of each subsequent Contributor:
     i) changes to the Program, and
     ii) additions to the Program;
  where such changes and/or additions to the Program originate from
  and are Distributed by that particular Contributor. A Contribution
  "originates" from a Contributor if it was added to the Program by
  such Contributor itself or anyone acting on such Contributor's behalf.
  Contributions do not include changes or additions to the Program that
  are not Modified Works.

"Contributor" means any person or entity that Distributes the Program.

"Licensed Patents" mean patent claims licensable by a Contributor which
are necessarily infringed by the use or sale of its Contribution alone
or when combined with the Program.

"Program" means the Contributions Distributed in accordance with this
Agreement.

"Recipient" means anyone who receives the Program under this Agreement
or any Secondary License (as applicable), including Contributors.

"Derivative Works" shall mean any work, whether in Source Code or other
form, that is based on (or derived from) the Program and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship.

"Modified Works" shall mean any work in Source Code or other form that
results from an addition to, deletion from, or modification of the
contents of the Program, including, for purposes of clarity any new file
in Source Code form that contains any contents of the Program. Modified
Works shall not include works that contain only declarations,
interfaces, types, classes, structures, or files of the Program solely
in each case in order to link to, bind by name, or subclass the Program
or Modified Works thereof.

"Distribute" means the acts of a) distributing or b) making available
in any manner that enables the transfer of a copy.

"Source Code" means the form of a Program preferred for making
modifications, including but not limited to software source code,
documentation source, and configuration files.

"Secondary License" means either the GNU General Public License,
Version 2.0, or any later versions of that license, including any
exceptions or additional permissions as identified by the initial
Contributor.

2. GRANT OF RIGHTS

  a) Subject to the terms of this Agreement, each Contributor hereby
  grants Recipient a non-exclusive, worldwide, royalty-free copyright
  license to reproduce, prepare Derivative Works of, publicly display,
  publicly perform, Distribute and sublicense the Contribution of such
  Contributor, if any, and such Derivative Works.

  b) Subject to the terms of this Agreement, each Contributor hereby
  grants Recipient a non-exclusive, worldwide, royalty-free patent
  license under Licensed Patents to make, use, sell, offer to sell,
  import and otherwise transfer the Contribution of such Contributor,
  if any, in Source Code or other form. This patent license shall
  apply to the combination of the Contribution and the Program if, at
  the time the Contribution is added by the Contributor, such addition
  of the Contribution causes such combination to be covered by the
  Licensed Patents. The patent license shall not apply to any other
  combinations which include the Contribution. No hardware per se is
  licensed hereunder.

  c) Recipient understands that although each Contributor grants the
  licenses to its Contributions set forth herein, no assurances are
  provided by any Contributor that the Program does not infringe the
  patent or other intellectual property rights of any other entity.
  Each Contributor disclaims any liability to Recipient for claims
  brought by any other entity based on infringement of intellectual
  property rights or otherwise. As a condition to exercising the
  rights and licenses granted hereunder, each Recipient hereby
  assumes sole responsibility to secure any other intellectual
  property rights needed, if any. For example, if a third party
  patent license is required to allow Recipient to Distribute the
  Program, it is Recipient's responsibility to acquire that license
  before distributing the Program.

  d) Each Contributor represents that to its knowledge it has
  sufficient copyright rights in its Contribution, if any, to grant
  the copyright license set forth in this Agreement.

  e) Notwithstanding the terms of any Secondary License, no
  Contributor makes additional grants to any Recipient (other than
  those set forth in this Agreement) as a result of such Recipient's
  receipt of the Program under the terms of a Secondary License
  (if permitted under the terms of Section 3).

3. REQUIREMENTS

3.1 If a Contributor Distributes the Program in any form, then:

  a) the Program must also be made available as Source Code, in
  accordance with section 3.2, and the Contributor must accompany
  the Program with a statement that the Source Code for the Program
  is available under this Agreement, and informs Recipients how to
  obtain it in a reasonable manner on or through a medium customarily
  used for software exchange; and

  b) the Contributor may Distribute the Program under a license
  different than this Agreement, provided that such license:
     i) effectively disclaims on behalf of all other Contributors all
     warranties and conditions, express and implied, including
     warranties or conditions of title and non-infringement, and
     implied warranties or conditions of merchantability and fitness
     for a particular purpose;

     ii) effectively excludes on behalf of all other Contributors all
     liability for damages, including direct, indirect, special,
     incidental and consequential damages, such as lost profits;

     iii) does not attempt to limit or alter the recipients' rights
     in the Source Code under section 3.2; and

     iv) requires any subsequent distribution of the Program by any
     party to be under a license that satisfies the requirements
     of this section 3.

3.2 When the Program is Distributed as Source Code:

  a) it must be made available under this Agreement, or if the
  Program (i) is combined with other material in a separate file or
  files made available under a Secondary License, and (ii) the initial
  Contributor attached to the Source Code the notice described in
  Exhibit A of this Agreement, then the Program may be made available
  under the terms of such Secondary Licenses, and

  b) a copy of this Agreement must be included with each copy of
  the Program.

3.3 Contributors may not remove or alter any copyright, patent,
trademark, attribution notices, disclaimers of warranty, or limitations
of liability ("notices") contained within the Program from any copy of
the Program which they Distribute, provided that Contributors may add
their own appropriate notices.

4. COMMERCIAL DISTRIBUTION

Commercial distributors of software may accept certain responsibilities
with respect to end users, business partners and the like. While this
license is intended to facilitate the commercial use of the Program,
the Contributor who includes the Program in a commercial product
offering should do so in a manner which does not create potential
liability for other Contributors. Therefore, if a Contributor includes
the Program in a commercial product offering, such Contributor
("Commercial Contributor") hereby agrees to defend and indemnify every
other Contributor ("Indemnified Contributor") against any losses,
damages and costs (collectively "Losses") arising from claims, lawsuits
and other legal actions brought by a third party against the Indemnified
Contributor to the extent caused by the acts or omissions of such
Commercial Contributor in connection with its distribution of the Program
in a commercial product offering. The obligations in this section do not
apply to any claims or Losses relating to any actual or alleged
intellectual property infringement. In order to qualify, an Indemnified
Contributor must: a) promptly notify the Commercial Contributor in
writing of such claim, and b) allow the Commercial Contributor to control,
and cooperate with the Commercial Contributor in, the defense and any
related settlement negotiations. The Indemnified Contributor may
participate in any such claim at its own expense.

For example, a Contributor might include the Program in a commercial
product offering, Product X. That Contributor is then a Commercial
Contributor. If that Commercial Contributor then makes performance
claims, or offers warranties related to Product X, those performance
claims and warranties are such Commercial Contributor's responsibility
alone. Under this section, the Commercial Contributor would have to
defend claims against the other Contributors related to those performance
claims and warranties, and if a court requires any other Contributor to
pay any damages as a result, the Commercial Contributor must pay
those damages.

5. NO WARRANTY

EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT
PERMITTED BY APPLICABLE LAW, THE PROGRAM IS PROVIDED ON AN "AS IS"
BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR
IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF
TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE. Each Recipient is solely responsible for determining the
appropriateness of using and distributing the Program and assumes all
risks associated with its exercise of rights under this Agreement,
including but not limited to the risks and costs of program errors,
compliance with applicable laws, damage to or loss of data, programs
or equipment, and unavailability or interruption of operations.

6. DISCLAIMER OF LIABILITY

EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT
PERMITTED BY APPLICABLE LAW, NEITHER RECIPIENT NOR ANY CONTRIBUTORS
SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST
PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE
EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

7. GENERAL

If any provision of this Agreement is invalid or unenforceable under
applicable law, it shall not affect the validity or enforceability of
the remainder of the terms of this Agreement, and without further
action by the parties hereto, such provision shall be reformed to the
minimum extent necessary to make such provision valid and enforceable.

If Recipient institutes patent litigation against any entity
(including a cross-claim or counterclaim in a lawsuit) alleging that the
Program itself (excluding combinations of the Program with other software
or hardware) infringes such Recipient's patent(s), then such Recipient's
rights granted under Section 2(b) shall terminate as of the date such
litigation is filed.

All Recipient's rights under this Agreement shall terminate if it
fails to comply with any of the material terms or conditions of this
Agreement and does not cure such failure in a reasonable period of
time after becoming aware of such noncompliance. If all Recipient's
rights under this Agreement terminate, Recipient agrees to cease use
and distribution of the Program as soon as reasonably practicable.
However, Recipient's obligations under this Agreement and any licenses
granted by Recipient relating to the Program shall continue and survive.

Everyone is permitted to copy and distribute copies of this Agreement,
but in order to avoid inconsistency the Agreement is copyrighted and
may only be modified in the following manner. The Agreement Steward
reserves the right to publish new versions (including revisions) of
this Agreement from time to time. No one other than the Agreement
Steward has the right to modify this Agreement. The Eclipse Foundation
is the initial Agreement Steward. The Eclipse Foundation may assign the
responsibility to serve as the Agreement Steward to a suitable separate
entity. Each new version of the Agreement will be given a distinguishing
version number. The Program (including Contributions) may always be
Distributed subject to the version of the Agreement under which it was
received. In addition, after a new version of the Agreement is published,
Contributor may elect to Distribute the Program (including its
Contributions) under the new version.

Except as expressly stated in Sections 2(a) and 2(b) above, Recipient
receives no rights or licenses to the intellectual property of any
Contributor under this Agreement, whether expressly, by implication,
estoppel or otherwise. All rights in the Program not expressly granted
under this Agreement are reserved. Nothing in this Agreement is intended
to be enforceable by any entity that is not a Contributor or Recipient.
No third-party beneficiary rights are created under this Agreement.

Exhibit A - Form of Secondary Licenses Notice

"This Source Code may also be made available under the following 
Secondary Licenses when the conditions for such availability set forth 
in the Eclipse Public License, v. 2.0 are satisfied: {name license(s),
version(s), and exceptions or additional permissions here}."

  Simply including a copy of this Agreement, including this Exhibit A
  is not sufficient to license the Source Code under Secondary Licenses.

  If it is not possible or desirable to put the notice in a particular
  file, then You may include the notice in a location (such as a LICENSE
  file in a relevant directory) where a recipient would be likely to
  look for such a notice.

  You may add additional accurate notices of copyright ownership.


================================================
FILE: README.md
================================================
# Deep Clojure/Python Integration

### Version Info

[![Clojars Project](https://img.shields.io/clojars/v/clj-python/libpython-clj.svg)](https://clojars.org/clj-python/libpython-clj)
[![travis integration](https://travis-ci.com/clj-python/libpython-clj.svg?branch=master)](https://travis-ci.com/clj-python/libpython-clj)


## New Versions Are HOT!  Huge New Features! You Can't Afford To Miss Out!
 
 * [API Documentation](https://clj-python.github.io/libpython-clj/)
 * [Java API](https://clj-python.github.io/libpython-clj/libpython-clj2.java-api.html) - you can use libpython-clj from java - no
   Clojure required.  The class is included with the jar so just put the jar on the classpath and then `import libpython_clj2.java_api;`
   will work.  Be sure to carefully read the namespace doc as, due to performance considerations, not all methods are 
   protected via automatic GIL management.  Note this integration includes support for extremely efficient data copies to numpy objects
   and callbacks from python to java.
 * [make-fastcallable](https://clj-python.github.io/libpython-clj/libpython-clj2.python.html#var-make-fastcallable) - if you are
   calling a small function repeatedly, you can now call it about twice as fast.  A better optimization is to call
   a function once with numpy array arguments, but unfortunately not all use cases are amenable to that pathway.  So we
   did what we could.
 * JDK-17 and Mac M-1 support.  To use libpython-clj2 with jdk-17 you need to enable the foreign module -
   see [deps.edn](https://github.com/clj-python/libpython-clj/blob/6e7368b44aaabddf565a5bbf3a240e60bf3dcbf8/deps.edn#L10)
   for a working alias.
 * You can now [Embed Clojure in Python](https://clj-python.github.io/libpython-clj/embedded.html) - you can launch a Clojure REPL from a Python host process.
 * **32 bit support**!!
 * 20-30% better performance.
 * Please avoid deprecated versions such as `[cnuernber/libpython-clj "1.36"]` (***note name change***).
 * This library, which has received the efforts of many excellent people, is built mainly upon
   [cnuernber/dtype-next](https://github.com/cnuernber/dtype-next/) and the
   [JNA library](https://github.com/java-native-access/jna).
 * [Static code generation](https://clj-python.github.io/libpython-clj/libpython-clj2.codegen.html#var-write-namespace.21) - generate clojure namespaces
   wrapping python modules that are safe to use with AOT and load much faster than analogous `require-python` calls.  These namespaces will not
   automatically initialize the python subsystem -- `initialize!` must be called first (or a nice exception is thrown).
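
The `make-fastcallable` optimization in the list above can be sketched roughly as follows. This is a hypothetical example, not taken from the repository: it assumes `py/initialize!` has already succeeded and that numpy is installed in the resolved Python environment.

```clojure
(require '[libpython-clj2.python :as py])

;; Assumes python has been initialized and numpy is installed.
(def np (py/import-module "numpy"))
(def np-sin (py/get-attr np "sin"))

;; Wrap the callable once, then invoke it repeatedly; the wrapper
;; skips per-call overhead, which matters most for tiny functions.
(with-open [fast-sin (py/make-fastcallable np-sin)]
  (dotimes [_ 1000]
    (fast-sin 1.0)))
```

As noted above, batching work into a single call on numpy arrays is still faster when the use case allows it.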


## libpython-clj features

* Bridge between JVM objects and Python objects easily; use Python in your Java and
  use some Java in your Python.
* Python objects are linked to the JVM GC such that when they are no longer reachable
  from the JVM their references are released.  Scope based resource contexts are
  [also available](topics/scopes-and-gc.md).
* Finding the python libraries is done dynamically allowing one system to run on multiple versions
  of python.
* REPL oriented design means fast, smooth, iterative development.
* Carin Meier has written excellent posts on [plotting](http://gigasquidsoftware.com/blog/2020/01/18/parens-for-pyplot/) and
  [advanced text generation](http://gigasquidsoftware.com/blog/2020/01/10/hugging-face-gpt-with-clojure/). She also has some
  great [examples](https://github.com/gigasquid/libpython-clj-examples).


## Vision

We aim to integrate Python into Clojure at a deep level.  This means that we want to
be able to load/use python modules almost as if they were Clojure namespaces.  We
also want to be able to use Clojure to extend Python objects.  I gave a
[talk at Clojure Conj 2019](https://www.youtube.com/watch?v=vQPW16_jixs) that
outlines more of what is going on.

This code is a concrete example that generates an
[embedding for faces](https://github.com/cnuernber/facial-rec):

```clojure
(ns facial-rec.face-feature
  (:require [libpython-clj2.require :refer [require-python]]
            [libpython-clj2.python :refer [py. py.. py.-] :as py]
            [tech.v3.datatype :as dtype]))



(require-python 'mxnet
                '(mxnet ndarray module io model))
(require-python 'cv2)
(require-python '[numpy :as np])


(defn load-model
  [& {:keys [model-path checkpoint]
      :or {model-path "models/recognition/model"
           checkpoint 0}}]
  (let [[sym arg-params aux-params] (mxnet.model/load_checkpoint model-path checkpoint)
        all-layers (py. sym get_internals)
        target-layer (py/get-item all-layers "fc1_output")
        model (mxnet.module/Module :symbol target-layer
                                   :context (mxnet/cpu)
                                   :label_names nil)]
    (py. model bind :data_shapes [["data" [1 3 112 112]]])
    (py. model set_params arg-params aux-params)
    model))

(defonce model (load-model))


(defn face->feature
  [img-path]
  (py/with-gil-stack-rc-context
    (if-let [new-img (cv2/imread img-path)]
      (let [new-img (cv2/cvtColor new-img cv2/COLOR_BGR2RGB)
            new-img (np/transpose new-img [2 0 1])
            input-blob (np/expand_dims new-img :axis 0)
            data (mxnet.ndarray/array input-blob)
            batch (mxnet.io/DataBatch :data [data])]
        (py. model forward batch :is_train false)
        (-> (py. model get_outputs)
            first
            (py. asnumpy)
            (#(dtype/make-container :java-array :float32 %))))
      (throw (Exception. (format "Failed to load img: %s" img-path))))))
```


## Usage

#### Config namespace
```clojure
(ns my-py-clj.config
  (:require [libpython-clj2.python :as py]))

;; When you use conda, it should look like this.
(py/initialize! :python-executable "/opt/anaconda3/envs/my_env/bin/python3.7"
                :library-path "/opt/anaconda3/envs/my_env/lib/libpython3.7m.dylib")
```

#### Update project.clj
```clojure
{...
  ;; This namespace will run when the REPL starts.
  :repl-options {:init-ns my-py-clj.config}
...}
```


```clojure
user> (require '[libpython-clj2.require :refer [require-python]])
...logging info....
nil
user> (require-python '[numpy :as np])
nil
user> (def test-ary (np/array [[1 2][3 4]]))
#'user/test-ary
user> test-ary
[[1 2]
 [3 4]]
```

We have a [document](topics/Usage.md) on all the features but beginning usage is
pretty simple.  Import your modules, use the things from Clojure.  We have put
effort into making sure things like sequences and ranges transfer between the two
languages.
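
As a minimal sketch of that data transfer (hypothetical values; assumes python has already been initialized via `py/initialize!`):

```clojure
(require '[libpython-clj2.python :as py])

;; Clojure datastructures convert to python objects on the way in...
(def py-list (py/->python [1 2 3]))

;; ...and python objects copy back into JVM datastructures.
(py/->jvm py-list)
```

Most of the time you never call these directly; argument and return-value conversion happens automatically when calling imported python functions.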


#### Environments


One very complementary aspect of Python with respect to Clojure is its integration
with cutting-edge native libraries.  Our support isn't perfect, so some understanding
of the mechanism is important for diagnosing errors and issues.

Currently, we launch the python3 executable and print out various bits of
configuration as json.  We parse the json and use the output to attempt to find
the `libpython3.Xm.so` shared library; for example, if we are loading python
3.6 we look for `libpython3.6m.so` on Linux or `libpython3.6m.dylib` on the Mac.

If we are unable to find a dynamic library such as `libpythonx.y.so` or `libpythonx.y.dylib`,
it may be because Python is statically linked and the library is not present at all.
This depends on the operating system and installation, and it is not always possible to detect.
In this case you will see the error message "Failed to find a valid python library!".
To fix this, you may need to install additional OS packages or manually set the precise library location during `py/initialize!`.


This pathway has allowed us to support Conda, albeit with some work.  For examples
using Conda, check out the facial rec repository above or look into how we
[build](scripts/build-conda-docker)
our test [docker containers](dockerfiles/CondaDockerfile).

#### devcontainer
The scicloj community maintains a `devcontainer` [template](https://github.com/scicloj/devcontainer-templates/tree/main/src/scicloj) on which `libpython-clj` is known to work
out of the box.

This can be used as a starting point for projects using `libpython-clj` or as a reference for debugging issues.


## Community

We like to talk about libpython-clj on [Zulip](https://clojurians.zulipchat.com/#streams/215609/libpython-clj-dev) as the conversations are persistent and searchable.


## Further Information

* Clojure Conj 2019 [video](https://www.youtube.com/watch?v=vQPW16_jixs) and
  [slides](https://docs.google.com/presentation/d/1uegYhpS6P2AtEfhpg6PlgBmTSIPqCXvFTWcGYG_Qk2o/edit?usp=sharing).
* [development discussion forum](https://clojurians.zulipchat.com/#narrow/stream/215609-libpython-clj-dev)
* [design documentation](topics/design.md)
* [scope and garbage collection docs](topics/scopes-and-gc.md)
* [examples](https://github.com/gigasquid/libpython-clj-examples)
* [docker setup](https://github.com/scicloj/docker-hub)
* [pandas bindings (!!)](https://github.com/alanmarazzi/panthera)
* [nextjournal notebooks](https://nextjournal.com/kommen)
* [scicloj video](https://www.youtube.com/watch?v=ajDiGS73i2o)
* [Clojure/Python interop technical blog post](http://techascent.com/blog/functions-across-languages.html)
* [persistent datastructures in python](https://github.com/tobgu/pyrsistent)


## New To Clojure

New to Clojure or the JVM?  Try remixing the nextjournal entry and playing around
there.  For more resources on learning and getting more comfortable with Clojure,
we have an [introductory document](topics/new-to-clojure.md).


## Resources

* [libpython C api](https://docs.python.org/3.7/c-api/index.html#c-api-index)
* [create numpy from C ptr](https://stackoverflow.com/questions/23930671/how-to-create-n-dim-numpy-array-from-a-pointer)
* [create C ptr from numpy](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.ctypes.html)


To install the jar to your local .m2:

```bash
$ clj -X:depstar
```

After building and processing the `pom.xml` file you can run:

```bash
$ clj -X:install
```

### Deploy to clojars

```bash
$ clj -X:deploy
```
> This command will sign the jar before deploying, using your gpg key.


## License

Copyright © 2019 Chris Nuernberger

This program and the accompanying materials are made available under the
terms of the Eclipse Public License 2.0 which is available at
http://www.eclipse.org/legal/epl-2.0.


================================================
FILE: codegen-test/README.md
================================================
# Codegen Sample

A quick sample testing intended codegen usage.


## Usage

We first generate the numpy namespace with a compile step.

```console
scripts/compile
```

Next we run the compiled code from an AOT'd main.


```console
scripts/run
```


================================================
FILE: codegen-test/deps.edn
================================================
{:paths ["src"]
 :deps {clj-python/libpython-clj {:mvn/version "2.017"}}}


================================================
FILE: codegen-test/scripts/compile
================================================
#!/bin/bash



clj -e "(require '[code.codegen :as cg])(cg/codegen)(println \"compilation successful\")\
(shutdown-agents)"


================================================
FILE: codegen-test/scripts/run
================================================
#!/bin/bash

clj -e "(compile 'code.main)"
java -cp "$(clj -Spath):classes" code.main

================================================
FILE: codegen-test/src/code/codegen.clj
================================================
(ns code.codegen
  (:require [libpython-clj2.python :as py]
            [libpython-clj2.codegen :as cgen]))


(defn codegen
  []
  (py/initialize!)
  (cgen/write-namespace! "numpy"))


================================================
FILE: codegen-test/src/code/main.clj
================================================
(ns code.main
  (:require [python.numpy :as np]
            [libpython-clj2.python :as py])
  (:gen-class))


(defn setup!
  []
  (py/initialize!))


(defn lspace
  []
  (np/linspace 2 3))


(defn -main
  []
  (setup!)
  (println (lspace))
  (shutdown-agents))


================================================
FILE: codegen-test/src/python/numpy.clj
================================================
(ns python.numpy
"No documentation provided"
(:require [libpython-clj2.python :as py]
          [libpython-clj2.python.jvm-handle :refer [py-global-delay]]
          [libpython-clj2.python.bridge-as-jvm :as as-jvm])
(:refer-clojure :exclude [+ - * / float double int long mod byte test char short take partition require max min identity empty mod repeat str load cast type sort conj map range list next hash eval bytes filter compile print set format compare reduce merge]))

(defonce ^:private src-obj* (py-global-delay (py/path->py-obj "numpy")))

(def ^{:doc "
    result_type(*arrays_and_dtypes)

    Returns the type that results from applying the NumPy
    type promotion rules to the arguments.

    Type promotion in NumPy works similarly to the rules in languages
    like C++, with some slight differences.  When both scalars and
    arrays are used, the array's type takes precedence and the actual value
    of the scalar is taken into account.

    For example, calculating 3*a, where a is an array of 32-bit floats,
    intuitively should result in a 32-bit float output.  If the 3 is a
    32-bit integer, the NumPy rules indicate it can't convert losslessly
    into a 32-bit float, so a 64-bit float should be the result type.
    By examining the value of the constant, '3', we see that it fits in
    an 8-bit integer, which can be cast losslessly into the 32-bit float.

    Parameters
    ----------
    arrays_and_dtypes : list of arrays and dtypes
        The operands of some operation whose result type is needed.

    Returns
    -------
    out : dtype
        The result type.

    See also
    --------
    dtype, promote_types, min_scalar_type, can_cast

    Notes
    -----
    .. versionadded:: 1.6.0

    The specific algorithm used is as follows.

    Categories are determined by first checking which of boolean,
    integer (int/uint), or floating point (float/complex) the maximum
    kind of all the arrays and the scalars are.

    If there are only scalars or the maximum category of the scalars
    is higher than the maximum category of the arrays,
    the data types are combined with :func:`promote_types`
    to produce the return value.

    Otherwise, `min_scalar_type` is called on each array, and
    the resulting data types are all combined with :func:`promote_types`
    to produce the return value.

    The set of int values is not a subset of the uint values for types
    with the same number of bits, something not reflected in
    :func:`min_scalar_type`, but handled as a special case in `result_type`.

    Examples
    --------
    >>> np.result_type(3, np.arange(7, dtype='i1'))
    dtype('int8')

    >>> np.result_type('i4', 'c8')
    dtype('complex128')

    >>> np.result_type(3.0, -2)
    dtype('float64')

    " :arglists '[[& [args {:as kwargs}]]]} result_type (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "result_type"))))

(def ^{:doc "
    ravel_multi_index(multi_index, dims, mode='raise', order='C')

    Converts a tuple of index arrays into an array of flat
    indices, applying boundary modes to the multi-index.

    Parameters
    ----------
    multi_index : tuple of array_like
        A tuple of integer arrays, one array for each dimension.
    dims : tuple of ints
        The shape of array into which the indices from ``multi_index`` apply.
    mode : {'raise', 'wrap', 'clip'}, optional
        Specifies how out-of-bounds indices are handled.  Can specify
        either one mode or a tuple of modes, one mode per index.

        * 'raise' -- raise an error (default)
        * 'wrap' -- wrap around
        * 'clip' -- clip to the range

        In 'clip' mode, a negative index which would normally
        wrap will clip to 0 instead.
    order : {'C', 'F'}, optional
        Determines whether the multi-index should be viewed as
        indexing in row-major (C-style) or column-major
        (Fortran-style) order.

    Returns
    -------
    raveled_indices : ndarray
        An array of indices into the flattened version of an array
        of dimensions ``dims``.

    See Also
    --------
    unravel_index

    Notes
    -----
    .. versionadded:: 1.6.0

    Examples
    --------
    >>> arr = np.array([[3,6,6],[4,5,1]])
    >>> np.ravel_multi_index(arr, (7,6))
    array([22, 41, 37])
    >>> np.ravel_multi_index(arr, (7,6), order='F')
    array([31, 41, 13])
    >>> np.ravel_multi_index(arr, (4,6), mode='clip')
    array([22, 23, 19])
    >>> np.ravel_multi_index(arr, (4,4), mode=('clip','wrap'))
    array([12, 13, 13])

    >>> np.ravel_multi_index((3,1,4,1), (6,7,8,9))
    1621
    " :arglists '[[& [args {:as kwargs}]]]} ravel_multi_index (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "ravel_multi_index"))))

(def ^{:doc "
    vdot(a, b)

    Return the dot product of two vectors.

    The vdot(`a`, `b`) function handles complex numbers differently than
    dot(`a`, `b`).  If the first argument is complex the complex conjugate
    of the first argument is used for the calculation of the dot product.

    Note that `vdot` handles multidimensional arrays differently than `dot`:
    it does *not* perform a matrix product, but flattens input arguments
    to 1-D vectors first. Consequently, it should only be used for vectors.

    Parameters
    ----------
    a : array_like
        If `a` is complex the complex conjugate is taken before calculation
        of the dot product.
    b : array_like
        Second argument to the dot product.

    Returns
    -------
    output : ndarray
        Dot product of `a` and `b`.  Can be an int, float, or
        complex depending on the types of `a` and `b`.

    See Also
    --------
    dot : Return the dot product without using the complex conjugate of the
          first argument.

    Examples
    --------
    >>> a = np.array([1+2j,3+4j])
    >>> b = np.array([5+6j,7+8j])
    >>> np.vdot(a, b)
    (70-8j)
    >>> np.vdot(b, a)
    (70+8j)

    Note that higher-dimensional arrays are flattened!

    >>> a = np.array([[1, 4], [5, 6]])
    >>> b = np.array([[4, 1], [2, 2]])
    >>> np.vdot(a, b)
    30
    >>> np.vdot(b, a)
    30
    >>> 1*4 + 4*1 + 5*2 + 6*2
    30

    " :arglists '[[& [args {:as kwargs}]]]} vdot (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "vdot"))))

(def ^{:doc "Signed integer type, compatible with C ``short``.

    :Character code: ``'h'``
    :Canonical name: `numpy.short`
    :Alias on this platform: `numpy.int16`: 16-bit signed integer (``-32_768`` to ``32_767``)." :arglists '[[self & [args {:as kwargs}]]]} int16 (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "int16"))))

(def ^{:doc ""} NaN ##NaN)

(def ^{:doc "
    Diagnosing machine parameters.

    Attributes
    ----------
    ibeta : int
        Radix in which numbers are represented.
    it : int
        Number of base-`ibeta` digits in the floating point mantissa M.
    machep : int
        Exponent of the smallest (most negative) power of `ibeta` that,
        added to 1.0, gives something different from 1.0
    eps : float
        Floating-point number ``beta**machep`` (floating point precision)
    negep : int
        Exponent of the smallest power of `ibeta` that, subtracted
        from 1.0, gives something different from 1.0.
    epsneg : float
        Floating-point number ``beta**negep``.
    iexp : int
        Number of bits in the exponent (including its sign and bias).
    minexp : int
        Smallest (most negative) power of `ibeta` consistent with there
        being no leading zeros in the mantissa.
    xmin : float
        Floating-point number ``beta**minexp`` (the smallest [in
        magnitude] positive floating point number with full precision).
    maxexp : int
        Smallest (positive) power of `ibeta` that causes overflow.
    xmax : float
        ``(1-epsneg) * beta**maxexp`` (the largest [in magnitude]
        usable floating value).
    irnd : int
        In ``range(6)``, information on what kind of rounding is done
        in addition, and on how underflow is handled.
    ngrd : int
        Number of 'guard digits' used when truncating the product
        of two mantissas to fit the representation.
    epsilon : float
        Same as `eps`.
    tiny : float
        Same as `xmin`.
    huge : float
        Same as `xmax`.
    precision : float
        ``- int(-log10(eps))``
    resolution : float
        ``- 10**(-precision)``

    Parameters
    ----------
    float_conv : function, optional
        Function that converts an integer or integer array to a float
        or float array. Default is `float`.
    int_conv : function, optional
        Function that converts a float or float array to an integer or
        integer array. Default is `int`.
    float_to_float : function, optional
        Function that converts a float array to float. Default is `float`.
        Note that this does not seem to do anything useful in the current
        implementation.
    float_to_str : function, optional
        Function that converts a single float to a string. Default is
        ``lambda v:'%24.16e' %v``.
    title : str, optional
        Title that is printed in the string representation of `MachAr`.

    See Also
    --------
    finfo : Machine limits for floating point types.
    iinfo : Machine limits for integer types.

    References
    ----------
    .. [1] Press, Teukolsky, Vetterling and Flannery,
           \"Numerical Recipes in C++,\" 2nd ed,
           Cambridge University Press, 2002, p. 31.

    " :arglists '[[self & [{float_conv :float_conv, int_conv :int_conv, float_to_float :float_to_float, float_to_str :float_to_str, title :title}]] [self & [{float_conv :float_conv, int_conv :int_conv, float_to_float :float_to_float, float_to_str :float_to_str}]] [self & [{float_conv :float_conv, int_conv :int_conv, float_to_float :float_to_float}]] [self & [{float_conv :float_conv, int_conv :int_conv}]] [self & [{float_conv :float_conv}]] [self]]} MachAr (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "MachAr"))))

(def ^{:doc "
    Kronecker product of two arrays.

    Computes the Kronecker product, a composite array made of blocks of the
    second array scaled by the first.

    Parameters
    ----------
    a, b : array_like

    Returns
    -------
    out : ndarray

    See Also
    --------
    outer : The outer product

    Notes
    -----
    The function assumes that the number of dimensions of `a` and `b`
    are the same, if necessary prepending the smallest with ones.
    If `a.shape = (r0,r1,..,rN)` and `b.shape = (s0,s1,...,sN)`,
    the Kronecker product has shape `(r0*s0, r1*s1, ..., rN*sN)`.
    The elements are products of elements from `a` and `b`, organized
    explicitly by::

        kron(a,b)[k0,k1,...,kN] = a[i0,i1,...,iN] * b[j0,j1,...,jN]

    where::

        kt = it * st + jt,  t = 0,...,N

    In the common 2-D case (N=1), the block structure can be visualized::

        [[ a[0,0]*b,   a[0,1]*b,  ... , a[0,-1]*b  ],
         [  ...                              ...   ],
         [ a[-1,0]*b,  a[-1,1]*b, ... , a[-1,-1]*b ]]


    Examples
    --------
    >>> np.kron([1,10,100], [5,6,7])
    array([  5,   6,   7, ..., 500, 600, 700])
    >>> np.kron([5,6,7], [1,10,100])
    array([  5,  50, 500, ...,   7,  70, 700])

    >>> np.kron(np.eye(2), np.ones((2,2)))
    array([[1.,  1.,  0.,  0.],
           [1.,  1.,  0.,  0.],
           [0.,  0.,  1.,  1.],
           [0.,  0.,  1.,  1.]])

    >>> a = np.arange(100).reshape((2,5,2,5))
    >>> b = np.arange(24).reshape((2,3,4))
    >>> c = np.kron(a,b)
    >>> c.shape
    (2, 10, 6, 20)
    >>> I = (1,3,0,2)
    >>> J = (0,2,1)
    >>> J1 = (0,) + J             # extend to ndim=4
    >>> S1 = (1,) + b.shape
    >>> K = tuple(np.array(I) * np.array(S1) + np.array(J1))
    >>> c[K] == a[I]*b[J]
    True

    " :arglists '[[& [args {:as kwargs}]]]} kron (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "kron"))))

(def ^{:doc "Abstract base class of all scalar types without predefined length.
    The actual size of these types depends on the specific `np.dtype`
    instantiation." :arglists '[[self & [args {:as kwargs}]]]} flexible (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "flexible"))))

(def ^{:doc "
    Load ASCII data stored in a comma-separated file.

    The returned array is a record array (if ``usemask=False``, see
    `recarray`) or a masked record array (if ``usemask=True``,
    see `ma.mrecords.MaskedRecords`).

    Parameters
    ----------
    fname, kwargs : For a description of input parameters, see `genfromtxt`.

    See Also
    --------
    numpy.genfromtxt : generic function to load ASCII data.

    Notes
    -----
    By default, `dtype` is None, which means that the data-type of the output
    array will be determined from the data.

    " :arglists '[[fname & [{:as kwargs}]]]} recfromcsv (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "recfromcsv"))))

(def ^{:doc "Abstract base class of all numeric scalar types with a (potentially)
    inexact representation of the values in its range, such as
    floating-point numbers." :arglists '[[self & [args {:as kwargs}]]]} inexact (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "inexact"))))

(def ^{:doc "floor(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

Return the floor of the input, element-wise.

The floor of the scalar `x` is the largest integer `i`, such that
`i <= x`.  It is often denoted as :math:`\\lfloor x \\rfloor`.

Parameters
----------
x : array_like
    Input data.
out : ndarray, None, or tuple of ndarray and None, optional
    A location into which the result is stored. If provided, it must have
    a shape that the inputs broadcast to. If not provided or None,
    a freshly-allocated array is returned. A tuple (possible only as a
    keyword argument) must have length equal to the number of outputs.
where : array_like, optional
    This condition is broadcast over the input. At locations where the
    condition is True, the `out` array will be set to the ufunc result.
    Elsewhere, the `out` array will retain its original value.
    Note that if an uninitialized `out` array is created via the default
    ``out=None``, locations within it where the condition is False will
    remain uninitialized.
**kwargs
    For other keyword-only arguments, see the
    :ref:`ufunc docs <ufuncs.kwargs>`.

Returns
-------
y : ndarray or scalar
    The floor of each element in `x`.
    This is a scalar if `x` is a scalar.

See Also
--------
ceil, trunc, rint

Notes
-----
Some spreadsheet programs calculate the \"floor-towards-zero\", in other
words ``floor(-2.5) == -2``.  NumPy instead uses the definition of
`floor` where `floor(-2.5) == -3`.

Examples
--------
>>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
>>> np.floor(a)
array([-2., -2., -1.,  0.,  1.,  1.,  2.])" :arglists '[[self & [args {:as kwargs}]]]} floor (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "floor"))))

(def ^{:doc ""} tracemalloc_domain 389047)

(def ^{:doc "
    Return selected slices of an array along given axis.

    When working along a given axis, a slice along that axis is returned in
    `output` for each index where `condition` evaluates to True. When
    working on a 1-D array, `compress` is equivalent to `extract`.

    Parameters
    ----------
    condition : 1-D array of bools
        Array that selects which entries to return. If len(condition)
        is less than the size of `a` along the given axis, then output is
        truncated to the length of the condition array.
    a : array_like
        Array from which to extract a part.
    axis : int, optional
        Axis along which to take slices. If None (default), work on the
        flattened array.
    out : ndarray, optional
        Output array.  Its type is preserved and it must be of the right
        shape to hold the output.

    Returns
    -------
    compressed_array : ndarray
        A copy of `a` without the slices along axis for which `condition`
        is false.

    See Also
    --------
    take, choose, diag, diagonal, select
    ndarray.compress : Equivalent method in ndarray
    extract : Equivalent method when working on 1-D arrays
    :ref:`ufuncs-output-type`

    Examples
    --------
    >>> a = np.array([[1, 2], [3, 4], [5, 6]])
    >>> a
    array([[1, 2],
           [3, 4],
           [5, 6]])
    >>> np.compress([0, 1], a, axis=0)
    array([[3, 4]])
    >>> np.compress([False, True, True], a, axis=0)
    array([[3, 4],
           [5, 6]])
    >>> np.compress([False, True], a, axis=1)
    array([[2],
           [4],
           [6]])

    Working on the flattened array does not return slices along an axis but
    selects elements.

    >>> np.compress([False, True], a)
    array([2])

    " :arglists '[[& [args {:as kwargs}]]]} compress (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "compress"))))

(def ^{:doc "Extended-precision floating-point number type, compatible with C
    ``long double`` but not necessarily with IEEE 754 quadruple-precision.

    :Character code: ``'g'``
    :Canonical name: `numpy.longdouble`
    :Alias: `numpy.longfloat`
    :Alias on this platform: `numpy.float128`: 128-bit extended-precision floating-point number type." :arglists '[[self & [args {:as kwargs}]]]} longfloat (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "longfloat"))))

(def ^{:doc "log1p(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

Return the natural logarithm of one plus the input array, element-wise.

Calculates ``log(1 + x)``.

Parameters
----------
x : array_like
    Input values.
out : ndarray, None, or tuple of ndarray and None, optional
    A location into which the result is stored. If provided, it must have
    a shape that the inputs broadcast to. If not provided or None,
    a freshly-allocated array is returned. A tuple (possible only as a
    keyword argument) must have length equal to the number of outputs.
where : array_like, optional
    This condition is broadcast over the input. At locations where the
    condition is True, the `out` array will be set to the ufunc result.
    Elsewhere, the `out` array will retain its original value.
    Note that if an uninitialized `out` array is created via the default
    ``out=None``, locations within it where the condition is False will
    remain uninitialized.
**kwargs
    For other keyword-only arguments, see the
    :ref:`ufunc docs <ufuncs.kwargs>`.

Returns
-------
y : ndarray
    Natural logarithm of `1 + x`, element-wise.
    This is a scalar if `x` is a scalar.

See Also
--------
expm1 : ``exp(x) - 1``, the inverse of `log1p`.

Notes
-----
For real-valued input, `log1p` is accurate also for `x` so small
that `1 + x == 1` in floating-point accuracy.

Logarithm is a multivalued function: for each `x` there is an infinite
number of `z` such that `exp(z) = 1 + x`. The convention is to return
the `z` whose imaginary part lies in `[-pi, pi]`.

For real-valued input data types, `log1p` always returns real output.
For each value that cannot be expressed as a real number or infinity,
it yields ``nan`` and sets the `invalid` floating point error flag.

For complex-valued input, `log1p` is a complex analytical function that
has a branch cut `[-inf, -1]` and is continuous from above on it.
`log1p` handles the floating-point negative zero as an infinitesimal
negative number, conforming to the C99 standard.

References
----------
.. [1] M. Abramowitz and I.A. Stegun, \"Handbook of Mathematical Functions\",
       10th printing, 1964, pp. 67. http://www.math.sfu.ca/~cbm/aands/
.. [2] Wikipedia, \"Logarithm\". https://en.wikipedia.org/wiki/Logarithm

Examples
--------
>>> np.log1p(1e-99)
1e-99
>>> np.log(1 + 1e-99)
0.0" :arglists '[[self & [args {:as kwargs}]]]} log1p (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "log1p"))))

(def ^{:doc "arctan2(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

Element-wise arc tangent of ``x1/x2`` choosing the quadrant correctly.

The quadrant (i.e., branch) is chosen so that ``arctan2(x1, x2)`` is
the signed angle in radians between the ray ending at the origin and
passing through the point (1,0), and the ray ending at the origin and
passing through the point (`x2`, `x1`).  (Note the role reversal: the
\"`y`-coordinate\" is the first function parameter, the \"`x`-coordinate\"
is the second.)  By IEEE convention, this function is defined for
`x2` = +/-0 and for either or both of `x1` and `x2` = +/-inf (see
Notes for specific values).

This function is not defined for complex-valued arguments; for the
so-called argument of complex values, use `angle`.

Parameters
----------
x1 : array_like, real-valued
    `y`-coordinates.
x2 : array_like, real-valued
    `x`-coordinates.
    If ``x1.shape != x2.shape``, they must be broadcastable to a common
    shape (which becomes the shape of the output).
out : ndarray, None, or tuple of ndarray and None, optional
    A location into which the result is stored. If provided, it must have
    a shape that the inputs broadcast to. If not provided or None,
    a freshly-allocated array is returned. A tuple (possible only as a
    keyword argument) must have length equal to the number of outputs.
where : array_like, optional
    This condition is broadcast over the input. At locations where the
    condition is True, the `out` array will be set to the ufunc result.
    Elsewhere, the `out` array will retain its original value.
    Note that if an uninitialized `out` array is created via the default
    ``out=None``, locations within it where the condition is False will
    remain uninitialized.
**kwargs
    For other keyword-only arguments, see the
    :ref:`ufunc docs <ufuncs.kwargs>`.

Returns
-------
angle : ndarray
    Array of angles in radians, in the range ``[-pi, pi]``.
    This is a scalar if both `x1` and `x2` are scalars.

See Also
--------
arctan, tan, angle

Notes
-----
*arctan2* is identical to the `atan2` function of the underlying
C library.  The following special values are defined in the C
standard: [1]_

====== ====== ================
`x1`   `x2`   `arctan2(x1,x2)`
====== ====== ================
+/- 0  +0     +/- 0
+/- 0  -0     +/- pi
 > 0   +/-inf +0 / +pi
 < 0   +/-inf -0 / -pi
+/-inf +inf   +/- (pi/4)
+/-inf -inf   +/- (3*pi/4)
====== ====== ================

Note that +0 and -0 are distinct floating point numbers, as are +inf
and -inf.

References
----------
.. [1] ISO/IEC standard 9899:1999, \"Programming language C.\"

Examples
--------
Consider four points in different quadrants:

>>> x = np.array([-1, +1, +1, -1])
>>> y = np.array([-1, -1, +1, +1])
>>> np.arctan2(y, x) * 180 / np.pi
array([-135.,  -45.,   45.,  135.])

Note the order of the parameters. `arctan2` is defined also when `x2` = 0
and at several other special points, obtaining values in
the range ``[-pi, pi]``:

>>> np.arctan2([1., -1.], [0., 0.])
array([ 1.57079633, -1.57079633])
>>> np.arctan2([0., 0., np.inf], [+0., -0., np.inf])
array([ 0.        ,  3.14159265,  0.78539816])" :arglists '[[self & [args {:as kwargs}]]]} arctan2 (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "arctan2"))))

(def ^{:doc "ceil(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

Return the ceiling of the input, element-wise.

The ceil of the scalar `x` is the smallest integer `i`, such that
`i >= x`.  It is often denoted as :math:`\\lceil x \\rceil`.

Parameters
----------
x : array_like
    Input data.
out : ndarray, None, or tuple of ndarray and None, optional
    A location into which the result is stored. If provided, it must have
    a shape that the inputs broadcast to. If not provided or None,
    a freshly-allocated array is returned. A tuple (possible only as a
    keyword argument) must have length equal to the number of outputs.
where : array_like, optional
    This condition is broadcast over the input. At locations where the
    condition is True, the `out` array will be set to the ufunc result.
    Elsewhere, the `out` array will retain its original value.
    Note that if an uninitialized `out` array is created via the default
    ``out=None``, locations within it where the condition is False will
    remain uninitialized.
**kwargs
    For other keyword-only arguments, see the
    :ref:`ufunc docs <ufuncs.kwargs>`.

Returns
-------
y : ndarray or scalar
    The ceiling of each element in `x`, with `float` dtype.
    This is a scalar if `x` is a scalar.

See Also
--------
floor, trunc, rint

Examples
--------
>>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
>>> np.ceil(a)
array([-1., -1., -0.,  1.,  2.,  2.,  2.])" :arglists '[[self & [args {:as kwargs}]]]} ceil (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "ceil"))))

(def ^{:doc "
    Return an array (ndim >= 1) laid out in Fortran order in memory.

    Parameters
    ----------
    a : array_like
        Input array.
    dtype : str or dtype object, optional
        By default, the data-type is inferred from the input data.
    like : array_like
        Reference object to allow the creation of arrays which are not
        NumPy arrays. If an array-like passed in as ``like`` supports
        the ``__array_function__`` protocol, the result will be defined
        by it. In this case, it ensures the creation of an array object
        compatible with that passed in via this argument.

        .. note::
            The ``like`` keyword is an experimental feature pending on
            acceptance of :ref:`NEP 35 <NEP35>`.

        .. versionadded:: 1.20.0

    Returns
    -------
    out : ndarray
        The input `a` in Fortran, or column-major, order.

    See Also
    --------
    ascontiguousarray : Convert input to a contiguous (C order) array.
    asanyarray : Convert input to an ndarray with either row or
        column-major memory order.
    require : Return an ndarray that satisfies requirements.
    ndarray.flags : Information about the memory layout of the array.

    Examples
    --------
    >>> x = np.arange(6).reshape(2,3)
    >>> y = np.asfortranarray(x)
    >>> x.flags['F_CONTIGUOUS']
    False
    >>> y.flags['F_CONTIGUOUS']
    True

    Note: This function returns an array with at least one dimension (1-d),
    so it will not preserve 0-d arrays.

    " :arglists '[[a & [{dtype :dtype, like :like}]] [a & [{like :like}]]]} asfortranarray (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "asfortranarray"))))

(def ^{:doc "logical_and(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

Compute the truth value of x1 AND x2 element-wise.

Parameters
----------
x1, x2 : array_like
    Input arrays.
    If ``x1.shape != x2.shape``, they must be broadcastable to a common
    shape (which becomes the shape of the output).
out : ndarray, None, or tuple of ndarray and None, optional
    A location into which the result is stored. If provided, it must have
    a shape that the inputs broadcast to. If not provided or None,
    a freshly-allocated array is returned. A tuple (possible only as a
    keyword argument) must have length equal to the number of outputs.
where : array_like, optional
    This condition is broadcast over the input. At locations where the
    condition is True, the `out` array will be set to the ufunc result.
    Elsewhere, the `out` array will retain its original value.
    Note that if an uninitialized `out` array is created via the default
    ``out=None``, locations within it where the condition is False will
    remain uninitialized.
**kwargs
    For other keyword-only arguments, see the
    :ref:`ufunc docs <ufuncs.kwargs>`.

Returns
-------
y : ndarray or bool
    Boolean result of the logical AND operation applied to the elements
    of `x1` and `x2`; the shape is determined by broadcasting.
    This is a scalar if both `x1` and `x2` are scalars.

See Also
--------
logical_or, logical_not, logical_xor
bitwise_and

Examples
--------
>>> np.logical_and(True, False)
False
>>> np.logical_and([True, False], [False, False])
array([False, False])

>>> x = np.arange(5)
>>> np.logical_and(x>1, x<4)
array([False, False,  True,  True, False])


The ``&`` operator can be used as a shorthand for ``np.logical_and`` on
boolean ndarrays.

>>> a = np.array([True, False])
>>> b = np.array([False, False])
>>> a & b
array([False, False])" :arglists '[[self & [args {:as kwargs}]]]} logical_and (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "logical_and"))))

(def ^{:doc "
    Return indices that are non-zero in the flattened version of a.

    This is equivalent to np.nonzero(np.ravel(a))[0].

    Parameters
    ----------
    a : array_like
        Input data.

    Returns
    -------
    res : ndarray
        Output array, containing the indices of the elements of `a.ravel()`
        that are non-zero.

    See Also
    --------
    nonzero : Return the indices of the non-zero elements of the input array.
    ravel : Return a 1-D array containing the elements of the input array.

    Examples
    --------
    >>> x = np.arange(-2, 3)
    >>> x
    array([-2, -1,  0,  1,  2])
    >>> np.flatnonzero(x)
    array([0, 1, 3, 4])

    Use the indices of the non-zero elements as an index array to extract
    these elements:

    >>> x.ravel()[np.flatnonzero(x)]
    array([-2, -1,  1,  2])

    " :arglists '[[& [args {:as kwargs}]]]} flatnonzero (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "flatnonzero"))))

(def ^{:doc "
    Returns True if the type of `element` is a scalar type.

    Parameters
    ----------
    element : any
        Input argument, can be of any type and shape.

    Returns
    -------
    val : bool
        True if `element` is a scalar type, False if it is not.

    See Also
    --------
    ndim : Get the number of dimensions of an array

    Notes
    -----
    If you need a stricter way to identify a *numerical* scalar, use
    ``isinstance(x, numbers.Number)``, as that returns ``False`` for most
    non-numerical elements such as strings.

    In most cases ``np.ndim(x) == 0`` should be used instead of this function,
    as that will also return true for 0d arrays. This is how numpy overloads
    functions in the style of the ``dx`` arguments to `gradient` and the ``bins``
    argument to `histogram`. Some key differences:

    +--------------------------------------+---------------+-------------------+
    | x                                    |``isscalar(x)``|``np.ndim(x) == 0``|
    +======================================+===============+===================+
    | PEP 3141 numeric objects (including  | ``True``      | ``True``          |
    | builtins)                            |               |                   |
    +--------------------------------------+---------------+-------------------+
    | builtin string and buffer objects    | ``True``      | ``True``          |
    +--------------------------------------+---------------+-------------------+
    | other builtin objects, like          | ``False``     | ``True``          |
    | `pathlib.Path`, `Exception`,         |               |                   |
    | the result of `re.compile`           |               |                   |
    +--------------------------------------+---------------+-------------------+
    | third-party objects like             | ``False``     | ``True``          |
    | `matplotlib.figure.Figure`           |               |                   |
    +--------------------------------------+---------------+-------------------+
    | zero-dimensional numpy arrays        | ``False``     | ``True``          |
    +--------------------------------------+---------------+-------------------+
    | other numpy arrays                   | ``False``     | ``False``         |
    +--------------------------------------+---------------+-------------------+
    | `list`, `tuple`, and other sequence  | ``False``     | ``False``         |
    | objects                              |               |                   |
    +--------------------------------------+---------------+-------------------+

    Examples
    --------
    >>> np.isscalar(3.1)
    True
    >>> np.isscalar(np.array(3.1))
    False
    >>> np.isscalar([3.1])
    False
    >>> np.isscalar(False)
    True
    >>> np.isscalar('numpy')
    True

    NumPy supports PEP 3141 numbers:

    >>> from fractions import Fraction
    >>> np.isscalar(Fraction(5, 17))
    True
    >>> from numbers import Number
    >>> np.isscalar(Number())
    True

    " :arglists '[[element]]} isscalar (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "isscalar"))))

(def ^{:doc "
    Return a scalar type which is common to the input arrays.

    The return type will always be an inexact (i.e. floating point) scalar
    type, even if all the arrays are integer arrays. If one of the inputs is
    an integer array, the minimum precision type that is returned is a
    64-bit floating point dtype.

    All input arrays except int64 and uint64 can be safely cast to the
    returned dtype without loss of information.

    Parameters
    ----------
    array1, array2, ... : ndarrays
        Input arrays.

    Returns
    -------
    out : data type code
        Data type code.

    See Also
    --------
    dtype, mintypecode

    Examples
    --------
    >>> np.common_type(np.arange(2, dtype=np.float32))
    <class 'numpy.float32'>
    >>> np.common_type(np.arange(2, dtype=np.float32), np.arange(2))
    <class 'numpy.float64'>
    >>> np.common_type(np.arange(4), np.array([45, 6.j]), np.array([45.0]))
    <class 'numpy.complex128'>

    " :arglists '[[& [args {:as kwargs}]]]} common_type (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "common_type"))))

(def ^{:doc "Complex number type composed of two extended-precision floating-point
    numbers.

    :Character code: ``'G'``
    :Canonical name: `numpy.clongdouble`
    :Alias: `numpy.clongfloat`
    :Alias: `numpy.longcomplex`
    :Alias on this platform: `numpy.complex256`: Complex number type composed of 2 128-bit extended-precision floating-point numbers." :arglists '[[self & [args {:as kwargs}]]]} clongdouble (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "clongdouble"))))

(def ^{:doc "
    Extract a diagonal or construct a diagonal array.

    See the more detailed documentation for ``numpy.diagonal`` if you use this
    function to extract a diagonal and wish to write to the resulting array;
    whether it returns a copy or a view depends on what version of numpy you
    are using.

    Parameters
    ----------
    v : array_like
        If `v` is a 2-D array, return a copy of its `k`-th diagonal.
        If `v` is a 1-D array, return a 2-D array with `v` on the `k`-th
        diagonal.
    k : int, optional
        Diagonal in question. The default is 0. Use `k>0` for diagonals
        above the main diagonal, and `k<0` for diagonals below the main
        diagonal.

    Returns
    -------
    out : ndarray
        The extracted diagonal or constructed diagonal array.

    See Also
    --------
    diagonal : Return specified diagonals.
    diagflat : Create a 2-D array with the flattened input as a diagonal.
    trace : Sum along diagonals.
    triu : Upper triangle of an array.
    tril : Lower triangle of an array.

    Examples
    --------
    >>> x = np.arange(9).reshape((3,3))
    >>> x
    array([[0, 1, 2],
           [3, 4, 5],
           [6, 7, 8]])

    >>> np.diag(x)
    array([0, 4, 8])
    >>> np.diag(x, k=1)
    array([1, 5])
    >>> np.diag(x, k=-1)
    array([3, 7])

    >>> np.diag(np.diag(x))
    array([[0, 0, 0],
           [0, 4, 0],
           [0, 0, 8]])

    " :arglists '[[& [args {:as kwargs}]]]} diag (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "diag"))))

(def ^{:doc "
    Return the maximum of an array or maximum along an axis, ignoring any
    NaNs.  When all-NaN slices are encountered a ``RuntimeWarning`` is
    raised and NaN is returned for that slice.

    Parameters
    ----------
    a : array_like
        Array containing numbers whose maximum is desired. If `a` is not an
        array, a conversion is attempted.
    axis : {int, tuple of int, None}, optional
        Axis or axes along which the maximum is computed. The default is to compute
        the maximum of the flattened array.
    out : ndarray, optional
        Alternate output array in which to place the result.  The default
        is ``None``; if provided, it must have the same shape as the
        expected output, but the type will be cast if necessary. See
        :ref:`ufuncs-output-type` for more details.

        .. versionadded:: 1.8.0
    keepdims : bool, optional
        If this is set to True, the axes which are reduced are left
        in the result as dimensions with size one. With this option,
        the result will broadcast correctly against the original `a`.

        If the value is anything but the default, then
        `keepdims` will be passed through to the `max` method
        of sub-classes of `ndarray`.  If the sub-class' method
        does not implement `keepdims`, any exceptions will be raised.

        .. versionadded:: 1.8.0

    Returns
    -------
    nanmax : ndarray
        An array with the same shape as `a`, with the specified axis removed.
        If `a` is a 0-d array, or if axis is None, an ndarray scalar is
        returned.  The same dtype as `a` is returned.

    See Also
    --------
    nanmin :
        The minimum value of an array along a given axis, ignoring any NaNs.
    amax :
        The maximum value of an array along a given axis, propagating any NaNs.
    fmax :
        Element-wise maximum of two arrays, ignoring any NaNs.
    maximum :
        Element-wise maximum of two arrays, propagating any NaNs.
    isnan :
        Shows which elements are Not a Number (NaN).
    isfinite :
        Shows which elements are neither NaN nor infinity.

    amin, fmin, minimum

    Notes
    -----
    NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic
    (IEEE 754). This means that Not a Number is not equivalent to infinity.
    Positive infinity is treated as a very large number and negative
    infinity is treated as a very small (i.e. negative) number.

    If the input has an integer type, the function is equivalent to np.max.

    Examples
    --------
    >>> a = np.array([[1, 2], [3, np.nan]])
    >>> np.nanmax(a)
    3.0
    >>> np.nanmax(a, axis=0)
    array([3.,  2.])
    >>> np.nanmax(a, axis=1)
    array([2.,  3.])

    When positive infinity and negative infinity are present:

    >>> np.nanmax([1, 2, np.nan, np.NINF])
    2.0
    >>> np.nanmax([1, 2, np.nan, np.inf])
    inf

    " :arglists '[[& [args {:as kwargs}]]]} nanmax (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "nanmax"))))

(def ^{:doc "Base class for numpy scalar types.

    Class from which most (all?) numpy scalar types are derived.  For
    consistency, exposes the same API as `ndarray`, despite many
    consequent attributes being either \"get-only,\" or completely irrelevant.
    This is the class from which it is strongly suggested users should derive
    custom scalar types." :arglists '[[self & [args {:as kwargs}]]]} generic (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "generic"))))

(def ^{:doc "
    Return the indices of the bins to which each value in input array belongs.

    =========  =============  ============================
    `right`    order of bins  returned index `i` satisfies
    =========  =============  ============================
    ``False``  increasing     ``bins[i-1] <= x < bins[i]``
    ``True``   increasing     ``bins[i-1] < x <= bins[i]``
    ``False``  decreasing     ``bins[i-1] > x >= bins[i]``
    ``True``   decreasing     ``bins[i-1] >= x > bins[i]``
    =========  =============  ============================

    If values in `x` are beyond the bounds of `bins`, 0 or ``len(bins)`` is
    returned as appropriate.

    Parameters
    ----------
    x : array_like
        Input array to be binned. Prior to NumPy 1.10.0, this array had to
        be 1-dimensional, but can now have any shape.
    bins : array_like
        Array of bins. It has to be 1-dimensional and monotonic.
    right : bool, optional
        Indicating whether the intervals include the right or the left bin
        edge. Default behavior is (right==False) indicating that the interval
        does not include the right edge. The left bin end is open in this
        case, i.e., bins[i-1] <= x < bins[i] is the default behavior for
        monotonically increasing bins.

    Returns
    -------
    indices : ndarray of ints
        Output array of indices, of same shape as `x`.

    Raises
    ------
    ValueError
        If `bins` is not monotonic.
    TypeError
        If the type of the input is complex.

    See Also
    --------
    bincount, histogram, unique, searchsorted

    Notes
    -----
    If values in `x` are such that they fall outside the bin range,
    attempting to index `bins` with the indices that `digitize` returns
    will result in an IndexError.

    .. versionadded:: 1.10.0

    `np.digitize` is implemented in terms of `np.searchsorted`. This means
    that a binary search is used to bin the values, which scales much better
    for a larger number of bins than the previous linear search. It also
    removes the requirement for the input array to be 1-dimensional.

    For monotonically _increasing_ `bins`, the following are equivalent::

        np.digitize(x, bins, right=True)
        np.searchsorted(bins, x, side='left')

    Note that as the order of the arguments is reversed, the side must be too.
    The `searchsorted` call is marginally faster, as it does not do any
    monotonicity checks. Perhaps more importantly, it supports all dtypes.

    Examples
    --------
    >>> x = np.array([0.2, 6.4, 3.0, 1.6])
    >>> bins = np.array([0.0, 1.0, 2.5, 4.0, 10.0])
    >>> inds = np.digitize(x, bins)
    >>> inds
    array([1, 4, 3, 2])
    >>> for n in range(x.size):
    ...   print(bins[inds[n]-1], \"<=\", x[n], \"<\", bins[inds[n]])
    ...
    0.0 <= 0.2 < 1.0
    4.0 <= 6.4 < 10.0
    2.5 <= 3.0 < 4.0
    1.0 <= 1.6 < 2.5

    >>> x = np.array([1.2, 10.0, 12.4, 15.5, 20.])
    >>> bins = np.array([0, 5, 10, 15, 20])
    >>> np.digitize(x, bins, right=True)
    array([1, 2, 3, 4, 4])
    >>> np.digitize(x, bins, right=False)
    array([1, 3, 3, 4, 5])
    " :arglists '[[& [args {:as kwargs}]]]} digitize (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "digitize"))))
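The `searchsorted` equivalence noted in the docstring above can be sketched with Python's stdlib `bisect` module. `digitize_1d` is a hypothetical stdlib-only helper (not part of NumPy or this library) that handles the 1-D, monotonically increasing case:

```python
from bisect import bisect_left, bisect_right

def digitize_1d(x, bins, right=False):
    # For monotonically increasing bins, np.digitize is a binary search:
    #   right=False -> bisect_right, i.e. bins[i-1] <= x < bins[i]
    #   right=True  -> bisect_left,  i.e. bins[i-1] <  x <= bins[i]
    find = bisect_left if right else bisect_right
    return [find(bins, v) for v in x]

print(digitize_1d([0.2, 6.4, 3.0, 1.6], [0.0, 1.0, 2.5, 4.0, 10.0]))
# [1, 4, 3, 2]
```

The outputs match the docstring examples; values beyond the last edge map to `len(bins)`, just as the real function documents.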

(def ^{:doc "
    Split an array into multiple sub-arrays.

    Please refer to the ``split`` documentation.  The only difference
    between these functions is that ``array_split`` allows
    `indices_or_sections` to be an integer that does *not* equally
    divide the axis. For an array of length l that should be split
    into n sections, it returns l % n sub-arrays of size l//n + 1
    and the rest of size l//n.

    See Also
    --------
    split : Split array into multiple sub-arrays of equal size.

    Examples
    --------
    >>> x = np.arange(8.0)
    >>> np.array_split(x, 3)
    [array([0.,  1.,  2.]), array([3.,  4.,  5.]), array([6.,  7.])]

    >>> x = np.arange(9)
    >>> np.array_split(x, 4)
    [array([0, 1, 2]), array([3, 4]), array([5, 6]), array([7, 8])]

    " :arglists '[[& [args {:as kwargs}]]]} array_split (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "array_split"))))
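The size arithmetic described above (`l % n` sub-arrays of size `l // n + 1`, the rest of size `l // n`) can be sketched in plain Python; `split_sizes` and the list-based `array_split` below are illustrative helpers, not the NumPy implementation:

```python
def split_sizes(l, n):
    # array_split of an axis of length l into n sections yields
    # l % n sub-arrays of size l // n + 1 and the rest of size l // n.
    q, r = divmod(l, n)
    return [q + 1] * r + [q] * (n - r)

def array_split(seq, n):
    out, start = [], 0
    for size in split_sizes(len(seq), n):
        out.append(seq[start:start + size])
        start += size
    return out

print(array_split(list(range(8)), 3))
# [[0, 1, 2], [3, 4, 5], [6, 7]]
```

This reproduces the docstring examples, including the `l = 9, n = 4` case where the sections are unequal.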

(def ^{:doc "
========================
Random Number Generation
========================

Use ``default_rng()`` to create a `Generator` and call its methods.

=============== =========================================================
Generator
--------------- ---------------------------------------------------------
Generator       Class implementing all of the random number distributions
default_rng     Default constructor for ``Generator``
=============== =========================================================

============================================= ===
BitGenerator Streams that work with Generator
--------------------------------------------- ---
MT19937
PCG64
Philox
SFC64
============================================= ===

============================================= ===
Getting entropy to initialize a BitGenerator
--------------------------------------------- ---
SeedSequence
============================================= ===


Legacy
------

For backwards compatibility with previous versions of numpy before 1.17, the
various aliases to the global `RandomState` methods are left alone and do not
use the new `Generator` API.

==================== =========================================================
Utility functions
-------------------- ---------------------------------------------------------
random               Uniformly distributed floats over ``[0, 1)``
bytes                Uniformly distributed random bytes.
permutation          Randomly permute a sequence / generate a random sequence.
shuffle              Randomly permute a sequence in place.
choice               Random sample from 1-D array.
==================== =========================================================

==================== =========================================================
Compatibility
functions - removed
in the new API
-------------------- ---------------------------------------------------------
rand                 Uniformly distributed values.
randn                Normally distributed values.
ranf                 Uniformly distributed floating point numbers.
random_integers      Uniformly distributed integers in a given range.
                     (deprecated, use ``integers(..., closed=True)`` instead)
random_sample        Alias for ``random``.
randint              Uniformly distributed integers in a given range.
seed                 Seed the legacy random number generator.
==================== =========================================================

==================== =========================================================
Univariate
distributions
-------------------- ---------------------------------------------------------
beta                 Beta distribution over ``[0, 1]``.
binomial             Binomial distribution.
chisquare            :math:`\\chi^2` distribution.
exponential          Exponential distribution.
f                    F (Fisher-Snedecor) distribution.
gamma                Gamma distribution.
geometric            Geometric distribution.
gumbel               Gumbel distribution.
hypergeometric       Hypergeometric distribution.
laplace              Laplace distribution.
logistic             Logistic distribution.
lognormal            Log-normal distribution.
logseries            Logarithmic series distribution.
negative_binomial    Negative binomial distribution.
noncentral_chisquare Non-central chi-square distribution.
noncentral_f         Non-central F distribution.
normal               Normal / Gaussian distribution.
pareto               Pareto distribution.
poisson              Poisson distribution.
power                Power distribution.
rayleigh             Rayleigh distribution.
triangular           Triangular distribution.
uniform              Uniform distribution.
vonmises             Von Mises circular distribution.
wald                 Wald (inverse Gaussian) distribution.
weibull              Weibull distribution.
zipf                 Zipf's distribution over ranked data.
==================== =========================================================

==================== ==========================================================
Multivariate
distributions
-------------------- ----------------------------------------------------------
dirichlet            Multivariate generalization of Beta distribution.
multinomial          Multivariate generalization of the binomial distribution.
multivariate_normal  Multivariate generalization of the normal distribution.
==================== ==========================================================

==================== =========================================================
Standard
distributions
-------------------- ---------------------------------------------------------
standard_cauchy      Standard Cauchy-Lorentz distribution.
standard_exponential Standard exponential distribution.
standard_gamma       Standard Gamma distribution.
standard_normal      Standard normal distribution.
standard_t           Standard Student's t-distribution.
==================== =========================================================

==================== =========================================================
Internal functions
-------------------- ---------------------------------------------------------
get_state            Get tuple representing internal state of generator.
set_state            Set state of generator.
==================== =========================================================


"} random (as-jvm/generic-pyobject (py-global-delay (py/get-attr @src-obj* "random"))))

(def ^{:doc "
    Return the directory that contains the NumPy \\*.h header files.

    Extension modules that need to compile against NumPy should use this
    function to locate the appropriate include directory.

    Notes
    -----
    When using ``distutils``, for example in ``setup.py``::

        import numpy as np
        ...
        Extension('extension_name', ...
                include_dirs=[np.get_include()])
        ...

    " :arglists '[[]]} get_include (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "get_include"))))

(def ^{:doc ""} MAY_SHARE_BOUNDS 0)

(def ^{:doc ""} UFUNC_BUFSIZE_DEFAULT 8192)

(def ^{:doc ""} __config__ (as-jvm/generic-pyobject (py-global-delay (py/get-attr @src-obj* "__config__"))))

(def ^{:doc "
    Return the length of the first dimension of the input array.

    .. deprecated:: 1.18
       `numpy.alen` is deprecated, use `len` instead.

    Parameters
    ----------
    a : array_like
       Input array.

    Returns
    -------
    alen : int
       Length of the first dimension of `a`.

    See Also
    --------
    shape, size

    Examples
    --------
    >>> a = np.zeros((7,4,5))
    >>> a.shape[0]
    7
    >>> np.alen(a)
    7

    " :arglists '[[& [args {:as kwargs}]]]} alen (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "alen"))))

(def ^{:doc "
    Save an array to a binary file in NumPy ``.npy`` format.

    Parameters
    ----------
    file : file, str, or pathlib.Path
        File or filename to which the data is saved.  If file is a file-object,
        then the filename is unchanged.  If file is a string or Path, a ``.npy``
        extension will be appended to the filename if it does not already
        have one.
    arr : array_like
        Array data to be saved.
    allow_pickle : bool, optional
        Allow saving object arrays using Python pickles. Reasons for disallowing
        pickles include security (loading pickled data can execute arbitrary
        code) and portability (pickled objects may not be loadable on different
        Python installations, for example if the stored objects require libraries
        that are not available, and not all pickled data is compatible between
        Python 2 and Python 3).
        Default: True
    fix_imports : bool, optional
        Only useful in forcing objects in object arrays on Python 3 to be
        pickled in a Python 2 compatible way. If `fix_imports` is True, pickle
        will try to map the new Python 3 names to the old module names used in
        Python 2, so that the pickle data stream is readable with Python 2.

    See Also
    --------
    savez : Save several arrays into a ``.npz`` archive
    savetxt, load

    Notes
    -----
    For a description of the ``.npy`` format, see :py:mod:`numpy.lib.format`.

    Any data saved to the file is appended to the end of the file.

    Examples
    --------
    >>> from tempfile import TemporaryFile
    >>> outfile = TemporaryFile()

    >>> x = np.arange(10)
    >>> np.save(outfile, x)

    >>> _ = outfile.seek(0) # Only needed here to simulate closing & reopening file
    >>> np.load(outfile)
    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])


    >>> with open('test.npy', 'wb') as f:
    ...     np.save(f, np.array([1, 2]))
    ...     np.save(f, np.array([1, 3]))
    >>> with open('test.npy', 'rb') as f:
    ...     a = np.load(f)
    ...     b = np.load(f)
    >>> print(a, b)
    [1 2] [1 3]
    " :arglists '[[& [args {:as kwargs}]]]} save (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "save"))))

(def ^{:doc "cosh(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

Hyperbolic cosine, element-wise.

Equivalent to ``1/2 * (np.exp(x) + np.exp(-x))`` and ``np.cos(1j*x)``.

Parameters
----------
x : array_like
    Input array.
out : ndarray, None, or tuple of ndarray and None, optional
    A location into which the result is stored. If provided, it must have
    a shape that the inputs broadcast to. If not provided or None,
    a freshly-allocated array is returned. A tuple (possible only as a
    keyword argument) must have length equal to the number of outputs.
where : array_like, optional
    This condition is broadcast over the input. At locations where the
    condition is True, the `out` array will be set to the ufunc result.
    Elsewhere, the `out` array will retain its original value.
    Note that if an uninitialized `out` array is created via the default
    ``out=None``, locations within it where the condition is False will
    remain uninitialized.
**kwargs
    For other keyword-only arguments, see the
    :ref:`ufunc docs <ufuncs.kwargs>`.

Returns
-------
out : ndarray or scalar
    Output array of same shape as `x`.
    This is a scalar if `x` is a scalar.

Examples
--------
>>> np.cosh(0)
1.0

The hyperbolic cosine describes the shape of a hanging cable:

>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-4, 4, 1000)
>>> plt.plot(x, np.cosh(x))
>>> plt.show()" :arglists '[[self & [args {:as kwargs}]]]} cosh (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "cosh"))))

(def ^{:doc "
    Construct an open mesh from multiple sequences.

    This function takes N 1-D sequences and returns N outputs with N
    dimensions each, such that the shape is 1 in all but one dimension
    and the dimension with the non-unit shape value cycles through all
    N dimensions.

    Using `ix_` one can quickly construct index arrays that will index
    the cross product. ``a[np.ix_([1,3],[2,5])]`` returns the array
    ``[[a[1,2] a[1,5]], [a[3,2] a[3,5]]]``.

    Parameters
    ----------
    args : 1-D sequences
        Each sequence should be of integer or boolean type.
        Boolean sequences will be interpreted as boolean masks for the
        corresponding dimension (equivalent to passing in
        ``np.nonzero(boolean_sequence)``).

    Returns
    -------
    out : tuple of ndarrays
        N arrays with N dimensions each, with N the number of input
        sequences. Together these arrays form an open mesh.

    See Also
    --------
    ogrid, mgrid, meshgrid

    Examples
    --------
    >>> a = np.arange(10).reshape(2, 5)
    >>> a
    array([[0, 1, 2, 3, 4],
           [5, 6, 7, 8, 9]])
    >>> ixgrid = np.ix_([0, 1], [2, 4])
    >>> ixgrid
    (array([[0],
           [1]]), array([[2, 4]]))
    >>> ixgrid[0].shape, ixgrid[1].shape
    ((2, 1), (1, 2))
    >>> a[ixgrid]
    array([[2, 4],
           [7, 9]])

    >>> ixgrid = np.ix_([True, True], [2, 4])
    >>> a[ixgrid]
    array([[2, 4],
           [7, 9]])
    >>> ixgrid = np.ix_([True, True], [False, False, True, False, True])
    >>> a[ixgrid]
    array([[2, 4],
           [7, 9]])

    " :arglists '[[& [args {:as kwargs}]]]} ix_ (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "ix_"))))

(def ^{:doc ""} NINF ##-Inf)

(def ^{:doc "Functions that operate element by element on whole arrays.

    To see the documentation for a specific ufunc, use `info`.  For
    example, ``np.info(np.sin)``.  Because ufuncs are written in C
    (for speed) and linked into Python with NumPy's ufunc facility,
    Python's help() function finds this page whenever help() is called
    on a ufunc.

    A detailed explanation of ufuncs can be found in the docs for :ref:`ufuncs`.

    **Calling ufuncs:** ``op(*x[, out], where=True, **kwargs)``

    Apply `op` to the arguments `*x` elementwise, broadcasting the arguments.

    The broadcasting rules are:

    * Dimensions of length 1 may be prepended to either array.
    * Arrays may be repeated along dimensions of length 1.

    Parameters
    ----------
    *x : array_like
        Input arrays.
    out : ndarray, None, or tuple of ndarray and None, optional
        Alternate array object(s) in which to put the result; if provided, it
        must have a shape that the inputs broadcast to. A tuple of arrays
        (possible only as a keyword argument) must have length equal to the
        number of outputs; use None for uninitialized outputs to be
        allocated by the ufunc.
    where : array_like, optional
        This condition is broadcast over the input. At locations where the
        condition is True, the `out` array will be set to the ufunc result.
        Elsewhere, the `out` array will retain its original value.
        Note that if an uninitialized `out` array is created via the default
        ``out=None``, locations within it where the condition is False will
        remain uninitialized.
    **kwargs
        For other keyword-only arguments, see the :ref:`ufunc docs <ufuncs.kwargs>`.

    Returns
    -------
    r : ndarray or tuple of ndarray
        `r` will have the shape that the arrays in `x` broadcast to; if `out` is
        provided, it will be returned. If not, `r` will be allocated and
        may contain uninitialized values. If the function has more than one
        output, then the result will be a tuple of arrays." :arglists '[[self & [args {:as kwargs}]]]} ufunc (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "ufunc"))))

(def ^{:doc "" :arglists '[[self & [args {:as kwargs}]]]} TooHardError (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "TooHardError"))))

(def ^{:doc "
    Return the string representation of a scalar dtype.

    Parameters
    ----------
    sctype : scalar dtype or object
        If a scalar dtype, the corresponding string character is
        returned. If an object, `sctype2char` tries to infer its scalar type
        and then return the corresponding string character.

    Returns
    -------
    typechar : str
        The string character corresponding to the scalar type.

    Raises
    ------
    ValueError
        If `sctype` is an object for which the type can not be inferred.

    See Also
    --------
    obj2sctype, issctype, issubsctype, mintypecode

    Examples
    --------
    >>> for sctype in [np.int32, np.double, np.complex_, np.string_, np.ndarray]:
    ...     print(np.sctype2char(sctype))
    l # may vary
    d
    D
    S
    O

    >>> x = np.array([1., 2-1.j])
    >>> np.sctype2char(x)
    'D'
    >>> np.sctype2char(list)
    'O'

    " :arglists '[[sctype]]} sctype2char (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "sctype2char"))))

(def ^{:doc "
    Find the unique elements of an array.

    Returns the sorted unique elements of an array. There are three optional
    outputs in addition to the unique elements:

    * the indices of the input array that give the unique values
    * the indices of the unique array that reconstruct the input array
    * the number of times each unique value comes up in the input array

    Parameters
    ----------
    ar : array_like
        Input array. Unless `axis` is specified, this will be flattened if it
        is not already 1-D.
    return_index : bool, optional
        If True, also return the indices of `ar` (along the specified axis,
        if provided, or in the flattened array) that result in the unique array.
    return_inverse : bool, optional
        If True, also return the indices of the unique array (for the specified
        axis, if provided) that can be used to reconstruct `ar`.
    return_counts : bool, optional
        If True, also return the number of times each unique item appears
        in `ar`.

        .. versionadded:: 1.9.0

    axis : int or None, optional
        The axis to operate on. If None, `ar` will be flattened. If an integer,
        the subarrays indexed by the given axis will be flattened and treated
        as the elements of a 1-D array with the dimension of the given axis,
        see the notes for more details.  Object arrays or structured arrays
        that contain objects are not supported if the `axis` kwarg is used. The
        default is None.

        .. versionadded:: 1.13.0

    Returns
    -------
    unique : ndarray
        The sorted unique values.
    unique_indices : ndarray, optional
        The indices of the first occurrences of the unique values in the
        original array. Only provided if `return_index` is True.
    unique_inverse : ndarray, optional
        The indices to reconstruct the original array from the
        unique array. Only provided if `return_inverse` is True.
    unique_counts : ndarray, optional
        The number of times each of the unique values comes up in the
        original array. Only provided if `return_counts` is True.

        .. versionadded:: 1.9.0

    See Also
    --------
    numpy.lib.arraysetops : Module with a number of other functions for
                            performing set operations on arrays.
    repeat : Repeat elements of an array.

    Notes
    -----
    When an axis is specified the subarrays indexed by the axis are sorted.
    This is done by making the specified axis the first dimension of the array
    (move the axis to the first dimension to keep the order of the other axes)
    and then flattening the subarrays in C order. The flattened subarrays are
    then viewed as a structured type with each element given a label, with the
    effect that we end up with a 1-D array of structured types that can be
    treated in the same way as any other 1-D array. The result is that the
    flattened subarrays are sorted in lexicographic order starting with the
    first element.

    Examples
    --------
    >>> np.unique([1, 1, 2, 2, 3, 3])
    array([1, 2, 3])
    >>> a = np.array([[1, 1], [2, 3]])
    >>> np.unique(a)
    array([1, 2, 3])

    Return the unique rows of a 2D array

    >>> a = np.array([[1, 0, 0], [1, 0, 0], [2, 3, 4]])
    >>> np.unique(a, axis=0)
    array([[1, 0, 0],
           [2, 3, 4]])

    Return the indices of the original array that give the unique values:

    >>> a = np.array(['a', 'b', 'b', 'c', 'a'])
    >>> u, indices = np.unique(a, return_index=True)
    >>> u
    array(['a', 'b', 'c'], dtype='<U1')
    >>> indices
    array([0, 1, 3])
    >>> a[indices]
    array(['a', 'b', 'c'], dtype='<U1')

    Reconstruct the input array from the unique values and inverse:

    >>> a = np.array([1, 2, 6, 4, 2, 3, 2])
    >>> u, indices = np.unique(a, return_inverse=True)
    >>> u
    array([1, 2, 3, 4, 6])
    >>> indices
    array([0, 1, 4, 3, 1, 2, 1])
    >>> u[indices]
    array([1, 2, 6, 4, 2, 3, 2])

    Reconstruct the input values from the unique values and counts:

    >>> a = np.array([1, 2, 6, 4, 2, 3, 2])
    >>> values, counts = np.unique(a, return_counts=True)
    >>> values
    array([1, 2, 3, 4, 6])
    >>> counts
    array([1, 3, 1, 1, 1])
    >>> np.repeat(values, counts)
    array([1, 2, 2, 2, 3, 4, 6])    # original order not preserved

    " :arglists '[[& [args {:as kwargs}]]]} unique (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "unique"))))
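The three optional outputs described above (first indices, inverse indices, counts) can be sketched for flat sequences with the stdlib; `unique` below is an illustrative helper, not NumPy's implementation, and ignores the `axis` machinery:

```python
from collections import Counter

def unique(ar, return_index=False, return_inverse=False, return_counts=False):
    values = sorted(set(ar))                     # the sorted unique values
    pos = {v: i for i, v in enumerate(values)}   # value -> index in `values`
    result = [values]
    if return_index:
        first = {}
        for i, v in enumerate(ar):               # first occurrence of each value
            first.setdefault(v, i)
        result.append([first[v] for v in values])
    if return_inverse:
        result.append([pos[v] for v in ar])      # reconstructs ar as values[inv]
    if return_counts:
        c = Counter(ar)
        result.append([c[v] for v in values])
    return result[0] if len(result) == 1 else tuple(result)

u, inv = unique([1, 2, 6, 4, 2, 3, 2], return_inverse=True)
print(u, inv)  # [1, 2, 3, 4, 6] [0, 1, 4, 3, 1, 2, 1]
```

Indexing the unique values with the inverse indices reconstructs the input, mirroring the `u[indices]` example in the docstring.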

(def ^{:doc "
    Return the minimum of an array or minimum along an axis.

    Parameters
    ----------
    a : array_like
        Input data.
    axis : None or int or tuple of ints, optional
        Axis or axes along which to operate.  By default, flattened input is
        used.

        .. versionadded:: 1.7.0

        If this is a tuple of ints, the minimum is selected over multiple axes,
        instead of a single axis or all the axes as before.
    out : ndarray, optional
        Alternative output array in which to place the result.  Must
        be of the same shape and buffer length as the expected output.
        See :ref:`ufuncs-output-type` for more details.

    keepdims : bool, optional
        If this is set to True, the axes which are reduced are left
        in the result as dimensions with size one. With this option,
        the result will broadcast correctly against the input array.

        If the default value is passed, then `keepdims` will not be
        passed through to the `amin` method of sub-classes of
        `ndarray`, however any non-default value will be.  If the
        sub-class' method does not implement `keepdims` any
        exceptions will be raised.

    initial : scalar, optional
        The maximum value of an output element. Must be present to allow
        computation on empty slice. See `~numpy.ufunc.reduce` for details.

        .. versionadded:: 1.15.0

    where : array_like of bool, optional
        Elements to compare for the minimum. See `~numpy.ufunc.reduce`
        for details.

        .. versionadded:: 1.17.0

    Returns
    -------
    amin : ndarray or scalar
        Minimum of `a`. If `axis` is None, the result is a scalar value.
        If `axis` is given, the result is an array of dimension
        ``a.ndim - 1``.

    See Also
    --------
    amax :
        The maximum value of an array along a given axis, propagating any NaNs.
    nanmin :
        The minimum value of an array along a given axis, ignoring any NaNs.
    minimum :
        Element-wise minimum of two arrays, propagating any NaNs.
    fmin :
        Element-wise minimum of two arrays, ignoring any NaNs.
    argmin :
        Return the indices of the minimum values.

    nanmax, maximum, fmax

    Notes
    -----
    NaN values are propagated, that is if at least one item is NaN, the
    corresponding min value will be NaN as well. To ignore NaN values
    (MATLAB behavior), please use nanmin.

    Don't use `amin` for element-wise comparison of 2 arrays; when
    ``a.shape[0]`` is 2, ``minimum(a[0], a[1])`` is faster than
    ``amin(a, axis=0)``.

    Examples
    --------
    >>> a = np.arange(4).reshape((2,2))
    >>> a
    array([[0, 1],
           [2, 3]])
    >>> np.amin(a)           # Minimum of the flattened array
    0
    >>> np.amin(a, axis=0)   # Minima along the first axis
    array([0, 1])
    >>> np.amin(a, axis=1)   # Minima along the second axis
    array([0, 2])
    >>> np.amin(a, where=[False, True], initial=10, axis=0)
    array([10,  1])

    >>> b = np.arange(5, dtype=float)
    >>> b[2] = np.NaN
    >>> np.amin(b)
    nan
    >>> np.amin(b, where=~np.isnan(b), initial=10)
    0.0
    >>> np.nanmin(b)
    0.0

    >>> np.min([[-50], [10]], axis=-1, initial=0)
    array([-50,   0])

    Notice that the initial value is used as one of the elements for which the
    minimum is determined, unlike the default argument of Python's ``min``
    function, which is only used for empty iterables.

    Notice that this isn't the same as Python's ``default`` argument.

    >>> np.min([6], initial=5)
    5
    >>> min([6], default=5)
    6
    " :arglists '[[& [args {:as kwargs}]]]} min (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "min"))))
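The difference between `initial` and Python's `default`, noted at the end of the docstring, can be sketched in plain Python; `amin` here is a hypothetical flat-sequence helper, not NumPy's reduction:

```python
def amin(values, initial=None):
    # `initial` participates as an element of the reduction, unlike
    # Python's min(default=...), which applies only to empty iterables.
    vals = list(values)
    if initial is not None:
        vals.append(initial)
    return min(vals)

print(amin([6], initial=5))  # 5  (initial competes with the elements)
print(min([6], default=5))   # 6  (default ignored: iterable is non-empty)
```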

(def ^{:doc "
    Compute the bi-dimensional histogram of two data samples.

    Parameters
    ----------
    x : array_like, shape (N,)
        An array containing the x coordinates of the points to be
        histogrammed.
    y : array_like, shape (N,)
        An array containing the y coordinates of the points to be
        histogrammed.
    bins : int or array_like or [int, int] or [array, array], optional
        The bin specification:

          * If int, the number of bins for the two dimensions (nx=ny=bins).
          * If array_like, the bin edges for the two dimensions
            (x_edges=y_edges=bins).
          * If [int, int], the number of bins in each dimension
            (nx, ny = bins).
          * If [array, array], the bin edges in each dimension
            (x_edges, y_edges = bins).
          * A combination [int, array] or [array, int], where int
            is the number of bins and array is the bin edges.

    range : array_like, shape(2,2), optional
        The leftmost and rightmost edges of the bins along each dimension
        (if not specified explicitly in the `bins` parameters):
        ``[[xmin, xmax], [ymin, ymax]]``. All values outside of this range
        will be considered outliers and not tallied in the histogram.
    density : bool, optional
        If False, the default, returns the number of samples in each bin.
        If True, returns the probability *density* function at the bin,
        ``bin_count / sample_count / bin_area``.
    normed : bool, optional
        An alias for the density argument that behaves identically. To avoid
        confusion with the broken normed argument to `histogram`, `density`
        should be preferred.
    weights : array_like, shape(N,), optional
        An array of values ``w_i`` weighing each sample ``(x_i, y_i)``.
        Weights are normalized to 1 if `normed` is True. If `normed` is
        False, the values of the returned histogram are equal to the sum of
        the weights belonging to the samples falling into each bin.

    Returns
    -------
    H : ndarray, shape(nx, ny)
        The bi-dimensional histogram of samples `x` and `y`. Values in `x`
        are histogrammed along the first dimension and values in `y` are
        histogrammed along the second dimension.
    xedges : ndarray, shape(nx+1,)
        The bin edges along the first dimension.
    yedges : ndarray, shape(ny+1,)
        The bin edges along the second dimension.

    See Also
    --------
    histogram : 1D histogram
    histogramdd : Multidimensional histogram

    Notes
    -----
    When `normed` is True, then the returned histogram is the sample
    density, defined such that the sum over bins of the product
    ``bin_value * bin_area`` is 1.

    Please note that the histogram does not follow the Cartesian convention
    where `x` values are on the abscissa and `y` values on the ordinate
    axis.  Rather, `x` is histogrammed along the first dimension of the
    array (vertical), and `y` along the second dimension of the array
    (horizontal).  This ensures compatibility with `histogramdd`.

    Examples
    --------
    >>> from matplotlib.image import NonUniformImage
    >>> import matplotlib.pyplot as plt

    Construct a 2-D histogram with variable bin width. First define the bin
    edges:

    >>> xedges = [0, 1, 3, 5]
    >>> yedges = [0, 2, 3, 4, 6]

    Next we create a histogram H with random bin content:

    >>> x = np.random.normal(2, 1, 100)
    >>> y = np.random.normal(1, 1, 100)
    >>> H, xedges, yedges = np.histogram2d(x, y, bins=(xedges, yedges))
    >>> H = H.T  # Let each row list bins with common y range.

    :func:`imshow <matplotlib.pyplot.imshow>` can only display square bins:

    >>> fig = plt.figure(figsize=(7, 3))
    >>> ax = fig.add_subplot(131, title='imshow: square bins')
    >>> plt.imshow(H, interpolation='nearest', origin='lower',
    ...         extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
    <matplotlib.image.AxesImage object at 0x...>

    :func:`pcolormesh <matplotlib.pyplot.pcolormesh>` can display actual edges:

    >>> ax = fig.add_subplot(132, title='pcolormesh: actual edges',
    ...         aspect='equal')
    >>> X, Y = np.meshgrid(xedges, yedges)
    >>> ax.pcolormesh(X, Y, H)
    <matplotlib.collections.QuadMesh object at 0x...>

    :class:`NonUniformImage <matplotlib.image.NonUniformImage>` can be used to
    display actual bin edges with interpolation:

    >>> ax = fig.add_subplot(133, title='NonUniformImage: interpolated',
    ...         aspect='equal', xlim=xedges[[0, -1]], ylim=yedges[[0, -1]])
    >>> im = NonUniformImage(ax, interpolation='bilinear')
    >>> xcenters = (xedges[:-1] + xedges[1:]) / 2
    >>> ycenters = (yedges[:-1] + yedges[1:]) / 2
    >>> im.set_data(xcenters, ycenters, H)
    >>> ax.images.append(im)
    >>> plt.show()

    " :arglists '[[& [args {:as kwargs}]]]} histogram2d (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "histogram2d"))))
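The binning convention described above (`x` along the first dimension, `y` along the second, out-of-range samples dropped) can be sketched with a stdlib-only helper for the explicit-edges case; `histogram2d` below is illustrative, not the NumPy implementation:

```python
from bisect import bisect_right

def histogram2d(x, y, xedges, yedges):
    # Count samples per 2-D bin given explicit, increasing edges.
    # x is binned along the first (vertical) dimension and y along the
    # second, matching the docstring's non-Cartesian convention.
    nx, ny = len(xedges) - 1, len(yedges) - 1
    H = [[0] * ny for _ in range(nx)]
    for xi, yi in zip(x, y):
        i = bisect_right(xedges, xi) - 1
        j = bisect_right(yedges, yi) - 1
        if xi == xedges[-1]:          # rightmost edge is inclusive in NumPy
            i = nx - 1
        if yi == yedges[-1]:
            j = ny - 1
        if 0 <= i < nx and 0 <= j < ny:
            H[i][j] += 1              # samples outside the edges are dropped
    return H

print(histogram2d([0.5, 2.0, 2.0], [1.0, 2.5, 5.0],
                  [0, 1, 3, 5], [0, 2, 3, 4, 6]))
# [[1, 0, 0, 0], [0, 1, 0, 1], [0, 0, 0, 0]]
```

With `density`/`normed` left aside, each count is simply the number of `(x_i, y_i)` pairs falling in the corresponding rectangle.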

(def ^{:doc "
    Return a string representation of the data in an array.

    The data in the array is returned as a single string.  This function is
    similar to `array_repr`, the difference being that `array_repr` also
    returns information on the kind of array and its data type.

    Parameters
    ----------
    a : ndarray
        Input array.
    max_line_width : int, optional
        Inserts newlines if text is longer than `max_line_width`.
        Defaults to ``numpy.get_printoptions()['linewidth']``.
    precision : int, optional
        Floating point precision.
        Defaults to ``numpy.get_printoptions()['precision']``.
    suppress_small : bool, optional
        Represent numbers \"very close\" to zero as zero; default is False.
        Very close is defined by precision: if the precision is 8, e.g.,
        numbers smaller (in absolute value) than 5e-9 are represented as
        zero.
        Defaults to ``numpy.get_printoptions()['suppress']``.

    See Also
    --------
    array2string, array_repr, set_printoptions

    Examples
    --------
    >>> np.array_str(np.arange(3))
    '[0 1 2]'

    " :arglists '[[& [args {:as kwargs}]]]} array_str (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "array_str"))))
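The difference from `array_repr` noted in the docstring can be observed directly in Python (assuming numpy is available):

```python
import numpy as np

a = np.arange(3)
s = np.array_str(a)    # data only
r = np.array_repr(a)   # includes the constructor-style wrapper
```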

(def ^{:doc ""} SHIFT_INVALID 9)

(def ^{:doc "zeros(shape, dtype=float, order='C', *, like=None)

    Return a new array of given shape and type, filled with zeros.

    Parameters
    ----------
    shape : int or tuple of ints
        Shape of the new array, e.g., ``(2, 3)`` or ``2``.
    dtype : data-type, optional
        The desired data-type for the array, e.g., `numpy.int8`.  Default is
        `numpy.float64`.
    order : {'C', 'F'}, optional, default: 'C'
        Whether to store multi-dimensional data in row-major
        (C-style) or column-major (Fortran-style) order in
        memory.
    like : array_like
        Reference object to allow the creation of arrays which are not
        NumPy arrays. If an array-like passed in as ``like`` supports
        the ``__array_function__`` protocol, the result will be defined
        by it. In this case, it ensures the creation of an array object
        compatible with that passed in via this argument.

        .. note::
            The ``like`` keyword is an experimental feature pending on
            acceptance of :ref:`NEP 35 <NEP35>`.

        .. versionadded:: 1.20.0

    Returns
    -------
    out : ndarray
        Array of zeros with the given shape, dtype, and order.

    See Also
    --------
    zeros_like : Return an array of zeros with shape and type of input.
    empty : Return a new uninitialized array.
    ones : Return a new array setting values to one.
    full : Return a new array of given shape filled with value.

    Examples
    --------
    >>> np.zeros(5)
    array([ 0.,  0.,  0.,  0.,  0.])

    >>> np.zeros((5,), dtype=int)
    array([0, 0, 0, 0, 0])

    >>> np.zeros((2, 1))
    array([[ 0.],
           [ 0.]])

    >>> s = (2,2)
    >>> np.zeros(s)
    array([[ 0.,  0.],
           [ 0.,  0.]])

    >>> np.zeros((2,), dtype=[('x', 'i4'), ('y', 'i4')]) # custom dtype
    array([(0, 0), (0, 0)],
          dtype=[('x', '<i4'), ('y', '<i4')])" :arglists '[[self & [args {:as kwargs}]]]} zeros (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "zeros"))))
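A minimal Python check of the shape/dtype contract documented above (assumes numpy is installed):

```python
import numpy as np

# Shape may be an int or a tuple; dtype defaults to float64.
z = np.zeros((2, 3), dtype=np.int8)
```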
(def ^{:doc "dict() -> new empty dictionary
dict(mapping) -> new dictionary initialized from a mapping object's
    (key, value) pairs
dict(iterable) -> new dictionary initialized as if via:
    d = {}
    for k, v in iterable:
        d[k] = v
dict(**kwargs) -> new dictionary initialized with the name=value pairs
    in the keyword argument list.  For example:  dict(one=1, two=2)"} sctypes (as-jvm/generic-python-as-map (py-global-delay (py/get-attr @src-obj* "sctypes")))) 

(def ^{:doc "invert(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

Compute bit-wise inversion, or bit-wise NOT, element-wise.

Computes the bit-wise NOT of the underlying binary representation of
the integers in the input arrays. This ufunc implements the C/Python
operator ``~``.

For signed integer inputs, the two's complement is returned.  In a
two's-complement system negative numbers are represented by the two's
complement of the absolute value. This is the most common method of
representing signed integers on computers [1]_. An N-bit
two's-complement system can represent every integer in the range
:math:`-2^{N-1}` to :math:`+2^{N-1}-1`.

Parameters
----------
x : array_like
    Only integer and boolean types are handled.
out : ndarray, None, or tuple of ndarray and None, optional
    A location into which the result is stored. If provided, it must have
    a shape that the inputs broadcast to. If not provided or None,
    a freshly-allocated array is returned. A tuple (possible only as a
    keyword argument) must have length equal to the number of outputs.
where : array_like, optional
    This condition is broadcast over the input. At locations where the
    condition is True, the `out` array will be set to the ufunc result.
    Elsewhere, the `out` array will retain its original value.
    Note that if an uninitialized `out` array is created via the default
    ``out=None``, locations within it where the condition is False will
    remain uninitialized.
**kwargs
    For other keyword-only arguments, see the
    :ref:`ufunc docs <ufuncs.kwargs>`.

Returns
-------
out : ndarray or scalar
    Result.
    This is a scalar if `x` is a scalar.

See Also
--------
bitwise_and, bitwise_or, bitwise_xor
logical_not
binary_repr :
    Return the binary representation of the input number as a string.

Notes
-----
`bitwise_not` is an alias for `invert`:

>>> np.bitwise_not is np.invert
True

References
----------
.. [1] Wikipedia, \"Two's complement\",
    https://en.wikipedia.org/wiki/Two's_complement

Examples
--------
We've seen that 13 is represented by ``00001101``.
The invert or bit-wise NOT of 13 is then:

>>> x = np.invert(np.array(13, dtype=np.uint8))
>>> x
242
>>> np.binary_repr(x, width=8)
'11110010'

The result depends on the bit-width:

>>> x = np.invert(np.array(13, dtype=np.uint16))
>>> x
65522
>>> np.binary_repr(x, width=16)
'1111111111110010'

When using signed integer types the result is the two's complement of
the result for the unsigned type:

>>> np.invert(np.array([13], dtype=np.int8))
array([-14], dtype=int8)
>>> np.binary_repr(-14, width=8)
'11110010'

Booleans are accepted as well:

>>> np.invert(np.array([True, False]))
array([False,  True])

The ``~`` operator can be used as a shorthand for ``np.invert`` on
ndarrays.

>>> x1 = np.array([True, False])
>>> ~x1
array([False,  True])" :arglists '[[self & [args {:as kwargs}]]]} invert (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "invert"))))
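The bit-width dependence described above can be verified with a short Python snippet (assuming numpy is installed):

```python
import numpy as np

# Bit-wise NOT of 13; the result depends on the dtype's bit width.
x8 = np.invert(np.array(13, dtype=np.uint8))
x16 = np.invert(np.array(13, dtype=np.uint16))
signed = np.invert(np.array([13], dtype=np.int8))  # two's complement
```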

(def ^{:doc "floor_divide(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

Return the largest integer smaller than or equal to the division of the inputs.
It is equivalent to the Python ``//`` operator and pairs with the
Python ``%`` (`remainder`) function so that ``a = a % b + b * (a // b)``
up to roundoff.

Parameters
----------
x1 : array_like
    Numerator.
x2 : array_like
    Denominator.
    If ``x1.shape != x2.shape``, they must be broadcastable to a common
    shape (which becomes the shape of the output).
out : ndarray, None, or tuple of ndarray and None, optional
    A location into which the result is stored. If provided, it must have
    a shape that the inputs broadcast to. If not provided or None,
    a freshly-allocated array is returned. A tuple (possible only as a
    keyword argument) must have length equal to the number of outputs.
where : array_like, optional
    This condition is broadcast over the input. At locations where the
    condition is True, the `out` array will be set to the ufunc result.
    Elsewhere, the `out` array will retain its original value.
    Note that if an uninitialized `out` array is created via the default
    ``out=None``, locations within it where the condition is False will
    remain uninitialized.
**kwargs
    For other keyword-only arguments, see the
    :ref:`ufunc docs <ufuncs.kwargs>`.

Returns
-------
y : ndarray
    y = floor(`x1`/`x2`)
    This is a scalar if both `x1` and `x2` are scalars.

See Also
--------
remainder : Remainder complementary to floor_divide.
divmod : Simultaneous floor division and remainder.
divide : Standard division.
floor : Round a number to the nearest integer toward minus infinity.
ceil : Round a number to the nearest integer toward infinity.

Examples
--------
>>> np.floor_divide(7,3)
2
>>> np.floor_divide([1., 2., 3., 4.], 2.5)
array([ 0.,  0.,  1.,  1.])

The ``//`` operator can be used as a shorthand for ``np.floor_divide``
on ndarrays.

>>> x1 = np.array([1., 2., 3., 4.])
>>> x1 // 2.5
array([0., 0., 1., 1.])" :arglists '[[self & [args {:as kwargs}]]]} floor_divide (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "floor_divide"))))
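The pairing identity ``a = a % b + b * (a // b)`` stated above holds element-wise, including for negative and fractional inputs; a small Python check (assuming numpy):

```python
import numpy as np

a = np.array([7.0, -7.0, 7.5])
b = 2.5
# Recombine quotient and remainder; should reproduce a up to roundoff.
recombined = np.remainder(a, b) + b * np.floor_divide(a, b)
```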

(def ^{:doc "Unsigned integer type, compatible with C ``unsigned long``.

    :Character code: ``'L'``
    :Canonical name: `numpy.uint`
    :Alias on this platform: `numpy.uint64`: 64-bit unsigned integer (``0`` to ``18_446_744_073_709_551_615``).
    :Alias on this platform: `numpy.uintp`: Unsigned integer large enough to fit pointer, compatible with C ``uintptr_t``." :arglists '[[self & [args {:as kwargs}]]]} uint0 (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "uint0"))))

(def ^{:doc "
============================
``ctypes`` Utility Functions
============================

See Also
---------
load_library : Load a C library.
ndpointer : Array restype/argtype with verification.
as_ctypes : Create a ctypes array from an ndarray.
as_array : Create an ndarray from a ctypes array.

References
----------
.. [1] \"SciPy Cookbook: ctypes\", https://scipy-cookbook.readthedocs.io/items/Ctypes.html

Examples
--------
Load the C library:

>>> _lib = np.ctypeslib.load_library('libmystuff', '.')     #doctest: +SKIP

Our result type: an ndarray that must be of type double, 1-dimensional,
and C-contiguous in memory:

>>> array_1d_double = np.ctypeslib.ndpointer(
...                          dtype=np.double,
...                          ndim=1, flags='CONTIGUOUS')    #doctest: +SKIP

Our C-function typically takes an array and updates its values
in-place.  For example::

    void foo_func(double* x, int length)
    {
        int i;
        for (i = 0; i < length; i++) {
            x[i] = i*i;
        }
    }

We wrap it using:

>>> _lib.foo_func.restype = None                      #doctest: +SKIP
>>> _lib.foo_func.argtypes = [array_1d_double, c_int] #doctest: +SKIP

Then, we're ready to call ``foo_func``:

>>> out = np.empty(15, dtype=np.double)
>>> _lib.foo_func(out, len(out))                #doctest: +SKIP

"} ctypeslib (as-jvm/generic-pyobject (py-global-delay (py/get-attr @src-obj* "ctypeslib"))))

(def ^{:doc "Boolean type (True or False), stored as a byte.

    .. warning::

       The :class:`bool_` type is not a subclass of the :class:`int_` type
       (the :class:`bool_` is not even a number type). This is different
       than Python's default implementation of :class:`bool` as a
       sub-class of :class:`int`.

    :Character code: ``'?'``
    :Alias: `numpy.bool8`" :arglists '[[self & [args {:as kwargs}]]]} bool_ (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "bool_"))))

(def ^{:doc "Either an opaque sequence of bytes, or a structure.
    
    >>> np.void(b'abcd')
    void(b'\\x61\\x62\\x63\\x64')
    
    Structured `void` scalars can only be constructed via extraction from :ref:`structured_arrays`:
    
    >>> arr = np.array((1, 2), dtype=[('x', np.int8), ('y', np.int8)])
    >>> arr[()]
    (1, 2)  # looks like a tuple, but is `np.void`

    :Character code: ``'V'``" :arglists '[[self & [args {:as kwargs}]]]} void0 (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "void0"))))

(def ^{:doc "A byte string.

    When used in arrays, this type strips trailing null bytes.

    :Character code: ``'S'``
    :Alias: `numpy.string_`" :arglists '[[self & [args {:as kwargs}]]]} bytes_ (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "bytes_"))))

(def ^{:doc "
    Set how floating-point errors are handled.

    Note that operations on integer scalar types (such as `int16`) are
    handled like floating point, and are affected by these settings.

    Parameters
    ----------
    all : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional
        Set treatment for all types of floating-point errors at once:

        - ignore: Take no action when the exception occurs.
        - warn: Print a `RuntimeWarning` (via the Python `warnings` module).
        - raise: Raise a `FloatingPointError`.
        - call: Call a function specified using the `seterrcall` function.
        - print: Print a warning directly to ``stdout``.
        - log: Record error in a Log object specified by `seterrcall`.

        The default is not to change the current behavior.
    divide : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional
        Treatment for division by zero.
    over : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional
        Treatment for floating-point overflow.
    under : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional
        Treatment for floating-point underflow.
    invalid : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}, optional
        Treatment for invalid floating-point operation.

    Returns
    -------
    old_settings : dict
        Dictionary containing the old settings.

    See also
    --------
    seterrcall : Set a callback function for the 'call' mode.
    geterr, geterrcall, errstate

    Notes
    -----
    The floating-point exceptions are defined in the IEEE 754 standard [1]_:

    - Division by zero: infinite result obtained from finite numbers.
    - Overflow: result too large to be expressed.
    - Underflow: result so close to zero that some precision
      was lost.
    - Invalid operation: result is not an expressible number, typically
      indicates that a NaN was produced.

    .. [1] https://en.wikipedia.org/wiki/IEEE_754

    Examples
    --------
    >>> old_settings = np.seterr(all='ignore')  #seterr to known value
    >>> np.seterr(over='raise')
    {'divide': 'ignore', 'over': 'ignore', 'under': 'ignore', 'invalid': 'ignore'}
    >>> np.seterr(**old_settings)  # reset to default
    {'divide': 'ignore', 'over': 'raise', 'under': 'ignore', 'invalid': 'ignore'}

    >>> np.int16(32000) * np.int16(3)
    30464
    >>> old_settings = np.seterr(all='warn', over='raise')
    >>> np.int16(32000) * np.int16(3)
    Traceback (most recent call last):
      File \"<stdin>\", line 1, in <module>
    FloatingPointError: overflow encountered in short_scalars

    >>> from collections import OrderedDict
    >>> old_settings = np.seterr(all='print')
    >>> OrderedDict(np.geterr())
    OrderedDict([('divide', 'print'), ('over', 'print'), ('under', 'print'), ('invalid', 'print')])
    >>> np.int16(32000) * np.int16(3)
    30464

    " :arglists '[[& [{all :all, divide :divide, over :over, under :under, invalid :invalid}]] [& [{all :all, divide :divide, over :over, under :under}]] [& [{all :all, divide :divide, over :over}]] [& [{all :all, divide :divide}]] [& [{all :all}]] []]} seterr (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "seterr"))))
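Because `seterr` returns the previous settings, it composes naturally into a save/restore pattern; a brief Python sketch (assuming numpy):

```python
import numpy as np

# seterr returns the old settings, so they can be round-tripped.
old = np.seterr(all='ignore')
current = np.geterr()
np.seterr(**old)  # restore whatever was in effect before
```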

(def ^{:doc ""} ERR_CALL 3)

(def ^{:doc "isinf(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

Test element-wise for positive or negative infinity.

Returns a boolean array of the same shape as `x`, True where ``x ==
+/-inf``, otherwise False.

Parameters
----------
x : array_like
    Input values
out : ndarray, None, or tuple of ndarray and None, optional
    A location into which the result is stored. If provided, it must have
    a shape that the inputs broadcast to. If not provided or None,
    a freshly-allocated array is returned. A tuple (possible only as a
    keyword argument) must have length equal to the number of outputs.
where : array_like, optional
    This condition is broadcast over the input. At locations where the
    condition is True, the `out` array will be set to the ufunc result.
    Elsewhere, the `out` array will retain its original value.
    Note that if an uninitialized `out` array is created via the default
    ``out=None``, locations within it where the condition is False will
    remain uninitialized.
**kwargs
    For other keyword-only arguments, see the
    :ref:`ufunc docs <ufuncs.kwargs>`.

Returns
-------
y : bool (scalar) or boolean ndarray
    True where ``x`` is positive or negative infinity, false otherwise.
    This is a scalar if `x` is a scalar.

See Also
--------
isneginf, isposinf, isnan, isfinite

Notes
-----
NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic
(IEEE 754).

Errors result if the second argument is supplied when the first
argument is a scalar, or if the first and second arguments have
different shapes.

Examples
--------
>>> np.isinf(np.inf)
True
>>> np.isinf(np.nan)
False
>>> np.isinf(np.NINF)
True
>>> np.isinf([np.inf, -np.inf, 1.0, np.nan])
array([ True,  True, False, False])

>>> x = np.array([-np.inf, 0., np.inf])
>>> y = np.array([2, 2, 2])
>>> np.isinf(x, y)
array([1, 0, 1])
>>> y
array([1, 0, 1])" :arglists '[[self & [args {:as kwargs}]]]} isinf (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "isinf"))))
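A quick Python check of the element-wise behavior documented above (assuming numpy; this avoids the legacy `np.NINF` alias, which was removed in NumPy 2.0):

```python
import numpy as np

# True only at +/- infinity; NaN and finite values map to False.
flags = np.isinf([np.inf, -np.inf, 1.0, np.nan])
```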
(def ^{:doc "Built-in immutable sequence.

If no argument is given, the constructor returns an empty tuple.
If iterable is specified the tuple is initialized from iterable's items.

If the argument is a tuple, the return value is the same object."} kernel_version (as-jvm/generic-python-as-list (py-global-delay (py/get-attr @src-obj* "kernel_version")))) 

(def ^{:doc "spacing(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

Return the distance between x and the nearest adjacent number.

Parameters
----------
x : array_like
    Values to find the spacing of.
out : ndarray, None, or tuple of ndarray and None, optional
    A location into which the result is stored. If provided, it must have
    a shape that the inputs broadcast to. If not provided or None,
    a freshly-allocated array is returned. A tuple (possible only as a
    keyword argument) must have length equal to the number of outputs.
where : array_like, optional
    This condition is broadcast over the input. At locations where the
    condition is True, the `out` array will be set to the ufunc result.
    Elsewhere, the `out` array will retain its original value.
    Note that if an uninitialized `out` array is created via the default
    ``out=None``, locations within it where the condition is False will
    remain uninitialized.
**kwargs
    For other keyword-only arguments, see the
    :ref:`ufunc docs <ufuncs.kwargs>`.

Returns
-------
out : ndarray or scalar
    The spacing of values of `x`.
    This is a scalar if `x` is a scalar.

Notes
-----
It can be considered as a generalization of EPS:
``spacing(np.float64(1)) == np.finfo(np.float64).eps``, and there
should not be any representable number between ``x + spacing(x)`` and
x for any finite x.

Spacing of +- inf and NaN is NaN.

Examples
--------
>>> np.spacing(1) == np.finfo(np.float64).eps
True" :arglists '[[self & [args {:as kwargs}]]]} spacing (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "spacing"))))
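The relationship to machine epsilon noted above can be confirmed in Python (assuming numpy):

```python
import numpy as np

# spacing generalizes eps: the gap above 1.0 is exactly float64 eps,
# and the spacing of infinity is NaN.
gap = np.spacing(np.float64(1.0))
inf_gap = np.spacing(np.inf)
```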

(def ^{:doc "Double-precision floating-point number type, compatible with Python `float`
    and C ``double``.

    :Character code: ``'d'``
    :Canonical name: `numpy.double`
    :Alias: `numpy.float_`
    :Alias on this platform: `numpy.float64`: 64-bit precision floating-point number type: sign bit, 11 bits exponent, 52 bits mantissa." :arglists '[[self & [args {:as kwargs}]]]} float_ (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "float_"))))

(def ^{:doc "
    Issued by `polyfit` when the Vandermonde matrix is rank deficient.

    For more information, a way to suppress the warning, and an example of
    `RankWarning` being issued, see `polyfit`.

    " :arglists '[[self & [args {:as kwargs}]]]} RankWarning (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "RankWarning"))))

(def ^{:doc "
    Evaluate a polynomial at specific values.

    .. note::
       This forms part of the old polynomial API. Since version 1.4, the
       new polynomial API defined in `numpy.polynomial` is preferred.
       A summary of the differences can be found in the
       :doc:`transition guide </reference/routines.polynomials>`.

    If `p` is of length N, this function returns the value:

        ``p[0]*x**(N-1) + p[1]*x**(N-2) + ... + p[N-2]*x + p[N-1]``

    If `x` is a sequence, then `p(x)` is returned for each element of `x`.
    If `x` is another polynomial then the composite polynomial `p(x(t))`
    is returned.

    Parameters
    ----------
    p : array_like or poly1d object
       1D array of polynomial coefficients (including coefficients equal
       to zero) from highest degree to the constant term, or an
       instance of poly1d.
    x : array_like or poly1d object
       A number, an array of numbers, or an instance of poly1d, at
       which to evaluate `p`.

    Returns
    -------
    values : ndarray or poly1d
       If `x` is a poly1d instance, the result is the composition of the two
       polynomials, i.e., `x` is \"substituted\" in `p` and the simplified
       result is returned. In addition, the type of `x` - array_like or
       poly1d - governs the type of the output: if `x` is array_like then
       `values` is array_like; if `x` is a poly1d object then so is `values`.

    See Also
    --------
    poly1d: A polynomial class.

    Notes
    -----
    Horner's scheme [1]_ is used to evaluate the polynomial. Even so,
    for polynomials of high degree the values may be inaccurate due to
    rounding errors. Use carefully.

    If `x` is a subtype of `ndarray` the return value will be of the same type.

    References
    ----------
    .. [1] I. N. Bronshtein, K. A. Semendyayev, and K. A. Hirsch (Eng.
       trans. Ed.), *Handbook of Mathematics*, New York, Van Nostrand
       Reinhold Co., 1985, pg. 720.

    Examples
    --------
    >>> np.polyval([3,0,1], 5)  # 3 * 5**2 + 0 * 5**1 + 1
    76
    >>> np.polyval([3,0,1], np.poly1d(5))
    poly1d([76])
    >>> np.polyval(np.poly1d([3,0,1]), 5)
    76
    >>> np.polyval(np.poly1d([3,0,1]), np.poly1d(5))
    poly1d([76])

    " :arglists '[[& [args {:as kwargs}]]]} polyval (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "polyval"))))
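A short Python check of the highest-degree-first evaluation documented above (assuming numpy):

```python
import numpy as np

# Coefficients are ordered from highest degree to the constant term:
# [3, 0, 1] is 3*x**2 + 0*x + 1, so p(5) = 76.
v = np.polyval([3, 0, 1], 5)
vec = np.polyval([3, 0, 1], np.array([0, 5]))  # evaluated element-wise
```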

(def ^{:doc "
    Translates slice objects to concatenation along the first axis.

    This is a simple way to build up arrays quickly. There are two use cases.

    1. If the index expression contains comma separated arrays, then stack
       them along their first axis.
    2. If the index expression contains slice notation or scalars then create
       a 1-D array with a range indicated by the slice notation.

    If slice notation is used, the syntax ``start:stop:step`` is equivalent
    to ``np.arange(start, stop, step)`` inside of the brackets. However, if
    ``step`` is an imaginary number (e.g. ``100j``) then its integer portion is
    interpreted as a number-of-points desired and the start and stop are
    inclusive. In other words ``start:stop:stepj`` is interpreted as
    ``np.linspace(start, stop, step, endpoint=1)`` inside of the brackets.
    After expansion of slice notation, all comma separated sequences are
    concatenated together.

    Optional character strings placed as the first element of the index
    expression can be used to change the output. The strings 'r' or 'c' result
    in matrix output. If the result is 1-D and 'r' is specified a 1 x N (row)
    matrix is produced. If the result is 1-D and 'c' is specified, then an N x 1
    (column) matrix is produced. If the result is 2-D then both provide the
    same matrix result.

    A string integer specifies which axis to stack multiple comma separated
    arrays along. A string of two comma-separated integers allows indication
    of the minimum number of dimensions to force each entry into as the
    second integer (the axis to concatenate along is still the first integer).

    A string with three comma-separated integers allows specification of the
    axis to concatenate along, the minimum number of dimensions to force the
    entries to, and which axis should contain the start of the arrays which
    are less than the specified number of dimensions. In other words the third
    integer allows you to specify where the 1's should be placed in the shape
    of the arrays that have their shapes upgraded. By default, they are placed
    in the front of the shape tuple. The third argument allows you to specify
    where the start of the array should be instead. Thus, a third argument of
    '0' would place the 1's at the end of the array shape. Negative integers
    specify where in the new shape tuple the last dimension of upgraded arrays
    should be placed, so the default is '-1'.

    Parameters
    ----------
    Not a function, so takes no parameters


    Returns
    -------
    A concatenated ndarray or matrix.

    See Also
    --------
    concatenate : Join a sequence of arrays along an existing axis.
    c_ : Translates slice objects to concatenation along the second axis.

    Examples
    --------
    >>> np.r_[np.array([1,2,3]), 0, 0, np.array([4,5,6])]
    array([1, 2, 3, ..., 4, 5, 6])
    >>> np.r_[-1:1:6j, [0]*3, 5, 6]
    array([-1. , -0.6, -0.2,  0.2,  0.6,  1. ,  0. ,  0. ,  0. ,  5. ,  6. ])

    String integers specify the axis to concatenate along or the minimum
    number of dimensions to force entries into.

    >>> a = np.array([[0, 1, 2], [3, 4, 5]])
    >>> np.r_['-1', a, a] # concatenate along last axis
    array([[0, 1, 2, 0, 1, 2],
           [3, 4, 5, 3, 4, 5]])
    >>> np.r_['0,2', [1,2,3], [4,5,6]] # concatenate along first axis, dim>=2
    array([[1, 2, 3],
           [4, 5, 6]])

    >>> np.r_['0,2,0', [1,2,3], [4,5,6]]
    array([[1],
           [2],
           [3],
           [4],
           [5],
           [6]])
    >>> np.r_['1,2,0', [1,2,3], [4,5,6]]
    array([[1, 4],
           [2, 5],
           [3, 6]])

    Using 'r' or 'c' as a first string argument creates a matrix.

    >>> np.r_['r',[1,2,3], [4,5,6]]
    matrix([[1, 2, 3, 4, 5, 6]])

    "} r_ (as-jvm/generic-pyobject (py-global-delay (py/get-attr @src-obj* "r_"))))

(def ^{:doc "
    An N-dimensional iterator object to index arrays.

    Given the shape of an array, an `ndindex` instance iterates over
    the N-dimensional index of the array. At each iteration a tuple
    of indices is returned, the last dimension is iterated over first.

    Parameters
    ----------
    shape : ints, or a single tuple of ints
        The size of each dimension of the array can be passed as 
        individual parameters or as the elements of a tuple.

    See Also
    --------
    ndenumerate, flatiter

    Examples
    --------
    # dimensions as individual arguments
    >>> for index in np.ndindex(3, 2, 1):
    ...     print(index)
    (0, 0, 0)
    (0, 1, 0)
    (1, 0, 0)
    (1, 1, 0)
    (2, 0, 0)
    (2, 1, 0)

    # same dimensions - but in a tuple (3, 2, 1)
    >>> for index in np.ndindex((3, 2, 1)):
    ...     print(index)
    (0, 0, 0)
    (0, 1, 0)
    (1, 0, 0)
    (1, 1, 0)
    (2, 0, 0)
    (2, 1, 0)

    " :arglists '[[self & [shape]]]} ndindex (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "ndindex"))))
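The iteration order stated above (last dimension varies fastest) can be checked in Python (assuming numpy):

```python
import numpy as np

# C-order iteration over a (2, 3) shape; a single tuple works too.
indices = list(np.ndindex(2, 3))
```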

(def ^{:doc ""} infty ##Inf)

(def ^{:doc "
    Issues a DeprecationWarning, adds warning to `old_name`'s
    docstring, rebinds ``old_name.__name__`` and returns the new
    function object.

    This function may also be used as a decorator.

    Parameters
    ----------
    func : function
        The function to be deprecated.
    old_name : str, optional
        The name of the function to be deprecated. Default is None, in
        which case the name of `func` is used.
    new_name : str, optional
        The new name for the function. Default is None, in which case the
        deprecation message is that `old_name` is deprecated. If given, the
        deprecation message is that `old_name` is deprecated and `new_name`
        should be used instead.
    message : str, optional
        Additional explanation of the deprecation.  Displayed in the
        docstring after the warning.

    Returns
    -------
    old_func : function
        The deprecated function.

    Examples
    --------
    Note that ``olduint`` returns a value after printing a
    ``DeprecationWarning``:

    >>> olduint = np.deprecate(np.uint)
    DeprecationWarning: `uint64` is deprecated! # may vary
    >>> olduint(6)
    6

    " :arglists '[[& [args {:as kwargs}]]]} deprecate (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "deprecate"))))

(def ^{:doc "
    Set printing options.

    These options determine the way floating point numbers, arrays and
    other NumPy objects are displayed.

    Parameters
    ----------
    precision : int or None, optional
        Number of digits of precision for floating point output (default 8).
        May be None if `floatmode` is not `fixed`, to print as many digits as
        necessary to uniquely specify the value.
    threshold : int, optional
        Total number of array elements which trigger summarization
        rather than full repr (default 1000).
        To always use the full repr without summarization, pass `sys.maxsize`.
    edgeitems : int, optional
        Number of array items in summary at beginning and end of
        each dimension (default 3).
    linewidth : int, optional
        The number of characters per line for the purpose of inserting
        line breaks (default 75).
    suppress : bool, optional
        If True, always print floating point numbers using fixed point
        notation, in which case numbers equal to zero in the current precision
        will print as zero.  If False, then scientific notation is used when
        absolute value of the smallest number is < 1e-4 or the ratio of the
        maximum absolute value to the minimum is > 1e3. The default is False.
    nanstr : str, optional
        String representation of floating point not-a-number (default nan).
    infstr : str, optional
        String representation of floating point infinity (default inf).
    sign : string, either '-', '+', or ' ', optional
        Controls printing of the sign of floating-point types. If '+', always
        print the sign of positive values. If ' ', always prints a space
        (whitespace character) in the sign position of positive values.  If
        '-', omit the sign character of positive values. (default '-')
    formatter : dict of callables, optional
        If not None, the keys should indicate the type(s) that the respective
        formatting function applies to.  Callables should return a string.
        Types that are not specified (by their corresponding keys) are handled
        by the default formatters.  Individual types for which a formatter
        can be set are:

        - 'bool'
        - 'int'
        - 'timedelta' : a `numpy.timedelta64`
        - 'datetime' : a `numpy.datetime64`
        - 'float'
        - 'longfloat' : 128-bit floats
        - 'complexfloat'
        - 'longcomplexfloat' : composed of two 128-bit floats
        - 'numpystr' : types `numpy.string_` and `numpy.unicode_`
        - 'object' : `np.object_` arrays
        - 'str' : all other strings

        Other keys that can be used to set a group of types at once are:

        - 'all' : sets all types
        - 'int_kind' : sets 'int'
        - 'float_kind' : sets 'float' and 'longfloat'
        - 'complex_kind' : sets 'complexfloat' and 'longcomplexfloat'
        - 'str_kind' : sets 'str' and 'numpystr'
    floatmode : str, optional
        Controls the interpretation of the `precision` option for
        floating-point types. Can take the following values
        (default maxprec_equal):

        * 'fixed': Always print exactly `precision` fractional digits,
                even if this would print more or fewer digits than
                necessary to specify the value uniquely.
        * 'unique': Print the minimum number of fractional digits necessary
                to represent each value uniquely. Different elements may
                have a different number of digits. The value of the
                `precision` option is ignored.
        * 'maxprec': Print at most `precision` fractional digits, but if
                an element can be uniquely represented with fewer digits
                only print it with that many.
        * 'maxprec_equal': Print at most `precision` fractional digits,
                but if every element in the array can be uniquely
                represented with an equal number of fewer digits, use that
                many digits for all elements.
    legacy : string or `False`, optional
        If set to the string `'1.13'` enables 1.13 legacy printing mode. This
        approximates numpy 1.13 print output by including a space in the sign
        position of floats and different behavior for 0d arrays. If set to
        `False`, disables legacy mode. Unrecognized strings will be ignored
        with a warning for forward compatibility.

        .. versionadded:: 1.14.0

    See Also
    --------
    get_printoptions, printoptions, set_string_function, array2string

    Notes
    -----
    `formatter` is always reset with a call to `set_printoptions`.

    Use `printoptions` as a context manager to set the values temporarily.

    Examples
    --------
    Floating point precision can be set:

    >>> np.set_printoptions(precision=4)
    >>> np.array([1.123456789])
    [1.1235]

    Long arrays can be summarised:

    >>> np.set_printoptions(threshold=5)
    >>> np.arange(10)
    array([0, 1, 2, ..., 7, 8, 9])

    Small results can be suppressed:

    >>> eps = np.finfo(float).eps
    >>> x = np.arange(4.)
    >>> x**2 - (x + eps)**2
    array([-4.9304e-32, -4.4409e-16,  0.0000e+00,  0.0000e+00])
    >>> np.set_printoptions(suppress=True)
    >>> x**2 - (x + eps)**2
    array([-0., -0.,  0.,  0.])

    A custom formatter can be used to display array elements as desired:

    >>> np.set_printoptions(formatter={'all':lambda x: 'int: '+str(-x)})
    >>> x = np.arange(3)
    >>> x
    array([int: 0, int: -1, int: -2])
    >>> np.set_printoptions()  # formatter gets reset
    >>> x
    array([0, 1, 2])

    To put back the default options, you can use:

    >>> np.set_printoptions(edgeitems=3, infstr='inf',
    ... linewidth=75, nanstr='nan', precision=8,
    ... suppress=False, threshold=1000, formatter=None)

    Also to temporarily override options, use `printoptions` as a context manager:

    >>> with np.printoptions(precision=2, suppress=True, threshold=5):
    ...     np.linspace(0, 10, 10)
    array([ 0.  ,  1.11,  2.22, ...,  7.78,  8.89, 10.  ])

    " :arglists '[[& [{legacy :legacy, infstr :infstr, linewidth :linewidth, suppress :suppress, sign :sign, formatter :formatter, precision :precision, floatmode :floatmode, edgeitems :edgeitems, threshold :threshold, nanstr :nanstr}]] [& [{legacy :legacy, infstr :infstr, linewidth :linewidth, suppress :suppress, sign :sign, formatter :formatter, precision :precision, edgeitems :edgeitems, threshold :threshold, nanstr :nanstr}]] [& [{legacy :legacy, infstr :infstr, linewidth :linewidth, suppress :suppress, formatter :formatter, precision :precision, edgeitems :edgeitems, threshold :threshold, nanstr :nanstr}]] [& [{precision :precision, threshold :threshold, edgeitems :edgeitems, linewidth :linewidth, suppress :suppress, nanstr :nanstr, infstr :infstr, legacy :legacy}]] [& [{precision :precision, threshold :threshold, edgeitems :edgeitems, linewidth :linewidth, suppress :suppress, nanstr :nanstr, legacy :legacy}]] [& [{precision :precision, threshold :threshold, edgeitems :edgeitems, linewidth :linewidth, suppress :suppress, legacy :legacy}]] [& [{precision :precision, threshold :threshold, edgeitems :edgeitems, linewidth :linewidth, legacy :legacy}]] [& [{precision :precision, threshold :threshold, edgeitems :edgeitems, legacy :legacy}]] [& [{precision :precision, threshold :threshold, legacy :legacy}]] [& [{precision :precision, legacy :legacy}]] [& [{legacy :legacy}]]]} set_printoptions (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "set_printoptions"))))
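;; Hypothetical REPL sketch (not part of the generated output): once this
;; namespace is loaded and the Python interpreter is initialized, the var
;; above is callable like a Clojure function.  This assumes keyword
;; arguments are passed as trailing :kw value pairs, as with
;; libpython-clj's require-python bindings.
(comment
  (set_printoptions :precision 4 :suppress true)
  ;; restore the NumPy defaults shown in the docstring above
  (set_printoptions :precision 8 :suppress false :threshold 1000
                    :formatter nil))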

(def ^{:doc "
    Pad an array.

    Parameters
    ----------
    array : array_like of rank N
        The array to pad.
    pad_width : {sequence, array_like, int}
        Number of values padded to the edges of each axis.
        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
        for each axis.
        ((before, after),) yields same before and after pad for each axis.
        (pad,) or int is a shortcut for before = after = pad width for all
        axes.
    mode : str or function, optional
        One of the following string values or a user supplied function.

        'constant' (default)
            Pads with a constant value.
        'edge'
            Pads with the edge values of array.
        'linear_ramp'
            Pads with the linear ramp between end_value and the
            array edge value.
        'maximum'
            Pads with the maximum value of all or part of the
            vector along each axis.
        'mean'
            Pads with the mean value of all or part of the
            vector along each axis.
        'median'
            Pads with the median value of all or part of the
            vector along each axis.
        'minimum'
            Pads with the minimum value of all or part of the
            vector along each axis.
        'reflect'
            Pads with the reflection of the vector mirrored on
            the first and last values of the vector along each
            axis.
        'symmetric'
            Pads with the reflection of the vector mirrored
            along the edge of the array.
        'wrap'
            Pads with the wrap of the vector along the axis.
            The first values are used to pad the end and the
            end values are used to pad the beginning.
        'empty'
            Pads with undefined values.

            .. versionadded:: 1.17

        <function>
            Padding function, see Notes.
    stat_length : sequence or int, optional
        Used in 'maximum', 'mean', 'median', and 'minimum'.  Number of
        values at edge of each axis used to calculate the statistic value.

        ((before_1, after_1), ... (before_N, after_N)) unique statistic
        lengths for each axis.

        ((before, after),) yields same before and after statistic lengths
        for each axis.

        (stat_length,) or int is a shortcut for before = after = statistic
        length for all axes.

        Default is ``None``, to use the entire axis.
    constant_values : sequence or scalar, optional
        Used in 'constant'.  The values to set the padded values for each
        axis.

        ``((before_1, after_1), ... (before_N, after_N))`` unique pad constants
        for each axis.

        ``((before, after),)`` yields same before and after constants for each
        axis.

        ``(constant,)`` or ``constant`` is a shortcut for ``before = after = constant`` for
        all axes.

        Default is 0.
    end_values : sequence or scalar, optional
        Used in 'linear_ramp'.  The values used for the ending value of the
        linear_ramp and that will form the edge of the padded array.

        ``((before_1, after_1), ... (before_N, after_N))`` unique end values
        for each axis.

        ``((before, after),)`` yields same before and after end values for each
        axis.

        ``(constant,)`` or ``constant`` is a shortcut for ``before = after = constant`` for
        all axes.

        Default is 0.
    reflect_type : {'even', 'odd'}, optional
        Used in 'reflect', and 'symmetric'.  The 'even' style is the
        default with an unaltered reflection around the edge value.  For
        the 'odd' style, the extended part of the array is created by
        subtracting the reflected values from two times the edge value.

    Returns
    -------
    pad : ndarray
        Padded array of rank equal to `array` with shape increased
        according to `pad_width`.

    Notes
    -----
    .. versionadded:: 1.7.0

    For an array with rank greater than 1, some of the padding of later
    axes is calculated from padding of previous axes.  This is easiest to
    think about with a rank 2 array where the corners of the padded array
    are calculated by using padded values from the first axis.

    The padding function, if used, should modify a rank 1 array in-place. It
    has the following signature::

        padding_func(vector, iaxis_pad_width, iaxis, kwargs)

    where

        vector : ndarray
            A rank 1 array already padded with zeros.  Padded values are
            vector[:iaxis_pad_width[0]] and vector[-iaxis_pad_width[1]:].
        iaxis_pad_width : tuple
            A 2-tuple of ints, iaxis_pad_width[0] represents the number of
            values padded at the beginning of vector where
            iaxis_pad_width[1] represents the number of values padded at
            the end of vector.
        iaxis : int
            The axis currently being calculated.
        kwargs : dict
            Any keyword arguments the function requires.

    Examples
    --------
    >>> a = [1, 2, 3, 4, 5]
    >>> np.pad(a, (2, 3), 'constant', constant_values=(4, 6))
    array([4, 4, 1, ..., 6, 6, 6])

    >>> np.pad(a, (2, 3), 'edge')
    array([1, 1, 1, ..., 5, 5, 5])

    >>> np.pad(a, (2, 3), 'linear_ramp', end_values=(5, -4))
    array([ 5,  3,  1,  2,  3,  4,  5,  2, -1, -4])

    >>> np.pad(a, (2,), 'maximum')
    array([5, 5, 1, 2, 3, 4, 5, 5, 5])

    >>> np.pad(a, (2,), 'mean')
    array([3, 3, 1, 2, 3, 4, 5, 3, 3])

    >>> np.pad(a, (2,), 'median')
    array([3, 3, 1, 2, 3, 4, 5, 3, 3])

    >>> a = [[1, 2], [3, 4]]
    >>> np.pad(a, ((3, 2), (2, 3)), 'minimum')
    array([[1, 1, 1, 2, 1, 1, 1],
           [1, 1, 1, 2, 1, 1, 1],
           [1, 1, 1, 2, 1, 1, 1],
           [1, 1, 1, 2, 1, 1, 1],
           [3, 3, 3, 4, 3, 3, 3],
           [1, 1, 1, 2, 1, 1, 1],
           [1, 1, 1, 2, 1, 1, 1]])

    >>> a = [1, 2, 3, 4, 5]
    >>> np.pad(a, (2, 3), 'reflect')
    array([3, 2, 1, 2, 3, 4, 5, 4, 3, 2])

    >>> np.pad(a, (2, 3), 'reflect', reflect_type='odd')
    array([-1,  0,  1,  2,  3,  4,  5,  6,  7,  8])

    >>> np.pad(a, (2, 3), 'symmetric')
    array([2, 1, 1, 2, 3, 4, 5, 5, 4, 3])

    >>> np.pad(a, (2, 3), 'symmetric', reflect_type='odd')
    array([0, 1, 1, 2, 3, 4, 5, 5, 6, 7])

    >>> np.pad(a, (2, 3), 'wrap')
    array([4, 5, 1, 2, 3, 4, 5, 1, 2, 3])

    >>> def pad_with(vector, pad_width, iaxis, kwargs):
    ...     pad_value = kwargs.get('padder', 10)
    ...     vector[:pad_width[0]] = pad_value
    ...     vector[-pad_width[1]:] = pad_value
    >>> a = np.arange(6)
    >>> a = a.reshape((2, 3))
    >>> np.pad(a, 2, pad_with)
    array([[10, 10, 10, 10, 10, 10, 10],
           [10, 10, 10, 10, 10, 10, 10],
           [10, 10,  0,  1,  2, 10, 10],
           [10, 10,  3,  4,  5, 10, 10],
           [10, 10, 10, 10, 10, 10, 10],
           [10, 10, 10, 10, 10, 10, 10]])
    >>> np.pad(a, 2, pad_with, padder=100)
    array([[100, 100, 100, 100, 100, 100, 100],
           [100, 100, 100, 100, 100, 100, 100],
           [100, 100,   0,   1,   2, 100, 100],
           [100, 100,   3,   4,   5, 100, 100],
           [100, 100, 100, 100, 100, 100, 100],
           [100, 100, 100, 100, 100, 100, 100]])
    " :arglists '[[& [args {:as kwargs}]]]} pad (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "pad"))))
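;; Hypothetical usage sketch, assuming positional arguments map directly to
;; np.pad's (array, pad_width, mode) signature and kwargs are trailing
;; :kw value pairs:
(comment
  (pad [1 2 3 4 5] [2 3] "edge")
  (pad [1 2 3 4 5] [2 3] "constant" :constant_values [4 6]))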

(def ^{:doc "Unsigned integer type, compatible with C ``unsigned long``.

    :Character code: ``'L'``
    :Canonical name: `numpy.uint`
    :Alias on this platform: `numpy.uint64`: 64-bit unsigned integer (``0`` to ``18_446_744_073_709_551_615``).
    :Alias on this platform: `numpy.uintp`: Unsigned integer large enough to fit pointer, compatible with C ``uintptr_t``." :arglists '[[self & [args {:as kwargs}]]]} Uint64 (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "Uint64"))))

(def ^{:doc "
    Print the NumPy arrays in the given dictionary.

    If there is no dictionary passed in or `vardict` is None then returns
    NumPy arrays in the globals() dictionary (all NumPy arrays in the
    namespace).

    Parameters
    ----------
    vardict : dict, optional
        A dictionary possibly containing ndarrays.  Default is globals().

    Returns
    -------
    out : None
        Returns 'None'.

    Notes
    -----
    Prints out the name, shape, bytes and type of all of the ndarrays
    present in `vardict`.

    Examples
    --------
    >>> a = np.arange(10)
    >>> b = np.ones(20)
    >>> np.who()
    Name            Shape            Bytes            Type
    ===========================================================
    a               10               80               int64
    b               20               160              float64
    Upper bound on total bytes  =       240

    >>> d = {'x': np.arange(2.0), 'y': np.arange(3.0), 'txt': 'Some str',
    ... 'idx':5}
    >>> np.who(d)
    Name            Shape            Bytes            Type
    ===========================================================
    x               2                16               float64
    y               3                24               float64
    Upper bound on total bytes  =       40

    " :arglists '[[& [{vardict :vardict}]] []]} who (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "who"))))

(def ^{:doc "
    Function to calculate only the edges of the bins used by the `histogram`
    function.

    Parameters
    ----------
    a : array_like
        Input data. The histogram is computed over the flattened array.
    bins : int or sequence of scalars or str, optional
        If `bins` is an int, it defines the number of equal-width
        bins in the given range (10, by default). If `bins` is a
        sequence, it defines the bin edges, including the rightmost
        edge, allowing for non-uniform bin widths.

        If `bins` is a string from the list below, `histogram_bin_edges` will use
        the method chosen to calculate the optimal bin width and
        consequently the number of bins (see `Notes` for more detail on
        the estimators) from the data that falls within the requested
        range. While the bin width will be optimal for the actual data
        in the range, the number of bins will be computed to fill the
        entire range, including the empty portions. For visualisation,
        using the 'auto' option is suggested. Weighted data is not
        supported for automated bin size selection.

        'auto'
            Maximum of the 'sturges' and 'fd' estimators. Provides good
            all around performance.

        'fd' (Freedman Diaconis Estimator)
            Robust (resilient to outliers) estimator that takes into
            account data variability and data size.

        'doane'
            An improved version of Sturges' estimator that works better
            with non-normal datasets.

        'scott'
            Less robust estimator that takes into account data
            variability and data size.

        'stone'
            Estimator based on leave-one-out cross-validation estimate of
            the integrated squared error. Can be regarded as a generalization
            of Scott's rule.

        'rice'
            Estimator does not take variability into account, only data
            size. Commonly overestimates number of bins required.

        'sturges'
            R's default method, only accounts for data size. Only
            optimal for gaussian data and underestimates number of bins
            for large non-gaussian datasets.

        'sqrt'
            Square root (of data size) estimator, used by Excel and
            other programs for its speed and simplicity.

    range : (float, float), optional
        The lower and upper range of the bins.  If not provided, range
        is simply ``(a.min(), a.max())``.  Values outside the range are
        ignored. The first element of the range must be less than or
        equal to the second. `range` affects the automatic bin
        computation as well. While bin width is computed to be optimal
        based on the actual data within `range`, the bin count will fill
        the entire range including portions containing no data.

    weights : array_like, optional
        An array of weights, of the same shape as `a`.  Each value in
        `a` only contributes its associated weight towards the bin count
        (instead of 1). This is currently not used by any of the bin estimators,
        but may be in the future.

    Returns
    -------
    bin_edges : array of dtype float
        The edges to pass into `histogram`

    See Also
    --------
    histogram

    Notes
    -----
    The methods to estimate the optimal number of bins are well founded
    in literature, and are inspired by the choices R provides for
    histogram visualisation. Note that having the number of bins
    proportional to :math:`n^{1/3}` is asymptotically optimal, which is
    why it appears in most estimators. These are simply plug-in methods
    that give good starting points for number of bins. In the equations
    below, :math:`h` is the binwidth and :math:`n_h` is the number of
    bins. All estimators that compute bin counts are recast to bin width
    using the `ptp` of the data. The final bin count is obtained from
    ``np.round(np.ceil(range / h))``.

    'auto' (maximum of the 'sturges' and 'fd' estimators)
        A compromise to get a good value. For small datasets the Sturges
        value will usually be chosen, while larger datasets will usually
        default to FD.  Avoids the overly conservative behaviour of FD
        and Sturges for small and large datasets respectively.
        Switchover point is usually :math:`a.size \\approx 1000`.

    'fd' (Freedman Diaconis Estimator)
        .. math:: h = 2 \\frac{IQR}{n^{1/3}}

        The binwidth is proportional to the interquartile range (IQR)
        and inversely proportional to cube root of a.size. Can be too
        conservative for small datasets, but is quite good for large
        datasets. The IQR is very robust to outliers.

    'scott'
        .. math:: h = \\sigma \\sqrt[3]{\\frac{24 * \\sqrt{\\pi}}{n}}

        The binwidth is proportional to the standard deviation of the
        data and inversely proportional to cube root of ``x.size``. Can
        be too conservative for small datasets, but is quite good for
        large datasets. The standard deviation is not very robust to
        outliers. Values are very similar to the Freedman-Diaconis
        estimator in the absence of outliers.

    'rice'
        .. math:: n_h = 2n^{1/3}

        The number of bins is only proportional to cube root of
        ``a.size``. It tends to overestimate the number of bins and it
        does not take into account data variability.

    'sturges'
        .. math:: n_h = \\log _{2}n+1

        The number of bins is the base 2 log of ``a.size``.  This
        estimator assumes normality of data and is too conservative for
        larger, non-normal datasets. This is the default method in R's
        ``hist`` function.

    'doane'
        .. math:: n_h = 1 + \\log_{2}(n) +
                        \\log_{2}(1 + \\frac{|g_1|}{\\sigma_{g_1}})

            g_1 = mean[(\\frac{x - \\mu}{\\sigma})^3]

            \\sigma_{g_1} = \\sqrt{\\frac{6(n - 2)}{(n + 1)(n + 3)}}

        An improved version of Sturges' formula that produces better
        estimates for non-normal datasets. This estimator attempts to
        account for the skew of the data.

    'sqrt'
        .. math:: n_h = \\sqrt n

        The simplest and fastest estimator. Only takes into account the
        data size.

    Examples
    --------
    >>> arr = np.array([0, 0, 0, 1, 2, 3, 3, 4, 5])
    >>> np.histogram_bin_edges(arr, bins='auto', range=(0, 1))
    array([0.  , 0.25, 0.5 , 0.75, 1.  ])
    >>> np.histogram_bin_edges(arr, bins=2)
    array([0. , 2.5, 5. ])

    For consistency with histogram, an array of pre-computed bins is
    passed through unmodified:

    >>> np.histogram_bin_edges(arr, [1, 2])
    array([1, 2])

    This function allows one set of bins to be computed, and reused across
    multiple histograms:

    >>> shared_bins = np.histogram_bin_edges(arr, bins='auto')
    >>> shared_bins
    array([0., 1., 2., 3., 4., 5.])

    >>> group_id = np.array([0, 1, 1, 0, 1, 1, 0, 1, 1])
    >>> hist_0, _ = np.histogram(arr[group_id == 0], bins=shared_bins)
    >>> hist_1, _ = np.histogram(arr[group_id == 1], bins=shared_bins)

    >>> hist_0; hist_1
    array([1, 1, 0, 1, 0])
    array([2, 0, 1, 1, 2])

    Which gives more easily comparable results than using separate bins for
    each histogram:

    >>> hist_0, bins_0 = np.histogram(arr[group_id == 0], bins='auto')
    >>> hist_1, bins_1 = np.histogram(arr[group_id == 1], bins='auto')
    >>> hist_0; hist_1
    array([1, 1, 1])
    array([2, 1, 1, 2])
    >>> bins_0; bins_1
    array([0., 1., 2., 3.])
    array([0.  , 1.25, 2.5 , 3.75, 5.  ])

    " :arglists '[[& [args {:as kwargs}]]]} histogram_bin_edges (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "histogram_bin_edges"))))
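;; Hypothetical usage sketch mirroring the docstring examples above,
;; assuming keyword arguments are passed as trailing :kw value pairs:
(comment
  (histogram_bin_edges [0 0 0 1 2 3 3 4 5] :bins "auto")
  (histogram_bin_edges [0 0 0 1 2 3 3 4 5] :bins 2))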

(def ^{:doc "
    Compute the standard deviation along the specified axis.

    Returns the standard deviation, a measure of the spread of a distribution,
    of the array elements. The standard deviation is computed for the
    flattened array by default, otherwise over the specified axis.

    Parameters
    ----------
    a : array_like
        Calculate the standard deviation of these values.
    axis : None or int or tuple of ints, optional
        Axis or axes along which the standard deviation is computed. The
        default is to compute the standard deviation of the flattened array.

        .. versionadded:: 1.7.0

        If this is a tuple of ints, a standard deviation is performed over
        multiple axes, instead of a single axis or all the axes as before.
    dtype : dtype, optional
        Type to use in computing the standard deviation. For arrays of
        integer type the default is float64, for arrays of float types it is
        the same as the array type.
    out : ndarray, optional
        Alternative output array in which to place the result. It must have
        the same shape as the expected output but the type (of the calculated
        values) will be cast if necessary.
    ddof : int, optional
        Means Delta Degrees of Freedom.  The divisor used in calculations
        is ``N - ddof``, where ``N`` represents the number of elements.
        By default `ddof` is zero.
    keepdims : bool, optional
        If this is set to True, the axes which are reduced are left
        in the result as dimensions with size one. With this option,
        the result will broadcast correctly against the input array.

        If the default value is passed, then `keepdims` will not be
        passed through to the `std` method of sub-classes of
        `ndarray`, however any non-default value will be.  If the
        sub-class' method does not implement `keepdims` any
        exceptions will be raised.

    where : array_like of bool, optional
        Elements to include in the standard deviation.
        See `~numpy.ufunc.reduce` for details.

        .. versionadded:: 1.20.0

    Returns
    -------
    standard_deviation : ndarray, see dtype parameter above.
        If `out` is None, return a new array containing the standard deviation,
        otherwise return a reference to the output array.

    See Also
    --------
    var, mean, nanmean, nanstd, nanvar
    :ref:`ufuncs-output-type`

    Notes
    -----
    The standard deviation is the square root of the average of the squared
    deviations from the mean, i.e., ``std = sqrt(mean(x))``, where
    ``x = abs(a - a.mean())**2``.

    The average squared deviation is typically calculated as ``x.sum() / N``,
    where ``N = len(x)``. If, however, `ddof` is specified, the divisor
    ``N - ddof`` is used instead. In standard statistical practice, ``ddof=1``
    provides an unbiased estimator of the variance of the infinite population.
    ``ddof=0`` provides a maximum likelihood estimate of the variance for
    normally distributed variables. The standard deviation computed in this
    function is the square root of the estimated variance, so even with
    ``ddof=1``, it will not be an unbiased estimate of the standard deviation
    per se.

    Note that, for complex numbers, `std` takes the absolute
    value before squaring, so that the result is always real and nonnegative.

    For floating-point input, the *std* is computed using the same
    precision the input has. Depending on the input data, this can cause
    the results to be inaccurate, especially for float32 (see example below).
    Specifying a higher-accuracy accumulator using the `dtype` keyword can
    alleviate this issue.

    Examples
    --------
    >>> a = np.array([[1, 2], [3, 4]])
    >>> np.std(a)
    1.1180339887498949 # may vary
    >>> np.std(a, axis=0)
    array([1.,  1.])
    >>> np.std(a, axis=1)
    array([0.5,  0.5])

    In single precision, std() can be inaccurate:

    >>> a = np.zeros((2, 512*512), dtype=np.float32)
    >>> a[0, :] = 1.0
    >>> a[1, :] = 0.1
    >>> np.std(a)
    0.45000005

    Computing the standard deviation in float64 is more accurate:

    >>> np.std(a, dtype=np.float64)
    0.44999999925494177 # may vary

    Specifying a where argument:

    >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]])
    >>> np.std(a)
    2.614064523559687 # may vary
    >>> np.std(a, where=[[True], [True], [False]])
    2.0

    " :arglists '[[& [args {:as kwargs}]]]} std (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "std"))))
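;; Hypothetical usage sketch; nested vectors are assumed to convert to a
;; 2-d array on the way into Python:
(comment
  (std [[1 2] [3 4]])          ;; std of the flattened array
  (std [[1 2] [3 4]] :axis 0)  ;; per-column std
  (std [[1 2] [3 4]] :ddof 1)) ;; N - 1 divisor (see ddof note above)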

(def ^{:doc "
    Test element-wise for negative infinity, return result as bool array.

    Parameters
    ----------
    x : array_like
        The input array.
    out : array_like, optional
        A location into which the result is stored. If provided, it must have a
        shape that the input broadcasts to. If not provided or None, a
        freshly-allocated boolean array is returned.

    Returns
    -------
    out : ndarray
        A boolean array with the same dimensions as the input.
        If second argument is not supplied then a numpy boolean array is
        returned with values True where the corresponding element of the
        input is negative infinity and values False where the element of
        the input is not negative infinity.

        If a second argument is supplied the result is stored there. If the
        type of that array is a numeric type the result is represented as
        zeros and ones, if the type is boolean then as False and True. The
        return value `out` is then a reference to that array.

    See Also
    --------
    isinf, isposinf, isnan, isfinite

    Notes
    -----
    NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic
    (IEEE 754).

    Errors result if the second argument is also supplied when x is a scalar
    input, if first and second arguments have different shapes, or if the
    first argument has complex values.

    Examples
    --------
    >>> np.isneginf(np.NINF)
    True
    >>> np.isneginf(np.inf)
    False
    >>> np.isneginf(np.PINF)
    False
    >>> np.isneginf([-np.inf, 0., np.inf])
    array([ True, False, False])

    >>> x = np.array([-np.inf, 0., np.inf])
    >>> y = np.array([2, 2, 2])
    >>> np.isneginf(x, y)
    array([1, 0, 0])
    >>> y
    array([1, 0, 0])

    " :arglists '[[& [args {:as kwargs}]]]} isneginf (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "isneginf"))))

(def ^{:doc "
    Round to nearest integer towards zero.

    Round an array of floats element-wise to nearest integer towards zero.
    The rounded values are returned as floats.

    Parameters
    ----------
    x : array_like
        An array of floats to be rounded
    out : ndarray, optional
        A location into which the result is stored. If provided, it must have
        a shape that the input broadcasts to. If not provided or None, a
        freshly-allocated array is returned.

    Returns
    -------
    out : ndarray of floats
        A float array with the same dimensions as the input.
        If second argument is not supplied then a float array is returned
        with the rounded values.

        If a second argument is supplied the result is stored there.
        The return value `out` is then a reference to that array.

    See Also
    --------
    trunc, floor, ceil
    around : Round to given number of decimals

    Examples
    --------
    >>> np.fix(3.14)
    3.0
    >>> np.fix(3)
    3.0
    >>> np.fix([2.1, 2.9, -2.1, -2.9])
    array([ 2.,  2., -2., -2.])

    " :arglists '[[& [args {:as kwargs}]]]} fix (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "fix"))))

(def ^{:doc "
    Return a new array of given shape and type, filled with ones.

    Parameters
    ----------
    shape : int or sequence of ints
        Shape of the new array, e.g., ``(2, 3)`` or ``2``.
    dtype : data-type, optional
        The desired data-type for the array, e.g., `numpy.int8`.  Default is
        `numpy.float64`.
    order : {'C', 'F'}, optional, default: C
        Whether to store multi-dimensional data in row-major
        (C-style) or column-major (Fortran-style) order in
        memory.
    like : array_like
        Reference object to allow the creation of arrays which are not
        NumPy arrays. If an array-like passed in as ``like`` supports
        the ``__array_function__`` protocol, the result will be defined
        by it. In this case, it ensures the creation of an array object
        compatible with that passed in via this argument.

        .. note::
            The ``like`` keyword is an experimental feature pending on
            acceptance of :ref:`NEP 35 <NEP35>`.

        .. versionadded:: 1.20.0

    Returns
    -------
    out : ndarray
        Array of ones with the given shape, dtype, and order.

    See Also
    --------
    ones_like : Return an array of ones with shape and type of input.
    empty : Return a new uninitialized array.
    zeros : Return a new array setting values to zero.
    full : Return a new array of given shape filled with value.


    Examples
    --------
    >>> np.ones(5)
    array([1., 1., 1., 1., 1.])

    >>> np.ones((5,), dtype=int)
    array([1, 1, 1, 1, 1])

    >>> np.ones((2, 1))
    array([[1.],
           [1.]])

    >>> s = (2,2)
    >>> np.ones(s)
    array([[1.,  1.],
           [1.,  1.]])

    " :arglists '[[shape & [{dtype :dtype, order :order, like :like}]] [shape & [{dtype :dtype, like :like}]] [shape & [{like :like}]]]} ones (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "ones"))))
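;; Hypothetical usage sketch, assuming the shape is passed positionally and
;; the dtype as a trailing keyword argument:
(comment
  (ones [2 2])
  (ones [5] :dtype "int64"))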

(def ^{:doc "
    Remove axes of length one from `a`.

    Parameters
    ----------
    a : array_like
        Input data.
    axis : None or int or tuple of ints, optional
        .. versionadded:: 1.7.0

        Selects a subset of the entries of length one in the
        shape. If an axis is selected with shape entry greater than
        one, an error is raised.

    Returns
    -------
    squeezed : ndarray
        The input array, but with all or a subset of the
        dimensions of length 1 removed. This is always `a` itself
        or a view into `a`. Note that if all axes are squeezed,
        the result is a 0d array and not a scalar.

    Raises
    ------
    ValueError
        If `axis` is not None, and an axis being squeezed is not of length 1

    See Also
    --------
    expand_dims : The inverse operation, adding entries of length one
    reshape : Insert, remove, and combine dimensions, and resize existing ones

    Examples
    --------
    >>> x = np.array([[[0], [1], [2]]])
    >>> x.shape
    (1, 3, 1)
    >>> np.squeeze(x).shape
    (3,)
    >>> np.squeeze(x, axis=0).shape
    (3, 1)
    >>> np.squeeze(x, axis=1).shape
    Traceback (most recent call last):
    ...
    ValueError: cannot select an axis to squeeze out which has size not equal to one
    >>> np.squeeze(x, axis=2).shape
    (1, 3)
    >>> x = np.array([[1234]])
    >>> x.shape
    (1, 1)
    >>> np.squeeze(x)
    array(1234)  # 0d array
    >>> np.squeeze(x).shape
    ()
    >>> np.squeeze(x)[()]
    1234

    " :arglists '[[& [args {:as kwargs}]]]} squeeze (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "squeeze"))))
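;; Editor's sketch (hypothetical): `squeeze` from Clojure, mirroring the
;; Python examples in the docstring.  The trailing-map kwarg style is an
;; assumption based on the :arglists metadata; `py/get-attr` is the generic
;; attribute accessor from the libpython-clj2.python namespace required at
;; the top of this file.
(comment
  (let [x (ones [1 3 1])]
    (py/get-attr (squeeze x) "shape")            ;; (3,)
    (py/get-attr (squeeze x {:axis 0}) "shape")  ;; (3, 1)
    (squeeze x {:axis 1})))                      ;; raises ValueError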

(def ^{:doc "
    Class to convert formats, names, titles description to a dtype.

    After constructing the format_parser object, the dtype attribute is
    the converted data-type:
    ``dtype = format_parser(formats, names, titles).dtype``

    Attributes
    ----------
    dtype : dtype
        The converted data-type.

    Parameters
    ----------
    formats : str or list of str
        The format description, either specified as a string with
        comma-separated format descriptions in the form ``'f8, i4, a5'``, or
        a list of format description strings in the form
        ``['f8', 'i4', 'a5']``.
    names : str or list/tuple of str
        The field names, either specified as a comma-separated string in the
        form ``'col1, col2, col3'``, or as a list or tuple of strings in the
        form ``['col1', 'col2', 'col3']``.
        An empty list can be used, in that case default field names
        ('f0', 'f1', ...) are used.
    titles : sequence
        Sequence of title strings. An empty list can be used to leave titles
        out.
    aligned : bool, optional
        If True, align the fields by padding as the C-compiler would.
        Default is False.
    byteorder : str, optional
        If specified, all the fields will be changed to the
        provided byte-order.  Otherwise, the default byte-order is
        used. For all available string specifiers, see `dtype.newbyteorder`.

    See Also
    --------
    dtype, typename, sctype2char

    Examples
    --------
    >>> np.format_parser(['<f8', '<i4', '<a5'], ['col1', 'col2', 'col3'],
    ...                  ['T1', 'T2', 'T3']).dtype
    dtype([(('T1', 'col1'), '<f8'), (('T2', 'col2'), '<i4'), (('T3', 'col3'), 'S5')])

    `names` and/or `titles` can be empty lists. If `titles` is an empty list,
    titles will simply not appear. If `names` is empty, default field names
    will be used.

    >>> np.format_parser(['f8', 'i4', 'a5'], ['col1', 'col2', 'col3'],
    ...                  []).dtype
    dtype([('col1', '<f8'), ('col2', '<i4'), ('col3', '<S5')])
    >>> np.format_parser(['<f8', '<i4', '<a5'], [], []).dtype
    dtype([('f0', '<f8'), ('f1', '<i4'), ('f2', 'S5')])

    " :arglists '[[self formats names titles & [{aligned :aligned, byteorder :byteorder}]] [self formats names titles & [{aligned :aligned}]] [self formats names titles]]} format_parser (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "format_parser"))))
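;; Editor's sketch (hypothetical): although the generated :arglists include
;; `self`, the wrapped class is called like any other callable from Clojure;
;; the constructed instance's `dtype` attribute can then be read back with
;; `py/get-attr`.
(comment
  (-> (format_parser ["f8" "i4" "a5"] ["col1" "col2" "col3"] [])
      (py/get-attr "dtype")))   ;; structured dtype with fields col1/col2/col3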

(def ^{:doc "
    Add documentation to an existing object, typically one defined in C

    The purpose is to allow easier editing of the docstrings without requiring
    a re-compile. This exists primarily for internal use within numpy itself.

    Parameters
    ----------
    place : str
        The absolute name of the module to import from
    obj : str
        The name of the object to add documentation to, typically a class or
        function name
    doc : {str, Tuple[str, str], List[Tuple[str, str]]}
        If a string, the documentation to apply to `obj`

        If a tuple, then the first element is interpreted as an attribute of
        `obj` and the second as the docstring to apply - ``(method, docstring)``

        If a list, then each element of the list should be a tuple of length
        two - ``[(method1, docstring1), (method2, docstring2), ...]``
    warn_on_python : bool
        If True, the default, emit `UserWarning` if this is used to attach
        documentation to a pure-python object.

    Notes
    -----
    This routine never raises an error if the docstring can't be written, but
    will raise an error if the object being documented does not exist.

    This routine cannot modify read-only docstrings, such as those that
    appear in new-style classes or built-in functions. Because this
    routine never raises an error, the caller must check manually
    that the docstrings were changed.

    Since this function grabs the ``char *`` from a c-level str object and puts
    it into the ``tp_doc`` slot of the type of `obj`, it violates a number of
    C-API best-practices, by:

    - modifying a `PyTypeObject` after calling `PyType_Ready`
    - calling `Py_INCREF` on the str and losing the reference, so the str
      will never be released

    If possible it should be avoided.
    " :arglists '[[place obj doc & [{warn_on_python :warn_on_python}]] [place obj doc]]} add_newdoc (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "add_newdoc"))))

(def ^{:doc "Sub-package containing the matrix class and related functions.

"} _mat (as-jvm/generic-pyobject (py-global-delay (py/get-attr @src-obj* "_mat"))))

(def ^{:doc "Single-precision floating-point number type, compatible with C ``float``.

    :Character code: ``'f'``
    :Canonical name: `numpy.single`
    :Alias on this platform: `numpy.float32`: 32-bit-precision floating-point number type: sign bit, 8 bits exponent, 23 bits mantissa." :arglists '[[self & [args {:as kwargs}]]]} float32 (as-jvm/generic-callable-pyobject (py-global-delay (py/get-attr @src-obj* "float32"))))
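;; Editor's sketch (hypothetical): `float32` is numpy's scalar type object,
;; so calling the wrapper coerces a value, as np.float32(1.5) does in Python.
(comment
  (float32 1.5))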

(def ^{:doc "
    Return the gradient of an N-dimensional array.

    The gradient is computed using second order accurate central differences
    in the interior points and either first or second order accurate one-sided
    (forward or backward) differences at the boundaries.
    The returned gradient hence has the same shape as the input array.

    Parameters
    ----------
    f : array_like
        An N-dimensional array containing samples of a scalar function.
    varargs : list of scalar or array, optional
        Spacing between f values. Default unitary spacing for all dimensions.
        Spacing can be specified using:

        1. single scalar to specify a sample distance for all dimensions.
        2. N scalars to specify a constant sample distance for each dimension.
           i.e. `dx`, `dy`, `dz`, ...
        3. N arrays to specify the coordinates of the values along each
           dimension of F. The length of the array must match the size of
           the corresponding dimension
        4. Any combination of N scalars/arrays with the meaning of 2. and 3.

        If `axis` is given, the number of varargs must equal the number of axes.
        Default: 1.

    edge_order : {1, 2}, optional
        Gradient is calculated using N-th order accurate differences
        at the boundaries. Default: 1.

        .. versionadded:: 1.9.1

    axis : None or int or tuple of ints, optional
        Gradient is calculated only along the given axis or axes.
        The default (axis = None) is to calculate the gradient for all the axes
        of the input array. axis may be negative, in which case it counts from
        the last to the first axis.

        .. versionadded:: 1.11.0

    Returns
    -------
    gradient : ndarray or list of ndarray
        A set of ndarrays (or a single ndarray if there is only one dimension)
        corresponding to the derivatives of f with respect to each dimension.
        Each derivative has the same shape as f.

    Examples
    --------
    >>> f = np.array([1, 2, 4, 7, 11, 16], dtype=float)
    >>> np.gradient(f)
    array([1. , 1.5, 2.5, 3.5, 4.5, 5. ])
    >>> np.gradient(f, 2)
    array([0.5 ,  0.75,  1.25,  1.75,  2.25,  2.5 ])

    Spacing can be also specified with an array that represents the coordinates
    of the values F along the dimensions.
    For instance a uniform spacing:

    >>> x = np.arange(f.size)
    >>> np.gradient(f, x)
    array([1. ,  1.5,  2.5,  3.5,  4.5,  5. ])

    Or a non-uniform one:

    >>> x = np.array([0., 1., 1.5, 3.5, 4., 6.], dtype=float)
    >>> np.gradient(f, x)
    array([1. ,  3. ,  3.5,  6.7,  6.9,  2.5])

    For two dimensional arrays, the return will be two arrays ordered by
    axis. In this example the first array corresponds to the gradient along
    the rows and the second to the gradient along the columns:

    >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float))
    [array([[ 2.,  2., -1.],
           [ 2.,  2., -1.]]), array([[1. , 2.5, 4. ],
           [1. , 1. , 1. ]])]

    In this example the spacing is also specified:
    uniform for axis=0 and non-uniform for axis=1

    >>> dx = 2.
    >>> y = [1., 1.5, 3.5]
    >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float), dx, y)
    [array([[ 1. ,  1. , -0.5],
           [ 1. ,  1. , -0.5]]), array([[2. , 2. , 2. ],
           [2. , 1.7, 0.5]])]

    It is possible to specify how boundaries are treated using `edge_order`

    >>> x = np.array([0, 1, 2, 3, 4])
    >>> f = x**2
    >>> np.gradient(f, edge_order=1)
    array([1.,  2.,  4.,  6.,  7.])
    >>> np.gradient(f, edge_order=2)
    array([0., 2., 4., 6., 8.])

    The `axis` keyword can be used to specify a subset of axes of which the
    gradient is calculated

    >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float), axis=0)
    array([[ 2.,  2., -1.],
           [ 2.,  2., -1.]])

    Notes
    -----
    Assuming that :math:`f\\in C^{3}` (i.e., :math:`f` has at least 3 continuous
    derivatives) and let :math:`h_{*}` be a non-homogeneous stepsize, we
    minimize the \"consistency error\" :math:`\\eta_{i}` between the true gradient
    and its estimate from a linear combination of the neighboring grid-points:

    .. math::

        \\eta_{i} = f_{i}^{\\left(1\\right)} -
                    \\left[ \\alpha f\\left(x_{i}\\right) +
                            \\beta f\\left(x_{i} + h_{d}\\right) +
                            \\gamma f\\left(x_{i}-h_{s}\\right)
                    \\right]

    By substituting :math:`f(x_{i} + h_{d})` and :math:`f(x_{i} - h_{s})`
    with their Taylor series expansion, this translates into solving
    the following linear system:

    .. math::

        \\left\\{
            \\begin{array}{r}
                \\alpha+\\beta+\\gamma=0 \\\\
                \\beta h_{d}-\\gamma h_{s}=1 \\\\
                \\beta h_{d}^{2}+\\gamma h_{s}^{2}=0
            \\end{array}
        \\right.

    The resulting approximation of :math:`f_{i}^{(1)}` is the following:

    .. math::

        \\hat f_{i}^{(1)} =
            \\frac{
                h_{s}^{2}f\\left(x_{i} + h_{d}\\right)
                + \\left(h_{d}^{2} - h_{s}^{2}\\right)f\\left(x_{i}\\right)
                - h_{d}^{2}f\\left(x_{i}-h_{s}\\right)}
                { h_{s}h_{d}\\left(h_{d} + h_{s}\\right)}
            + \\mathcal{O}\\left(\\frac{h_{d}h_{s}^{2}
                                + h_{s}h_{d}^{2}}{h_{d}
                                + h_{s}}\\right)

    It is worth noting that if :math:`h_{s}=h_{d}`
    (i.e., data are evenly spaced)
    we find the standard second order approximation:

    .. math::

        \\hat f_{i}^{(1)}=
            \\frac{f\\left(x_{i+1}\\right) - f\\left(x_{i-1}\\right)}{2h}
            + \\mathcal{O}\\left(h^{2}\\right)

    With a similar procedure the forward/backward
  },
  {
    "path": "src/libpython_clj2/sugar.clj",
    "chars": 956,
    "preview": "(ns libpython-clj2.sugar\n  (:require [libpython-clj2.python :as py]))\n\n(py/initialize!)\n\n\n(let [{{:strs [pyxfn]} :global"
  },
  {
    "path": "test/libpython_clj2/classes_test.clj",
    "chars": 3366,
    "preview": "(ns libpython-clj2.classes-test\n  (:require [libpython-clj2.python :as py]\n            [clojure.test :refer :all]\n      "
  },
  {
    "path": "test/libpython_clj2/codegen_test.clj",
    "chars": 1560,
    "preview": "(ns libpython-clj2.codegen-test\n  (:require [clojure.test :refer [deftest is testing]]\n            [clojure.string :as s"
  },
  {
    "path": "test/libpython_clj2/ffi_test.clj",
    "chars": 1041,
    "preview": "(ns libpython-clj2.ffi-test\n  \"Short low level tests used to verify specific aspects of ffi integration\"\n  (:require [li"
  },
  {
    "path": "test/libpython_clj2/fncall_test.clj",
    "chars": 1058,
    "preview": "(ns libpython-clj2.fncall-test\n  (:require [libpython-clj2.python :as py]\n            [clojure.test :refer :all]))\n\n\n(py"
  },
  {
    "path": "test/libpython_clj2/iter_gen_seq_test.clj",
    "chars": 876,
    "preview": "(ns libpython-clj2.iter-gen-seq-test\n  \"Iterators/sequences and such are crucial to handling lots of forms\n  of data and"
  },
  {
    "path": "test/libpython_clj2/java_api_test.clj",
    "chars": 4045,
    "preview": "(ns libpython-clj2.java-api-test\n  (:require [libpython-clj2.java-api :as japi]\n            [libpython-clj2.python.ffi :"
  },
  {
    "path": "test/libpython_clj2/numpy_test.clj",
    "chars": 683,
    "preview": "(ns libpython-clj2.numpy-test\n  (:require [clojure.test :refer [deftest is]]\n            [libpython-clj2.python :as py]\n"
  },
  {
    "path": "test/libpython_clj2/python_test.clj",
    "chars": 16642,
    "preview": "(ns libpython-clj2.python-test\n  (:require [libpython-clj2.python :as py :refer [py. py.. py.- py* py**]]\n            ;;"
  },
  {
    "path": "test/libpython_clj2/require_python_test.clj",
    "chars": 5284,
    "preview": "(ns libpython-clj2.require-python-test\n  (:require [libpython-clj2.require :as req\n             :refer [require-python p"
  },
  {
    "path": "test/libpython_clj2/stress_test.clj",
    "chars": 9093,
    "preview": "(ns libpython-clj2.stress-test\n  \"A set of tests meant to crash the system or just run the system out of\n  memory if it "
  },
  {
    "path": "test/libpython_clj2/sugar_test.clj",
    "chars": 2196,
    "preview": "(ns libpython-clj2.sugar-test\n  (:require [libpython-clj2.sugar :as pysug :reload true]\n            [clojure.test :refer"
  },
  {
    "path": "testcode/__init__.py",
    "chars": 1381,
    "preview": "class WithObjClass:\n    def __init__(self, suppress, fn_list):\n        self.suppress = suppress\n        self.fn_list = f"
  },
  {
    "path": "topics/Usage.md",
    "chars": 15182,
    "preview": "# LibPython-CLJ Usage\n\n\nPython objects are essentially two dictionaries, one for 'attributes' and one for\n'items'.  When"
  },
  {
    "path": "topics/embedded.md",
    "chars": 8509,
    "preview": "# Embedding Clojure In Python\n\n\nThe initial development push for `libpython-clj` was simply to embed Python in\nClojure a"
  },
  {
    "path": "topics/environments.md",
    "chars": 610,
    "preview": "# Python Environments\n\n## pyenv\n\npyenv requires that you build the shared library.  This is a separate configuration opt"
  },
  {
    "path": "topics/new-to-clojure.md",
    "chars": 7675,
    "preview": "# So Many Parenthesis!\n\n\n## About Clojure\n\n\nLISP stands for List Processing and it was originally designed by John McCar"
  },
  {
    "path": "topics/scopes-and-gc.md",
    "chars": 2468,
    "preview": "# Scopes And Garbage Collection\n\n\nlibpython-clj now supports stack-based scoping rules so you can guarantee all python\no"
  },
  {
    "path": "topics/slicing.md",
    "chars": 1855,
    "preview": "# Slicing And Slices\n\n\nThe way Python implements slicing is via overloading the `get-item` function call.\nThis is the ca"
  }
]
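The manifest above lists `src/libpython_clj2/python.clj` as the core namespace ("Python bindings for Clojure. This library dynamically finds the installed python, loads ..."). As a rough orientation for readers of this extraction, a minimal usage sketch might look like the following; it assumes a local Python installation is discoverable and that NumPy is installed, and is illustrative rather than a verified snippet from the repository's docs:

```clojure
(ns example.core
  (:require [libpython-clj2.python :as py]))

;; Find and load the installed Python shared library.
(py/initialize!)

;; Import a Python module as a Clojure value.
(def np (py/import-module "numpy"))

;; Call a module attribute: numpy.linspace(0, 2, 5).
(def arr (py/call-attr np "linspace" 0 2 5))

;; Copy the Python object into a JVM data structure.
(println (py/->jvm arr))
```

The `py.` / `py.-` reader macros mentioned in the test previews (e.g. `test/libpython_clj2/python_test.clj`) provide sugar over the same attribute-access and call pathways.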

About this extraction

This page contains the full source code of the cnuernber/libpython-clj GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 101 files (1.6 MB), approximately 473.9k tokens, and a symbol index with 27 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
