Repository: OpenSourceEconomics/estimagic
Branch: main
Commit: 15d2ef5b65ba
Files: 381
Total size: 4.6 MB

Directory structure:
estimagic/

├── .github/
│   ├── CODE_OF_CONDUCT.md
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug-report.md
│   │   ├── enhancement.md
│   │   └── feature_request.md
│   ├── PULL_REQUEST_TEMPLATE/
│   │   └── pull_request_template.md
│   └── workflows/
│       ├── main.yml
│       └── publish-to-pypi.yml
├── .gitignore
├── .pre-commit-config.yaml
├── .readthedocs.yml
├── .tools/
│   ├── create_algo_selection_code.py
│   ├── test_create_algo_selection_code.py
│   └── update_algo_selection_hook.py
├── .yamllint.yml
├── CHANGES.md
├── CITATION
├── LICENSE
├── README.md
├── codecov.yml
├── docs/
│   ├── Makefile
│   ├── make.bat
│   └── source/
│       ├── _static/
│       │   ├── css/
│       │   │   ├── custom.css
│       │   │   ├── termynal.css
│       │   │   └── termynal_custom.css
│       │   └── js/
│       │       ├── custom.js
│       │       ├── require.js
│       │       └── termynal.js
│       ├── algorithms.md
│       ├── conf.py
│       ├── development/
│       │   ├── changes.md
│       │   ├── code_of_conduct.md
│       │   ├── credits.md
│       │   ├── enhancement_proposals.md
│       │   ├── ep-00-governance-model.md
│       │   ├── ep-01-pytrees.md
│       │   ├── ep-02-typing.md
│       │   ├── ep-03-alignment.md
│       │   ├── how_to_contribute.md
│       │   ├── index.md
│       │   └── styleguide.md
│       ├── estimagic/
│       │   ├── explanation/
│       │   │   ├── bootstrap_ci.md
│       │   │   ├── bootstrap_montecarlo_comparison.ipynb
│       │   │   ├── cluster_robust_likelihood_inference.md
│       │   │   └── index.md
│       │   ├── index.md
│       │   ├── reference/
│       │   │   └── index.md
│       │   └── tutorials/
│       │       ├── bootstrap_overview.ipynb
│       │       ├── estimation_tables_overview.ipynb
│       │       ├── index.md
│       │       ├── likelihood_overview.ipynb
│       │       └── msm_overview.ipynb
│       ├── explanation/
│       │   ├── explanation_of_numerical_optimizers.md
│       │   ├── implementation_of_constraints.md
│       │   ├── index.md
│       │   ├── internal_optimizers.md
│       │   ├── numdiff_background.md
│       │   ├── tests_for_supported_optimizers.md
│       │   └── why_optimization_is_hard.ipynb
│       ├── how_to/
│       │   ├── how_to_add_optimizers.ipynb
│       │   ├── how_to_algorithm_selection.ipynb
│       │   ├── how_to_benchmarking.ipynb
│       │   ├── how_to_bounds.ipynb
│       │   ├── how_to_change_plotting_backend.ipynb
│       │   ├── how_to_constraints.md
│       │   ├── how_to_criterion_function.ipynb
│       │   ├── how_to_derivatives.ipynb
│       │   ├── how_to_document_optimizers.md
│       │   ├── how_to_errors_during_optimization.ipynb
│       │   ├── how_to_globalization.ipynb
│       │   ├── how_to_logging.ipynb
│       │   ├── how_to_multistart.ipynb
│       │   ├── how_to_scaling.md
│       │   ├── how_to_slice_plot.ipynb
│       │   ├── how_to_slice_plot_3d.ipynb
│       │   ├── how_to_specify_algorithm_and_algo_options.md
│       │   ├── how_to_start_parameters.md
│       │   ├── how_to_visualize_histories.ipynb
│       │   └── index.md
│       ├── index.md
│       ├── installation.md
│       ├── reference/
│       │   ├── algo_options.md
│       │   ├── batch_evaluators.md
│       │   ├── index.md
│       │   ├── typing.md
│       │   └── utilities.md
│       ├── refs.bib
│       ├── tutorials/
│       │   ├── bayes_opt_tutorial.ipynb
│       │   ├── index.md
│       │   ├── numdiff_overview.ipynb
│       │   └── optimization_overview.ipynb
│       └── videos.md
├── pyproject.toml
├── src/
│   ├── estimagic/
│   │   ├── __init__.py
│   │   ├── batch_evaluators.py
│   │   ├── bootstrap.py
│   │   ├── bootstrap_ci.py
│   │   ├── bootstrap_helpers.py
│   │   ├── bootstrap_outcomes.py
│   │   ├── bootstrap_samples.py
│   │   ├── config.py
│   │   ├── estimate_ml.py
│   │   ├── estimate_msm.py
│   │   ├── estimation_summaries.py
│   │   ├── estimation_table.py
│   │   ├── examples/
│   │   │   ├── __init__.py
│   │   │   ├── diabetes.csv
│   │   │   ├── exam_points.csv
│   │   │   ├── logit.py
│   │   │   └── sensitivity_probit_example_data.csv
│   │   ├── lollipop_plot.py
│   │   ├── ml_covs.py
│   │   ├── msm_covs.py
│   │   ├── msm_sensitivity.py
│   │   ├── msm_weighting.py
│   │   ├── py.typed
│   │   ├── shared_covs.py
│   │   └── utilities.py
│   └── optimagic/
│       ├── __init__.py
│       ├── algorithms.py
│       ├── batch_evaluators.py
│       ├── benchmarking/
│       │   ├── __init__.py
│       │   ├── benchmark_reports.py
│       │   ├── cartis_roberts.py
│       │   ├── get_benchmark_problems.py
│       │   ├── more_wild.py
│       │   ├── noise_distributions.py
│       │   ├── process_benchmark_results.py
│       │   └── run_benchmark.py
│       ├── config.py
│       ├── constraints.py
│       ├── decorators.py
│       ├── deprecations.py
│       ├── differentiation/
│       │   ├── __init__.py
│       │   ├── derivatives.py
│       │   ├── finite_differences.py
│       │   ├── generate_steps.py
│       │   ├── numdiff_options.py
│       │   └── richardson_extrapolation.py
│       ├── examples/
│       │   ├── __init__.py
│       │   ├── criterion_functions.py
│       │   └── numdiff_functions.py
│       ├── exceptions.py
│       ├── logging/
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── logger.py
│       │   ├── read_log.py
│       │   ├── sqlalchemy.py
│       │   └── types.py
│       ├── mark.py
│       ├── optimization/
│       │   ├── __init__.py
│       │   ├── algo_options.py
│       │   ├── algorithm.py
│       │   ├── convergence_report.py
│       │   ├── create_optimization_problem.py
│       │   ├── error_penalty.py
│       │   ├── fun_value.py
│       │   ├── history.py
│       │   ├── internal_optimization_problem.py
│       │   ├── multistart.py
│       │   ├── multistart_options.py
│       │   ├── optimization_logging.py
│       │   ├── optimize.py
│       │   ├── optimize_result.py
│       │   ├── process_results.py
│       │   └── scipy_aliases.py
│       ├── optimizers/
│       │   ├── __init__.py
│       │   ├── _pounders/
│       │   │   ├── __init__.py
│       │   │   ├── _conjugate_gradient.py
│       │   │   ├── _steihaug_toint.py
│       │   │   ├── _trsbox.py
│       │   │   ├── bntr.py
│       │   │   ├── gqtpar.py
│       │   │   ├── linear_subsolvers.py
│       │   │   ├── pounders_auxiliary.py
│       │   │   └── pounders_history.py
│       │   ├── bayesian_optimizer.py
│       │   ├── bhhh.py
│       │   ├── fides.py
│       │   ├── gfo_optimizers.py
│       │   ├── iminuit_migrad.py
│       │   ├── ipopt.py
│       │   ├── nag_optimizers.py
│       │   ├── neldermead.py
│       │   ├── nevergrad_optimizers.py
│       │   ├── nlopt_optimizers.py
│       │   ├── pounders.py
│       │   ├── pygad/
│       │   │   └── __init__.py
│       │   ├── pygad_optimizer.py
│       │   ├── pygmo_optimizers.py
│       │   ├── pyswarms_optimizers.py
│       │   ├── scipy_optimizers.py
│       │   ├── tao_optimizers.py
│       │   └── tranquilo.py
│       ├── parameters/
│       │   ├── __init__.py
│       │   ├── block_trees.py
│       │   ├── bounds.py
│       │   ├── check_constraints.py
│       │   ├── consolidate_constraints.py
│       │   ├── constraint_tools.py
│       │   ├── conversion.py
│       │   ├── kernel_transformations.py
│       │   ├── nonlinear_constraints.py
│       │   ├── process_constraints.py
│       │   ├── process_selectors.py
│       │   ├── scale_conversion.py
│       │   ├── scaling.py
│       │   ├── space_conversion.py
│       │   ├── tree_conversion.py
│       │   └── tree_registry.py
│       ├── py.typed
│       ├── sandbox.py
│       ├── shared/
│       │   ├── __init__.py
│       │   ├── check_option_dicts.py
│       │   ├── compat.py
│       │   └── process_user_function.py
│       ├── timing.py
│       ├── type_conversion.py
│       ├── typing.py
│       ├── utilities.py
│       └── visualization/
│           ├── __init__.py
│           ├── backends.py
│           ├── convergence_plot.py
│           ├── deviation_plot.py
│           ├── history_plots.py
│           ├── plotting_utilities.py
│           ├── profile_plot.py
│           ├── slice_plot.py
│           └── slice_plot_3d.py
└── tests/
    ├── __init__.py
    ├── conftest.py
    ├── estimagic/
    │   ├── __init__.py
    │   ├── examples/
    │   │   └── test_logit.py
    │   ├── pickled_statsmodels_ml_covs/
    │   │   ├── logit_hessian.pickle
    │   │   ├── logit_hessian_matrix.pickle
    │   │   ├── logit_jacobian.pickle
    │   │   ├── logit_jacobian_matrix.pickle
    │   │   ├── logit_sandwich.pickle
    │   │   ├── probit_hessian.pickle
    │   │   ├── probit_hessian_matrix.pickle
    │   │   ├── probit_jacobian.pickle
    │   │   ├── probit_jacobian_matrix.pickle
    │   │   └── probit_sandwich.pickle
    │   ├── test_bootstrap.py
    │   ├── test_bootstrap_ci.py
    │   ├── test_bootstrap_outcomes.py
    │   ├── test_bootstrap_samples.py
    │   ├── test_estimate_ml.py
    │   ├── test_estimate_msm.py
    │   ├── test_estimate_msm_dict_params_and_moments.py
    │   ├── test_estimation_table.py
    │   ├── test_lollipop_plot.py
    │   ├── test_ml_covs.py
    │   ├── test_msm_covs.py
    │   ├── test_msm_sensitivity.py
    │   ├── test_msm_sensitivity_via_estimate_msm.py
    │   ├── test_msm_weighting.py
    │   └── test_shared.py
    └── optimagic/
        ├── __init__.py
        ├── benchmarking/
        │   ├── __init__.py
        │   ├── test_benchmark_reports.py
        │   ├── test_cartis_roberts.py
        │   ├── test_get_benchmark_problems.py
        │   ├── test_more_wild.py
        │   ├── test_noise_distributions.py
        │   └── test_run_benchmark.py
        ├── differentiation/
        │   ├── binary_choice_inputs.pickle
        │   ├── test_compare_derivatives_with_jax.py
        │   ├── test_derivatives.py
        │   ├── test_finite_differences.py
        │   ├── test_generate_steps.py
        │   └── test_numdiff_options.py
        ├── examples/
        │   └── test_criterion_functions.py
        ├── logging/
        │   ├── test_base.py
        │   ├── test_logger.py
        │   ├── test_sqlalchemy.py
        │   └── test_types.py
        ├── optimization/
        │   ├── test_algorithm.py
        │   ├── test_convergence_report.py
        │   ├── test_create_optimization_problem.py
        │   ├── test_error_penalty.py
        │   ├── test_fun_value.py
        │   ├── test_function_formats_ls.py
        │   ├── test_function_formats_scalar.py
        │   ├── test_history.py
        │   ├── test_history_collection.py
        │   ├── test_infinite_and_incomplete_bounds.py
        │   ├── test_internal_optimization_problem.py
        │   ├── test_invalid_jacobian_value.py
        │   ├── test_jax_derivatives.py
        │   ├── test_many_algorithms.py
        │   ├── test_multistart.py
        │   ├── test_multistart_options.py
        │   ├── test_optimize.py
        │   ├── test_optimize_result.py
        │   ├── test_params_versions.py
        │   ├── test_process_result.py
        │   ├── test_scipy_aliases.py
        │   ├── test_useful_exceptions.py
        │   ├── test_with_advanced_constraints.py
        │   ├── test_with_bounds.py
        │   ├── test_with_constraints.py
        │   ├── test_with_logging.py
        │   ├── test_with_multistart.py
        │   ├── test_with_nonlinear_constraints.py
        │   └── test_with_scaling.py
        ├── optimizers/
        │   ├── __init__.py
        │   ├── _pounders/
        │   │   ├── __init__.py
        │   │   ├── fixtures/
        │   │   │   ├── add_points_until_main_model_fully_linear_i.yaml
        │   │   │   ├── add_points_until_main_model_fully_linear_ii.yaml
        │   │   │   ├── find_affine_points_nonzero_i.yaml
        │   │   │   ├── find_affine_points_nonzero_ii.yaml
        │   │   │   ├── find_affine_points_nonzero_iii.yaml
        │   │   │   ├── find_affine_points_zero_i.yaml
        │   │   │   ├── find_affine_points_zero_ii.yaml
        │   │   │   ├── find_affine_points_zero_iii.yaml
        │   │   │   ├── find_affine_points_zero_iv.yaml
        │   │   │   ├── get_coefficients_residual_model.yaml
        │   │   │   ├── get_interpolation_matrices_residual_model.yaml
        │   │   │   ├── interpolate_f_iter_4.yaml
        │   │   │   ├── interpolate_f_iter_7.yaml
        │   │   │   ├── pounders_example_data.csv
        │   │   │   ├── scalar_model.pkl
        │   │   │   ├── update_initial_residual_model.yaml
        │   │   │   ├── update_intial_residual_model.yaml
        │   │   │   ├── update_main_from_residual_model.yaml
        │   │   │   ├── update_main_with_new_accepted_x.yaml
        │   │   │   ├── update_residual_model.yaml
        │   │   │   └── update_residual_model_with_new_accepted_x.yaml
        │   │   ├── test_linear_subsolvers.py
        │   │   ├── test_pounders_history.py
        │   │   ├── test_pounders_unit.py
        │   │   └── test_quadratic_subsolvers.py
        │   ├── test_bayesian_optimizer.py
        │   ├── test_bhhh.py
        │   ├── test_fides_options.py
        │   ├── test_gfo_optimizers.py
        │   ├── test_iminuit_migrad.py
        │   ├── test_ipopt_options.py
        │   ├── test_nag_optimizers.py
        │   ├── test_neldermead.py
        │   ├── test_nevergrad.py
        │   ├── test_pounders_integration.py
        │   ├── test_pygad_optimizer.py
        │   ├── test_pygmo_optimizers.py
        │   ├── test_pyswarms_optimizers.py
        │   ├── test_tao_optimizers.py
        │   └── test_tranquilo.py
        ├── parameters/
        │   ├── test_block_trees.py
        │   ├── test_bounds.py
        │   ├── test_check_constraints.py
        │   ├── test_constraint_tools.py
        │   ├── test_conversion.py
        │   ├── test_kernel_transformations.py
        │   ├── test_nonlinear_constraints.py
        │   ├── test_process_constraints.py
        │   ├── test_process_selectors.py
        │   ├── test_scale_conversion.py
        │   ├── test_scaling.py
        │   ├── test_space_conversion.py
        │   ├── test_tree_conversion.py
        │   └── test_tree_registry.py
        ├── shared/
        │   ├── __init__.py
        │   └── test_process_user_functions.py
        ├── test_algo_selection.py
        ├── test_batch_evaluators.py
        ├── test_constraints.py
        ├── test_decorators.py
        ├── test_deprecations.py
        ├── test_mark.py
        ├── test_timing.py
        ├── test_type_conversion.py
        ├── test_typed_dicts_consistency.py
        ├── test_utilities.py
        └── visualization/
            ├── test_backends.py
            ├── test_convergence_plot.py
            ├── test_deviation_plot.py
            ├── test_history_plots.py
            ├── test_plotting_utilities.py
            ├── test_profile_plot.py
            ├── test_slice_plot.py
            └── test_slice_plot_3d.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/CODE_OF_CONDUCT.md
================================================
# Code of Conduct

- [NumFOCUS Code of Conduct](https://numfocus.org/code-of-conduct)


================================================
FILE: .github/ISSUE_TEMPLATE/bug-report.md
================================================
---
name: Bug Report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''

---

### Bug description

A clear and concise description of what the bug is.

### To Reproduce

Ideally, provide a minimal code example. If that's not possible, describe steps to reproduce the bug.

### Expected behavior

A clear and concise description of what you expected to happen.

### Screenshots/Error messages

If applicable, add screenshots to help explain your problem.

### System

 - OS: [e.g. Ubuntu 18.04]
 - Version: [e.g. 0.0.1]


================================================
FILE: .github/ISSUE_TEMPLATE/enhancement.md
================================================
---
name: Enhancement
about: Enhance an existing component.
title: ''
labels: enhancement
assignees: ''

---

* optimagic version used, if any:
* Python version, if any:
* Operating System:

### What would you like to enhance and why? Is it related to an issue/problem?

A clear and concise description of the current implementation and its limitations.

### Describe the solution you'd like

A clear and concise description of what you want to happen.

### Describe alternatives you've considered

A clear and concise description of any alternative solutions or features you've
considered and why you have discarded them.


================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: feature-request
assignees: ''

---

### Current situation

A clear and concise description of what the problem is, e.g. "I'm always frustrated when [...]" or "Currently there is no way of [...]".

### Desired Situation

What functionality should become possible or easier?

### Proposed implementation

How would you implement the new feature? Did you consider alternative implementations?
You can start by describing interface changes like a new argument or a new function. There is no need to get too detailed here.


================================================
FILE: .github/PULL_REQUEST_TEMPLATE/pull_request_template.md
================================================
### What problem do you want to solve?

Reference the issue or discussion, if there is any. Provide a description of your
proposed solution.

### Todo

- [ ] Target the right branch and pick an appropriate title.
- [ ] Put `Closes #XXXX` in the first PR comment to auto-close the relevant issue once
  the PR is accepted. This is not applicable if there is no corresponding issue.
- [ ] Any steps that still need to be done.


================================================
FILE: .github/workflows/main.yml
================================================
---
name: main
concurrency:
  group: ${{ github.head_ref || github.run_id }}
  cancel-in-progress: true
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - '*'
jobs:
  run-tests-linux:
    name: Run tests on ubuntu-latest py${{ matrix.python-version }}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version:
          - '312'
          - '313'
          - '314'
    steps:
      - uses: actions/checkout@v4
      - uses: prefix-dev/setup-pixi@v0.9.4
        with:
          pixi-version: v0.65.0
          cache: true
          cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}
          frozen: true
          environments: tests-linux-py${{ matrix.python-version }}
      - name: Run pytest
        shell: bash -el {0}
        run: pixi run -e tests-linux-py${{ matrix.python-version }} tests-with-cov
      - name: Upload coverage report.
        if: matrix.python-version == '312'
        uses: codecov/codecov-action@v4
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
  run-tests-win-and-mac:
    name: Run tests on ${{ matrix.os }} py${{ matrix.python-version }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os:
          - macos-latest
          - windows-latest
        python-version:
          - '312'
          - '313'
          - '314'
    steps:
      - uses: actions/checkout@v4
      - uses: prefix-dev/setup-pixi@v0.9.4
        with:
          pixi-version: v0.65.0
          cache: true
          cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}
          frozen: true
          environments: tests-py${{ matrix.python-version }}
      - name: Run pytest
        shell: bash -el {0}
        run: pixi run -e tests-py${{ matrix.python-version }} tests-fast
  run-tests-with-old-plotly:
    name: Run tests on ubuntu-latest with plotly < 6
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: prefix-dev/setup-pixi@v0.9.4
        with:
          pixi-version: v0.65.0
          cache: true
          cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}
          frozen: true
          environments: tests-old-plotly
      - name: Run pytest
        shell: bash -el {0}
        run: pixi run -e tests-old-plotly tests-fast
  run-tests-nevergrad:
    name: Run nevergrad tests py${{ matrix.python-version }}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version:
          - '312'
          - '313'
          - '314'
    steps:
      - uses: actions/checkout@v4
      - uses: prefix-dev/setup-pixi@v0.9.4
        with:
          pixi-version: v0.65.0
          cache: true
          cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}
          frozen: true
          environments: tests-nevergrad-py${{ matrix.python-version }}
      - name: Run pytest
        shell: bash -el {0}
        run: >-
          pixi run -e tests-nevergrad-py${{ matrix.python-version }}
          pytest tests/optimagic/optimizers/test_nevergrad.py
  code-in-docs:
    name: Run code snippets in documentation
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: prefix-dev/setup-pixi@v0.9.4
        with:
          pixi-version: v0.65.0
          cache: true
          cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}
          frozen: true
          environments: tests-linux-py314
      - name: Run doctest
        shell: bash -el {0}
        run: >-
          pixi run -e tests-linux-py314
          python -m doctest -v docs/source/how_to/how_to_constraints.md
  run-mypy:
    name: Run mypy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: prefix-dev/setup-pixi@v0.9.4
        with:
          pixi-version: v0.65.0
          cache: true
          cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}
          frozen: true
          environments: type-checking
      - name: Run mypy
        shell: bash -el {0}
        run: pixi run -e type-checking mypy
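
The two test jobs above each declare a `strategy.matrix`; GitHub Actions expands such a matrix into one job per combination of axis values. As a quick illustrative sketch (not part of the repository), the expansion of the `run-tests-win-and-mac` matrix is a Cartesian product:

```python
from itertools import product

# Sketch only: GitHub Actions expands `strategy.matrix` into the Cartesian
# product of its axes. For run-tests-win-and-mac, 2 OSes x 3 Python versions
# yields 6 jobs, named via the job's `name` template.
oses = ["macos-latest", "windows-latest"]
python_versions = ["312", "313", "314"]

jobs = [f"Run tests on {os_} py{py}" for os_, py in product(oses, python_versions)]

for name in jobs:
    print(name)
```

With `fail-fast: false`, the remaining combinations keep running even if one of these six jobs fails.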


================================================
FILE: .github/workflows/publish-to-pypi.yml
================================================
---
name: PyPI
on: push
jobs:
  build-n-publish:
    name: Build and publish optimagic Python 🐍 distributions 📦 to PyPI
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python 3.12
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - name: Install pypa/build
        run: >-
          python -m
          pip install
          build
          --user
      - name: Build a binary wheel and a source tarball
        run: >-
          python -m
          build
          --sdist
          --wheel
          --outdir dist/
      - name: Publish distribution 📦 to PyPI
        if: startsWith(github.ref, 'refs/tags')
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN_OPTIMAGIC }}
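
Note that the workflow builds the distribution on every push, but the publish step is gated by `if: startsWith(github.ref, 'refs/tags')`, so only tag pushes upload to PyPI. A minimal sketch of that gate's semantics (illustrative only, not part of the repository):

```python
# Sketch only: the `startsWith(github.ref, 'refs/tags')` expression in the
# publish step behaves like a string-prefix check on the pushed ref.
def should_publish(github_ref: str) -> bool:
    return github_ref.startswith("refs/tags")

print(should_publish("refs/tags/v0.5.0"))  # tag push -> publish to PyPI
print(should_publish("refs/heads/main"))   # branch push -> build only
```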


================================================
FILE: .gitignore
================================================
# AI
CLAUDE.md

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# MacOS specific service store
.DS_Store

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
*build/

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
*.sublime-workspace
*.sublime-project

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/
docs/build/
docs/source/_build/
docs/source/**/*.db
docs/source/**/*.db-shm
docs/source/**/*.db-wal
docs/source/refs.bib.bak

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
.pixi/

# Spyder project settings
.spyderproject
.spyproject

# VSCode project settings
.vscode

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

*notes/

.idea/

*.bak


*.db


.pytask.sqlite3


src/estimagic/_version.py
src/optimagic/_version.py

*.~lock.*


================================================
FILE: .pre-commit-config.yaml
================================================
---
repos:
  - repo: meta
    hooks:
      - id: check-hooks-apply
      - id: check-useless-excludes
        # - id: identity  # Prints all files passed to pre-commits. Debugging.
  - repo: https://github.com/lyz-code/yamlfix
    rev: 1.19.1
    hooks:
      - id: yamlfix
        exclude: tests/optimagic/optimizers/_pounders/fixtures
  - repo: local
    hooks:
      - id: update-algo-selection-code
        name: update algo selection code
        entry: python .tools/update_algo_selection_hook.py
        language: python
        files: ^(src/optimagic/optimizers/|src/optimagic/algorithms\.py|\.tools/)
        require_serial: true
        additional_dependencies:
          - hatchling
          - ruff
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v6.0.0
    hooks:
      - id: check-added-large-files
        args:
          - --maxkb=2500
        exclude: tests/optimagic/optimizers/_pounders/fixtures/
      - id: check-case-conflict
      - id: check-merge-conflict
      - id: check-vcs-permalinks
      - id: check-yaml
      - id: check-toml
      - id: debug-statements
      - id: end-of-file-fixer
      - id: fix-byte-order-marker
        types:
          - text
      - id: forbid-submodules
      - id: mixed-line-ending
        args:
          - --fix=lf
        description: Replaces line endings with the UNIX 'lf' character.
      - id: name-tests-test
        args:
          - --pytest-test-first
      - id: no-commit-to-branch
        args:
          - --branch
          - main
      - id: trailing-whitespace
        exclude: docs/
      - id: check-ast
  - repo: https://github.com/adrienverge/yamllint.git
    rev: v1.38.0
    hooks:
      - id: yamllint
        exclude: tests/optimagic/optimizers/_pounders/fixtures
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.15.5
    hooks:
      # Run the linter.
      - id: ruff
        types_or:
          - python
          - pyi
          - jupyter
        args:
          - --fix
      # Run the formatter.
      - id: ruff-format
        types_or:
          - python
          - pyi
          - jupyter
  - repo: https://github.com/executablebooks/mdformat
    rev: 1.0.0
    hooks:
      - id: mdformat
        additional_dependencies:
          - mdformat-gfm
          - mdformat-gfm-alerts
          - mdformat-ruff
        args:
          - --wrap
          - '88'
        files: (README\.md)
  - repo: https://github.com/executablebooks/mdformat
    rev: 1.0.0
    hooks:
      - id: mdformat
        additional_dependencies:
          - mdformat-myst
          - mdformat-ruff
        args:
          - --wrap
          - '88'
        files: (docs/.)
        exclude: docs/source/how_to/how_to_specify_algorithm_and_algo_options.md
  - repo: https://github.com/kynan/nbstripout
    rev: 0.9.1
    hooks:
      - id: nbstripout
        exclude: |
          (?x)^(
            docs/source/estimagic/tutorials/estimation_tables_overview.ipynb|
            docs/source/estimagic/explanation/bootstrap_montecarlo_comparison.ipynb|
          )$
        args:
          - --drop-empty-cells
ci:
  autoupdate_schedule: monthly
  skip:
    - update-algo-selection-code
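
The local `update-algo-selection-code` hook above only runs on paths matching its `files` regex. As an illustrative sketch (not part of the repository), assuming pre-commit's usual `re.search`-style matching on repo-relative paths:

```python
import re

# Sketch only: the `files` pattern of the update-algo-selection-code hook.
# pre-commit matches it against each staged file's repo-relative path.
FILES = r"^(src/optimagic/optimizers/|src/optimagic/algorithms\.py|\.tools/)"


def hook_applies(path: str) -> bool:
    return re.search(FILES, path) is not None


print(hook_applies("src/optimagic/optimizers/scipy_optimizers.py"))  # True
print(hook_applies("src/optimagic/algorithms.py"))                   # True
print(hook_applies("src/estimagic/utilities.py"))                    # False
```

So the generated `algorithms.py` is refreshed whenever an optimizer module, the generated file itself, or the generator tooling changes.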


================================================
FILE: .readthedocs.yml
================================================
---
version: 2
build:
  os: ubuntu-24.04
  tools:
    python: '3.14'
  jobs:
    create_environment:
      - asdf plugin add pixi
      - asdf install pixi latest
      - asdf global pixi latest
    post_build:
      - pixi run -e docs build-docs
      - mkdir --parents $READTHEDOCS_OUTPUT/html/
      - cp -a docs/build/html/. "$READTHEDOCS_OUTPUT/html" && rm -r docs/build


================================================
FILE: .tools/create_algo_selection_code.py
================================================
import importlib
import inspect
import pkgutil
import textwrap
from itertools import combinations
from types import ModuleType
from typing import Callable, Type

from optimagic.config import OPTIMAGIC_ROOT
from optimagic.optimization.algorithm import Algorithm
from optimagic.typing import AggregationLevel


def main() -> None:
    """Create the source code for algorithms.py.

    The main part of the generated code consists of nested dataclasses that enable
    filtered autocomplete for algorithm selection. Creating them entails the
    following steps:

    - Discover all modules that contain optimizer classes
    - Collect all optimizer classes
    - Create a mapping from a tuple of categories (e.g. Global, Bounded, ...) to the
      optimizer classes that belong to them. To find out which optimizers need to be
      included we use the attributes stored in optimizer_class.__algo_info__.
    - Create the dataclasses that enable autocomplete for algorithm selection

    In addition, we need to create the code for import statements, an AlgoSelection
    base class, and some code to instantiate the dataclasses.

    """
    # create some basic inputs
    docstring = _get_docstring_code()
    modules = _import_optimizer_modules("optimagic.optimizers")
    all_algos = _get_all_algorithms(modules)
    filters = _get_filters()
    all_categories = list(filters)
    selection_info = _create_selection_info(all_algos, all_categories)

    # create the code for imports
    imports = _get_imports(modules)

    # create the code for the ABC AlgoSelection class
    parent_class_snippet = _get_base_class_code()

    # create the code for the dataclasses
    dataclass_snippets = []
    for active_categories in selection_info:
        new_snippet = create_dataclass_code(
            active_categories=active_categories,
            all_categories=all_categories,
            selection_info=selection_info,
        )
        dataclass_snippets.append(new_snippet)

    # create the code for the instantiation
    instantiation_snippet = _get_instantiation_code()

    # Combine all the content into a single string
    content = (
        docstring
        + imports
        + "\n\n"
        + parent_class_snippet
        + "\n"
        + "\n\n".join(dataclass_snippets)
        + "\n\n"
        + instantiation_snippet
    )

    # Write the combined content to the file
    with (OPTIMAGIC_ROOT / "algorithms.py").open("w") as f:
        f.write(content)


# ======================================================================================
# Functions to collect algorithms
# ======================================================================================


def _import_optimizer_modules(package_name: str) -> list[ModuleType]:
    """Collect all public modules in a given package in a list."""
    package = importlib.import_module(package_name)
    modules = []

    for _, module_name, is_pkg in pkgutil.walk_packages(
        package.__path__, package.__name__ + "."
    ):
        module_parts = module_name.split(".")
        if all(not part.startswith("_") for part in module_parts) and not is_pkg:
            module = importlib.import_module(module_name)
            modules.append(module)

    return modules


def _get_all_algorithms(modules: list[ModuleType]) -> dict[str, Type[Algorithm]]:
    """Collect all algorithms in modules."""
    out = {}
    for module in modules:
        out.update(_get_algorithms_in_module(module))
    return out


def _get_algorithms_in_module(module: ModuleType) -> dict[str, Type[Algorithm]]:
    """Collect all algorithms in a single module."""
    candidate_dict = dict(inspect.getmembers(module, inspect.isclass))
    candidate_dict = {
        k: v for k, v in candidate_dict.items() if hasattr(v, "__algo_info__")
    }
    algos = {}
    for candidate in candidate_dict.values():
        name = candidate.algo_info.name
        if issubclass(candidate, Algorithm) and candidate is not Algorithm:
            algos[name] = candidate
    return algos


# ======================================================================================
# Functions to filter algorithms by selectors
# ======================================================================================
def _is_gradient_based(algo: Type[Algorithm]) -> bool:
    return algo.algo_info.needs_jac  # type: ignore


def _is_gradient_free(algo: Type[Algorithm]) -> bool:
    return not _is_gradient_based(algo)


def _is_global(algo: Type[Algorithm]) -> bool:
    return algo.algo_info.is_global  # type: ignore


def _is_local(algo: Type[Algorithm]) -> bool:
    return not _is_global(algo)


def _is_bounded(algo: Type[Algorithm]) -> bool:
    return algo.algo_info.supports_bounds  # type: ignore


def _is_linear_constrained(algo: Type[Algorithm]) -> bool:
    return algo.algo_info.supports_linear_constraints  # type: ignore


def _is_nonlinear_constrained(algo: Type[Algorithm]) -> bool:
    return algo.algo_info.supports_nonlinear_constraints  # type: ignore


def _is_scalar(algo: Type[Algorithm]) -> bool:
    return algo.algo_info.solver_type == AggregationLevel.SCALAR  # type: ignore


def _is_least_squares(algo: Type[Algorithm]) -> bool:
    return algo.algo_info.solver_type == AggregationLevel.LEAST_SQUARES  # type: ignore


def _is_likelihood(algo: Type[Algorithm]) -> bool:
    return algo.algo_info.solver_type == AggregationLevel.LIKELIHOOD  # type: ignore


def _is_parallel(algo: Type[Algorithm]) -> bool:
    return algo.algo_info.supports_parallelism  # type: ignore


def _get_filters() -> dict[str, Callable[[Type[Algorithm]], bool]]:
    """Create a dict mapping from category names to filter functions."""
    filters: dict[str, Callable[[Type[Algorithm]], bool]] = {
        "GradientBased": _is_gradient_based,
        "GradientFree": _is_gradient_free,
        "Global": _is_global,
        "Local": _is_local,
        "Bounded": _is_bounded,
        "LinearConstrained": _is_linear_constrained,
        "NonlinearConstrained": _is_nonlinear_constrained,
        "Scalar": _is_scalar,
        "LeastSquares": _is_least_squares,
        "Likelihood": _is_likelihood,
        "Parallel": _is_parallel,
    }
    return filters


# ======================================================================================
# Functions to create a mapping from a tuple of selectors to subsets of the dict
# mapping algorithm names to algorithm classes
# ======================================================================================


def _create_selection_info(
    all_algos: dict[str, Type[Algorithm]],
    categories: list[str],
) -> dict[tuple[str, ...], dict[str, Type[Algorithm]]]:
    """Create a dict mapping from a tuple of selectors to subsets of the all_algos dict.

    Args:
        all_algos: Dictionary mapping algorithm names to algorithm classes.
        categories: List of categories to filter by.

    Returns:
        A dictionary mapping tuples of selectors to dictionaries of algorithm names
            and their corresponding classes.

    """
    category_combinations = _generate_category_combinations(categories)
    out = {}
    for comb in category_combinations:
        filtered_algos = _apply_filters(all_algos, comb)
        if filtered_algos:
            out[comb] = filtered_algos
    return out


def _generate_category_combinations(categories: list[str]) -> list[tuple[str, ...]]:
    """Generate all combinations of categories, sorted by length in descending order.

    Args:
        categories: A list of category names.

    Returns:
        A list of tuples, where each tuple represents a combination of categories.

    """
    result: list[tuple[str, ...]] = []
    for r in range(len(categories) + 1):
        result.extend(map(tuple, map(sorted, combinations(categories, r))))
    return sorted(result, key=len, reverse=True)


def _apply_filters(
    all_algos: dict[str, Type[Algorithm]], categories: tuple[str, ...]
) -> dict[str, Type[Algorithm]]:
    """Apply filters to the algorithms based on the given categories.

    Args:
        all_algos: A dictionary mapping algorithm names to algorithm classes.
        categories: A tuple of category names to filter by.

    Returns:
        filtered dictionary of algorithms that match all given categories.

    """
    filtered = all_algos
    filters = _get_filters()
    for category in categories:
        filter_func = filters[category]
        filtered = {name: algo for name, algo in filtered.items() if filter_func(algo)}
    return filtered


# ======================================================================================
# Functions to create code for the dataclasses
# ======================================================================================


def create_dataclass_code(
    active_categories: tuple[str, ...],
    all_categories: list[str],
    selection_info: dict[tuple[str, ...], dict[str, Type[Algorithm]]],
) -> str:
    """Create the source code for a dataclass representing a selection of algorithms.

    Args:
        active_categories: A tuple of active category names.
        all_categories: A list of all category names.
        selection_info: A dictionary that maps tuples of category names to dictionaries
            of algorithm names and their corresponding classes.

    Returns:
        A string containing the source code for the dataclass.

    """
    # get the children of the active categories
    children = _get_children(active_categories, all_categories, selection_info)

    # get the name of the class to be generated
    class_name = _get_class_name(active_categories)

    # get code for the dataclass fields
    field_template = "    {name}: Type[{class_name}] = {class_name}"
    field_strings = []
    for name, algo_class in selection_info[active_categories].items():
        field_strings.append(
            field_template.format(name=name, class_name=algo_class.__name__)
        )
    fields = "\n".join(field_strings)

    # get code for the properties to select children
    child_template = textwrap.dedent("""
        @property
        def {new_category}(self) -> {class_name}:
            return {class_name}()
    """)
    child_template = textwrap.indent(child_template, "    ")
    child_strings = []
    for new_category, categories in children.items():
        child_class_name = _get_class_name(categories)
        child_strings.append(
            child_template.format(
                new_category=new_category, class_name=child_class_name
            )
        )
    children_code = "\n".join(child_strings)

    # assemble the class
    out = "@dataclass(frozen=True)\n"
    out += f"class {class_name}(AlgoSelection):\n"
    out += fields + "\n"
    if children:
        out += children_code

    return out


def _get_class_name(active_categories: tuple[str, ...]) -> str:
    """Get the name of the class based on the active categories."""
    return "".join(active_categories) + "Algorithms"


def _get_children(
    active_categories: tuple[str, ...],
    all_categories: list[str],
    selection_info: dict[tuple[str, ...], dict[str, Type[Algorithm]]],
) -> dict[str, tuple[str, ...]]:
    """Get the children of the active categories.

    Args:
        active_categories: A tuple of active category names.
        all_categories: A list of all category names.
        selection_info: A dictionary that maps tuples of category names to dictionaries
            of algorithm names and their corresponding classes.

    Returns:
        A dict mapping additional categories to a sorted tuple of categories
            that contains all active categories and the additional category. Entries
            are only included if the selected categories are in `selection_info`, i.e.
            if there exist algorithms that are compatible with all categories.

    """
    inactive_categories = sorted(set(all_categories) - set(active_categories))
    out = {}
    for new_cat in inactive_categories:
        new_comb = tuple(sorted(active_categories + (new_cat,)))
        if new_comb in selection_info:
            out[new_cat] = new_comb
    return out


# ======================================================================================
# Functions to create the imports
# ======================================================================================


def _get_imports(modules: list[ModuleType]) -> str:
    """Create source code to import all algorithms."""
    snippets = [
        "from typing import Type",
        "from dataclasses import dataclass",
        "from optimagic.optimization.algorithm import Algorithm",
        "from typing import cast",
    ]
    for module in modules:
        algorithms = _get_algorithms_in_module(module)
        class_names = [algo.__name__ for algo in algorithms.values()]
        for class_name in class_names:
            snippets.append(f"from {module.__name__} import {class_name}")
    return "\n".join(snippets)


# ======================================================================================
# Functions to create the static parts of the code
# ======================================================================================


def _get_base_class_code() -> str:
    """Get the source code for the AlgoSelection class."""
    out = textwrap.dedent("""
        @dataclass(frozen=True)
        class AlgoSelection:

            def _all(self) -> list[Type[Algorithm]]:
                raw = [field.default for field in self.__dataclass_fields__.values()]
                return cast(list[Type[Algorithm]], raw)


            def _available(self) -> list[Type[Algorithm]]:
                _all = self._all()
                return [
                    a for a in _all if a.algo_info.is_available # type: ignore
                ]

            @property
            def All(self) -> list[Type[Algorithm]]:
                return self._all()

            @property
            def Available(self) -> list[Type[Algorithm]]:
                return self._available()

            @property
            def AllNames(self) -> list[str]:
                return [str(a.name) for a in self._all()]

            @property
            def AvailableNames(self) -> list[str]:
                return [str(a.name) for a in self._available()]

            @property
            def _all_algorithms_dict(self) -> dict[str, Type[Algorithm]]:
                return {str(a.name): a for a in self._all()}

            @property
            def _available_algorithms_dict(self) -> dict[str, Type[Algorithm]]:
                return {str(a.name): a for a in self._available()}

    """)
    return out


def _get_docstring_code() -> str:
    """Get the source code for the docstring of the AlgoSelection class."""
    raw = (
        '"""This code was auto-generated by a pre-commit hook and should not be '
        "changed.\n\nIf you manually change this code, all of your changes will be "
        "overwritten the next time\nthe pre-commit hook runs.\n\nDetailed information "
        "on the purpose of the code can be found here:\n"
        "https://optimagic.readthedocs.io/en/latest/development/ep-02-typing.html#"
        'algorithm-selection\n\n"""\n'
    )
    out = textwrap.dedent(raw)
    return out


def _get_instantiation_code() -> str:
    """Get the source code for instantiating some classes at the end of the module."""
    out = textwrap.dedent("""
        algos = Algorithms()
        global_algos = GlobalAlgorithms()

        ALL_ALGORITHMS = algos._all_algorithms_dict
        AVAILABLE_ALGORITHMS = algos._available_algorithms_dict
        GLOBAL_ALGORITHMS = global_algos._available_algorithms_dict
    """)
    return out


if __name__ == "__main__":
    main()

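The pipeline above (generate all sorted category combinations, apply the matching filters, drop empty selections, and emit one dataclass per surviving combination) can be sketched end to end on toy data. The names below (`ALGOS`, `FILTERS`, `selection_info`) are illustrative stand-ins, not part of the module; note that the empty combination survives and corresponds to the top-level `Algorithms` class instantiated at the end of the generated file.

```python
from itertools import combinations
from types import SimpleNamespace

# Toy "algorithm classes": only the attributes the toy filters need.
ALGOS = {
    "grad_local": SimpleNamespace(needs_jac=True, is_global=False),
    "free_global": SimpleNamespace(needs_jac=False, is_global=True),
}

FILTERS = {
    "GradientBased": lambda a: a.needs_jac,
    "GradientFree": lambda a: not a.needs_jac,
    "Global": lambda a: a.is_global,
    "Local": lambda a: not a.is_global,
}


def selection_info(algos, categories):
    # All sorted category subsets (including the empty one), longest first.
    combos = []
    for r in range(len(categories) + 1):
        combos.extend(map(tuple, map(sorted, combinations(categories, r))))
    combos.sort(key=len, reverse=True)
    # Keep only combinations for which at least one algorithm passes all
    # of the corresponding filters.
    out = {}
    for comb in combos:
        selected = {
            name: algo
            for name, algo in algos.items()
            if all(FILTERS[cat](algo) for cat in comb)
        }
        if selected:
            out[comb] = selected
    return out


info = selection_info(ALGOS, list(FILTERS))
# Contradictory combinations such as ("Global", "Local") are dropped because
# no algorithm satisfies both filters; the empty combination keeps everything.
assert ("Global", "Local") not in info
assert set(info[("GradientBased", "Local")]) == {"grad_local"}
assert set(info[()]) == {"grad_local", "free_global"}
```

Each surviving key then maps to a generated class name via the `_get_class_name` scheme, e.g. `("GradientBased", "Local")` becomes `GradientBasedLocalAlgorithms` and `()` becomes `Algorithms`.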

================================================
FILE: .tools/test_create_algo_selection_code.py
================================================
from create_algo_selection_code import _generate_category_combinations


def test_generate_category_combinations() -> None:
    categories = ["a", "b", "c"]
    got = _generate_category_combinations(categories)
    expected = [
        ("a", "b", "c"),
        ("a", "b"),
        ("a", "c"),
        ("b", "c"),
        ("a",),
        ("b",),
        ("c",),
    ]
    assert got == expected

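`_get_children` from create_algo_selection_code.py is likewise pure and easy to check in isolation. A standalone sketch of its lookup rule on toy data (the `get_children` name and the category values are illustrative, not from the module): a child entry exists for an inactive category only if combining it with the active categories still matches a non-empty selection, i.e. the combined tuple is a key of `selection_info`.

```python
def get_children(active, all_categories, selection_info):
    # For each category that is not yet active, check whether adding it
    # yields a combination for which compatible algorithms exist.
    inactive = sorted(set(all_categories) - set(active))
    return {
        cat: comb
        for cat in inactive
        if (comb := tuple(sorted(active + (cat,)))) in selection_info
    }


selection_info = {
    ("Bounded", "Local"): {"algo_a": object},
    ("Global",): {"algo_b": object},
}
children = get_children(("Local",), ["Bounded", "Global", "Local"], selection_info)
# Only "Bounded" yields a non-empty selection when combined with "Local";
# ("Global", "Local") has no compatible algorithms and is skipped.
assert children == {"Bounded": ("Bounded", "Local")}
```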

================================================
FILE: .tools/update_algo_selection_hook.py
================================================
#!/usr/bin/env python
import importlib.util
import subprocess
import sys
from pathlib import Path
from typing import Any

ROOT = Path(__file__).resolve().parents[1]

# sys.executable guarantees we stay inside the pre-commit venv
PYTHON = [sys.executable]


def run(cmd: list[str], **kwargs: Any) -> None:
    subprocess.check_call(cmd, cwd=ROOT, **kwargs)


def ensure_optimagic_is_locally_installed() -> None:
    if importlib.util.find_spec("optimagic") is None:
        run(["uv", "pip", "install", "--python", sys.executable, "-e", "."])


def main() -> int:
    ensure_optimagic_is_locally_installed()
    run(PYTHON + [".tools/create_algo_selection_code.py"])

    ruff_args = [
        "--silent",
        "--config",
        "pyproject.toml",
        "src/optimagic/algorithms.py",
    ]
    run(["ruff", "format", *ruff_args])
    run(["ruff", "check", "--fix", *ruff_args])
    return 0  # explicit success code


if __name__ == "__main__":
    sys.exit(main())

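The `run` helper above is a thin wrapper around `subprocess.check_call`, which raises `CalledProcessError` on any non-zero exit code, so a failing code generator or formatter aborts the hook immediately. A minimal standalone sketch of that failure behavior (hypothetical commands, not the hook's actual steps):

```python
import subprocess
import sys


def run(cmd):
    # Same pattern as the hook's run() helper: check_call raises
    # CalledProcessError on a non-zero exit code.
    subprocess.check_call(cmd)


run([sys.executable, "-c", "print('ok')"])  # exit code 0: no exception

try:
    run([sys.executable, "-c", "raise SystemExit(3)"])
except subprocess.CalledProcessError as e:
    failed_code = e.returncode

assert failed_code == 3
```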

================================================
FILE: .yamllint.yml
================================================
---
yaml-files:
  - '*.yaml'
  - '*.yml'
  - .yamllint
rules:
  braces: enable
  brackets: enable
  colons: enable
  commas: enable
  comments:
    level: warning
  comments-indentation:
    level: warning
  document-end: disable
  document-start:
    level: warning
  empty-lines: enable
  empty-values: disable
  float-values: disable
  hyphens: enable
  indentation: {spaces: 2}
  key-duplicates: enable
  key-ordering: disable
  line-length:
    max: 88
    allow-non-breakable-words: true
    allow-non-breakable-inline-mappings: false
  new-line-at-end-of-file: enable
  new-lines:
    type: unix
  octal-values: disable
  quoted-strings: disable
  trailing-spaces: enable
  truthy:
    level: warning


================================================
FILE: CHANGES.md
================================================
# Changes

This is a record of all past optimagic releases and what went into them in reverse
chronological order. We follow [semantic versioning](https://semver.org/) and all
releases are available on [Anaconda.org](https://anaconda.org/optimagic-dev/optimagic).


## 0.5.3

This release introduces **multi-backend plotting** with support for matplotlib, bokeh,
and altair backends (in addition to the existing plotly backend), **3D visualizations**
of optimization problems, and several **new optimizer libraries** including PySwarms,
PyGAD, and gradient-free-optimizers. It also adds **lazy loading** for optional
dependencies to improve import times. Many contributions in this release were made by
Google Summer of Code (GSoC) 2025 contributors.

- {gh}`665` Skips nag_dfols tests when DFO-LS is not installed ({ghuser}`Swayam-maurya`).
- {gh}`664` Adds `from __future__ import annotations` to constraints.py to fix
  annotations issue with Python 3.13 and NumPy 2.4 ({ghuser}`timmens`).
- {gh}`660` Renames the `bayes_opt` parameter `n_iter` to `stopping_maxiter`
  ({ghuser}`spline2hg`).
- {gh}`659` Removes `None` as a valid option for `stopping_criterion` in
  `convergence_plot` and updates the docstring ({ghuser}`szd5654125`).
- {gh}`658` Enhances documentation and minor fixes in backend plotting
  ({ghuser}`r3kste`).
- {gh}`654` Implements the altair plotting backend ({ghuser}`r3kste`).
- {gh}`653` Adds `llms.txt` and `llms-full.txt` to documentation
  ({ghuser}`mostafafaheem`).
- {gh}`652` Implements the bokeh plotting backend ({ghuser}`r3kste`).
- {gh}`649` Implements backend plotting for `slice_plot` ({ghuser}`r3kste`).
- {gh}`647` Implements backend plotting for `convergence_plot` ({ghuser}`r3kste`).
- {gh}`645` Implements backend plotting for `profile_plot` ({ghuser}`r3kste`).
- {gh}`644` Adds a how-to guide for changing plotting backends ({ghuser}`r3kste`).
- {gh}`643` Skips doctest that fails due to negative signed zero handling
  ({ghuser}`r3kste`).
- {gh}`641` Implements backend plotting for `params_plot` ({ghuser}`r3kste`).
- {gh}`639` Adds optimizers from PySwarms ({ghuser}`spline2hg`).
- {gh}`637` Adds note about `__future__` import ({ghuser}`spline2hg`).
- {gh}`636` Wraps population-based optimizers from gradient-free-optimizers
  ({ghuser}`gauravmanmode`).
- {gh}`633` Migrates bayesian-optimizer docs to new documentation style
  ({ghuser}`spline2hg`).
- {gh}`632` Migrates nevergrad optimizers to new documentation style
  ({ghuser}`gauravmanmode`).
- {gh}`631` Migrates iminuit docs to new documentation style ({ghuser}`spline2hg`).
- {gh}`624` Wraps local optimizers from gradient-free-optimizers
  ({ghuser}`gauravmanmode`).
- {gh}`621` Implements lazy loading for optional dependencies ({ghuser}`spline2hg`).
- {gh}`619` Adopts the NumFOCUS code of conduct ({ghuser}`timmens`).
- {gh}`616` Adds optimizers from PyGAD ({ghuser}`spline2hg`).
- {gh}`600` Separates data preparation and plotting for `criterion_plot()`
  ({ghuser}`r3kste`).
- {gh}`599` Implements the matplotlib backend for `criterion_plot()` ({ghuser}`r3kste`).
- {gh}`581` Adds 3D visualizations of optimization problems ({ghuser}`shammeer-s`).
- {gh}`554` Improves documentation of algorithm options ({ghuser}`janosg`).


## 0.5.2

This minor release adds support for two additional optimizer libraries:

- [Nevergrad](https://github.com/facebookresearch/nevergrad): A library for
  gradient-free optimization developed by Facebook Research.
- [Bayesian
  Optimization](https://github.com/bayesian-optimization/BayesianOptimization): A
  library for constrained Bayesian global optimization with Gaussian processes.

In addition, this release includes several bug fixes and improvements to the
documentation. Many contributions in this release were made by Google Summer of Code
(GSoC) 2025 applicants, with @gauravmanmode and @spline2hg being the accepted
contributors.

- {gh}`620` Uses interactive plotly figures in documentation ({ghuser}`timmens`).
- {gh}`618` Improves bounds processing when no bounds are specified ({ghuser}`timmens`).
- {gh}`615` Adds pre-commit hook that checks mypy version consistency ({ghuser}`timmens`).
- {gh}`613` Exposes converter functionality ({ghuser}`spline2hg`).
- {gh}`612` Fixes results processing to work with new cobyla optimizer ({ghuser}`janosg`).
- {gh}`610` Adds `needs_bounds` and `supports_infinite_bounds` fields to algorithm info ({ghuser}`gauravmanmode`).
- {gh}`608` Adds support for plotly >= 6 ({ghuser}`hmgaudecker`, {ghuser}`timmens`).
- {gh}`607` Returns `run_explorations` results in a dataclass ({ghuser}`r3kste`).
- {gh}`605` Enhances batch evaluator checking and processing, introduces the internal
  `BatchEvaluatorLiteral` literal, and updates CHANGES.md ({ghuser}`janosg`,
  {ghuser}`timmens`).
- {gh}`602` Adds optimizer wrapper for bayesian-optimization package ({ghuser}`spline2hg`).
- {gh}`601` Updates pre-commit hooks and fixes mypy issues ({ghuser}`janosg`).
- {gh}`598` Fixes and adds links to GitHub in the documentation ({ghuser}`hamogu`).
- {gh}`594` Refines newly added optimizer wrappers ({ghuser}`janosg`).
- {gh}`591` Adds multiple optimizers from the nevergrad package ({ghuser}`gauravmanmode`).
- {gh}`589` Rewrites the algorithm selection pre-commit hook in pure Python to address
  issues with bash scripts on Windows ({ghuser}`timmens`).
- {gh}`586` and {gh}`592` Ensure the SciPy `disp` parameter is exposed for the following
  SciPy algorithms: slsqp, neldermead, powell, conjugate_gradient, newton_cg, cobyla,
  truncated_newton, trust_constr ({ghuser}`sefmef`, {ghuser}`TimBerti`).
- {gh}`585` Exposes all parameters of [SciPy's
  BFGS](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-bfgs.html)
  optimizer in optimagic ({ghuser}`TimBerti`).
- {gh}`582` Adds support for handling infinite gradients during optimization
  ({ghuser}`Aziz-Shameem`).
- {gh}`579` Implements a wrapper for the PSO optimizer from the
  [nevergrad](https://github.com/facebookresearch/nevergrad) package ({ghuser}`r3kste`).
- {gh}`578` Integrates the `intersphinx-registry` package into the documentation for
  automatic linking to up-to-date external documentation
  ({ghuser}`Schefflera-Arboricola`).
- {gh}`576` Wraps oneplusone optimizer from nevergrad ({ghuser}`gauravmanmode`, {ghuser}`gulshan-123`).
- {gh}`572` and {gh}`573` Fix bugs in error handling for parameter selector processing
  and constraints checking ({ghuser}`hmgaudecker`).
- {gh}`570` Adds a how-to guide for adding algorithms to optimagic and improves internal
  documentation ({ghuser}`janosg`).
- {gh}`569` Implements a threading batch evaluator ({ghuser}`spline2hg`).
- {gh}`568` Introduces an initial wrapper for the migrad optimizer from the
  [iminuit](https://github.com/scikit-hep/iminuit) package ({ghuser}`spline2hg`).
- {gh}`567` Makes the `fun` argument optional when `fun_and_jac` is provided
  ({ghuser}`gauravmanmode`).
- {gh}`563` Fixes a bug in input harmonization for history plotting
  ({ghuser}`gauravmanmode`).
- {gh}`552` Refactors and extends the `History` class, removing the internal
  `HistoryArrays` class ({ghuser}`timmens`).
- {gh}`485` Adds bootstrap weights functionality ({ghuser}`alanlujan91`).


## 0.5.1

This is a minor release that introduces the new algorithm selection tool and several
small improvements.

To learn more about the algorithm selection feature check out the following resources:

- [How to specify and configure algorithms](https://optimagic.readthedocs.io/en/latest/how_to/how_to_specify_algorithm_and_algo_options.html)
- [How to select local optimizers](https://optimagic.readthedocs.io/en/latest/how_to/how_to_algorithm_selection.html)

- {gh}`549` Add support for Python 3.13 ({ghuser}`timmens`)
- {gh}`550` and {gh}`534` implement the new algorithm selection tool ({ghuser}`janosg`)
- {gh}`548` and {gh}`531` improve the documentation ({ghuser}`ChristianZimpelmann`)
- {gh}`544` Adjusts the results processing of the nag optimizers to be compatible
  with the latest releases ({ghuser}`timmens`)
- {gh}`543` Adds support for numpy 2.x ({ghuser}`timmens`)
- {gh}`536` Adds a how-to guide for choosing local optimizers ({ghuser}`mpetrosian`)
- {gh}`535` Allows algorithm classes and instances in estimation functions
  ({ghuser}`timmens`)
- {gh}`532` Makes several small improvements to the documentation.

## 0.5.0

This is a major release with several breaking changes and deprecations. In this
release we started implementing two major enhancement proposals and renamed the package
from estimagic to optimagic (while keeping the `estimagic` namespace for the estimation
capabilities).

- [EP-02: Static typing](https://estimagic.org/en/latest/development/ep-02-typing.html)
- [EP-03: Alignment with SciPy](https://estimagic.org/en/latest/development/ep-03-alignment.html)

The implementation of the two enhancement proposals is not complete and will likely
take until version `0.6.0`. However, all breaking changes and deprecations (with the
exception of a minor change in benchmarking) are already implemented such that updating
to version `0.5.0` is future proof.

- {gh}`500` removes the dashboard, the support for simopt optimizers and the
  `derivative_plot` ({ghuser}`janosg`)
- {gh}`502` renames estimagic to optimagic ({ghuser}`janosg`)
- {gh}`504` aligns `maximize` and `minimize` more closely with scipy. All related
  deprecations and breaking changes are listed below. As a result, scipy code that uses
  minimize with the arguments `x0`, `fun`, `jac` and `method` will run without changes
  in optimagic. Similarly, `OptimizeResult` gets some aliases so that it behaves
  more like SciPy's.
- {gh}`506` introduces the new `Bounds` object and deprecates `lower_bounds`,
  `upper_bounds`, `soft_lower_bounds` and `soft_upper_bounds` ({ghuser}`janosg`)
- {gh}`507` updates the infrastructure so we can make parallel releases under the names
  `optimagic` and `estimagic` ({ghuser}`timmens`)
- {gh}`508` introduces the new `ScalingOptions` object and deprecates the
  `scaling_options` argument of `maximize` and `minimize` ({ghuser}`timmens`)
- {gh}`512` implements the new interface for objective functions and derivatives
  ({ghuser}`janosg`)
- {gh}`513` implements the new `optimagic.MultistartOptions` object and deprecates the
  `multistart_options` argument of `maximize` and `minimize` ({ghuser}`timmens`)
- {gh}`514` and {gh}`516` introduce the `NumdiffResult` object that is returned from
  `first_derivative` and `second_derivative`. It also fixes several bugs in the
  pytree handling in `first_derivative` and `second_derivative` and deprecates
  Richardson Extrapolation and the `key` ({ghuser}`timmens`)
- {gh}`517` introduces the new `NumdiffOptions` object for configuring numerical
  differentiation during optimization or estimation ({ghuser}`timmens`)
- {gh}`519` rewrites the logging code and introduces new `LogOptions` objects
  ({ghuser}`schroedk`)
- {gh}`521` introduces the new internal algorithm interface.
  ({ghuser}`janosg` and {ghuser}`mpetrosian`)
- {gh}`522` introduces the new `Constraint` objects and deprecates passing
  dictionaries or lists of dictionaries as constraints ({ghuser}`timmens`)


### Breaking changes

- When providing a path for the argument `logging` of the functions
  `maximize` and `minimize` and the file already exists, the default
  behavior is to raise an error now. Replacement or extension
  of an existing file must be explicitly configured.
- The argument `if_table_exists` in `log_options` has no effect anymore and a
  corresponding warning is raised.
- `OptimizeResult.history` is now an `optimagic.History` object instead of a
  dictionary. Dictionary style access is implemented but deprecated. Other dictionary
  methods might not work.
- The result of `first_derivative` and `second_derivative` is now an
  `optimagic.NumdiffResult` object instead of a dictionary. Dictionary style access is
  implemented but other dictionary methods might not work.
- The dashboard is removed.
- The `derivative_plot` is removed.
- Optimizers from Simopt are removed.
- Passing callables with the old internal algorithm interface as `algorithm` to
  `minimize` and `maximize` is not supported anymore. Use the new
  `Algorithm` objects instead. For examples see: https://tinyurl.com/24a5cner


### Deprecations

- The `criterion` argument of `maximize` and `minimize` is renamed to `fun` (as in
  SciPy).
- The `derivative` argument of `maximize` and `minimize` is renamed to `jac` (as
  in SciPy)
- The `criterion_and_derivative` argument of `maximize` and `minimize` is renamed
  to `fun_and_jac` to align it with the other names.
- The `criterion_kwargs` argument of `maximize` and `minimize` is renamed to
  `fun_kwargs` to align it with the other names.
- The `derivative_kwargs` argument of `maximize` and `minimize` is renamed to
  `jac_kwargs` to align it with the other names.
- The `criterion_and_derivative_kwargs` argument of `maximize` and `minimize` is
  renamed to `fun_and_jac_kwargs` to align it with the other names.
- Algorithm specific convergence and stopping criteria are renamed to align them more
  with NlOpt and SciPy names.
    - `convergence_relative_criterion_tolerance` -> `convergence_ftol_rel`
    - `convergence_absolute_criterion_tolerance` -> `convergence_ftol_abs`
    - `convergence_relative_params_tolerance` -> `convergence_xtol_rel`
    - `convergence_absolute_params_tolerance` -> `convergence_xtol_abs`
    - `convergence_relative_gradient_tolerance` -> `convergence_gtol_rel`
    - `convergence_absolute_gradient_tolerance` -> `convergence_gtol_abs`
    - `convergence_scaled_gradient_tolerance` -> `convergence_gtol_scaled`
    - `stopping_max_criterion_evaluations` -> `stopping_maxfun`
    - `stopping_max_iterations` -> `stopping_maxiter`
- The arguments `lower_bounds`, `upper_bounds`, `soft_lower_bounds` and
  `soft_upper_bounds` are deprecated and replaced by `optimagic.Bounds`. This affects
  `maximize`, `minimize`, `estimate_ml`, `estimate_msm`, `slice_plot` and several
  other functions.
- The `log_options` argument of `minimize` and `maximize` is deprecated. Instead,
  `LogOptions` objects can be passed under the `logging` argument.
- The class `OptimizeLogReader` is deprecated and redirects to
  `SQLiteLogReader`.
- The `scaling_options` argument of `maximize` and `minimize` is deprecated. Instead a
  `ScalingOptions` object can be passed under the `scaling` argument that was previously
  just a bool.
- Objective functions that return a dictionary with the special keys "value",
  "contributions" and "root_contributions" are deprecated. Instead, likelihood and
  least-squares functions are marked with a `mark.likelihood` or `mark.least_squares`
  decorator. There is a detailed how-to guide that shows the new behavior. This affects
  `maximize`, `minimize`, `slice_plot` and other functions that work with objective
  functions.
- The `multistart_options` argument of `minimize` and `maximize` is deprecated. Instead,
  a `MultistartOptions` object can be passed under the `multistart` argument.
- Richardson Extrapolation is deprecated in `first_derivative` and
  `second_derivative`.
- The `key` argument is deprecated in `first_derivative` and `second_derivative`.
- Passing dictionaries or lists of dictionaries as `constraints` to `maximize` or
  `minimize` is deprecated. Use the new `Constraint` objects instead.

## 0.4.7

This release contains minor improvements and bug fixes. It is the last release before
the package will be renamed to optimagic and two large enhancement proposals will be
implemented.

- {gh}`490` adds the attribute `optimize_result` to the `MomentsResult` class
  ({ghuser}`timmens`)
- {gh}`483` fixes a bug in the handling of keyword arguments in `bootstrap`
  ({ghuser}`alanlujan91`)
- {gh}`477` allows the use of an identity weighting matrix in MSM estimation
  ({ghuser}`sidd3888`)
- {gh}`473` fixes a bug where bootstrap keyword arguments were ignored in
  `get_moments_cov` ({ghuser}`timmens`)
- {gh}`467`, {gh}`478`, {gh}`479` and {gh}`480` improve the documentation
  ({ghuser}`mpetrosian`, {ghuser}`segsell`, and {ghuser}`timmens`)


## 0.4.6

This release drastically improves the optimizer benchmarking capabilities, especially
with noisy functions and parallel optimizers. It makes tranquilo and numba optional
dependencies and is the first version of estimagic to be compatible with Python
3.11.


- {gh}`464` Makes tranquilo and numba optional dependencies ({ghuser}`janosg`)
- {gh}`461` Updates docstrings for `process_benchmark_results` ({ghuser}`segsell`)
- {gh}`460` Fixes several bugs in the processing of benchmark results with noisy
  functions ({ghuser}`janosg`)
- {gh}`459` Prepares benchmarking functionality for parallel optimizers
  ({ghuser}`mpetrosian` and {ghuser}`janosg`)
- {gh}`457` Removes some unused files ({ghuser}`segsell`)
- {gh}`455` Improves a local pre-commit hook ({ghuser}`ChristianZimpelmann`)


## 0.4.5

- {gh}`379` Improves the estimation table ({ghuser}`ChristianZimpelmann`)
- {gh}`445` fixes line endings in local pre-commit hook ({ghuser}`ChristianZimpelmann`)
- {gh}`443`, {gh}`444`, {gh}`445`, {gh}`446`, {gh}`448` and {gh}`449` are a major
  refactoring of tranquilo ({ghuser}`timmens` and {ghuser}`janosg`)
- {gh}`441` Adds an aggregated convergence plot for benchmarks ({ghuser}`mpetrosian`)
- {gh}`435` Completes the cartis-roberts benchmark set ({ghuser}`segsell`)

## 0.4.4

- {gh}`437` removes fuzzywuzzy as dependency ({ghuser}`aidatak97`)
- {gh}`432` makes logging compatible with sqlalchemy 2.x ({ghuser}`janosg`)
- {gh}`430` refactors the getter functions in Tranquilo ({ghuser}`janosg`)
- {gh}`427` improves pre-commit setup ({ghuser}`timmens` and {ghuser}`hmgaudecker`)
- {gh}`425` improves handling of notebooks in documentation ({ghuser}`baharcos`)
- {gh}`423` and {gh}`399` add code to calculate poisedness constants ({ghuser}`segsell`)
- {gh}`420` improves the CI infrastructure ({ghuser}`hmgaudecker` and {ghuser}`janosg`)
- {gh}`407` adds global optimizers from scipy ({ghuser}`baharcos`)

## 0.4.3

- {gh}`416` improves documentation and packaging ({ghuser}`janosg`)

## 0.4.2

- {gh}`412` Improves the output of the fides optimizer among other small changes
  ({ghuser}`janosg`)
- {gh}`411` Fixes a bug in multistart optimizations with least squares optimizers.
  See {gh}`410` for details ({ghuser}`janosg`)
- {gh}`404` speeds up the gqtpar subsolver ({ghuser}`mpetrosian` )
- {gh}`400` refactors subsolvers ({ghuser}`mpetrosian`)
- {gh}`398`, {gh}`397`, {gh}`395`, {gh}`390`, {gh}`389`, {gh}`388` continue with the
  implementation of tranquilo ({ghuser}`segsell`, {ghuser}`timmens`,
  {ghuser}`mpetrosian`, {ghuser}`janosg`)
- {gh}`391` speeds up the bntr subsolver ({ghuser}`mpetrosian`)


## 0.4.1

- {gh}`307` Adopts a code of conduct and governance model
- {gh}`384` Polish documentation ({ghuser}`janosg` and {ghuser}`mpetrosian`)
- {gh}`374` Moves the documentation to MyST ({ghuser}`baharcos`)
- {gh}`365` Adds copy buttons to documentation ({ghuser}`amageh`)
- {gh}`371` Refactors the pounders algorithm ({ghuser}`segsell`)
- {gh}`369` Fixes CI ({ghuser}`janosg`)
- {gh}`367` Fixes the linux environment ({ghuser}`timmens`)
- {gh}`294` Adds the very first experimental version of tranquilo ({ghuser}`janosg`,
  {ghuser}`timmens`, {ghuser}`segsell`, {ghuser}`mpetrosian`)


## 0.4.0

- {gh}`366` Update ({ghuser}`segsell`)
- {gh}`362` Polish documentation ({ghuser}`segsell`)

## 0.3.4

- {gh}`364` Use local random number generators ({ghuser}`timmens`)
- {gh}`363` Fix pounders test cases ({ghuser}`segsell`)
- {gh}`361` Update estimation code ({ghuser}`timmens`)
- {gh}`360` Update results object documentation ({ghuser}`timmens`)

## 0.3.3

- {gh}`357` Adds jax support ({ghuser}`janosg`)
- {gh}`359` Improves error handling with violated constraints ({ghuser}`timmens`)
- {gh}`358` Improves cartis roberts set of test functions and improves the
  default latex rendering of MultiIndex tables ({ghuser}`mpetrosian`)

## 0.3.2

- {gh}`355` Improves test coverage of constraints processing ({ghuser}`janosg`)
- {gh}`354` Improves test coverage for bounds processing ({ghuser}`timmens`)
- {gh}`353` Improves history plots ({ghuser}`timmens`)
- {gh}`352` Improves scaling and benchmarking ({ghuser}`janosg`)
- {gh}`351` Improves estimation summaries ({ghuser}`timmens`)
- {gh}`350` Allow empty queries or selectors in constraints ({ghuser}`janosg`)

## 0.3.1

- {gh}`349` fixes multiple small bugs and adds test cases for all of them
  ({ghuser}`mpetrosian`, {ghuser}`janosg` and {ghuser}`timmens`)

## 0.3.0

First release with pytree support in optimization, estimation and differentiation
and much better result objects in optimization and estimation.

Breaking changes

- New `OptimizeResult` object is returned by `maximize` and `minimize`. This
  breaks all code that expects the old result dictionary. Usage of the new result is
  explained in the getting started tutorial on optimization.
- New internal optimizer interface that can break optimization with custom optimizers.
- The interface of `process_constraints` changed drastically. This breaks
  code that used `process_constraints` to get the number of free parameters or check
  if constraints are valid. There are new high level functions
  `estimagic.check_constraints` and `estimagic.count_free_params` instead.
- Some functions from `estimagic.logging.read_log` are removed and replaced by
  `estimagic.OptimizeLogReader`.
- Convenience functions to create namedtuples are removed from `estimagic.utilities`.
- {gh}`346` Add option to use nonlinear constraints ({ghuser}`timmens`)
- {gh}`345` Moves estimation_table to new latex functionality of pandas
  ({ghuser}`mpetrosian`)
- {gh}`344` Adds pytree support to slice_plot ({ghuser}`janosg`)
- {gh}`343` Improves the result object of estimation functions and makes msm estimation
  pytree compatible ({ghuser}`janosg`)
- {gh}`342` Improves default options of the fides optimizer, allows single constraints
  and polishes the documentation ({ghuser}`janosg`)
- {gh}`340` Enables history collection for optimizers that evaluate the criterion
  function in parallel ({ghuser}`janosg`)
- {gh}`339` Incorporates user feedback and polishes the documentation.
- {gh}`338` Improves log reading functions ({ghuser}`janosg`)
- {gh}`336` Adds pytree support to the dashboard ({ghuser}`roecla`).
- {gh}`335` Introduces an `OptimizeResult` object and functionality for history
  plotting ({ghuser}`janosg`).
- {gh}`333` Uses new history collection feature to speed up benchmarking
  ({ghuser}`segsell`).
- {gh}`330` Is a major rewrite of the estimation code ({ghuser}`timmens`).
- {gh}`328` Improves quadratic surrogate solvers used in pounders and tranquilo
  ({ghuser}`segsell`).
- {gh}`326` Improves documentation of numerical derivatives ({ghuser}`timmens`).
- {gh}`325` Improves the slice_plot ({ghuser}`mpetrosian`)
- {gh}`324` Adds ability to collect optimization histories without logging
  ({ghuser}`janosg`).
- {gh}`311` and {gh}`288` rewrite all plotting code in plotly ({ghuser}`timmens`
  and {ghuser}`aidatak97`).
- {gh}`306` improves quadratic surrogate solvers used in pounders and tranquilo
  ({ghuser}`segsell`).
- {gh}`305` allows pytrees during optimization and rewrites large parts of the
  constraints processing ({ghuser}`janosg`).
- {gh}`303` introduces a new optimizer interface that makes it easier to add optimizers
  and makes it possible to access optimizer specific information outside of the
  `internal_criterion_and_derivative` ({ghuser}`janosg` and {ghuser}`roecla`).
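The switch from a result dictionary to the `OptimizeResult` object announced above can be sketched with a small dataclass; the attribute names used here (`params`, `criterion`) are illustrative, see the getting started tutorial on optimization for the actual fields:

```python
from dataclasses import dataclass

import numpy as np


# Minimal sketch of the shape of the new result object. Attribute names are
# assumptions for illustration, not the definitive `OptimizeResult` API.
@dataclass
class OptimizeResult:
    params: np.ndarray
    criterion: float


res = OptimizeResult(params=np.zeros(3), criterion=0.0)

# Old code read entries out of a result dictionary; new code uses attributes.
best_params = res.params
```

Code that previously indexed into the returned dictionary needs to be updated to attribute access.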

## 0.2.5

- {gh}`302` Drastically improves error handling during optimization ({ghuser}`janosg`).

## 0.2.4

- {gh}`304` Removes the chaospy dependency ({ghuser}`segsell`).

## 0.2.3

- {gh}`295` Fixes a small bug in estimation_table ({ghuser}`mpetrosian`).
- {gh}`286` Adds pytree support for first and second derivative ({ghuser}`timmens`).
- {gh}`285` Allows using estimation functions with external optimization
  ({ghuser}`janosg`).
- {gh}`283` Adds fast solvers for quadratic trustregion subproblems ({ghuser}`segsell`).
- {gh}`282` Vastly improves estimation tables ({ghuser}`mpetrosian`).
- {gh}`281` Adds some tools to work with pytrees ({ghuser}`janosg`
  and {ghuser}`timmens`).
- {gh}`278` adds Estimagic Enhancement Proposal 1 for the use of Pytrees in Estimagic
  ({ghuser}`janosg`)

## 0.2.2

- {gh}`276` Add parallel Nelder-Mead algorithm by {ghuser}`jacekb95`
- {gh}`267` Update fides by {ghuser}`roecla`
- {gh}`265` Refactor pounders algorithm by {ghuser}`segsell` and {ghuser}`janosg`.
- {gh}`261` Add pure Python pounders algorithm by {ghuser}`segsell`.

## 0.2.1

- {gh}`260` Update MSM and ML notebooks by {ghuser}`timmens`.
- {gh}`259` Several small fixes and improvements by {ghuser}`janosg` and
  {ghuser}`roecla`.

## 0.2.0

Add a lot of new functionality with a few minor breaking changes. We have more
optimizers, better error handling, bootstrap and inference for method of simulated
moments. The breaking changes are:
- Logging is disabled by default during optimization.
- The log_option "if_exists" was renamed to "if_table_exists".
- The comparison plot function is removed.
- `first_derivative` now returns a dictionary, independent of arguments.
- The structure of the logging database has changed.
- There is an additional boolean flag named `scaling` in `minimize` and `maximize`.

- {gh}`251` Allows the loading, running and visualization of benchmarks
  ({ghuser}`janosg`, {ghuser}`mpetrosian` and {ghuser}`roecla`)
- {gh}`196` Adds support for multistart optimizations ({ghuser}`asouther4` and
  {ghuser}`janosg`)
- {gh}`248` Adds the fides optimizer ({ghuser}`roecla`)
- {gh}`146` Adds `estimate_ml` functionality ({ghuser}`janosg`, {ghuser}`LuisCald`
  and {ghuser}`s6soverd`).
- {gh}`235` Improves the documentation ({ghuser}`roecla`)
- {gh}`216` Adds the ipopt optimizer ({ghuser}`roecla`)
- {gh}`215` Adds optimizers from the pygmo library ({ghuser}`roecla` and
  {ghuser}`janosg`)
- {gh}`212` Adds optimizers from the nlopt library ({ghuser}`mpetrosian`)
- {gh}`228` Restructures testing and makes changes to log_options.
- {gh}`149` Adds `estimate_msm` functionality ({ghuser}`janosg` and {ghuser}`loikein`)
- {gh}`219` Several enhancements ({ghuser}`tobiasraabe`)
- {gh}`218` Improve documentation ({ghuser}`sofyaakimova` and {ghuser}`effieHan`)
- {gh}`214` Fix bug with overlapping "fixed" and "linear" constraints ({ghuser}`janosg`)
- {gh}`211` Improve error handling of log reading functions ({ghuser}`janosg`)
- {gh}`210` Automatically drop empty constraints ({ghuser}`janosg`)
- {gh}`192` Add option to scale optimization problems ({ghuser}`janosg`)
- {gh}`202` Refactoring of bootstrap code ({ghuser}`janosg`)
- {gh}`148` Add bootstrap functionality ({ghuser}`RobinMusolff`)
- {gh}`208` Several small improvements ({ghuser}`janosg`)
- {gh}`206` Improve latex and html tables ({ghuser}`mpetrosian`)
- {gh}`205` Add scipy's least squares optimizers (based on {gh}`197` by
  {ghuser}`yradeva93`)
- {gh}`198` More unit tests for optimizers ({ghuser}`mchandra12`)
- {gh}`200` Plot intermediate outputs of `first_derivative` ({ghuser}`timmens`)
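The breaking change that `first_derivative` always returns a dictionary can be sketched with a plain central-difference implementation; the key name `"derivative"` is an assumption for illustration, not a guarantee about the library's actual return value:

```python
import numpy as np


# Sketch of the new return convention: a dict is returned independent of
# arguments. The key name "derivative" is an illustrative assumption.
def first_derivative(fun, x, step=1e-6):
    identity = np.eye(len(x))
    # Central differences along each coordinate direction.
    grad = np.array(
        [(fun(x + step * e) - fun(x - step * e)) / (2 * step) for e in identity]
    )
    return {"derivative": grad}


res = first_derivative(lambda x: x @ x, np.array([1.0, 2.0]))
```

For `f(x) = x @ x` the gradient is `2 * x`, so callers now unpack the dictionary instead of receiving the array directly.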

## 0.1.3 - 2021-06-25

- {gh}`195` Illustrate optimizers in documentation ({ghuser}`sofyaakimova`,
  {ghuser}`effieHan` and {ghuser}`janosg`)
- {gh}`201` More stable covariance matrix calculation ({ghuser}`janosg`)
- {gh}`199` Return intermediate outputs of first_derivative ({ghuser}`timmens`)

## 0.1.2 - 2021-02-07

- {gh}`189` Improve documentation and logging ({ghuser}`roecla`)

## 0.1.1 - 2021-01-13

This release greatly expands the set of available optimization algorithms, has a better
and prettier dashboard and improves the documentation.

- {gh}`187` Implement dot notation in algo_options ({ghuser}`roecla`)
- {gh}`183` Improve documentation ({ghuser}`SofiaBadini`)
- {gh}`182` Allow for constraints in likelihood inference ({ghuser}`janosg`)
- {gh}`181` Add DF-OLS optimizer from Numerical Algorithm Group ({ghuser}`roecla`)
- {gh}`180` Add pybobyqa optimizer from Numerical Algorithm Group ({ghuser}`roecla`)
- {gh}`179` Allow base_steps and min_steps to be scalars ({ghuser}`tobiasraabe`)
- {gh}`178` Refactoring of dashboard code ({ghuser}`roecla`)
- {gh}`177` Add stride as a new dashboard argument ({ghuser}`roecla`)
- {gh}`176` Minor fix of plot width in dashboard ({ghuser}`janosg`)
- {gh}`174` Various dashboard improvements ({ghuser}`roecla`)
- {gh}`173` Add new color palettes and use them in dashboard ({ghuser}`janosg`)
- {gh}`172` Add high level log reading functions ({ghuser}`janosg`)

## 0.1.0dev1 - 2020-09-08

This release entails a complete rewrite of the optimization code with many breaking
changes. In particular, some optimizers that were available before are no longer
supported. They will be re-introduced soon. The breaking changes include:

- The database is restructured. The new version simplifies the code,
  makes logging faster and avoids the sql column limit.
- Users can provide closed form derivative and/or criterion_and_derivative where
  the latter one can exploit synergies in the calculation of criterion and derivative.
  This is also compatible with constraints.
- Our own (parallelized) first_derivative function is used to calculate gradients
  during the optimization when no closed form gradients are provided.
- Optimizer options like convergence criteria and optimization results are harmonized
  across optimizers.
- Users can choose from several batch evaluators whenever we parallelize
  (e.g. for parallel optimizations or parallel function evaluations for numerical
  derivatives) or pass in their own batch evaluator function as long as it has a
  compatible interface. The batch evaluator interface also standardizes error handling.
- There is a well defined internal optimizer interface. Users can select the
  pre-implemented optimizers by algorithm="name_of_optimizer" or their own optimizer
  by algorithm=custom_minimize_function
- Optimizers from pygmo and nlopt are no longer supported (will be re-introduced)
- Greatly improved error handling.
- {gh}`169` Add additional dashboard arguments
- {gh}`168` Rename lower and upper to lower_bound and upper_bound
  ({ghuser}`ChristianZimpelmann`)
- {gh}`167` Improve dashboard styling ({ghuser}`roecla`)
- {gh}`166` Re-add POUNDERS from TAO ({ghuser}`tobiasraabe`)
- {gh}`165` Re-add the scipy optimizers with harmonized options ({ghuser}`roecla`)
- {gh}`164` Closed form derivatives for parameter transformations ({ghuser}`timmens`)
- {gh}`163` Complete rewrite of optimization with breaking changes ({ghuser}`janosg`)
- {gh}`162` Improve packaging and relax version constraints ({ghuser}`tobiasraabe`)
- {gh}`160` Generate parameter tables in tex and html ({ghuser}`mpetrosian`)
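The batch evaluator interface described above can be sketched as a serial implementation; the exact signature estimagic expects is not reproduced here, this version only illustrates the idea of one pluggable function that evaluates many inputs and standardizes error handling:

```python
# Minimal sketch of a compatible batch evaluator (signature assumed from the
# description above). A parallel variant would dispatch the loop to workers.
def serial_batch_evaluator(fun, arguments):
    results = []
    for arg in arguments:
        try:
            results.append(fun(arg))
        except Exception as e:
            # Standardized error handling: record the error instead of raising,
            # so one failed evaluation does not abort the whole batch.
            results.append(e)
    return results


squares = serial_batch_evaluator(lambda x: x**2, [1, 2, 3])
```

Because every batch evaluator shares this shape, the same optimization code can run serially or in parallel by swapping the evaluator.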

## 0.0.31 - 2020-06-20

- {gh}`130` Improve wrapping of POUNDERS algorithm ({ghuser}`mo2561057`)
- {gh}`159` Add Richardson Extrapolation to first_derivative ({ghuser}`timmens`)

## 0.0.30 - 2020-04-22

- {gh}`158` allows specifying a gradient in `maximize` and `minimize` ({ghuser}`janosg`)

## 0.0.29 - 2020-04-16

- {gh}`154` Version restrictions for pygmo ({ghuser}`janosg`)
- {gh}`153` adds documentation for the CLI ({ghuser}`tobiasraabe`)
- {gh}`152` makes estimagic work with pandas 1.0 ({ghuser}`SofiaBadini`)

## 0.0.28 - 2020-03-17

- {gh}`151` estimagic becomes a noarch package ({ghuser}`janosg`).
- {gh}`150` adds command line interface to the dashboard ({ghuser}`tobiasraabe`)


================================================
FILE: CITATION
================================================

Please use one of the following samples to cite the optimagic version (replace
x.y) from this installation.

Text:

[optimagic]  optimagic x.y, 2024
Janos Gabler, https://github.com/optimagic-dev/optimagic

BibTeX:

@Unpublished{Gabler2024,
    Title  = {optimagic: A library for nonlinear optimization},
    Author = {Janos Gabler},
    Year   = {2024},
    Url    = {https://github.com/optimagic-dev/optimagic}
}

If you are unsure about which version of optimagic you are using, run `conda list optimagic`.


================================================
FILE: LICENSE
================================================
Copyright 2019-2021 Janos Gabler

Permission is hereby granted, free of charge, to any person obtaining a copy of this
software and associated documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to the following
conditions:

The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT
OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.


================================================
FILE: README.md
================================================
<a href="https://optimagic.readthedocs.io">
    <p align="center">
        <img src="https://raw.githubusercontent.com/optimagic-dev/optimagic/main/docs/source/_static/images/optimagic_logo.svg" width=50% alt="optimagic">
    </p>
</a>

______________________________________________________________________

[![PyPI](https://img.shields.io/pypi/v/optimagic?color=blue)](https://pypi.org/project/optimagic)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/optimagic)](https://pypi.org/project/optimagic)
[![image](https://img.shields.io/conda/vn/conda-forge/optimagic.svg)](https://anaconda.org/conda-forge/optimagic)
[![image](https://img.shields.io/conda/pn/conda-forge/optimagic.svg)](https://anaconda.org/conda-forge/optimagic)
[![PyPI - License](https://img.shields.io/pypi/l/optimagic)](https://pypi.org/project/optimagic)
[![image](https://readthedocs.org/projects/optimagic/badge/?version=latest)](https://optimagic.readthedocs.io/en/latest)
[![image](https://img.shields.io/github/actions/workflow/status/optimagic-dev/optimagic/main.yml?branch=main)](https://github.com/optimagic-dev/optimagic/actions?query=branch%3Amain)
[![image](https://codecov.io/gh/optimagic-dev/optimagic/branch/main/graph/badge.svg)](https://codecov.io/gh/optimagic-dev/optimagic)
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/optimagic-dev/optimagic/main.svg)](https://results.pre-commit.ci/latest/github/optimagic-dev/optimagic/main)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![Downloads](https://pepy.tech/badge/optimagic/month)](https://pepy.tech/project/optimagic)
[![NumFOCUS](https://img.shields.io/badge/NumFOCUS-affiliated%20project-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](https://numfocus.org/sponsored-projects/affiliated-projects)

optimagic is a Python package for numerical optimization. It is a unified interface to
optimizers from SciPy, NlOpt, and many other Python packages. Its features include:

- **SciPy-compatible API.** optimagic's `minimize` function works just like SciPy's, so
  you don't have to adjust your code. You simply get more optimizers for free.
- **Powerful diagnostic tools.** Visualize optimizer histories, compare runs, and
  diagnose convergence problems.
- **Parallel numerical derivatives.** Compute gradients, Jacobians, and Hessians with
  parallel execution.
- **Bounded, constrained, and unconstrained optimization.** Support for bounds, linear
  constraints, nonlinear constraints, fixed parameters, and more.
- **Statistical inference on estimated parameters.** The estimagic subpackage provides
  functionality for confidence intervals, standard errors, and p-values.

# Installation

optimagic is available on [PyPI](https://pypi.org/project/optimagic) and on
[conda-forge](https://anaconda.org/conda-forge/optimagic). Install the package with

```console
$ pip install optimagic
```

or

```console
$ conda install -c conda-forge optimagic
```

optimagic ships with all `scipy` optimizers out of the box. Additional algorithms become
available if you install optional packages. For an overview of all supported optimizers
and how to enable them, see the
[list of algorithms](https://optimagic.readthedocs.io/en/latest/algorithms.html).

# Usage

```python
import optimagic as om
import numpy as np


def fun(x):
    return x @ x


result = om.minimize(fun, params=np.array([1, 2, 3]), algorithm="scipy_lbfgsb")
result.params.round(9)  # np.array([0., 0., 0.])
```
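To see what "SciPy-compatible" means in practice, here is the same problem solved with SciPy itself; per the compatibility claim above, switching to optimagic's `minimize` keeps this call structure while widening the set of available algorithms:

```python
import numpy as np
from scipy.optimize import minimize as scipy_minimize


def fun(x):
    return x @ x


# Identical problem with SciPy's L-BFGS-B. With optimagic, the equivalent
# call is om.minimize(fun, params=..., algorithm="scipy_lbfgsb").
res = scipy_minimize(fun, x0=np.array([1.0, 2.0, 3.0]), method="L-BFGS-B")
```

Both calls drive the parameters toward the minimum at the origin, which is why existing SciPy-based code needs essentially no changes.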

# Documentation

You can find the documentation at <https://optimagic.readthedocs.io> with
[tutorials](https://optimagic.readthedocs.io/en/latest/tutorials/index.html) and
[how-to guides](https://optimagic.readthedocs.io/en/latest/how_to/index.html).

# Changes

Consult the
[release notes](https://optimagic.readthedocs.io/en/latest/development/changes.html) to
find out what is new.

# License

optimagic is distributed under the terms of the [MIT license](LICENSE).

# Citation

If you use optimagic for your research, please cite it with the following key to help
others discover the tool.

```bibtex
@Unpublished{Gabler2024,
    Title  = {optimagic: A library for nonlinear optimization},
    Author = {Janos Gabler},
    Year   = {2024},
    Url    = {https://github.com/optimagic-dev/optimagic}
}
```

# Acknowledgment

We thank all institutions that have funded or supported optimagic (formerly estimagic).

<table>
  <tr>
    <td><img src="docs/source/_static/images/numfocus_logo.png" width="200"></td>
    <td><img src="docs/source/_static/images/aai-institute-logo.svg" width="185"></td>
    <td><img src="docs/source/_static/images/tra_logo.png" width="240"></td>
    <td><img src="docs/source/_static/images/hoover_logo.png" width="192"></td>
  </tr>
</table>


================================================
FILE: codecov.yml
================================================
---
codecov:
  notify:
    require_ci_to_pass: true
coverage:
  precision: 2
  round: down
  range: 50...100
  status:
    patch:
      default:
        target: 80%
    project:
      default:
        target: 90%
ignore:
  # Uses numba
  - src/optimagic/benchmarking/cartis_roberts.py
  - tests/**/*


================================================
FILE: docs/Makefile
================================================
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = optimagic
SOURCEDIR     = source
BUILDDIR      = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)


================================================
FILE: docs/make.bat
================================================
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build
set SPHINXPROJ=optimagic

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.http://sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%

:end
popd


================================================
FILE: docs/source/_static/css/custom.css
================================================
/* Remove execution count for notebook cells. */
div.prompt {
  display: none;
}


/* Classes for the index page. */
.index-card-image {
    padding-top: 1rem;
    height: 68px;
    text-align: center;
}

.index-card-link {
    color: var(--sd-color-card-text);
    font-weight: bold;
}

pre {
  padding-left: 20px
}

li pre {
  padding-left: 20px
}

.highlight {
    background: #f5f5f5
}

.highlight button.copybtn{
    background-color: #f5f5f5;
}

.highlight button.copybtn:hover {
    background-color: #f5f5f5;
}


================================================
FILE: docs/source/_static/css/termynal.css
================================================
/**
 * termynal.js
 *
 * @author Ines Montani <ines@ines.io>
 * @version 0.0.1
 * @license MIT
 */

:root {
    --color-bg: #0c0c0c;
    --color-text: #f2f2f2;
    --color-text-subtle: #a2a2a2;
}

[data-termynal] {
    width: 750px;
    max-width: 100%;
    background: var(--color-bg);
    color: var(--color-text);
    /* font-size: 18px; */
    font-size: 15px;
    /* font-family: 'Fira Mono', Consolas, Menlo, Monaco, 'Courier New', Courier, monospace; */
    font-family: 'Roboto Mono', 'Fira Mono', Consolas, Menlo, Monaco, 'Courier New', Courier, monospace;
    border-radius: 4px;
    padding: 75px 45px 35px;
    position: relative;
    -webkit-box-sizing: border-box;
            box-sizing: border-box;
    line-height: 1.2;
}

[data-termynal]:before {
    content: '';
    position: absolute;
    top: 15px;
    left: 15px;
    display: inline-block;
    width: 15px;
    height: 15px;
    border-radius: 50%;
    /* A little hack to display the window buttons in one pseudo element. */
    background: #d9515d;
    -webkit-box-shadow: 25px 0 0 #f4c025, 50px 0 0 #3ec930;
            box-shadow: 25px 0 0 #f4c025, 50px 0 0 #3ec930;
}

[data-termynal]:after {
    content: 'bash';
    position: absolute;
    color: var(--color-text-subtle);
    top: 5px;
    left: 0;
    width: 100%;
    text-align: center;
}

a[data-terminal-control] {
    text-align: right;
    display: block;
    color: #aebbff;
}

[data-ty] {
    display: block;
    line-height: 2;
}

[data-ty]:before {
    /* Set up defaults and ensure empty lines are displayed. */
    content: '';
    display: inline-block;
    vertical-align: middle;
}

[data-ty="input"]:before,
[data-ty-prompt]:before {
    margin-right: 0.75em;
    color: var(--color-text-subtle);
}

[data-ty="input"]:before {
    content: '$';
}

[data-ty][data-ty-prompt]:before {
    content: attr(data-ty-prompt);
}

[data-ty-cursor]:after {
    content: attr(data-ty-cursor);
    font-family: monospace;
    margin-left: 0.5em;
    -webkit-animation: blink 1s infinite;
            animation: blink 1s infinite;
}


/* Cursor animation */

@-webkit-keyframes blink {
    50% {
        opacity: 0;
    }
}

@keyframes blink {
    50% {
        opacity: 0;
    }
}


================================================
FILE: docs/source/_static/css/termynal_custom.css
================================================
.termynal-comment {
    color: #4a968f;
    font-style: italic;
    display: block;
}

.termy [data-termynal] {
    white-space: pre-wrap;
}

a.external-link::after {
    /* \00A0 is a non-breaking space
        to make the mark be on the same line as the link
    */
    content: "\00A0[↪]";
}

a.internal-link::after {
    /* \00A0 is a non-breaking space
        to make the mark be on the same line as the link
    */
    content: "\00A0↪";
}

:root {
    --termynal-green: #137C39;
    --termynal-red: #BF2D2D;
    --termynal-yellow: #F4C041;
    --termynal-white: #f2f2f2;
    --termynal-black: #0c0c0c;
    --termynal-blue: #11a8cd;
    --termynal-grey: #7f7f7f;
}

.termynal-failed {
    color: var(--termynal-red);
}

.termynal-failed-textonly {
    color: var(--termynal-white);
    background: var(--termynal-red);
    font-weight: bold;
}

.termynal-success {
    color: var(--termynal-green);
}

.termynal-success-textonly {
    color: var(--termynal-white);
    background: var(--termynal-green);
    font-weight: bold;
}

.termynal-skipped {
    color: var(--termynal-yellow);
}

.termynal-skipped-textonly {
    color: var(--termynal-black);
    background: var(--termynal-yellow);
    font-weight: bold;
}

.termynal-warning {
    color: var(--termynal-yellow);
}

.termynal-command {
    color: var(--termynal-green);
    font-weight: bold;
}

.termynal-option {
    color: var(--termynal-yellow);
    font-weight: bold;
}

.termynal-switch {
    color: var(--termynal-red);
    font-weight: bold;
}

.termynal-metavar {
    color: yellow;
    font-weight: bold;
}

.termynal-dim {
    color: var(--termynal-grey);
}

.termynal-number {
    color: var(--termynal-blue);
}


================================================
FILE: docs/source/_static/js/custom.js
================================================
/*

The following code is copied from https://github.com/tiangolo/typer.

The MIT License (MIT)

Copyright (c) 2019 Sebastián Ramírez

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

*/

document.querySelectorAll(".use-termynal").forEach(node => {
    node.style.display = "block";
    new Termynal(node, {
        lineDelay: 500
    });
});
const progressLiteralStart = "---> 100%";
const promptLiteralStart = "$ ";
const customPromptLiteralStart = "# ";
const termynalActivateClass = "termy";
let termynals = [];

function createTermynals() {
    document
        .querySelectorAll(`.${termynalActivateClass} .highlight`)
        .forEach(node => {
            const text = node.textContent;
            const lines = text.split("\n");
            const useLines = [];
            let buffer = [];
            function saveBuffer() {
                if (buffer.length) {
                    let isBlankSpace = true;
                    buffer.forEach(line => {
                        if (line) {
                            isBlankSpace = false;
                        }
                    });
                    dataValue = {};
                    if (isBlankSpace) {
                        dataValue["delay"] = 0;
                    }
                    if (buffer[buffer.length - 1] === "") {
                        // A last single <br> won't have effect
                        // so put an additional one
                        buffer.push("");
                    }
                    const bufferValue = buffer.join("<br>");
                    dataValue["value"] = bufferValue;
                    useLines.push(dataValue);
                    buffer = [];
                }
            }
            for (let line of lines) {
                if (line === progressLiteralStart) {
                    saveBuffer();
                    useLines.push({
                        type: "progress"
                    });
                } else if (line.startsWith(promptLiteralStart)) {
                    saveBuffer();
                    const value = line.replace(promptLiteralStart, "").trimEnd();
                    useLines.push({
                        type: "input",
                        value: value
                    });
                } else if (line.startsWith("// ")) {
                    saveBuffer();
                    const value = "💬 " + line.replace("// ", "").trimEnd();
                    useLines.push({
                        value: value,
                        class: "termynal-comment",
                        delay: 0
                    });
                } else if (line.startsWith(customPromptLiteralStart)) {
                    saveBuffer();
                    const promptStart = line.indexOf(promptLiteralStart);
                    if (promptStart === -1) {
                        console.error("Custom prompt found but no end delimiter", line);
                    }
                    const prompt = line.slice(0, promptStart).replace(customPromptLiteralStart, "");
                    let value = line.slice(promptStart + promptLiteralStart.length);
                    useLines.push({
                        type: "input",
                        value: value,
                        prompt: prompt
                    });
                } else {
                    buffer.push(line);
                }
            }
            saveBuffer();
            const div = document.createElement("div");
            node.replaceWith(div);
            const termynal = new Termynal(div, {
                lineData: useLines,
                noInit: true,
                lineDelay: 500
            });
            termynals.push(termynal);
        });
}

function loadVisibleTermynals() {
    termynals = termynals.filter(termynal => {
        if (termynal.container.getBoundingClientRect().top - innerHeight <= 0) {
            termynal.init();
            return false;
        }
        return true;
    });
}
window.addEventListener("scroll", loadVisibleTermynals);
createTermynals();
loadVisibleTermynals();
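
// A standalone sketch of the line-classification convention used by
// createTermynals above. It is simplified (it omits the buffering of plain
// output lines into multi-line entries) and is not part of the original file,
// but it is runnable in Node for quick verification of the prefix rules:
//   "---> 100%"   -> animated progress bar
//   "$ "          -> typed input with the default prompt
//   "// "         -> comment, shown instantly with a speech-bubble marker
//   "# ... $ "    -> typed input with a custom prompt
//   anything else -> plain output
function classifyLine(line) {
    if (line === "---> 100%") {
        return { type: "progress" };
    }
    if (line.startsWith("$ ")) {
        return { type: "input", value: line.replace("$ ", "").trimEnd() };
    }
    if (line.startsWith("// ")) {
        return {
            value: "💬 " + line.replace("// ", "").trimEnd(),
            class: "termynal-comment",
            delay: 0,
        };
    }
    if (line.startsWith("# ")) {
        // Everything between "# " and "$ " becomes the custom prompt.
        const promptStart = line.indexOf("$ ");
        return {
            type: "input",
            value: line.slice(promptStart + 2),
            prompt: line.slice(0, promptStart).replace("# ", ""),
        };
    }
    return { value: line };
}
// classifyLine("$ pip install optimagic")
//   -> { type: "input", value: "pip install optimagic" }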


================================================
FILE: docs/source/_static/js/require.js
================================================
/** vim: et:ts=4:sw=4:sts=4
 * @license RequireJS 2.3.7 Copyright jQuery Foundation and other contributors.
 * Released under MIT license, https://github.com/requirejs/requirejs/blob/master/LICENSE
 */
var requirejs,require,define;!function(global,setTimeout){var req,s,head,baseElement,dataMain,src,interactiveScript,currentlyAddingScript,mainScript,subPath,version="2.3.7",commentRegExp=/\/\*[\s\S]*?\*\/|([^:"'=]|^)\/\/.*$/gm,cjsRequireRegExp=/[^.]\s*require\s*\(\s*["']([^'"\s]+)["']\s*\)/g,jsSuffixRegExp=/\.js$/,currDirRegExp=/^\.\//,op=Object.prototype,ostring=op.toString,hasOwn=op.hasOwnProperty,isBrowser=!("undefined"==typeof window||"undefined"==typeof navigator||!window.document),isWebWorker=!isBrowser&&"undefined"!=typeof importScripts,readyRegExp=isBrowser&&"PLAYSTATION 3"===navigator.platform?/^complete$/:/^(complete|loaded)$/,defContextName="_",isOpera="undefined"!=typeof opera&&"[object Opera]"===opera.toString(),contexts={},cfg={},globalDefQueue=[],useInteractive=!1,disallowedProps=["__proto__","constructor"];function commentReplace(e,t){return t||""}function isFunction(e){return"[object Function]"===ostring.call(e)}function isArray(e){return"[object Array]"===ostring.call(e)}function each(e,t){if(e)for(var i=0;i<e.length&&(!e[i]||!t(e[i],i,e));i+=1);}function eachReverse(e,t){if(e)for(var i=e.length-1;-1<i&&(!e[i]||!t(e[i],i,e));--i);}function hasProp(e,t){return hasOwn.call(e,t)}function getOwn(e,t){return hasProp(e,t)&&e[t]}function eachProp(e,t){for(var i in e)if(hasProp(e,i)&&-1==disallowedProps.indexOf(i)&&t(e[i],i))break}function mixin(i,e,r,n){e&&eachProp(e,function(e,t){!r&&hasProp(i,t)||(!n||"object"!=typeof e||!e||isArray(e)||isFunction(e)||e instanceof RegExp?i[t]=e:(i[t]||(i[t]={}),mixin(i[t],e,r,n)))})}function bind(e,t){return function(){return t.apply(e,arguments)}}function scripts(){return document.getElementsByTagName("script")}function defaultOnError(e){throw e}function getGlobal(e){var t;return e&&(t=global,each(e.split("."),function(e){t=t[e]}),t)}function makeError(e,t,i,r){t=new Error(t+"\nhttps://requirejs.org/docs/errors.html#"+e);return t.requireType=e,t.requireModules=r,i&&(t.originalError=i),t}if(void 0===define){if(void 
0!==requirejs){if(isFunction(requirejs))return;cfg=requirejs,requirejs=void 0}void 0===require||isFunction(require)||(cfg=require,require=void 0),req=requirejs=function(e,t,i,r){var n,o=defContextName;return isArray(e)||"string"==typeof e||(n=e,isArray(t)?(e=t,t=i,i=r):e=[]),n&&n.context&&(o=n.context),r=(r=getOwn(contexts,o))||(contexts[o]=req.s.newContext(o)),n&&r.configure(n),r.require(e,t,i)},req.config=function(e){return req(e)},req.nextTick=void 0!==setTimeout?function(e){setTimeout(e,4)}:function(e){e()},require=require||req,req.version=version,req.jsExtRegExp=/^\/|:|\?|\.js$/,req.isBrowser=isBrowser,s=req.s={contexts:contexts,newContext:newContext},req({}),each(["toUrl","undef","defined","specified"],function(t){req[t]=function(){var e=contexts[defContextName];return e.require[t].apply(e,arguments)}}),isBrowser&&(head=s.head=document.getElementsByTagName("head")[0],baseElement=document.getElementsByTagName("base")[0],baseElement)&&(head=s.head=baseElement.parentNode),req.onError=defaultOnError,req.createNode=function(e,t,i){var r=e.xhtml?document.createElementNS("http://www.w3.org/1999/xhtml","html:script"):document.createElement("script");return r.type=e.scriptType||"text/javascript",r.charset="utf-8",r.async=!0,r},req.load=function(t,i,r){var e,n=t&&t.config||{};if(isBrowser)return(e=req.createNode(n,i,r)).setAttribute("data-requirecontext",t.contextName),e.setAttribute("data-requiremodule",i),!e.attachEvent||e.attachEvent.toString&&e.attachEvent.toString().indexOf("[native 
code")<0||isOpera?(e.addEventListener("load",t.onScriptLoad,!1),e.addEventListener("error",t.onScriptError,!1)):(useInteractive=!0,e.attachEvent("onreadystatechange",t.onScriptLoad)),e.src=r,n.onNodeCreated&&n.onNodeCreated(e,n,i,r),currentlyAddingScript=e,baseElement?head.insertBefore(e,baseElement):head.appendChild(e),currentlyAddingScript=null,e;if(isWebWorker)try{setTimeout(function(){},0),importScripts(r),t.completeLoad(i)}catch(e){t.onError(makeError("importscripts","importScripts failed for "+i+" at "+r,e,[i]))}},isBrowser&&!cfg.skipDataMain&&eachReverse(scripts(),function(e){if(head=head||e.parentNode,dataMain=e.getAttribute("data-main"))return mainScript=dataMain,cfg.baseUrl||-1!==mainScript.indexOf("!")||(mainScript=(src=mainScript.split("/")).pop(),subPath=src.length?src.join("/")+"/":"./",cfg.baseUrl=subPath),mainScript=mainScript.replace(jsSuffixRegExp,""),req.jsExtRegExp.test(mainScript)&&(mainScript=dataMain),cfg.deps=cfg.deps?cfg.deps.concat(mainScript):[mainScript],!0}),define=function(e,i,t){var r,n;"string"!=typeof e&&(t=i,i=e,e=null),isArray(i)||(t=i,i=null),!i&&isFunction(t)&&(i=[],t.length)&&(t.toString().replace(commentRegExp,commentReplace).replace(cjsRequireRegExp,function(e,t){i.push(t)}),i=(1===t.length?["require"]:["require","exports","module"]).concat(i)),useInteractive&&(r=currentlyAddingScript||getInteractiveScript())&&(e=e||r.getAttribute("data-requiremodule"),n=contexts[r.getAttribute("data-requirecontext")]),n?(n.defQueue.push([e,i,t]),n.defQueueMap[e]=!0):globalDefQueue.push([e,i,t])},define.amd={jQuery:!0},req.exec=function(text){return eval(text)},req(cfg)}function newContext(u){var t,e,f,c,i,b={waitSeconds:7,baseUrl:"./",paths:{},bundles:{},pkgs:{},shim:{},config:{}},d={},p={},r={},l=[],h={},n={},m={},g=1,x=1;function v(e,t,i){var r,n,o,a,s,u,c,d,p,f=t&&t.split("/"),l=b.map,h=l&&l["*"];if(e){t=(e=e.split("/")).length-1,b.nodeIdCompat&&jsSuffixRegExp.test(e[t])&&(e[t]=e[t].replace(jsSuffixRegExp,""));for(var 
m,g=e="."===e[0].charAt(0)&&f?f.slice(0,f.length-1).concat(e):e,x=0;x<g.length;x++)"."===(m=g[x])?(g.splice(x,1),--x):".."!==m||0===x||1===x&&".."===g[2]||".."===g[x-1]||0<x&&(g.splice(x-1,2),x-=2);e=e.join("/")}if(i&&l&&(f||h)){e:for(o=(n=e.split("/")).length;0<o;--o){if(s=n.slice(0,o).join("/"),f)for(a=f.length;0<a;--a)if(r=(r=getOwn(l,f.slice(0,a).join("/")))&&getOwn(r,s)){u=r,c=o;break e}!d&&h&&getOwn(h,s)&&(d=getOwn(h,s),p=o)}!u&&d&&(u=d,c=p),u&&(n.splice(0,c,u),e=n.join("/"))}return getOwn(b.pkgs,e)||e}function q(t){isBrowser&&each(scripts(),function(e){if(e.getAttribute("data-requiremodule")===t&&e.getAttribute("data-requirecontext")===f.contextName)return e.parentNode.removeChild(e),!0})}function E(e){var t=getOwn(b.paths,e);return t&&isArray(t)&&1<t.length&&(t.shift(),f.require.undef(e),f.makeRequire(null,{skipMap:!0})([e]),1)}function w(e){var t,i=e?e.indexOf("!"):-1;return-1<i&&(t=e.substring(0,i),e=e.substring(i+1,e.length)),[t,e]}function y(e,t,i,r){var n,o,a,s=null,u=t?t.name:null,c=e,d=!0,p="";return e||(d=!1,e="_@r"+(g+=1)),s=(a=w(e))[0],e=a[1],s&&(s=v(s,u,r),o=getOwn(h,s)),e&&(s?p=i?e:o&&o.normalize?o.normalize(e,function(e){return v(e,u,r)}):-1===e.indexOf("!")?v(e,u,r):e:(s=(a=w(p=v(e,u,r)))[0],i=!0,n=f.nameToUrl(p=a[1]))),{prefix:s,name:p,parentMap:t,unnormalized:!!(e=!s||o||i?"":"_unnormalized"+(x+=1)),url:n,originalName:c,isDefine:d,id:(s?s+"!"+p:p)+e}}function S(e){var t=e.id;return getOwn(d,t)||(d[t]=new f.Module(e))}function k(e,t,i){var r=e.id,n=getOwn(d,r);!hasProp(h,r)||n&&!n.defineEmitComplete?(n=S(e)).error&&"error"===t?i(n.error):n.on(t,i):"defined"===t&&i(h[r])}function M(t,e){var i=t.requireModules,r=!1;e?e(t):(each(i,function(e){e=getOwn(d,e);e&&(e.error=t,e.events.error)&&(r=!0,e.emit("error",t))}),r||req.onError(t))}function O(){globalDefQueue.length&&(each(globalDefQueue,function(e){var t=e[0];"string"==typeof t&&(f.defQueueMap[t]=!0),l.push(e)}),globalDefQueue=[])}function j(e){delete d[e],delete p[e]}function P(){var 
r,e=1e3*b.waitSeconds,n=e&&f.startTime+e<(new Date).getTime(),o=[],a=[],s=!1,u=!0;if(!t){if(t=!0,eachProp(p,function(e){var t=e.map,i=t.id;if(e.enabled&&(t.isDefine||a.push(e),!e.error))if(!e.inited&&n)E(i)?s=r=!0:(o.push(i),q(i));else if(!e.inited&&e.fetched&&t.isDefine&&(s=!0,!t.prefix))return u=!1}),n&&o.length)return(e=makeError("timeout","Load timeout for modules: "+o,null,o)).contextName=f.contextName,M(e);u&&each(a,function(e){!function r(n,o,a){var e=n.map.id;n.error?n.emit("error",n.error):(o[e]=!0,each(n.depMaps,function(e,t){var e=e.id,i=getOwn(d,e);!i||n.depMatched[t]||a[e]||(getOwn(o,e)?(n.defineDep(t,h[e]),n.check()):r(i,o,a))}),a[e]=!0)}(e,{},{})}),n&&!r||!s||(isBrowser||isWebWorker)&&(i=i||setTimeout(function(){i=0,P()},50)),t=!1}}function a(e){hasProp(h,e[0])||S(y(e[0],null,!0)).init(e[1],e[2])}function o(e,t,i,r){e.detachEvent&&!isOpera?r&&e.detachEvent(r,t):e.removeEventListener(i,t,!1)}function s(e){e=e.currentTarget||e.srcElement;return o(e,f.onScriptLoad,"load","onreadystatechange"),o(e,f.onScriptError,"error"),{node:e,id:e&&e.getAttribute("data-requiremodule")}}function R(){var e;for(O();l.length;){if(null===(e=l.shift())[0])return M(makeError("mismatch","Mismatched anonymous define() module: "+e[e.length-1]));a(e)}f.defQueueMap={}}return c={require:function(e){return e.require||(e.require=f.makeRequire(e.map))},exports:function(e){if(e.usingExports=!0,e.map.isDefine)return e.exports?h[e.map.id]=e.exports:e.exports=h[e.map.id]={}},module:function(e){return e.module||(e.module={id:e.map.id,uri:e.map.url,config:function(){return 
getOwn(b.config,e.map.id)||{}},exports:e.exports||(e.exports={})})}},(e=function(e){this.events=getOwn(r,e.id)||{},this.map=e,this.shim=getOwn(b.shim,e.id),this.depExports=[],this.depMaps=[],this.depMatched=[],this.pluginMaps={},this.depCount=0}).prototype={init:function(e,t,i,r){r=r||{},this.inited||(this.factory=t,i?this.on("error",i):this.events.error&&(i=bind(this,function(e){this.emit("error",e)})),this.depMaps=e&&e.slice(0),this.errback=i,this.inited=!0,this.ignore=r.ignore,r.enabled||this.enabled?this.enable():this.check())},defineDep:function(e,t){this.depMatched[e]||(this.depMatched[e]=!0,--this.depCount,this.depExports[e]=t)},fetch:function(){if(!this.fetched){this.fetched=!0,f.startTime=(new Date).getTime();var e=this.map;if(!this.shim)return e.prefix?this.callPlugin():this.load();f.makeRequire(this.map,{enableBuildCallback:!0})(this.shim.deps||[],bind(this,function(){return e.prefix?this.callPlugin():this.load()}))}},load:function(){var e=this.map.url;n[e]||(n[e]=!0,f.load(this.map.id,e))},check:function(){if(this.enabled&&!this.enabling){var t,i,e=this.map.id,r=this.depExports,n=this.exports,o=this.factory;if(this.inited){if(this.error)this.emit("error",this.error);else if(!this.defining){if(this.defining=!0,this.depCount<1&&!this.defined){if(isFunction(o)){if(this.events.error&&this.map.isDefine||req.onError!==defaultOnError)try{n=f.execCb(e,o,r,n)}catch(e){t=e}else n=f.execCb(e,o,r,n);if(this.map.isDefine&&void 0===n&&((r=this.module)?n=r.exports:this.usingExports&&(n=this.exports)),t)return t.requireMap=this.map,t.requireModules=this.map.isDefine?[this.map.id]:null,t.requireType=this.map.isDefine?"define":"require",M(this.error=t)}else 
n=o;this.exports=n,this.map.isDefine&&!this.ignore&&(h[e]=n,req.onResourceLoad)&&(i=[],each(this.depMaps,function(e){i.push(e.normalizedMap||e)}),req.onResourceLoad(f,this.map,i)),j(e),this.defined=!0}this.defining=!1,this.defined&&!this.defineEmitted&&(this.defineEmitted=!0,this.emit("defined",this.exports),this.defineEmitComplete=!0)}}else hasProp(f.defQueueMap,e)||this.fetch()}},callPlugin:function(){var s=this.map,u=s.id,e=y(s.prefix);this.depMaps.push(e),k(e,"defined",bind(this,function(e){var o,t,i=getOwn(m,this.map.id),r=this.map.name,n=this.map.parentMap?this.map.parentMap.name:null,a=f.makeRequire(s.parentMap,{enableBuildCallback:!0});this.map.unnormalized?(e.normalize&&(r=e.normalize(r,function(e){return v(e,n,!0)})||""),k(t=y(s.prefix+"!"+r,this.map.parentMap,!0),"defined",bind(this,function(e){this.map.normalizedMap=t,this.init([],function(){return e},null,{enabled:!0,ignore:!0})})),(r=getOwn(d,t.id))&&(this.depMaps.push(t),this.events.error&&r.on("error",bind(this,function(e){this.emit("error",e)})),r.enable())):i?(this.map.url=f.nameToUrl(i),this.load()):((o=bind(this,function(e){this.init([],function(){return e},null,{enabled:!0})})).error=bind(this,function(e){this.inited=!0,(this.error=e).requireModules=[u],eachProp(d,function(e){0===e.map.id.indexOf(u+"_unnormalized")&&j(e.map.id)}),M(e)}),o.fromText=bind(this,function(e,t){var i=s.name,r=y(i),n=useInteractive;t&&(e=t),n&&(useInteractive=!1),S(r),hasProp(b.config,u)&&(b.config[i]=b.config[u]);try{req.exec(e)}catch(e){return M(makeError("fromtexteval","fromText eval for "+u+" failed: "+e,e,[u]))}n&&(useInteractive=!0),this.depMaps.push(r),f.completeLoad(i),a([i],o)}),e.load(s.name,a,o,b))})),f.enable(e,this),this.pluginMaps[e.id]=e},enable:function(){(p[this.map.id]=this).enabled=!0,this.enabling=!0,each(this.depMaps,bind(this,function(e,t){var i,r;if("string"==typeof e){if(e=y(e,this.map.isDefine?this.map:this.map.parentMap,!1,!this.skipMap),this.depMaps[t]=e,r=getOwn(c,e.id))return 
void(this.depExports[t]=r(this));this.depCount+=1,k(e,"defined",bind(this,function(e){this.undefed||(this.defineDep(t,e),this.check())})),this.errback?k(e,"error",bind(this,this.errback)):this.events.error&&k(e,"error",bind(this,function(e){this.emit("error",e)}))}r=e.id,i=d[r],hasProp(c,r)||!i||i.enabled||f.enable(e,this)})),eachProp(this.pluginMaps,bind(this,function(e){var t=getOwn(d,e.id);t&&!t.enabled&&f.enable(e,this)})),this.enabling=!1,this.check()},on:function(e,t){(this.events[e]||(this.events[e]=[])).push(t)},emit:function(e,t){each(this.events[e],function(e){e(t)}),"error"===e&&delete this.events[e]}},(f={config:b,contextName:u,registry:d,defined:h,urlFetched:n,defQueue:l,defQueueMap:{},Module:e,makeModuleMap:y,nextTick:req.nextTick,onError:M,configure:function(e){e.baseUrl&&"/"!==e.baseUrl.charAt(e.baseUrl.length-1)&&(e.baseUrl+="/"),"string"==typeof e.urlArgs&&(i=e.urlArgs,e.urlArgs=function(e,t){return(-1===t.indexOf("?")?"?":"&")+i});var i,r=b.shim,n={paths:!0,bundles:!0,config:!0,map:!0};eachProp(e,function(e,t){n[t]?(b[t]||(b[t]={}),mixin(b[t],e,!0,!0)):b[t]=e}),e.bundles&&eachProp(e.bundles,function(e,t){each(e,function(e){e!==t&&(m[e]=t)})}),e.shim&&(eachProp(e.shim,function(e,t){!(e=isArray(e)?{deps:e}:e).exports&&!e.init||e.exportsFn||(e.exportsFn=f.makeShimExports(e)),r[t]=e}),b.shim=r),e.packages&&each(e.packages,function(e){var t=(e="string"==typeof e?{name:e}:e).name;e.location&&(b.paths[t]=e.location),b.pkgs[t]=e.name+"/"+(e.main||"main").replace(currDirRegExp,"").replace(jsSuffixRegExp,"")}),eachProp(d,function(e,t){e.inited||e.map.unnormalized||(e.map=y(t,null,!0))}),(e.deps||e.callback)&&f.require(e.deps||[],e.callback)},makeShimExports:function(t){return function(){var e;return(e=t.init?t.init.apply(global,arguments):e)||t.exports&&getGlobal(t.exports)}},makeRequire:function(o,a){function s(e,t,i){var r,n;return a.enableBuildCallback&&t&&isFunction(t)&&(t.__requireJsBuild=!0),"string"==typeof 
e?isFunction(t)?M(makeError("requireargs","Invalid require call"),i):o&&hasProp(c,e)?c[e](d[o.id]):req.get?req.get(f,e,o,s):(r=y(e,o,!1,!0).id,hasProp(h,r)?h[r]:M(makeError("notloaded",'Module name "'+r+'" has not been loaded yet for context: '+u+(o?"":". Use require([])")))):(R(),f.nextTick(function(){R(),(n=S(y(null,o))).skipMap=a.skipMap,n.init(e,t,i,{enabled:!0}),P()}),s)}return a=a||{},mixin(s,{isBrowser:isBrowser,toUrl:function(e){var t,i=e.lastIndexOf("."),r=e.split("/")[0];return-1!==i&&(!("."===r||".."===r)||1<i)&&(t=e.substring(i,e.length),e=e.substring(0,i)),f.nameToUrl(v(e,o&&o.id,!0),t,!0)},defined:function(e){return hasProp(h,y(e,o,!1,!0).id)},specified:function(e){return e=y(e,o,!1,!0).id,hasProp(h,e)||hasProp(d,e)}}),o||(s.undef=function(i){O();var e=y(i,o,!0),t=getOwn(d,i);t.undefed=!0,q(i),delete h[i],delete n[e.url],delete r[i],eachReverse(l,function(e,t){e[0]===i&&l.splice(t,1)}),delete f.defQueueMap[i],t&&(t.events.defined&&(r[i]=t.events),j(i))}),s},enable:function(e){getOwn(d,e.id)&&S(e).enable()},completeLoad:function(e){var t,i,r,n=getOwn(b.shim,e)||{},o=n.exports;for(O();l.length;){if(null===(i=l.shift())[0]){if(i[0]=e,t)break;t=!0}else i[0]===e&&(t=!0);a(i)}if(f.defQueueMap={},r=getOwn(d,e),!t&&!hasProp(h,e)&&r&&!r.inited){if(!(!b.enforceDefine||o&&getGlobal(o)))return E(e)?void 0:M(makeError("nodefine","No define call for "+e,null,[e]));a([e,n.deps||[],n.exportsFn])}P()},nameToUrl:function(e,t,i){var r,n,o,a,s,u=getOwn(b.pkgs,e);if(u=getOwn(m,e=u?u:e))return f.nameToUrl(u,t,i);if(req.jsExtRegExp.test(e))a=e+(t||"");else{for(r=b.paths,o=(n=e.split("/")).length;0<o;--o)if(s=getOwn(r,n.slice(0,o).join("/"))){isArray(s)&&(s=s[0]),n.splice(0,o,s);break}a=n.join("/"),a=("/"===(a+=t||(/^data\:|^blob\:|\?/.test(a)||i?"":".js")).charAt(0)||a.match(/^[\w\+\.\-]+:/)?"":b.baseUrl)+a}return b.urlArgs&&!/^blob\:/.test(a)?a+b.urlArgs(e,a):a},load:function(e,t){req.load(f,e,t)},execCb:function(e,t,i,r){return 
t.apply(r,i)},onScriptLoad:function(e){"load"!==e.type&&!readyRegExp.test((e.currentTarget||e.srcElement).readyState)||(interactiveScript=null,e=s(e),f.completeLoad(e.id))},onScriptError:function(e){var i,r=s(e);if(!E(r.id))return i=[],eachProp(d,function(e,t){0!==t.indexOf("_@r")&&each(e.depMaps,function(e){if(e.id===r.id)return i.push(t),!0})}),M(makeError("scripterror",'Script error for "'+r.id+(i.length?'", needed by: '+i.join(", "):'"'),e,[r.id]))}}).require=f.makeRequire(),f}function getInteractiveScript(){return interactiveScript&&"interactive"===interactiveScript.readyState||eachReverse(scripts(),function(e){if("interactive"===e.readyState)return interactiveScript=e}),interactiveScript}}(this,"undefined"==typeof setTimeout?void 0:setTimeout);


================================================
FILE: docs/source/_static/js/termynal.js
================================================
/*

The original author of the file is Ines Montani.

termynal.js
A lightweight, modern and extensible animated terminal window, using
async/await.

@author Ines Montani <ines@ines.io>
@version 0.0.1
@license MIT

Additions were made by https://github.com/tiangolo/typer.

The MIT License (MIT)

Copyright (c) 2019 Sebastián Ramírez

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

*/

'use strict';

/** Generate a terminal widget. */
class Termynal {
    /**
     * Construct the widget's settings.
     * @param {(string|Node)=} container - Query selector or container element.
     * @param {Object=} options - Custom settings.
     * @param {string} options.prefix - Prefix to use for data attributes.
     * @param {number} options.startDelay - Delay before animation, in ms.
     * @param {number} options.typeDelay - Delay between each typed character, in ms.
     * @param {number} options.lineDelay - Delay between each line, in ms.
     * @param {number} options.progressLength - Number of characters displayed as progress bar.
     * @param {string} options.progressChar - Character to use for progress bar, defaults to █.
     * @param {number} options.progressPercent - Max percent of progress.
     * @param {string} options.cursor - Character to use for cursor, defaults to ▋.
     * @param {Object[]} options.lineData - Dynamically loaded line data objects.
     * @param {boolean} options.noInit - Don't initialise the animation.
     */
    constructor(container = '#termynal', options = {}) {
        this.container = (typeof container === 'string') ? document.querySelector(container) : container;
        this.pfx = `data-${options.prefix || 'ty'}`;
        this.originalStartDelay = this.startDelay = options.startDelay
            || parseFloat(this.container.getAttribute(`${this.pfx}-startDelay`)) || 600;
        this.originalTypeDelay = this.typeDelay = options.typeDelay
            || parseFloat(this.container.getAttribute(`${this.pfx}-typeDelay`)) || 90;
        this.originalLineDelay = this.lineDelay = options.lineDelay
            || parseFloat(this.container.getAttribute(`${this.pfx}-lineDelay`)) || 1500;
        this.progressLength = options.progressLength
            || parseFloat(this.container.getAttribute(`${this.pfx}-progressLength`)) || 40;
        this.progressChar = options.progressChar
            || this.container.getAttribute(`${this.pfx}-progressChar`) || '█';
        this.progressPercent = options.progressPercent
            || parseFloat(this.container.getAttribute(`${this.pfx}-progressPercent`)) || 100;
        this.cursor = options.cursor
            || this.container.getAttribute(`${this.pfx}-cursor`) || '▋';
        this.lineData = this.lineDataToElements(options.lineData || []);
        this.loadLines()
        if (!options.noInit) this.init()
    }

    loadLines() {
        // Load all the lines and create the container so that the size is fixed.
        // Otherwise the size would keep changing and the viewport would jump
        // around as the user scrolls.
        const finish = this.generateFinish()
        finish.style.visibility = 'hidden'
        this.container.appendChild(finish)
        // Appends dynamically loaded lines to existing line elements.
        this.lines = [...this.container.querySelectorAll(`[${this.pfx}]`)].concat(this.lineData);
        for (let line of this.lines) {
            line.style.visibility = 'hidden'
            this.container.appendChild(line)
        }
        const restart = this.generateRestart()
        restart.style.visibility = 'hidden'
        this.container.appendChild(restart)
        this.container.setAttribute('data-termynal', '');
    }

    /**
     * Initialise the widget, get lines, clear container and start animation.
     */
    init() {
        /**
         * Calculates width and height of Termynal container.
         * If container is empty and lines are dynamically loaded, defaults to browser `auto` or CSS.
         */
        const containerStyle = getComputedStyle(this.container);
        this.container.style.width = containerStyle.width !== '0px' ?
            containerStyle.width : undefined;
        this.container.style.minHeight = containerStyle.height !== '0px' ?
            containerStyle.height : undefined;

        this.container.setAttribute('data-termynal', '');
        this.container.innerHTML = '';
        for (let line of this.lines) {
            line.style.visibility = 'visible'
        }
        this.start();
    }

    /**
     * Start the animation and render the lines depending on their data attributes.
     */
    async start() {
        this.addFinish()
        await this._wait(this.startDelay);

        for (let line of this.lines) {
            const type = line.getAttribute(this.pfx);
            const delay = line.getAttribute(`${this.pfx}-delay`) || this.lineDelay;

            if (type == 'input') {
                line.setAttribute(`${this.pfx}-cursor`, this.cursor);
                await this.type(line);
                await this._wait(delay);
            }

            else if (type == 'progress') {
                await this.progress(line);
                await this._wait(delay);
            }

            else {
                this.container.appendChild(line);
                await this._wait(delay);
            }

            line.removeAttribute(`${this.pfx}-cursor`);
        }
        this.addRestart()
        this.finishElement.style.visibility = 'hidden'
        this.lineDelay = this.originalLineDelay
        this.typeDelay = this.originalTypeDelay
        this.startDelay = this.originalStartDelay
    }

    generateRestart() {
        const restart = document.createElement('a')
        restart.onclick = (e) => {
            e.preventDefault()
            this.container.innerHTML = ''
            this.init()
        }
        restart.href = '#'
        restart.setAttribute('data-terminal-control', '')
        restart.innerHTML = "restart ↻"
        return restart
    }

    generateFinish() {
        const finish = document.createElement('a')
        finish.onclick = (e) => {
            e.preventDefault()
            this.lineDelay = 0
            this.typeDelay = 0
            this.startDelay = 0
        }
        finish.href = '#'
        finish.setAttribute('data-terminal-control', '')
        finish.innerHTML = "fast →"
        this.finishElement = finish
        return finish
    }

    addRestart() {
        const restart = this.generateRestart()
        this.container.appendChild(restart)
    }

    addFinish() {
        const finish = this.generateFinish()
        this.container.appendChild(finish)
    }

    /**
     * Animate a typed line.
     * @param {Node} line - The line element to render.
     */
    async type(line) {
        const chars = [...line.textContent];
        line.textContent = '';
        this.container.appendChild(line);

        for (let char of chars) {
            const delay = line.getAttribute(`${this.pfx}-typeDelay`) || this.typeDelay;
            await this._wait(delay);
            line.textContent += char;
        }
    }

    /**
     * Animate a progress bar.
     * @param {Node} line - The line element to render.
     */
    async progress(line) {
        const progressLength = line.getAttribute(`${this.pfx}-progressLength`)
            || this.progressLength;
        const progressChar = line.getAttribute(`${this.pfx}-progressChar`)
            || this.progressChar;
        const chars = progressChar.repeat(progressLength);
        const progressPercent = line.getAttribute(`${this.pfx}-progressPercent`)
            || this.progressPercent;
        line.textContent = '';
        this.container.appendChild(line);

        for (let i = 1; i < chars.length + 1; i++) {
            await this._wait(this.typeDelay);
            const percent = Math.round(i / chars.length * 100);
            line.textContent = `${chars.slice(0, i)} ${percent}%`;
            if (percent > progressPercent) {
                break;
            }
        }
    }

    /**
     * Helper function for animation delays, called with `await`.
     * @param {number} time - Timeout, in ms.
     */
    _wait(time) {
        return new Promise(resolve => setTimeout(resolve, time));
    }

    /**
     * Converts line data objects into line elements.
     *
     * @param {Object[]} lineData - Dynamically loaded lines.
     * @param {Object} line - Line data object.
     * @returns {Element[]} - Array of line elements.
     */
    lineDataToElements(lineData) {
        return lineData.map(line => {
            let div = document.createElement('div');
            div.innerHTML = `<span ${this._attributes(line)}>${line.value || ''}</span>`;

            return div.firstElementChild;
        });
    }

    /**
     * Helper function for generating attributes string.
     *
     * @param {Object} line - Line data object.
     * @returns {string} - String of attributes.
     */
    _attributes(line) {
        let attrs = '';
        for (let prop in line) {
            // Custom add class
            if (prop === 'class') {
                attrs += ` class=${line[prop]} `
                continue
            }
            if (prop === 'type') {
                attrs += `${this.pfx}="${line[prop]}" `
            } else if (prop !== 'value') {
                attrs += `${this.pfx}-${prop}="${line[prop]}" `
            }
        }

        return attrs;
    }
}

/**
* HTML API: If current script has container(s) specified, initialise Termynal.
*/
if (document.currentScript.hasAttribute('data-termynal-container')) {
    const containers = document.currentScript.getAttribute('data-termynal-container');
    containers.split('|')
        .forEach(container => new Termynal(container))
}


================================================
FILE: docs/source/algorithms.md
================================================
(list_of_algorithms)=

# Optimizers

Check out {ref}`how-to-select-algorithms` to see how to select an algorithm and specify
`algo_options` when using `maximize` or `minimize`. The default algorithm options are
discussed in {ref}`algo_options` and their type hints are documented in {ref}`typing`.

## Optimizers from SciPy

(scipy-algorithms)=

optimagic supports most [SciPy](https://scipy.org/) algorithms, and SciPy is
automatically installed when you install optimagic.

```{eval-rst}
.. dropdown::  scipy_lbfgsb

    **How to use this algorithm:**

    .. code-block::

        import optimagic as om
        om.minimize(
          ...,
          algorithm=om.algos.scipy_lbfgsb(stopping_maxiter=1_000, ...)
        )
        
    or
        
    .. code-block::

        om.minimize(
          ...,
          algorithm="scipy_lbfgsb",
          algo_options={"stopping_maxiter": 1_000, ...}
        )

    **Description and available options:**

    .. autoclass:: optimagic.optimizers.scipy_optimizers.ScipyLBFGSB

```

```{eval-rst}
.. dropdown::  scipy_slsqp

    .. code-block::

        "scipy_slsqp"

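    **How to use this algorithm** (a sketch following the pattern shown for
    ``scipy_lbfgsb`` above; the available options are listed below):

    .. code-block::

        import optimagic as om
        om.minimize(
          ...,
          algorithm="scipy_slsqp",
          algo_options={"stopping.maxiter": 1_000, ...}
        )
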
    Minimize a scalar function of one or more variables using the SLSQP algorithm.

    SLSQP stands for Sequential Least Squares Programming.

    SLSQP is a line search algorithm. It is well suited for continuously
    differentiable scalar optimization problems with up to several hundred parameters.

    The optimizer is taken from SciPy, which wraps the SLSQP optimization subroutine
    originally implemented by :cite:`Kraft1988`.

    .. note::
        SLSQP's general nonlinear constraints are not supported yet by optimagic.

    - **convergence.ftol_abs** (float): Precision goal for the value of
      f in the stopping criterion.
    - **stopping.maxiter** (int): If the maximum number of iterations is reached,
      the optimization stops, but we do not count this as convergence.
    - **display** (bool): Set to True to print convergence messages. Default is False. Scipy name: **disp**.

```
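
Since optimagic delegates to scipy here, the wrapped routine can be sketched with a
direct `scipy.optimize.minimize` call. This is an illustrative sketch, not the optimagic
API; we read scipy's `ftol` and `maxiter` options as the counterparts of
`convergence.ftol_abs` and `stopping.maxiter` above.

```python
import numpy as np
from scipy.optimize import minimize

# A smooth scalar objective, the kind of problem SLSQP is suited for.
def sphere(x):
    return np.sum(x**2)

res = minimize(
    sphere,
    x0=np.array([2.0, -1.5]),
    method="SLSQP",
    options={"ftol": 1e-9, "maxiter": 1000},
)
```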

```{eval-rst}
.. dropdown::  scipy_neldermead

    .. code-block::

      "scipy_neldermead"

    Minimize a scalar function using the Nelder-Mead algorithm.

    The Nelder-Mead algorithm is a direct search method (based on function comparison)
    and is often applied to nonlinear optimization problems for which derivatives are
    not known.
    Unlike most modern optimization methods, the Nelder–Mead heuristic can converge to
    a non-stationary point, unless the problem satisfies stronger conditions than are
    necessary for modern methods.

    Nelder-Mead is never the best algorithm to solve a problem but rarely the worst.
    Its popularity is likely due to historic reasons and much larger than its
    properties warrant.

    The argument `initial_simplex` is not supported by optimagic as it is not
    compatible with optimagic's handling of constraints.

    - **stopping.maxiter** (int): If the maximum number of iterations is reached, the optimization stops,
      but we do not count this as convergence.
    - **stopping.maxfun** (int): If the maximum number of function evaluation is reached,
      the optimization stops but we do not count this as convergence.
    - **convergence.xtol_abs** (float): Absolute difference in parameters between iterations
      that is tolerated to declare convergence. As no relative tolerances can be passed to Nelder-Mead,
      optimagic sets a non-zero default for this.
    - **convergence.ftol_abs** (float): Absolute difference in the criterion value between
      iterations that is tolerated to declare convergence. As no relative tolerances can be passed to Nelder-Mead,
      optimagic sets a non-zero default for this.
    - **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.
    - **adaptive** (bool): Adapt algorithm parameters to dimensionality of problem.
      Useful for high-dimensional minimization (:cite:`Gao2012`, p. 259-277). scipy's default is False.

```
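
As a sketch of the wrapped scipy routine with the options discussed above: we take
scipy's `xatol` and `fatol` to correspond to `convergence.xtol_abs` and
`convergence.ftol_abs` (this mapping is our reading of the scipy docs, not a verbatim
optimagic API), and `adaptive` is passed through as documented.

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function: a classic derivative-free test problem.
def rosen(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

res = minimize(
    rosen,
    x0=np.array([-1.2, 1.0]),
    method="Nelder-Mead",
    options={"xatol": 1e-8, "fatol": 1e-8, "adaptive": True, "maxiter": 5000},
)
```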

```{eval-rst}
.. dropdown::  scipy_powell

    .. code-block::

        "scipy_powell"

    Minimize a scalar function using the modified Powell method.

    .. warning::
        In our benchmark using a quadratic objective function, the Powell algorithm
        did not find the optimum very precisely (less than 4 decimal places).
        If you require high precision, you should refine an optimum found with Powell
        with another local optimizer.

    The criterion function need not be differentiable.

    Powell's method is a conjugate direction method, minimizing the function by a
    bi-directional search in each parameter's dimension.

    The argument ``direc``, which is the initial set of direction vectors and which
    is part of the scipy interface is not supported by optimagic because it is
    incompatible with how optimagic handles constraints.

    - **convergence.xtol_rel (float)**: Stop when the relative movement between parameter
      vectors is smaller than this.
    - **convergence.ftol_rel** (float): Stop when the relative improvement between two
      iterations is smaller than this. More formally, this is expressed as

        .. math::

            \frac{f^k - f^{k+1}}{\max\{|f^k|, |f^{k+1}|, 1\}} \leq
            \text{convergence.ftol\_rel}

    - **stopping.maxfun** (int): If the maximum number of function evaluations is reached,
      the optimization stops, but we do not count this as convergence.
    - **stopping.maxiter** (int): If the maximum number of iterations is reached, the optimization stops,
      but we do not count this as convergence.
    - **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.

```
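
The refinement strategy from the warning above can be sketched with plain scipy: run
Powell first, then restart a gradient-based optimizer from Powell's solution. The
objective and optimizer choice here are illustrative, not a recommendation from
optimagic.

```python
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    return np.sum(x**2)

# First pass: derivative-free Powell search.
rough = minimize(sphere, x0=np.array([3.0, -2.0]), method="Powell")

# Second pass: refine the Powell solution with a gradient-based optimizer.
refined = minimize(sphere, x0=rough.x, method="L-BFGS-B")
```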

```{eval-rst}
.. dropdown::  scipy_bfgs

    .. code-block::

        "scipy_bfgs"

    Minimize a scalar function of one or more variables using the BFGS algorithm.

    BFGS stands for Broyden-Fletcher-Goldfarb-Shanno algorithm. It is a quasi-Newton
    method that can be used for solving unconstrained nonlinear optimization problems.

    BFGS is not guaranteed to converge unless the function has a quadratic Taylor
    expansion near an optimum. However, BFGS can have acceptable performance even
    for non-smooth optimization instances.

    - **convergence.gtol_abs** (float): Stop if all elements of the gradient are smaller than this.
    - **stopping.maxiter** (int): If the maximum number of iterations is reached, the optimization stops,
      but we do not count this as convergence.
    - **norm** (float): Order of the vector norm that is used to calculate the gradient's "score" that
      is compared to the gradient tolerance to determine convergence. Default is infinite which means that
      the largest entry of the gradient vector is compared to the gradient tolerance.
    - **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.
    - **convergence_xtol_rel** (float): Relative tolerance for `x`. Terminate successfully if step size is less than `xk * xrtol` where `xk` is the current parameter vector. Default is 1e-5. SciPy name: **xrtol**.
    - **armijo_condition** (float): Parameter for Armijo condition rule. Default is 1e-4. Ensures 

        .. math::

            f(x_k+\alpha p_k) \le f(x_k) \;+\mathrm{armijo\_condition}\,\cdot\,\alpha\,\nabla f(x_k)^\top p_k, 
        
      so each step yields at least a fraction **armijo_condition** of the predicted decrease. Smaller ⇒ more aggressive steps, larger ⇒ more conservative ones. SciPy name: **c1**.
    - **curvature_condition** (float): Parameter for curvature condition rule. Default is 0.9. Ensures 
      
        .. math::

            \nabla f(x_k+\alpha p_k)^\top p_k \ge \mathrm{curvature\_condition}\,\cdot\,\nabla f(x_k)^\top p_k, 
        
      so the new slope isn’t too negative. Smaller ⇒ stricter curvature reduction (smaller steps), larger ⇒ looser (bigger steps). SciPy name: **c2**.
```
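
The two line-search conditions above can be checked numerically. The sketch below uses
`scipy.optimize.line_search`, whose `c1` and `c2` parameters are the scipy names for
**armijo_condition** and **curvature_condition**; the quadratic objective is only an
illustration.

```python
import numpy as np
from scipy.optimize import line_search

def f(x):
    return np.sum(x**2)

def grad(x):
    return 2 * x

xk = np.array([2.0, 1.0])
pk = -grad(xk)  # steepest-descent direction
c1, c2 = 1e-4, 0.9

# line_search returns a step length satisfying the (strong) Wolfe conditions.
alpha = line_search(f, grad, xk, pk, c1=c1, c2=c2)[0]

# Verify the Armijo (sufficient decrease) and curvature conditions directly.
armijo = f(xk + alpha * pk) <= f(xk) + c1 * alpha * grad(xk) @ pk
curvature = grad(xk + alpha * pk) @ pk >= c2 * grad(xk) @ pk
```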

```{eval-rst}
.. dropdown::  scipy_conjugate_gradient

    .. code-block::

        "scipy_conjugate_gradient"

    Minimize a function using a nonlinear conjugate gradient algorithm.

    The conjugate gradient method finds functions' local optima using just the gradient.

    This conjugate gradient algorithm is based on that of Polak and Ribiere, detailed
    in :cite:`Nocedal2006`, pp. 120-122.

    Conjugate gradient methods tend to work better when:

      - the criterion has a unique global minimizing point, and no local minima or
        other stationary points.
      - the criterion is, at least locally, reasonably well approximated by a
        quadratic function.
      - the criterion is continuous and has a continuous gradient.
      - the gradient is not too large, e.g., has a norm less than 1000.
      - The initial guess is reasonably close to the criterion's global minimizer.

    - **convergence.gtol_abs** (float): Stop if all elements of the
      gradient are smaller than this.
    - **stopping.maxiter** (int): If the maximum number of iterations is reached,
      the optimization stops, but we do not count this as convergence.
    - **norm** (float): Order of the vector norm that is used to calculate the gradient's
      "score" that is compared to the gradient tolerance to determine convergence.
      Default is infinite which means that the largest entry of the gradient vector
      is compared to the gradient tolerance.
    - **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.

```

```{eval-rst}
.. dropdown::  scipy_newton_cg

    .. code-block::

        "scipy_newton_cg"

    Minimize a scalar function using Newton's conjugate gradient algorithm.

    .. warning::
        In our benchmark using a quadratic objective function, the truncated Newton
        algorithm did not find the optimum very precisely (less than 4 decimal places).
        If you require high precision, you should refine an optimum found with this
        algorithm with another local optimizer.

    Newton's conjugate gradient algorithm uses an approximation of the Hessian to find
    the minimum of a function. It is practical for small and large problems
    (see :cite:`Nocedal2006`, p. 140).

    Newton-CG methods are also called truncated Newton methods. This function differs
    from scipy_truncated_newton because

    - ``scipy_newton_cg``'s algorithm is written purely in Python using NumPy
      and scipy while ``scipy_truncated_newton``'s algorithm calls a C function.

    - ``scipy_newton_cg``'s algorithm is only for unconstrained minimization
      while ``scipy_truncated_newton``'s algorithm supports bounds.

    Conjugate gradient methods tend to work better when:

      - the criterion has a unique global minimizing point, and no local minima or
        other stationary points.
      - the criterion is, at least locally, reasonably well approximated by a
        quadratic function.
      - the criterion is continuous and has a continuous gradient.
      - the gradient is not too large, e.g., has a norm less than 1000.
      - The initial guess is reasonably close to the criterion's global minimizer.

    - **convergence.xtol_rel** (float): Stop when the relative movement
      between parameter vectors is smaller than this. Newton CG uses the average
      relative change in the parameters for determining the convergence.
    - **stopping.maxiter** (int): If the maximum number of iterations is reached,
      the optimization stops, but we do not count this as convergence.
    - **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.



```
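
Newton-CG requires a gradient and can optionally use a Hessian-vector product. A
minimal sketch with plain scipy (scipy's `xtol` matches the "average relative change"
tolerance described above; the sphere objective is illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.sum(x**2)

def grad(x):
    return 2 * x

# The Hessian of the sphere function is 2*I, so the Hessian-vector
# product is simply 2*p.
res = minimize(
    f,
    x0=np.array([1.5, -0.5, 2.0]),
    method="Newton-CG",
    jac=grad,
    hessp=lambda x, p: 2 * p,
    options={"xtol": 1e-9},
)
```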

```{eval-rst}
.. dropdown::  scipy_cobyla

  .. code-block::

      "scipy_cobyla"

  Minimize a scalar function of one or more variables using the COBYLA algorithm.

  COBYLA stands for Constrained Optimization By Linear Approximation.
  It is derivative-free and supports nonlinear inequality and equality constraints.

  .. note::
      Cobyla's general nonlinear constraints are not supported yet by optimagic.

  Scipy's implementation wraps the FORTRAN implementation of the algorithm.

  For more information on COBYLA see :cite:`Powell1994`, :cite:`Powell1998` and
  :cite:`Powell2007`.

  - **stopping.maxiter** (int): If the maximum number of iterations is reached,
    the optimization stops, but we do not count this as convergence.
  - **convergence.xtol_rel** (float): Stop when the relative movement
    between parameter vectors is smaller than this. In case of COBYLA this is
    a lower bound on the size of the trust region and can be seen as the
    required accuracy in the variables but this accuracy is not guaranteed.
  - **trustregion.initial_radius** (float): Initial value of the trust region radius.
    Since a linear approximation is likely only good near the current simplex,
    the linear program is given the further requirement that the solution,
    which will become the next evaluation point must be within a radius
    RHO_j from x_j. RHO_j only decreases, never increases. The initial RHO_j is
    the `trustregion.initial_radius`. In this way COBYLA's iterations behave
    like a trust region algorithm.
  - **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.

```
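
A sketch of the wrapped scipy routine: scipy's `rhobeg` option is what we take to
correspond to `trustregion.initial_radius` above, and the top-level `tol` sets the
final trust-region radius (the accuracy in the variables). The objective is
illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 1) ** 2 + (x[1] + 0.5) ** 2

res = minimize(
    f,
    x0=np.array([0.0, 0.0]),
    method="COBYLA",
    tol=1e-8,  # final trust-region radius RHO
    options={"rhobeg": 0.5, "maxiter": 1000},
)
```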

```{eval-rst}
.. dropdown::  scipy_truncated_newton

    .. code-block::

        "scipy_truncated_newton"

    Minimize a scalar function using truncated Newton algorithm.

    This function differs from scipy_newton_cg because

    - ``scipy_newton_cg``'s algorithm is written purely in Python using NumPy
      and scipy while ``scipy_truncated_newton``'s algorithm calls a C function.

    - ``scipy_newton_cg``'s algorithm is only for unconstrained minimization
      while ``scipy_truncated_newton``'s algorithm supports bounds.

    Conjugate gradient methods tend to work better when:

    - the criterion has a unique global minimizing point, and no local minima or
      other stationary points.
    - the criterion is, at least locally, reasonably well approximated by a
      quadratic function.
    - the criterion is continuous and has a continuous gradient.
    - the gradient is not too large, e.g., has a norm less than 1000.
    - The initial guess is reasonably close to the criterion's global minimizer.

    optimagic does not support the ``scale``  nor ``offset`` argument as they are not
    compatible with the way optimagic handles constraints. It also does not support
    ``messg_num`` which is an additional way to control the verbosity of the optimizer.

    - **func_min_estimate** (float): Minimum function value estimate. Defaults to 0.
    - **stopping.maxiter** (int): If the maximum number of iterations is reached,
      the optimization stops, but we do not count this as convergence.
    - **stopping.maxfun** (int): If the maximum number of function
      evaluation is reached, the optimization stops but we do not count this as
      convergence.
    - **convergence.xtol_abs** (float): Absolute difference in parameters
      between iterations after scaling that is tolerated to declare convergence.
    - **convergence.ftol_abs** (float): Absolute difference in the
      criterion value between iterations after scaling that is tolerated
      to declare convergence.
    - **convergence.gtol_abs** (float): Stop if the value of the
      projected gradient (after applying x scaling factors) is smaller than this.
      If convergence.gtol_abs < 0.0,
      convergence.gtol_abs is set to
      1e-2 * sqrt(accuracy).
    - **max_hess_evaluations_per_iteration** (int): Maximum number of hessian*vector
      evaluations per main iteration. If ``max_hess_evaluations == 0``, the
      direction chosen is ``- gradient``. If ``max_hess_evaluations < 0``,
      ``max_hess_evaluations`` is set to ``max(1,min(50,n/2))`` where n is the
      length of the parameter vector. This is also the default.
    - **max_step_for_line_search** (float): Maximum step for the line search.
      It may be increased during the optimization. If too small, it will be set
      to 10.0. By default we use scipy's default.
    - **line_search_severity** (float): Severity of the line search. If < 0 or > 1,
      set to 0.25. optimagic defaults to scipy's default.
    - **finitie_difference_precision** (float): Relative precision for finite difference
      calculations. If <= machine_precision, set to sqrt(machine_precision).
      optimagic defaults to scipy's default.
    - **criterion_rescale_factor** (float): Scaling factor (in log10) used to trigger
      criterion rescaling. If 0, rescale at each iteration. If a large value,
      never rescale. If < 0, rescale is set to 1.3. optimagic defaults to scipy's
      default.
    - **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.


```

```{eval-rst}
.. dropdown::  scipy_trust_constr

    .. code-block::

        "scipy_trust_constr"

    Minimize a scalar function of one or more variables subject to constraints.

    .. warning::
        In our benchmark using a quadratic objective function, the trust_constr
        algorithm did not find the optimum very precisely (less than 4 decimal places).
        If you require high precision, you should refine an optimum found with trust_constr
        with another local optimizer.

    .. note::
        The handling of general nonlinear constraints is not yet supported by optimagic.

    It switches between two implementations depending on the problem definition.
    It is the most versatile constrained minimization algorithm
    implemented in SciPy and the most appropriate for large-scale problems.
    For equality constrained problems it is an implementation of Byrd-Omojokun
    Trust-Region SQP method described in :cite:`Lalee1998` and in :cite:`Conn2000`,
    p. 549. When inequality constraints  are imposed as well, it switches to the
    trust-region interior point method described in :cite:`Byrd1999`.
    This interior point algorithm in turn, solves inequality constraints by
    introducing slack variables and solving a sequence of equality-constrained
    barrier problems for progressively smaller values of the barrier parameter.
    The previously described equality constrained SQP method is
    used to solve the subproblems with increasing levels of accuracy
    as the iterate gets closer to a solution.

    It approximates the Hessian using the Broyden-Fletcher-Goldfarb-Shanno (BFGS)
    Hessian update strategy.

    - **convergence.gtol_abs** (float): Tolerance for termination
      by the norm of the Lagrangian gradient. The algorithm will terminate
      when both the infinity norm (i.e., max abs value) of the Lagrangian
      gradient and the constraint violation are smaller than the
      convergence.gtol_abs.
      For this algorithm we use scipy's gradient tolerance for trust_constr.
      This smaller tolerance is needed for the sum of squares tests to pass.
    - **stopping.maxiter** (int): If the maximum number of iterations is reached,
      the optimization stops, but we do not count this as convergence.
    - **convergence.xtol_rel** (float): Tolerance for termination by
      the change of the independent variable. The algorithm will terminate when
      the radius of the trust region used in the algorithm is smaller than the
      convergence.xtol_rel.
    - **trustregion.initial_radius** (float): Initial value of the trust region radius.
      The trust radius gives the maximum distance between solution points in
      consecutive iterations. It reflects the trust the algorithm puts in the
      local approximation of the optimization problem. For an accurate local
      approximation the trust-region should be large and for an approximation
      valid only close to the current point it should be a small one.
      The trust radius is automatically updated throughout the optimization
      process, with ``trustregion_initial_radius`` being its initial value.
    - **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.

```

```{eval-rst}
.. dropdown::  scipy_ls_dogbox

    .. code-block::

        "scipy_ls_dogbox"

    Minimize a nonlinear least squares problem using a rectangular trust region method.

    Typical use case is small problems with bounds. Not recommended for problems with
    rank-deficient Jacobian.

    The algorithm supports the following options:

    - **convergence.ftol_rel** (float): Stop when the relative
      improvement between two iterations is below this.
    - **convergence.gtol_rel** (float): Stop when the gradient,
      divided by the absolute value of the criterion function is smaller than this.
    - **stopping.maxfun** (int): If the maximum number of function
      evaluation is reached, the optimization stops but we do not count this as
      convergence.
    - **tr_solver** (str): Method for solving trust-region subproblems, relevant only
      for 'trf' and 'dogbox' methods.

      - 'exact' is suitable for not very large problems with dense
        Jacobian matrices. The computational complexity per iteration is
        comparable to a singular value decomposition of the Jacobian
        matrix.
      - 'lsmr' is suitable for problems with sparse and large Jacobian
        matrices. It uses the iterative procedure
        `scipy.sparse.linalg.lsmr` for finding a solution of a linear
        least-squares problem and only requires matrix-vector product
        evaluations.
        If None (default), the solver is chosen based on the type of Jacobian
        returned on the first iteration.
    - **tr_solver_options** (dict):  Keyword options passed to trust-region solver.

      - ``tr_solver='exact'``: `tr_options` are ignored.
      - ``tr_solver='lsmr'``: options for `scipy.sparse.linalg.lsmr`.

```
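
Least-squares optimizers take a vector of residuals rather than a scalar criterion. A
minimal sketch of the wrapped scipy routine, using an illustrative problem where the
bound on the second parameter is active:

```python
import numpy as np
from scipy.optimize import least_squares

# Residuals of a tiny least-squares problem. Without bounds the optimum
# would be x = [2.0, -1.0]; the bounds keep x[1] non-negative.
def residuals(x):
    return np.array([x[0] - 2.0, x[1] + 1.0])

res = least_squares(
    residuals,
    x0=np.array([0.0, 1.0]),
    method="dogbox",
    bounds=(np.array([-5.0, 0.0]), np.array([5.0, 5.0])),
)
```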

```{eval-rst}
.. dropdown::  scipy_ls_trf

    .. code-block::

        "scipy_ls_trf"

    Minimize a nonlinear least squares problem using a trust-region reflective method.

    Trust Region Reflective algorithm, particularly suitable for large sparse problems
    with bounds. Generally robust method.

    The algorithm supports the following options:

    - **convergence.ftol_rel** (float): Stop when the relative
      improvement between two iterations is below this.
    - **convergence.gtol_rel** (float): Stop when the gradient,
      divided by the absolute value of the criterion function is smaller than this.
    - **stopping.maxfun** (int): If the maximum number of function
      evaluation is reached, the optimization stops but we do not count this as
      convergence.
    - **tr_solver** (str): Method for solving trust-region subproblems, relevant only
      for 'trf' and 'dogbox' methods.

      - 'exact' is suitable for not very large problems with dense
        Jacobian matrices. The computational complexity per iteration is
        comparable to a singular value decomposition of the Jacobian
        matrix.
      - 'lsmr' is suitable for problems with sparse and large Jacobian
        matrices. It uses the iterative procedure
        `scipy.sparse.linalg.lsmr` for finding a solution of a linear
        least-squares problem and only requires matrix-vector product
        evaluations.
        If None (default), the solver is chosen based on the type of Jacobian
        returned on the first iteration.
    - **tr_solver_options** (dict):  Keyword options passed to trust-region solver.

      - ``tr_solver='exact'``: `tr_options` are ignored.
      - ``tr_solver='lsmr'``: options for `scipy.sparse.linalg.lsmr`.

```

```{eval-rst}
.. dropdown::  scipy_ls_lm

    .. code-block::

        "scipy_ls_lm"

    Minimize a nonlinear least squares problem using a Levenberg-Marquardt method.

    Does not handle bounds and sparse Jacobians. Usually the most efficient method for
    small unconstrained problems.

    The algorithm supports the following options:

    - **convergence.ftol_rel** (float): Stop when the relative
      improvement between two iterations is below this.
    - **convergence.gtol_rel** (float): Stop when the gradient,
      divided by the absolute value of the criterion function is smaller than this.
    - **stopping.maxfun** (int): If the maximum number of function
      evaluation is reached, the optimization stops but we do not count this as
      convergence.
    - **tr_solver** (str): Method for solving trust-region subproblems, relevant only
      for 'trf' and 'dogbox' methods.

      - 'exact' is suitable for not very large problems with dense
        Jacobian matrices. The computational complexity per iteration is
        comparable to a singular value decomposition of the Jacobian
        matrix.
      - 'lsmr' is suitable for problems with sparse and large Jacobian
        matrices. It uses the iterative procedure
        `scipy.sparse.linalg.lsmr` for finding a solution of a linear
        least-squares problem and only requires matrix-vector product
        evaluations.
        If None (default), the solver is chosen based on the type of Jacobian
        returned on the first iteration.
    - **tr_solver_options** (dict):  Keyword options passed to trust-region solver.

      - ``tr_solver='exact'``: `tr_options` are ignored.
      - ``tr_solver='lsmr'``: options for `scipy.sparse.linalg.lsmr`.

```

```{eval-rst}
.. dropdown::  scipy_basinhopping

    .. code-block::

        "scipy_basinhopping"

    Find the global minimum of a function using the basin-hopping algorithm.

    Basin-hopping is a two-phase method that combines a global stepping algorithm with
    local minimization at each step. Designed to mimic the natural process of energy
    minimization of clusters of atoms, it works well for similar problems with
    “funnel-like, but rugged” energy landscapes.

    This is mainly supported for completeness. Consider optimagic's built in multistart
    optimization for a similar approach that can run multiple optimizations in parallel,
    supports all local algorithms in optimagic (as opposed to just those from scipy)
    and allows for a better visualization of the multistart history.

    When provided the derivative is passed to the local minimization method.

    The algorithm supports the following options:

    - **local_algorithm** (str/callable): Any scipy local minimizer. Valid options are
      "Nelder-Mead", "Powell", "CG", "BFGS", "Newton-CG", "L-BFGS-B", "TNC", "COBYLA",
      "SLSQP", "trust-constr", "dogleg", "trust-ncg", "trust-exact", "trust-krylov",
      or a custom function for local minimization. Default is "L-BFGS-B".
    - **n_local_optimizations**: (int) The number of local optimizations. Default is
      100 as in scipy's default.
    - **temperature**: (float) Controls the randomness in the optimization process.
      The higher the temperature, the larger the jumps in function value that will be
      accepted. Default is 1.0 as in scipy's default.
    - **stepsize**: (float) Maximum step size. Default is 0.5 as in scipy's default.
    - **local_algo_options**: (dict) Additional keyword arguments for the local
      minimizer. Check the documentation of the local scipy algorithms for details on
      what is supported.
    - **take_step**: (callable) Replaces the default step-taking routine. Default is
      None as in scipy's default.
    - **accept_test**: (callable) Defines a test to judge the acceptance of steps.
      Default is None as in scipy's default.
    - **interval**: (int) Determines how often the step size is updated. Default is 50
      as in scipy's default.
    - **convergence.n_unchanged_iterations**: (int) Number of iterations for which the
      global minimum estimate must stay the same before the algorithm stops. Default is
      None as in scipy's default.
    - **seed**: (None, int, numpy.random.Generator, numpy.random.RandomState) Default
      is None as in scipy's default.
    - **target_accept_rate**: (float) Adjusts the step size. Default is 0.5 as in scipy's default.
    - **stepwise_factor**: (float) Step size multiplier upon each step. Lies between (0,1), default is 0.9 as in scipy's default.

```
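
A sketch of the wrapped scipy routine on an illustrative one-dimensional function with
several local minima (adapted from scipy's own basinhopping example; the global minimum
is near x = -0.195). The fixed seed makes the stochastic search reproducible.

```python
import numpy as np
from scipy.optimize import basinhopping

# 1d test function with several local minima.
def f(x):
    return np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0]

res = basinhopping(
    f,
    x0=[1.0],
    niter=200,
    stepsize=0.5,
    seed=1234,
    minimizer_kwargs={"method": "L-BFGS-B"},
)
```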

```{eval-rst}
.. dropdown::  scipy_brute

    .. code-block::

        "scipy_brute"

    Find the global minimum of a function over a given range by brute force.

    Brute force evaluates the criterion at each grid point, which is why it is better
    suited for problems with very few parameters.

    The start values are not actually used because the grid is only defined by bounds.
    It is still necessary for optimagic to infer the number and format of the
    parameters.

    Due to the parallelization, this algorithm cannot collect a history of parameters
    and criterion evaluations.

    The algorithm supports the following options:

    - **n_grid_points** (int):  the number of grid points to use for the brute force
      search. Default is 20 as in scipy.
    - **polishing_function** (callable):  Function to seek a more precise minimum near
      brute force's best grid point. It takes brute force's result as its first
      positional argument. Default is None, i.e. no polishing.
    - **n_cores** (int): The number of cores on which the function is evaluated in
      parallel. Default 1.
    - **batch_evaluator** (str or callable). An optimagic batch evaluator. Default
      'joblib'.

```
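
A sketch of the wrapped scipy routine: 20 grid points per dimension between the bounds
(matching the `n_grid_points` default above), and `finish=None` disables polishing, like
the `polishing_function=None` default. The objective is illustrative.

```python
import numpy as np
from scipy.optimize import brute

def f(x):
    return (x[0] - 0.7) ** 2 + (x[1] + 0.3) ** 2

# Evaluate f on a 20x20 grid over the bounds and return the best grid point.
best = brute(f, ranges=((-2, 2), (-2, 2)), Ns=20, finish=None)
```

Note that the returned point is only as accurate as the grid spacing; with 20 points
over [-2, 2] each coordinate is off by up to roughly 0.1.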

```{eval-rst}
.. dropdown::  scipy_differential_evolution

    .. code-block::

        "scipy_differential_evolution"

    Find the global minimum of a multivariate function using differential evolution (DE). DE is a gradient-free method.

    Due to optimagic's general parameter format the integrality and vectorized
    arguments are not supported.

    The algorithm supports the following options:

    - **strategy** (str): Measure of quality to improve a candidate solution. Can be one
      of the following keywords (default is 'best1bin'):

      - 'best1bin'
      - 'best1exp'
      - 'rand1exp'
      - 'randtobest1exp'
      - 'currenttobest1exp'
      - 'best2exp'
      - 'rand2exp'
      - 'randtobest1bin'
      - 'currenttobest1bin'
      - 'best2bin'
      - 'rand2bin'
      - 'rand1bin'

    - **stopping.maxiter** (int): The maximum number of criterion evaluations
      without polishing is (stopping.maxiter + 1) * population_size * number of
      parameters.
    - **population_size_multiplier** (int): A multiplier setting the population size.
      The number of individuals in the population is population_size * number of
      parameters. The default is 15.
    - **convergence.ftol_rel** (float): Default 0.01.
    - **mutation_constant** (float/tuple): The differential weight denoted by F in
      literature. Should be within 0 and 2.  The tuple form is used to specify
      (min, max) dithering which can help speed convergence.  Default is (0.5, 1).
    - **recombination_constant** (float): The crossover probability or CR in the
      literature determines the probability that two solution vectors will be combined
      to produce a new solution vector. Should be between 0 and 1. The default is 0.7.
    - **seed** (int): DE is stochastic. Define a seed for reproducibility.
    - **polish** (bool): Uses scipy's L-BFGS-B for unconstrained problems and
      trust-constr for constrained problems to slightly improve the minimization.
      Default is True.
    - **sampling_method** (str/np.array): Specify the sampling method for the initial
      population. It can be one of the following options:

      - "latinhypercube"
      - "sobol"
      - "halton"
      - "random"
      - an array specifying the initial population of shape (total population size,
        number of parameters). The initial population is clipped to bounds before use.

      Default is 'latinhypercube'.

    - **convergence.ftol_abs** (float): Absolute difference in the criterion value
      that is tolerated to declare convergence. Default is optimagic's second-best
      absolute criterion tolerance.
    - **n_cores** (int): The number of cores on which the function is evaluated in
      parallel. Default 1.
    - **batch_evaluator** (str or callable). An optimagic batch evaluator. Default
      'joblib'.

```
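
A sketch of the wrapped scipy routine with the defaults described above (strategy
'best1bin', dithered mutation (0.5, 1), recombination 0.7, polishing on). The objective
and seed are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

def f(x):
    return np.sum(x**2) + 1.0  # minimum value is 1.0 at the origin

res = differential_evolution(
    f,
    bounds=[(-5, 5), (-5, 5)],
    strategy="best1bin",
    mutation=(0.5, 1),      # (min, max) dithering
    recombination=0.7,      # crossover probability CR
    seed=42,                # DE is stochastic; fix the seed for reproducibility
    polish=True,            # refine the best member with L-BFGS-B
)
```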

```{eval-rst}
.. dropdown::  scipy_shgo

    .. code-block::

        "scipy_shgo"

    Find the global minimum of a function using simplicial homology global optimization.

    The algorithm supports the following options:

    - **local_algorithm** (str): The local optimization algorithm to be used. Only
      COBYLA and SLSQP support constraints. Valid options are
      "Nelder-Mead", "Powell", "CG", "BFGS", "Newton-CG", "L-BFGS-B", "TNC", "COBYLA",
      "SLSQP", "trust-constr", "dogleg", "trust-ncg", "trust-exact", "trust-krylov",
      or a custom function for local minimization. Default is "L-BFGS-B".
    - **local_algo_options**: (dict) Additional keyword arguments for the local
      minimizer. Check the documentation of the local scipy algorithms for details on
      what is supported.
    - **n_sampling_points** (int): Specify the number of sampling points to construct
      the simplicial complex.
    - **n_simplex_iterations** (int): Number of iterations to construct the simplicial
      complex. Default is 1 as in scipy.
    - **sampling_method** (str/callable): The method to use for sampling the search
      space. Default 'simplicial'.
    - **max_sampling_evaluations** (int): The maximum number of evaluations of the
      criterion function in the sampling phase.
    - **convergence.minimum_criterion_value** (float): Specify the global minimum when
      it is known. Default is -np.inf. For maximization problems, flip the sign.
    - **convergence.minimum_criterion_tolerance** (float): Specify the relative error
      between the current best minimum and the supplied global criterion_minimum
      allowed. Default is scipy's default, 1e-4.
    - **stopping.maxiter** (int): The maximum number of iterations.
    - **stopping.maxfun** (int): The maximum number of criterion
      evaluations.
    - **stopping.max_processing_time** (int): The maximum time allowed for the
      optimization.
    - **minimum_homology_group_rank_differential** (int): The minimum difference in the
      rank of the homology group between iterations.
    - **symmetry** (bool): Specify whether the criterion contains symmetric variables.
    - **minimize_every_iteration** (bool): Specify whether the global sampling points
      are passed to the local algorithm in every iteration.
    - **max_local_minimizations_per_iteration** (int): The maximum number of local
      optimizations per iteration. Default is False, i.e. no limit.
    - **infinity_constraints** (bool): Specify whether to save the sampling points
      outside the feasible domain. Default is True.

```
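
The options above map to the arguments of `scipy.optimize.shgo`; a minimal direct SciPy sketch (the test function and bounds are our own):

```python
import numpy as np
from scipy.optimize import shgo


def shifted_sphere(x):
    # convex test function with its minimum at (0.5, 0.5)
    return np.sum((x - 0.5) ** 2)


res = shgo(
    shifted_sphere,
    bounds=[(-2, 2), (-2, 2)],
    sampling_method="simplicial",  # default; "sobol" and "halton" also work
    minimizer_kwargs={"method": "L-BFGS-B"},  # local algorithm and its options
)
```

`res.x` collects the best local minimum found; `res.xl` and `res.funl` list all local minima.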

```{eval-rst}
.. dropdown::  scipy_dual_annealing

    .. code-block::

        "scipy_dual_annealing"

    Find the global minimum of a function using dual annealing for continuous variables.

    The algorithm supports the following options:

    - **stopping.maxiter** (int): Specify the maximum number of global search
      iterations.
    - **local_algorithm** (str): The local optimization algorithm to be used. Valid
      options are: "Nelder-Mead", "Powell", "CG", "BFGS", "Newton-CG", "L-BFGS-B",
      "TNC", "COBYLA", "SLSQP", "trust-constr", "dogleg", "trust-ncg", "trust-exact",
      "trust-krylov". Default is "L-BFGS-B".
    - **local_algo_options** (dict): Additional keyword arguments for the local
      minimizer. Check the documentation of the local scipy algorithms for details on
      what is supported.
    - **initial_temperature** (float): The temperature the algorithm starts with.
      Higher values lead to a wider search space. The range is (0.01, 5.e4] and the
      default is 5230.0.
    - **restart_temperature_ratio** (float): Reannealing starts when the temperature
      has decreased to initial_temperature * restart_temperature_ratio. Default is
      2e-05.
    - **visit** (float): Specify the thickness of the visiting distribution's tails.
      The range is (1, 3] and the default is scipy's default, 2.62.
    - **accept** (float): Controls the probability of acceptance. The range is
      (-1e4, -5] and the default is scipy's default, -5.0. Smaller values lead to a
      lower acceptance probability.
    - **stopping.maxfun** (int): soft limit for the number of criterion evaluations.
    - **seed** (int, None or RNG): Dual annealing is a stochastic process. Seed or
      random number generator. Default None.
    - **no_local_search** (bool): Specify whether to apply traditional Generalized
      Simulated Annealing with no local search. Default is False.

```
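
The options above roughly map to the arguments of `scipy.optimize.dual_annealing`; a minimal direct SciPy sketch (test function and seed are illustrative):

```python
import numpy as np
from scipy.optimize import dual_annealing


def sphere(x):
    # convex test function with its minimum at the origin
    return np.sum(x**2)


res = dual_annealing(
    sphere,
    bounds=[(-10, 10)] * 2,
    initial_temp=5230.0,  # wrapped as initial_temperature
    restart_temp_ratio=2e-05,  # wrapped as restart_temperature_ratio
    visit=2.62,
    accept=-5.0,
    seed=42,  # dual annealing is stochastic, so seed for reproducibility
)
```

Because the local search (L-BFGS-B by default) polishes the best point, `res.fun` is essentially at machine precision here.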

```{eval-rst}
.. dropdown::  scipy_direct

    .. code-block::

        "scipy_direct"

    Find the global minimum of a function using the dividing rectangles (DIRECT)
    method. It is not necessary to provide an initial guess.

    The algorithm supports the following options:

    - **eps** (float): The minimum difference in criterion values between the current
      best hyperrectangle and the next potentially best hyperrectangle to be divided,
      determining the trade-off between global and local search. Default is 1e-6,
      differing from scipy's default of 1e-4.
    - **stopping.maxfun** (int/None): Maximum number of criterion evaluations allowed.
      Default is None, which automatically caps the number of evaluations at
      1000 * number of dimensions.
    - **stopping.maxiter** (int): Maximum number of iterations allowed.
    - **locally_biased** (bool): Whether to use the locally biased variant of the
      algorithm, DIRECT_L. Default is True.
    - **convergence.minimum_criterion_value** (float): Specify the global minimum when
      it is known. Default is minus infinity. For maximization problems, flip the
      sign.
    - **convergence.minimum_criterion_tolerance** (float): Specify the allowed
      relative error between the current best minimum and the supplied global
      minimum. Default is scipy's default, 1e-4.
    - **volume_hyperrectangle_tolerance** (float): The smallest allowed volume of the
      hyperrectangle containing the lowest criterion value. Range is (0, 1). Default
      is 1e-16.
    - **length_hyperrectangle_tolerance** (float): Depending on locally_biased, refers
      to the normalized side length (True) or diagonal length (False) of the
      hyperrectangle containing the lowest criterion value. Range is (0, 1). Default
      is scipy's default, 1e-6.

```
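
The options above roughly map to the arguments of `scipy.optimize.direct` (available in SciPy 1.9+); a minimal direct SciPy sketch (test function and bounds are our own):

```python
import numpy as np
from scipy.optimize import direct  # requires SciPy >= 1.9


def shifted_sphere(x):
    # convex test function with its minimum at (0.5, 0.5)
    return np.sum((x - 0.5) ** 2)


res = direct(
    shifted_sphere,
    bounds=[(-2.0, 3.0), (-2.0, 3.0)],
    eps=1e-4,  # scipy's default; the wrapper defaults to 1e-6
    locally_biased=True,  # use the DIRECT_L variant
)
```

No initial guess is needed: DIRECT starts by evaluating the center of the bounding box and recursively divides promising hyperrectangles.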

(own-algorithms)=

## Own optimizers

We implement a few algorithms from scratch. They are currently considered experimental.

```{eval-rst}
.. dropdown:: bhhh

    .. code-block::

        "bhhh"

    Minimize a likelihood function using the BHHH algorithm.

    BHHH (:cite:`Berndt1974`) can - and should ONLY - be used for minimizing
    (or maximizing) a likelihood. It is similar to the Newton-Raphson
    algorithm, but replaces the Hessian matrix with the outer product of the
    gradient. This approximation is based on the information matrix equality
    (:cite:`Halbert1982`) and is thus only valid when minimizing (or maximizing)
    a likelihood.

    The criterion function :func:`func` should return a dictionary with
    at least the entry ``{"contributions": array_or_pytree}`` where ``array_or_pytree``
    contains the likelihood contributions of each individual.

    bhhh supports the following options:

    - **convergence.gtol_abs** (float): Stopping criterion for the
      gradient tolerance. Default is 1e-8.
    - **stopping.maxiter** (int): Maximum number of iterations.
      If reached, terminate. Default is 200.

```
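
To illustrate the idea (this is our own NumPy sketch, not optimagic's internal implementation), here is one way BHHH estimates the mean of a normal distribution with known unit variance; all names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=1.0, size=500)


def score_contributions(mu):
    # per-observation gradient of the log-likelihood (normal, sigma = 1)
    return (data - mu).reshape(-1, 1)


mu = 0.0
for _ in range(25):
    scores = score_contributions(mu)  # shape (n_obs, n_params)
    gradient = scores.sum(axis=0)
    hessian_approx = scores.T @ scores  # outer-product-of-gradient approximation
    step = np.linalg.solve(hessian_approx, gradient)
    mu = mu + step.item()  # ascent step on the log-likelihood

# mu now approximates the maximum likelihood estimate, i.e. the sample mean
```

The outer-product approximation is cheap because it reuses the individual score contributions; this is exactly why BHHH only applies to likelihood problems.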

```{eval-rst}
.. dropdown:: neldermead_parallel

    .. code-block::

        "neldermead_parallel"

    Minimize a function using the neldermead_parallel algorithm.

    This is a parallel Nelder-Mead algorithm following Lee D., Wiswall M., A parallel
    implementation of the simplex function minimization routine,
    Computational Economics, 2007.

    The algorithm was implemented by Jacek Barszczewski

    The algorithm supports the following options:

    - **init_simplex_method** (string or callable): Name of the method used to create
      the initial simplex, or a callable which takes the initial parameter vector as
      its argument and returns the initial simplex as a (j + 1) x j array, where j is
      the length of x. The default is "gao_han".
    - **n_cores** (int): Degree of parallelization. The default is 1
      (no parallelization).

    - **adaptive** (bool): Adjust parameters of Nelder-Mead algorithm to account
      for simplex size. The default is True.

    - **stopping.maxiter** (int): Maximum number of algorithm iterations.
      The default is STOPPING_MAX_ITERATIONS.

    - **convergence.ftol_abs** (float): maximal difference between the
      function values evaluated at the simplex points.
      The default is CONVERGENCE_SECOND_BEST_ABSOLUTE_CRITERION_TOLERANCE.

    - **convergence.xtol_abs** (float): maximal distance between points
      in the simplex. The default is CONVERGENCE_SECOND_BEST_ABSOLUTE_PARAMS_TOLERANCE.

    - **batch_evaluator** (string or callable): See :ref:`batch_evaluators` for
      details. Default "joblib".

```
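
A callable passed as `init_simplex_method` has the signature described above; a hypothetical custom rule (names are our own) could look like:

```python
import numpy as np


def unit_step_simplex(x):
    # return a (j + 1) x j simplex: the start point plus one unit step
    # along each coordinate direction
    x = np.atleast_1d(np.asarray(x, dtype=float))
    j = len(x)
    simplex = np.tile(x, (j + 1, 1))
    simplex[1:] += np.eye(j)
    return simplex
```

For j parameters this returns j + 1 vertices, the minimum a simplex needs to span the parameter space.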

```{eval-rst}
.. dropdown:: pounders

    .. code-block::

        "pounders"

    Minimize a function using the POUNDERS algorithm.

    POUNDERs (:cite:`Benson2017`, :cite:`Wild2015`, `GitHub repository
    <https://github.com/erdc/petsc4py>`_)

    can be a useful tool for economists who estimate structural models using
    indirect inference, because unlike commonly used algorithms such as Nelder-Mead,
    POUNDERs is tailored for minimizing a non-linear sum of squares objective function,
    and therefore may require fewer iterations to arrive at a local optimum than
    Nelder-Mead.

    Scaling the problem is necessary such that bounds correspond to the unit hypercube
    :math:`[0, 1]^n`. For unconstrained problems, scale each parameter such that unit
    changes in parameters result in similar order-of-magnitude changes in the criterion
    value(s).

    pounders supports the following options:


    - **convergence.gtol_abs**: Convergence tolerance for the
      absolute gradient norm. Stop if norm of the gradient is less than this.
      Default is 1e-8.
    - **convergence.gtol_rel**: Convergence tolerance for the
      relative gradient norm. Stop if norm of the gradient relative to the criterion
      value is less than this. Default is 1e-8.
    - **convergence.gtol_scaled**: Convergence tolerance for the
      scaled gradient norm. Stop if norm of the gradient divided by norm of the
      gradient at the initial parameters is less than this.
      Disabled, i.e. set to False, by default.
    - **max_interpolation_points** (int): Maximum number of interpolation points.
      Default is `2 * n + 1`, where `n` is the length of the parameter vector.
    - **stopping.maxiter** (int): Maximum number of iterations.
      If reached, terminate. Default is 2000.
    - **trustregion_initial_radius** (float): Delta, initial trust-region radius.
      0.1 by default.
    - **trustregion_minimal_radius** (float): Minimal trust-region radius.
      1e-6 by default.
    - **trustregion_maximal_radius** (float): Maximal trust-region radius.
      1e6 by default.
    - **trustregion_shrinking_factor_not_successful** (float): Shrinking factor of
      the trust-region radius in case the solution vector of the subproblem
      is not accepted, but the model is fully linear (i.e. "valid").
      Default is 0.5.
    - **trustregion_expansion_factor_successful** (float): Expansion factor of
      the trust-region radius in case the solution vector of the subproblem
      is accepted. Default is 2.
    - **theta1** (float): Threshold for adding the current x candidate to the
      model. Function argument to find_affine_points(). Default is 1e-5.
    - **theta2** (float): Threshold for adding the current x candidate to the model.
      Argument to get_interpolation_matrices_residual_model(). Default is 1e-4.
    - **trustregion_threshold_successful** (float): First threshold for accepting the
      solution vector of the subproblem as the best x candidate. Default is 0.
    - **trustregion_threshold_very_successful** (float): Second threshold for accepting
      the solution vector of the subproblem as the best x candidate. Default is 0.1.
    - **c1** (float): Threshold for accepting the norm of our current x candidate.
      Function argument to find_affine_points() for the case where the input array
      *model_improving_points* is zero.
    - **c2** (int): Threshold for accepting the norm of our current x candidate.
      Equal to 10 by default. Argument to *find_affine_points()* in case
      the input array *model_improving_points* is not zero.
    - **trustregion_subproblem_solver** (str): Solver to use for the trust-region
      subproblem. Two internal solvers are supported:

      - "bntr": Bounded Newton Trust-Region (default, supports bound constraints)
      - "gqtpar": does not support bound constraints

    - **trustregion_subsolver_options** (dict): Options dictionary containing
      the stopping criteria for the subproblem. It takes different keys depending
      on the subproblem solver used, except for the stopping criterion "maxiter",
      which is always included.

      If the subsolver "bntr" is used, the dictionary also contains the tolerance
      levels "gtol_abs", "gtol_rel", and "gtol_scaled". Moreover, a
      "conjugate_gradient_method" can be provided. Available conjugate gradient
      methods are:

      - "cg": in this case, two additional stopping criteria are "gtol_abs_cg" and
        "gtol_rel_cg"
      - "steihaug-toint"
      - "trsbox" (default)

      If the subsolver "gqtpar" is employed, the two stopping criteria are
      "k_easy" and "k_hard".

      None of the dictionary keys need to be specified by default, but can be.
    - **batch_evaluator** (str or callable): Name of a pre-implemented batch evaluator
      (currently "joblib" and "pathos_mp") or callable with the same interface
      as the optimagic batch_evaluators. Default is "joblib".
    - **n_cores** (int): Number of processes used to parallelize the function
      evaluations. Default is 1.

```
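
Assuming the dotted `algo_options` convention used throughout this page, a subsolver configuration for pounders might be passed like this (a sketch with arbitrary values, not a recommended setting):

```python
# hypothetical algo_options for pounders; keys follow the option names above
algo_options = {
    "convergence.gtol_abs": 1e-8,
    "trustregion_initial_radius": 0.1,
    "trustregion_subproblem_solver": "bntr",
    "trustregion_subsolver_options": {
        "maxiter": 50,
        "conjugate_gradient_method": "trsbox",
    },
}
```

None of the subsolver keys are required; unspecified ones fall back to their defaults.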

(tao-algorithms)=

## Optimizers from the Toolkit for Advanced Optimization (TAO)

We wrap the pounders algorithm from the Toolkit for Advanced Optimization. To use it
you need to have [petsc4py](https://pypi.org/project/petsc4py/) installed.

```{eval-rst}
.. dropdown::  tao_pounders

    .. code-block::

        "tao_pounders"

    Minimize a function using the POUNDERs algorithm.

    POUNDERs (:cite:`Benson2017`, :cite:`Wild2015`, `GitHub repository
    <https://github.com/erdc/petsc4py>`_)

    can be a useful tool for economists who estimate structural models using
    indirect inference, because unlike commonly used algorithms such as Nelder-Mead,
    POUNDERs is tailored for minimizing a non-linear sum of squares objective function,
    and therefore may require fewer iterations to arrive at a local optimum than
    Nelder-Mead.

    Scaling the problem is necessary such that bounds correspond to the unit hypercube
    :math:`[0, 1]^n`. For unconstrained problems, scale each parameter such that unit
    changes in parameters result in similar order-of-magnitude changes in the criterion
    value(s).

    POUNDERs has several convergence criteria. Let :math:`X` be the current parameter
    vector, :math:`X_0` the initial parameter vector, :math:`g` the gradient, and
    :math:`f` the criterion function.

    ``absolute_gradient_tolerance`` stops the optimization if the norm of the gradient
    falls below :math:`\epsilon`.

    .. math::

        ||g(X)|| < \epsilon

    ``relative_gradient_tolerance`` stops the optimization if the norm of the gradient
    relative to the criterion value falls below :math:`\epsilon`.

    .. math::

        \frac{||g(X)||}{|f(X)|} < \epsilon

    ``scaled_gradient_tolerance`` stops the optimization if the norm of the gradient is
    lower than some fraction :math:`\epsilon` of the norm of the gradient at the
    initial parameters.

    .. math::

        \frac{||g(X)||}{||g(X_0)||} < \epsilon

    - **convergence.gtol_abs** (float): Stop if norm of gradient is less than this.
      If set to False the algorithm will not consider convergence.gtol_abs.
    - **convergence.gtol_rel** (float): Stop if relative norm of gradient is less
      than this. If set to False the algorithm will not consider
      convergence.gtol_rel.
    - **convergence.scaled_gradient_tolerance** (float): Stop if scaled norm of gradient is smaller
      than this. If set to False the algorithm will not consider
      convergence.scaled_gradient_tolerance.
    - **trustregion.initial_radius** (float): Initial value of the trust region radius.
      It must be :math:`> 0`.
    - **stopping.maxiter** (int): Alternative stopping criterion. If set, the routine
      will stop after the specified number of iterations or after the step size is
      sufficiently small. If this variable is set, the default criteria are all
      ignored.


```
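
The three gradient-based criteria above can be written down directly; a NumPy sketch (function and argument names are our own):

```python
import numpy as np


def pounders_converged(g_x, f_x, g_x0, gtol_abs=1e-8, gtol_rel=1e-8, gtol_scaled=1e-8):
    # implements ||g(X)|| < eps, ||g(X)|| / |f(X)| < eps and
    # ||g(X)|| / ||g(X_0)|| < eps from the formulas above
    norm_g = np.linalg.norm(g_x)
    absolute = norm_g < gtol_abs
    relative = norm_g / abs(f_x) < gtol_rel
    scaled = norm_g / np.linalg.norm(g_x0) < gtol_scaled
    return absolute or relative or scaled
```

Any single criterion triggering is enough to stop; setting a tolerance to False in the wrapper disables the corresponding check.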

(nag-algorithms)=

## Optimizers from the Numerical Algorithms Group (NAG)

We wrap two algorithms from the numerical algorithms group. To use them, you need to
install each of them separately:

- `pip install DFO-LS`
- `pip install Py-BOBYQA`

```{eval-rst}
.. dropdown::  nag_dfols

    *Note*: We recommend installing `DFO-LS` version 1.5.3 or higher. Versions 1.5.0
    and below also work, but versions `1.5.1` and `1.5.2` contain bugs that can lead
    to errors being raised.

    .. code-block::

        "nag_dfols"

    Minimize a function with least squares structure using DFO-LS.

    The DFO-LS algorithm :cite:`Cartis2018b` is designed to solve the nonlinear
    least-squares minimization problem (with optional bound constraints).
    Remember to cite :cite:`Cartis2018b` when using DFO-LS in addition to optimagic.

    .. math::

        \min_{x\in\mathbb{R}^n}  &\quad  f(x) := \sum_{i=1}^{m}r_{i}(x)^2 \\
        \text{s.t.} &\quad  \text{lower_bounds} \leq x \leq \text{upper_bounds}

    The :math:`r_{i}` are called root contributions in optimagic.

    DFO-LS is a derivative-free optimization algorithm, which means it does not require
    the user to provide the derivatives of f(x) or :math:`r_{i}(x)`, nor does it
    attempt to estimate them internally (by using finite differencing, for instance).

    There are two main situations when using a derivative-free algorithm
    (such as DFO-LS) is preferable to a derivative-based algorithm (which is the vast
    majority of least-squares solvers):

    1. If the residuals are noisy, then calculating or even estimating their derivatives
       may be impossible (or at least very inaccurate). By noisy, we mean that if we
       evaluate :math:`r_{i}(x)` multiple times at the same value of x, we get different
       results. This may happen when a Monte Carlo simulation is used, for instance.

    2. If the residuals are expensive to evaluate, then estimating derivatives
       (which requires n evaluations of each :math:`r_{i}(x)` for every point of
       interest x) may be prohibitively expensive. Derivative-free methods are designed
       to solve the problem with the fewest number of evaluations of the criterion as
       possible.

    To read the detailed documentation of the algorithm `click here
    <https://numericalalgorithmsgroup.github.io/dfols/>`_.

    There are four possible convergence criteria:

    1. when the lower trust region radius is shrunk below a minimum
       (``convergence.minimal_trustregion_radius_tolerance``).

    2. when the improvements of iterations become very small
       (``convergence.slow_progress``). This is very similar to
       ``relative_criterion_tolerance`` but ``convergence.slow_progress`` is more
       general allowing to specify not only the threshold for convergence but also
       a period over which the improvements must have been very small.

    3. when a sufficient reduction to the criterion value at the start parameters
       has been reached, i.e. when
       :math:`\frac{f(x)}{f(x_0)} \leq
       \text{convergence.ftol_scaled}`

    4. when all evaluations on the interpolation points fall within a scaled version of
       the noise level of the criterion function. This is only applicable if the
       criterion function is noisy. You can specify this criterion with
       ``convergence.noise_corrected_criterion_tolerance``.

    DFO-LS supports resetting the optimization and doing a fast start by
    starting with a smaller interpolation set and growing it dynamically.
    For more information see `their detailed documentation
    <https://numericalalgorithmsgroup.github.io/dfols/>`_ and :cite:`Cartis2018b`.

    - **clip_criterion_if_overflowing** (bool): see :ref:`algo_options`.
    - **convergence.minimal_trustregion_radius_tolerance** (float): see
      :ref:`algo_options`.
    - **convergence.noise_corrected_criterion_tolerance** (float): Stop when the
      evaluations on the set of interpolation points all fall within this factor
      of the noise level.
      The default is 1, i.e. when all evaluations are within the noise level.
      If you want to not use this criterion but still flag your
      criterion function as noisy, set this tolerance to 0.0.

      .. warning::
          Very small values, as used in most other tolerances, don't make sense here.

    - **convergence.ftol_scaled** (float):
      Terminate if a point is reached where the ratio of the criterion value
      to the criterion value at the start params is below this value, i.e. if
      :math:`f(x_k)/f(x_0) \leq
      \text{convergence.ftol_scaled}`. Note this is
      deactivated unless the lowest mathematically possible criterion value (0.0)
      is actually achieved.
    - **convergence.slow_progress** (dict): Arguments for converging when the
      evaluations over several iterations only yield small improvements on average.
      See :ref:`algo_options` for details.
    - **initial_directions** (str): see :ref:`algo_options`.
    - **interpolation_rounding_error** (float): see :ref:`algo_options`.
    - **noise_additive_level** (float): Used for determining the presence of noise
      and the convergence by all interpolation points being within noise level.
      0 means no additive noise. Only multiplicative or additive is supported.
    - **noise_multiplicative_level** (float): Used for determining the presence of noise
      and the convergence by all interpolation points being within noise level.
      0 means no multiplicative noise. Only multiplicative or additive is
      supported.
    - **noise_n_evals_per_point** (callable): How often to evaluate the criterion
      function at each point.
      This is only applicable for criterion functions with noise,
      when averaging multiple evaluations at the same point produces a more
      accurate value.
      The input parameters are the ``upper_trustregion_radius`` (:math:`\Delta`),
      the ``lower_trustregion_radius`` (:math:`\rho`),
      how many iterations the algorithm has been running for, ``n_iterations``
      and how many resets have been performed, ``n_resets``.
      The function must return an integer.
      Default is no averaging (i.e.
      ``noise_n_evals_per_point(...) = 1``).
    - **random_directions_orthogonal** (bool): see :ref:`algo_options`.
    - **stopping.maxfun** (int): see :ref:`algo_options`.
    - **threshold_for_safety_step** (float): see :ref:`algo_options`.
    - **trustregion.expansion_factor_successful** (float): see :ref:`algo_options`.
    - **trustregion.expansion_factor_very_successful** (float): see :ref:`algo_options`.
    - **trustregion.fast_start_options** (dict): see :ref:`algo_options`.
    - **trustregion.initial_radius** (float): Initial value of the trust region radius.
    - **trustregion.method_to_replace_extra_points** (str): If replacing extra points in
      successful iterations, whether to use geometry improving steps or the
      momentum method. Can be "geometry_improving" or "momentum".
    - **trustregion.n_extra_points_to_replace_successful** (int): The number of extra
      points (other than accepting the trust region step) to replace. Useful when
      ``trustregion.n_interpolation_points > len(x) + 1``.
    - **trustregion.n_interpolation_points** (int): The number of interpolation points to
      use. The default is :code:`len(x) + 1`. If using resets, this is the
      number of points to use in the first run of the solver, before any resets.
    - **trustregion.precondition_interpolation** (bool): see :ref:`algo_options`.
    - **trustregion.shrinking_factor_not_successful** (float): see :ref:`algo_options`.
    - **trustregion.shrinking_factor_lower_radius** (float): see :ref:`algo_options`.
    - **trustregion.shrinking_factor_upper_radius** (float): see :ref:`algo_options`.
    - **trustregion.threshold_successful** (float): Share of the predicted improvement
      that has to be achieved for a trust region iteration to count as successful.
    - **trustregion.threshold_very_successful** (float): Share of the predicted
      improvement that has to be achieved for a trust region iteration to count
      as very successful.

```
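
A `noise_n_evals_per_point` callable must accept the four arguments described above and return an integer; a hypothetical averaging rule (entirely our own) could be:

```python
def noise_n_evals_per_point(
    upper_trustregion_radius, lower_trustregion_radius, n_iterations, n_resets
):
    # average over more evaluations after each reset, capped at 4;
    # the radii and iteration count are available for more refined rules
    return min(1 + n_resets, 4)
```

Returning 1 everywhere reproduces the default of no averaging.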

```{eval-rst}
.. dropdown::  nag_pybobyqa

    .. code-block::

        "nag_pybobyqa"

    Minimize a function using the BOBYQA algorithm.

    BOBYQA (:cite:`Powell2009`, :cite:`Cartis2018`, :cite:`Cartis2018a`) is a
    derivative-free trust-region method. It is designed to solve nonlinear local
    minimization problems.

    Remember to cite :cite:`Powell2009` and :cite:`Cartis2018` when using pybobyqa in
    addition to optimagic. If you take advantage of the ``seek_global_optimum`` option,
    cite :cite:`Cartis2018a` additionally.

    There are two main situations when using a derivative-free algorithm like BOBYQA
    is preferable to derivative-based algorithms:

    1. The criterion function is not deterministic, i.e. if we evaluate the criterion
       function multiple times at the same parameter vector we get different results.

    2. The criterion function is very expensive to evaluate and only finite differences
       are available to calculate its derivative.

    The detailed documentation of the algorithm can be found `here
    <https://numericalalgorithmsgroup.github.io/pybobyqa/>`_.

    There are four possible convergence criteria:

    1. when the trust region radius is shrunk below a minimum. This is
       approximately equivalent to an absolute parameter tolerance.

    2. when the criterion value falls below an absolute, user-specified value,
       the optimization terminates successfully.

    3. when insufficient improvements have been gained over a certain number of
       iterations. The (absolute) threshold for what constitutes an insufficient
       improvement, how many iterations have to be insufficient and with which
       iteration to compare can all be specified by the user.

    4. when all evaluations on the interpolation points fall within a scaled version of
       the noise level of the criterion function. This is only applicable if the
       criterion function is noisy.

    - **clip_criterion_if_overflowing** (bool): see :ref:`algo_options`.
    - **convergence.criterion_value** (float): Terminate successfully if
      the criterion value falls below this threshold. This is deactivated
      (i.e. set to -inf) by default.
    - **convergence.minimal_trustregion_radius_tolerance** (float): Minimum allowed
      value of the trust region radius, which determines when a successful
      termination occurs.
    - **convergence.noise_corrected_criterion_tolerance** (float): Stop when the
      evaluations on the set of interpolation points all fall within this
      factor of the noise level.
      The default is 1, i.e. when all evaluations are within the noise level.
      If you want to not use this criterion but still flag your
      criterion function as noisy, set this tolerance to 0.0.

      .. warning::
          Very small values, as used in most other tolerances, don't make sense here.

    - **convergence.slow_progress** (dict): Arguments for converging when the
      evaluations over several iterations only yield small improvements on average.
      See :ref:`algo_options` for details.
    - **initial_directions** (str): see :ref:`algo_options`.
    - **interpolation_rounding_error** (float): see :ref:`algo_options`.
    - **noise_additive_level** (float): Used for determining the presence of noise
      and the convergence by all interpolation points being within noise level.
      0 means no additive noise. Only multiplicative or additive is supported.
    - **noise_multiplicative_level** (float): Used for determining the presence of noise
      and the convergence by all interpolation points being within noise level.
      0 means no multiplicative noise. Only multiplicative or additive is
      supported.
    - **noise_n_evals_per_point** (callable): How often to evaluate the criterion
      function at each point.
      This is only applicable for criterion functions with noise,
      when averaging multiple evaluations at the same point produces a more
      accurate value.
      The input parameters are the ``upper_trustregion_radius`` (``delta``),
      the ``lower_trustregion_radius`` (``rho``),
      how many iterations the algorithm has been running for, ``n_iterations``
      and how many resets have been performed, ``n_resets``.
      The function must return an integer.
      Default is no averaging (i.e. ``noise_n_evals_per_point(...) = 1``).
    - **random_directions_orthogonal** (bool): see :ref:`algo_options`.
    - **seek_global_optimum** (bool): whether to apply the heuristic to escape local
      minima presented in :cite:`Cartis2018a`. Only applies for noisy criterion
      functions.
    - **stopping.maxfun** (int): see :ref:`algo_options`.
    - **threshold_for_safety_step** (float): see :ref:`algo_options`.
    - **trustregion.expansion_factor_successful** (float): see :ref:`algo_options`.
    - **trustregion.expansion_factor_very_successful** (float): see :ref:`algo_options`.
    - **trustregion.initial_radius** (float): Initial value of the trust region radius.
    - **trustregion.minimum_change_hession_for_underdetermined_interpolation** (bool):
      Whether to solve the underdetermined quadratic interpolation problem by
      minimizing the Frobenius norm of the Hessian, or change in Hessian.
    - **trustregion.n_interpolation_points** (int): The number of interpolation points to
      use. With $n=len(x)$ the default is $2n+1$ if the criterion is not noisy.
      Otherwise, it is set to $(n+1)(n+2)/2$.

      Larger values are particularly useful for noisy problems.
      Py-BOBYQA requires

      .. math::
          n + 1 \leq \text{trustregion.n_interpolation_points} \leq (n+1)(n+2)/2.
    - **trustregion.precondition_interpolation** (bool): see :ref:`algo_options`.
    - **trustregion.reset_options** (dict): Options for resetting the optimization,
      see :ref:`algo_options` for details.
    - **trustregion.shrinking_factor_not_successful** (float): see :ref:`algo_options`.
    - **trustregion.shrinking_factor_upper_radius** (float): see :ref:`algo_options`.
    - **trustregion.shrinking_factor_lower_radius** (float): see :ref:`algo_options`.
    - **trustregion.threshold_successful** (float): see :ref:`algo_options`.
    - **trustregion.threshold_very_successful** (float): see :ref:`algo_options`.



```

(pygmo-algorithms)=

## PYGMO2 Optimizers

Please cite {cite}`Biscani2020` in addition to optimagic when using pygmo. optimagic
supports the following [pygmo2](https://esa.github.io/pygmo2) optimizers.

```{eval-rst}
.. dropdown::  pygmo_gaco

    .. code-block::

        "pygmo_gaco"

    Minimize a scalar function using the generalized ant colony algorithm.

    The version available through pygmo is a generalized version of the
    original ant colony algorithm proposed by :cite:`Schlueter2009`.

    This algorithm can be applied to box-bounded problems.

    Ant colony optimization is a class of optimization algorithms modeled on the
    actions of an ant colony. Artificial "ants" (e.g. simulation agents) locate
    optimal solutions by moving through a parameter space representing all
    possible solutions. Real ants lay down pheromones directing each other to
    resources while exploring their environment. The simulated "ants" similarly
    record their positions and the quality of their solutions, so that in later
    simulation iterations more ants locate better solutions.

    The generalized ant colony algorithm generates future generations of ants by
    using a multi-kernel gaussian distribution based on three parameters (i.e.,
    pheromone values) which are computed depending on the quality of each
    previous solution. The solutions are ranked through an oracle penalty
    method.

    - **population_size** (int): Size of the population. If None, it's twice the
      number of parameters but at least 64.
    - **batch_evaluator** (str or Callable): Name of a pre-implemented batch
      evaluator (currently 'joblib' and 'pathos_mp') or Callable with the same
      interface as the optimagic batch_evaluators. See :ref:`batch_evaluators`.
    - **n_cores** (int): Number of cores to use.
    - **seed** (int): seed used by the internal random number generator.
    - **discard_start_params** (bool): If True, the start params are not guaranteed
      to be part of the initial population. This saves one criterion function
      evaluation that cannot be done in parallel with other evaluations. Default
      False.

    - **stopping.maxiter** (int): Number of generations to evolve.
    - **kernel_size** (int): Number of solutions stored in the solution archive.
    - **speed_parameter_q** (float): This parameter manages the convergence speed
      towards the found minima (the smaller the faster). In the pygmo
      documentation it is referred to as $q$. It must be positive and can be
      larger than 1. The default is 1.0 until **threshold** is reached. Then it
      is set to 0.01.
    - **oracle** (float): oracle parameter used in the penalty method.
    - **accuracy** (float): accuracy parameter for maintaining a minimum distance
      between the values of the penalty function.
    - **threshold** (int): when the iteration counter reaches the threshold the
      convergence speed is set to 0.01 automatically. To deactivate this effect
      set the threshold to stopping.maxiter which is the largest allowed
      value.
    - **speed_of_std_values_convergence** (int): parameter that determines the
      convergence speed of the standard deviations. This must be an integer
      (`n_gen_mark` in pygmo and pagmo).
    - **stopping.max_n_without_improvements** (int): if set to a positive integer,
      the algorithm counts consecutive iterations without improvement and stops
      once this count exceeds the given value.
    - **stopping.maxfun** (int): maximum number of function
      evaluations.
    - **focus** (float): this parameter makes the search for the optimum greedier
      and more focused on local improvements (the higher the greedier). If the
      value is very high, the search is more focused around the current best
      solutions. Values larger than 1 are allowed.
    - **cache** (bool): if True, memory is activated in the algorithm for multiple calls.

```
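The multi-kernel Gaussian sampling described above can be sketched in a few lines. The `sample_from_archive` helper, its rank-based weights, and the kernel-width heuristic are illustrative assumptions, not pygmo's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_from_archive(archive, q=1.0):
    """Draw one candidate from a multi-kernel Gaussian built on a solution
    archive sorted from best to worst. Rank-based weights and the width
    heuristic are illustrative, not pygmo's actual formulas."""
    k, dim = archive.shape
    ranks = np.arange(k)
    # A smaller q concentrates the weights on the best-ranked solutions.
    weights = np.exp(-(ranks**2) / (2.0 * (q * k) ** 2))
    weights /= weights.sum()
    center = archive[rng.choice(k, p=weights)]
    # Kernel width: mean absolute deviation of the archive from the center.
    sigma = np.mean(np.abs(archive - center), axis=0) + 1e-12
    return rng.normal(center, sigma)

archive = rng.normal(size=(5, 2))  # rows assumed ordered best to worst
candidate = sample_from_archive(archive)
print(candidate.shape)  # (2,)
```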

```{eval-rst}
.. dropdown::  pygmo_bee_colony

    .. code-block::

        "pygmo_bee_colony"

    Minimize a scalar function using the artificial bee colony algorithm.

    The Artificial Bee Colony Algorithm was originally proposed by
    :cite:`Karaboga2007`. The implemented version of the algorithm is proposed
    in :cite:`Mernik2015`. The algorithm is only suited for bounded parameter
    spaces.

    - **stopping.maxiter** (int): Number of generations to evolve.
    - **seed** (int): seed used by the internal random number generator.
    - **discard_start_params** (bool): If True, the start params are not guaranteed
      to be part of the initial population. This saves one criterion function
      evaluation that cannot be done in parallel with other evaluations. Default
      False.
    - **max_n_trials** (int): Maximum number of trials for abandoning a source.
      Default is 1.
    - **population_size** (int): Size of the population. If None, it's twice the
      number of parameters but at least 20.
```
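The role of `max_n_trials` can be illustrated with a minimal sketch of the scout-bee step: a food source that has not improved for `max_n_trials` attempts is abandoned and re-initialized uniformly within the bounds. The helper `maybe_abandon` is hypothetical:

```python
import random

random.seed(0)

def maybe_abandon(source, trials, max_n_trials, bounds):
    """Scout-bee step (illustrative): abandon a food source that has not
    improved for max_n_trials attempts and re-initialize it uniformly
    within the bounds; otherwise keep it unchanged."""
    if trials >= max_n_trials:
        new_source = [random.uniform(lo, hi) for lo, hi in bounds]
        return new_source, 0
    return source, trials

source, trials = maybe_abandon(
    source=[0.5, 0.5], trials=1, max_n_trials=1, bounds=[(-1.0, 1.0)] * 2
)
print(trials)  # 0 -> the exhausted source was replaced by a scout
```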

```{eval-rst}
.. dropdown::  pygmo_de

    .. code-block::

        "pygmo_de"

    Minimize a scalar function using the differential evolution algorithm.

    Differential Evolution is a heuristic optimizer originally presented in
    :cite:`Storn1997`. The algorithm is only suited for bounded parameter
    spaces.

    - **population_size** (int): Size of the population. If None, it's twice the
      number of parameters but at least 10.
    - **seed** (int): seed used by the internal random number generator.
    - **discard_start_params** (bool): If True, the start params are not guaranteed
      to be part of the initial population. This saves one criterion function
      evaluation that cannot be done in parallel with other evaluations. Default
      False.
    - **stopping.maxiter** (int): Number of generations to evolve.
    - **weight_coefficient** (float): Weight coefficient. It is denoted by $F$ in
      the main paper and must lie in [0, 2]. It controls the amplification of
      the differential variation $(x_{r_2, G} - x_{r_3, G})$.
    - **crossover_probability** (float): Crossover probability.
    - **mutation_variant** (str or int): code for the mutation variant used to create
      a new candidate individual. The default is "rand/1/exp". The following are
      available:

        - "best/1/exp" (1, when specified as int)
        - "rand/1/exp" (2, when specified as int)
        - "rand-to-best/1/exp" (3, when specified as int)
        - "best/2/exp" (4, when specified as int)
        - "rand/2/exp" (5, when specified as int)
        - "best/1/bin" (6, when specified as int)
        - "rand/1/bin" (7, when specified as int)
        - "rand-to-best/1/bin" (8, when specified as int)
        - "best/2/bin" (9, when specified as int)
        - "rand/2/bin" (10, when specified as int)
    - **convergence.criterion_tolerance**: stopping criteria on the criterion
      tolerance. Default is 1e-6. It is not clear whether this is the absolute
      or relative criterion tolerance.
    - **convergence.xtol_rel**: stopping criteria on the x
      tolerance. In pygmo the default is 1e-6 but we use our default value of
      1e-5.
```
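As a sketch of how `weight_coefficient` ($F$) and `crossover_probability` interact, the following implements the classic "rand/1/bin" mutation variant in plain NumPy (an illustration, not pygmo's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_1_bin(pop, i, weight_coefficient=0.8, crossover_probability=0.9):
    """DE "rand/1/bin" trial vector for individual i: mutate three distinct
    random members and binomially cross with the current vector (sketch)."""
    n, dim = pop.shape
    r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
    mutant = pop[r1] + weight_coefficient * (pop[r2] - pop[r3])
    cross = rng.random(dim) < crossover_probability
    cross[rng.integers(dim)] = True  # at least one component from the mutant
    return np.where(cross, mutant, pop[i])

pop = rng.normal(size=(10, 3))
trial = rand_1_bin(pop, i=0)
print(trial.shape)  # (3,)
```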

```{eval-rst}
.. dropdown::  pygmo_sea

    .. code-block::

        "pygmo_sea"

    Minimize a scalar function using the (N+1)-ES simple evolutionary algorithm.

    This algorithm represents the simplest evolutionary strategy, where a population of
    $\lambda$ individuals at each generation produces one offspring by mutating its best
    individual uniformly at random within the bounds. If the offspring is better
    than the worst individual in the population, it replaces it.

    See :cite:`Oliveto2007`.

    The algorithm is only suited for bounded parameter spaces.

    - **population_size** (int): Size of the population. If None, it's twice the number of
      parameters but at least 10.
    - **seed** (int): seed used by the internal random number generator.
    - **discard_start_params** (bool): If True, the start params are not guaranteed to be
      part of the initial population. This saves one criterion function evaluation that
      cannot be done in parallel with other evaluations. Default False.
    - **stopping.maxiter** (int): number of generations to consider. Each generation
      will compute the objective function once.

```
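The (N+1)-ES step described above can be sketched as follows; `sea_step` is a hypothetical helper that mutates one component of the best individual and replaces the worst member if the offspring improves on it:

```python
import random

random.seed(0)

def sea_step(pop, fun, bounds):
    """One (N+1)-ES generation (sketch): mutate one component of the best
    individual uniformly within the bounds; if the offspring beats the worst
    member, it takes its place."""
    pop = sorted(pop, key=fun)
    child = list(pop[0])
    j = random.randrange(len(bounds))
    child[j] = random.uniform(*bounds[j])
    if fun(child) < fun(pop[-1]):
        pop[-1] = child
    return pop

pop = [[random.uniform(-5, 5)] for _ in range(4)]
pop = sea_step(pop, fun=lambda x: x[0] ** 2, bounds=[(-5, 5)])
print(len(pop))  # 4
```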

```{eval-rst}
.. dropdown::  pygmo_sga

    .. code-block::

        "pygmo_sga"

    Minimize a scalar function using a simple genetic algorithm.

    A detailed description of the algorithm can be found `in the pagmo2 documentation
    <https://esa.github.io/pagmo2/docs/cpp/algorithms/sga.html>`_.

    See also :cite:`Oliveto2007`.

    - **population_size** (int): Size of the population. If None, it's twice the number of
      parameters but at least 64.
    - **seed** (int): seed used by the internal random number generator.
    - **discard_start_params** (bool): If True, the start params are not guaranteed to be
      part of the initial population. This saves one criterion function evaluation that
      cannot be done in parallel with other evaluations. Default False.
    - **stopping.maxiter** (int): Number of generations to evolve.
    - **crossover_probability** (float): Crossover probability.
    - **crossover_strategy** (str): the crossover strategy. One of "exponential",
      "binomial", "single" or "sbx". Default is "exponential".
    - **eta_c** (float): distribution index for “sbx” crossover. This is an inactive
      parameter if other types of crossovers are selected. Can be in [1, 100].
    - **mutation_probability** (float): Mutation probability.
    - **mutation_strategy** (str): Mutation strategy. Must be "gaussian", "polynomial" or
      "uniform". Default is "polynomial".
    - **mutation_polynomial_distribution_index** (float): Must be in [0, 1]. Default is 1.
    - **mutation_gaussian_width** (float): Must be in [0, 1]. Default is 1.
    - **selection_strategy** (str): Selection strategy. Must be "tournament" or "truncated".
    - **selection_truncated_n_best** (int): number of best individuals to use in the
      "truncated" selection mechanism.
    - **selection_tournament_size** (int): size of the tournament in the "tournament"
      selection mechanism. Default is 1.
```
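As an illustration of the "tournament" selection mechanism and the role of `selection_tournament_size`, here is a minimal sketch (not pygmo's implementation):

```python
import random

random.seed(0)

def tournament_select(pop, fitness, tournament_size=2):
    """"tournament" selection: draw tournament_size individuals at random
    and return the fittest (lowest fitness, since we minimize)."""
    contenders = random.sample(range(len(pop)), tournament_size)
    return pop[min(contenders, key=lambda i: fitness[i])]

pop = [[1.0], [0.2], [3.0], [-0.5]]
fitness = [x[0] ** 2 for x in pop]
parent = tournament_select(pop, fitness)
print(parent in pop)  # True
```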

```{eval-rst}
.. dropdown::  pygmo_sade

    .. code-block::

        "pygmo_sade"

    Minimize a scalar function using Self-adaptive Differential Evolution.

    The original Differential Evolution algorithm (pygmo_de) can be significantly
    improved by introducing the idea of parameter self-adaptation.

    Many different proposals have been made to self-adapt both the crossover and the
    F parameters of the original differential evolution algorithm. pygmo's
    implementation supports two different mechanisms. The first one, proposed by
    :cite:`Brest2006`, does not make use of the differential evolution operators to
    produce new values for the weight coefficient $F$ and the crossover probability
    $CR$ and, strictly speaking, is thus not self-adaptation but rather parameter
    control. The resulting differential evolution variant is often referred to as jDE.
    The second variant is inspired by the ideas introduced by :cite:`Elsayed2011` and
    uses a variation of the selected DE operator to produce new $CR$ and $F$ parameters
    for each individual. This variant is referred to as iDE.

    - **population_size** (int): Size of the population. If None, it's twice the number of
      parameters but at least 64.
    - **seed** (int): seed used by the internal random number generator.
    - **discard_start_params** (bool): If True, the start params are not guaranteed to be
      part of the initial population. This saves one criterion function evaluation that
      cannot be done in parallel with other evaluations. Default False.
    - **jde** (bool): Whether to use the jDE self-adaptation variant to control the
      $F$ and $CR$ parameters. If True, jDE is used, else iDE.
    - **stopping.maxiter** (int): Number of generations to evolve.
    - **mutation_variant** (int or str): code for the mutation variant to create a new
      candidate individual. The default is "rand/1/exp". The first ten are the
      classical mutation variants introduced in the original DE algorithm, the remaining
      ones are, instead, considered in the work by :cite:`Elsayed2011`.
      The following are available:

        - "best/1/exp" or 1
        - "rand/1/exp" or 2
        - "rand-to-best/1/exp" or 3
        - "best/2/exp" or 4
        - "rand/2/exp" or 5
        - "best/1/bin" or 6
        - "rand/1/bin" or 7
        - "rand-to-best/1/bin" or 8
        - "best/2/bin" or 9
        - "rand/2/bin" or 10
        - "rand/3/exp" or 11
        - "rand/3/bin" or 12
        - "best/3/exp" or 13
        - "best/3/bin" or 14
        - "rand-to-current/2/exp" or 15
        - "rand-to-current/2/bin" or 16
        - "rand-to-best-and-current/2/exp" or 17
        - "rand-to-best-and-current/2/bin" or 18

    - **keep_adapted_params** (bool): when true the adapted parameters $CR$ and $F$ are
      not reset between successive calls to the evolve method. Default is False.
    - **ftol** (float): stopping criterion on the function value tolerance.
    - **xtol** (float): stopping criterion on the parameter tolerance.


```
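The jDE parameter-control rule of :cite:`Brest2006` can be sketched as follows. The resampling probabilities `tau1` and `tau2` and the range of $F$ follow the original paper; the helper itself is illustrative:

```python
import random

random.seed(0)

def jde_update(F, CR, tau1=0.1, tau2=0.1):
    """jDE-style parameter control (after Brest et al. 2006): with a small
    probability, resample F in [0.1, 1.0) and CR in [0, 1); otherwise inherit
    the parent's values. A sketch, not pygmo's exact implementation."""
    if random.random() < tau1:
        F = 0.1 + 0.9 * random.random()
    if random.random() < tau2:
        CR = random.random()
    return F, CR

F, CR = jde_update(F=0.5, CR=0.9)
print(0.1 <= F < 1.0, 0.0 <= CR < 1.0)  # True True
```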

```{eval-rst}
.. dropdown::  pygmo_cmaes

    .. code-block::

        "pygmo_cmaes"

    Minimize a scalar function using the Covariance Matrix Evolutionary Strategy.

    CMA-ES is one of the most successful algorithms, classified as an Evolutionary
    Strategy, for derivative-free global optimization. The version supported by
    optimagic is the version described in :cite:`Hansen2006`.

    In contrast to the pygmo version, optimagic always sets force_bounds to True. This
    prevents the evaluation of ill-defined parameter values.

    - **population_size** (int): Size of the population. If None, it's twice the number of
      parameters but at least 64.
    - **seed** (int): seed used by the internal random number generator.
    - **discard_start_params** (bool): If True, the start params are not guaranteed to be
      part of the initial population. This saves one criterion function evaluation that
      cannot be done in parallel with other evaluations. Default False.

    - **stopping.maxiter** (int): Number of generations to evolve.
    - **backward_horizon** (float): backward time horizon for the evolution path. It must
      lie between 0 and 1.
    - **variance_loss_compensation** (float): makes partly up for the small variance loss in
      case the indicator is zero. `cs` in the MATLAB Code of :cite:`Hansen2006`. It must
      lie between 0 and 1.
    - **learning_rate_rank_one_update** (float): learning rate for the rank-one update of
      the covariance matrix. `c1` in the pygmo and pagmo documentation. It must lie
      between 0 and 1.
    - **learning_rate_rank_mu_update** (float): learning rate for the rank-mu update of the
      covariance matrix. `cmu` in the pygmo and pagmo documentation. It must lie between
      0 and 1.
    - **initial_step_size** (float): initial step size, :math:`\sigma^0` in the original
      paper.
    - **ftol** (float): stopping criterion on the function value tolerance.
    - **xtol** (float): stopping criterion on the parameter tolerance.
    - **keep_adapted_params** (bool):  when true the adapted parameters are not reset
      between successive calls to the evolve method. Default is False.

```
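The rank-one covariance update that `learning_rate_rank_one_update` controls can be written down in a few lines; this is the textbook formula, not pygmo's internal code:

```python
import numpy as np

def rank_one_update(C, p, c1=0.1):
    """Rank-one covariance update used by CMA-ES: shrink the old covariance
    and add the outer product of the evolution path p. Here c1 plays the
    role of learning_rate_rank_one_update."""
    return (1.0 - c1) * C + c1 * np.outer(p, p)

C = np.eye(2)
p = np.array([1.0, 0.5])
C_new = rank_one_update(C, p)
print(np.allclose(C_new, C_new.T))  # True: the update preserves symmetry
```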

```{eval-rst}
.. dropdown::  pygmo_simulated_annealing

    .. code-block::

        "pygmo_simulated_annealing"

    Minimize a function with the simulated annealing algorithm.

    This version of the simulated annealing algorithm is, essentially, an iterative
    random search procedure with adaptive moves along the coordinate directions. It
    permits uphill moves under the control of the Metropolis criterion, in the hope
    of avoiding the first local minima encountered. This version is the one proposed in
    :cite:`Corana1987`.

    .. note:: When selecting the starting and final temperature values it helps to
        think of the temperature as the deterioration in the objective function value
        that still has a 37% chance of being accepted.

    - **population_size** (int): Size of the population. If None, it's twice the number of
      parameters but at least 64.
    - **seed** (int): seed used by the internal random number generator.
    - **discard_start_params** (bool): If True, the start params are not guaranteed to be
      part of the initial population. This saves one criterion function evaluation that
      cannot be done in parallel with other evaluations. Default False.
    - **start_temperature** (float): starting temperature. Must be > 0.
    - **end_temperature** (float): final temperature. Our default (0.01) is lower than in
      pygmo and pagmo. The final temperature must be positive.
    - **n_temp_adjustments** (int): number of temperature adjustments in the annealing
      schedule.
    - **n_range_adjustments** (int): number of adjustments of the search range performed at
      a constant temperature.
    - **bin_size** (int): number of mutations that are used to compute the acceptance rate.
    - **start_range** (float): starting range for mutating the decision vector. It must lie
      between 0 and 1.
```
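The 37% intuition for choosing temperatures follows directly from the Metropolis criterion: a deterioration exactly equal to the current temperature is accepted with probability $e^{-1} \approx 0.37$. A minimal sketch:

```python
import math
import random

random.seed(0)

def accept(delta, temperature):
    """Metropolis criterion: always accept improvements (delta <= 0); accept
    a deterioration delta > 0 with probability exp(-delta / temperature)."""
    return delta <= 0 or random.random() < math.exp(-delta / temperature)

# A deterioration exactly equal to the temperature is accepted with
# probability exp(-1), i.e. roughly 37%.
print(round(math.exp(-1), 2))  # 0.37
```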

```{eval-rst}
.. dropdown::  pygmo_pso

    .. code-block::

        "pygmo_pso"

    Minimize a scalar function using Particle Swarm Optimization.

    Particle swarm optimization (PSO) is a population based algorithm inspired by the
    foraging behaviour of swarms. In PSO each point has memory of the position where
    it achieved the best performance, :math:`x^l_i` (local memory), and of the best
    decision vector :math:`x^g` in a certain neighbourhood, and uses this information
    to update its position.

    For a survey on particle swarm optimization algorithms, see :cite:`Poli2007`.

    Each particle determines its future position :math:`x_{i+1} = x_i + v_i` where

    .. math:: v_{i+1} = \omega (v_i + \eta_1 \cdot \mathbf{r}_1 \cdot (x_i - x^{l}_i) +
        \eta_2 \cdot \mathbf{r}_2 \cdot (x_i - x^g))

    - **population_size** (int): Size of the population. If None, it's twice the number of
      parameters but at least 10.
    - **seed** (int): seed used by the internal random number generator.
    - **discard_start_params** (bool): If True, the start params are not guaranteed to be
      part of the initial population. This saves one criterion function evaluation that
      cannot be done in parallel with other evaluations. Default False.
    - **stopping.maxiter** (int): Number of generations to evolve.

    - **omega** (float): depending on the variant chosen, :math:`\omega` is the particles'
      inertia weight or the constriction coefficient. It must lie between 0 and 1.
    - **force_of_previous_best** (float): :math:`\eta_1` in the equation above. It's the
      magnitude of the force, applied to the particle’s velocity, in the direction of
      its previous best position. It must lie between 0 and 4.
    - **force_of_best_in_neighborhood** (float): :math:`\eta_2` in the equation above. It's
      the magnitude of the force, applied to the particle’s velocity, in the direction
      of the best position in its neighborhood. It must lie between 0 and 4.
    - **max_velocity** (float): maximum allowed particle velocity as fraction of the box
      bounds. It must lie between 0 and 1.
    - **algo_variant** (int or str): algorithm variant to be used:

        - 1 or "canonical_inertia": Canonical (with inertia weight)
        - 2 or "social_and_cog_rand": Same social and cognitive rand.
        - 3 or "all_components_rand": Same rand. for all components
        - 4 or "one_rand": Only one rand.
        - 5 or "canonical_constriction": Canonical (with constriction fact.)
        - 6 or "fips": Fully Informed (FIPS)

    - **neighbor_definition** (int or str): swarm topology that defines each
      particle's neighbors:

        - 1 or "gbest"
        - 2 or "lbest"
        - 3 or "Von Neumann"
        - 4 or "Adaptive random"

    - **neighbor_param** (int): the neighbourhood parameter. If the lbest topology is
      selected (neighbor_definition=2), it represents each particle's indegree (also
      outdegree) in the swarm topology. Particles have neighbours up to a radius of k =
      neighbor_param / 2 in the ring. If the Randomly-varying neighbourhood topology is
      selected (neighbor_definition=4), it represents each particle’s maximum outdegree
      in the swarm topology. The minimum outdegree is 1 (the particle always connects
      back to itself). If neighbor_definition is 1 or 3 this parameter is ignored.
    - **keep_velocities** (bool): when true the particle velocities are not reset between
      successive calls to `evolve`.
```
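The velocity update can be sketched in the canonical inertia-weight form; note that sign conventions for the attraction terms differ across references, and this is the textbook variant rather than pygmo's exact code:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_velocity(v, x, x_local_best, x_global_best, omega=0.7, eta1=2.0, eta2=2.0):
    """Canonical inertia-weight velocity update: pull the particle toward its
    own best position and the best position in its neighborhood."""
    r1 = rng.random(x.size)
    r2 = rng.random(x.size)
    return omega * v + eta1 * r1 * (x_local_best - x) + eta2 * r2 * (x_global_best - x)

x = np.zeros(2)
v = pso_velocity(np.zeros(2), x, x_local_best=np.ones(2), x_global_best=np.ones(2))
print(v.shape)  # (2,)
```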

```{eval-rst}
.. dropdown::  pygmo_pso_gen

    .. code-block::

        "pygmo_pso_gen"

    Minimize a scalar function with generational Particle Swarm Optimization.

    Particle Swarm Optimization (generational) is identical to pso, except that it
    updates the velocities of all particles before new particle positions are
    computed (taking all updated particle velocities into consideration). Each
    particle is thus evaluated on the same seed within a generation, as opposed to
    the standard PSO, which evaluates one particle at a time. Consequently, the
    generational PSO algorithm is suited for stochastic optimization problems.

    For a survey on particle swarm optimization algorithms, see :cite:`Poli2007`.

    Each particle determines its future position :math:`x_{i+1} = x_i + v_i` where

    .. math:: v_{i+1} = \omega (v_i + \eta_1 \cdot \mathbf{r}_1 \cdot (x_i - x^{l}_i) +
        \eta_2 \cdot \mathbf{r}_2 \cdot (x_i - x^g))

    - **population_size** (int): Size of the population. If None, it's twice the number of
      parameters but at least 10.
    - **batch_evaluator** (str or Callable): Name of a pre-implemented batch evaluator
      (currently 'joblib' and 'pathos_mp') or Callable with the same interface as the
      optimagic batch_evaluators. See :ref:`batch_evaluators`.
    - **n_cores** (int): Number of cores to use.
    - **seed** (int): seed used by the internal random number generator.
    - **discard_start_params** (bool): If True, the start params are not guaranteed to be
      part of the initial population. This saves one criterion function evaluation that
      cannot be done in parallel with other evaluations. Default False.
    - **stopping.maxiter** (int): Number of generations to evolve.

    - **omega** (float): depending on the variant chosen, :math:`\omega` is the particles'
      inertia weight or the constriction coefficient. It must lie between 0 and 1.
    - **force_of_previous_best** (float): :math:`\eta_1` in the equation above. It's the
      magnitude of the force, applied to the particle’s velocity, in the direction of
      its previous best position. It must lie between 0 and 4.
    - **force_of_best_in_neighborhood** (float): :math:`\eta_2` in the equation above. It's
      the magnitude of the force, applied to the particle’s velocity, in the direction
      of the best position in its neighborhood. It must lie between 0 and 4.
    - **max_velocity** (float): maximum allowed particle velocity as fraction of the box
      bounds. It must lie between 0 and 1.
    - **algo_variant** (int or str): code of the algorithm variant to be used:

        - 1 or "canonical_inertia": Canonical (with inertia weight)
        - 2 or "social_and_cog_rand": Same social and cognitive rand.
        - 3 or "all_components_rand": Same rand. for all components
        - 4 or "one_rand": Only one rand.
        - 5 or "canonical_constriction": Canonical (with constriction fact.)
        - 6 or "fips": Fully Informed (FIPS)

    - **neighbor_definition** (int or str): code for the swarm topology that defines
      each particle's neighbors:

        - 1 or "gbest"
        - 2 or "lbest"
        - 3 or "Von Neumann"
        - 4 or "Adaptive random"

    - **neighbor_param** (int): the neighbourhood parameter. If the lbest topology is
      selected (neighbor_definition=2), it represents each particle's indegree (also
      outdegree) in the swarm topology. Particles have neighbours up to a radius of k =
      neighbor_param / 2 in the ring. If the Randomly-varying neighbourhood topology is
      selected (neighbor_definition=4), it represents each particle’s maximum outdegree
      in the swarm topology. The minimum outdegree is 1 (the particle always connects
      back to itself). If neighbor_definition is 1 or 3 this parameter is ignored.
    - **keep_velocities** (bool): when true the particle velocities are not reset between
      successive calls to `evolve`.
```
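The difference to standard PSO is the synchronous update order: all velocities are computed from the current swarm state before any particle moves. A minimal sketch, where `update_velocity` is a hypothetical stand-in for the full velocity rule:

```python
import numpy as np

def generational_step(positions, update_velocity):
    """Generational (synchronous) PSO step: compute every particle's new
    velocity from the *current* swarm state first, then move all particles
    at once. Standard PSO instead updates and moves one particle at a time."""
    new_velocities = np.array(
        [update_velocity(i) for i in range(len(positions))]
    )
    return positions + new_velocities, new_velocities

positions = np.zeros((3, 2))
velocities = np.ones((3, 2))
new_pos, new_vel = generational_step(positions, lambda i: velocities[i])
print(new_pos.shape)  # (3, 2)
```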

```{eval-rst}
.. dropdown::  pygmo_mbh

    .. code-block::

        "pygmo_mbh"

    Minimize a scalar function using generalized Monotonic Basin Hopping.

    Monotonic basin hopping, or simply basin hopping, is an algorithm rooted in the
    idea of mapping the objective function :math:`f(x_0)` into the local minima found
    starting from :math:`x_0`.
```
        │   ├── test_multistart.py
        │   ├── test_multistart_options.py
        │   ├── test_optimize.py
        │   ├── test_optimize_result.py
        │   ├── test_params_versions.py
        │   ├── test_process_result.py
        │   ├── test_scipy_aliases.py
        │   ├── test_useful_exceptions.py
        │   ├── test_with_advanced_constraints.py
        │   ├── test_with_bounds.py
        │   ├── test_with_constraints.py
        │   ├── test_with_logging.py
        │   ├── test_with_multistart.py
        │   ├── test_with_nonlinear_constraints.py
        │   └── test_with_scaling.py
        ├── optimizers/
        │   ├── __init__.py
        │   ├── _pounders/
        │   │   ├── __init__.py
        │   │   ├── fixtures/
        │   │   │   ├── add_points_until_main_model_fully_linear_i.yaml
        │   │   │   ├── add_points_until_main_model_fully_linear_ii.yaml
        │   │   │   ├── find_affine_points_nonzero_i.yaml
        │   │   │   ├── find_affine_points_nonzero_ii.yaml
        │   │   │   ├── find_affine_points_nonzero_iii.yaml
        │   │   │   ├── find_affine_points_zero_i.yaml
        │   │   │   ├── find_affine_points_zero_ii.yaml
        │   │   │   ├── find_affine_points_zero_iii.yaml
        │   │   │   ├── find_affine_points_zero_iv.yaml
        │   │   │   ├── get_coefficients_residual_model.yaml
        │   │   │   ├── get_interpolation_matrices_residual_model.yaml
        │   │   │   ├── interpolate_f_iter_4.yaml
        │   │   │   ├── interpolate_f_iter_7.yaml
        │   │   │   ├── pounders_example_data.csv
        │   │   │   ├── scalar_model.pkl
        │   │   │   ├── update_initial_residual_model.yaml
        │   │   │   ├── update_intial_residual_model.yaml
        │   │   │   ├── update_main_from_residual_model.yaml
        │   │   │   ├── update_main_with_new_accepted_x.yaml
        │   │   │   ├── update_residual_model.yaml
        │   │   │   └── update_residual_model_with_new_accepted_x.yaml
        │   │   ├── test_linear_subsolvers.py
        │   │   ├── test_pounders_history.py
        │   │   ├── test_pounders_unit.py
        │   │   └── test_quadratic_subsolvers.py
        │   ├── test_bayesian_optimizer.py
        │   ├── test_bhhh.py
        │   ├── test_fides_options.py
        │   ├── test_gfo_optimizers.py
        │   ├── test_iminuit_migrad.py
        │   ├── test_ipopt_options.py
        │   ├── test_nag_optimizers.py
        │   ├── test_neldermead.py
        │   ├── test_nevergrad.py
        │   ├── test_pounders_integration.py
        │   ├── test_pygad_optimizer.py
        │   ├── test_pygmo_optimizers.py
        │   ├── test_pyswarms_optimizers.py
        │   ├── test_tao_optimizers.py
        │   └── test_tranquilo.py
        ├── parameters/
        │   ├── test_block_trees.py
        │   ├── test_bounds.py
        │   ├── test_check_constraints.py
        │   ├── test_constraint_tools.py
        │   ├── test_conversion.py
        │   ├── test_kernel_transformations.py
        │   ├── test_nonlinear_constraints.py
        │   ├── test_process_constraints.py
        │   ├── test_process_selectors.py
        │   ├── test_scale_conversion.py
        │   ├── test_scaling.py
        │   ├── test_space_conversion.py
        │   ├── test_tree_conversion.py
        │   └── test_tree_registry.py
        ├── shared/
        │   ├── __init__.py
        │   └── test_process_user_functions.py
        ├── test_algo_selection.py
        ├── test_batch_evaluators.py
        ├── test_constraints.py
        ├── test_decorators.py
        ├── test_deprecations.py
        ├── test_mark.py
        ├── test_timing.py
        ├── test_type_conversion.py
        ├── test_typed_dicts_consistency.py
        ├── test_utilities.py
        └── visualization/
            ├── test_backends.py
            ├── test_convergence_plot.py
            ├── test_deviation_plot.py
            ├── test_history_plots.py
            ├── test_plotting_utilities.py
            ├── test_profile_plot.py
            ├── test_slice_plot.py
            └── test_slice_plot_3d.py
SYMBOL INDEX (3275 symbols across 233 files)

FILE: .tools/create_algo_selection_code.py
  function main (line 14) | def main() -> None:
  function _import_optimizer_modules (line 80) | def _import_optimizer_modules(package_name: str) -> list[ModuleType]:
  function _get_all_algorithms (line 96) | def _get_all_algorithms(modules: list[ModuleType]) -> dict[str, Type[Alg...
  function _get_algorithms_in_module (line 104) | def _get_algorithms_in_module(module: ModuleType) -> dict[str, Type[Algo...
  function _is_gradient_based (line 121) | def _is_gradient_based(algo: Type[Algorithm]) -> bool:
  function _is_gradient_free (line 125) | def _is_gradient_free(algo: Type[Algorithm]) -> bool:
  function _is_global (line 129) | def _is_global(algo: Type[Algorithm]) -> bool:
  function _is_local (line 133) | def _is_local(algo: Type[Algorithm]) -> bool:
  function _is_bounded (line 137) | def _is_bounded(algo: Type[Algorithm]) -> bool:
  function _is_linear_constrained (line 141) | def _is_linear_constrained(algo: Type[Algorithm]) -> bool:
  function _is_nonlinear_constrained (line 145) | def _is_nonlinear_constrained(algo: Type[Algorithm]) -> bool:
  function _is_scalar (line 149) | def _is_scalar(algo: Type[Algorithm]) -> bool:
  function _is_least_squares (line 153) | def _is_least_squares(algo: Type[Algorithm]) -> bool:
  function _is_likelihood (line 157) | def _is_likelihood(algo: Type[Algorithm]) -> bool:
  function _is_parallel (line 161) | def _is_parallel(algo: Type[Algorithm]) -> bool:
  function _get_filters (line 165) | def _get_filters() -> dict[str, Callable[[Type[Algorithm]], bool]]:
  function _create_selection_info (line 189) | def _create_selection_info(
  function _generate_category_combinations (line 213) | def _generate_category_combinations(categories: list[str]) -> list[tuple...
  function _apply_filters (line 229) | def _apply_filters(
  function create_dataclass_code (line 255) | def create_dataclass_code(
  function _get_class_name (line 314) | def _get_class_name(active_categories: tuple[str, ...]) -> str:
  function _get_children (line 319) | def _get_children(
  function _get_imports (line 353) | def _get_imports(modules: list[ModuleType]) -> str:
  function _get_base_class_code (line 374) | def _get_base_class_code() -> str:
  function _get_docstring_code (line 419) | def _get_docstring_code() -> str:
  function _get_instantiation_code (line 433) | def _get_instantiation_code() -> str:
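The `_generate_category_combinations` helper listed above presumably enumerates subsets of algorithm categories (Bounded, Global, GradientFree, ...) so that a selection class can be generated for each combination. A minimal sketch of that idea, under the assumption that it returns all non-empty subsets ordered from largest to smallest (the real helper in `.tools/create_algo_selection_code.py` may differ):

```python
from itertools import combinations


def generate_category_combinations(categories):
    """Return all non-empty subsets of categories, largest subsets first.

    Hypothetical re-implementation inferred from the symbol name alone;
    not the actual code in create_algo_selection_code.py.
    """
    out = []
    for size in range(len(categories), 0, -1):
        out.extend(combinations(sorted(categories), size))
    return out


result = generate_category_combinations(["Bounded", "Global"])
# one 2-element subset plus two 1-element subsets
```

Each tuple would then map to a generated class name such as `BoundedGlobalAlgorithms` via `_get_class_name`.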

FILE: .tools/test_create_algo_selection_code.py
  function test_generate_category_combinations (line 4) | def test_generate_category_combinations() -> None:

FILE: .tools/update_algo_selection_hook.py
  function run (line 14) | def run(cmd: list[str], **kwargs: Any) -> None:
  function ensure_optimagic_is_locally_installed (line 18) | def ensure_optimagic_is_locally_installed() -> None:
  function main (line 23) | def main() -> int:

FILE: docs/source/_static/js/custom.js
  function createTermynals (line 41) | function createTermynals() {
  function loadVisibleTermynals (line 122) | function loadVisibleTermynals() {

FILE: docs/source/_static/js/require.js
  function commentReplace (line 5) | function commentReplace(e,t){return t||""}
  function isFunction (line 5) | function isFunction(e){return"[object Function]"===ostring.call(e)}
  function isArray (line 5) | function isArray(e){return"[object Array]"===ostring.call(e)}
  function each (line 5) | function each(e,t){if(e)for(var i=0;i<e.length&&(!e[i]||!t(e[i],i,e));i+...
  function eachReverse (line 5) | function eachReverse(e,t){if(e)for(var i=e.length-1;-1<i&&(!e[i]||!t(e[i...
  function hasProp (line 5) | function hasProp(e,t){return hasOwn.call(e,t)}
  function getOwn (line 5) | function getOwn(e,t){return hasProp(e,t)&&e[t]}
  function eachProp (line 5) | function eachProp(e,t){for(var i in e)if(hasProp(e,i)&&-1==disallowedPro...
  function mixin (line 5) | function mixin(i,e,r,n){e&&eachProp(e,function(e,t){!r&&hasProp(i,t)||(!...
  function bind (line 5) | function bind(e,t){return function(){return t.apply(e,arguments)}}
  function scripts (line 5) | function scripts(){return document.getElementsByTagName("script")}
  function defaultOnError (line 5) | function defaultOnError(e){throw e}
  function getGlobal (line 5) | function getGlobal(e){var t;return e&&(t=global,each(e.split("."),functi...
  function makeError (line 5) | function makeError(e,t,i,r){t=new Error(t+"\nhttps://requirejs.org/docs/...
  function newContext (line 5) | function newContext(u){var t,e,f,c,i,b={waitSeconds:7,baseUrl:"./",paths...
  function getInteractiveScript (line 5) | function getInteractiveScript(){return interactiveScript&&"interactive"=...

FILE: docs/source/_static/js/termynal.js
  class Termynal (line 42) | class Termynal {
    method constructor (line 58) | constructor(container = '#termynal', options = {}) {
    method loadLines (line 80) | loadLines() {
    method init (line 102) | init() {
    method start (line 124) | async start() {
    method generateRestart (line 157) | generateRestart() {
    method generateFinish (line 170) | generateFinish() {
    method addRestart (line 185) | addRestart() {
    method addFinish (line 190) | addFinish() {
    method type (line 199) | async type(line) {
    method progress (line 215) | async progress(line) {
    method _wait (line 240) | _wait(time) {
    method lineDataToElements (line 251) | lineDataToElements(lineData) {
    method _attributes (line 266) | _attributes(line) {

FILE: src/estimagic/__init__.py
  class OptimizeLogReader (line 63) | class OptimizeLogReader(_OptimizeLogReader):
    method __init__ (line 64) | def __init__(self, path):
  class OptimizeResult (line 75) | class OptimizeResult(_OptimizeResult):
    method __post_init__ (line 76) | def __post_init__(self):

FILE: src/estimagic/bootstrap.py
  function bootstrap (line 20) | def bootstrap(
  class BootstrapResult (line 121) | class BootstrapResult:
    method _se (line 127) | def _se(self):
    method _cov (line 131) | def _cov(self):
    method _ci (line 135) | def _ci(self):
    method _p_values (line 139) | def _p_values(self):
    method _summary (line 143) | def _summary(self):
    method base_outcome (line 147) | def base_outcome(self):
    method outcomes (line 158) | def outcomes(self):
    method se (line 174) | def se(self):
    method cov (line 191) | def cov(self, return_type="pytree"):
    method ci (line 221) | def ci(self, ci_method="percentile", ci_level=0.95):
    method p_values (line 248) | def p_values(self):
    method summary (line 259) | def summary(self, ci_method="percentile", ci_level=0.95):
  function _calulcate_summary_data_bootstrap (line 287) | def _calulcate_summary_data_bootstrap(bootstrap_result, ci_method, ci_le...

FILE: src/estimagic/bootstrap_ci.py
  function calculate_ci (line 7) | def calculate_ci(
  function _ci_percentile (line 55) | def _ci_percentile(estimates, alpha):
  function _ci_bc (line 78) | def _ci_bc(estimates, base_outcome, alpha):
  function _ci_t (line 113) | def _ci_t(estimates, base_outcome, alpha):
  function _ci_normal (line 145) | def _ci_normal(estimates, base_outcome, alpha):
  function _ci_basic (line 173) | def _ci_basic(estimates, base_outcome, alpha):
  function _eqf (line 202) | def _eqf(sample):
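The `_ci_percentile` function above implements the standard percentile bootstrap interval: take the empirical alpha/2 and 1-alpha/2 quantiles of the bootstrap estimates. A self-contained sketch of that textbook formula (argument names and return layout are assumptions, not estimagic's actual API):

```python
import numpy as np


def ci_percentile(estimates, alpha=0.05):
    """Percentile bootstrap CI per parameter (columns of `estimates`).

    Sketch of the percentile method; the real _ci_percentile in
    src/estimagic/bootstrap_ci.py may use a different quantile
    interpolation.
    """
    estimates = np.atleast_2d(estimates)
    lower = np.quantile(estimates, alpha / 2, axis=0)
    upper = np.quantile(estimates, 1 - alpha / 2, axis=0)
    return lower, upper


rng = np.random.default_rng(0)
draws = rng.normal(loc=1.0, scale=0.1, size=(10_000, 1))
low, high = ci_percentile(draws)
```

The other methods listed (`_ci_bc`, `_ci_t`, `_ci_normal`, `_ci_basic`) refine this by bias correction, studentization, or a normal approximation.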

FILE: src/estimagic/bootstrap_helpers.py
  function check_inputs (line 4) | def check_inputs(

FILE: src/estimagic/bootstrap_outcomes.py
  function get_bootstrap_outcomes (line 6) | def get_bootstrap_outcomes(
  function _get_bootstrap_outcomes_from_indices (line 62) | def _get_bootstrap_outcomes_from_indices(
  function _take_indices_and_calculate_outcome (line 101) | def _take_indices_and_calculate_outcome(indices, data, outcome):

FILE: src/estimagic/bootstrap_samples.py
  function get_bootstrap_indices (line 5) | def get_bootstrap_indices(
  function _calculate_bootstrap_indices_weights (line 48) | def _calculate_bootstrap_indices_weights(data, weight_by, cluster_by):
  function _convert_cluster_ids_to_indices (line 75) | def _convert_cluster_ids_to_indices(cluster_col, drawn_clusters):
  function get_bootstrap_samples (line 89) | def get_bootstrap_samples(
  function _get_bootstrap_samples_from_indices (line 123) | def _get_bootstrap_samples_from_indices(data, bootstrap_indices):
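`get_bootstrap_indices` above draws resampling indices that the outcome functions are then evaluated on. A minimal sketch of the iid case (the real function also supports `cluster_by` and `weight_by`, which this sketch omits):

```python
import numpy as np


def draw_bootstrap_indices(n_obs, n_draws, seed=None):
    """Draw `n_draws` index arrays of length `n_obs`, with replacement.

    Simplified illustration of the iid path of get_bootstrap_indices
    in src/estimagic/bootstrap_samples.py; name and signature are
    assumptions.
    """
    rng = np.random.default_rng(seed)
    return [rng.integers(0, n_obs, size=n_obs) for _ in range(n_draws)]


indices = draw_bootstrap_indices(n_obs=5, n_draws=3, seed=0)
```

For clustered data, whole clusters are drawn instead of single rows, which is what `_convert_cluster_ids_to_indices` handles.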

FILE: src/estimagic/estimate_ml.py
  function estimate_ml (line 56) | def estimate_ml(
  class LikelihoodResult (line 436) | class LikelihoodResult:
    method __post_init__ (line 454) | def __post_init__(self):
    method _get_free_cov (line 475) | def _get_free_cov(
    method params (line 515) | def params(self):
    method optimize_result (line 519) | def optimize_result(self):
    method jacobian (line 523) | def jacobian(self):
    method hessian (line 531) | def hessian(self):
    method _se (line 539) | def _se(self):
    method _cov (line 543) | def _cov(self):
    method _summary (line 547) | def _summary(self):
    method _ci (line 551) | def _ci(self):
    method _p_values (line 555) | def _p_values(self):
    method se (line 558) | def se(
    method cov (line 608) | def cov(
    method summary (line 661) | def summary(
    method ci (line 713) | def ci(
    method p_values (line 773) | def p_values(
    method to_pickle (line 826) | def to_pickle(self, path):
  function _calculate_free_cov_ml (line 836) | def _calculate_free_cov_ml(

FILE: src/estimagic/estimate_msm.py
  function estimate_msm (line 61) | def estimate_msm(
  function get_msm_optimization_functions (line 378) | def get_msm_optimization_functions(
  function _msm_criterion (line 447) | def _msm_criterion(
  function _partial_kwargs (line 465) | def _partial_kwargs(func, kwargs):
  class MomentsResult (line 484) | class MomentsResult:
    method _get_free_cov (line 502) | def _get_free_cov(self, method, n_samples, bounds_handling, seed):
    method params (line 536) | def params(self):
    method optimize_result (line 540) | def optimize_result(self):
    method weights (line 544) | def weights(self):
    method jacobian (line 548) | def jacobian(self):
    method _se (line 552) | def _se(self):
    method _cov (line 556) | def _cov(self):
    method _summary (line 560) | def _summary(self):
    method _ci (line 564) | def _ci(self):
    method _p_values (line 568) | def _p_values(self):
    method se (line 571) | def se(
    method cov (line 622) | def cov(
    method summary (line 676) | def summary(
    method ci (line 728) | def ci(
    method p_values (line 789) | def p_values(
    method sensitivity (line 840) | def sensitivity(
    method to_pickle (line 994) | def to_pickle(self, path):
  function _calculate_free_cov_msm (line 1004) | def _calculate_free_cov_msm(

FILE: src/estimagic/estimation_table.py
  function estimation_table (line 17) | def estimation_table(
  function render_latex (line 234) | def render_latex(
  function _render_latex (line 302) | def _render_latex(
  function render_html (line 414) | def render_html(
  function _process_model (line 504) | def _process_model(model):
  function _get_estimation_table_body_and_footer (line 540) | def _get_estimation_table_body_and_footer(
  function _build_estimation_table_body (line 624) | def _build_estimation_table_body(
  function _build_estimation_table_footer (line 713) | def _build_estimation_table_footer(
  function _reindex_and_float_format_params (line 764) | def _reindex_and_float_format_params(
  function _get_params_frames_with_common_index (line 776) | def _get_params_frames_with_common_index(models):
  function _get_common_index (line 784) | def _get_common_index(dfs):
  function _get_cols_to_format (line 792) | def _get_cols_to_format(show_inference, confidence_intervals):
  function _apply_number_formatting_frames (line 808) | def _apply_number_formatting_frames(dfs, columns, number_format, add_tra...
  function _update_show_col_groups (line 825) | def _update_show_col_groups(show_col_groups, column_groups):
  function _set_default_stats_options (line 840) | def _set_default_stats_options(stats_options):
  function _get_model_names (line 859) | def _get_model_names(processed_models):
  function _check_order_of_model_names (line 880) | def _check_order_of_model_names(model_names):
  function _get_default_column_names_and_groups (line 899) | def _get_default_column_names_and_groups(model_names):
  function _customize_col_groups (line 923) | def _customize_col_groups(default_col_groups, custom_col_groups):
  function _customize_col_names (line 962) | def _customize_col_names(default_col_names, custom_col_names):
  function _create_group_to_col_position (line 997) | def _create_group_to_col_position(column_groups):
  function _convert_frame_to_string_series (line 1017) | def _convert_frame_to_string_series(
  function _combine_series (line 1075) | def _combine_series(value_sr, inference_sr):
  function _create_statistics_sr (line 1105) | def _create_statistics_sr(
  function _process_frame_indices (line 1187) | def _process_frame_indices(
  function _generate_notes_latex (line 1238) | def _generate_notes_latex(
  function _generate_notes_html (line 1298) | def _generate_notes_html(
  function _extract_params_from_sm (line 1360) | def _extract_params_from_sm(model):
  function _extract_info_from_sm (line 1372) | def _extract_info_from_sm(model):
  function _apply_number_format (line 1391) | def _apply_number_format(df_raw, number_format, format_integers):
  function _format_non_scientific_numbers (line 1430) | def _format_non_scientific_numbers(number_string, format_string):
  function _process_number_format (line 1439) | def _process_number_format(raw_format):
  function _get_digits_after_decimal (line 1459) | def _get_digits_after_decimal(df):
  function _center_align_integers_and_non_numeric_strings (line 1481) | def _center_align_integers_and_non_numeric_strings(sr):
  function _get_updated_styler (line 1494) | def _get_updated_styler(
  function _is_integer (line 1510) | def _is_integer(num):

FILE: src/estimagic/examples/logit.py
  function logit_loglike_and_derivative (line 9) | def logit_loglike_and_derivative(params, y, x):
  function scalar_logit_fun_and_jac (line 14) | def scalar_logit_fun_and_jac(params, y, x):
  function logit_loglike (line 19) | def logit_loglike(params, y, x):
  function logit_grad (line 43) | def logit_grad(params, y, x):
  function logit_jac (line 47) | def logit_jac(params, y, x):
  function logit_hess (line 72) | def logit_hess(params, y, x):  # noqa: ARG001

FILE: src/estimagic/lollipop_plot.py
  function lollipop_plot (line 10) | def lollipop_plot(
  function _harmonize_data (line 139) | def _harmonize_data(data):
  function _make_string_index (line 163) | def _make_string_index(ind):

FILE: src/estimagic/ml_covs.py
  function cov_hessian (line 11) | def cov_hessian(hess):
  function cov_jacobian (line 42) | def cov_jacobian(jac):
  function cov_robust (line 66) | def cov_robust(jac, hess):
  function se_from_cov (line 97) | def se_from_cov(cov):
  function cov_cluster_robust (line 115) | def cov_cluster_robust(jac, hess, design_info):
  function cov_strata_robust (line 147) | def cov_strata_robust(jac, hess, design_info):
  function _sandwich_step (line 181) | def _sandwich_step(hess, meat):
  function _clustering (line 202) | def _clustering(jac, design_info):
  function _stratification (line 229) | def _stratification(jac, design_info):
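The `cov_robust` function above is the classic sandwich estimator for ML: bread (inverse Hessian) around meat (outer product of the score contributions). A sketch of that textbook formula, assuming `jac` stacks per-observation gradients row-wise (sign conventions in the real code may differ):

```python
import numpy as np


def cov_sandwich(jac, hess):
    """Robust ML covariance: H^{-1} (J'J) H^{-1}.

    Illustrative version of cov_robust in src/estimagic/ml_covs.py;
    the real function adds input validation via process_pandas_arguments.
    """
    meat = jac.T @ jac            # outer product of gradients
    bread = np.linalg.inv(hess)   # inverse Hessian
    return bread @ meat @ bread
```

When the information-matrix equality holds, i.e. `hess == jac.T @ jac` up to sign, the sandwich collapses to the plain Hessian-based covariance, which is what `cov_hessian` computes.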

FILE: src/estimagic/msm_covs.py
  function cov_robust (line 8) | def cov_robust(jac, weights, moments_cov):
  function cov_optimal (line 44) | def cov_optimal(jac, weights):

FILE: src/estimagic/msm_sensitivity.py
  function calculate_sensitivity_to_bias (line 20) | def calculate_sensitivity_to_bias(jac, weights):
  function calculate_fundamental_sensitivity_to_noise (line 49) | def calculate_fundamental_sensitivity_to_noise(
  function calculate_actual_sensitivity_to_noise (line 106) | def calculate_actual_sensitivity_to_noise(
  function calculate_actual_sensitivity_to_removal (line 163) | def calculate_actual_sensitivity_to_removal(jac, weights, moments_cov, p...
  function calculate_fundamental_sensitivity_to_removal (line 215) | def calculate_fundamental_sensitivity_to_removal(jac, moments_cov, param...
  function calculate_sensitivity_to_weighting (line 273) | def calculate_sensitivity_to_weighting(jac, weights, moments_cov, params...
  function _sandwich (line 347) | def _sandwich(a, b):
  function _sandwich_plus (line 353) | def _sandwich_plus(a, b, c):

FILE: src/estimagic/msm_weighting.py
  function get_moments_cov (line 14) | def get_moments_cov(
  function get_weighting_matrix (line 76) | def get_weighting_matrix(
  function _assemble_block_diagonal_matrix (line 135) | def _assemble_block_diagonal_matrix(matrices):
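`_assemble_block_diagonal_matrix` above plausibly stitches per-moment-group covariance blocks into one weighting matrix. A self-contained sketch of that operation (equivalent to `scipy.linalg.block_diag`; the name and exact behavior here are assumptions):

```python
import numpy as np


def assemble_block_diagonal(matrices):
    """Place square matrices on the diagonal of one larger matrix.

    Illustration of what _assemble_block_diagonal_matrix in
    src/estimagic/msm_weighting.py likely does.
    """
    dims = [m.shape[0] for m in matrices]
    out = np.zeros((sum(dims), sum(dims)))
    start = 0
    for m, d in zip(matrices, dims):
        out[start : start + d, start : start + d] = m
        start += d
    return out


combined = assemble_block_diagonal([np.eye(2), 3.0 * np.eye(1)])
```

Off-block entries stay zero, encoding the assumption that moments in different groups are uncorrelated.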

FILE: src/estimagic/shared_covs.py
  function transform_covariance (line 12) | def transform_covariance(
  function calculate_summary_data_estimation (line 90) | def calculate_summary_data_estimation(
  function calculate_estimation_summary (line 123) | def calculate_estimation_summary(
  function process_pandas_arguments (line 218) | def process_pandas_arguments(**kwargs):
  function _to_numpy (line 267) | def _to_numpy(df_or_array, name):
  function _check_names_coincide (line 279) | def _check_names_coincide(name_dict):
  function get_derivative_case (line 290) | def get_derivative_case(derivative):
  function calculate_ci (line 301) | def calculate_ci(free_values, free_standard_errors, ci_level):
  function calculate_p_values (line 309) | def calculate_p_values(free_values, free_standard_errors):
  function calculate_free_estimates (line 315) | def calculate_free_estimates(estimates, internal_estimates):
  function transform_free_cov_to_cov (line 331) | def transform_free_cov_to_cov(free_cov, free_params, params, return_type):
  function transform_free_values_to_params_tree (line 349) | def transform_free_values_to_params_tree(values, free_params, params):
  class FreeParams (line 359) | class FreeParams(NamedTuple):

FILE: src/optimagic/algorithms.py
  class AlgoSelection (line 120) | class AlgoSelection:
    method _all (line 121) | def _all(self) -> list[Type[Algorithm]]:
    method _available (line 125) | def _available(self) -> list[Type[Algorithm]]:
    method All (line 134) | def All(self) -> list[Type[Algorithm]]:
    method Available (line 138) | def Available(self) -> list[Type[Algorithm]]:
    method AllNames (line 142) | def AllNames(self) -> list[str]:
    method AvailableNames (line 146) | def AvailableNames(self) -> list[str]:
    method _all_algorithms_dict (line 150) | def _all_algorithms_dict(self) -> dict[str, Type[Algorithm]]:
    method _available_algorithms_dict (line 154) | def _available_algorithms_dict(self) -> dict[str, Type[Algorithm]]:
  class BoundedGlobalGradientFreeNonlinearConstrainedParallelScalarAlgorithms (line 159) | class BoundedGlobalGradientFreeNonlinearConstrainedParallelScalarAlgorit...
  class BoundedGlobalGradientBasedNonlinearConstrainedScalarAlgorithms (line 168) | class BoundedGlobalGradientBasedNonlinearConstrainedScalarAlgorithms(Alg...
  class BoundedGradientBasedLocalNonlinearConstrainedScalarAlgorithms (line 173) | class BoundedGradientBasedLocalNonlinearConstrainedScalarAlgorithms(Algo...
  class BoundedGlobalGradientFreeNonlinearConstrainedScalarAlgorithms (line 182) | class BoundedGlobalGradientFreeNonlinearConstrainedScalarAlgorithms(Algo...
    method Parallel (line 189) | def Parallel(
  class BoundedGlobalGradientFreeNonlinearConstrainedParallelAlgorithms (line 196) | class BoundedGlobalGradientFreeNonlinearConstrainedParallelAlgorithms(Al...
    method Scalar (line 202) | def Scalar(
  class BoundedGlobalGradientFreeParallelScalarAlgorithms (line 209) | class BoundedGlobalGradientFreeParallelScalarAlgorithms(AlgoSelection):
    method NonlinearConstrained (line 235) | def NonlinearConstrained(
  class GlobalGradientFreeNonlinearConstrainedParallelScalarAlgorithms (line 242) | class GlobalGradientFreeNonlinearConstrainedParallelScalarAlgorithms(Alg...
    method Bounded (line 248) | def Bounded(
  class BoundedGradientFreeLocalNonlinearConstrainedScalarAlgorithms (line 255) | class BoundedGradientFreeLocalNonlinearConstrainedScalarAlgorithms(AlgoS...
  class BoundedGradientFreeLocalParallelScalarAlgorithms (line 260) | class BoundedGradientFreeLocalParallelScalarAlgorithms(AlgoSelection):
  class BoundedGradientFreeLeastSquaresLocalParallelAlgorithms (line 265) | class BoundedGradientFreeLeastSquaresLocalParallelAlgorithms(AlgoSelecti...
  class BoundedGradientFreeNonlinearConstrainedParallelScalarAlgorithms (line 271) | class BoundedGradientFreeNonlinearConstrainedParallelScalarAlgorithms(Al...
    method Global (line 277) | def Global(
  class BoundedGlobalNonlinearConstrainedParallelScalarAlgorithms (line 284) | class BoundedGlobalNonlinearConstrainedParallelScalarAlgorithms(AlgoSele...
    method GradientFree (line 290) | def GradientFree(
  class BoundedGlobalGradientBasedNonlinearConstrainedAlgorithms (line 297) | class BoundedGlobalGradientBasedNonlinearConstrainedAlgorithms(AlgoSelec...
    method Scalar (line 301) | def Scalar(self) -> BoundedGlobalGradientBasedNonlinearConstrainedScal...
  class BoundedGlobalGradientBasedScalarAlgorithms (line 306) | class BoundedGlobalGradientBasedScalarAlgorithms(AlgoSelection):
    method NonlinearConstrained (line 312) | def NonlinearConstrained(
  class GlobalGradientBasedNonlinearConstrainedScalarAlgorithms (line 319) | class GlobalGradientBasedNonlinearConstrainedScalarAlgorithms(AlgoSelect...
    method Bounded (line 323) | def Bounded(self) -> BoundedGlobalGradientBasedNonlinearConstrainedSca...
  class BoundedGradientBasedLocalNonlinearConstrainedAlgorithms (line 328) | class BoundedGradientBasedLocalNonlinearConstrainedAlgorithms(AlgoSelect...
    method Scalar (line 336) | def Scalar(self) -> BoundedGradientBasedLocalNonlinearConstrainedScala...
  class BoundedGradientBasedLocalScalarAlgorithms (line 341) | class BoundedGradientBasedLocalScalarAlgorithms(AlgoSelection):
    method NonlinearConstrained (line 357) | def NonlinearConstrained(
  class BoundedGradientBasedLeastSquaresLocalAlgorithms (line 364) | class BoundedGradientBasedLeastSquaresLocalAlgorithms(AlgoSelection):
  class GradientBasedLocalNonlinearConstrainedScalarAlgorithms (line 370) | class GradientBasedLocalNonlinearConstrainedScalarAlgorithms(AlgoSelecti...
    method Bounded (line 378) | def Bounded(self) -> BoundedGradientBasedLocalNonlinearConstrainedScal...
  class BoundedGradientBasedNonlinearConstrainedScalarAlgorithms (line 383) | class BoundedGradientBasedNonlinearConstrainedScalarAlgorithms(AlgoSelec...
    method Global (line 392) | def Global(self) -> BoundedGlobalGradientBasedNonlinearConstrainedScal...
    method Local (line 396) | def Local(self) -> BoundedGradientBasedLocalNonlinearConstrainedScalar...
  class BoundedGlobalGradientFreeNonlinearConstrainedAlgorithms (line 401) | class BoundedGlobalGradientFreeNonlinearConstrainedAlgorithms(AlgoSelect...
    method Parallel (line 408) | def Parallel(
    method Scalar (line 414) | def Scalar(self) -> BoundedGlobalGradientFreeNonlinearConstrainedScala...
  class BoundedGlobalGradientFreeScalarAlgorithms (line 419) | class BoundedGlobalGradientFreeScalarAlgorithms(AlgoSelection):
    method NonlinearConstrained (line 481) | def NonlinearConstrained(
    method Parallel (line 487) | def Parallel(self) -> BoundedGlobalGradientFreeParallelScalarAlgorithms:
  class BoundedGlobalGradientFreeParallelAlgorithms (line 492) | class BoundedGlobalGradientFreeParallelAlgorithms(AlgoSelection):
    method NonlinearConstrained (line 518) | def NonlinearConstrained(
    method Scalar (line 524) | def Scalar(self) -> BoundedGlobalGradientFreeParallelScalarAlgorithms:
  class GlobalGradientFreeNonlinearConstrainedScalarAlgorithms (line 529) | class GlobalGradientFreeNonlinearConstrainedScalarAlgorithms(AlgoSelecti...
    method Bounded (line 536) | def Bounded(self) -> BoundedGlobalGradientFreeNonlinearConstrainedScal...
    method Parallel (line 540) | def Parallel(
  class GlobalGradientFreeNonlinearConstrainedParallelAlgorithms (line 547) | class GlobalGradientFreeNonlinearConstrainedParallelAlgorithms(AlgoSelec...
    method Bounded (line 553) | def Bounded(
    method Scalar (line 559) | def Scalar(self) -> GlobalGradientFreeNonlinearConstrainedParallelScal...
  class GlobalGradientFreeParallelScalarAlgorithms (line 564) | class GlobalGradientFreeParallelScalarAlgorithms(AlgoSelection):
    method Bounded (line 590) | def Bounded(self) -> BoundedGlobalGradientFreeParallelScalarAlgorithms:
    method NonlinearConstrained (line 594) | def NonlinearConstrained(
  class BoundedGradientFreeLocalNonlinearConstrainedAlgorithms (line 601) | class BoundedGradientFreeLocalNonlinearConstrainedAlgorithms(AlgoSelecti...
    method Scalar (line 605) | def Scalar(self) -> BoundedGradientFreeLocalNonlinearConstrainedScalar...
  class BoundedGradientFreeLocalScalarAlgorithms (line 610) | class BoundedGradientFreeLocalScalarAlgorithms(AlgoSelection):
    method NonlinearConstrained (line 622) | def NonlinearConstrained(
    method Parallel (line 628) | def Parallel(self) -> BoundedGradientFreeLocalParallelScalarAlgorithms:
  class BoundedGradientFreeLeastSquaresLocalAlgorithms (line 633) | class BoundedGradientFreeLeastSquaresLocalAlgorithms(AlgoSelection):
    method Parallel (line 640) | def Parallel(self) -> BoundedGradientFreeLeastSquaresLocalParallelAlgo...
  class BoundedGradientFreeLocalParallelAlgorithms (line 645) | class BoundedGradientFreeLocalParallelAlgorithms(AlgoSelection):
    method LeastSquares (line 651) | def LeastSquares(self) -> BoundedGradientFreeLeastSquaresLocalParallel...
    method Scalar (line 655) | def Scalar(self) -> BoundedGradientFreeLocalParallelScalarAlgorithms:
  class GradientFreeLocalNonlinearConstrainedScalarAlgorithms (line 660) | class GradientFreeLocalNonlinearConstrainedScalarAlgorithms(AlgoSelection):
    method Bounded (line 665) | def Bounded(self) -> BoundedGradientFreeLocalNonlinearConstrainedScala...
  class GradientFreeLocalParallelScalarAlgorithms (line 670) | class GradientFreeLocalParallelScalarAlgorithms(AlgoSelection):
    method Bounded (line 675) | def Bounded(self) -> BoundedGradientFreeLocalParallelScalarAlgorithms:
  class GradientFreeLeastSquaresLocalParallelAlgorithms (line 680) | class GradientFreeLeastSquaresLocalParallelAlgorithms(AlgoSelection):
    method Bounded (line 685) | def Bounded(self) -> BoundedGradientFreeLeastSquaresLocalParallelAlgor...
  class BoundedGradientFreeNonlinearConstrainedScalarAlgorithms (line 690) | class BoundedGradientFreeNonlinearConstrainedScalarAlgorithms(AlgoSelect...
    method Global (line 698) | def Global(self) -> BoundedGlobalGradientFreeNonlinearConstrainedScala...
    method Local (line 702) | def Local(self) -> BoundedGradientFreeLocalNonlinearConstrainedScalarA...
    method Parallel (line 706) | def Parallel(
  class BoundedGradientFreeNonlinearConstrainedParallelAlgorithms (line 713) | class BoundedGradientFreeNonlinearConstrainedParallelAlgorithms(AlgoSele...
    method Global (line 719) | def Global(self) -> BoundedGlobalGradientFreeNonlinearConstrainedParal...
    method Scalar (line 723) | def Scalar(self) -> BoundedGradientFreeNonlinearConstrainedParallelSca...
  class BoundedGradientFreeParallelScalarAlgorithms (line 728) | class BoundedGradientFreeParallelScalarAlgorithms(AlgoSelection):
    method Global (line 755) | def Global(self) -> BoundedGlobalGradientFreeParallelScalarAlgorithms:
    method Local (line 759) | def Local(self) -> BoundedGradientFreeLocalParallelScalarAlgorithms:
    method NonlinearConstrained (line 763) | def NonlinearConstrained(
  class BoundedGradientFreeLeastSquaresParallelAlgorithms (line 770) | class BoundedGradientFreeLeastSquaresParallelAlgorithms(AlgoSelection):
    method Local (line 775) | def Local(self) -> BoundedGradientFreeLeastSquaresLocalParallelAlgorit...
  class GradientFreeNonlinearConstrainedParallelScalarAlgorithms (line 780) | class GradientFreeNonlinearConstrainedParallelScalarAlgorithms(AlgoSelec...
    method Bounded (line 786) | def Bounded(
    method Global (line 792) | def Global(self) -> GlobalGradientFreeNonlinearConstrainedParallelScal...
  class BoundedGlobalNonlinearConstrainedScalarAlgorithms (line 797) | class BoundedGlobalNonlinearConstrainedScalarAlgorithms(AlgoSelection):
    method GradientBased (line 805) | def GradientBased(
    method GradientFree (line 811) | def GradientFree(
    method Parallel (line 817) | def Parallel(self) -> BoundedGlobalNonlinearConstrainedParallelScalarA...
  class BoundedGlobalNonlinearConstrainedParallelAlgorithms (line 822) | class BoundedGlobalNonlinearConstrainedParallelAlgorithms(AlgoSelection):
    method GradientFree (line 828) | def GradientFree(
    method Scalar (line 834) | def Scalar(self) -> BoundedGlobalNonlinearConstrainedParallelScalarAlg...
  class BoundedGlobalParallelScalarAlgorithms (line 839) | class BoundedGlobalParallelScalarAlgorithms(AlgoSelection):
    method GradientFree (line 865) | def GradientFree(self) -> BoundedGlobalGradientFreeParallelScalarAlgor...
    method NonlinearConstrained (line 869) | def NonlinearConstrained(
  class GlobalNonlinearConstrainedParallelScalarAlgorithms (line 876) | class GlobalNonlinearConstrainedParallelScalarAlgorithms(AlgoSelection):
    method Bounded (line 882) | def Bounded(self) -> BoundedGlobalNonlinearConstrainedParallelScalarAl...
    method GradientFree (line 886) | def GradientFree(
  class BoundedLocalNonlinearConstrainedScalarAlgorithms (line 893) | class BoundedLocalNonlinearConstrainedScalarAlgorithms(AlgoSelection):
    method GradientBased (line 902) | def GradientBased(
    method GradientFree (line 908) | def GradientFree(
  class BoundedLocalParallelScalarAlgorithms (line 915) | class BoundedLocalParallelScalarAlgorithms(AlgoSelection):
    method GradientFree (line 919) | def GradientFree(self) -> BoundedGradientFreeLocalParallelScalarAlgori...
  class BoundedLeastSquaresLocalParallelAlgorithms (line 924) | class BoundedLeastSquaresLocalParallelAlgorithms(AlgoSelection):
    method GradientFree (line 929) | def GradientFree(self) -> BoundedGradientFreeLeastSquaresLocalParallel...
  class BoundedNonlinearConstrainedParallelScalarAlgorithms (line 934) | class BoundedNonlinearConstrainedParallelScalarAlgorithms(AlgoSelection):
    method Global (line 940) | def Global(self) -> BoundedGlobalNonlinearConstrainedParallelScalarAlg...
    method GradientFree (line 944) | def GradientFree(
  class BoundedGlobalGradientBasedAlgorithms (line 951) | class BoundedGlobalGradientBasedAlgorithms(AlgoSelection):
    method NonlinearConstrained (line 957) | def NonlinearConstrained(
    method Scalar (line 963) | def Scalar(self) -> BoundedGlobalGradientBasedScalarAlgorithms:
  class GlobalGradientBasedNonlinearConstrainedAlgorithms (line 968) | class GlobalGradientBasedNonlinearConstrainedAlgorithms(AlgoSelection):
    method Bounded (line 972) | def Bounded(self) -> BoundedGlobalGradientBasedNonlinearConstrainedAlg...
    method Scalar (line 976) | def Scalar(self) -> GlobalGradientBasedNonlinearConstrainedScalarAlgor...
  class GlobalGradientBasedScalarAlgorithms (line 981) | class GlobalGradientBasedScalarAlgorithms(AlgoSelection):
    method Bounded (line 987) | def Bounded(self) -> BoundedGlobalGradientBasedScalarAlgorithms:
    method NonlinearConstrained (line 991) | def NonlinearConstrained(
  class BoundedGradientBasedLocalAlgorithms (line 998) | class BoundedGradientBasedLocalAlgorithms(AlgoSelection):
    method LeastSquares (line 1016) | def LeastSquares(self) -> BoundedGradientBasedLeastSquaresLocalAlgorit...
    method NonlinearConstrained (line 1020) | def NonlinearConstrained(
    method Scalar (line 1026) | def Scalar(self) -> BoundedGradientBasedLocalScalarAlgorithms:
  class GradientBasedLocalNonlinearConstrainedAlgorithms (line 1031) | class GradientBasedLocalNonlinearConstrainedAlgorithms(AlgoSelection):
    method Bounded (line 1039) | def Bounded(self) -> BoundedGradientBasedLocalNonlinearConstrainedAlgo...
    method Scalar (line 1043) | def Scalar(self) -> GradientBasedLocalNonlinearConstrainedScalarAlgori...
  class GradientBasedLocalScalarAlgorithms (line 1048) | class GradientBasedLocalScalarAlgorithms(AlgoSelection):
    method Bounded (line 1067) | def Bounded(self) -> BoundedGradientBasedLocalScalarAlgorithms:
    method NonlinearConstrained (line 1071) | def NonlinearConstrained(
  class GradientBasedLeastSquaresLocalAlgorithms (line 1078) | class GradientBasedLeastSquaresLocalAlgorithms(AlgoSelection):
    method Bounded (line 1084) | def Bounded(self) -> BoundedGradientBasedLeastSquaresLocalAlgorithms:
  class GradientBasedLikelihoodLocalAlgorithms (line 1089) | class GradientBasedLikelihoodLocalAlgorithms(AlgoSelection):
  class BoundedGradientBasedNonlinearConstrainedAlgorithms (line 1094) | class BoundedGradientBasedNonlinearConstrainedAlgorithms(AlgoSelection):
    method Global (line 1103) | def Global(self) -> BoundedGlobalGradientBasedNonlinearConstrainedAlgo...
    method Local (line 1107) | def Local(self) -> BoundedGradientBasedLocalNonlinearConstrainedAlgori...
    method Scalar (line 1111) | def Scalar(self) -> BoundedGradientBasedNonlinearConstrainedScalarAlgo...
  class BoundedGradientBasedScalarAlgorithms (line 1116) | class BoundedGradientBasedScalarAlgorithms(AlgoSelection):
    method Global (line 1135) | def Global(self) -> BoundedGlobalGradientBasedScalarAlgorithms:
    method Local (line 1139) | def Local(self) -> BoundedGradientBasedLocalScalarAlgorithms:
    method NonlinearConstrained (line 1143) | def NonlinearConstrained(
  class BoundedGradientBasedLeastSquaresAlgorithms (line 1150) | class BoundedGradientBasedLeastSquaresAlgorithms(AlgoSelection):
    method Local (line 1155) | def Local(self) -> BoundedGradientBasedLeastSquaresLocalAlgorithms:
  class GradientBasedNonlinearConstrainedScalarAlgorithms (line 1160) | class GradientBasedNonlinearConstrainedScalarAlgorithms(AlgoSelection):
    method Bounded (line 1169) | def Bounded(self) -> BoundedGradientBasedNonlinearConstrainedScalarAlg...
    method Global (line 1173) | def Global(self) -> GlobalGradientBasedNonlinearConstrainedScalarAlgor...
    method Local (line 1177) | def Local(self) -> GradientBasedLocalNonlinearConstrainedScalarAlgorit...
  class BoundedGlobalGradientFreeAlgorithms (line 1182) | class BoundedGlobalGradientFreeAlgorithms(AlgoSelection):
    method NonlinearConstrained (line 1244) | def NonlinearConstrained(
    method Parallel (line 1250) | def Parallel(self) -> BoundedGlobalGradientFreeParallelAlgorithms:
    method Scalar (line 1254) | def Scalar(self) -> BoundedGlobalGradientFreeScalarAlgorithms:
  class GlobalGradientFreeNonlinearConstrainedAlgorithms (line 1259) | class GlobalGradientFreeNonlinearConstrainedAlgorithms(AlgoSelection):
    method Bounded (line 1266) | def Bounded(self) -> BoundedGlobalGradientFreeNonlinearConstrainedAlgo...
    method Parallel (line 1270) | def Parallel(self) -> GlobalGradientFreeNonlinearConstrainedParallelAl...
    method Scalar (line 1274) | def Scalar(self) -> GlobalGradientFreeNonlinearConstrainedScalarAlgori...
  class GlobalGradientFreeScalarAlgorithms (line 1279) | class GlobalGradientFreeScalarAlgorithms(AlgoSelection):
    method Bounded (line 1341) | def Bounded(self) -> BoundedGlobalGradientFreeScalarAlgorithms:
    method NonlinearConstrained (line 1345) | def NonlinearConstrained(
    method Parallel (line 1351) | def Parallel(self) -> GlobalGradientFreeParallelScalarAlgorithms:
  class GlobalGradientFreeParallelAlgorithms (line 1356) | class GlobalGradientFreeParallelAlgorithms(AlgoSelection):
    method Bounded (line 1382) | def Bounded(self) -> BoundedGlobalGradientFreeParallelAlgorithms:
    method NonlinearConstrained (line 1386) | def NonlinearConstrained(
    method Scalar (line 1392) | def Scalar(self) -> GlobalGradientFreeParallelScalarAlgorithms:
  class BoundedGradientFreeLocalAlgorithms (line 1397) | class BoundedGradientFreeLocalAlgorithms(AlgoSelection):
    method LeastSquares (line 1413) | def LeastSquares(self) -> BoundedGradientFreeLeastSquaresLocalAlgorithms:
    method NonlinearConstrained (line 1417) | def NonlinearConstrained(
    method Parallel (line 1423) | def Parallel(self) -> BoundedGradientFreeLocalParallelAlgorithms:
    method Scalar (line 1427) | def Scalar(self) -> BoundedGradientFreeLocalScalarAlgorithms:
  class GradientFreeLocalNonlinearConstrainedAlgorithms (line 1432) | class GradientFreeLocalNonlinearConstrainedAlgorithms(AlgoSelection):
    method Bounded (line 1437) | def Bounded(self) -> BoundedGradientFreeLocalNonlinearConstrainedAlgor...
    method Scalar (line 1441) | def Scalar(self) -> GradientFreeLocalNonlinearConstrainedScalarAlgorit...
  class GradientFreeLocalScalarAlgorithms (line 1446) | class GradientFreeLocalScalarAlgorithms(AlgoSelection):
    method Bounded (line 1461) | def Bounded(self) -> BoundedGradientFreeLocalScalarAlgorithms:
    method NonlinearConstrained (line 1465) | def NonlinearConstrained(
    method Parallel (line 1471) | def Parallel(self) -> GradientFreeLocalParallelScalarAlgorithms:
  class GradientFreeLeastSquaresLocalAlgorithms (line 1476) | class GradientFreeLeastSquaresLocalAlgorithms(AlgoSelection):
    method Bounded (line 1483) | def Bounded(self) -> BoundedGradientFreeLeastSquaresLocalAlgorithms:
    method Parallel (line 1487) | def Parallel(self) -> GradientFreeLeastSquaresLocalParallelAlgorithms:
  class GradientFreeLocalParallelAlgorithms (line 1492) | class GradientFreeLocalParallelAlgorithms(AlgoSelection):
    method Bounded (line 1499) | def Bounded(self) -> BoundedGradientFreeLocalParallelAlgorithms:
    method LeastSquares (line 1503) | def LeastSquares(self) -> GradientFreeLeastSquaresLocalParallelAlgorit...
    method Scalar (line 1507) | def Scalar(self) -> GradientFreeLocalParallelScalarAlgorithms:
  class BoundedGradientFreeNonlinearConstrainedAlgorithms (line 1512) | class BoundedGradientFreeNonlinearConstrainedAlgorithms(AlgoSelection):
    method Global (line 1520) | def Global(self) -> BoundedGlobalGradientFreeNonlinearConstrainedAlgor...
    method Local (line 1524) | def Local(self) -> BoundedGradientFreeLocalNonlinearConstrainedAlgorit...
    method Parallel (line 1528) | def Parallel(self) -> BoundedGradientFreeNonlinearConstrainedParallelA...
    method Scalar (line 1532) | def Scalar(self) -> BoundedGradientFreeNonlinearConstrainedScalarAlgor...
  class BoundedGradientFreeScalarAlgorithms (line 1537) | class BoundedGradientFreeScalarAlgorithms(AlgoSelection):
    method Global (line 1608) | def Global(self) -> BoundedGlobalGradientFreeScalarAlgorithms:
    method Local (line 1612) | def Local(self) -> BoundedGradientFreeLocalScalarAlgorithms:
    method NonlinearConstrained (line 1616) | def NonlinearConstrained(
    method Parallel (line 1622) | def Parallel(self) -> BoundedGradientFreeParallelScalarAlgorithms:
  class BoundedGradientFreeLeastSquaresAlgorithms (line 1627) | class BoundedGradientFreeLeastSquaresAlgorithms(AlgoSelection):
    method Local (line 1634) | def Local(self) -> BoundedGradientFreeLeastSquaresLocalAlgorithms:
    method Parallel (line 1638) | def Parallel(self) -> BoundedGradientFreeLeastSquaresParallelAlgorithms:
  class BoundedGradientFreeParallelAlgorithms (line 1643) | class BoundedGradientFreeParallelAlgorithms(AlgoSelection):
    method Global (line 1672) | def Global(self) -> BoundedGlobalGradientFreeParallelAlgorithms:
    method LeastSquares (line 1676) | def LeastSquares(self) -> BoundedGradientFreeLeastSquaresParallelAlgor...
    method Local (line 1680) | def Local(self) -> BoundedGradientFreeLocalParallelAlgorithms:
    method NonlinearConstrained (line 1684) | def NonlinearConstrained(
    method Scalar (line 1690) | def Scalar(self) -> BoundedGradientFreeParallelScalarAlgorithms:
  class GradientFreeNonlinearConstrainedScalarAlgorithms (line 1695) | class GradientFreeNonlinearConstrainedScalarAlgorithms(AlgoSelection):
    method Bounded (line 1704) | def Bounded(self) -> BoundedGradientFreeNonlinearConstrainedScalarAlgo...
    method Global (line 1708) | def Global(self) -> GlobalGradientFreeNonlinearConstrainedScalarAlgori...
    method Local (line 1712) | def Local(self) -> GradientFreeLocalNonlinearConstrainedScalarAlgorithms:
    method Parallel (line 1716) | def Parallel(self) -> GradientFreeNonlinearConstrainedParallelScalarAl...
  class GradientFreeNonlinearConstrainedParallelAlgorithms (line 1721) | class GradientFreeNonlinearConstrainedParallelAlgorithms(AlgoSelection):
    method Bounded (line 1727) | def Bounded(self) -> BoundedGradientFreeNonlinearConstrainedParallelAl...
    method Global (line 1731) | def Global(self) -> GlobalGradientFreeNonlinearConstrainedParallelAlgo...
    method Scalar (line 1735) | def Scalar(self) -> GradientFreeNonlinearConstrainedParallelScalarAlgo...
  class GradientFreeParallelScalarAlgorithms (line 1740) | class GradientFreeParallelScalarAlgorithms(AlgoSelection):
    method Bounded (line 1768) | def Bounded(self) -> BoundedGradientFreeParallelScalarAlgorithms:
    method Global (line 1772) | def Global(self) -> GlobalGradientFreeParallelScalarAlgorithms:
    method Local (line 1776) | def Local(self) -> GradientFreeLocalParallelScalarAlgorithms:
    method NonlinearConstrained (line 1780) | def NonlinearConstrained(
  class GradientFreeLeastSquaresParallelAlgorithms (line 1787) | class GradientFreeLeastSquaresParallelAlgorithms(AlgoSelection):
    method Bounded (line 1792) | def Bounded(self) -> BoundedGradientFreeLeastSquaresParallelAlgorithms:
    method Local (line 1796) | def Local(self) -> GradientFreeLeastSquaresLocalParallelAlgorithms:
  class BoundedGlobalNonlinearConstrainedAlgorithms (line 1801) | class BoundedGlobalNonlinearConstrainedAlgorithms(AlgoSelection):
    method GradientBased (line 1809) | def GradientBased(self) -> BoundedGlobalGradientBasedNonlinearConstrai...
    method GradientFree (line 1813) | def GradientFree(self) -> BoundedGlobalGradientFreeNonlinearConstraine...
    method Parallel (line 1817) | def Parallel(self) -> BoundedGlobalNonlinearConstrainedParallelAlgorit...
    method Scalar (line 1821) | def Scalar(self) -> BoundedGlobalNonlinearConstrainedScalarAlgorithms:
  class BoundedGlobalScalarAlgorithms (line 1826) | class BoundedGlobalScalarAlgorithms(AlgoSelection):
    method GradientBased (line 1891) | def GradientBased(self) -> BoundedGlobalGradientBasedScalarAlgorithms:
    method GradientFree (line 1895) | def GradientFree(self) -> BoundedGlobalGradientFreeScalarAlgorithms:
    method NonlinearConstrained (line 1899) | def NonlinearConstrained(self) -> BoundedGlobalNonlinearConstrainedSca...
    method Parallel (line 1903) | def Parallel(self) -> BoundedGlobalParallelScalarAlgorithms:
  class BoundedGlobalParallelAlgorithms (line 1908) | class BoundedGlobalParallelAlgorithms(AlgoSelection):
    method GradientFree (line 1934) | def GradientFree(self) -> BoundedGlobalGradientFreeParallelAlgorithms:
    method NonlinearConstrained (line 1938) | def NonlinearConstrained(
    method Scalar (line 1944) | def Scalar(self) -> BoundedGlobalParallelScalarAlgorithms:
  class GlobalNonlinearConstrainedScalarAlgorithms (line 1949) | class GlobalNonlinearConstrainedScalarAlgorithms(AlgoSelection):
    method Bounded (line 1957) | def Bounded(self) -> BoundedGlobalNonlinearConstrainedScalarAlgorithms:
    method GradientBased (line 1961) | def GradientBased(self) -> GlobalGradientBasedNonlinearConstrainedScal...
    method GradientFree (line 1965) | def GradientFree(self) -> GlobalGradientFreeNonlinearConstrainedScalar...
    method Parallel (line 1969) | def Parallel(self) -> GlobalNonlinearConstrainedParallelScalarAlgorithms:
  class GlobalNonlinearConstrainedParallelAlgorithms (line 1974) | class GlobalNonlinearConstrainedParallelAlgorithms(AlgoSelection):
    method Bounded (line 1980) | def Bounded(self) -> BoundedGlobalNonlinearConstrainedParallelAlgorithms:
    method GradientFree (line 1984) | def GradientFree(self) -> GlobalGradientFreeNonlinearConstrainedParall...
    method Scalar (line 1988) | def Scalar(self) -> GlobalNonlinearConstrainedParallelScalarAlgorithms:
  class GlobalParallelScalarAlgorithms (line 1993) | class GlobalParallelScalarAlgorithms(AlgoSelection):
    method Bounded (line 2019) | def Bounded(self) -> BoundedGlobalParallelScalarAlgorithms:
    method GradientFree (line 2023) | def GradientFree(self) -> GlobalGradientFreeParallelScalarAlgorithms:
    method NonlinearConstrained (line 2027) | def NonlinearConstrained(
  class BoundedLocalNonlinearConstrainedAlgorithms (line 2034) | class BoundedLocalNonlinearConstrainedAlgorithms(AlgoSelection):
    method GradientBased (line 2043) | def GradientBased(self) -> BoundedGradientBasedLocalNonlinearConstrain...
    method GradientFree (line 2047) | def GradientFree(self) -> BoundedGradientFreeLocalNonlinearConstrained...
    method Scalar (line 2051) | def Scalar(self) -> BoundedLocalNonlinearConstrainedScalarAlgorithms:
  class BoundedLocalScalarAlgorithms (line 2056) | class BoundedLocalScalarAlgorithms(AlgoSelection):
    method GradientBased (line 2081) | def GradientBased(self) -> BoundedGradientBasedLocalScalarAlgorithms:
    method GradientFree (line 2085) | def GradientFree(self) -> BoundedGradientFreeLocalScalarAlgorithms:
    method NonlinearConstrained (line 2089) | def NonlinearConstrained(self) -> BoundedLocalNonlinearConstrainedScal...
    method Parallel (line 2093) | def Parallel(self) -> BoundedLocalParallelScalarAlgorithms:
  class BoundedLeastSquaresLocalAlgorithms (line 2098) | class BoundedLeastSquaresLocalAlgorithms(AlgoSelection):
    method GradientBased (line 2107) | def GradientBased(self) -> BoundedGradientBasedLeastSquaresLocalAlgori...
    method GradientFree (line 2111) | def GradientFree(self) -> BoundedGradientFreeLeastSquaresLocalAlgorithms:
    method Parallel (line 2115) | def Parallel(self) -> BoundedLeastSquaresLocalParallelAlgorithms:
  class BoundedLocalParallelAlgorithms (line 2120) | class BoundedLocalParallelAlgorithms(AlgoSelection):
    method GradientFree (line 2126) | def GradientFree(self) -> BoundedGradientFreeLocalParallelAlgorithms:
    method LeastSquares (line 2130) | def LeastSquares(self) -> BoundedLeastSquaresLocalParallelAlgorithms:
    method Scalar (line 2134) | def Scalar(self) -> BoundedLocalParallelScalarAlgorithms:
  class LocalNonlinearConstrainedScalarAlgorithms (line 2139) | class LocalNonlinearConstrainedScalarAlgorithms(AlgoSelection):
    method Bounded (line 2149) | def Bounded(self) -> BoundedLocalNonlinearConstrainedScalarAlgorithms:
    method GradientBased (line 2153) | def GradientBased(self) -> GradientBasedLocalNonlinearConstrainedScala...
    method GradientFree (line 2157) | def GradientFree(self) -> GradientFreeLocalNonlinearConstrainedScalarA...
  class LocalParallelScalarAlgorithms (line 2162) | class LocalParallelScalarAlgorithms(AlgoSelection):
    method Bounded (line 2167) | def Bounded(self) -> BoundedLocalParallelScalarAlgorithms:
    method GradientFree (line 2171) | def GradientFree(self) -> GradientFreeLocalParallelScalarAlgorithms:
  class LeastSquaresLocalParallelAlgorithms (line 2176) | class LeastSquaresLocalParallelAlgorithms(AlgoSelection):
    method Bounded (line 2181) | def Bounded(self) -> BoundedLeastSquaresLocalParallelAlgorithms:
    method GradientFree (line 2185) | def GradientFree(self) -> GradientFreeLeastSquaresLocalParallelAlgorit...
  class BoundedNonlinearConstrainedScalarAlgorithms (line 2190) | class BoundedNonlinearConstrainedScalarAlgorithms(AlgoSelection):
    method Global (line 2204) | def Global(self) -> BoundedGlobalNonlinearConstrainedScalarAlgorithms:
    method GradientBased (line 2208) | def GradientBased(self) -> BoundedGradientBasedNonlinearConstrainedSca...
    method GradientFree (line 2212) | def GradientFree(self) -> BoundedGradientFreeNonlinearConstrainedScala...
    method Local (line 2216) | def Local(self) -> BoundedLocalNonlinearConstrainedScalarAlgorithms:
    method Parallel (line 2220) | def Parallel(self) -> BoundedNonlinearConstrainedParallelScalarAlgorit...
  class BoundedNonlinearConstrainedParallelAlgorithms (line 2225) | class BoundedNonlinearConstrainedParallelAlgorithms(AlgoSelection):
    method Global (line 2231) | def Global(self) -> BoundedGlobalNonlinearConstrainedParallelAlgorithms:
    method GradientFree (line 2235) | def GradientFree(self) -> BoundedGradientFreeNonlinearConstrainedParal...
    method Scalar (line 2239) | def Scalar(self) -> BoundedNonlinearConstrainedParallelScalarAlgorithms:
  class BoundedParallelScalarAlgorithms (line 2244) | class BoundedParallelScalarAlgorithms(AlgoSelection):
    method Global (line 2271) | def Global(self) -> BoundedGlobalParallelScalarAlgorithms:
    method GradientFree (line 2275) | def GradientFree(self) -> BoundedGradientFreeParallelScalarAlgorithms:
    method Local (line 2279) | def Local(self) -> BoundedLocalParallelScalarAlgorithms:
    method NonlinearConstrained (line 2283) | def NonlinearConstrained(
  class BoundedLeastSquaresParallelAlgorithms (line 2290) | class BoundedLeastSquaresParallelAlgorithms(AlgoSelection):
    method GradientFree (line 2295) | def GradientFree(self) -> BoundedGradientFreeLeastSquaresParallelAlgor...
    method Local (line 2299) | def Local(self) -> BoundedLeastSquaresLocalParallelAlgorithms:
  class NonlinearConstrainedParallelScalarAlgorithms (line 2304) | class NonlinearConstrainedParallelScalarAlgorithms(AlgoSelection):
    method Bounded (line 2310) | def Bounded(self) -> BoundedNonlinearConstrainedParallelScalarAlgorithms:
    method Global (line 2314) | def Global(self) -> GlobalNonlinearConstrainedParallelScalarAlgorithms:
    method GradientFree (line 2318) | def GradientFree(self) -> GradientFreeNonlinearConstrainedParallelScal...
  class GlobalGradientBasedAlgorithms (line 2323) | class GlobalGradientBasedAlgorithms(AlgoSelection):
    method Bounded (line 2329) | def Bounded(self) -> BoundedGlobalGradientBasedAlgorithms:
    method NonlinearConstrained (line 2333) | def NonlinearConstrained(self) -> GlobalGradientBasedNonlinearConstrai...
    method Scalar (line 2337) | def Scalar(self) -> GlobalGradientBasedScalarAlgorithms:
  class GradientBasedLocalAlgorithms (line 2342) | class GradientBasedLocalAlgorithms(AlgoSelection):
    method Bounded (line 2365) | def Bounded(self) -> BoundedGradientBasedLocalAlgorithms:
    method LeastSquares (line 2369) | def LeastSquares(self) -> GradientBasedLeastSquaresLocalAlgorithms:
    method Likelihood (line 2373) | def Likelihood(self) -> GradientBasedLikelihoodLocalAlgorithms:
    method NonlinearConstrained (line 2377) | def NonlinearConstrained(self) -> GradientBasedLocalNonlinearConstrain...
    method Scalar (line 2381) | def Scalar(self) -> GradientBasedLocalScalarAlgorithms:
  class BoundedGradientBasedAlgorithms (line 2386) | class BoundedGradientBasedAlgorithms(AlgoSelection):
    method Global (line 2407) | def Global(self) -> BoundedGlobalGradientBasedAlgorithms:
    method LeastSquares (line 2411) | def LeastSquares(self) -> BoundedGradientBasedLeastSquaresAlgorithms:
    method Local (line 2415) | def Local(self) -> BoundedGradientBasedLocalAlgorithms:
    method NonlinearConstrained (line 2419) | def NonlinearConstrained(
    method Scalar (line 2425) | def Scalar(self) -> BoundedGradientBasedScalarAlgorithms:
  class GradientBasedNonlinearConstrainedAlgorithms (line 2430) | class GradientBasedNonlinearConstrainedAlgorithms(AlgoSelection):
    method Bounded (line 2439) | def Bounded(self) -> BoundedGradientBasedNonlinearConstrainedAlgorithms:
    method Global (line 2443) | def Global(self) -> GlobalGradientBasedNonlinearConstrainedAlgorithms:
    method Local (line 2447) | def Local(self) -> GradientBasedLocalNonlinearConstrainedAlgorithms:
    method Scalar (line 2451) | def Scalar(self) -> GradientBasedNonlinearConstrainedScalarAlgorithms:
  class GradientBasedScalarAlgorithms (line 2456) | class GradientBasedScalarAlgorithms(AlgoSelection):
    method Bounded (line 2478) | def Bounded(self) -> BoundedGradientBasedScalarAlgorithms:
    method Global (line 2482) | def Global(self) -> GlobalGradientBasedScalarAlgorithms:
    method Local (line 2486) | def Local(self) -> GradientBasedLocalScalarAlgorithms:
    method NonlinearConstrained (line 2490) | def NonlinearConstrained(self) -> GradientBasedNonlinearConstrainedSca...
  class GradientBasedLeastSquaresAlgorithms (line 2495) | class GradientBasedLeastSquaresAlgorithms(AlgoSelection):
    method Bounded (line 2501) | def Bounded(self) -> BoundedGradientBasedLeastSquaresAlgorithms:
    method Local (line 2505) | def Local(self) -> GradientBasedLeastSquaresLocalAlgorithms:
  class GradientBasedLikelihoodAlgorithms (line 2510) | class GradientBasedLikelihoodAlgorithms(AlgoSelection):
    method Local (line 2514) | def Local(self) -> GradientBasedLikelihoodLocalAlgorithms:
  class GlobalGradientFreeAlgorithms (line 2519) | class GlobalGradientFreeAlgorithms(AlgoSelection):
    method Bounded (line 2581) | def Bounded(self) -> BoundedGlobalGradientFreeAlgorithms:
    method NonlinearConstrained (line 2585) | def NonlinearConstrained(self) -> GlobalGradientFreeNonlinearConstrain...
    method Parallel (line 2589) | def Parallel(self) -> GlobalGradientFreeParallelAlgorithms:
    method Scalar (line 2593) | def Scalar(self) -> GlobalGradientFreeScalarAlgorithms:
  class GradientFreeLocalAlgorithms (line 2598) | class GradientFreeLocalAlgorithms(AlgoSelection):
    method Bounded (line 2617) | def Bounded(self) -> BoundedGradientFreeLocalAlgorithms:
    method LeastSquares (line 2621) | def LeastSquares(self) -> GradientFreeLeastSquaresLocalAlgorithms:
    method NonlinearConstrained (line 2625) | def NonlinearConstrained(self) -> GradientFreeLocalNonlinearConstraine...
    method Parallel (line 2629) | def Parallel(self) -> GradientFreeLocalParallelAlgorithms:
    method Scalar (line 2633) | def Scalar(self) -> GradientFreeLocalScalarAlgorithms:
  class BoundedGradientFreeAlgorithms (line 2638) | class BoundedGradientFreeAlgorithms(AlgoSelection):
    method Global (line 2713) | def Global(self) -> BoundedGlobalGradientFreeAlgorithms:
    method LeastSquares (line 2717) | def LeastSquares(self) -> BoundedGradientFreeLeastSquaresAlgorithms:
    method Local (line 2721) | def Local(self) -> BoundedGradientFreeLocalAlgorithms:
    method NonlinearConstrained (line 2725) | def NonlinearConstrained(self) -> BoundedGradientFreeNonlinearConstrai...
    method Parallel (line 2729) | def Parallel(self) -> BoundedGradientFreeParallelAlgorithms:
    method Scalar (line 2733) | def Scalar(self) -> BoundedGradientFreeScalarAlgorithms:
  class GradientFreeNonlinearConstrainedAlgorithms (line 2738) | class GradientFreeNonlinearConstrainedAlgorithms(AlgoSelection):
    method Bounded (line 2747) | def Bounded(self) -> BoundedGradientFreeNonlinearConstrainedAlgorithms:
    method Global (line 2751) | def Global(self) -> GlobalGradientFreeNonlinearConstrainedAlgorithms:
    method Local (line 2755) | def Local(self) -> GradientFreeLocalNonlinearConstrainedAlgorithms:
    method Parallel (line 2759) | def Parallel(self) -> GradientFreeNonlinearConstrainedParallelAlgorithms:
    method Scalar (line 2763) | def Scalar(self) -> GradientFreeNonlinearConstrainedScalarAlgorithms:
  class GradientFreeScalarAlgorithms (line 2768) | class GradientFreeScalarAlgorithms(AlgoSelection):
    method Bounded (line 2842) | def Bounded(self) -> BoundedGradientFreeScalarAlgorithms:
    method Global (line 2846) | def Global(self) -> GlobalGradientFreeScalarAlgorithms:
    method Local (line 2850) | def Local(self) -> GradientFreeLocalScalarAlgorithms:
    method NonlinearConstrained (line 2854) | def NonlinearConstrained(self) -> GradientFreeNonlinearConstrainedScal...
    method Parallel (line 2858) | def Parallel(self) -> GradientFreeParallelScalarAlgorithms:
  class GradientFreeLeastSquaresAlgorithms (line 2863) | class GradientFreeLeastSquaresAlgorithms(AlgoSelection):
    method Bounded (line 2870) | def Bounded(self) -> BoundedGradientFreeLeastSquaresAlgorithms:
    method Local (line 2874) | def Local(self) -> GradientFreeLeastSquaresLocalAlgorithms:
    method Parallel (line 2878) | def Parallel(self) -> GradientFreeLeastSquaresParallelAlgorithms:
  class GradientFreeParallelAlgorithms (line 2883) | class GradientFreeParallelAlgorithms(AlgoSelection):
    method Bounded (line 2913) | def Bounded(self) -> BoundedGradientFreeParallelAlgorithms:
    method Global (line 2917) | def Global(self) -> GlobalGradientFreeParallelAlgorithms:
    method LeastSquares (line 2921) | def LeastSquares(self) -> GradientFreeLeastSquaresParallelAlgorithms:
    method Local (line 2925) | def Local(self) -> GradientFreeLocalParallelAlgorithms:
    method NonlinearConstrained (line 2929) | def NonlinearConstrained(
    method Scalar (line 2935) | def Scalar(self) -> GradientFreeParallelScalarAlgorithms:
  class BoundedGlobalAlgorithms (line 2940) | class BoundedGlobalAlgorithms(AlgoSelection):
    method GradientBased (line 3005) | def GradientBased(self) -> BoundedGlobalGradientBasedAlgorithms:
    method GradientFree (line 3009) | def GradientFree(self) -> BoundedGlobalGradientFreeAlgorithms:
    method NonlinearConstrained (line 3013) | def NonlinearConstrained(self) -> BoundedGlobalNonlinearConstrainedAlg...
    method Parallel (line 3017) | def Parallel(self) -> BoundedGlobalParallelAlgorithms:
    method Scalar (line 3021) | def Scalar(self) -> BoundedGlobalScalarAlgorithms:
  class GlobalNonlinearConstrainedAlgorithms (line 3026) | class GlobalNonlinearConstrainedAlgorithms(AlgoSelection):
    method Bounded (line 3034) | def Bounded(self) -> BoundedGlobalNonlinearConstrainedAlgorithms:
    method GradientBased (line 3038) | def GradientBased(self) -> GlobalGradientBasedNonlinearConstrainedAlgo...
    method GradientFree (line 3042) | def GradientFree(self) -> GlobalGradientFreeNonlinearConstrainedAlgori...
    method Parallel (line 3046) | def Parallel(self) -> GlobalNonlinearConstrainedParallelAlgorithms:
    method Scalar (line 3050) | def Scalar(self) -> GlobalNonlinearConstrainedScalarAlgorithms:
  class GlobalScalarAlgorithms (line 3055) | class GlobalScalarAlgorithms(AlgoSelection):
    method Bounded (line 3120) | def Bounded(self) -> BoundedGlobalScalarAlgorithms:
    method GradientBased (line 3124) | def GradientBased(self) -> GlobalGradientBasedScalarAlgorithms:
    method GradientFree (line 3128) | def GradientFree(self) -> GlobalGradientFreeScalarAlgorithms:
    method NonlinearConstrained (line 3132) | def NonlinearConstrained(self) -> GlobalNonlinearConstrainedScalarAlgo...
    method Parallel (line 3136) | def Parallel(self) -> GlobalParallelScalarAlgorithms:
  class GlobalParallelAlgorithms (line 3141) | class GlobalParallelAlgorithms(AlgoSelection):
    method Bounded (line 3167) | def Bounded(self) -> BoundedGlobalParallelAlgorithms:
    method GradientFree (line 3171) | def GradientFree(self) -> GlobalGradientFreeParallelAlgorithms:
    method NonlinearConstrained (line 3175) | def NonlinearConstrained(self) -> GlobalNonlinearConstrainedParallelAl...
    method Scalar (line 3179) | def Scalar(self) -> GlobalParallelScalarAlgorithms:
  class BoundedLocalAlgorithms (line 3184) | class BoundedLocalAlgorithms(AlgoSelection):
    method GradientBased (line 3215) | def GradientBased(self) -> BoundedGradientBasedLocalAlgorithms:
    method GradientFree (line 3219) | def GradientFree(self) -> BoundedGradientFreeLocalAlgorithms:
    method LeastSquares (line 3223) | def LeastSquares(self) -> BoundedLeastSquaresLocalAlgorithms:
    method NonlinearConstrained (line 3227) | def NonlinearConstrained(self) -> BoundedLocalNonlinearConstrainedAlgo...
    method Parallel (line 3231) | def Parallel(self) -> BoundedLocalParallelAlgorithms:
    method Scalar (line 3235) | def Scalar(self) -> BoundedLocalScalarAlgorithms:
  class LocalNonlinearConstrainedAlgorithms (line 3240) | class LocalNonlinearConstrainedAlgorithms(AlgoSelection):
    method Bounded (line 3250) | def Bounded(self) -> BoundedLocalNonlinearConstrainedAlgorithms:
    method GradientBased (line 3254) | def GradientBased(self) -> GradientBasedLocalNonlinearConstrainedAlgor...
    method GradientFree (line 3258) | def GradientFree(self) -> GradientFreeLocalNonlinearConstrainedAlgorit...
    method Scalar (line 3262) | def Scalar(self) -> LocalNonlinearConstrainedScalarAlgorithms:
  class LocalScalarAlgorithms (line 3267) | class LocalScalarAlgorithms(AlgoSelection):
    method Bounded (line 3298) | def Bounded(self) -> BoundedLocalScalarAlgorithms:
    method GradientBased (line 3302) | def GradientBased(self) -> GradientBasedLocalScalarAlgorithms:
    method GradientFree (line 3306) | def GradientFree(self) -> GradientFreeLocalScalarAlgorithms:
    method NonlinearConstrained (line 3310) | def NonlinearConstrained(self) -> LocalNonlinearConstrainedScalarAlgor...
    method Parallel (line 3314) | def Parallel(self) -> LocalParallelScalarAlgorithms:
  class LeastSquaresLocalAlgorithms (line 3319) | class LeastSquaresLocalAlgorithms(AlgoSelection):
    method Bounded (line 3329) | def Bounded(self) -> BoundedLeastSquaresLocalAlgorithms:
    method GradientBased (line 3333) | def GradientBased(self) -> GradientBasedLeastSquaresLocalAlgorithms:
    method GradientFree (line 3337) | def GradientFree(self) -> GradientFreeLeastSquaresLocalAlgorithms:
    method Parallel (line 3341) | def Parallel(self) -> LeastSquaresLocalParallelAlgorithms:
  class LikelihoodLocalAlgorithms (line 3346) | class LikelihoodLocalAlgorithms(AlgoSelection):
    method GradientBased (line 3350) | def GradientBased(self) -> GradientBasedLikelihoodLocalAlgorithms:
  class LocalParallelAlgorithms (line 3355) | class LocalParallelAlgorithms(AlgoSelection):
    method Bounded (line 3362) | def Bounded(self) -> BoundedLocalParallelAlgorithms:
    method GradientFree (line 3366) | def GradientFree(self) -> GradientFreeLocalParallelAlgorithms:
    method LeastSquares (line 3370) | def LeastSquares(self) -> LeastSquaresLocalParallelAlgorithms:
    method Scalar (line 3374) | def Scalar(self) -> LocalParallelScalarAlgorithms:
  class BoundedNonlinearConstrainedAlgorithms (line 3379) | class BoundedNonlinearConstrainedAlgorithms(AlgoSelection):
    method Global (line 3393) | def Global(self) -> BoundedGlobalNonlinearConstrainedAlgorithms:
    method GradientBased (line 3397) | def GradientBased(self) -> BoundedGradientBasedNonlinearConstrainedAlg...
    method GradientFree (line 3401) | def GradientFree(self) -> BoundedGradientFreeNonlinearConstrainedAlgor...
    method Local (line 3405) | def Local(self) -> BoundedLocalNonlinearConstrainedAlgorithms:
    method Parallel (line 3409) | def Parallel(self) -> BoundedNonlinearConstrainedParallelAlgorithms:
    method Scalar (line 3413) | def Scalar(self) -> BoundedNonlinearConstrainedScalarAlgorithms:
  class BoundedScalarAlgorithms (line 3418) | class BoundedScalarAlgorithms(AlgoSelection):
    method Global (line 3505) | def Global(self) -> BoundedGlobalScalarAlgorithms:
    method GradientBased (line 3509) | def GradientBased(self) -> BoundedGradientBasedScalarAlgorithms:
    method GradientFree (line 3513) | def GradientFree(self) -> BoundedGradientFreeScalarAlgorithms:
    method Local (line 3517) | def Local(self) -> BoundedLocalScalarAlgorithms:
    method NonlinearConstrained (line 3521) | def NonlinearConstrained(self) -> BoundedNonlinearConstrainedScalarAlg...
    method Parallel (line 3525) | def Parallel(self) -> BoundedParallelScalarAlgorithms:
  class BoundedLeastSquaresAlgorithms (line 3530) | class BoundedLeastSquaresAlgorithms(AlgoSelection):
    method GradientBased (line 3539) | def GradientBased(self) -> BoundedGradientBasedLeastSquaresAlgorithms:
    method GradientFree (line 3543) | def GradientFree(self) -> BoundedGradientFreeLeastSquaresAlgorithms:
    method Local (line 3547) | def Local(self) -> BoundedLeastSquaresLocalAlgorithms:
    method Parallel (line 3551) | def Parallel(self) -> BoundedLeastSquaresParallelAlgorithms:
  class BoundedParallelAlgorithms (line 3556) | class BoundedParallelAlgorithms(AlgoSelection):
    method Global (line 3585) | def Global(self) -> BoundedGlobalParallelAlgorithms:
    method GradientFree (line 3589) | def GradientFree(self) -> BoundedGradientFreeParallelAlgorithms:
    method LeastSquares (line 3593) | def LeastSquares(self) -> BoundedLeastSquaresParallelAlgorithms:
    method Local (line 3597) | def Local(self) -> BoundedLocalParallelAlgorithms:
    method NonlinearConstrained (line 3601) | def NonlinearConstrained(self) -> BoundedNonlinearConstrainedParallelA...
    method Scalar (line 3605) | def Scalar(self) -> BoundedParallelScalarAlgorithms:
  class NonlinearConstrainedScalarAlgorithms (line 3610) | class NonlinearConstrainedScalarAlgorithms(AlgoSelection):
    method Bounded (line 3625) | def Bounded(self) -> BoundedNonlinearConstrainedScalarAlgorithms:
    method Global (line 3629) | def Global(self) -> GlobalNonlinearConstrainedScalarAlgorithms:
    method GradientBased (line 3633) | def GradientBased(self) -> GradientBasedNonlinearConstrainedScalarAlgo...
    method GradientFree (line 3637) | def GradientFree(self) -> GradientFreeNonlinearConstrainedScalarAlgori...
    method Local (line 3641) | def Local(self) -> LocalNonlinearConstrainedScalarAlgorithms:
    method Parallel (line 3645) | def Parallel(self) -> NonlinearConstrainedParallelScalarAlgorithms:
  class NonlinearConstrainedParallelAlgorithms (line 3650) | class NonlinearConstrainedParallelAlgorithms(AlgoSelection):
    method Bounded (line 3656) | def Bounded(self) -> BoundedNonlinearConstrainedParallelAlgorithms:
    method Global (line 3660) | def Global(self) -> GlobalNonlinearConstrainedParallelAlgorithms:
    method GradientFree (line 3664) | def GradientFree(self) -> GradientFreeNonlinearConstrainedParallelAlgo...
    method Scalar (line 3668) | def Scalar(self) -> NonlinearConstrainedParallelScalarAlgorithms:
  class ParallelScalarAlgorithms (line 3673) | class ParallelScalarAlgorithms(AlgoSelection):
    method Bounded (line 3701) | def Bounded(self) -> BoundedParallelScalarAlgorithms:
    method Global (line 3705) | def Global(self) -> GlobalParallelScalarAlgorithms:
    method GradientFree (line 3709) | def GradientFree(self) -> GradientFreeParallelScalarAlgorithms:
    method Local (line 3713) | def Local(self) -> LocalParallelScalarAlgorithms:
    method NonlinearConstrained (line 3717) | def NonlinearConstrained(self) -> NonlinearConstrainedParallelScalarAl...
  class LeastSquaresParallelAlgorithms (line 3722) | class LeastSquaresParallelAlgorithms(AlgoSelection):
    method Bounded (line 3727) | def Bounded(self) -> BoundedLeastSquaresParallelAlgorithms:
    method GradientFree (line 3731) | def GradientFree(self) -> GradientFreeLeastSquaresParallelAlgorithms:
    method Local (line 3735) | def Local(self) -> LeastSquaresLocalParallelAlgorithms:
  class GradientBasedAlgorithms (line 3740) | class GradientBasedAlgorithms(AlgoSelection):
    method Bounded (line 3766) | def Bounded(self) -> BoundedGradientBasedAlgorithms:
    method Global (line 3770) | def Global(self) -> GlobalGradientBasedAlgorithms:
    method LeastSquares (line 3774) | def LeastSquares(self) -> GradientBasedLeastSquaresAlgorithms:
    method Likelihood (line 3778) | def Likelihood(self) -> GradientBasedLikelihoodAlgorithms:
    method Local (line 3782) | def Local(self) -> GradientBasedLocalAlgorithms:
    method NonlinearConstrained (line 3786) | def NonlinearConstrained(self) -> GradientBasedNonlinearConstrainedAlg...
    method Scalar (line 3790) | def Scalar(self) -> GradientBasedScalarAlgorithms:
  class GradientFreeAlgorithms (line 3795) | class GradientFreeAlgorithms(AlgoSelection):
    method Bounded (line 3873) | def Bounded(self) -> BoundedGradientFreeAlgorithms:
    method Global (line 3877) | def Global(self) -> GlobalGradientFreeAlgorithms:
    method LeastSquares (line 3881) | def LeastSquares(self) -> GradientFreeLeastSquaresAlgorithms:
    method Local (line 3885) | def Local(self) -> GradientFreeLocalAlgorithms:
    method NonlinearConstrained (line 3889) | def NonlinearConstrained(self) -> GradientFreeNonlinearConstrainedAlgo...
    method Parallel (line 3893) | def Parallel(self) -> GradientFreeParallelAlgorithms:
    method Scalar (line 3897) | def Scalar(self) -> GradientFreeScalarAlgorithms:
  class GlobalAlgorithms (line 3902) | class GlobalAlgorithms(AlgoSelection):
    method Bounded (line 3967) | def Bounded(self) -> BoundedGlobalAlgorithms:
    method GradientBased (line 3971) | def GradientBased(self) -> GlobalGradientBasedAlgorithms:
    method GradientFree (line 3975) | def GradientFree(self) -> GlobalGradientFreeAlgorithms:
    method NonlinearConstrained (line 3979) | def NonlinearConstrained(self) -> GlobalNonlinearConstrainedAlgorithms:
    method Parallel (line 3983) | def Parallel(self) -> GlobalParallelAlgorithms:
    method Scalar (line 3987) | def Scalar(self) -> GlobalScalarAlgorithms:
  class LocalAlgorithms (line 3992) | class LocalAlgorithms(AlgoSelection):
    method Bounded (line 4031) | def Bounded(self) -> BoundedLocalAlgorithms:
    method GradientBased (line 4035) | def GradientBased(self) -> GradientBasedLocalAlgorithms:
    method GradientFree (line 4039) | def GradientFree(self) -> GradientFreeLocalAlgorithms:
    method LeastSquares (line 4043) | def LeastSquares(self) -> LeastSquaresLocalAlgorithms:
    method Likelihood (line 4047) | def Likelihood(self) -> LikelihoodLocalAlgorithms:
    method NonlinearConstrained (line 4051) | def NonlinearConstrained(self) -> LocalNonlinearConstrainedAlgorithms:
    method Parallel (line 4055) | def Parallel(self) -> LocalParallelAlgorithms:
    method Scalar (line 4059) | def Scalar(self) -> LocalScalarAlgorithms:
  class BoundedAlgorithms (line 4064) | class BoundedAlgorithms(AlgoSelection):
    method Global (line 4157) | def Global(self) -> BoundedGlobalAlgorithms:
    method GradientBased (line 4161) | def GradientBased(self) -> BoundedGradientBasedAlgorithms:
    method GradientFree (line 4165) | def GradientFree(self) -> BoundedGradientFreeAlgorithms:
    method LeastSquares (line 4169) | def LeastSquares(self) -> BoundedLeastSquaresAlgorithms:
    method Local (line 4173) | def Local(self) -> BoundedLocalAlgorithms:
    method NonlinearConstrained (line 4177) | def NonlinearConstrained(self) -> BoundedNonlinearConstrainedAlgorithms:
    method Parallel (line 4181) | def Parallel(self) -> BoundedParallelAlgorithms:
    method Scalar (line 4185) | def Scalar(self) -> BoundedScalarAlgorithms:
  class NonlinearConstrainedAlgorithms (line 4190) | class NonlinearConstrainedAlgorithms(AlgoSelection):
    method Bounded (line 4205) | def Bounded(self) -> BoundedNonlinearConstrainedAlgorithms:
    method Global (line 4209) | def Global(self) -> GlobalNonlinearConstrainedAlgorithms:
    method GradientBased (line 4213) | def GradientBased(self) -> GradientBasedNonlinearConstrainedAlgorithms:
    method GradientFree (line 4217) | def GradientFree(self) -> GradientFreeNonlinearConstrainedAlgorithms:
    method Local (line 4221) | def Local(self) -> LocalNonlinearConstrainedAlgorithms:
    method Parallel (line 4225) | def Parallel(self) -> NonlinearConstrainedParallelAlgorithms:
    method Scalar (line 4229) | def Scalar(self) -> NonlinearConstrainedScalarAlgorithms:
  class ScalarAlgorithms (line 4234) | class ScalarAlgorithms(AlgoSelection):
    method Bounded (line 4327) | def Bounded(self) -> BoundedScalarAlgorithms:
    method Global (line 4331) | def Global(self) -> GlobalScalarAlgorithms:
    method GradientBased (line 4335) | def GradientBased(self) -> GradientBasedScalarAlgorithms:
    method GradientFree (line 4339) | def GradientFree(self) -> GradientFreeScalarAlgorithms:
    method Local (line 4343) | def Local(self) -> LocalScalarAlgorithms:
    method NonlinearConstrained (line 4347) | def NonlinearConstrained(self) -> NonlinearConstrainedScalarAlgorithms:
    method Parallel (line 4351) | def Parallel(self) -> ParallelScalarAlgorithms:
  class LeastSquaresAlgorithms (line 4356) | class LeastSquaresAlgorithms(AlgoSelection):
    method Bounded (line 4366) | def Bounded(self) -> BoundedLeastSquaresAlgorithms:
    method GradientBased (line 4370) | def GradientBased(self) -> GradientBasedLeastSquaresAlgorithms:
    method GradientFree (line 4374) | def GradientFree(self) -> GradientFreeLeastSquaresAlgorithms:
    method Local (line 4378) | def Local(self) -> LeastSquaresLocalAlgorithms:
    method Parallel (line 4382) | def Parallel(self) -> LeastSquaresParallelAlgorithms:
  class LikelihoodAlgorithms (line 4387) | class LikelihoodAlgorithms(AlgoSelection):
    method GradientBased (line 4391) | def GradientBased(self) -> GradientBasedLikelihoodAlgorithms:
    method Local (line 4395) | def Local(self) -> LikelihoodLocalAlgorithms:
  class ParallelAlgorithms (line 4400) | class ParallelAlgorithms(AlgoSelection):
    method Bounded (line 4430) | def Bounded(self) -> BoundedParallelAlgorithms:
    method Global (line 4434) | def Global(self) -> GlobalParallelAlgorithms:
    method GradientFree (line 4438) | def GradientFree(self) -> GradientFreeParallelAlgorithms:
    method LeastSquares (line 4442) | def LeastSquares(self) -> LeastSquaresParallelAlgorithms:
    method Local (line 4446) | def Local(self) -> LocalParallelAlgorithms:
    method NonlinearConstrained (line 4450) | def NonlinearConstrained(self) -> NonlinearConstrainedParallelAlgorithms:
    method Scalar (line 4454) | def Scalar(self) -> ParallelScalarAlgorithms:
  class Algorithms (line 4459) | class Algorithms(AlgoSelection):
    method Bounded (line 4560) | def Bounded(self) -> BoundedAlgorithms:
    method Global (line 4564) | def Global(self) -> GlobalAlgorithms:
    method GradientBased (line 4568) | def GradientBased(self) -> GradientBasedAlgorithms:
    method GradientFree (line 4572) | def GradientFree(self) -> GradientFreeAlgorithms:
    method LeastSquares (line 4576) | def LeastSquares(self) -> LeastSquaresAlgorithms:
    method Likelihood (line 4580) | def Likelihood(self) -> LikelihoodAlgorithms:
    method Local (line 4584) | def Local(self) -> LocalAlgorithms:
    method NonlinearConstrained (line 4588) | def NonlinearConstrained(self) -> NonlinearConstrainedAlgorithms:
    method Parallel (line 4592) | def Parallel(self) -> ParallelAlgorithms:
    method Scalar (line 4596) | def Scalar(self) -> ScalarAlgorithms:
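
The `AlgoSelection` subclasses above form a lattice of filters reached through chained attribute access (e.g. `Bounded.GradientFree.Scalar`). A minimal sketch of that pattern follows; the algorithm names, property sets, and `All` attribute here are illustrative stand-ins, not optimagic's actual data or API.

```python
from dataclasses import dataclass

# Illustrative registry mapping algorithm names to their properties.
# These entries are examples only, not optimagic's real metadata.
ALGORITHMS = {
    "scipy_lbfgsb": {"Bounded", "GradientBased", "Local", "Scalar"},
    "scipy_neldermead": {"GradientFree", "Local", "Scalar"},
    "nlopt_direct": {"Bounded", "Global", "GradientFree", "Scalar"},
}


@dataclass(frozen=True)
class Selection:
    """Accumulates required properties; each chained attribute narrows the set."""

    required: frozenset

    def _filtered(self, prop):
        return Selection(self.required | {prop})

    @property
    def Bounded(self):
        return self._filtered("Bounded")

    @property
    def GradientFree(self):
        return self._filtered("GradientFree")

    @property
    def Global(self):
        return self._filtered("Global")

    @property
    def All(self):
        # All algorithm names whose property sets cover every requirement.
        return sorted(
            name for name, props in ALGORITHMS.items() if self.required <= props
        )


algos = Selection(frozenset())
```

With this sketch, `algos.Bounded.GradientFree.All` evaluates to `["nlopt_direct"]`: each attribute adds a requirement, and only algorithms whose property set is a superset survive.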

FILE: src/optimagic/batch_evaluators.py
  function pathos_mp_batch_evaluator (line 28) | def pathos_mp_batch_evaluator(
  function joblib_batch_evaluator (line 97) | def joblib_batch_evaluator(
  function threading_batch_evaluator (line 152) | def threading_batch_evaluator(
  function _check_inputs (line 225) | def _check_inputs(
  function process_batch_evaluator (line 263) | def process_batch_evaluator(
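
The batch evaluators above share one contract: apply a function to a list of arguments and either propagate or record per-element failures. A hedged sketch of a serial evaluator honoring that contract; the parameter names and error-handling semantics are assumed from the module layout, not copied from optimagic.

```python
def serial_batch_evaluator(func, arguments, *, error_handling="continue"):
    """Illustrative batch evaluator: apply ``func`` to each element of
    ``arguments``.  With error_handling="continue", exceptions are stored
    in place of results; with "raise", they propagate.  This is a sketch
    of the shared contract, not optimagic's implementation.
    """
    results = []
    for arg in arguments:
        try:
            results.append(func(arg))
        except Exception as e:
            if error_handling == "raise":
                raise
            # "continue": record the exception so the caller can inspect it.
            results.append(e)
    return results
```

For example, `serial_batch_evaluator(lambda x: 1 / x, [1, 2, 0])` returns `[1.0, 0.5, <ZeroDivisionError>]` instead of aborting the whole batch.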

FILE: src/optimagic/benchmarking/benchmark_reports.py
  function convergence_report (line 9) | def convergence_report(
  function rank_report (line 55) | def rank_report(
  function traceback_report (line 128) | def traceback_report(problems, results, return_type="dataframe"):
  function _get_success_info (line 201) | def _get_success_info(results, converged_info):
  function _get_problem_dimensions (line 227) | def _get_problem_dimensions(problems):

FILE: src/optimagic/benchmarking/cartis_roberts.py
  function njit (line 30) | def njit(func):
  function luksan11 (line 43) | def luksan11(x):
  function luksan12 (line 52) | def luksan12(x):
  function luksan13 (line 67) | def luksan13(x):
  function luksan14 (line 85) | def luksan14(x):
  function luksan15 (line 106) | def luksan15(x):
  function luksan16 (line 128) | def luksan16(x):
  function luksan17 (line 149) | def luksan17(x):
  function luksan21 (line 170) | def luksan21(x):
  function luksan22 (line 190) | def luksan22(x):
  function morebvne (line 203) | def morebvne(x):
  function flosp2 (line 220) | def flosp2(x, a, b, ra=1.0e7):
  function oscigrne (line 332) | def oscigrne(x):
  function spmsqrt (line 348) | def spmsqrt(x):
  function semicon2 (line 423) | def semicon2(x):
  function qr3d (line 470) | def qr3d(x, m=5):
  function qr3dbd (line 512) | def qr3dbd(x, m=5):
  function eigen (line 560) | def eigen(x, param):
  function powell_singular (line 570) | def powell_singular(x):
  function hydcar (line 582) | def hydcar(
  function methane (line 703) | def methane(x):
  function argtrig (line 1086) | def argtrig(x):
  function artif (line 1097) | def artif(x):
  function arwhdne (line 1110) | def arwhdne(x):
  function bdvalues (line 1120) | def bdvalues(x):
  function bratu_2d (line 1138) | def bratu_2d(x, alpha):
  function bratu_3d (line 1160) | def bratu_3d(x, alpha):
  function broydn_3d (line 1186) | def broydn_3d(x):
  function broydn_bd (line 1199) | def broydn_bd(x):
  function cbratu_2d (line 1216) | def cbratu_2d(x):
  function chandheq (line 1249) | def chandheq(x):
  function chemrcta (line 1262) | def chemrcta(x):
  function chemrctb (line 1305) | def chemrctb(x):
  function chnrsbne (line 1332) | def chnrsbne(x):
  function drcavty (line 1396) | def drcavty(x, r):
  function freurone (line 1451) | def freurone(x):
  function hatfldg (line 1461) | def hatfldg(x):
  function integreq (line 1472) | def integreq(x):
  function msqrta (line 1502) | def msqrta(x):
  function penalty_1 (line 1520) | def penalty_1(x, a=1e-5):
  function penalty_2 (line 1527) | def penalty_2(x, a=1e-10):
  function vardimne (line 1541) | def vardimne(x):
  function yatpsq_1 (line 1551) | def yatpsq_1(x, dim_in):
  function yatpsq_2 (line 1574) | def yatpsq_2(x, dim_in):
  function get_start_points_msqrta (line 1591) | def get_start_points_msqrta(dim_in, flag=1):
  function get_start_points_bdvalues (line 1602) | def get_start_points_bdvalues(n, a=1):
  function get_start_points_spmsqrt (line 1610) | def get_start_points_spmsqrt(m):
  function get_start_points_qr3d (line 1639) | def get_start_points_qr3d(m):
  function get_start_points_qr3dbd (line 1646) | def get_start_points_qr3dbd(m):
  function get_start_points_hydcar20 (line 1655) | def get_start_points_hydcar20():
  function get_start_points_hydcar6 (line 1721) | def get_start_points_hydcar6():
  function get_start_points_methanb8 (line 1745) | def get_start_points_methanb8():
  function get_start_points_methanl8 (line 1781) | def get_start_points_methanl8():
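
Most functions in this module are vector-valued residual systems from the CUTEst/Cartis-Roberts collection. As one illustration, the Broyden tridiagonal system (`broydn_3d`) is classically defined by f_i = (3 - 2x_i)x_i - x_{i-1} - 2x_{i+1} + 1 with zero boundary values; a pure-Python sketch (the repo versions are numba-jitted NumPy code, so details may differ):

```python
def broydn_3d_sketch(x):
    """Broyden tridiagonal residuals, sketched in pure Python.

    f_i = (3 - 2*x_i)*x_i - x_{i-1} - 2*x_{i+1} + 1, with x_0 = x_{n+1} = 0.
    Illustrative counterpart of the vectorized ``broydn_3d`` above.
    """
    n = len(x)
    padded = [0.0] + list(x) + [0.0]  # zero boundary values on both ends
    return [
        (3.0 - 2.0 * padded[i]) * padded[i]
        - padded[i - 1]
        - 2.0 * padded[i + 1]
        + 1.0
        for i in range(1, n + 1)
    ]
```

The least-squares objective is then the sum of squared residuals, which is how the benchmark harness consumes these functions.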

FILE: src/optimagic/benchmarking/get_benchmark_problems.py
  function get_benchmark_problems (line 14) | def get_benchmark_problems(
  function _get_raw_problems (line 134) | def _get_raw_problems(name):
  function _step_func (line 212) | def _step_func(x, raw_func):
  function _create_problem_inputs (line 216) | def _create_problem_inputs(
  function _create_problem_solution (line 250) | def _create_problem_solution(specification, scaling_options):
  function _get_scaling_factor (line 269) | def _get_scaling_factor(x, options):
  function _internal_criterion_template (line 273) | def _internal_criterion_template(
  function _get_combined_noise (line 293) | def _get_combined_noise(fval, additive_options, multiplicative_options, ...
  function _sample_from_distribution (line 316) | def _sample_from_distribution(distribution, mean, std, size, rng, correl...
  function _process_noise_options (line 328) | def _process_noise_options(options, is_multiplicative):
  function _clip_away_from_zero (line 365) | def _clip_away_from_zero(a, clipval):

FILE: src/optimagic/benchmarking/more_wild.py
  function linear_full_rank (line 32) | def linear_full_rank(x, dim_out):
  function linear_rank_one (line 40) | def linear_rank_one(x, dim_out):
  function linear_rank_one_zero_columns_rows (line 48) | def linear_rank_one_zero_columns_rows(x, dim_out):
  function rosenbrock (line 57) | def rosenbrock(x):
  function helical_valley (line 65) | def helical_valley(x):
  function powell_singular (line 81) | def powell_singular(x):
  function freudenstein_roth (line 91) | def freudenstein_roth(x):
  function bard (line 99) | def bard(x, y):
  function kowalik_osborne (line 111) | def kowalik_osborne(x, y1, y2):
  function meyer (line 119) | def meyer(x, y):
  function watson (line 128) | def watson(x):
  function box_3d (line 142) | def box_3d(x, dim_out):
  function jennrich_sampson (line 154) | def jennrich_sampson(x, dim_out):
  function brown_dennis (line 164) | def brown_dennis(x, dim_out):
  function chebyquad (line 175) | def chebyquad(x, dim_out):
  function brown_almost_linear (line 195) | def brown_almost_linear(x):
  function osborne_one (line 205) | def osborne_one(x, y):
  function osborne_two (line 214) | def osborne_two(x, y):
  function bdqrtic (line 226) | def bdqrtic(x):
  function cube (line 243) | def cube(x):
  function mancino (line 250) | def mancino(x):
  function heart_eight (line 263) | def heart_eight(x, y):
  function get_start_points_mancino (line 302) | def get_start_points_mancino(n, a=1):
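
The Moré-Wild set likewise expresses each problem as a residual vector whose sum of squares is the scalar objective. The two-dimensional Rosenbrock problem is the standard example, with residuals 10(x2 - x1²) and 1 - x1; a sketch (the repo's `rosenbrock` returns a NumPy array):

```python
def rosenbrock_residuals(x):
    """Rosenbrock residuals in Moré-Garbow-Hillstrom form (sketch; the
    more_wild module returns NumPy arrays).

    The scalar objective is the sum of squared residuals,
    100*(x2 - x1**2)**2 + (1 - x1)**2, minimized at (1, 1).
    """
    x1, x2 = x
    return [10.0 * (x2 - x1**2), 1.0 - x1]
```

At the customary start point (-1.2, 1) the sum of squared residuals is 24.2, and it vanishes at the optimum (1, 1).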

FILE: src/optimagic/benchmarking/noise_distributions.py
  function _standard_logistic (line 4) | def _standard_logistic(size, rng):
  function _standard_uniform (line 9) | def _standard_uniform(size, rng):
  function _standard_normal (line 15) | def _standard_normal(size, rng):
  function _standard_gumbel (line 19) | def _standard_gumbel(size, rng):
  function _standard_laplace (line 26) | def _standard_laplace(size, rng):
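
The `_standard_*` helpers draw noise normalized to mean 0 and standard deviation 1. For the Gumbel case that means shifting by the Euler-Mascheroni constant (the raw mean) and dividing by π/√6 (the raw standard deviation). A stdlib-only sketch; the repo's helpers take a `numpy.random.Generator`, so the interface here is assumed:

```python
import math
import random

EULER_GAMMA = 0.5772156649015329  # mean of the raw standard Gumbel distribution


def standard_gumbel_sketch(size, rng):
    """Gumbel noise standardized to mean 0, std 1 (illustrative; the repo
    version uses a numpy Generator rather than random.Random).

    A raw Gumbel draw is -log(-log(U)) for U ~ Uniform(0, 1); its mean is
    EULER_GAMMA and its standard deviation is pi / sqrt(6).
    """
    scale = math.pi / math.sqrt(6.0)
    return [
        (-math.log(-math.log(rng.random())) - EULER_GAMMA) / scale
        for _ in range(size)
    ]
```

Standardizing the families this way lets the benchmarking code swap noise distributions without changing the effective noise level.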

FILE: src/optimagic/benchmarking/process_benchmark_results.py
  function process_benchmark_results (line 5) | def process_benchmark_results(
  function _process_one_result (line 75) | def _process_one_result(
  function _check_convergence (line 164) | def _check_convergence(values, threshold):
  function _aggregate_idxs_with_and (line 175) | def _aggregate_idxs_with_and(x, y):
  function _aggregate_idxs_with_or (line 183) | def _aggregate_idxs_with_or(x, y):
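
A central step in processing benchmark results is deciding when a run "converged": the running best criterion value must fall below a threshold tied to the known optimum. A sketch of that logic; the exact semantics of `_check_convergence` are assumed, not copied:

```python
def first_convergent_eval(values, threshold):
    """Illustrative convergence check in the spirit of _check_convergence:
    return the index of the first evaluation at which the running best
    value reaches ``threshold``, or None if the run never converges.
    """
    best = float("inf")
    for i, value in enumerate(values):
        best = min(best, value)  # monotone "best so far" profile
        if best <= threshold:
            return i
    return None
```

Profiles built from the monotone "best so far" sequence are also what make performance and data profiles across solvers comparable.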

FILE: src/optimagic/benchmarking/run_benchmark.py
  function run_benchmark (line 20) | def run_benchmark(
  function _process_optimize_options (line 105) | def _process_optimize_options(raw_options, max_evals, disable_convergence):
  function _get_optimization_arguments_and_keys (line 145) | def _get_optimization_arguments_and_keys(problems, opt_options):
  function _process_one_result (line 172) | def _process_one_result(optimize_result, problem):

FILE: src/optimagic/config.py
  function _is_installed (line 32) | def _is_installed(module_name: str) -> bool:

FILE: src/optimagic/constraints.py
  class Constraint (line 16) | class Constraint(ABC):
    method _to_dict (line 20) | def _to_dict(self) -> dict[str, Any]:
  function identity_selector (line 24) | def identity_selector(x: PyTree) -> PyTree:
  class FixedConstraint (line 29) | class FixedConstraint(Constraint):
    method _to_dict (line 43) | def _to_dict(self) -> dict[str, Any]:
    method __post_init__ (line 46) | def __post_init__(self) -> None:
  class IncreasingConstraint (line 52) | class IncreasingConstraint(Constraint):
    method _to_dict (line 66) | def _to_dict(self) -> dict[str, Any]:
    method __post_init__ (line 69) | def __post_init__(self) -> None:
  class DecreasingConstraint (line 75) | class DecreasingConstraint(Constraint):
    method _to_dict (line 89) | def _to_dict(self) -> dict[str, Any]:
    method __post_init__ (line 92) | def __post_init__(self) -> None:
  class EqualityConstraint (line 98) | class EqualityConstraint(Constraint):
    method _to_dict (line 112) | def _to_dict(self) -> dict[str, Any]:
    method __post_init__ (line 115) | def __post_init__(self) -> None:
  class ProbabilityConstraint (line 121) | class ProbabilityConstraint(Constraint):
    method _to_dict (line 138) | def _to_dict(self) -> dict[str, Any]:
    method __post_init__ (line 141) | def __post_init__(self) -> None:
  class PairwiseEqualityConstraint (line 147) | class PairwiseEqualityConstraint(Constraint):
    method _to_dict (line 163) | def _to_dict(self) -> dict[str, Any]:
    method __post_init__ (line 166) | def __post_init__(self) -> None:
  class FlatCovConstraint (line 175) | class FlatCovConstraint(Constraint):
    method _to_dict (line 196) | def _to_dict(self) -> dict[str, Any]:
    method __post_init__ (line 203) | def __post_init__(self) -> None:
  class FlatSDCorrConstraint (line 214) | class FlatSDCorrConstraint(Constraint):
    method _to_dict (line 238) | def _to_dict(self) -> dict[str, Any]:
    method __post_init__ (line 245) | def __post_init__(self) -> None:
  class LinearConstraint (line 256) | class LinearConstraint(Constraint):
    method _to_dict (line 288) | def _to_dict(self) -> dict[str, Any]:
    method __post_init__ (line 300) | def __post_init__(self) -> None:
  class NonlinearConstraint (line 334) | class NonlinearConstraint(Constraint):
    method _to_dict (line 370) | def _to_dict(self) -> dict[str, Any]:
    method __post_init__ (line 385) | def __post_init__(self) -> None:
  function _all_none (line 411) | def _all_none(*args: Any) -> bool:
  function _select_non_none (line 415) | def _select_non_none(**kwargs: Any) -> dict[str, Any]:
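
Each `Constraint` subclass above is a dataclass with a `selector` picking the constrained part of the parameter tree and a `_to_dict` method emitting the legacy dict-based constraint format. A sketch of the pattern; the dict keys and defaults here are assumptions for illustration, not optimagic's exact schema:

```python
from dataclasses import dataclass
from typing import Any, Callable


def identity_selector(x):
    """Default selector: constrain the whole parameter tree."""
    return x


@dataclass(frozen=True)
class FixedConstraintSketch:
    """Illustrative dataclass constraint: a selector picks the constrained
    parameters, and _to_dict converts to the older dict format (the exact
    keys used by optimagic are assumed here).
    """

    selector: Callable[[Any], Any] = identity_selector

    def _to_dict(self) -> dict:
        return {"type": "fixed", "selector": self.selector}
```

Keeping constraints as frozen dataclasses gives users type-checked construction while `_to_dict` preserves compatibility with code that still consumes dicts.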

FILE: src/optimagic/decorators.py
  function catch (line 24) | def catch(
  function unpack (line 93) | def unpack(func=None, symbol=None):
  function deprecated (line 121) | def deprecated(func, msg):
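
The `catch` decorator wraps a criterion function so that exceptions are warned about and replaced by a fallback value instead of aborting an optimization. A minimal sketch of that behavior; the keyword names are illustrative, not optimagic's exact signature:

```python
import functools
import warnings


def catch_sketch(func=None, *, default=None, warn=True):
    """Sketch of a ``catch``-style decorator: swallow exceptions raised by
    the wrapped function, optionally warn, and return ``default`` instead.
    Parameter names are assumptions, not optimagic's actual API.
    """

    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except Exception as e:
                if warn:
                    warnings.warn(f"{f.__name__} raised {e!r}; returning default")
                return default

        return wrapper

    # Support both the bare @catch_sketch form and @catch_sketch(default=...).
    return decorator if func is None else decorator(func)
```

This keeps a single failed function evaluation from killing a long-running optimization while still surfacing the failure.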

FILE: src/optimagic/deprecations.py
  function throw_criterion_future_warning (line 25) | def throw_criterion_future_warning():
  function throw_criterion_kwargs_future_warning (line 34) | def throw_criterion_kwargs_future_warning():
  function throw_derivative_future_warning (line 44) | def throw_derivative_future_warning():
  function throw_derivative_kwargs_future_warning (line 55) | def throw_derivative_kwargs_future_warning():
  function throw_criterion_and_derivative_future_warning (line 66) | def throw_criterion_and_derivative_future_warning():
  function throw_criterion_and_derivative_kwargs_future_warning (line 78) | def throw_criterion_and_derivative_kwargs_future_warning():
  function throw_scaling_options_future_warning (line 90) | def throw_scaling_options_future_warning():
  function throw_multistart_options_future_warning (line 99) | def throw_multistart_options_future_warning():
  function throw_derivatives_step_ratio_future_warning (line 110) | def throw_derivatives_step_ratio_future_warning():
  function throw_derivatives_n_steps_future_warning (line 118) | def throw_derivatives_n_steps_future_warning():
  function throw_derivatives_return_info_future_warning (line 126) | def throw_derivatives_return_info_future_warning():
  function throw_derivatives_return_func_value_future_warning (line 134) | def throw_derivatives_return_func_value_future_warning():
  function throw_numdiff_result_func_evals_future_warning (line 142) | def throw_numdiff_result_func_evals_future_warning():
  function throw_numdiff_result_derivative_candidates_future_warning (line 150) | def throw_numdiff_result_derivative_candidates_future_warning():
  function throw_numdiff_options_deprecated_in_estimate_ml_future_warning (line 158) | def throw_numdiff_options_deprecated_in_estimate_ml_future_warning():
  function throw_numdiff_options_deprecated_in_estimate_msm_future_warning (line 168) | def throw_numdiff_options_deprecated_in_estimate_msm_future_warning():
  function throw_dict_access_future_warning (line 177) | def throw_dict_access_future_warning(attribute, obj_name):
  function throw_none_valued_batch_evaluator_warning (line 186) | def throw_none_valued_batch_evaluator_warning():
  function throw_make_subplot_kwargs_in_slice_plot_future_warning (line 195) | def throw_make_subplot_kwargs_in_slice_plot_future_warning():
  function replace_and_warn_about_deprecated_algo_options (line 204) | def replace_and_warn_about_deprecated_algo_options(algo_options):
  function replace_and_warn_about_deprecated_bounds (line 241) | def replace_and_warn_about_deprecated_bounds(
  function convert_dict_to_function_value (line 273) | def convert_dict_to_function_value(candidate):
  function is_dict_output (line 295) | def is_dict_output(candidate):
  function throw_dict_output_warning (line 301) | def throw_dict_output_warning():
  function infer_problem_type_from_dict_output (line 314) | def infer_problem_type_from_dict_output(output):
  function replace_dict_output (line 327) | def replace_dict_output(func: Callable[P, Any]) -> Callable[P, Any]:
  function throw_key_warning_in_derivatives (line 353) | def throw_key_warning_in_derivatives():
  function throw_dict_constraints_future_warning_if_required (line 364) | def throw_dict_constraints_future_warning_if_required(
  function replace_and_warn_about_deprecated_multistart_options (line 405) | def replace_and_warn_about_deprecated_multistart_options(options):
  function replace_and_warn_about_deprecated_base_steps (line 468) | def replace_and_warn_about_deprecated_base_steps(
  function replace_and_warn_about_deprecated_derivatives (line 488) | def replace_and_warn_about_deprecated_derivatives(candidate, name):
  function handle_log_options_throw_deprecated_warning (line 513) | def handle_log_options_throw_deprecated_warning(
  function pre_process_constraints (line 555) | def pre_process_constraints(
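
The `throw_*_future_warning` helpers follow one pattern: a dedicated function per deprecated argument that emits a `FutureWarning` with a migration hint. Sketched below; the message wording is illustrative, though `criterion` → `fun` is the renaming suggested by the surrounding module:

```python
import warnings


def throw_criterion_future_warning_sketch():
    """Illustrative deprecation helper in the style of the throw_* family:
    emit a FutureWarning pointing users at the replacement argument.  The
    exact message text in optimagic may differ.
    """
    msg = (
        "The `criterion` argument is deprecated and will be removed. "
        "Use `fun` instead."
    )
    warnings.warn(msg, FutureWarning)
```

One helper per argument keeps each warning's stacklevel and message easy to test in isolation.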

FILE: src/optimagic/differentiation/derivatives.py
  class NumdiffResult (line 30) | class NumdiffResult:
    method func_evals (line 66) | def func_evals(self) -> pd.DataFrame | dict[str, pd.DataFrame | None] ...
    method derivative_candidates (line 71) | def derivative_candidates(self) -> pd.DataFrame | None:
    method __getitem__ (line 75) | def __getitem__(self, key: str) -> Any:
  class Evals (line 80) | class Evals(NamedTuple):
  function first_derivative (line 85) | def first_derivative(
  function second_derivative (line 388) | def second_derivative(
  function _is_1d_array (line 724) | def _is_1d_array(candidate: Any) -> bool:
  function _reshape_one_step_evals (line 728) | def _reshape_one_step_evals(raw_evals_one_step, n_steps, dim_x):
  function _process_unpacker (line 749) | def _process_unpacker(
  function _process_scalar_or_array_argument (line 770) | def _process_scalar_or_array_argument(candidate, x, name):
  function _reshape_two_step_evals (line 791) | def _reshape_two_step_evals(raw_evals_two_step, n_steps, dim_x):
  function _reshape_cross_step_evals (line 816) | def _reshape_cross_step_evals(raw_evals_cross_step, n_steps, dim_x, f0):
  function _convert_evaluation_data_to_frame (line 847) | def _convert_evaluation_data_to_frame(steps, evals):
  function _convert_richardson_candidates_to_frame (line 895) | def _convert_richardson_candidates_to_frame(jac, err):
  function _convert_evals_to_numpy (line 923) | def _convert_evals_to_numpy(
  function _consolidate_one_step_derivatives (line 973) | def _consolidate_one_step_derivatives(candidates, preference_order):
  function _consolidate_extrapolated (line 993) | def _consolidate_extrapolated(candidates):
  function _compute_richardson_candidates (line 1034) | def _compute_richardson_candidates(jac_candidates, steps, n_steps):
  function _select_minimizer_along_axis (line 1074) | def _select_minimizer_along_axis(derivative, errors):
  function _nan_skipping_batch_evaluator (line 1110) | def _nan_skipping_batch_evaluator(
  function _split_into_str_and_int (line 1158) | def _split_into_str_and_int(s):
  function _collect_additional_info (line 1178) | def _collect_additional_info(steps, evals, updated_candidates, target):
  function _is_scalar_nan (line 1203) | def _is_scalar_nan(value):
  function _unflatten_if_not_nan (line 1207) | def _unflatten_if_not_nan(leaves, treedef, registry):
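The public entry points `first_derivative` and `second_derivative` above are built on finite-difference evaluations at perturbed points. A minimal sketch of the central-difference idea only, not the library's API; the step-size heuristic here is the textbook cube-root-of-machine-epsilon rule, an assumption for illustration:

```python
import math

def central_first_derivative(func, x, step=None):
    # Textbook heuristic: step ~ cbrt(machine eps), scaled by |x|
    # (with a floor so the step does not collapse near x = 0).
    if step is None:
        step = (2.2e-16) ** (1 / 3) * max(abs(x), 0.1)
    # Central difference: (f(x+h) - f(x-h)) / (2h), accurate to O(h^2).
    return (func(x + step) - func(x - step)) / (2 * step)

# Derivative of sin at 0 is cos(0) = 1.
approx = central_first_derivative(math.sin, 0.0)
```

The real `first_derivative` additionally handles pytree params, batch evaluation, NaN-skipping, and bounds-aware step generation, which is what the surrounding helpers in this file implement.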

FILE: src/optimagic/differentiation/finite_differences.py
  class Evals (line 17) | class Evals(NamedTuple):
  function jacobian (line 22) | def jacobian(evals, steps, f0, method):
  function hessian (line 61) | def hessian(evals, steps, f0, method):
  function _calculate_outer_product_steps (line 142) | def _calculate_outer_product_steps(signed_steps, n_steps, dim_x):

FILE: src/optimagic/differentiation/generate_steps.py
  class Steps (line 9) | class Steps(NamedTuple):
  function generate_steps (line 14) | def generate_steps(
  function _calculate_or_validate_base_steps (line 134) | def _calculate_or_validate_base_steps(base_steps, x, target, min_steps, ...
  function _set_unused_side_to_nan (line 187) | def _set_unused_side_to_nan(
  function _rescale_to_accomodate_bounds (line 233) | def _rescale_to_accomodate_bounds(
  function _fillna (line 273) | def _fillna(x, val):

FILE: src/optimagic/differentiation/numdiff_options.py
  class NumdiffOptions (line 14) | class NumdiffOptions:
    method __post_init__ (line 44) | def __post_init__(self) -> None:
  class NumdiffOptionsDict (line 48) | class NumdiffOptionsDict(TypedDict):
  function pre_process_numdiff_options (line 59) | def pre_process_numdiff_options(
  function _validate_attribute_types_and_values (line 99) | def _validate_attribute_types_and_values(options: NumdiffOptions) -> None:
  class NumdiffPurpose (line 152) | class NumdiffPurpose(str, Enum):
  function get_default_numdiff_options (line 158) | def get_default_numdiff_options(

FILE: src/optimagic/differentiation/richardson_extrapolation.py
  function richardson_extrapolation (line 7) | def richardson_extrapolation(sequence, steps, method="central", num_term...
  function _richardson_coefficients (line 88) | def _richardson_coefficients(num_terms, step_ratio, exponentiation_step,...
  function _estimate_error (line 143) | def _estimate_error(new_seq, old_seq, richardson_coef):
  function _get_order_and_exponentiation_step (line 193) | def _get_order_and_exponentiation_step(method):
  function _compute_step_ratio (line 257) | def _compute_step_ratio(steps):
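`richardson_extrapolation` combines derivative estimates taken at shrinking step sizes so that the leading error terms cancel. A minimal one-level sketch for central differences (whose error is O(h^2)); the module's implementation generalizes this to several terms and returns error estimates as well:

```python
import math

def central_diff(func, x, h):
    return (func(x + h) - func(x - h)) / (2 * h)

def richardson(func, x, h, step_ratio=2.0):
    # Central differences satisfy D(h) = f'(x) + c*h^2 + O(h^4).
    # Combining the estimates at h and h/r cancels the h^2 term.
    d_coarse = central_diff(func, x, h)
    d_fine = central_diff(func, x, h / step_ratio)
    r2 = step_ratio ** 2
    return (r2 * d_fine - d_coarse) / (r2 - 1)

plain = central_diff(math.exp, 1.0, 0.1)        # error ~ h^2
extrapolated = richardson(math.exp, 1.0, 0.1)   # error ~ h^4
```

Both estimates target exp'(1) = e; the extrapolated one is several orders of magnitude closer for the same function evaluations.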

FILE: src/optimagic/examples/criterion_functions.py
  function trid_scalar (line 27) | def trid_scalar(params: PyTree) -> float:
  function trid_gradient (line 34) | def trid_gradient(params: PyTree) -> PyTree:
  function trid_fun_and_gradient (line 46) | def trid_fun_and_gradient(params: PyTree) -> tuple[float, PyTree]:
  function rhe_scalar (line 54) | def rhe_scalar(params: PyTree) -> float:
  function rhe_gradient (line 64) | def rhe_gradient(params: PyTree) -> PyTree:
  function rhe_fun_and_gradient (line 72) | def rhe_fun_and_gradient(params: PyTree) -> tuple[float, PyTree]:
  function rhe_ls (line 80) | def rhe_ls(params: PyTree) -> NDArray[np.float64]:
  function rhe_function_value (line 91) | def rhe_function_value(params: PyTree) -> FunctionValue:
  function rosenbrock_scalar (line 99) | def rosenbrock_scalar(params: PyTree) -> float:
  function rosenbrock_gradient (line 105) | def rosenbrock_gradient(params: PyTree) -> PyTree:
  function rosenbrock_fun_and_gradient (line 118) | def rosenbrock_fun_and_gradient(params: PyTree) -> tuple[float, PyTree]:
  function rosenbrock_ls (line 124) | def rosenbrock_ls(params: PyTree) -> NDArray[np.float64]:
  function rosenbrock_function_value (line 135) | def rosenbrock_function_value(params: PyTree) -> FunctionValue:
  function sos_ls (line 141) | def sos_ls(params: PyTree) -> NDArray[np.float64]:
  function sos_ls_with_pd_objects (line 147) | def sos_ls_with_pd_objects(params: PyTree) -> pd.Series[float]:
  function sos_scalar (line 153) | def sos_scalar(params: PyTree) -> float:
  function sos_gradient (line 159) | def sos_gradient(params: PyTree) -> PyTree:
  function sos_likelihood (line 166) | def sos_likelihood(params: PyTree) -> NDArray[np.float64]:
  function sos_likelihood_jacobian (line 171) | def sos_likelihood_jacobian(params: PyTree) -> PyTree:
  function sos_ls_jacobian (line 180) | def sos_ls_jacobian(params: PyTree) -> PyTree:
  function sos_fun_and_gradient (line 189) | def sos_fun_and_gradient(params: PyTree) -> tuple[float, PyTree]:
  function sos_likelihood_fun_and_jac (line 195) | def sos_likelihood_fun_and_jac(
  function sos_ls_fun_and_jac (line 203) | def sos_ls_fun_and_jac(
  function _get_x (line 213) | def _get_x(params: PyTree) -> NDArray[np.float64]:
  function _unflatten_gradient (line 222) | def _unflatten_gradient(flat: NDArray[np.float64], params: PyTree) -> Py...
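The example criterion functions come in matched flavors: a scalar criterion, its least-squares residual form, and analytic gradients. A sketch of the sum-of-squares (`sos`) family on a plain list of parameters; the real functions accept arbitrary pytrees and return numpy arrays:

```python
def sos_scalar(params):
    # Scalar criterion: sum of squared parameters, minimum at 0.
    return sum(p * p for p in params)

def sos_ls(params):
    # Least-squares form: the residual vector itself.  A least-squares
    # optimizer internally minimizes the sum of squared residuals.
    return list(params)

def sos_gradient(params):
    # Gradient of sum(p^2) is 2*p elementwise.
    return [2 * p for p in params]

x = [1.0, -2.0, 3.0]
value = sos_scalar(x)         # 1 + 4 + 9 = 14.0
residuals = sos_ls(x)
grad = sos_gradient(x)
```

Providing both flavors lets the same test problem exercise scalar optimizers and least-squares optimizers.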

FILE: src/optimagic/examples/numdiff_functions.py
  function logit_loglike (line 22) | def logit_loglike(params, y, x):
  function logit_loglikeobs (line 26) | def logit_loglikeobs(params, y, x):
  function logit_loglike_gradient (line 31) | def logit_loglike_gradient(params, y, x):
  function logit_loglikeobs_jacobian (line 36) | def logit_loglikeobs_jacobian(params, y, x):
  function logit_loglike_hessian (line 41) | def logit_loglike_hessian(params, y, x):  # noqa: ARG001
  function probit_loglike (line 51) | def probit_loglike(params, y, x):
  function probit_loglikeobs (line 55) | def probit_loglikeobs(params, y, x):
  function probit_loglike_gradient (line 60) | def probit_loglike_gradient(params, y, x):
  function probit_loglikeobs_jacobian (line 67) | def probit_loglikeobs_jacobian(params, y, x):
  function probit_loglike_hessian (line 74) | def probit_loglike_hessian(params, y, x):

FILE: src/optimagic/exceptions.py
  class OptimagicError (line 5) | class OptimagicError(Exception):
  class TableExistsError (line 9) | class TableExistsError(OptimagicError):
  class InvalidFunctionError (line 13) | class InvalidFunctionError(OptimagicError):
  class UserFunctionRuntimeError (line 22) | class UserFunctionRuntimeError(OptimagicError):
  class MissingInputError (line 26) | class MissingInputError(OptimagicError):
  class AliasError (line 30) | class AliasError(OptimagicError):
  class InvalidKwargsError (line 34) | class InvalidKwargsError(OptimagicError):
  class InvalidParamsError (line 38) | class InvalidParamsError(OptimagicError):
  class InvalidConstraintError (line 42) | class InvalidConstraintError(OptimagicError):
  class InvalidBoundsError (line 46) | class InvalidBoundsError(OptimagicError):
  class IncompleteBoundsError (line 50) | class IncompleteBoundsError(OptimagicError):
  class InvalidScalingError (line 54) | class InvalidScalingError(OptimagicError):
  class InvalidMultistartError (line 58) | class InvalidMultistartError(OptimagicError):
  class InvalidNumdiffOptionsError (line 62) | class InvalidNumdiffOptionsError(OptimagicError):
  class NotInstalledError (line 66) | class NotInstalledError(OptimagicError):
  class NotAvailableError (line 70) | class NotAvailableError(OptimagicError):
  class InvalidAlgoOptionError (line 74) | class InvalidAlgoOptionError(OptimagicError):
  class InvalidAlgoInfoError (line 78) | class InvalidAlgoInfoError(OptimagicError):
  class InvalidPlottingBackendError (line 82) | class InvalidPlottingBackendError(OptimagicError):
  class StopOptimizationError (line 86) | class StopOptimizationError(OptimagicError):
    method __init__ (line 87) | def __init__(self, message, current_status):
    method __reduce__ (line 92) | def __reduce__(self):
  function get_traceback (line 97) | def get_traceback():
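Every library exception above derives from `OptimagicError`, so callers can catch all library errors with one base class while still distinguishing specific failures. A sketch of that hierarchy pattern (the `validate` helper is hypothetical, for illustration only):

```python
class OptimagicError(Exception):
    """Base class: catching this catches every library error."""

class InvalidParamsError(OptimagicError):
    """Raised when user-supplied parameters are malformed."""

def validate(params):
    # Hypothetical check standing in for the library's real validation.
    if not params:
        raise InvalidParamsError("params must be non-empty")

try:
    validate([])
except OptimagicError as err:  # the base class catches the subclass
    caught = type(err).__name__
```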

FILE: src/optimagic/logging/base.py
  class _KeyValueStore (line 17) | class _KeyValueStore(Generic[InputType, OutputType], ABC):
    method __init__ (line 34) | def __init__(
    method primary_key (line 56) | def primary_key(self) -> str:
    method insert (line 66) | def insert(self, value: InputType) -> None:
    method _select_by_key (line 74) | def _select_by_key(self, key: int) -> list[OutputType]:
    method _select_all (line 78) | def _select_all(self) -> list[OutputType]:
    method select (line 81) | def select(self, key: int | None = None) -> list[OutputType]:
    method select_last_rows (line 98) | def select_last_rows(self, n_rows: int) -> list[OutputType]:
    method to_df (line 109) | def to_df(self) -> pd.DataFrame:
  class UpdatableKeyValueStore (line 120) | class UpdatableKeyValueStore(_KeyValueStore[InputType, OutputType], ABC):
    method update (line 128) | def update(self, key: int, value: InputType | dict[str, Any]) -> None:
    method _update (line 143) | def _update(self, key: int, value: InputType | dict[str, Any]) -> None:
    method _check_fields (line 146) | def _check_fields(self, value: InputType | dict[str, Any]) -> None:
  class NonUpdatableKeyValueStore (line 156) | class NonUpdatableKeyValueStore(_KeyValueStore[InputType, OutputType], A...
    method __getattr__ (line 157) | def __getattr__(self, name: str) -> Any:
  class RobustPickler (line 168) | class RobustPickler:
    method loads (line 170) | def loads(
    method dumps (line 207) | def dumps(

FILE: src/optimagic/logging/logger.py
  class LogOptions (line 43) | class LogOptions:
    method __init_subclass__ (line 52) | def __init_subclass__(
    method available_option_types (line 60) | def available_option_types(cls) -> list[Type[LogOptions]]:
  class LogReader (line 67) | class LogReader(Generic[_LogOptionsType], ABC):
    method problem_df (line 81) | def problem_df(self) -> pd.DataFrame:
    method from_options (line 85) | def from_options(cls, log_options: LogOptions) -> LogReader[_LogOption...
    method _create (line 99) | def _create(cls, log_options: _LogOptionsType) -> LogReader[_LogOption...
    method read_iteration (line 102) | def read_iteration(self, iteration: int) -> IterationStateWithId:
    method read_history (line 139) | def read_history(self) -> IterationHistory:
    method _normalize_direction (line 163) | def _normalize_direction(
    method _build_history_dataframe (line 170) | def _build_history_dataframe(self) -> pd.DataFrame:
    method _split_exploration_and_optimization (line 204) | def _split_exploration_and_optimization(
    method _sort_exploration (line 216) | def _sort_exploration(
    method _extract_best_history (line 227) | def _extract_best_history(
    method read_multistart_history (line 265) | def read_multistart_history(
    method read_start_params (line 294) | def read_start_params(self) -> PyTree:
  class LogStore (line 307) | class LogStore(Generic[_LogOptionsType, _LogReaderType], ABC):
    method __init__ (line 320) | def __init__(
    method from_options (line 335) | def from_options(
    method create (line 351) | def create(
  class SQLiteLogOptions (line 357) | class SQLiteLogOptions(SQLAlchemyConfig, LogOptions):
    method __init__ (line 378) | def __init__(
    method path (line 394) | def path(self) -> str | Path:
    method create_engine (line 397) | def create_engine(self) -> Engine:
    method _configure_engine (line 402) | def _configure_engine(self, engine: Engine) -> None:
  class SQLiteLogReader (line 439) | class SQLiteLogReader(LogReader[SQLiteLogOptions]):
    method __init__ (line 451) | def __init__(self, path: str | Path):
    method _create (line 463) | def _create(cls, log_options: SQLiteLogOptions) -> SQLiteLogReader:
  class _SQLiteLogStore (line 477) | class _SQLiteLogStore(LogStore[SQLiteLogOptions, SQLiteLogReader]):
    method _handle_existing_database (line 488) | def _handle_existing_database(
    method create (line 516) | def create(cls, log_options: SQLiteLogOptions) -> _SQLiteLogStore:

FILE: src/optimagic/logging/read_log.py
  class OptimizeLogReader (line 22) | class OptimizeLogReader:
    method __new__ (line 23) | def __new__(cls, *args, **kwargs):  # type: ignore

FILE: src/optimagic/logging/sqlalchemy.py
  class SQLAlchemyConfig (line 32) | class SQLAlchemyConfig:
    method __init__ (line 43) | def __init__(
    method metadata (line 50) | def metadata(self) -> MetaData:
    method create_engine (line 63) | def create_engine(self) -> Engine:
    method _configure_reflect (line 73) | def _configure_reflect() -> None:
  class TableConfig (line 89) | class TableConfig:
    method column_names (line 107) | def column_names(self) -> list[str]:
    method create_table (line 110) | def create_table(self, metadata: MetaData, engine: Engine) -> sql.Table:
  class _SQLAlchemyStoreMixin (line 129) | class _SQLAlchemyStoreMixin:
    method __init__ (line 141) | def __init__(self, db_config: SQLAlchemyConfig, table_config: TableCon...
    method column_names (line 148) | def column_names(self) -> list[str]:
    method table_name (line 152) | def table_name(self) -> str:
    method table (line 156) | def table(self) -> sql.Table:
    method engine (line 160) | def engine(self) -> Engine:
    method _select_row_by_key (line 163) | def _select_row_by_key(self, key: int) -> list[Any]:
    method _select_all_rows (line 169) | def _select_all_rows(self) -> list[Any]:
    method _select_last_rows (line 173) | def _select_last_rows(self, n_rows: int) -> list[Any]:
    method _insert (line 182) | def _insert(self, insert_values: dict[str, Any]) -> None:
    method _execute_read_statement (line 186) | def _execute_read_statement(self, statement: Executable) -> list[Any]:
    method _execute_write_statement (line 190) | def _execute_write_statement(self, statement: Executable) -> None:
  class SQLAlchemySimpleStore (line 203) | class SQLAlchemySimpleStore(
    method __init__ (line 222) | def __init__(
    method __reduce__ (line 239) | def __reduce__(
    method insert (line 253) | def insert(self, value: InputType) -> None:
    method _select_by_key (line 262) | def _select_by_key(self, key: int) -> list[OutputType]:
    method _select_all (line 266) | def _select_all(self) -> list[OutputType]:
    method select_last_rows (line 270) | def select_last_rows(self, n_rows: int) -> list[OutputType]:
    method _post_process (line 283) | def _post_process(self, results: Sequence[sql.Row]) -> list[OutputType...
  class SQLAlchemyTableStore (line 292) | class SQLAlchemyTableStore(
    method __init__ (line 308) | def __init__(
    method __reduce__ (line 318) | def __reduce__(
    method insert (line 331) | def insert(self, value: InputType) -> None:
    method _update (line 340) | def _update(self, key: int, value: InputType | dict[str, Any]) -> None:
    method _select_by_key (line 352) | def _select_by_key(self, key: int) -> list[OutputType]:
    method _select_all (line 356) | def _select_all(self) -> list[OutputType]:
    method select_last_rows (line 360) | def select_last_rows(self, n_rows: int) -> list[OutputType]:
    method _post_process (line 373) | def _post_process(self, results: Sequence[sql.Row]) -> list[OutputType...
  class IterationStore (line 380) | class IterationStore(SQLAlchemySimpleStore[IterationState, IterationStat...
    method __init__ (line 391) | def __init__(
  class StepStore (line 404) | class StepStore(SQLAlchemyTableStore[StepResult, StepResultWithId]):
    method __init__ (line 415) | def __init__(
  class ProblemStore (line 441) | class ProblemStore(
    method __init__ (line 456) | def __init__(

FILE: src/optimagic/logging/types.py
  class StepStatus (line 14) | class StepStatus(str, Enum):
  class StepType (line 34) | class StepType(str, Enum):
  class ExistenceStrategy (line 50) | class ExistenceStrategy(str, Enum):
  class IterationState (line 69) | class IterationState(DictLikeAccess):
    method combine (line 93) | def combine(self, other: "IterationState") -> "IterationState":
  class IterationStateWithId (line 123) | class IterationStateWithId(IterationState):
    method __post_init__ (line 136) | def __post_init__(self) -> None:
  class StepResult (line 142) | class StepResult(DictLikeAccess):
    method __post_init__ (line 158) | def __post_init__(self) -> None:
  class StepResultWithId (line 166) | class StepResultWithId(StepResult):
    method __post_init__ (line 179) | def __post_init__(self) -> None:
  class ProblemInitialization (line 186) | class ProblemInitialization(DictLikeAccess):
  class ProblemInitializationWithId (line 201) | class ProblemInitializationWithId(ProblemInitialization):
    method __post_init__ (line 214) | def __post_init__(self) -> None:

FILE: src/optimagic/mark.py
  function scalar (line 14) | def scalar(func: ScalarFuncT) -> ScalarFuncT:
  function least_squares (line 31) | def least_squares(func: VectorFuncT) -> VectorFuncT:
  function likelihood (line 48) | def likelihood(func: VectorFuncT) -> VectorFuncT:
  function minimizer (line 69) | def minimizer(
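The `mark` decorators tag a user function with its problem type (scalar, least-squares, likelihood) so the optimizer knows how to aggregate its output. A sketch of the tagging pattern; the attribute name `_problem_type` is an assumption for illustration, not the library's actual attribute:

```python
import functools
from enum import Enum

class AggregationLevel(Enum):
    SCALAR = "scalar"
    LEAST_SQUARES = "least_squares"

def least_squares(func):
    # Attach the problem type without changing the function's behavior.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)

    wrapper._problem_type = AggregationLevel.LEAST_SQUARES  # hypothetical name
    return wrapper

@least_squares
def residuals(params):
    return [p - 1.0 for p in params]

tag = residuals._problem_type
```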

FILE: src/optimagic/optimization/algo_options.py
  function get_population_size (line 161) | def get_population_size(population_size, x, lower_bound=10):

FILE: src/optimagic/optimization/algorithm.py
  class AlgoInfo (line 22) | class AlgoInfo:
    method __post_init__ (line 38) | def __post_init__(self) -> None:
  class InternalOptimizeResult (line 76) | class InternalOptimizeResult:
    method __post_init__ (line 115) | def __post_init__(self) -> None:
  class AlgorithmMeta (line 174) | class AlgorithmMeta(ABCMeta):
    method __repr__ (line 177) | def __repr__(self) -> str:
    method name (line 185) | def name(self) -> str:
    method algo_info (line 193) | def algo_info(self) -> AlgoInfo:
  class Algorithm (line 205) | class Algorithm(ABC, metaclass=AlgorithmMeta):
    method _solve_internal_problem (line 214) | def _solve_internal_problem(
    method __post_init__ (line 219) | def __post_init__(self) -> None:
    method with_option (line 237) | def with_option(self, **kwargs: Any) -> Self:
    method with_stopping (line 248) | def with_stopping(self, **kwargs: Any) -> Self:
    method with_convergence (line 259) | def with_convergence(self, **kwargs: Any) -> Self:
    method solve_internal_problem (line 270) | def solve_internal_problem(
    method with_option_if_applicable (line 301) | def with_option_if_applicable(self, **kwargs: Any) -> Self:
    method name (line 316) | def name(self) -> str:
    method algo_info (line 324) | def algo_info(self) -> AlgoInfo:
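`Algorithm` subclasses are dataclasses, and `with_option`, `with_stopping`, and `with_convergence` return a modified copy rather than mutating in place. A sketch of that copy-on-update pattern with hypothetical field names:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ToyAlgorithm:
    stopping_maxiter: int = 1000
    convergence_ftol_rel: float = 1e-8

    def with_option(self, **kwargs):
        # dataclasses.replace builds a new frozen instance; an unknown
        # field name raises TypeError, which doubles as option validation.
        return replace(self, **kwargs)

base = ToyAlgorithm()
tuned = base.with_option(stopping_maxiter=50)
```

Because the original instance is untouched, one configured algorithm can safely be reused as the base for several variants.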

FILE: src/optimagic/optimization/convergence_report.py
  function get_convergence_report (line 7) | def get_convergence_report(history: History) -> dict[str, dict[str, floa...
  function _get_max_f_changes (line 36) | def _get_max_f_changes(critvals: NDArray[np.float64]) -> tuple[float, fl...
  function _get_max_x_changes (line 48) | def _get_max_x_changes(params: NDArray[np.float64]) -> tuple[float, float]:

FILE: src/optimagic/optimization/create_optimization_problem.py
  class OptimizationProblem (line 52) | class OptimizationProblem:
  function create_optimization_problem (line 91) | def create_optimization_problem(
  function pre_process_derivatives (line 560) | def pre_process_derivatives(candidate, name, solver_type):
  function pre_process_user_algorithm (line 584) | def pre_process_user_algorithm(

FILE: src/optimagic/optimization/error_penalty.py
  function _scalar_penalty (line 16) | def _scalar_penalty(
  function _likelihood_penalty (line 28) | def _likelihood_penalty(
  function _penalty_residuals (line 42) | def _penalty_residuals(
  function get_error_penalty_function (line 61) | def get_error_penalty_function(
  function _process_error_penalty (line 116) | def _process_error_penalty(

FILE: src/optimagic/optimization/fun_value.py
  class FunctionValue (line 17) | class FunctionValue:
  class SpecificFunctionValue (line 22) | class SpecificFunctionValue(FunctionValue, ABC):
    method internal_value (line 24) | def internal_value(
  class ScalarFunctionValue (line 31) | class ScalarFunctionValue(SpecificFunctionValue):
    method __post_init__ (line 35) | def __post_init__(self) -> None:
    method internal_value (line 45) | def internal_value(self, solver_type: AggregationLevel) -> float:
  class LeastSquaresFunctionValue (line 57) | class LeastSquaresFunctionValue(SpecificFunctionValue):
    method __post_init__ (line 61) | def __post_init__(self) -> None:
    method internal_value (line 71) | def internal_value(
  class LikelihoodFunctionValue (line 88) | class LikelihoodFunctionValue(SpecificFunctionValue):
    method __post_init__ (line 92) | def __post_init__(self) -> None:
    method internal_value (line 101) | def internal_value(
  function _get_flat_value (line 120) | def _get_flat_value(value: PyTree) -> NDArray[np.float64]:
  function convert_fun_output_to_function_value (line 134) | def convert_fun_output_to_function_value(
  function _convert_output_to_scalar_function_value (line 147) | def _convert_output_to_scalar_function_value(
  function _convert_output_to_least_squares_function_value (line 159) | def _convert_output_to_least_squares_function_value(
  function _convert_output_to_likelihood_function_value (line 171) | def _convert_output_to_likelihood_function_value(
  function enforce_return_type (line 186) | def enforce_return_type(
  function enforce_return_type_with_jac (line 231) | def enforce_return_type_with_jac(

FILE: src/optimagic/optimization/history.py
  class HistoryEntry (line 17) | class HistoryEntry:
  class History (line 25) | class History:
    method __init__ (line 27) | def __init__(
    method add_entry (line 61) | def add_entry(self, entry: HistoryEntry, batch_id: int | None = None) ...
    method add_batch (line 71) | def add_batch(
    method _get_next_batch_id (line 88) | def _get_next_batch_id(self) -> int:
    method fun_data (line 102) | def fun_data(self, cost_model: CostModel, monotone: bool = False) -> p...
    method fun (line 154) | def fun(self) -> list[float | None]:
    method monotone_fun (line 158) | def monotone_fun(self) -> NDArray[np.float64]:
    method is_accepted (line 170) | def is_accepted(self) -> NDArray[np.bool_]:
    method params_data (line 188) | def params_data(
    method params (line 259) | def params(self) -> list[PyTree]:
    method flat_params (line 263) | def flat_params(self) -> list[list[float]]:
    method flat_param_names (line 267) | def flat_param_names(self) -> list[str]:
    method _get_total_timings (line 273) | def _get_total_timings(
    method _get_timings_per_task (line 304) | def _get_timings_per_task(
    method start_time (line 332) | def start_time(self) -> list[float]:
    method stop_time (line 336) | def stop_time(self) -> list[float]:
    method batches (line 343) | def batches(self) -> list[int]:
    method _is_serial (line 346) | def _is_serial(self) -> bool:
    method task (line 353) | def task(self) -> list[EvalTask]:
    method time (line 361) | def time(self) -> list[float]:
    method criterion (line 371) | def criterion(self) -> list[float | None]:
    method runtime (line 377) | def runtime(self) -> list[float]:
    method __getitem__ (line 385) | def __getitem__(self, key: str) -> Any:
  function _get_flat_params (line 396) | def _get_flat_params(params: list[PyTree]) -> list[list[float]]:
  function _get_flat_param_names (line 407) | def _get_flat_param_names(param: PyTree) -> list[str]:
  function _is_1d_array (line 418) | def _is_1d_array(param: PyTree) -> bool:
  function _calculate_monotone_sequence (line 422) | def _calculate_monotone_sequence(
  function _validate_args_are_all_none_or_lists_of_same_length (line 444) | def _validate_args_are_all_none_or_lists_of_same_length(
  function _task_to_categorical (line 461) | def _task_to_categorical(task: list[EvalTask]) -> "pd.Series[str]":
  function _apply_reduction_to_batches (line 466) | def _apply_reduction_to_batches(
  function _get_batch_starts_and_stops (line 525) | def _get_batch_starts_and_stops(batch_ids: list[int]) -> tuple[list[int]...
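`monotone_fun` reports the best criterion value seen so far at each evaluation, which `_calculate_monotone_sequence` computes as a running minimum (for minimization). A sketch using `itertools.accumulate`, not the module's exact code:

```python
from itertools import accumulate

def monotone_sequence(fun_values):
    # Running minimum: entry i is the best value among the first i+1 evals.
    return list(accumulate(fun_values, min))

history = [5.0, 3.0, 4.0, 1.0, 2.0]
monotone = monotone_sequence(history)
```

The monotone view is what criterion plots typically show, since the raw sequence can jump around while the incumbent best only improves.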

FILE: src/optimagic/optimization/internal_optimization_problem.py
  class InternalBounds (line 37) | class InternalBounds(Bounds):
  class InternalOptimizationProblem (line 44) | class InternalOptimizationProblem:
    method __init__ (line 45) | def __init__(
    method fun (line 87) | def fun(self, x: NDArray[np.float64]) -> float | NDArray[np.float64]:
    method jac (line 102) | def jac(self, x: NDArray[np.float64]) -> NDArray[np.float64]:
    method fun_and_jac (line 118) | def fun_and_jac(
    method batch_fun (line 130) | def batch_fun(
    method batch_jac (line 164) | def batch_jac(
    method batch_fun_and_jac (line 198) | def batch_fun_and_jac(
    method exploration_fun (line 233) | def exploration_fun(
    method with_new_history (line 253) | def with_new_history(self) -> Self:
    method with_error_handling (line 258) | def with_error_handling(self, error_handling: ErrorHandling) -> Self:
    method with_step_id (line 263) | def with_step_id(self, step_id: int) -> Self:
    method bounds (line 273) | def bounds(self) -> InternalBounds:
    method converter (line 278) | def converter(self) -> Converter:
    method linear_constraints (line 329) | def linear_constraints(self) -> list[dict[str, Any]] | None:
    method nonlinear_constraints (line 334) | def nonlinear_constraints(self) -> list[dict[str, Any]] | None:
    method direction (line 358) | def direction(self) -> Direction:
    method history (line 363) | def history(self) -> History:
    method logger (line 368) | def logger(self) -> LogStore[Any, Any] | None:
    method _evaluate_fun (line 378) | def _evaluate_fun(
    method _evaluate_jac (line 388) | def _evaluate_jac(
    method _evaluate_exploration_fun (line 410) | def _evaluate_exploration_fun(
    method _evaluate_fun_and_jac (line 420) | def _evaluate_fun_and_jac(
    method _pure_evaluate_fun (line 446) | def _pure_evaluate_fun(
    method _pure_evaluate_jac (line 509) | def _pure_evaluate_jac(
    method _pure_evaluate_numerical_fun_and_jac (line 570) | def _pure_evaluate_numerical_fun_and_jac(
    method _pure_exploration_fun (line 657) | def _pure_exploration_fun(
    method _pure_evaluate_fun_and_jac (line 715) | def _pure_evaluate_fun_and_jac(
  function _assert_finite_jac (line 792) | def _assert_finite_jac(
  function _process_fun_value (line 830) | def _process_fun_value(
  function _process_jac_value (line 858) | def _process_jac_value(
  class SphereExampleInternalOptimizationProblem (line 882) | class SphereExampleInternalOptimizationProblem(InternalOptimizationProbl...
    method __init__ (line 892) | def __init__(
  class SphereExampleInternalOptimizationProblemWithConverter (line 964) | class SphereExampleInternalOptimizationProblemWithConverter(
    method __init__ (line 990) | def __init__(

FILE: src/optimagic/optimization/multistart.py
  function run_multistart_optimization (line 37) | def run_multistart_optimization(
  function determine_steps (line 184) | def determine_steps(n_samples, stopping_maxopt):
  function _draw_exploration_sample (line 217) | def _draw_exploration_sample(
  class _InternalExplorationResult (line 292) | class _InternalExplorationResult:
  function run_explorations (line 306) | def run_explorations(
  function get_batched_optimization_sample (line 360) | def get_batched_optimization_sample(sorted_sample, stopping_maxopt, batc...
  function update_convergence_state (line 390) | def update_convergence_state(

FILE: src/optimagic/optimization/multistart_options.py
  class MultistartOptions (line 20) | class MultistartOptions:
    method __post_init__ (line 84) | def __post_init__(self) -> None:
  class MultistartOptionsDict (line 88) | class MultistartOptionsDict(TypedDict):
  function pre_process_multistart (line 114) | def pre_process_multistart(
  function _validate_attribute_types_and_values (line 162) | def _validate_attribute_types_and_values(options: MultistartOptions) -> ...
  function _tiktak_weights (line 286) | def _tiktak_weights(
  function _linear_weights (line 292) | def _linear_weights(
  class InternalMultistartOptions (line 307) | class InternalMultistartOptions:
    method __post_init__ (line 330) | def __post_init__(self) -> None:
  function get_internal_multistart_options_from_public (line 350) | def get_internal_multistart_options_from_public(
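`_tiktak_weights` and `_linear_weights` govern how multistart mixes each sampled exploration point with the current best point: later starts lean more heavily on the incumbent. A sketch of the convex combination; the sqrt schedule and the 0.1/0.995 clipping bounds follow the published TikTak algorithm and are assumptions here, not read from this file:

```python
import math

def tiktak_weight(iteration, n_iterations, min_weight=0.1, max_weight=0.995):
    # Weight on the incumbent best grows like sqrt(i / n), clipped to bounds.
    raw = math.sqrt(iteration / n_iterations)
    return min(max(raw, min_weight), max_weight)

def next_start(sampled, best, weight):
    # Convex combination: pull the sampled point toward the best point.
    return [weight * b + (1 - weight) * s for s, b in zip(sampled, best)]

w = tiktak_weight(25, 100)                       # sqrt(0.25) = 0.5
start = next_start([0.0, 4.0], [2.0, 2.0], w)
```

Early starts explore broadly (low weight on the best point); late starts refine around the incumbent (weight near 1).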

FILE: src/optimagic/optimization/optimization_logging.py
  function log_scheduled_steps_and_get_ids (line 7) | def log_scheduled_steps_and_get_ids(

FILE: src/optimagic/optimization/optimize.py
  function maximize (line 89) | def maximize(
  function minimize (line 286) | def minimize(
  function _optimize (line 483) | def _optimize(problem: OptimizationProblem) -> OptimizeResult:

FILE: src/optimagic/optimization/optimize_result.py
  class OptimizeResult (line 17) | class OptimizeResult:
    method criterion (line 75) | def criterion(self) -> float:
    method start_criterion (line 81) | def start_criterion(self) -> float:
    method n_criterion_evaluations (line 90) | def n_criterion_evaluations(self) -> int | None:
    method n_derivative_evaluations (line 99) | def n_derivative_evaluations(self) -> int | None:
    method x (line 112) | def x(self) -> PyTree:
    method x0 (line 116) | def x0(self) -> PyTree:
    method nfev (line 120) | def nfev(self) -> int | None:
    method nit (line 124) | def nit(self) -> int | None:
    method njev (line 128) | def njev(self) -> int | None:
    method nhev (line 132) | def nhev(self) -> int | None:
    method __getitem__ (line 136) | def __getitem__(self, key):
    method __repr__ (line 139) | def __repr__(self) -> str:
    method to_pickle (line 195) | def to_pickle(self, path):
  class MultistartInfo (line 206) | class MultistartInfo:
    method __getitem__ (line 223) | def __getitem__(self, key):
    method n_optimizations (line 228) | def n_optimizations(self) -> int:
  function _format_convergence_report (line 232) | def _format_convergence_report(report, algorithm):
  function _create_stars (line 259) | def _create_stars(sr):
  function _format_float (line 269) | def _format_float(number):

FILE: src/optimagic/optimization/process_results.py
  class ExtraResultFields (line 16) | class ExtraResultFields:
  function process_single_result (line 26) | def process_single_result(
  function process_multistart_result (line 75) | def process_multistart_result(
  function _process_multistart_info (line 134) | def _process_multistart_info(
  function _dummy_result_from_traceback (line 175) | def _dummy_result_from_traceback(
  function _sum_or_none (line 191) | def _sum_or_none(summands: list[int | None | float]) -> int | None:

FILE: src/optimagic/optimization/scipy_aliases.py
  function map_method_to_algorithm (line 7) | def map_method_to_algorithm(method):
  function split_fun_and_jac (line 49) | def split_fun_and_jac(fun_and_jac, target="fun"):

FILE: src/optimagic/optimizers/_pounders/_conjugate_gradient.py
  function minimize_trust_cg (line 6) | def minimize_trust_cg(
  function _update_vectors_for_next_iteration (line 69) | def _update_vectors_for_next_iteration(
  function _get_distance_to_trustregion_boundary (line 100) | def _get_distance_to_trustregion_boundary(candidate, direction, radius):

FILE: src/optimagic/optimizers/_pounders/_steihaug_toint.py
  function minimize_trust_stcg (line 6) | def minimize_trust_stcg(model_gradient, model_hessian, trustregion_radius):
  function _update_candidate_vector_and_iteration_number (line 149) | def _update_candidate_vector_and_iteration_number(
  function _take_step_to_trustregion_boundary (line 183) | def _take_step_to_trustregion_boundary(x_candidate, p, dp, radius_sq, no...
  function _check_convergence (line 191) | def _check_convergence(

FILE: src/optimagic/optimizers/_pounders/_trsbox.py
  function minimize_trust_trsbox (line 6) | def minimize_trust_trsbox(
  function _perform_alternative_trustregion_step (line 197) | def _perform_alternative_trustregion_step(
  function _apply_bounds_to_candidate_vector (line 354) | def _apply_bounds_to_candidate_vector(
  function _take_unconstrained_step_up_to_boundary (line 368) | def _take_unconstrained_step_up_to_boundary(
  function _update_candidate_vectors_and_reduction (line 387) | def _update_candidate_vectors_and_reduction(
  function _take_constrained_step_up_to_boundary (line 435) | def _take_constrained_step_up_to_boundary(
  function _calc_upper_bound_on_tangent (line 459) | def _calc_upper_bound_on_tangent(
  function _calc_greatest_criterion_reduction (line 518) | def _calc_greatest_criterion_reduction(
  function _update_candidate_vectors_and_reduction_alt_step (line 562) | def _update_candidate_vectors_and_reduction_alt_step(
  function _compute_new_search_direction_and_norm (line 600) | def _compute_new_search_direction_and_norm(
  function _calc_new_reduction (line 616) | def _calc_new_reduction(tangent, sine, s_hess_s, x_hess_x, x_hess_s, x_g...
  function _update_tangent (line 624) | def _update_tangent(

FILE: src/optimagic/optimizers/_pounders/bntr.py
  class ActiveBounds (line 19) | class ActiveBounds(NamedTuple):
  function bntr (line 27) | def bntr(
  function _take_preliminary_gradient_descent_step_and_check_for_solution (line 243) | def _take_preliminary_gradient_descent_step_and_check_for_solution(
  function _compute_conjugate_gradient_step (line 390) | def _compute_conjugate_gradient_step(
  function _compute_predicted_reduction_from_conjugate_gradient_step (line 515) | def _compute_predicted_reduction_from_conjugate_gradient_step(
  function _perform_gradient_descent_step (line 545) | def _perform_gradient_descent_step(
  function _update_trustregion_radius_conjugate_gradient (line 613) | def _update_trustregion_radius_conjugate_gradient(
  function _get_information_on_active_bounds (line 669) | def _get_information_on_active_bounds(
  function _find_hessian_submatrix_where_bounds_inactive (line 693) | def _find_hessian_submatrix_where_bounds_inactive(model, active_bounds_i...
  function _check_for_convergence (line 702) | def _check_for_convergence(
  function _apply_bounds_to_x_candidate (line 758) | def _apply_bounds_to_x_candidate(x, lower_bounds, upper_bounds, bound_to...
  function _project_gradient_onto_feasible_set (line 766) | def _project_gradient_onto_feasible_set(gradient_unprojected, active_bou...
  function _apply_bounds_to_conjugate_gradient_step (line 776) | def _apply_bounds_to_conjugate_gradient_step(
  function _update_trustregion_radius_and_gradient_descent (line 805) | def _update_trustregion_radius_and_gradient_descent(
  function _get_fischer_burmeister_direction_vector (line 893) | def _get_fischer_burmeister_direction_vector(x, gradient, lower_bounds, ...
  function _get_fischer_burmeister_scalar (line 907) | def _get_fischer_burmeister_scalar(a, b):
  function _evaluate_model_criterion (line 929) | def _evaluate_model_criterion(

FILE: src/optimagic/optimizers/_pounders/gqtpar.py
  class HessianInfo (line 11) | class HessianInfo(NamedTuple):
  class DampingFactors (line 17) | class DampingFactors(NamedTuple):
  function gqtpar (line 23) | def gqtpar(model, x_candidate, *, k_easy=0.1, k_hard=0.2, maxiter=200):
  function _get_initial_guess_for_lambdas (line 149) | def _get_initial_guess_for_lambdas(
  function add_lambda_and_factorize_hessian (line 213) | def add_lambda_and_factorize_hessian(main_model, hessian_info, lambdas):
  function _find_new_candidate_and_update_parameters (line 263) | def _find_new_candidate_and_update_parameters(
  function _check_for_interior_convergence_and_update (line 317) | def _check_for_interior_convergence_and_update(
  function _update_lambdas_when_factorization_unsuccessful (line 350) | def _update_lambdas_when_factorization_unsuccessful(
  function _get_new_lambda_candidate (line 373) | def _get_new_lambda_candidate(lower_bound, upper_bound):
  function _compute_gershgorin_bounds (line 392) | def _compute_gershgorin_bounds(main_model):
  function _compute_newton_step (line 423) | def _compute_newton_step(lambdas, p_norm, w_norm):
  function _update_candidate_and_parameters_when_candidate_within_trustregion (line 441) | def _update_candidate_and_parameters_when_candidate_within_trustregion(
  function _update_lambdas_when_candidate_outside_trustregion (line 497) | def _update_lambdas_when_candidate_outside_trustregion(
  function _compute_smallest_step_len_for_candidate_vector (line 511) | def _compute_smallest_step_len_for_candidate_vector(x_candidate, z_min):
  function _solve_scalar_quadratic_equation (line 532) | def _solve_scalar_quadratic_equation(z, d):
  function _compute_terms_to_make_leading_submatrix_singular (line 571) | def _compute_terms_to_make_leading_submatrix_singular(hessian_info, k):

FILE: src/optimagic/optimizers/_pounders/linear_subsolvers.py
  class LinearModel (line 8) | class LinearModel(NamedTuple):
  function minimize_trsbox_linear (line 13) | def minimize_trsbox_linear(
  function improve_geomtery_trsbox_linear (line 94) | def improve_geomtery_trsbox_linear(
  function _find_next_active_bound (line 182) | def _find_next_active_bound(
  function _take_constrained_step_up_to_boundary (line 234) | def _take_constrained_step_up_to_boundary(
  function _take_unconstrained_step_up_to_boundary (line 266) | def _take_unconstrained_step_up_to_boundary(
  function _get_distance_to_trustregion_boundary (line 290) | def _get_distance_to_trustregion_boundary(

FILE: src/optimagic/optimizers/_pounders/pounders_auxiliary.py
  class ResidualModel (line 16) | class ResidualModel(NamedTuple):
  class MainModel (line 22) | class MainModel(NamedTuple):
  function create_initial_residual_model (line 27) | def create_initial_residual_model(history, accepted_index, delta):
  function update_residual_model (line 69) | def update_residual_model(residual_model, coefficients_to_add, delta, de...
  function create_main_from_residual_model (line 102) | def create_main_from_residual_model(
  function update_main_model_with_new_accepted_x (line 137) | def update_main_model_with_new_accepted_x(main_model, x_candidate):
  function update_residual_model_with_new_accepted_x (line 155) | def update_residual_model_with_new_accepted_x(residual_model, x_candidate):
  function solve_subproblem (line 186) | def solve_subproblem(
  function find_affine_points (line 315) | def find_affine_points(
  function add_geomtery_points_to_make_main_model_fully_linear (line 389) | def add_geomtery_points_to_make_main_model_fully_linear(
  function evaluate_residual_model (line 470) | def evaluate_residual_model(
  function get_feature_matrices_residual_model (line 515) | def get_feature_matrices_residual_model(
  function fit_residual_model (line 632) | def fit_residual_model(
  function update_trustregion_radius (line 714) | def update_trustregion_radius(
  function get_last_model_indices_and_check_for_repeated_model (line 736) | def get_last_model_indices_and_check_for_repeated_model(
  function add_accepted_point_to_residual_model (line 758) | def add_accepted_point_to_residual_model(model_indices, accepted_index, ...
  function _get_monomial_basis (line 766) | def _get_monomial_basis(x):

FILE: src/optimagic/optimizers/_pounders/pounders_history.py
  class LeastSquaresHistory (line 6) | class LeastSquaresHistory:
    method __init__ (line 26) | def __init__(self):
    method add_entries (line 36) | def add_entries(self, xs, residuals):
    method add_centered_entries (line 67) | def add_centered_entries(self, xs, residuals, center_info):
    method get_entries (line 86) | def get_entries(self, index=None):
    method get_xs (line 109) | def get_xs(self, index=None):
    method get_residuals (line 125) | def get_residuals(self, index=None):
    method get_critvals (line 141) | def get_critvals(self, index=None):
    method get_centered_entries (line 157) | def get_centered_entries(self, center_info, index=None):
    method get_centered_xs (line 180) | def get_centered_xs(self, center_info, index=None):
    method get_centered_residuals (line 198) | def get_centered_residuals(self, center_info, index=None):
    method get_centered_critvals (line 216) | def get_centered_critvals(self, center_info, index=None):
    method get_n_fun (line 235) | def get_n_fun(self):
    method get_best_index (line 238) | def get_best_index(self):
    method get_best_entries (line 241) | def get_best_entries(self):
    method get_best_x (line 244) | def get_best_x(self):
    method get_best_residuals (line 247) | def get_best_residuals(self):
    method get_best_critval (line 250) | def get_best_critval(self):
    method get_best_centered_entries (line 253) | def get_best_centered_entries(self, center_info):
  function _add_entries_to_array (line 257) | def _add_entries_to_array(arr, new, position):

FILE: src/optimagic/optimizers/bayesian_optimizer.py
  class BayesOpt (line 51) | class BayesOpt(Algorithm):
    method _solve_internal_problem (line 206) | def _solve_internal_problem(
    method _process_constraints (line 278) | def _process_constraints(
  function _process_bounds (line 294) | def _process_bounds(bounds: InternalBounds) -> dict[str, tuple[float, fl...
  function _extract_params_from_kwargs (line 324) | def _extract_params_from_kwargs(params_dict: dict[str, Any]) -> NDArray[...
  function _process_acquisition_function (line 337) | def _process_acquisition_function(
  function _process_bayes_opt_result (line 462) | def _process_bayes_opt_result(

FILE: src/optimagic/optimizers/bhhh.py
  class BHHH (line 33) | class BHHH(Algorithm):
    method _solve_internal_problem (line 38) | def _solve_internal_problem(
  function bhhh_internal (line 54) | def bhhh_internal(

FILE: src/optimagic/optimizers/fides.py
  class Fides (line 49) | class Fides(Algorithm):
    method _solve_internal_problem (line 82) | def _solve_internal_problem(
  function fides_internal (line 114) | def fides_internal(
  function _process_fides_res (line 213) | def _process_fides_res(raw_res, opt):
  function _process_exitflag (line 234) | def _process_exitflag(exitflag):
  function _create_hessian_updater_from_user_input (line 252) | def _create_hessian_updater_from_user_input(hessian_update_strategy):

FILE: src/optimagic/optimizers/gfo_optimizers.py
  class GFOCommonOptions (line 40) | class GFOCommonOptions:
  class GFOHillClimbing (line 138) | class GFOHillClimbing(GFOCommonOptions, Algorithm):
    method _solve_internal_problem (line 181) | def _solve_internal_problem(
  class GFOStochasticHillClimbing (line 219) | class GFOStochasticHillClimbing(Algorithm, GFOCommonOptions):
    method _solve_internal_problem (line 275) | def _solve_internal_problem(
  class GFORepulsingHillClimbing (line 314) | class GFORepulsingHillClimbing(Algorithm, GFOCommonOptions):
    method _solve_internal_problem (line 354) | def _solve_internal_problem(
  class GFOSimulatedAnnealing (line 394) | class GFOSimulatedAnnealing(Algorithm, GFOCommonOptions):
    method _solve_internal_problem (line 442) | def _solve_internal_problem(
  class GFODownhillSimplex (line 482) | class GFODownhillSimplex(Algorithm, GFOCommonOptions):
    method _solve_internal_problem (line 509) | def _solve_internal_problem(
  class GFOPowellsMethod (line 547) | class GFOPowellsMethod(Algorithm, GFOCommonOptions):
    method _solve_internal_problem (line 571) | def _solve_internal_problem(
  class GFOParticleSwarmOptimization (line 612) | class GFOParticleSwarmOptimization(Algorithm, GFOCommonOptions):
    method _solve_internal_problem (line 651) | def _solve_internal_problem(
  class GFOParallelTempering (line 696) | class GFOParallelTempering(Algorithm, GFOCommonOptions):
    method _solve_internal_problem (line 726) | def _solve_internal_problem(
  class GFOSpiralOptimization (line 769) | class GFOSpiralOptimization(Algorithm, GFOCommonOptions):
    method _solve_internal_problem (line 816) | def _solve_internal_problem(
  class GFOGeneticAlgorithm (line 858) | class GFOGeneticAlgorithm(Algorithm, GFOCommonOptions):
    method _solve_internal_problem (line 926) | def _solve_internal_problem(
  class GFOEvolutionStrategy (line 971) | class GFOEvolutionStrategy(Algorithm, GFOCommonOptions):
    method _solve_internal_problem (line 1016) | def _solve_internal_problem(
  class GFODifferentialEvolution (line 1060) | class GFODifferentialEvolution(Algorithm, GFOCommonOptions):
    method _solve_internal_problem (line 1122) | def _solve_internal_problem(
  function _gfo_internal (line 1154) | def _gfo_internal(
  function _get_search_space_gfo (line 1217) | def _get_search_space_gfo(
  function _get_gfo_constraints (line 1246) | def _get_gfo_constraints() -> list[Any]:
  function _get_initialize_gfo (line 1251) | def _get_initialize_gfo(
  function _process_result_gfo (line 1282) | def _process_result_gfo(opt: "BaseOptimizer") -> InternalOptimizeResult:
  function _value2para (line 1305) | def _value2para(x: NDArray[np.float64]) -> dict[str, float]:

FILE: src/optimagic/optimizers/iminuit_migrad.py
  class IminuitMigrad (line 44) | class IminuitMigrad(Algorithm):
    method _solve_internal_problem (line 84) | def _solve_internal_problem(
  function _process_minuit_result (line 118) | def _process_minuit_result(minuit_result: Minuit) -> InternalOptimizeRes...
  function _convert_bounds_to_minuit_limits (line 140) | def _convert_bounds_to_minuit_limits(

FILE: src/optimagic/optimizers/ipopt.py
  class Ipopt (line 50) | class Ipopt(Algorithm):
    method _solve_internal_problem (line 346) | def _solve_internal_problem(
  function _get_scipy_bounds (line 653) | def _get_scipy_bounds(bounds: InternalBounds) -> ScipyBounds:
  function _convert_bool_to_str (line 657) | def _convert_bool_to_str(var, name):
  function _convert_none_to_str (line 681) | def _convert_none_to_str(var):

FILE: src/optimagic/optimizers/nag_optimizers.py
  class NagDFOLS (line 347) | class NagDFOLS(Algorithm):
    method _solve_internal_problem (line 397) | def _solve_internal_problem(
  function nag_dfols_internal (line 436) | def nag_dfols_internal(
  class NagPyBOBYQA (line 652) | class NagPyBOBYQA(Algorithm):
    method _solve_internal_problem (line 699) | def _solve_internal_problem(
  function nag_pybobyqa_internal (line 740) | def nag_pybobyqa_internal(
  function _process_nag_result (line 866) | def _process_nag_result(nag_result_obj, len_x):
  function _create_nag_advanced_options (line 911) | def _create_nag_advanced_options(
  function _change_evals_per_point_interface (line 1012) | def _change_evals_per_point_interface(func):
  function _build_options_dict (line 1037) | def _build_options_dict(user_input, default_options):
  function _get_fast_start_method (line 1062) | def _get_fast_start_method(user_value):

FILE: src/optimagic/optimizers/neldermead.py
  class NelderMeadParallel (line 43) | class NelderMeadParallel(Algorithm):
    method _solve_internal_problem (line 93) | def _solve_internal_problem(
  function neldermead_parallel (line 122) | def neldermead_parallel(
  function _init_algo_params (line 323) | def _init_algo_params(adaptive, j):
  function _init_simplex (line 343) | def _init_simplex(x):
  function _pfeffer (line 355) | def _pfeffer(x):
  function _nash (line 370) | def _nash(x):
  function _gao_han (line 383) | def _gao_han(x):
  function _varadhan_borchers (line 408) | def _varadhan_borchers(x):

FILE: src/optimagic/optimizers/nevergrad_optimizers.py
  class NevergradPSO (line 62) | class NevergradPSO(Algorithm):
    method _solve_internal_problem (line 142) | def _solve_internal_problem(
  class NevergradCMAES (line 191) | class NevergradCMAES(Algorithm):
    method _solve_internal_problem (line 341) | def _solve_internal_problem(
  class NevergradOnePlusOne (line 415) | class NevergradOnePlusOne(Algorithm):
    method _solve_internal_problem (line 539) | def _solve_internal_problem(
  class NevergradDifferentialEvolution (line 591) | class NevergradDifferentialEvolution(Algorithm):
    method _solve_internal_problem (line 665) | def _solve_internal_problem(
  class NevergradBayesOptim (line 713) | class NevergradBayesOptim(Algorithm):
    method _solve_internal_problem (line 752) | def _solve_internal_problem(
  class NevergradEMNA (line 796) | class NevergradEMNA(Algorithm):
    method _solve_internal_problem (line 846) | def _solve_internal_problem(
  class NevergradCGA (line 890) | class NevergradCGA(Algorithm):
    method _solve_internal_problem (line 913) | def _solve_internal_problem(
  class NevergradEDA (line 952) | class NevergradEDA(Algorithm):
    method _solve_internal_problem (line 976) | def _solve_internal_problem(
  class NevergradTBPSA (line 1015) | class NevergradTBPSA(Algorithm):
    method _solve_internal_problem (line 1054) | def _solve_internal_problem(
  class NevergradRandomSearch (line 1096) | class NevergradRandomSearch(Algorithm):
    method _solve_internal_problem (line 1144) | def _solve_internal_problem(
  class NevergradSamplingSearch (line 1190) | class NevergradSamplingSearch(Algorithm):
    method _solve_internal_problem (line 1246) | def _solve_internal_problem(
  class NevergradNGOpt (line 1293) | class NevergradNGOpt(Algorithm):
    method _solve_internal_problem (line 1375) | def _solve_internal_problem(
  class NevergradMeta (line 1415) | class NevergradMeta(Algorithm):
    method _solve_internal_problem (line 1482) | def _solve_internal_problem(
  function _nevergrad_internal (line 1506) | def _nevergrad_internal(

FILE: src/optimagic/optimizers/nlopt_optimizers.py
  class NloptBOBYQA (line 55) | class NloptBOBYQA(Algorithm):
    method _solve_internal_problem (line 62) | def _solve_internal_problem(
  class NloptNelderMead (line 96) | class NloptNelderMead(Algorithm):
    method _solve_internal_problem (line 102) | def _solve_internal_problem(
  class NloptPRAXIS (line 135) | class NloptPRAXIS(Algorithm):
    method _solve_internal_problem (line 142) | def _solve_internal_problem(
  class NloptCOBYLA (line 176) | class NloptCOBYLA(Algorithm):
    method _solve_internal_problem (line 183) | def _solve_internal_problem(
  class NloptSbplx (line 217) | class NloptSbplx(Algorithm):
    method _solve_internal_problem (line 224) | def _solve_internal_problem(
  class NloptNEWUOA (line 258) | class NloptNEWUOA(Algorithm):
    method _solve_internal_problem (line 265) | def _solve_internal_problem(
  class NloptTNewton (line 307) | class NloptTNewton(Algorithm):
    method _solve_internal_problem (line 314) | def _solve_internal_problem(
  class NloptLBFGSB (line 348) | class NloptLBFGSB(Algorithm):
    method _solve_internal_problem (line 355) | def _solve_internal_problem(
  class NloptCCSAQ (line 389) | class NloptCCSAQ(Algorithm):
    method _solve_internal_problem (line 396) | def _solve_internal_problem(
  class NloptMMA (line 430) | class NloptMMA(Algorithm):
    method _solve_internal_problem (line 437) | def _solve_internal_problem(
  class NloptVAR (line 476) | class NloptVAR(Algorithm):
    method _solve_internal_problem (line 484) | def _solve_internal_problem(
  class NloptSLSQP (line 522) | class NloptSLSQP(Algorithm):
    method _solve_internal_problem (line 529) | def _solve_internal_problem(
  class NloptDirect (line 563) | class NloptDirect(Algorithm):
    method _solve_internal_problem (line 573) | def _solve_internal_problem(
  class NloptESCH (line 626) | class NloptESCH(Algorithm):
    method _solve_internal_problem (line 633) | def _solve_internal_problem(
  class NloptISRES (line 667) | class NloptISRES(Algorithm):
    method _solve_internal_problem (line 674) | def _solve_internal_problem(
  class NloptCRS2LM (line 708) | class NloptCRS2LM(Algorithm):
    method _solve_internal_problem (line 716) | def _solve_internal_problem(
  function _minimize_nlopt (line 739) | def _minimize_nlopt(
  function _process_nlopt_results (line 792) | def _process_nlopt_results(nlopt_obj, solution_x, is_global):
  function _get_nlopt_constraints (line 828) | def _get_nlopt_constraints(constraints, filter_type):
  function _internal_to_nlopt_constaint (line 835) | def _internal_to_nlopt_constaint(c):

FILE: src/optimagic/optimizers/pounders.py
  class Pounders (line 57) | class Pounders(Algorithm):
    method _solve_internal_problem (line 83) | def _solve_internal_problem(
  function internal_solve_pounders (line 165) | def internal_solve_pounders(
  function _check_for_convergence (line 595) | def _check_for_convergence(

FILE: src/optimagic/optimizers/pygad_optimizer.py
  class ParentSelectionFunction (line 43) | class ParentSelectionFunction(Protocol):
    method __call__ (line 58) | def __call__(
  class CrossoverFunction (line 64) | class CrossoverFunction(Protocol):
    method __call__ (line 78) | def __call__(
  class MutationFunction (line 87) | class MutationFunction(Protocol):
    method __call__ (line 99) | def __call__(
  class GeneConstraintFunction (line 105) | class GeneConstraintFunction(Protocol):
    method __call__ (line 124) | def __call__(
  class _BuiltinMutation (line 132) | class _BuiltinMutation:
    method to_pygad_params (line 146) | def to_pygad_params(self) -> dict[str, Any]:
  class RandomMutation (line 166) | class RandomMutation(_BuiltinMutation):
    method to_pygad_params (line 216) | def to_pygad_params(self) -> dict[str, Any]:
  class SwapMutation (line 228) | class SwapMutation(_BuiltinMutation):
  class InversionMutation (line 243) | class InversionMutation(_BuiltinMutation):
  class ScrambleMutation (line 258) | class ScrambleMutation(_BuiltinMutation):
  class AdaptiveMutation (line 272) | class AdaptiveMutation(_BuiltinMutation):
    method to_pygad_params (line 351) | def to_pygad_params(self) -> dict[str, Any]:
  class Pygad (line 394) | class Pygad(Algorithm):
    method _solve_internal_problem (line 603) | def _solve_internal_problem(
  function _convert_mutation_to_pygad_params (line 741) | def _convert_mutation_to_pygad_params(mutation: Any) -> dict[str, Any]:
  function _get_default_mutation_params (line 776) | def _get_default_mutation_params(mutation_type: Any = "random") -> dict[...
  function _create_mutation_from_string (line 787) | def _create_mutation_from_string(mutation_type: str) -> _BuiltinMutation:
  function _determine_effective_batch_size (line 814) | def _determine_effective_batch_size(batch_size: int | None, n_cores: int...
  function _build_stop_criteria (line 851) | def _build_stop_criteria(
  function _validate_user_defined_functions (line 881) | def _validate_user_defined_functions(
  function _validate_string_choice (line 946) | def _validate_string_choice(value: str, valid_choices: list[str], name: ...
  function _validate_protocol_function (line 952) | def _validate_protocol_function(
  function _process_pygad_result (line 960) | def _process_pygad_result(ga_instance: Any) -> InternalOptimizeResult:

FILE: src/optimagic/optimizers/pygmo_optimizers.py
  class PygmoGaco (line 70) | class PygmoGaco(Algorithm):
    method _solve_internal_problem (line 87) | def _solve_internal_problem(
  class PygmoBeeColony (line 138) | class PygmoBeeColony(Algorithm):
    method _solve_internal_problem (line 145) | def _solve_internal_problem(
  class PygmoDe (line 186) | class PygmoDe(Algorithm):
    method _solve_internal_problem (line 210) | def _solve_internal_problem(
  class PygmoSea (line 271) | class PygmoSea(Algorithm):
    method _solve_internal_problem (line 279) | def _solve_internal_problem(
  class PygmoSga (line 319) | class PygmoSga(Algorithm):
    method _solve_internal_problem (line 347) | def _solve_internal_problem(
  class PygmoSade (line 465) | class PygmoSade(Algorithm):
    method _solve_internal_problem (line 495) | def _solve_internal_problem(
  class PygmoCmaes (line 563) | class PygmoCmaes(Algorithm):
    method _solve_internal_problem (line 582) | def _solve_internal_problem(
  class PygmoSimulatedAnnealing (line 637) | class PygmoSimulatedAnnealing(Algorithm):
    method _solve_internal_problem (line 653) | def _solve_internal_problem(
  class PygmoPso (line 698) | class PygmoPso(Algorithm):
    method _solve_internal_problem (line 728) | def _solve_internal_problem(
  class PygmoPsoGen (line 805) | class PygmoPsoGen(Algorithm):
    method _solve_internal_problem (line 836) | def _solve_internal_problem(
  class PygmoMbh (line 912) | class PygmoMbh(Algorithm):
    method _solve_internal_problem (line 921) | def _solve_internal_problem(
  class PygmoXnes (line 965) | class PygmoXnes(Algorithm):
    method _solve_internal_problem (line 982) | def _solve_internal_problem(
  class PygmoGwo (line 1044) | class PygmoGwo(Algorithm):
    method _solve_internal_problem (line 1050) | def _solve_internal_problem(
  class PygmoCompassSearch (line 1088) | class PygmoCompassSearch(Algorithm):
    method _solve_internal_problem (line 1100) | def _solve_internal_problem(
  class PygmoIhs (line 1150) | class PygmoIhs(Algorithm):
    method _solve_internal_problem (line 1164) | def _solve_internal_problem(
  class PygmoDe1220 (line 1211) | class PygmoDe1220(Algorithm):
    method _solve_internal_problem (line 1222) | def _solve_internal_problem(
  function _minimize_pygmo (line 1282) | def _minimize_pygmo(
  function _create_pygmo_problem (line 1320) | def _create_pygmo_problem(
  function _create_algorithm (line 1345) | def _create_algorithm(
  function _create_population (line 1365) | def _create_population(
  function _process_pygmo_result (line 1388) | def _process_pygmo_result(evolved: pg.population) -> InternalOptimizeRes...
  function _convert_str_to_int (line 1401) | def _convert_str_to_int(str_to_int, value):

FILE: src/optimagic/optimizers/pyswarms_optimizers.py
  class Topology (line 48) | class Topology:
  class StarTopology (line 53) | class StarTopology(Topology):
  class RingTopology (line 62) | class RingTopology(Topology):
  class VonNeumannTopology (line 85) | class VonNeumannTopology(Topology):
  class PyramidTopology (line 100) | class PyramidTopology(Topology):
  class RandomTopology (line 113) | class RandomTopology(Topology):
  class PSOCommonOptions (line 138) | class PSOCommonOptions:
  class PySwarmsGlobalBestPSO (line 235) | class PySwarmsGlobalBestPSO(Algorithm, PSOCommonOptions):
    method _solve_internal_problem (line 274) | def _solve_internal_problem(
  class PySwarmsLocalBestPSO (line 316) | class PySwarmsLocalBestPSO(Algorithm, PSOCommonOptions):
    method _solve_internal_problem (line 363) | def _solve_internal_problem(
  class PySwarmsGeneralPSO (line 411) | class PySwarmsGeneralPSO(Algorithm, PSOCommonOptions):
    method _solve_internal_problem (line 464) | def _solve_internal_problem(
  function _pyswarms_internal (line 496) | def _pyswarms_internal(
  function _resolve_topology_config (line 576) | def _resolve_topology_config(
  function _build_velocity_clamp (line 618) | def _build_velocity_clamp(
  function _get_pyswarms_bounds (line 628) | def _get_pyswarms_bounds(
  function _create_initial_positions (line 645) | def _create_initial_positions(
  function _create_batch_objective (line 684) | def _create_batch_objective(
  function _process_pyswarms_result (line 709) | def _process_pyswarms_result(

FILE: src/optimagic/optimizers/scipy_optimizers.py
  class ScipyLBFGSB (line 105) | class ScipyLBFGSB(Algorithm):
    method _solve_internal_problem (line 169) | def _solve_internal_problem(
  class ScipySLSQP (line 208) | class ScipySLSQP(Algorithm):
    method _solve_internal_problem (line 213) | def _solve_internal_problem(
  class ScipyNelderMead (line 250) | class ScipyNelderMead(Algorithm):
    method _solve_internal_problem (line 258) | def _solve_internal_problem(
  class ScipyPowell (line 297) | class ScipyPowell(Algorithm):
    method _solve_internal_problem (line 304) | def _solve_internal_problem(
  class ScipyBFGS (line 341) | class ScipyBFGS(Algorithm):
    method _solve_internal_problem (line 350) | def _solve_internal_problem(
  class ScipyConjugateGradient (line 385) | class ScipyConjugateGradient(Algorithm):
    method _solve_internal_problem (line 391) | def _solve_internal_problem(
  class ScipyNewtonCG (line 423) | class ScipyNewtonCG(Algorithm):
    method _solve_internal_problem (line 428) | def _solve_internal_problem(
  class ScipyCOBYLA (line 463) | class ScipyCOBYLA(Algorithm):
    method _solve_internal_problem (line 469) | def _solve_internal_problem(
  class ScipyLSTRF (line 517) | class ScipyLSTRF(Algorithm):
    method _solve_internal_problem (line 525) | def _solve_internal_problem(
  class ScipyLSDogbox (line 570) | class ScipyLSDogbox(Algorithm):
    method _solve_internal_problem (line 578) | def _solve_internal_problem(
  class ScipyLSLM (line 623) | class ScipyLSLM(Algorithm):
    method _solve_internal_problem (line 629) | def _solve_internal_problem(
  class ScipyTruncatedNewton (line 663) | class ScipyTruncatedNewton(Algorithm):
    method _solve_internal_problem (line 677) | def _solve_internal_problem(
  class ScipyTrustConstr (line 722) | class ScipyTrustConstr(Algorithm):
    method _solve_internal_problem (line 730) | def _solve_internal_problem(
  function process_scipy_result (line 764) | def process_scipy_result(scipy_res: ScipyOptimizeResult) -> InternalOpti...
  function _int_if_not_none (line 786) | def _int_if_not_none(value: SupportsInt | None) -> int | None:
  function _get_scipy_constraints (line 792) | def _get_scipy_constraints(constraints):
  function _internal_to_scipy_constraint (line 802) | def _internal_to_scipy_constraint(c):
  class ScipyBasinhopping (line 828) | class ScipyBasinhopping(Algorithm):
    method _solve_internal_problem (line 860) | def _solve_internal_problem(
  class ScipyBrute (line 910) | class ScipyBrute(Algorithm):
    method _solve_internal_problem (line 916) | def _solve_internal_problem(
  class ScipyDifferentialEvolution (line 961) | class ScipyDifferentialEvolution(Algorithm):
    method _solve_internal_problem (line 998) | def _solve_internal_problem(
  class ScipySHGO (line 1039) | class ScipySHGO(Algorithm):
    method _solve_internal_problem (line 1075) | def _solve_internal_problem(
  class ScipyDualAnnealing (line 1147) | class ScipyDualAnnealing(Algorithm):
    method _solve_internal_problem (line 1181) | def _solve_internal_problem(
  class ScipyDirect (line 1229) | class ScipyDirect(Algorithm):
    method _solve_internal_problem (line 1242) | def _solve_internal_problem(
  function _get_workers (line 1261) | def _get_workers(n_cores, batch_evaluator):
  function _get_scipy_bounds (line 1271) | def _get_scipy_bounds(bounds: InternalBounds) -> ScipyBounds | None:
  function process_scipy_result_old (line 1280) | def process_scipy_result_old(scipy_results_obj):

FILE: src/optimagic/optimizers/tao_optimizers.py
  class TAOPounders (line 42) | class TAOPounders(Algorithm):
    method _solve_internal_problem (line 51) | def _solve_internal_problem(
  function tao_pounders (line 86) | def tao_pounders(
  function _initialise_petsc_array (line 208) | def _initialise_petsc_array(len_or_array):
  function _max_iters (line 231) | def _max_iters(max_iterations, tao):
  function _gatol_conv (line 238) | def _gatol_conv(absolute_gradient_tolerance, tao):
  function _grtol_conv (line 245) | def _grtol_conv(relative_gradient_tolerance, tao):
  function _grtol_gatol_conv (line 258) | def _grtol_gatol_conv(relative_gradient_tolerance, absolute_gradient_tol...
  function _translate_tao_convergence_reason (line 274) | def _translate_tao_convergence_reason(tao_resaon):
  function _process_pounders_results (line 292) | def _process_pounders_results(residuals_out, tao):

FILE: src/optimagic/optimizers/tranquilo.py
  class Tranquilo (line 69) | class Tranquilo(Algorithm):
    method _solve_internal_problem (line 179) | def _solve_internal_problem(
  class TranquiloLS (line 258) | class TranquiloLS(Algorithm):
    method _solve_internal_problem (line 366) | def _solve_internal_problem(

FILE: src/optimagic/parameters/block_trees.py
  function matrix_to_block_tree (line 11) | def matrix_to_block_tree(matrix, outer_tree, inner_tree):
  function hessian_to_block_tree (line 70) | def hessian_to_block_tree(hessian, f_tree, params_tree):
  function block_tree_to_matrix (line 132) | def block_tree_to_matrix(block_tree, outer_tree, inner_tree):
  function block_tree_to_hessian (line 181) | def block_tree_to_hessian(block_hessian, f_tree, params_tree):
  function _convert_to_numpy (line 240) | def _convert_to_numpy(obj, only_pandas=True):
  function _convert_pandas_objects_to_numpy (line 248) | def _convert_pandas_objects_to_numpy(obj):
  function _convert_raw_block_to_pandas (line 260) | def _convert_raw_block_to_pandas(raw_block, leaf_outer, leaf_inner):
  function _select_non_none (line 296) | def _select_non_none(first, second):
  function _reshape_list (line 308) | def _reshape_list(list_to_reshape, shapes):
  function _is_pd_object (line 327) | def _is_pd_object(obj):
  function _check_dimensions_matrix (line 331) | def _check_dimensions_matrix(matrix, outer_tree, inner_tree):
  function _check_dimensions_hessian (line 344) | def _check_dimensions_hessian(hessian, f_tree, params_tree):

FILE: src/optimagic/parameters/bounds.py
  class Bounds (line 19) | class Bounds:
  function pre_process_bounds (line 26) | def pre_process_bounds(
  function _process_bounds_sequence (line 63) | def _process_bounds_sequence(bounds: Sequence[tuple[float, float]]) -> B...
  function get_internal_bounds (line 75) | def get_internal_bounds(
  function _update_bounds_and_flatten (line 162) | def _update_bounds_and_flatten(
  function _is_fast_path (line 217) | def _is_fast_path(params: PyTree, bounds: Bounds, add_soft_bounds: bool)...
  function _is_1d_array (line 231) | def _is_1d_array(candidate: Any) -> bool:
  function _get_fast_path_bounds (line 235) | def _get_fast_path_bounds(

FILE: src/optimagic/parameters/check_constraints.py
  function check_constraints_are_satisfied (line 16) | def check_constraints_are_satisfied(flat_constraints, param_values, para...
  function _get_message (line 100) | def _get_message(constraint, param_names, explanation=""):
  function check_types (line 122) | def check_types(constraints):
  function check_for_incompatible_overlaps (line 150) | def check_for_incompatible_overlaps(transformations, parnames):
  function check_fixes_and_bounds (line 181) | def check_fixes_and_bounds(constr_info, transformations, parnames):
  function _iloc (line 258) | def _iloc(dictionary, positions):

FILE: src/optimagic/parameters/consolidate_constraints.py
  function consolidate_constraints (line 20) | def consolidate_constraints(
  function _consolidate_equality_constraints (line 120) | def _consolidate_equality_constraints(equality_constraints):
  function _join_overlapping_lists (line 150) | def _join_overlapping_lists(candidates):
  function _unite_first_with_all_intersecting_elements (line 174) | def _unite_first_with_all_intersecting_elements(indices):
  function _consolidate_fixes_with_equality_constraints (line 193) | def _consolidate_fixes_with_equality_constraints(
  function _consolidate_bounds_with_equality_constraints (line 226) | def _consolidate_bounds_with_equality_constraints(
  function _split_constraints (line 254) | def _split_constraints(constraints, type_):
  function simplify_covariance_and_sdcorr_constraints (line 265) | def simplify_covariance_and_sdcorr_constraints(
  function _plug_equality_constraints_into_selectors (line 311) | def _plug_equality_constraints_into_selectors(
  function _consolidate_linear_constraints (line 359) | def _consolidate_linear_constraints(
  function _transform_linear_constraints_to_pandas_objects (line 430) | def _transform_linear_constraints_to_pandas_objects(linear_constranits, ...
  function _plug_equality_constraints_into_linear_weights (line 461) | def _plug_equality_constraints_into_linear_weights(weights, post_replace...
  function _plug_fixes_into_linear_weights_and_rhs (line 486) | def _plug_fixes_into_linear_weights_and_rhs(
  function _express_bounds_as_linear_constraints (line 518) | def _express_bounds_as_linear_constraints(weights, rhs, lower, upper):
  function _rescale_linear_constraints (line 562) | def _rescale_linear_constraints(weights, rhs):
  function _drop_redundant_linear_constraints (line 591) | def _drop_redundant_linear_constraints(weights, rhs):
  function _check_consolidated_weights (line 636) | def _check_consolidated_weights(weights, param_names):
  function _get_kernel_transformation_matrices (line 666) | def _get_kernel_transformation_matrices(weights):
  function _is_redundant (line 697) | def _is_redundant(candidate, others):
  function _unique_values (line 714) | def _unique_values(arr, dropna=True):

FILE: src/optimagic/parameters/constraint_tools.py
  function count_free_params (line 6) | def count_free_params(
  function check_constraints (line 54) | def check_constraints(

FILE: src/optimagic/parameters/conversion.py
  function get_converter (line 15) | def get_converter(
  class Converter (line 150) | class Converter:
  function _fast_params_from_internal (line 157) | def _fast_params_from_internal(x, return_type="tree"):
  function _get_fast_path_converter (line 165) | def _get_fast_path_converter(params, bounds, solver_type):
  function _is_fast_path (line 201) | def _is_fast_path(
  function _is_fast_deriv_eval (line 226) | def _is_fast_deriv_eval(d, solver_type):
  function _is_1d_arr (line 241) | def _is_1d_arr(candidate):
  function _is_2d_arr (line 245) | def _is_2d_arr(candidate):

FILE: src/optimagic/parameters/kernel_transformations.py
  function covariance_to_internal (line 44) | def covariance_to_internal(external_values, constr):
  function covariance_to_internal_jacobian (line 51) | def covariance_to_internal_jacobian(external_values, constr):
  function covariance_from_internal (line 83) | def covariance_from_internal(internal_values, constr):
  function covariance_from_internal_jacobian (line 90) | def covariance_from_internal_jacobian(internal_values, constr):
  function sdcorr_to_internal (line 142) | def sdcorr_to_internal(external_values, constr):
  function sdcorr_to_internal_jacobian (line 149) | def sdcorr_to_internal_jacobian(external_values, constr):
  function sdcorr_from_internal (line 183) | def sdcorr_from_internal(internal_values, constr):
  function sdcorr_from_internal_jacobian (line 190) | def sdcorr_from_internal_jacobian(internal_values, constr):
  function probability_to_internal (line 274) | def probability_to_internal(external_values, constr):
  function probability_to_internal_jacobian (line 279) | def probability_to_internal_jacobian(external_values, constr):
  function probability_from_internal (line 313) | def probability_from_internal(internal_values, constr):
  function probability_from_internal_jacobian (line 318) | def probability_from_internal_jacobian(internal_values, constr):
  function linear_to_internal (line 348) | def linear_to_internal(external_values, constr):
  function linear_to_internal_jacobian (line 353) | def linear_to_internal_jacobian(external_values, constr):
  function linear_from_internal (line 357) | def linear_from_internal(internal_values, constr):
  function linear_from_internal_jacobian (line 362) | def linear_from_internal_jacobian(internal_values, constr):
  function _elimination_matrix (line 366) | def _elimination_matrix(dim):
  function _duplication_matrix (line 408) | def _duplication_matrix(dim):
  function _transformation_matrix (line 447) | def _transformation_matrix(dim):
  function _commutation_matrix (line 503) | def _commutation_matrix(dim):
  function _unit_vector_or_zeros (line 538) | def _unit_vector_or_zeros(index, size):

FILE: src/optimagic/parameters/nonlinear_constraints.py
  function process_nonlinear_constraints (line 16) | def process_nonlinear_constraints(
  function _process_nonlinear_constraint (line 93) | def _process_nonlinear_constraint(
  function equality_as_inequality_constraints (line 235) | def equality_as_inequality_constraints(nonlinear_constraints):
  function _equality_to_inequality (line 241) | def _equality_to_inequality(c):
  function vector_as_list_of_scalar_constraints (line 267) | def vector_as_list_of_scalar_constraints(nonlinear_constraints):
  function _vector_to_list_of_scalar (line 280) | def _vector_to_list_of_scalar(constraint):
  function _get_components (line 295) | def _get_components(fun, jac, idx):
  function _process_selector (line 318) | def _process_selector(c):
  function _compose_funcs (line 336) | def _compose_funcs(f, g):
  function _identity (line 340) | def _identity(x):
  function _extend_jacobian (line 349) | def _extend_jacobian(jac_mat, selection_indices, n_params):
  function _get_selection_indices (line 362) | def _get_selection_indices(params, selector):
  function _get_transformation (line 381) | def _get_transformation(lower_bounds, upper_bounds):
  function _get_transformation_type (line 409) | def _get_transformation_type(lower_bounds, upper_bounds):
  function _check_validity_and_return_evaluation (line 430) | def _check_validity_and_return_evaluation(c, params, skip_checks):

FILE: src/optimagic/parameters/process_constraints.py
  function process_constraints (line 31) | def process_constraints(
  function _replace_pairwise_equality_by_equality (line 120) | def _replace_pairwise_equality_by_equality(constraints):
  function _process_linear_weights (line 143) | def _process_linear_weights(constraints):
  function _replace_increasing_and_decreasing_by_linear (line 181) | def _replace_increasing_and_decreasing_by_linear(constraints):
  function _create_internal_bounds (line 216) | def _create_internal_bounds(lower, upper, constraints):
  function _create_internal_free (line 259) | def _create_internal_free(is_fixed_to_value, is_fixed_to_other, constrai...
  function _create_pre_replacements (line 285) | def _create_pre_replacements(internal_free):
  function _create_internal_fixed_value (line 306) | def _create_internal_fixed_value(fixed_values, constraints):

FILE: src/optimagic/parameters/process_selectors.py
  function process_selectors (line 13) | def process_selectors(constraints, params, tree_converter, param_names):
  function _get_selection_field (line 98) | def _get_selection_field(constraint, selector_case, params_case):
  function _get_selection_evaluator (line 139) | def _get_selection_evaluator(field, constraint, params_case, registry):
  function _get_params_case (line 192) | def _get_params_case(params):
  function _get_selector_case (line 204) | def _get_selector_case(constraint):
  function _fail_if_duplicates (line 212) | def _fail_if_duplicates(
  function _fail_if_selections_are_incompatible (line 226) | def _fail_if_selections_are_incompatible(selected, constraint):
  function _find_duplicates (line 244) | def _find_duplicates(list_):

FILE: src/optimagic/parameters/scale_conversion.py
  class ScaleConverter (line 11) | class ScaleConverter:
    method params_to_internal (line 15) | def params_to_internal(self, vec: NDArray[np.float64]) -> NDArray[np.f...
    method params_from_internal (line 23) | def params_from_internal(self, vec: NDArray[np.float64]) -> NDArray[np...
    method derivative_to_internal (line 31) | def derivative_to_internal(
    method derivative_from_internal (line 39) | def derivative_from_internal(
  function get_scale_converter (line 48) | def get_scale_converter(
  function calculate_scaling_factor_and_offset (line 108) | def calculate_scaling_factor_and_offset(

FILE: src/optimagic/parameters/scaling.py
  class ScalingOptions (line 10) | class ScalingOptions:
    method __post_init__ (line 31) | def __post_init__(self) -> None:
  class ScalingOptionsDict (line 35) | class ScalingOptionsDict(TypedDict):
  function pre_process_scaling (line 41) | def pre_process_scaling(
  function _validate_attribute_types_and_values (line 83) | def _validate_attribute_types_and_values(options: ScalingOptions) -> None:

FILE: src/optimagic/parameters/space_conversion.py
  function get_space_converter (line 46) | def get_space_converter(
  class SpaceConverter (line 154) | class SpaceConverter:
  function reparametrize_to_internal (line 161) | def reparametrize_to_internal(
  function reparametrize_from_internal (line 192) | def reparametrize_from_internal(
  function convert_external_derivative_to_internal (line 236) | def convert_external_derivative_to_internal(
  function _multiply_from_left (line 326) | def _multiply_from_left(mat_list):
  function _multiply_from_right (line 339) | def _multiply_from_right(mat_list):
  function pre_replace (line 352) | def pre_replace(internal_values, fixed_values, pre_replacements):
  function pre_replace_jacobian (line 385) | def pre_replace_jacobian(pre_replacements, dim_in):
  function transformation_jacobian (line 424) | def transformation_jacobian(transformations, pre_replaced):
  function post_replace (line 453) | def post_replace(external_values, post_replacements):
  function post_replace_jacobian (line 481) | def post_replace_jacobian(post_replacements):
  class InternalParams (line 515) | class InternalParams:

FILE: src/optimagic/parameters/tree_conversion.py
  function get_tree_converter (line 13) | def get_tree_converter(
  function _get_params_flatten (line 100) | def _get_params_flatten(registry):
  function _get_params_unflatten (line 107) | def _get_params_unflatten(registry, treedef):
  function _get_best_key_and_aggregator (line 114) | def _get_best_key_and_aggregator(needed_key, available_keys):
  function _get_derivative_flatten (line 141) | def _get_derivative_flatten(registry, solver_type, params, func_eval, de...
  class TreeConverter (line 174) | class TreeConverter(NamedTuple):
  class FlatParams (line 180) | class FlatParams(NamedTuple):

FILE: src/optimagic/parameters/tree_registry.py
  function get_registry (line 11) | def get_registry(extended=False, data_col="value"):
  function _flatten_df (line 47) | def _flatten_df(df, data_col):
  function _unflatten_df (line 61) | def _unflatten_df(aux_data, leaves, data_col):
  function _get_df_names (line 73) | def _get_df_names(df):
  function _index_element_to_string (line 83) | def _index_element_to_string(element):

FILE: src/optimagic/shared/check_option_dicts.py
  function check_optimization_options (line 4) | def check_optimization_options(options, usage, algorithm_mandatory=True):

FILE: src/optimagic/shared/compat.py
  function pd_df_map (line 9) | def pd_df_map(df, func, na_action=None, **kwargs):

FILE: src/optimagic/shared/process_user_function.py
  function partial_func_of_params (line 16) | def partial_func_of_params(func, kwargs, name="your function", skip_chec...
  function filter_kwargs (line 72) | def filter_kwargs(func, kwargs):
  function get_unpartialled_arguments (line 82) | def get_unpartialled_arguments(func):
  function get_arguments_without_default (line 92) | def get_arguments_without_default(func):
  function get_kwargs_from_args (line 104) | def get_kwargs_from_args(args, func, offset=0):
  function infer_aggregation_level (line 121) | def infer_aggregation_level(func):

FILE: src/optimagic/timing.py
  class CostModel (line 6) | class CostModel:
    method __post_init__ (line 13) | def __post_init__(self) -> None:

FILE: src/optimagic/type_conversion.py
  function _process_float_like (line 12) | def _process_float_like(value: Any) -> float:
  function _process_int_like (line 17) | def _process_int_like(value: Any) -> int:
  function _process_positive_int_like (line 27) | def _process_positive_int_like(value: Any) -> PositiveInt:
  function _process_non_negative_int_like (line 35) | def _process_non_negative_int_like(value: Any) -> NonNegativeInt:
  function _process_positive_float_like (line 43) | def _process_positive_float_like(value: Any) -> PositiveFloat:
  function _process_non_negative_float_like (line 51) | def _process_non_negative_float_like(value: Any) -> NonNegativeFloat:
  function _process_gt_one_float_like (line 59) | def _process_gt_one_float_like(value: Any) -> GtOneFloat:
  function _process_bool_like (line 67) | def _process_bool_like(value: Any) -> bool:

FILE: src/optimagic/typing.py
  class AggregationLevel (line 27) | class AggregationLevel(Enum):
  class Direction (line 35) | class Direction(str, Enum):
  class DictLikeAccess (line 43) | class DictLikeAccess:
    method __getitem__ (line 49) | def __getitem__(self, key: str) -> Any:
    method __iter__ (line 55) | def __iter__(self) -> Iterator[str]:
    method _dict_repr (line 58) | def _dict_repr(self) -> dict[str, Any]:
    method keys (line 61) | def keys(self) -> KeysView[str]:
    method items (line 64) | def items(self) -> ItemsView[str, Any]:
    method values (line 67) | def values(self) -> ValuesView[str]:
  class TupleLikeAccess (line 72) | class TupleLikeAccess:
    method __getitem__ (line 77) | def __getitem__(self, index: int | slice) -> Any:
    method __len__ (line 81) | def __len__(self) -> int:
    method __iter__ (line 84) | def __iter__(self) -> Iterator[str]:
  class ErrorHandling (line 89) | class ErrorHandling(Enum):
  class EvalTask (line 97) | class EvalTask(Enum):
  class BatchEvaluator (line 106) | class BatchEvaluator(Protocol):
    method __call__ (line 107) | def __call__(
  class IterationHistory (line 147) | class IterationHistory(DictLikeAccess):
  class MultiStartIterationHistory (line 163) | class MultiStartIterationHistory(TupleLikeAccess):

FILE: src/optimagic/utilities.py
  function fast_numpy_full (line 15) | def fast_numpy_full(length: int, fill_value: float) -> NDArray[np.float64]:
  function chol_params_to_lower_triangular_matrix (line 27) | def chol_params_to_lower_triangular_matrix(params):
  function cov_params_to_matrix (line 34) | def cov_params_to_matrix(cov_params):
  function cov_matrix_to_params (line 50) | def cov_matrix_to_params(cov):
  function sdcorr_params_to_sds_and_corr (line 54) | def sdcorr_params_to_sds_and_corr(sdcorr_params):
  function sds_and_corr_to_cov (line 63) | def sds_and_corr_to_cov(sds, corr):
  function cov_to_sds_and_corr (line 68) | def cov_to_sds_and_corr(cov):
  function sdcorr_params_to_matrix (line 75) | def sdcorr_params_to_matrix(sdcorr_params):
  function cov_matrix_to_sdcorr_params (line 91) | def cov_matrix_to_sdcorr_params(cov):
  function number_of_triangular_elements_to_dimension (line 98) | def number_of_triangular_elements_to_dimension(num):
  function dimension_to_number_of_triangular_elements (line 114) | def dimension_to_number_of_triangular_elements(dim):
  function propose_alternatives (line 124) | def propose_alternatives(requested, possibilities, number=3):
  function robust_cholesky (line 154) | def robust_cholesky(matrix, threshold=None, return_info=False):
  function robust_inverse (line 193) | def robust_inverse(matrix, msg=""):
  function _internal_robust_cholesky (line 220) | def _internal_robust_cholesky(matrix, threshold):
  function _make_cholesky_unique (line 265) | def _make_cholesky_unique(chol):
  function hash_array (line 279) | def hash_array(arr):
  function calculate_trustregion_initial_radius (line 286) | def calculate_trustregion_initial_radius(x):
  function to_pickle (line 302) | def to_pickle(obj, path):
  function read_pickle (line 307) | def read_pickle(path):
  function isscalar (line 311) | def isscalar(element):
  function get_rng (line 319) | def get_rng(seed):
  function list_of_dicts_to_dict_of_lists (line 339) | def list_of_dicts_to_dict_of_lists(list_of_dicts):
  function dict_of_lists_to_list_of_dicts (line 356) | def dict_of_lists_to_list_of_dicts(dict_of_lists):

FILE: src/optimagic/visualization/backends.py
  class LinePlotFunction (line 22) | class LinePlotFunction(Protocol):
    method __call__ (line 23) | def __call__(
  class GridLinePlotFunction (line 55) | class GridLinePlotFunction(Protocol):
    method __call__ (line 56) | def __call__(
  function _line_plot_plotly (line 88) | def _line_plot_plotly(
  function _grid_line_plot_plotly (line 182) | def _grid_line_plot_plotly(
  function _line_plot_matplotlib (line 255) | def _line_plot_matplotlib(
  function _grid_line_plot_matplotlib (line 345) | def _grid_line_plot_matplotlib(
  function _line_plot_bokeh (line 426) | def _line_plot_bokeh(
  function _grid_line_plot_bokeh (line 529) | def _grid_line_plot_bokeh(
  function _line_plot_altair (line 619) | def _line_plot_altair(
  function _grid_line_plot_altair (line 729) | def _grid_line_plot_altair(
  function line_plot (line 868) | def line_plot(
  function grid_line_plot (line 931) | def grid_line_plot(
  function _get_plot_function (line 1035) | def _get_plot_function(
  function _get_plot_function (line 1042) | def _get_plot_function(
  function _get_plot_function (line 1048) | def _get_plot_function(

FILE: src/optimagic/visualization/convergence_plot.py
  function convergence_plot (line 62) | def convergence_plot(
  function _extract_convergence_plot_lines (line 244) | def _extract_convergence_plot_lines(
  function _check_only_allowed_subset_provided (line 305) | def _check_only_allowed_subset_provided(

FILE: src/optimagic/visualization/deviation_plot.py
  function deviation_plot (line 10) | def deviation_plot(

FILE: src/optimagic/visualization/history_plots.py
  function criterion_plot (line 42) | def criterion_plot(
  function _harmonize_inputs_to_dict (line 116) | def _harmonize_inputs_to_dict(
  function _convert_key_to_str (line 150) | def _convert_key_to_str(key: Any) -> str:
  function params_plot (line 160) | def params_plot(
  class _PlottingMultistartHistory (line 227) | class _PlottingMultistartHistory:
  function _retrieve_optimization_data_from_results (line 243) | def _retrieve_optimization_data_from_results(
  function _retrieve_optimization_data_from_single_result (line 267) | def _retrieve_optimization_data_from_single_result(
  function _retrieve_optimization_data_from_result_object (line 316) | def _retrieve_optimization_data_from_result_object(
  function _retrieve_optimization_data_from_database (line 385) | def _retrieve_optimization_data_from_database(
  function _get_stacked_local_histories (line 447) | def _get_stacked_local_histories(
  function _extract_criterion_plot_lines (line 485) | def _extract_criterion_plot_lines(
  function _extract_params_plot_lines (line 558) | def _extract_params_plot_lines(

FILE: src/optimagic/visualization/plotting_utilities.py
  class LineData (line 16) | class LineData:
  class MarkerData (line 36) | class MarkerData:
  function combine_plots (line 53) | def combine_plots(
  function create_grid_plot (line 172) | def create_grid_plot(
  function create_ind_dict (line 249) | def create_ind_dict(
  function _clean_legend_duplicates (line 316) | def _clean_legend_duplicates(fig):
  function get_make_subplot_kwargs (line 330) | def get_make_subplot_kwargs(sharex, sharey, kwrgs, plots_per_row, plots):
  function get_layout_kwargs (line 353) | def get_layout_kwargs(layout_kwargs, legend_kwargs, title_kwargs, templa...
  function _ensure_array_from_plotly_data (line 377) | def _ensure_array_from_plotly_data(data: Any) -> np.ndarray:
  function _decode_base64_data (line 403) | def _decode_base64_data(b64data: str, dtype: str) -> np.ndarray:
  function get_palette_cycle (line 408) | def get_palette_cycle(palette: list[str] | str) -> "itertools.cycle[str]":

FILE: src/optimagic/visualization/profile_plot.py
  function profile_plot (line 37) | def profile_plot(
  function _extract_profile_plot_lines (line 158) | def _extract_profile_plot_lines(
  function create_solution_times (line 205) | def create_solution_times(
  function _determine_alpha_grid (line 249) | def _determine_alpha_grid(solution_times: pd.DataFrame) -> list[np.float...
  function _find_switch_points (line 259) | def _find_switch_points(solution_times: pd.DataFrame) -> NDArray[np.floa...
  function _get_profile_plot_xlabel (line 280) | def _get_profile_plot_xlabel(runtime_measure: str, normalize_runtime: bo...

FILE: src/optimagic/visualization/slice_plot.py
  function slice_plot (line 34) | def slice_plot(
  function _get_processed_func_and_func_eval (line 198) | def _get_processed_func_and_func_eval(
  function _get_plot_data (line 229) | def _get_plot_data(
  function _retrieve_func_values (line 300) | def _retrieve_func_values(
  function _extract_slice_plot_lines_and_labels (line 325) | def _extract_slice_plot_lines_and_labels(
  function _get_axis_limits (line 369) | def _get_axis_limits(

FILE: src/optimagic/visualization/slice_plot_3d.py
  function slice_plot_3d (line 28) | def slice_plot_3d(  # type: ignore[no-untyped-def]
  function generate_evaluation_points (line 363) | def generate_evaluation_points(  # type: ignore[no-untyped-def]
  function plot_data_cache (line 416) | def plot_data_cache(  # type: ignore[no-untyped-def]
  function plot_line (line 474) | def plot_line(  # type: ignore[no-untyped-def]
  function plot_surface (line 528) | def plot_surface(  # type: ignore[no-untyped-def]
  function plot_contour (line 570) | def plot_contour(  # type: ignore[no-untyped-def]
  class ProjectionConfig (line 612) | class ProjectionConfig(str, Enum):
    method validate (line 620) | def validate(cls, value):  # type: ignore[no-untyped-def]
    method is_univariate (line 631) | def is_univariate(self) -> bool:
    method is_surface (line 635) | def is_surface(self) -> bool:
    method is_contour (line 639) | def is_contour(self) -> bool:
  class Projection (line 643) | class Projection:
    method __init__ (line 652) | def __init__(self, value):  # type: ignore[no-untyped-def]
    method _parse (line 659) | def _parse(self, value):  # type: ignore[no-untyped-def]
    method is_univariate (line 679) | def is_univariate(self) -> bool:
    method is_dict (line 683) | def is_dict(self) -> bool:
    method get_config (line 686) | def get_config(self):  # type: ignore[no-untyped-def]
  function compute_yaxis_range (line 692) | def compute_yaxis_range(y: list[float], expand_yrange: float) -> list[fl...
  function combine_plots (line 699) | def combine_plots(  # type: ignore[no-untyped-def]
  function _get_subplot_spec (line 811) | def _get_subplot_spec(  # type: ignore[no-untyped-def]
  function evaluate_plot_kwargs (line 835) | def evaluate_plot_kwargs(plot_kwargs):  # type: ignore[no-untyped-def]
  function evaluate_make_subplot_kwargs (line 865) | def evaluate_make_subplot_kwargs(  # type: ignore[no-untyped-def]
  function evaluate_layout_kwargs (line 919) | def evaluate_layout_kwargs(  # type: ignore[no-untyped-def]

FILE: tests/conftest.py
  function fresh_directory (line 11) | def fresh_directory(tmp_path):  # noqa: PT004
  function logit_inputs (line 17) | def logit_inputs():
  function logit_object (line 30) | def logit_object():
  function close_mpl_figures (line 38) | def close_mpl_figures():

FILE: tests/estimagic/examples/test_logit.py
  function test_logit_loglikes (line 8) | def test_logit_loglikes(logit_inputs, logit_object):
  function test_logit_jac (line 16) | def test_logit_jac(logit_inputs, logit_object):
  function test_logit_grad (line 25) | def test_logit_grad(logit_inputs, logit_object):
  function test_logit_hessian (line 32) | def test_logit_hessian(logit_inputs, logit_object):

FILE: tests/estimagic/test_bootstrap.py
  function aaae (line 10) | def aaae(obj1, obj2, decimal=6):
  function setup (line 17) | def setup():
  function expected (line 33) | def expected():
  function seaborn_example (line 66) | def seaborn_example():
  function _outcome_func (line 84) | def _outcome_func(data, shift=0):
  function _outcome_ols (line 101) | def _outcome_ols(data):
  function test_bootstrap_with_outcome_kwargs (line 110) | def test_bootstrap_with_outcome_kwargs(shift, setup):
  function test_bootstrap_existing_outcomes (line 122) | def test_bootstrap_existing_outcomes(setup):
  function test_bootstrap_from_outcomes (line 138) | def test_bootstrap_from_outcomes(setup, expected):
  function test_bootstrap_from_outcomes_private_methods (line 161) | def test_bootstrap_from_outcomes_private_methods(setup, expected):
  function test_bootstrap_from_outcomes_single_outcome (line 178) | def test_bootstrap_from_outcomes_single_outcome(setup, expected):
  function test_outcome_not_callable (line 188) | def test_outcome_not_callable(setup):
  function test_existing_result_wrong_input_type (line 198) | def test_existing_result_wrong_input_type(input_type, setup):
  function test_cov_correct_return_type (line 212) | def test_cov_correct_return_type(return_type, setup):
  function test_cov_wrong_return_type (line 220) | def test_cov_wrong_return_type(setup):
  function test_existing_result (line 234) | def test_existing_result(seaborn_example):

FILE: tests/estimagic/test_bootstrap_ci.py
  function aaae (line 14) | def aaae(obj1, obj2, decimal=6):
  function setup (line 21) | def setup():
  function expected (line 35) | def expected():
  function _outcome_fun_series (line 52) | def _outcome_fun_series(data):
  function _outcome_func_dict (line 56) | def _outcome_func_dict(data):
  function _outcome_func_arr (line 60) | def _outcome_func_arr(data):
  function test_ci (line 71) | def test_ci(outcome, method, setup, expected):
  function test_check_inputs_data (line 84) | def test_check_inputs_data():
  function test_check_inputs_weight_by (line 93) | def test_check_inputs_weight_by(setup):
  function test_get_bootstrap_indices_heterogeneous_weights (line 99) | def test_get_bootstrap_indices_heterogeneous_weights():
  function test_check_inputs_cluster_by (line 116) | def test_check_inputs_cluster_by(setup):
  function test_check_inputs_ci_method (line 125) | def test_check_inputs_ci_method(setup):
  function test_check_inputs_ci_level (line 138) | def test_check_inputs_ci_level(setup):

FILE: tests/estimagic/test_bootstrap_outcomes.py
  function data (line 17) | def data():
  function _mean_return_series (line 22) | def _mean_return_series(data):
  function _mean_return_dict (line 27) | def _mean_return_dict(data):
  function _mean_return_array (line 32) | def _mean_return_array(data):
  function test_get_bootstrap_estimates_runs (line 46) | def test_get_bootstrap_estimates_runs(outcome, data):
  function test_bootstrap_estimates_from_indices_without_errors (line 56) | def test_bootstrap_estimates_from_indices_without_errors(data):
  function test_get_bootstrap_estimates_with_error_and_raise (line 70) | def test_get_bootstrap_estimates_with_error_and_raise(data):
  function test_get_bootstrap_estimates_with_all_errors_and_continue (line 86) | def test_get_bootstrap_estimates_with_all_errors_and_continue(data):
  function test_get_bootstrap_estimates_with_some_errors_and_continue (line 103) | def test_get_bootstrap_estimates_with_some_errors_and_continue(data):

FILE: tests/estimagic/test_bootstrap_samples.py
  function data (line 19) | def data():
  function test_get_bootstrap_indices_randomization_works_without_clustering (line 27) | def test_get_bootstrap_indices_randomization_works_without_clustering(da...
  function test_get_bootstrap_indices_radomization_works_with_clustering (line 33) | def test_get_bootstrap_indices_radomization_works_with_clustering(data):
  function test_get_bootstrap_indices_randomization_works_with_weights (line 39) | def test_get_bootstrap_indices_randomization_works_with_weights(data):
  function test_get_bootstrap_indices_randomization_works_with_weights_and_clustering (line 45) | def test_get_bootstrap_indices_randomization_works_with_weights_and_clus...
  function test_get_bootstrap_indices_randomization_works_with_and_without_weights (line 53) | def test_get_bootstrap_indices_randomization_works_with_and_without_weig...
  function test_get_boostrap_indices_randomization_works_with_extreme_case (line 61) | def test_get_boostrap_indices_randomization_works_with_extreme_case(data):
  function test_clustering_leaves_households_intact (line 70) | def test_clustering_leaves_households_intact(data):
  function test_convert_cluster_ids_to_indices (line 81) | def test_convert_cluster_ids_to_indices():
  function test_get_bootstrap_samples_from_indices (line 89) | def test_get_bootstrap_samples_from_indices():
  function test_get_bootstrap_samples_runs (line 97) | def test_get_bootstrap_samples_runs(data):
  function sample_data (line 103) | def sample_data():
  function test_no_weights_no_clusters (line 107) | def test_no_weights_no_clusters(sample_data):
  function test_weights_no_clusters (line 112) | def test_weights_no_clusters(sample_data):
  function test_weights_and_clusters (line 118) | def test_weights_and_clusters(sample_data):
  function test_invalid_weight_column (line 126) | def test_invalid_weight_column():
  function test_invalid_cluster_column (line 132) | def test_invalid_cluster_column(sample_data):
  function test_empty_dataframe (line 137) | def test_empty_dataframe():
  function test_some_zero_weights_with_clusters (line 143) | def test_some_zero_weights_with_clusters():

FILE: tests/estimagic/test_estimate_ml.py
  function aaae (line 25) | def aaae(obj1, obj2, decimal=3):
  function multivariate_normal_loglike (line 37) | def multivariate_normal_loglike(params, data):
  function multivariate_normal_example (line 45) | def multivariate_normal_example():
  function test_estimate_ml_with_constraints (line 61) | def test_estimate_ml_with_constraints(multivariate_normal_example):
  function logit_np_inputs (line 99) | def logit_np_inputs():
  function fitted_logit_model (line 113) | def fitted_logit_model(logit_object):
  function test_estimate_ml_with_logit_no_constraints (line 145) | def test_estimate_ml_with_logit_no_constraints(
  function test_estimate_ml_with_logit_constraints (line 250) | def test_estimate_ml_with_logit_constraints(
  function test_estimate_ml_optimize_options_false (line 329) | def test_estimate_ml_optimize_options_false(fitted_logit_model, logit_np...
  function test_estimate_ml_algorithm_type (line 354) | def test_estimate_ml_algorithm_type(logit_np_inputs):
  function test_estimate_ml_algorithm (line 368) | def test_estimate_ml_algorithm(logit_np_inputs):
  function normal_loglike (line 388) | def normal_loglike(params, y):
  function normal_inputs (line 393) | def normal_inputs():
  function test_estimate_ml_general_pytree (line 403) | def test_estimate_ml_general_pytree(normal_inputs):
  function test_to_pickle (line 434) | def test_to_pickle(normal_inputs, tmp_path):
  function test_caching (line 452) | def test_caching(normal_inputs):

FILE: tests/estimagic/test_estimate_msm.py
  function _sim_pd (line 19) | def _sim_pd(params):
  function _sim_np (line 23) | def _sim_np(params):
  function _sim_dict_pd (line 27) | def _sim_dict_pd(params):
  function _sim_dict_np (line 31) | def _sim_dict_np(params):
  function test_estimate_msm (line 48) | def test_estimate_msm(simulate_moments, moments_cov, optimize_options):
  function test_check_and_process_optimize_options_with_invalid_entries (line 104) | def test_check_and_process_optimize_options_with_invalid_entries():
  function test_estimate_msm_ls (line 121) | def test_estimate_msm_ls(simulate_moments, moments_cov, optimize_options):
  function test_estimate_msm_with_jacobian (line 142) | def test_estimate_msm_with_jacobian():
  function test_estimate_msm_with_algorithm_type (line 165) | def test_estimate_msm_with_algorithm_type():
  function test_estimate_msm_with_algorithm (line 182) | def test_estimate_msm_with_algorithm():
  function test_to_pickle (line 199) | def test_to_pickle(tmp_path):
  function test_caching (line 218) | def test_caching():

FILE: tests/estimagic/test_estimate_msm_dict_params_and_moments.py
  function test_estimate_msm_dict_params_and_moments (line 12) | def test_estimate_msm_dict_params_and_moments():
  function assert_almost_equal (line 96) | def assert_almost_equal(x, y, decimal=6):

FILE: tests/estimagic/test_estimation_table.py
  function _get_models_multiindex (line 36) | def _get_models_multiindex():
  function _get_models_single_index (line 50) | def _get_models_single_index():
  function _get_models_multiindex_multi_column (line 62) | def _get_models_multiindex_multi_column():
  function _read_csv_string (line 77) | def _read_csv_string(string, index_cols=None):
  function test_estimation_table (line 95) | def test_estimation_table():
  function test_one_and_stage_rendering_are_equal (line 141) | def test_one_and_stage_rendering_are_equal(return_type, render_func, mod...
  function test_process_model_stats_model (line 155) | def test_process_model_stats_model():
  function test_convert_model_to_series_with_ci (line 187) | def test_convert_model_to_series_with_ci():
  function test_convert_model_to_series_with_se (line 215) | def test_convert_model_to_series_with_se():
  function test_convert_model_to_series_without_inference (line 234) | def test_convert_model_to_series_without_inference():
  function test_create_statistics_sr (line 251) | def test_create_statistics_sr():
  function test_process_frame_indices_index (line 282) | def test_process_frame_indices_index():
  function test_process_frame_indices_columns (line 312) | def test_process_frame_indices_columns():
  function test_apply_number_format_tuple (line 330) | def test_apply_number_format_tuple():
  function test_apply_number_format_int (line 340) | def test_apply_number_format_int():
  function test_apply_number_format_callable (line 350) | def test_apply_number_format_callable():
  function test_get_digits_after_decimal (line 362) | def test_get_digits_after_decimal():
  function test_create_group_to_col_position (line 371) | def test_create_group_to_col_position():
  function test_get_model_names (line 385) | def test_get_model_names():
  function test_get_default_column_names_and_groups (line 395) | def test_get_default_column_names_and_groups():
  function test_get_default_column_names_and_groups_undefined_groups (line 404) | def test_get_default_column_names_and_groups_undefined_groups():
  function test_customize_col_groups (line 412) | def test_customize_col_groups():
  function test_customize_col_names_dict (line 420) | def test_customize_col_names_dict():
  function test_customize_col_names_list (line 428) | def test_customize_col_names_list():
  function test_get_params_frames_with_common_index (line 436) | def test_get_params_frames_with_common_index():
  function test_get_params_frames_with_common_index_multiindex (line 458) | def test_get_params_frames_with_common_index_multiindex():
  function test_check_order_of_model_names_raises_error (line 471) | def test_check_order_of_model_names_raises_error():
  function test_manual_extra_info (line 477) | def test_manual_extra_info():

FILE: tests/estimagic/test_lollipop_plot.py
  function test_lollipop_plot_runs (line 7) | def test_lollipop_plot_runs():

FILE: tests/estimagic/test_ml_covs.py
  function jac (line 23) | def jac():
  function hess (line 37) | def hess():
  function design_options (line 50) | def design_options():
  function test_clustering (line 64) | def test_clustering(jac, design_options):
  function test_stratification (line 77) | def test_stratification(jac, design_options):
  function test_sandwich_step (line 91) | def test_sandwich_step(hess):
  function test_cov_robust (line 105) | def test_cov_robust(jac, hess):
  function test_cov_cluster_robust (line 119) | def test_cov_cluster_robust(jac, hess, design_options):
  function test_cov_strata_robust (line 138) | def test_cov_strata_robust(jac, hess, design_options):
  function test_cov_hessian (line 156) | def test_cov_hessian(hess):
  function test_cov_jacobian (line 170) | def test_cov_jacobian(jac):
  function get_expected_covariance (line 186) | def get_expected_covariance(model, cov_method):
  function get_input (line 203) | def get_input(model, input_types):
  function test_cov_function_against_statsmodels (line 231) | def test_cov_function_against_statsmodels(model, method):

FILE: tests/estimagic/test_msm_covs.py
  function test_cov_robust_and_cov_optimal_are_equivalent_in_special_case (line 24) | def test_cov_robust_and_cov_optimal_are_equivalent_in_special_case(jac, ...

FILE: tests/estimagic/test_msm_sensitivity.py
  function simulate_aggregated_moments (line 20) | def simulate_aggregated_moments(params, x, y):
  function simulate_moment_contributions (line 28) | def simulate_moment_contributions(params, x, y):
  function moments_cov (line 52) | def moments_cov(params, func_kwargs):
  function params (line 60) | def params():
  function func_kwargs (line 70) | def func_kwargs():
  function jac (line 79) | def jac(params, func_kwargs):
  function weights (line 91) | def weights(moments_cov):
  function params_cov_opt (line 96) | def params_cov_opt(jac, weights):
  function test_sensitivity_to_bias (line 100) | def test_sensitivity_to_bias(jac, weights, params):
  function test_fundamental_sensitivity_to_noise (line 114) | def test_fundamental_sensitivity_to_noise(
  function test_actual_sensitivity_to_noise (line 135) | def test_actual_sensitivity_to_noise(jac, weights, moments_cov, params_c...
  function test_actual_sensitivity_to_removal (line 155) | def test_actual_sensitivity_to_removal(
  function test_fundamental_sensitivity_to_removal (line 174) | def test_fundamental_sensitivity_to_removal(jac, moments_cov, params_cov...
  function test_sensitivity_to_weighting (line 191) | def test_sensitivity_to_weighting(jac, weights, moments_cov, params_cov_...

FILE: tests/estimagic/test_msm_sensitivity_via_estimate_msm.py
  function simulate_aggregated_moments (line 11) | def simulate_aggregated_moments(params, x, y):
  function simulate_moment_contributions (line 19) | def simulate_moment_contributions(params, x, y):
  function moments_cov (line 43) | def moments_cov(params, func_kwargs):
  function params (line 51) | def params():
  function func_kwargs (line 61) | def func_kwargs():
  function msm_res (line 70) | def msm_res(params, moments_cov, func_kwargs):
  function test_sensitivity_to_bias (line 84) | def test_sensitivity_to_bias(msm_res):
  function test_fundamental_sensitivity_to_noise (line 97) | def test_fundamental_sensitivity_to_noise(msm_res):
  function test_actual_sensitivity_to_noise (line 110) | def test_actual_sensitivity_to_noise(msm_res):
  function test_actual_sensitivity_to_removal (line 123) | def test_actual_sensitivity_to_removal(msm_res):
  function test_fundamental_sensitivity_to_removal (line 137) | def test_fundamental_sensitivity_to_removal(msm_res):
  function test_sensitivity_to_weighting (line 151) | def test_sensitivity_to_weighting(msm_res):

FILE: tests/estimagic/test_msm_weighting.py
  function expected_values (line 18) | def expected_values():
  function test_get_weighting_matrix (line 30) | def test_get_weighting_matrix(moments_cov, method):
  function test_assemble_block_diagonal_matrix_pd (line 50) | def test_assemble_block_diagonal_matrix_pd(expected_values):
  function test_assemble_block_diagonal_matrix_mixed (line 62) | def test_assemble_block_diagonal_matrix_mixed(expected_values):
  function test_get_moments_cov_runs_with_pytrees (line 69) | def test_get_moments_cov_runs_with_pytrees():
  function test_get_moments_cov_passes_bootstrap_kwargs_to_bootstrap (line 95) | def test_get_moments_cov_passes_bootstrap_kwargs_to_bootstrap():

FILE: tests/estimagic/test_shared.py
  function inputs (line 23) | def inputs():
  function test_process_pandas_arguments_all_pd (line 32) | def test_process_pandas_arguments_all_pd(inputs):
  function test_process_pandas_arguments_incompatible_names (line 43) | def test_process_pandas_arguments_incompatible_names(inputs):
  function _from_internal (line 50) | def _from_internal(x, return_type="flat"):  # noqa: ARG001
  class FakeConverter (line 54) | class FakeConverter(NamedTuple):
  class FakeInternalParams (line 59) | class FakeInternalParams(NamedTuple):
  function test_transform_covariance_no_bounds (line 66) | def test_transform_covariance_no_bounds():
  function test_transform_covariance_with_clipping (line 89) | def test_transform_covariance_with_clipping():
  function test_transform_covariance_invalid_bounds (line 113) | def test_transform_covariance_invalid_bounds():
  class FakeFreeParams (line 134) | class FakeFreeParams(NamedTuple):
  function test_transform_free_cov_to_cov_pytree (line 140) | def test_transform_free_cov_to_cov_pytree():
  function test_transform_free_cov_to_cov_array (line 155) | def test_transform_free_cov_to_cov_array():
  function test_transform_free_cov_to_cov_dataframe (line 168) | def test_transform_free_cov_to_cov_dataframe():
  function test_transform_free_cov_to_cov_invalid (line 184) | def test_transform_free_cov_to_cov_invalid():
  function test_transform_free_values_to_params_tree (line 194) | def test_transform_free_values_to_params_tree():
  function test_get_derivative_case (line 206) | def test_get_derivative_case():
  function test_to_numpy_invalid (line 212) | def test_to_nump...
Condensed preview — 381 files, each showing path, character count, and a content snippet. Download the .json file or copy to clipboard for the full structured content (5,050K chars).
[
  {
    "path": ".github/CODE_OF_CONDUCT.md",
    "chars": 86,
    "preview": "# Code of Conduct\n\n- [NumFOCUS Code of Conduct](https://numfocus.org/code-of-conduct)\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug-report.md",
    "chars": 545,
    "preview": "---\nname: Bug Report\nabout: Create a report to help us improve\ntitle: ''\nlabels: bug\nassignees: ''\n\n---\n\n### Bug descrip"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/enhancement.md",
    "chars": 623,
    "preview": "---\nname: Enhancement\nabout: Enhance an existing component.\ntitle: ''\nlabels: enhancement\nassignees: ''\n\n---\n\n* optimagi"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "chars": 594,
    "preview": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: ''\nlabels: feature-request\nassignees: ''\n\n---\n\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE/pull_request_template.md",
    "chars": 425,
    "preview": "### What problem do you want to solve?\n\nReference the issue or discussion, if there is any. Provide a description of you"
  },
  {
    "path": ".github/workflows/main.yml",
    "chars": 4190,
    "preview": "---\nname: main\nconcurrency:\n  group: ${{ github.head_ref || github.run_id }}\n  cancel-in-progress: true\non:\n  push:\n    "
  },
  {
    "path": ".github/workflows/publish-to-pypi.yml",
    "chars": 823,
    "preview": "---\nname: PyPI\non: push\njobs:\n  build-n-publish:\n    name: Build and publish optimagic Python 🐍 distributions 📦 to PyPI\n"
  },
  {
    "path": ".gitignore",
    "chars": 1588,
    "preview": "# AI\nCLAUDE.md\n\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# MacOS specific service stor"
  },
  {
    "path": ".pre-commit-config.yaml",
    "chars": 3195,
    "preview": "---\nrepos:\n  - repo: meta\n    hooks:\n      - id: check-hooks-apply\n      - id: check-useless-excludes\n        # - id: id"
  },
  {
    "path": ".readthedocs.yml",
    "chars": 376,
    "preview": "---\nversion: 2\nbuild:\n  os: ubuntu-24.04\n  tools:\n    python: '3.14'\n  jobs:\n    create_environment:\n      - asdf plugin"
  },
  {
    "path": ".tools/create_algo_selection_code.py",
    "chars": 15715,
    "preview": "import importlib\nimport inspect\nimport pkgutil\nimport textwrap\nfrom itertools import combinations\nfrom types import Modu"
  },
  {
    "path": ".tools/test_create_algo_selection_code.py",
    "chars": 394,
    "preview": "from create_algo_selection_code import _generate_category_combinations\n\n\ndef test_generate_category_combinations() -> No"
  },
  {
    "path": ".tools/update_algo_selection_hook.py",
    "chars": 972,
    "preview": "#!/usr/bin/env python\nimport importlib.util\nimport subprocess\nimport sys\nfrom pathlib import Path\nfrom typing import Any"
  },
  {
    "path": ".yamllint.yml",
    "chars": 708,
    "preview": "---\nyaml-files:\n  - '*.yaml'\n  - '*.yml'\n  - .yamllint\nrules:\n  braces: enable\n  brackets: enable\n  colons: enable\n  com"
  },
  {
    "path": "CHANGES.md",
    "chars": 31464,
    "preview": "# Changes\n\nThis is a record of all past optimagic releases and what went into them in reverse\nchronological order. We fo"
  },
  {
    "path": "CITATION",
    "chars": 521,
    "preview": "\nPlease use one of the following samples to cite the optimagic version (change\nx.y) from this installation\n\nText:\n\n[opti"
  },
  {
    "path": "LICENSE",
    "chars": 1057,
    "preview": "Copyright 2019-2021 Janos Gabler\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this\ns"
  },
  {
    "path": "README.md",
    "chars": 4844,
    "preview": "<a href=\"https://optimagic.readthedocs.io\">\n    <p align=\"center\">\n        <img src=\"https://raw.githubusercontent.com/o"
  },
  {
    "path": "codecov.yml",
    "chars": 300,
    "preview": "---\ncodecov:\n  notify:\n    require_ci_to_pass: true\ncoverage:\n  precision: 2\n  round: down\n  range: 50...100\n  status:\n "
  },
  {
    "path": "docs/Makefile",
    "chars": 611,
    "preview": "# Minimal makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line.\nSPHINXOPTS    =\nSPHI"
  },
  {
    "path": "docs/make.bat",
    "chars": 781,
    "preview": "@ECHO OFF\n\npushd %~dp0\n\nREM Command file for Sphinx documentation\n\nif \"%SPHINXBUILD%\" == \"\" (\n\tset SPHINXBUILD=sphinx-bu"
  },
  {
    "path": "docs/source/_static/css/custom.css",
    "chars": 519,
    "preview": "/* Remove execution count for notebook cells. */\ndiv.prompt {\n  display: none;\n}\n\n\n/* Classes for the index page. */\n.in"
  },
  {
    "path": "docs/source/_static/css/termynal.css",
    "chars": 2217,
    "preview": "/**\n * termynal.js\n *\n * @author Ines Montani <ines@ines.io>\n * @version 0.0.1\n * @license MIT\n */\n\n:root {\n    --color-"
  },
  {
    "path": "docs/source/_static/css/termynal_custom.css",
    "chars": 1690,
    "preview": ".termynal-comment {\n    color: #4a968f;\n    font-style: italic;\n    display: block;\n}\n\n.termy [data-termynal] {\n    whit"
  },
  {
    "path": "docs/source/_static/js/custom.js",
    "chars": 5057,
    "preview": "/*\n\nThe following code is copied from https://github.com/tiangolo/typer.\n\nThe MIT License (MIT)\n\nCopyright (c) 2019 Seba"
  },
  {
    "path": "docs/source/_static/js/require.js",
    "chars": 17590,
    "preview": "/** vim: et:ts=4:sw=4:sts=4\n * @license RequireJS 2.3.7 Copyright jQuery Foundation and other contributors.\n * Released "
  },
  {
    "path": "docs/source/_static/js/termynal.js",
    "chars": 10698,
    "preview": "/*\n\nThe original author of the file is Ines Montani.\n\ntermynal.js\nA lightweight, modern and extensible animated terminal"
  },
  {
    "path": "docs/source/algorithms.md",
    "chars": 231043,
    "preview": "(list_of_algorithms)=\n\n# Optimizers\n\nCheck out {ref}`how-to-select-algorithms` to see how to select an algorithm and spe"
  },
  {
    "path": "docs/source/conf.py",
    "chars": 9416,
    "preview": "#!/usr/bin/env python3\n#\n# optimagic documentation build configuration file, created by\n# sphinx-quickstart on Fri Jan 1"
  },
  {
    "path": "docs/source/development/changes.md",
    "chars": 49,
    "preview": "(changes)=\n\n```{include} ../../../CHANGES.md\n```\n"
  },
  {
    "path": "docs/source/development/code_of_conduct.md",
    "chars": 270,
    "preview": "(coc)=\n\n## Code of Conduct\n\nThe optimagic project has a [Code of Conduct][conduct] to which all contributors must\nadhere"
  },
  {
    "path": "docs/source/development/credits.md",
    "chars": 5420,
    "preview": "# Credits\n\n## The optimagic Team\n\n```{eval-rst}\n+---------------------------------------------------------------+-------"
  },
  {
    "path": "docs/source/development/enhancement_proposals.md",
    "chars": 409,
    "preview": "# Enhancement Proposals\n\noptimagic Enhancement Proposals (EPs) can be used to discuss and design large changes.\nEP-00 de"
  },
  {
    "path": "docs/source/development/ep-00-governance-model.md",
    "chars": 8706,
    "preview": "(ep-00)=\n\n# EP-00: Governance model & code of conduct\n\n```{eval-rst}\n+------------+-------------------------------------"
  },
  {
    "path": "docs/source/development/ep-01-pytrees.md",
    "chars": 29870,
    "preview": "(eppytrees)=\n\n# EP-01: Pytrees\n\n```{eval-rst}\n+------------+------------------------------------------------------------"
  },
  {
    "path": "docs/source/development/ep-02-typing.md",
    "chars": 67784,
    "preview": "(eeptyping)=\n\n# EP-02: Static typing\n\n```{eval-rst}\n+------------+------------------------------------------------------"
  },
  {
    "path": "docs/source/development/ep-03-alignment.md",
    "chars": 6579,
    "preview": "(eepalignment)=\n\n# EP-03: Alignment with SciPy\n\n```{eval-rst}\n+------------+--------------------------------------------"
  },
  {
    "path": "docs/source/development/how_to_contribute.md",
    "chars": 5619,
    "preview": "(how-to-contribute)=\n\n# How to contribute\n\n## 1. Intro\n\nWe welcome and greatly appreciate contributions of all forms and"
  },
  {
    "path": "docs/source/development/index.md",
    "chars": 135,
    "preview": "# Development\n\n```{toctree}\n---\nmaxdepth: 1\n---\ncode_of_conduct\nhow_to_contribute\nstyleguide\nenhancement_proposals\ncredi"
  },
  {
    "path": "docs/source/development/styleguide.md",
    "chars": 6313,
    "preview": "(style_guide)=\n\n# Styleguide\n\nYour contribution should fulfill the criteria provided below.\n\n## Styleguide for the codeb"
  },
  {
    "path": "docs/source/estimagic/explanation/bootstrap_ci.md",
    "chars": 3977,
    "preview": "(bootstrap-cis)=\n\n# Bootstrap Confidence Intervals\n\nWe use the notation and formulations provided in chapter 10 of {cite"
  },
  {
    "path": "docs/source/estimagic/explanation/bootstrap_montecarlo_comparison.ipynb",
    "chars": 57194,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Bootstrap Monte Carlo Comparison\""
  },
  {
    "path": "docs/source/estimagic/explanation/cluster_robust_likelihood_inference.md",
    "chars": 216,
    "preview": "(robust_likelihood_inference)=\n\n# Robust Likelihood inference\n\n(to be written.)\n\nIn case of an urgent request for this g"
  },
  {
    "path": "docs/source/estimagic/explanation/index.md",
    "chars": 133,
    "preview": "# Explanation\n\n```{toctree}\n---\nmaxdepth: 1\n---\nbootstrap_ci\nbootstrap_montecarlo_comparison\ncluster_robust_likelihood_i"
  },
  {
    "path": "docs/source/estimagic/index.md",
    "chars": 1779,
    "preview": "(estimagic)=\n\n# Estimagic\n\n*estimagic* is a subpackage of *optimagic* that helps you to fit nonlinear statistical\nmodels"
  },
  {
    "path": "docs/source/estimagic/reference/index.md",
    "chars": 1105,
    "preview": "# estimagic API\n\n```{eval-rst}\n.. currentmodule:: estimagic\n```\n\n(estimation)=\n\n## Estimation\n\n```{eval-rst}\n.. dropdown"
  },
  {
    "path": "docs/source/estimagic/tutorials/bootstrap_overview.ipynb",
    "chars": 13081,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Bootstrap Tutorial\\n\",\n    \"\\n\",\n"
  },
  {
    "path": "docs/source/estimagic/tutorials/estimation_tables_overview.ipynb",
    "chars": 37441,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# How to generate publication quali"
  },
  {
    "path": "docs/source/estimagic/tutorials/index.md",
    "chars": 439,
    "preview": "# Estimagic Tutorials\n\nEstimagic hast functions to estimate the parameters of maximum likelihood or simulation\nmodels. Y"
  },
  {
    "path": "docs/source/estimagic/tutorials/likelihood_overview.ipynb",
    "chars": 6015,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Likelihood estimation\\n\",\n    \"\\n"
  },
  {
    "path": "docs/source/estimagic/tutorials/msm_overview.ipynb",
    "chars": 9858,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Method of Simulated"
  },
  {
    "path": "docs/source/explanation/explanation_of_numerical_optimizers.md",
    "chars": 5628,
    "preview": "(explanation-of-numerical-optimizers)=\n\n# Introduction to basic types of numerical optimization algorithms\n\nThere are hu"
  },
  {
    "path": "docs/source/explanation/implementation_of_constraints.md",
    "chars": 8840,
    "preview": "(implementation_of_constraints)=\n\n# How constraints are implemented\n\nMost of the optimizers wrapped in optimagic cannot "
  },
  {
    "path": "docs/source/explanation/index.md",
    "chars": 385,
    "preview": "# Explanation\n\nThis section provides background information on numerical topics and details of\noptimagic. It is complete"
  },
  {
    "path": "docs/source/explanation/internal_optimizers.md",
    "chars": 4156,
    "preview": "(internal_optimizer_interface)=\n\n# Internal optimizers for optimagic\n\noptimagic provides a large collection of optimizat"
  },
  {
    "path": "docs/source/explanation/numdiff_background.md",
    "chars": 2485,
    "preview": "# Numerical differentiation: methods\n\nIn this section we explain the mathematical background of forward, backward and ce"
  },
  {
    "path": "docs/source/explanation/tests_for_supported_optimizers.md",
    "chars": 13136,
    "preview": "# How supported optimization algorithms are tested\n\noptimagic provides a unified interface that supports a large number "
  },
  {
    "path": "docs/source/explanation/why_optimization_is_hard.ipynb",
    "chars": 7879,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Why optimization is difficult\\n\","
  },
  {
    "path": "docs/source/how_to/how_to_add_optimizers.ipynb",
    "chars": 30615,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"vscode\": {\n     \"languageId\": \"plaintext\"\n    }\n   }"
  },
  {
    "path": "docs/source/how_to/how_to_algorithm_selection.ipynb",
    "chars": 14605,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"(how-to-select-algorithms)=\\n\",\n   "
  },
  {
    "path": "docs/source/how_to/how_to_benchmarking.ipynb",
    "chars": 10527,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0\",\n   \"metadata\": {},\n   \"source\": [\n    \"# How to Benchmark Op"
  },
  {
    "path": "docs/source/how_to/how_to_bounds.ipynb",
    "chars": 8494,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0\",\n   \"metadata\": {},\n   \"source\": [\n    \"(how-to-bounds)=\\n\",\n"
  },
  {
    "path": "docs/source/how_to/how_to_change_plotting_backend.ipynb",
    "chars": 10933,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0\",\n   \"metadata\": {},\n   \"source\": [\n    \"# How to change the p"
  },
  {
    "path": "docs/source/how_to/how_to_constraints.md",
    "chars": 16575,
    "preview": "(constraints)=\n\n# How to specify constraints\n\n## Constraints vs bounds\n\noptimagic distinguishes between bounds and const"
  },
  {
    "path": "docs/source/how_to/how_to_criterion_function.ipynb",
    "chars": 6302,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"(how-to-fun)=\\n\",\n    \"\\n\",\n    \"# "
  },
  {
    "path": "docs/source/how_to/how_to_derivatives.ipynb",
    "chars": 8449,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"(how-to-jac)=\\n\",\n    \"\\n\",\n    \"# "
  },
  {
    "path": "docs/source/how_to/how_to_document_optimizers.md",
    "chars": 9989,
    "preview": "# How to document optimizers\n\nThis guide shows you how to document algorithms in optimagic using our new documentation\ns"
  },
  {
    "path": "docs/source/how_to/how_to_errors_during_optimization.ipynb",
    "chars": 8350,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0\",\n   \"metadata\": {},\n   \"source\": [\n    \"(how-to-errors)=\\n\",\n"
  },
  {
    "path": "docs/source/how_to/how_to_globalization.ipynb",
    "chars": 288,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# How to choose a strategy for glob"
  },
  {
    "path": "docs/source/how_to/how_to_logging.ipynb",
    "chars": 5859,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"(how-to-logging)=\\n\",\n    \"\\n\",\n   "
  },
  {
    "path": "docs/source/how_to/how_to_multistart.ipynb",
    "chars": 15216,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0\",\n   \"metadata\": {},\n   \"source\": [\n    \"(how-to-multistart)=\\"
  },
  {
    "path": "docs/source/how_to/how_to_scaling.md",
    "chars": 6098,
    "preview": "(scaling)=\n\n# How to scale optimization problems\n\nReal world optimization problems often comprise parameters of vastly d"
  },
  {
    "path": "docs/source/how_to/how_to_slice_plot.ipynb",
    "chars": 4072,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# How to visualize an optimization "
  },
  {
    "path": "docs/source/how_to/how_to_slice_plot_3d.ipynb",
    "chars": 9968,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Visualizing Objecti"
  },
  {
    "path": "docs/source/how_to/how_to_specify_algorithm_and_algo_options.md",
    "chars": 4177,
    "preview": "(specify-algorithm)=\n\n# How to specify and configure algorithms\n\nThis how-to guide is about the mechanics of specifying "
  },
  {
    "path": "docs/source/how_to/how_to_start_parameters.md",
    "chars": 4197,
    "preview": "(params)=\n\n# How to specify `params`\n\n`params` is the first argument of any criterion function in optimagic. It collects"
  },
  {
    "path": "docs/source/how_to/how_to_visualize_histories.ipynb",
    "chars": 5137,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0\",\n   \"metadata\": {},\n   \"source\": [\n    \"# How to visualize op"
  },
  {
    "path": "docs/source/how_to/index.md",
    "chars": 661,
    "preview": "(how-to)=\n\n# How-to Guides\n\nHow-to Guides show how to achieve specific tasks. In many cases they show you how to use\nadv"
  },
  {
    "path": "docs/source/index.md",
    "chars": 4339,
    "preview": "# \n\n<div style=\"padding-top: 50px;\">\n</div>\n\n```{raw} html\n<img src=\"_static/images/optimagic_logo.svg\" class=\"only-ligh"
  },
  {
    "path": "docs/source/installation.md",
    "chars": 1380,
    "preview": "# Installation\n\n## Basic installation\n\nThe preferred way to install optimagic is via `conda` or `mamba`. To do so, open "
  },
  {
    "path": "docs/source/reference/algo_options.md",
    "chars": 134,
    "preview": "(algo_options)=\n\n# The default algorithm options\n\n```{eval-rst}\n.. automodule:: optimagic.optimization.algo_options\n    "
  },
  {
    "path": "docs/source/reference/batch_evaluators.md",
    "chars": 116,
    "preview": "(batch_evaluators)=\n\n# Batch evaluators\n\n```{eval-rst}\n.. automodule:: optimagic.batch_evaluators\n    :members:\n```\n"
  },
  {
    "path": "docs/source/reference/index.md",
    "chars": 2838,
    "preview": "# optimagic API\n\n```{eval-rst}\n.. currentmodule:: optimagic\n```\n\n(maximize-and-minimize)=\n\n## Optimization\n\n```{eval-rst"
  },
  {
    "path": "docs/source/reference/typing.md",
    "chars": 87,
    "preview": "(typing)=\n\n# Types\n\n```{eval-rst}\n\n.. automodule:: optimagic.typing\n    :members:\n\n```\n"
  },
  {
    "path": "docs/source/reference/utilities.md",
    "chars": 103,
    "preview": "(utilities)=\n\n# Utility functions\n\n```{eval-rst}\n.. automodule:: optimagic.utilities\n    :members:\n```\n"
  },
  {
    "path": "docs/source/refs.bib",
    "chars": 39843,
    "preview": "% Encoding: UTF-8\n\n\n\n@Book{Dennis1996,\n  Title                    = {Numerical Methods for Unconstrained Optimization an"
  },
  {
    "path": "docs/source/tutorials/bayes_opt_tutorial.ipynb",
    "chars": 24629,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0\",\n   \"metadata\": {},\n   \"source\": [\n    \"# `bayes_opt` Optimiz"
  },
  {
    "path": "docs/source/tutorials/index.md",
    "chars": 1405,
    "preview": "(tutorials)=\n\n# Tutorials\n\nThis section provides an overview of optimagic. It's a good starting point if you are\nnew to "
  },
  {
    "path": "docs/source/tutorials/numdiff_overview.ipynb",
    "chars": 8209,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Numerical differentiation\\n\",\n   "
  },
  {
    "path": "docs/source/tutorials/optimization_overview.ipynb",
    "chars": 15252,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Numerical optimization\\n\",\n    \"\\"
  },
  {
    "path": "docs/source/videos.md",
    "chars": 1867,
    "preview": "(list_of_videos)=\n\n# Videos\n\nCheck out our tutorials, talks and screencasts about optimagic.\n\n## Talks and tutorials\n\n##"
  },
  {
    "path": "pyproject.toml",
    "chars": 17100,
    "preview": "# ======================================================================================\n# Project metadata\n# =========="
  },
  {
    "path": "src/estimagic/__init__.py",
    "chars": 4424,
    "preview": "import warnings\nfrom dataclasses import dataclass\n\nfrom estimagic import utilities\nfrom estimagic.bootstrap import Boots"
  },
  {
    "path": "src/estimagic/batch_evaluators.py",
    "chars": 881,
    "preview": "from optimagic.batch_evaluators import joblib_batch_evaluator as _joblib_batch_evaluator\nfrom optimagic.batch_evaluators"
  },
  {
    "path": "src/estimagic/bootstrap.py",
    "chars": 10413,
    "preview": "import functools\nfrom dataclasses import dataclass\nfrom functools import cached_property\nfrom typing import Any\n\nimport "
  },
  {
    "path": "src/estimagic/bootstrap_ci.py",
    "chars": 6500,
    "preview": "import numpy as np\nfrom scipy.stats import norm\n\nfrom estimagic.bootstrap_helpers import check_inputs\n\n\ndef calculate_ci"
  },
  {
    "path": "src/estimagic/bootstrap_helpers.py",
    "chars": 1680,
    "preview": "import pandas as pd\n\n\ndef check_inputs(\n    data=None,\n    weight_by=None,\n    cluster_by=None,\n    ci_method=\"percentil"
  },
  {
    "path": "src/estimagic/bootstrap_outcomes.py",
    "chars": 3461,
    "preview": "from estimagic.bootstrap_helpers import check_inputs\nfrom estimagic.bootstrap_samples import get_bootstrap_indices\nfrom "
  },
  {
    "path": "src/estimagic/bootstrap_samples.py",
    "chars": 4096,
    "preview": "import numpy as np\nimport pandas as pd\n\n\ndef get_bootstrap_indices(\n    data,\n    rng,\n    weight_by=None,\n    cluster_b"
  },
  {
    "path": "src/estimagic/config.py",
    "chars": 75,
    "preview": "from pathlib import Path\n\nEXAMPLE_DIR = Path(__file__).parent / \"examples\"\n"
  },
  {
    "path": "src/estimagic/estimate_ml.py",
    "chars": 35287,
    "preview": "import warnings\nfrom dataclasses import asdict, dataclass, field\nfrom functools import cached_property\nfrom typing impor"
  },
  {
    "path": "src/estimagic/estimate_msm.py",
    "chars": 41519,
    "preview": "\"\"\"Do a method of simlated moments estimation.\"\"\"\n\nimport functools\nimport warnings\nfrom collections.abc import Callable"
  },
  {
    "path": "src/estimagic/estimation_summaries.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/estimagic/estimation_table.py",
    "chars": 59635,
    "preview": "import re\nfrom copy import deepcopy\nfrom functools import partial\nfrom pathlib import Path\nfrom warnings import warn\n\nim"
  },
  {
    "path": "src/estimagic/examples/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/estimagic/examples/diabetes.csv",
    "chars": 90456,
    "preview": ",Age,Sex,BMI,ABP,S1,S2,S3,S4,S5,S6,target\n0,0.0380759064334241,0.0506801187398187,0.0616962065186885,0.0218723549949558,"
  },
  {
    "path": "src/estimagic/examples/exam_points.csv",
    "chars": 1011,
    "preview": "points\n275.5\n351.5\n346.25\n228.25\n108.25\n380.75\n346.25\n360.75\n196\n414.75\n370.5\n371.75\n143.75\n333.5\n397.5\n405.75\n154.75\n32"
  },
  {
    "path": "src/estimagic/examples/logit.py",
    "chars": 2623,
    "preview": "\"\"\"Likelihood functions and derivatives of a logit model.\"\"\"\n\nimport numpy as np\nimport pandas as pd\n\nfrom optimagic imp"
  },
  {
    "path": "src/estimagic/examples/sensitivity_probit_example_data.csv",
    "chars": 24256,
    "preview": ",y,intercept,x1,x2\n0,1,1.0,2.967339833505456,0.7105279305877271\n1,1,1.0,-0.4737153743988922,-1.1947183078244987\n2,0,1.0,"
  },
  {
    "path": "src/estimagic/lollipop_plot.py",
    "chars": 5071,
    "preview": "import math\n\nimport pandas as pd\nimport plotly.graph_objects as go\n\nfrom optimagic.config import PLOTLY_PALETTE, PLOTLY_"
  },
  {
    "path": "src/estimagic/ml_covs.py",
    "chars": 9863,
    "preview": "\"\"\"Functions for inferences in maximum likelihood models.\"\"\"\n\nimport numpy as np\nimport pandas as pd\n\nfrom estimagic.sha"
  },
  {
    "path": "src/estimagic/msm_covs.py",
    "chars": 2402,
    "preview": "import pandas as pd\n\nfrom estimagic.shared_covs import process_pandas_arguments\nfrom optimagic.exceptions import INVALID"
  },
  {
    "path": "src/estimagic/msm_sensitivity.py",
    "chars": 11767,
    "preview": "\"\"\"Implement local sensitivity measures for method of moments.\n\nmeasures:\nm1: Andrews, Gentzkow & Shapiro (2017, Quarter"
  },
  {
    "path": "src/estimagic/msm_weighting.py",
    "chars": 5306,
    "preview": "import functools\n\nimport numpy as np\nimport pandas as pd\nfrom pybaum import tree_just_flatten\nfrom scipy.linalg import b"
  },
  {
    "path": "src/estimagic/py.typed",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/estimagic/shared_covs.py",
    "chars": 13315,
    "preview": "from typing import NamedTuple\n\nimport numpy as np\nimport pandas as pd\nimport scipy\nfrom pybaum import tree_just_flatten,"
  },
  {
    "path": "src/estimagic/utilities.py",
    "chars": 4462,
    "preview": "from optimagic.decorators import deprecated\nfrom optimagic.utilities import (\n    calculate_trustregion_initial_radius a"
  },
  {
    "path": "src/optimagic/__init__.py",
    "chars": 3273,
    "preview": "from __future__ import annotations\n\nfrom optimagic import constraints, mark, sandbox, timing, utilities\nfrom optimagic.a"
  },
  {
    "path": "src/optimagic/algorithms.py",
    "chars": 193831,
    "preview": "\"\"\"This code was auto-generated by a pre-commit hook and should not be changed.\n\nIf you manually change this code, all o"
  },
  {
    "path": "src/optimagic/batch_evaluators.py",
    "chars": 9930,
    "preview": "\"\"\"A collection of batch evaluators for process based parallelism.\n\nAll batch evaluators have the same interface and any"
  },
  {
    "path": "src/optimagic/benchmarking/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/optimagic/benchmarking/benchmark_reports.py",
    "chars": 10492,
    "preview": "import pandas as pd\n\nfrom optimagic.benchmarking.process_benchmark_results import (\n    process_benchmark_results,\n)\nfro"
  },
  {
    "path": "src/optimagic/benchmarking/cartis_roberts.py",
    "chars": 129894,
    "preview": "\"\"\"Define the medium scale CUTEst Benchmark Set.\n\nThis benchmark set is contains 60 test cases for nonlinear least squar"
  },
  {
    "path": "src/optimagic/benchmarking/get_benchmark_problems.py",
    "chars": 12715,
    "preview": "from functools import partial, wraps\n\nimport numpy as np\n\nfrom optimagic import mark\nfrom optimagic.benchmarking.cartis_"
  },
  {
    "path": "src/optimagic/benchmarking/more_wild.py",
    "chars": 30753,
    "preview": "\"\"\"Define the More-Wild Benchmark Set.\n\nThis benchmark set is contains 53 test cases for nonlinear least squares solvers"
  },
  {
    "path": "src/optimagic/benchmarking/noise_distributions.py",
    "chars": 783,
    "preview": "import numpy as np\n\n\ndef _standard_logistic(size, rng):\n    scale = np.sqrt(3) / np.pi\n    return rng.logistic(loc=0, sc"
  },
  {
    "path": "src/optimagic/benchmarking/process_benchmark_results.py",
    "chars": 7008,
    "preview": "import numpy as np\nimport pandas as pd\n\n\ndef process_benchmark_results(\n    problems, results, stopping_criterion, x_pre"
  },
  {
    "path": "src/optimagic/benchmarking/run_benchmark.py",
    "chars": 8303,
    "preview": "\"\"\"Functions to create, run and visualize optimization benchmarks.\n\nTO-DO:\n- Add other benchmark sets:\n    - finish medi"
  },
  {
    "path": "src/optimagic/config.py",
    "chars": 2710,
    "preview": "import importlib.util\nfrom pathlib import Path\n\nimport plotly.express as px\n\nDOCS_DIR = Path(__file__).parent.parent / \""
  },
  {
    "path": "src/optimagic/constraints.py",
    "chars": 15452,
    "preview": "from __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom dataclasses import KW_ONLY, dataclass\nfrom "
  },
  {
    "path": "src/optimagic/decorators.py",
    "chars": 3650,
    "preview": "\"\"\"This module contains various decorators.\n\nThere are two kinds of decorators defined in this module which consists of "
  },
  {
    "path": "src/optimagic/deprecations.py",
    "chars": 23076,
    "preview": "import logging\nimport warnings\nfrom dataclasses import replace\nfrom functools import wraps\nfrom pathlib import Path\nfrom"
  },
  {
    "path": "src/optimagic/differentiation/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/optimagic/differentiation/derivatives.py",
    "chars": 48582,
    "preview": "import functools\nimport itertools\nimport re\nfrom dataclasses import dataclass\nfrom itertools import product\nfrom typing "
  },
  {
    "path": "src/optimagic/differentiation/finite_differences.py",
    "chars": 7449,
    "preview": "\"\"\"Finite difference formulae for jacobians and hessians.\n\nAll functions in this module should not only work for the sim"
  },
  {
    "path": "src/optimagic/differentiation/generate_steps.py",
    "chars": 11718,
    "preview": "import warnings\nfrom typing import NamedTuple\n\nimport numpy as np\n\nfrom optimagic.utilities import fast_numpy_full\n\n\ncla"
  },
  {
    "path": "src/optimagic/differentiation/numdiff_options.py",
    "chars": 6359,
    "preview": "from dataclasses import dataclass\nfrom enum import Enum\nfrom typing import Callable, Literal, TypedDict\n\nfrom typing_ext"
  },
  {
    "path": "src/optimagic/differentiation/richardson_extrapolation.py",
    "chars": 10176,
    "preview": "import numpy as np\nfrom scipy import stats\nfrom scipy.linalg import pinv\nfrom scipy.ndimage import convolve1d\n\n\ndef rich"
  },
  {
    "path": "src/optimagic/examples/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/optimagic/examples/criterion_functions.py",
    "chars": 6679,
    "preview": "\"\"\"Import common objective functions in several optimagic compatible versions.\n\nAll implemented functions accept arbitra"
  },
  {
    "path": "src/optimagic/examples/numdiff_functions.py",
    "chars": 2216,
    "preview": "\"\"\"Functions with known gradients, jacobians or hessians.\n\nAll functions take a numpy array with parameters as their fir"
  },
  {
    "path": "src/optimagic/exceptions.py",
    "chars": 3117,
    "preview": "import sys\nfrom traceback import format_exception\n\n\nclass OptimagicError(Exception):\n    \"\"\"Base exception for optimagic"
  },
  {
    "path": "src/optimagic/logging/__init__.py",
    "chars": 188,
    "preview": "from .logger import (\n    SQLiteLogOptions as SQLiteLogOptions,\n)\nfrom .logger import (\n    SQLiteLogReader as SQLiteLog"
  },
  {
    "path": "src/optimagic/logging/base.py",
    "chars": 6919,
    "preview": "import io\nimport warnings\nfrom abc import ABC, abstractmethod\nfrom dataclasses import asdict, fields, is_dataclass\nfrom "
  },
  {
    "path": "src/optimagic/logging/logger.py",
    "chars": 18542,
    "preview": "from __future__ import annotations\n\nimport os\nfrom abc import ABC, abstractmethod\nfrom pathlib import Path\nfrom typing i"
  },
  {
    "path": "src/optimagic/logging/read_log.py",
    "chars": 914,
    "preview": "\"\"\"Deprecated module:\n\nFunctions to read data from the database used for logging.\n\nThe functions in the module are meant"
  },
  {
    "path": "src/optimagic/logging/sqlalchemy.py",
    "chars": 14183,
    "preview": "from __future__ import annotations\n\nimport traceback\nimport warnings\nfrom dataclasses import asdict, dataclass\nfrom func"
  },
  {
    "path": "src/optimagic/logging/types.py",
    "chars": 5784,
    "preview": "from dataclasses import dataclass\nfrom enum import Enum\nfrom typing import Literal\n\nfrom optimagic.optimization.fun_valu"
  },
  {
    "path": "src/optimagic/mark.py",
    "chars": 5514,
    "preview": "from functools import wraps\nfrom typing import Any, Callable, ParamSpec, TypeVar\n\nfrom optimagic.optimization.algorithm "
  },
  {
    "path": "src/optimagic/optimization/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/optimagic/optimization/algo_options.py",
    "chars": 5763,
    "preview": "import numpy as np\n\nCONVERGENCE_FTOL_REL = 2e-9\n\"\"\"float: Stop when the relative improvement between two iterations is b"
  },
  {
    "path": "src/optimagic/optimization/algorithm.py",
    "chars": 12786,
    "preview": "import typing\nimport warnings\nfrom abc import ABC, ABCMeta, abstractmethod\nfrom dataclasses import dataclass, replace\nfr"
  },
  {
    "path": "src/optimagic/optimization/convergence_report.py",
    "chars": 1838,
    "preview": "import numpy as np\nfrom numpy.typing import NDArray\n\nfrom optimagic.optimization.history import History\n\n\ndef get_conver"
  },
  {
    "path": "src/optimagic/optimization/create_optimization_problem.py",
    "chars": 22524,
    "preview": "import warnings\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom typing import Any, Callable, Type\n\nfrom "
  },
  {
    "path": "src/optimagic/optimization/error_penalty.py",
    "chars": 4594,
    "preview": "from typing import Callable\n\nimport numpy as np\nfrom numpy.typing import NDArray\n\nfrom optimagic.config import CRITERION"
  },
  {
    "path": "src/optimagic/optimization/fun_value.py",
    "chars": 9565,
    "preview": "import functools\nfrom abc import ABC, abstractmethod\nfrom dataclasses import dataclass\nfrom typing import Any, Callable,"
  },
  {
    "path": "src/optimagic/optimization/history.py",
    "chars": 20008,
    "preview": "import warnings\nfrom dataclasses import dataclass\nfrom functools import partial\nfrom typing import Any, Callable, Iterab"
  },
  {
    "path": "src/optimagic/optimization/internal_optimization_problem.py",
    "chars": 39311,
    "preview": "import time\nimport warnings\nfrom copy import copy\nfrom dataclasses import asdict, dataclass, replace\nfrom typing import "
  },
  {
    "path": "src/optimagic/optimization/multistart.py",
    "chars": 17963,
    "preview": "\"\"\"Functions for multi start optimization a la TikTak.\n\nTikTak (`Arnoud, Guvenen, and Kleineberg\n<https://www.nber.org/s"
  },
  {
    "path": "src/optimagic/optimization/multistart_options.py",
    "chars": 17169,
    "preview": "from dataclasses import dataclass\nfrom functools import partial\nfrom typing import Callable, Literal, Sequence, TypedDic"
  },
  {
    "path": "src/optimagic/optimization/optimization_logging.py",
    "chars": 1095,
    "preview": "from typing import Any, cast\n\nfrom optimagic.logging.logger import LogStore\nfrom optimagic.logging.types import StepResu"
  },
  {
    "path": "src/optimagic/optimization/optimize.py",
    "chars": 34455,
    "preview": "\"\"\"Public functions for optimization.\n\nThis module defines the public functions `maximize` and `minimize` that will be c"
  },
  {
    "path": "src/optimagic/optimization/optimize_result.py",
    "chars": 8183,
    "preview": "import warnings\nfrom dataclasses import dataclass\nfrom typing import Any, Dict, Optional\n\nimport numpy as np\nimport pand"
  },
  {
    "path": "src/optimagic/optimization/process_results.py",
    "chars": 6539,
    "preview": "from dataclasses import dataclass, replace\nfrom typing import Any\n\nimport numpy as np\n\nfrom optimagic.optimization.algor"
  },
  {
    "path": "src/optimagic/optimization/scipy_aliases.py",
    "chars": 2141,
    "preview": "import functools\n\nfrom optimagic.exceptions import InvalidFunctionError\nfrom optimagic.utilities import propose_alternat"
  },
  {
    "path": "src/optimagic/optimizers/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/optimagic/optimizers/_pounders/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/optimagic/optimizers/_pounders/_conjugate_gradient.py",
    "chars": 4298,
    "preview": "\"\"\"Implementation of the Conjugate Gradient algorithm.\"\"\"\n\nimport numpy as np\n\n\ndef minimize_trust_cg(\n    model_gradien"
  },
  {
    "path": "src/optimagic/optimizers/_pounders/_steihaug_toint.py",
    "chars": 4971,
    "preview": "\"\"\"Implementation of the Steihaug-Toint Conjugate Gradient algorithm.\"\"\"\n\nimport numpy as np\n\n\ndef minimize_trust_stcg(m"
  },
  {
    "path": "src/optimagic/optimizers/_pounders/_trsbox.py",
    "chars": 20193,
    "preview": "\"\"\"Implementation of the quadratic trustregion solver TRSBOX.\"\"\"\n\nimport numpy as np\n\n\ndef minimize_trust_trsbox(\n    mo"
  },
  {
    "path": "src/optimagic/optimizers/_pounders/bntr.py",
    "chars": 30861,
    "preview": "\"\"\"Auxiliary functions for the quadratic BNTR trust-region subsolver.\"\"\"\n\nfrom functools import reduce\nfrom typing impor"
  },
  {
    "path": "src/optimagic/optimizers/_pounders/gqtpar.py",
    "chars": 20355,
    "preview": "\"\"\"Auxiliary functions for the quadratic GQTPAR trust-region subsolver.\"\"\"\n\nfrom typing import NamedTuple\n\nimport numpy "
  },
  {
    "path": "src/optimagic/optimizers/_pounders/linear_subsolvers.py",
    "chars": 11769,
    "preview": "\"\"\"Collection of linear trust-region subsolvers.\"\"\"\n\nfrom typing import NamedTuple\n\nimport numpy as np\n\n\nclass LinearMod"
  },
  {
    "path": "src/optimagic/optimizers/_pounders/pounders_auxiliary.py",
    "chars": 28477,
    "preview": "\"\"\"Auxiliary functions for the pounders algorithm.\"\"\"\n\nfrom typing import NamedTuple\n\nimport numpy as np\nfrom scipy.lina"
  },
  {
    "path": "src/optimagic/optimizers/_pounders/pounders_history.py",
    "chars": 9118,
    "preview": "\"\"\"History class for pounders and similar optimizers.\"\"\"\n\nimport numpy as np\n\n\nclass LeastSquaresHistory:\n    \"\"\"Contain"
  },
  {
    "path": "src/optimagic/optimizers/bayesian_optimizer.py",
    "chars": 17933,
    "preview": "\"\"\"Implement Bayesian optimization using bayes_opt.\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import data"
  },
  {
    "path": "src/optimagic/optimizers/bhhh.py",
    "chars": 4142,
    "preview": "\"\"\"Implement Berndt-Hall-Hall-Hausman (BHHH) algorithm.\"\"\"\n\nfrom dataclasses import dataclass\nfrom typing import Callabl"
  },
  {
    "path": "src/optimagic/optimizers/fides.py",
    "chars": 10222,
    "preview": "\"\"\"Implement the fides optimizer.\"\"\"\n\nimport logging\nfrom dataclasses import dataclass\nfrom typing import Callable, Lite"
  },
  {
    "path": "src/optimagic/optimizers/gfo_optimizers.py",
    "chars": 42235,
    "preview": "from __future__ import annotations\n\nimport math\nfrom dataclasses import dataclass\nfrom functools import partial\nfrom typ"
  },
  {
    "path": "src/optimagic/optimizers/iminuit_migrad.py",
    "chars": 6470,
    "preview": "\"\"\"Implement the MIGRAD algorithm from iminuit.\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclas"
  },
  {
    "path": "src/optimagic/optimizers/ipopt.py",
    "chars": 28388,
    "preview": "\"\"\"Implement cyipopt's Interior Point Optimizer.\"\"\"\n\nfrom dataclasses import dataclass\nfrom typing import Any, Literal\n\n"
  },
  {
    "path": "src/optimagic/optimizers/nag_optimizers.py",
    "chars": 48937,
    "preview": "\"\"\"Implement algorithms by the (Numerical Algorithms Group)[https://www.nag.com/].\n\nThe following arguments are not supp"
  },
  {
    "path": "src/optimagic/optimizers/neldermead.py",
    "chars": 13740,
    "preview": "\"\"\"Implementation of parallelosation of Nelder-Mead algorithm.\"\"\"\n\nfrom dataclasses import dataclass\nfrom typing import "
  },
  {
    "path": "src/optimagic/optimizers/nevergrad_optimizers.py",
    "chars": 51391,
    "preview": "\"\"\"Implement optimizers from the nevergrad package.\"\"\"\n\nfrom __future__ import annotations\n\nimport math\nfrom dataclasses"
  },
  {
    "path": "src/optimagic/optimizers/nlopt_optimizers.py",
    "chars": 29260,
    "preview": "\"\"\"Implement `nlopt` algorithms.\n\nThe documentation is heavily based on (nlopt documentation)[nlopt.readthedocs.io].\n\n\"\""
  },
  {
    "path": "src/optimagic/optimizers/pounders.py",
    "chars": 23998,
    "preview": "\"\"\"Implement the POUNDERS algorithm.\"\"\"\n\nimport warnings\nfrom dataclasses import dataclass\nfrom typing import Any, Liter"
  },
  {
    "path": "src/optimagic/optimizers/pygad/__init__.py",
    "chars": 1479,
    "preview": "\"\"\"PyGAD optimizer configuration classes and utilities.\n\nThis module provides easy access to PyGAD mutation classes and "
  },
  {
    "path": "src/optimagic/optimizers/pygad_optimizer.py",
    "chars": 32234,
    "preview": "\"\"\"Implement PyGAD genetic algorithm optimizer.\"\"\"\n\nfrom __future__ import annotations\n\nimport warnings\nfrom dataclasses"
  },
  {
    "path": "src/optimagic/optimizers/pygmo_optimizers.py",
    "chars": 46068,
    "preview": "\"\"\"Implement pygmo optimizers.\n\nNotes for converting to the new algorithm interface:\n\n- `create_algo_options` is not nee"
  },
  {
    "path": "src/optimagic/optimizers/pyswarms_optimizers.py",
    "chars": 23228,
    "preview": "\"\"\"Implement PySwarms particle swarm optimization algorithms.\n\nThis module provides optimagic-compatible wrappers for Py"
  },
  {
    "path": "src/optimagic/optimizers/scipy_optimizers.py",
    "chars": 43273,
    "preview": "\"\"\"Implement scipy algorithms.\n\nThe following ``scipy`` algorithms are not supported because they\nrequire the specificat"
  },
  {
    "path": "src/optimagic/optimizers/tao_optimizers.py",
    "chars": 10922,
    "preview": "\"\"\"This module implements the POUNDERs algorithm.\"\"\"\n\nimport functools\nfrom dataclasses import dataclass\n\nimport numpy a"
  },
  {
    "path": "src/optimagic/optimizers/tranquilo.py",
    "chars": 14822,
    "preview": "from __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import TYPE_CHECKING, Callable, Litera"
  },
  {
    "path": "src/optimagic/parameters/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/optimagic/parameters/block_trees.py",
    "chars": 14133,
    "preview": "\"\"\"Functions to convert between array and block-tree representations of a matrix.\"\"\"\n\nimport numpy as np\nimport pandas a"
  },
  {
    "path": "src/optimagic/parameters/bounds.py",
    "chars": 9158,
    "preview": "from __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import Any, Literal, Sequence\n\nimport "
  },
  {
    "path": "src/optimagic/parameters/check_constraints.py",
    "chars": 9987,
    "preview": "\"\"\"Check compatibility of pc with each other and with bounds and fixes.\n\nSee the module docstring of process_constraints"
  },
  {
    "path": "src/optimagic/parameters/consolidate_constraints.py",
    "chars": 24697,
    "preview": "\"\"\"Functions to consolidate user provided constraints.\n\nConsolidation means that redundant constraints are dropped and o"
  },
  {
    "path": "src/optimagic/parameters/constraint_tools.py",
    "chars": 3398,
    "preview": "from optimagic import deprecations\nfrom optimagic.parameters.bounds import pre_process_bounds\nfrom optimagic.parameters."
  },
  {
    "path": "src/optimagic/parameters/conversion.py",
    "chars": 7589,
    "preview": "\"\"\"Aggregate the multiple parameter and function output conversions into on.\"\"\"\n\nfrom dataclasses import dataclass, repl"
  }
]

// ... and 181 more files (download for full content)
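Each entry in the listing above follows a simple schema (`path`, `chars`, `preview`). A minimal sketch of consuming such a listing with Python's standard library, using a two-entry sample in the same schema (the real extraction has 381 entries; `files.json` or any variable holding the array would work the same way):

```python
import json

# Two sample entries in the same schema as the listing above
# (path / chars / preview); values taken from the listing itself.
sample = """[
  {"path": "src/estimagic/config.py", "chars": 75,
   "preview": "from pathlib import Path"},
  {"path": "src/optimagic/mark.py", "chars": 5514,
   "preview": "from functools import wraps"}
]"""

entries = json.loads(sample)

# Total size in characters across the listed files
total_chars = sum(e["chars"] for e in entries)
print(total_chars)  # 5589

# Largest listed file by character count
largest = max(entries, key=lambda e: e["chars"])
print(largest["path"])  # src/optimagic/mark.py
```

The same pattern scales to the full array: filter by path prefix (e.g. `src/optimagic/optimizers/`) or sort by `chars` to find the heaviest modules before downloading the full extraction.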

About this extraction

This page contains the full source code of the OpenSourceEconomics/estimagic GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 381 files (4.6 MB), approximately 1.2M tokens, and a symbol index of 3275 functions, classes, methods, constants, and types. Use it with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract, a free GitHub repo-to-text converter for AI, built by Nikandr Surkov.
