Repository: rfeinman/pytorch-minimize
Branch: main
Commit: 7fed93e99ae7
Files: 67
Total size: 296.5 KB
Directory structure:
gitextract_oaw5ondp/
├── .readthedocs.yaml
├── LICENSE
├── MANIFEST.in
├── README.md
├── docs/
│ ├── Makefile
│ ├── make.bat
│ └── source/
│ ├── _static/
│ │ └── custom.css
│ ├── api/
│ │ ├── index.rst
│ │ ├── minimize-bfgs.rst
│ │ ├── minimize-cg.rst
│ │ ├── minimize-constr-frankwolfe.rst
│ │ ├── minimize-constr-lbfgsb.rst
│ │ ├── minimize-constr-trust-constr.rst
│ │ ├── minimize-dogleg.rst
│ │ ├── minimize-lbfgs.rst
│ │ ├── minimize-newton-cg.rst
│ │ ├── minimize-newton-exact.rst
│ │ ├── minimize-trust-exact.rst
│ │ ├── minimize-trust-krylov.rst
│ │ └── minimize-trust-ncg.rst
│ ├── conf.py
│ ├── examples/
│ │ └── index.rst
│ ├── index.rst
│ ├── install.rst
│ └── user_guide/
│ └── index.rst
├── examples/
│ ├── constrained_optimization_adversarial_examples.ipynb
│ ├── rosen_minimize.ipynb
│ ├── scipy_benchmark.py
│ └── train_mnist_Minimizer.py
├── pyproject.toml
├── tests/
│ ├── __init__.py
│ ├── conftest.py
│ ├── test_imports.py
│ └── torchmin/
│ ├── __init__.py
│ ├── test_bounds.py
│ ├── test_minimize.py
│ └── test_minimize_constr.py
└── torchmin/
├── __init__.py
├── _optimize.py
├── _version.py
├── benchmarks.py
├── bfgs.py
├── cg.py
├── constrained/
│ ├── frankwolfe.py
│ ├── lbfgsb.py
│ └── trust_constr.py
├── function.py
├── line_search.py
├── lstsq/
│ ├── __init__.py
│ ├── cg.py
│ ├── common.py
│ ├── least_squares.py
│ ├── linear_operator.py
│ ├── lsmr.py
│ └── trf.py
├── minimize.py
├── minimize_constr.py
├── newton.py
├── optim/
│ ├── __init__.py
│ ├── minimizer.py
│ └── scipy_minimizer.py
└── trustregion/
├── __init__.py
├── base.py
├── dogleg.py
├── exact.py
├── krylov.py
└── ncg.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .readthedocs.yaml
================================================
version: 2

# Tell RTD which build image to use and which Python to install
build:
  os: ubuntu-22.04
  tools:
    python: "3.8"

# Build from the docs/ directory with Sphinx
sphinx:
  configuration: docs/source/conf.py

# Explicitly set the version of Python and its requirements
python:
  install:
    - method: pip
      path: .
      extra_requirements:
        - docs
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2021 Reuben Feinman
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: MANIFEST.in
================================================
# Include essential files
include README.md
include LICENSE
include pyproject.toml
# Include source package
recursive-include torchmin *.py
# Include tests
recursive-include tests *.py
# Exclude unwanted directories
prune docs
prune examples
prune tmp
global-exclude *.pyc
global-exclude *.pyo
global-exclude __pycache__
global-exclude .DS_Store
================================================
FILE: README.md
================================================
# PyTorch Minimize
For the most up-to-date information on pytorch-minimize, see the docs site: [pytorch-minimize.readthedocs.io](https://pytorch-minimize.readthedocs.io/)
Pytorch-minimize is a collection of utilities for minimizing multivariate functions in PyTorch.
It is inspired heavily by SciPy's `optimize` module and MATLAB's [Optimization Toolbox](https://www.mathworks.com/products/optimization.html).
Unlike SciPy and MATLAB, which use numerical approximations of function derivatives, pytorch-minimize uses _real_ first- and second-order derivatives, computed seamlessly behind the scenes with autograd.
Both CPU and CUDA are supported.
__Author__: Reuben Feinman
__At a glance:__
```python
import torch
from torchmin import minimize
def rosen(x):
    return torch.sum(100*(x[..., 1:] - x[..., :-1]**2)**2
                     + (1 - x[..., :-1])**2)
# initial point
x0 = torch.tensor([1., 8.])
# Select from the following methods:
# ['bfgs', 'l-bfgs', 'cg', 'newton-cg', 'newton-exact',
# 'trust-ncg', 'trust-krylov', 'trust-exact', 'dogleg']
# BFGS
result = minimize(rosen, x0, method='bfgs')
# Newton Conjugate Gradient
result = minimize(rosen, x0, method='newton-cg')
# Newton Exact
result = minimize(rosen, x0, method='newton-exact')
```
__Solvers:__ BFGS, L-BFGS, Conjugate Gradient (CG), Newton Conjugate Gradient (NCG), Newton Exact, Dogleg, Trust-Region Exact, Trust-Region NCG, Trust-Region GLTR (Krylov)
__Examples:__ See the [Rosenbrock minimization notebook](https://github.com/rfeinman/pytorch-minimize/blob/master/examples/rosen_minimize.ipynb) for a demonstration of function minimization with a handful of different algorithms.
__Install with pip:__
    pip install pytorch-minimize
__Install from source (bleeding edge):__
    pip install git+https://github.com/rfeinman/pytorch-minimize.git
## Motivation
Although PyTorch offers many routines for stochastic optimization, utilities for deterministic optimization are scarce; only L-BFGS is included in the `optim` package, and it's modified for mini-batch training.
MATLAB and SciPy are industry standards for deterministic optimization.
These libraries have a comprehensive set of routines; however, automatic differentiation is not supported.*
Therefore, the user must provide explicit 1st- and 2nd-order derivatives (if they are known) or use finite-difference approximations.
The motivation for pytorch-minimize is to offer a set of tools for deterministic optimization with automatic gradients and GPU acceleration.
__
*MATLAB offers minimal autograd support via the Deep Learning Toolbox, but the integration is not seamless: data must be converted to "dlarray" structures, and only a [subset of functions](https://www.mathworks.com/help/deeplearning/ug/list-of-functions-with-dlarray-support.html) are supported.
Furthermore, derivatives must still be constructed and provided as function handles.
Pytorch-minimize uses autograd to compute derivatives behind the scenes, so all you provide is an objective function.
## Library
The pytorch-minimize library includes solvers for general-purpose function minimization (unconstrained & constrained), as well as for nonlinear least squares problems.
### 1. Unconstrained Minimizers
The following solvers are available for _unconstrained_ minimization:
- __BFGS/L-BFGS.__ BFGS is a canonical quasi-Newton method for unconstrained optimization. I've implemented both the standard BFGS and the "limited memory" L-BFGS. For smaller-scale problems where memory is not a concern, BFGS should be significantly faster than L-BFGS (especially on CUDA) since it avoids Python for loops and instead uses pure torch.
- __Conjugate Gradient (CG).__ The conjugate gradient algorithm is a generalization of linear conjugate gradient to nonlinear optimization problems. Pytorch-minimize includes an implementation of the Polak-Ribière CG algorithm described in Nocedal & Wright (2006) chapter 5.2.
- __Newton Conjugate Gradient (NCG).__ The Newton-Raphson method is a staple of unconstrained optimization. Although computing full Hessian matrices with PyTorch's reverse-mode automatic differentiation can be costly, computing Hessian-vector products is cheap, and it also saves a lot of memory. The Conjugate Gradient (CG) variant of Newton's method is an effective solution for unconstrained minimization with Hessian-vector products. I've implemented a lightweight NewtonCG minimizer that uses HVP for the linear inverse sub-problems.
- __Newton Exact.__ In some cases, we may prefer a more precise variant of the Newton-Raphson method at the cost of additional complexity. I've also implemented an "exact" variant of Newton's method that computes the full Hessian matrix and uses Cholesky factorization for linear inverse sub-problems. When Cholesky fails (i.e., the Hessian is not positive definite), the solver resorts to one of two options as specified by the user: 1) steepest descent direction (default), or 2) solve the Newton system with LU factorization.
- __Trust-Region Newton Conjugate Gradient.__ Description coming soon.
- __Trust-Region Newton Generalized Lanczos (Krylov).__ Description coming soon.
- __Trust-Region Exact.__ Description coming soon.
- __Dogleg.__ Description coming soon.
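To make the Newton-Exact fallback behavior concrete, here is a toy, pure-Python 1-D sketch (not the library's implementation, which operates on tensors and uses a true Cholesky factorization): when the second derivative is positive we take an exact Newton step, and otherwise we fall back to damped steepest descent. The test function and its derivatives below are made up for illustration.

```python
import math

def newton_with_fallback(grad, hess, x, lr=0.1, tol=1e-10, max_iter=100):
    """Minimize a 1-D function given its derivative callables."""
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        h = hess(x)
        if h > 0:
            # Hessian is positive definite (the 1-D analogue of a
            # successful Cholesky factorization): take the Newton step.
            x -= g / h
        else:
            # Fall back to a damped steepest-descent step.
            x -= lr * g
    return x

# f(x) = x^4 - 3x^2 is non-convex: its second derivative is negative
# near x = 0, so the fallback branch is exercised before Newton takes over.
f = lambda x: x**4 - 3 * x**2
grad = lambda x: 4 * x**3 - 6 * x
hess = lambda x: 12 * x**2 - 6

x_min = newton_with_fallback(grad, hess, x=0.1)  # converges to sqrt(3/2)
```

Starting from x = 0.1, the first few iterations use the steepest-descent fallback; once the iterate reaches the convex region, Newton steps finish the job at a quadratic rate.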
To access the unconstrained minimizer interface, use the following import statement:
    from torchmin import minimize
Use the argument `method` to specify which of the aforementioned solvers should be applied.
### 2. Constrained Minimizers
The following solvers are available for _constrained_ minimization:
- __Trust-Region Constrained Algorithm.__ Pytorch-minimize includes a single constrained minimization routine based on SciPy's 'trust-constr' method. The algorithm accepts generalized nonlinear constraints and variable boundaries via the "constr" and "bounds" arguments. For equality-constrained problems, it is an implementation of the Byrd-Omojokun trust-region SQP method. When inequality constraints are imposed, the trust-region interior point method is used. NOTE: The current trust-region constrained minimizer is not a custom implementation, but rather a wrapper for SciPy's `optimize.minimize` routine. It uses autograd behind the scenes to build Jacobian & Hessian callables before invoking scipy. Inputs and objectives should use torch tensors like other pytorch-minimize routines. CUDA is supported but not recommended; data will be moved back-and-forth between GPU/CPU.
To access the constrained minimizer interface, use the following import statement:
    from torchmin import minimize_constr
### 3. Nonlinear Least Squares
The library also includes specialized solvers for nonlinear least squares problems.
These solvers revolve around the Gauss-Newton method, a modification of Newton's method tailored to the least-squares setting.
The least squares interface can be imported as follows:
    from torchmin import least_squares
The least_squares function is heavily motivated by scipy's `optimize.least_squares`.
Much of the scipy code was borrowed directly (all rights reserved) and ported from numpy to torch.
Rather than requiring the user to provide a Jacobian function, the new interface computes Jacobian-vector products behind the scenes with autograd.
At the moment, only the Trust Region Reflective ("trf") method is implemented, and bounds are not yet supported.
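To see the Gauss-Newton idea the least-squares solvers build on, here is a toy one-parameter, pure-Python sketch (not the library's code, which uses autograd and the Trust Region Reflective method). The exponential model and the noiseless data are made up for illustration; with a single parameter the Gauss-Newton system J^T J * da = -J^T r reduces to scalar arithmetic.

```python
import math

def gauss_newton(xs, ys, a=0.0, max_iter=50, tol=1e-12):
    """Fit y = exp(a * x) to (xs, ys) by scalar Gauss-Newton."""
    for _ in range(max_iter):
        # Residuals r_i and Jacobian entries J_i = d r_i / d a.
        r = [math.exp(a * x) - y for x, y in zip(xs, ys)]
        J = [x * math.exp(a * x) for x in xs]
        # Gauss-Newton step: solve (J^T J) da = -J^T r (scalars here).
        g = sum(Ji * ri for Ji, ri in zip(J, r))
        H = sum(Ji * Ji for Ji in J)
        da = -g / H
        a += da
        if abs(da) < tol:
            break
    return a

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]   # noiseless data with true a = 0.5
a_hat = gauss_newton(xs, ys)           # recovers a = 0.5
```

Note how only first derivatives of the residuals are needed: J^T J stands in for the Hessian, which is what makes Gauss-Newton cheaper than full Newton on residual sum-of-squares problems.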
## Examples
The [Rosenbrock minimization tutorial](https://github.com/rfeinman/pytorch-minimize/blob/master/examples/rosen_minimize.ipynb) demonstrates how to use pytorch-minimize to find the minimum of a scalar-valued function of multiple variables using various optimization strategies.
In addition, the [SciPy benchmark](https://github.com/rfeinman/pytorch-minimize/blob/master/examples/scipy_benchmark.py) provides a comparison of pytorch-minimize solvers to their analogous solvers from the `scipy.optimize` library.
For those transitioning from scipy, this script will help get a feel for the design of the current library.
Unlike scipy, jacobian and hessian functions need not be provided to pytorch-minimize solvers, and numerical approximations are never used.
For constrained optimization, the [adversarial examples tutorial](https://github.com/rfeinman/pytorch-minimize/blob/master/examples/constrained_optimization_adversarial_examples.ipynb) demonstrates how to use the trust-region constrained routine to generate an optimal adversarial perturbation given a constraint on the perturbation norm.
## Optimizer API
As an alternative to the functional API, pytorch-minimize also includes an "optimizer" API based on the `torch.optim.Optimizer` class.
To access the optimizer class, import as follows:
from torchmin import Minimizer
## Citing this work
If you use pytorch-minimize for academic research, you may cite the library as follows:
```
@misc{Feinman2021,
    author = {Feinman, Reuben},
    title = {Pytorch-minimize: a library for numerical optimization with autograd},
    publisher = {GitHub},
    year = {2021},
    url = {https://github.com/rfeinman/pytorch-minimize},
}
```
================================================
FILE: docs/Makefile
================================================
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
================================================
FILE: docs/make.bat
================================================
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
:end
popd
================================================
FILE: docs/source/_static/custom.css
================================================
.wy-table-responsive table td {
white-space: normal;
}
================================================
FILE: docs/source/api/index.rst
================================================
=================
API Documentation
=================
.. currentmodule:: torchmin
Functional API
==============
The functional API provides an interface similar to those of SciPy's :mod:`optimize` module and MATLAB's ``fminunc``/``fmincon`` routines. Parameters are provided as a single torch Tensor, and an :class:`OptimizeResult` instance is returned that includes the optimized parameter value as well as other useful information (e.g. final function value, parameter gradient, etc.).
There are three core utilities in the functional API, each designed for a
distinct numerical optimization problem.
**Unconstrained minimization**
.. autosummary::
    :toctree: generated

    minimize
The :func:`minimize` function is a general utility for *unconstrained* minimization. It implements a number of different routines based on Newton and Quasi-Newton methods for numerical optimization. The following methods are supported, accessed via the `method` argument:
.. toctree::

    minimize-bfgs
    minimize-lbfgs
    minimize-cg
    minimize-newton-cg
    minimize-newton-exact
    minimize-dogleg
    minimize-trust-ncg
    minimize-trust-exact
    minimize-trust-krylov
**Constrained minimization**
.. autosummary::
    :toctree: generated

    minimize_constr
The :func:`minimize_constr` function is a general utility for *constrained* minimization. Algorithms for constrained minimization use Newton and Quasi-Newton methods on the KKT conditions of the constrained optimization problem. The following methods are currently supported:
.. toctree::

    minimize-constr-lbfgsb
    minimize-constr-frankwolfe
    minimize-constr-trust-constr
.. note::
    Method ``'trust-constr'`` is currently a wrapper for SciPy's *trust-constr* minimization method. CUDA tensors are supported, but CUDA will only be used for function and gradient evaluation, with the remaining solver computations performed on CPU (with numpy arrays).
**Nonlinear least-squares**
.. autosummary::
    :toctree: generated

    least_squares
The :func:`least_squares` function is a specialized utility for nonlinear least-squares minimization problems. Algorithms for least-squares revolve around the Gauss-Newton method, a modification of Newton's method tailored to residual sum-of-squares (RSS) optimization. The following methods are currently supported:
- Trust-region reflective
- Dogleg - COMING SOON
- Gauss-Newton line search - COMING SOON
Optimizer API
==============
The optimizer API provides an alternative interface based on PyTorch's :mod:`optim` module. This interface follows the design of PyTorch optimizers and will be familiar to those migrating from torch.
.. autosummary::
    :toctree: generated

    Minimizer
    Minimizer.step
The :class:`Minimizer` class inherits from :class:`torch.optim.Optimizer` and constructs an object that holds the state of the provided variables. Unlike the functional API, which expects parameters to be a single Tensor, parameters can be passed to :class:`Minimizer` as iterables of Tensors. The class serves as a wrapper for :func:`torchmin.minimize()` and can use any of its methods (selected via the `method` argument) to perform unconstrained minimization.
.. autosummary::
    :toctree: generated

    ScipyMinimizer
    ScipyMinimizer.step
Although the :class:`Minimizer` class will be sufficient for most problems where torch optimizers would be used, it does not support constraints. Another optimizer is provided, :class:`ScipyMinimizer`, which supports parameter bounds and linear/nonlinear constraint functions. This optimizer is a wrapper for :func:`scipy.optimize.minimize`. When using bound constraints, `bounds` are passed as iterables of the same length as `params`, i.e. one bound specification per parameter Tensor.
================================================
FILE: docs/source/api/minimize-bfgs.rst
================================================
minimize(method='bfgs')
----------------------------------------
.. autofunction:: torchmin.bfgs._minimize_bfgs
================================================
FILE: docs/source/api/minimize-cg.rst
================================================
minimize(method='cg')
----------------------------------------
.. autofunction:: torchmin.cg._minimize_cg
================================================
FILE: docs/source/api/minimize-constr-frankwolfe.rst
================================================
minimize_constr(method='frank-wolfe')
----------------------------------------
.. autofunction:: torchmin.constrained.frankwolfe._minimize_frankwolfe
================================================
FILE: docs/source/api/minimize-constr-lbfgsb.rst
================================================
minimize_constr(method='l-bfgs-b')
----------------------------------------
.. autofunction:: torchmin.constrained.lbfgsb._minimize_lbfgsb
================================================
FILE: docs/source/api/minimize-constr-trust-constr.rst
================================================
minimize_constr(method='trust-constr')
----------------------------------------
.. autofunction:: torchmin.constrained.trust_constr._minimize_trust_constr
================================================
FILE: docs/source/api/minimize-dogleg.rst
================================================
minimize(method='dogleg')
----------------------------------------
.. autofunction:: torchmin.trustregion._minimize_dogleg
================================================
FILE: docs/source/api/minimize-lbfgs.rst
================================================
minimize(method='l-bfgs')
----------------------------------------
.. autofunction:: torchmin.bfgs._minimize_lbfgs
================================================
FILE: docs/source/api/minimize-newton-cg.rst
================================================
minimize(method='newton-cg')
----------------------------------------
.. autofunction:: torchmin.newton._minimize_newton_cg
================================================
FILE: docs/source/api/minimize-newton-exact.rst
================================================
minimize(method='newton-exact')
----------------------------------------
.. autofunction:: torchmin.newton._minimize_newton_exact
================================================
FILE: docs/source/api/minimize-trust-exact.rst
================================================
minimize(method='trust-exact')
----------------------------------------
.. autofunction:: torchmin.trustregion._minimize_trust_exact
================================================
FILE: docs/source/api/minimize-trust-krylov.rst
================================================
minimize(method='trust-krylov')
----------------------------------------
.. autofunction:: torchmin.trustregion._minimize_trust_krylov
================================================
FILE: docs/source/api/minimize-trust-ncg.rst
================================================
minimize(method='trust-ncg')
----------------------------------------
.. autofunction:: torchmin.trustregion._minimize_trust_ncg
================================================
FILE: docs/source/conf.py
================================================
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath('../../'))
import torchmin
# -- Project information -----------------------------------------------------
project = 'pytorch-minimize'
copyright = '2021, Reuben Feinman'
author = 'Reuben Feinman'
# The full version, including alpha/beta/rc tags
release = torchmin.__version__
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
import sphinx_rtd_theme
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'sphinx.ext.doctest',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.mathjax',
'sphinx.ext.napoleon',
'sphinx.ext.viewcode',
'sphinx.ext.autosectionlabel',
'sphinx_rtd_theme'
]
# autosectionlabel throws warnings if section names are duplicated.
# The following tells autosectionlabel to not throw a warning for
# duplicated section names that are in different documents.
autosectionlabel_prefix_document = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
# ==== Customizations ====
# Disable displaying type annotations, these can be very verbose
autodoc_typehints = 'none'
# build the templated autosummary files
autosummary_generate = True
#numpydoc_show_class_members = False
# Enable overriding of function signatures in the first line of the docstring.
#autodoc_docstring_signature = True
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
#html_theme = 'alabaster'
html_theme = 'sphinx_rtd_theme' # addition
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# ==== Customizations ====
# Called automatically by Sphinx, making this `conf.py` an "extension".
def setup(app):
    # At the moment, we use custom.css to specify a maximum width for tables,
    # such as those generated by autosummary.
    app.add_css_file('custom.css')
================================================
FILE: docs/source/examples/index.rst
================================================
Examples
=========
The examples site is in active development. Check back soon for more complete examples of how to use pytorch-minimize.
Unconstrained minimization
---------------------------
.. code-block:: python

    import torch
    from torchmin import minimize
    from torchmin.benchmarks import rosen

    # initial point
    x0 = torch.randn(100, device='cpu')

    # BFGS
    result = minimize(rosen, x0, method='bfgs')

    # Newton Conjugate Gradient
    result = minimize(rosen, x0, method='newton-cg')
Constrained minimization
---------------------------
For constrained optimization, the `adversarial examples tutorial <https://github.com/rfeinman/pytorch-minimize/blob/master/examples/constrained_optimization_adversarial_examples.ipynb>`_ demonstrates how to use trust-region constrained optimization to generate an optimal adversarial perturbation given a constraint on the perturbation norm.
Nonlinear least-squares
---------------------------
Coming soon.
Scipy benchmark
---------------------------
The `SciPy benchmark <https://github.com/rfeinman/pytorch-minimize/blob/master/examples/scipy_benchmark.py>`_ provides a comparison of pytorch-minimize solvers to their analogous solvers from the :mod:`scipy.optimize` module.
For those transitioning from scipy, this script will help get a feel for the design of the current library.
Unlike scipy, jacobian and hessian functions need not be provided to pytorch-minimize solvers, and numerical approximations are never used.
Minimizer (optimizer API)
---------------------------
Another way to use the optimization tools from pytorch-minimize is via :class:`torchmin.Minimizer`, a pytorch Optimizer class. For a demo on how to use the Minimizer class, see the `MNIST classifier <https://github.com/rfeinman/pytorch-minimize/blob/master/examples/train_mnist_Minimizer.py>`_ tutorial.
================================================
FILE: docs/source/index.rst
================================================
Pytorch-minimize
================
Pytorch-minimize is a library for numerical optimization with automatic differentiation and GPU acceleration. It implements a number of canonical techniques for deterministic (or "full-batch") optimization not offered in the :mod:`torch.optim` module. The library is inspired heavily by SciPy's :mod:`optimize` module and MATLAB's `Optimization Toolbox <https://www.mathworks.com/products/optimization.html>`_. Unlike SciPy and MATLAB, which use numerical approximations of derivatives that are slow and often inaccurate, pytorch-minimize uses *real* first- and second-order derivatives, computed seamlessly behind the scenes with autograd. Both CPU and CUDA are supported.
:Author: Reuben Feinman
:Version: |release|
Pytorch-minimize is currently in Beta; expect the API to change before a first official release. Some of the source code was taken directly from SciPy and ported to PyTorch. As such, here is their copyright notice:
Copyright (c) 2001-2002 Enthought, Inc. 2003-2019, SciPy Developers. All rights reserved.
Table of Contents
=================
.. toctree::
    :maxdepth: 2

    install

.. toctree::
    :maxdepth: 2

    user_guide/index

.. toctree::
    :maxdepth: 2

    api/index

.. toctree::
    :maxdepth: 2

    examples/index
================================================
FILE: docs/source/install.rst
================================================
Install
===========
To install pytorch-minimize, users may either 1) install the official PyPI release via pip, or 2) install a *bleeding edge* distribution from source.
**Install via pip (official PyPI release)**::

    pip install pytorch-minimize

**Install from source (bleeding edge)**::

    pip install git+https://github.com/rfeinman/pytorch-minimize.git
**PyTorch requirement**
This library uses the latest features of the actively-developed :mod:`torch.linalg` module. For maximum performance, users should install pytorch>=1.9, as it includes some new items not available in prior releases (e.g. `torch.linalg.cholesky_ex <https://pytorch.org/docs/stable/generated/torch.linalg.cholesky_ex.html>`_). Pytorch-minimize will automatically use these features when available.
================================================
FILE: docs/source/user_guide/index.rst
================================================
===========
User Guide
===========
.. currentmodule:: torchmin
Using the :func:`minimize` function
------------------------------------
Coming soon.
================================================
FILE: examples/constrained_optimization_adversarial_examples.ipynb
================================================
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "dried-niagara",
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pylab as plt\n",
"import torch\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
"import torch.optim as optim\n",
"from torch.utils.data import DataLoader\n",
"from torchvision import transforms, datasets\n",
"\n",
"from torchmin import minimize_constr"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "whole-fifty",
"metadata": {},
"outputs": [],
"source": [
"device = torch.device('cuda:0')\n",
"\n",
"root = '/path/to/data' # fill in torchvision dataset path\n",
"train_data = datasets.MNIST(root, train=True, transform=transforms.ToTensor())\n",
"train_loader = DataLoader(train_data, batch_size=128, shuffle=True)"
]
},
{
"cell_type": "markdown",
"id": "closed-interview",
"metadata": {},
"source": [
"# Train CNN classifier"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "following-knowing",
"metadata": {},
"outputs": [],
"source": [
"def CNN():\n",
" return nn.Sequential(\n",
" nn.Conv2d(1, 10, kernel_size=5),\n",
" nn.SiLU(),\n",
" nn.AvgPool2d(2),\n",
" nn.Conv2d(10, 20, kernel_size=5),\n",
" nn.SiLU(),\n",
" nn.AvgPool2d(2),\n",
" nn.Dropout(0.2),\n",
" nn.Flatten(1),\n",
" nn.Linear(320, 50),\n",
" nn.Dropout(0.2),\n",
" nn.Linear(50, 10),\n",
" nn.LogSoftmax(1)\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "accessory-killer",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 1 - loss: 0.4923\n",
"epoch 2 - loss: 0.1428\n",
"epoch 3 - loss: 0.1048\n",
"epoch 4 - loss: 0.0883\n",
"epoch 5 - loss: 0.0754\n",
"epoch 6 - loss: 0.0672\n",
"epoch 7 - loss: 0.0626\n",
"epoch 8 - loss: 0.0578\n",
"epoch 9 - loss: 0.0524\n",
"epoch 10 - loss: 0.0509\n"
]
}
],
"source": [
"torch.manual_seed(382)\n",
"net = CNN().to(device)\n",
"optimizer = optim.Adam(net.parameters())\n",
"for epoch in range(10):\n",
" epoch_loss = 0\n",
" for (x, y) in train_loader:\n",
" x = x.to(device, non_blocking=True)\n",
" y = y.to(device, non_blocking=True)\n",
" logits = net(x)\n",
" loss = F.nll_loss(logits, y)\n",
" optimizer.zero_grad(set_to_none=True)\n",
" loss.backward()\n",
" optimizer.step()\n",
" epoch_loss += loss.item() * x.size(0)\n",
" print('epoch %2d - loss: %0.4f' % (epoch+1, epoch_loss / len(train_data)))"
]
},
{
"cell_type": "markdown",
"id": "therapeutic-elimination",
"metadata": {},
"source": [
"# set up adversarial example environment"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "developing-afghanistan",
"metadata": {},
"outputs": [],
"source": [
"# evaluation mode settings\n",
"net = net.requires_grad_(False).eval()\n",
"\n",
"# move net to CPU\n",
"# Note: using CUDA-based inputs and objectives is allowed\n",
"# but inefficient with trust-constr, as the data will be\n",
"# moved back-and-forth from CPU\n",
"net = net.cpu()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "mighty-realtor",
"metadata": {},
"outputs": [],
"source": [
"def nll_objective(x, y):\n",
" assert x.numel() == 28**2\n",
" assert y.numel() == 1\n",
" x = x.view(1, 1, 28, 28)\n",
" return F.nll_loss(net(x), y.view(1))"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "better-nerve",
"metadata": {},
"outputs": [],
"source": [
"# select a random image from the dataset\n",
"torch.manual_seed(338)\n",
"x, y = next(iter(train_loader))\n",
"img = x[0]\n",
"label = y[0]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "presidential-astrology",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor(1.4663e-05)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"nll_objective(img, label)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "independent-slovenia",
"metadata": {},
"outputs": [],
"source": [
"# minimization objective for adversarial examples\n",
"# goal is to maximize NLL of perturbed image (image + perturbation)\n",
"fn = lambda eps: - nll_objective(img + eps, label)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "bacterial-champagne",
"metadata": {},
"outputs": [],
"source": [
"# plotting utility\n",
"\n",
"def plot_distortion(img, eps, y):\n",
" assert img.numel() == 28**2\n",
" assert eps.numel() == 28**2\n",
" img = img.view(28, 28)\n",
" img_ = img + eps.view(28, 28)\n",
" fig, axes = plt.subplots(1,2,figsize=(4,2))\n",
" for i, x in enumerate((img, img_)):\n",
" axes[i].imshow(x.cpu(), cmap=plt.cm.binary)\n",
" axes[i].set_xticks([])\n",
" axes[i].set_yticks([])\n",
" axes[i].set_title('nll: %0.4f' % nll_objective(x, y))\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"id": "ambient-thread",
"metadata": {},
"source": [
"# craft adversarial example\n",
"\n",
"We will use our constrained optimizer to find the optimal unit-norm purturbation $\\epsilon$ \n",
"\n",
"\\begin{equation}\n",
"\\max_{\\epsilon} NLL(x + \\epsilon) \\quad \\text{s.t.} \\quad ||\\epsilon|| = 1\n",
"\\end{equation}"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "surprised-symposium",
"metadata": {},
"outputs": [],
"source": [
"torch.manual_seed(227)\n",
"eps0 = torch.randn_like(img)\n",
"eps0 /= eps0.norm()"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "missing-bargain",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"-2.2291887944447808e-05"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"fn(eps0).item()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "miniature-fight",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"`xtol` termination condition is satisfied.\n",
"Number of iterations: 32, function evaluations: 50, CG iterations: 52, optimality: 1.02e-04, constraint violation: 0.00e+00, execution time: 0.57 s.\n"
]
}
],
"source": [
"res = minimize_constr(\n",
" fn, eps0, \n",
" max_iter=100,\n",
" constr=dict(\n",
" fun=lambda x: x.square().sum(), \n",
" lb=1, ub=1\n",
" ),\n",
" disp=1\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "wanted-journal",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor(1.)\n"
]
}
],
"source": [
"eps = res.x\n",
"print(eps.norm())"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "spanish-wright",
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAPEAAACHCAYAAADHsL/VAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Z1A+gAAAACXBIWXMAAAsTAAALEwEAmpwYAAAOOUlEQVR4nO2de4xV1RXGvyXKGwVkBDsRBiFSA5QmQJsoKg8fRSEytdSmQCltItgYYgAjoaFgbXnUxCb9g2hMFCehtYrQClWIMRBBQyoJCExiQQggwQoDhfASVHb/uHdO916ds889973nfr9kkrVY++yz7xzWnPXdvc8+YowBISRcrqn0AAghhcEkJiRwmMSEBA6TmJDAYRITEjhMYkICpyqTWETGisgxyz8sIvdWckyk+PA6F4eqTOJCkAwrReRU9ucPIiKe9hNE5BMRuSgiW0RkQK59iUhD9piL2T7uVX3/VESOiMgFEfmbiPQuzaeuPdJcZxHpKCJrs38kjIiMVfGeIvKqiJzI/iy1Yv1F5Lz6MSIyPxtfpGKXROSqiPQp4cd3aHdJDOAxAFMAjADwHQCTAMxuq2H2F70OwGIAvQHsBPDXFH39BcAuADcC+DWAtSJSl+17KIAXAcwA0BfARQCrCv94JEvO1znLdgDTAfy7jdgfAXQF0ADgewBmiMgsADDGHDXGdG/9ATAcwFUAb2bjy1R8JYCtxpiWwj9ijhhjKvID4DCABQD2ADiLTPJ0zsbGAjim2t6bY78fAnjM8n8JYEdM28cAfGj53QBcAvDtpL4A3AbgMoAeVnwbgDlZexmAP1uxQQCu2O1r4acarrM67hiAserfWgCMtvxFALbFHL8EwJaYmAA4CGBmOX/Hlb4T/xjADwAMROav6c+TDhCRMSJyxtNkKICPLf/j7L8ltjXGXEDmIgxtK676GgrgkDHmnCdu930QmSS+zTP29kqlr3MuiLKHxbT7GYBXY2J3IVN1vVnAOFJT6ST+kzHmuDHmNIANAL6bdIAxZrsxpqenSXdk/uK3chZA9xi9pNu2tu+RQ19pj9XxWqLS1zmJTQAWikgPERkM4BfIlNcOItKapGtj+pkJYK0x5nweY8ibSiexrU8uInNhCuU8gOst/3oA50223klo29r+XEzc7ivtsTpeS1T6OicxFxkZdQDA35H5ruNYG+1mAnizrSQVkS4ApiL+Ll0yKp3EpaAZmS87WhmR/bfEtiLSDRnt2txWXPXVDOBWEenhidt93wqgE4D9KT4LiSfNdfZijDltjJlmjOlnjBmKTF78026TQ5L+EMBpAFvzGUMhtMckbgIwT0TqReRbAOYDWB3Tdj2AYSLyiIh0BvAbAHuMMZ8k9WWM2Q9gN4AlItJZRBqR0XutemgNgMkiclf2j8NvAaxTGprkT5rrDBHplL3GANAxe80kGxskIjeKSAcRmYjMF56/U100AjgDYEvMKWYCaMqzEiiI4JI4mxQ+zfEiMrprL4B9AP6R/bfW45tFZBoAGGNOAngEwO8B/AfA9wH8JNe+sm1HZY9dAeBH2T5hjGkGMAeZZD6BjBb+VV4fugYp5nXO8i9kSuZ6AJuzduuagJHZfs4BWA5gWvb62cQmqYjUAxiPzB+WsiMV+MNBCCkiwd2JCSEuTGJCAodJTEjgMIkJCRwmMSGBc22axn369DENDQ0lGgrJh8OHD6OlpSWfpYZt0qtXL1NfX1+s7oIlv9WbfgqZCWpubm4xxtS1FUuVxA0NDdi5c2feAyHFZ9SoUUXtr76+HuvWrcupbTmmJ0uRTJU6byG/ryFDhhyJi7GcJiRwUt2JSW2T5k6Spq3vrqf7KdWd2ddvuaqBfO/UvBMTEjhMYkICh+U0KQm6BLVLxTTlaZq2hZT7+jy2X6ySvlRfBPJOTEjgMIkJCRwmMSGBQ01MysI11xTnfuHTlTqWpq1PE2vy1fRJmjhfrc07MSGBwyQmJHCYxIQEDjUxyRnf3K+O+z
Rmkvaz+03Sub6233zzTU7naMvPdbwdOnSIjQHpvgvgsktCahQmMSGBw3Ka5I2vZNZlpl1WJpXTV69ejWxdEieV9HHn1P2mmY7S+EroUi0p9cE7MSGBwyQmJHCYxIQEDjUx8ZLmEUJbg2rdeO21//uvlqRrv/7668i2dWwShehRn6+1te3rWLl2InHGU/IzEEJKCpOYkMBp1+X0jh07IvvTTz91YnbJBgCzZs0q+vl1nzNmzHD8cePGFf2c5USXknYJbZfPOqbR18KeVtJTTLq83rNnT2QfO3bMiX311VeOv2TJkshOmlLySQObKVOmOP6DDz7o+L4thYtVevNOTEjgMIkJCRwmMSGBUxWa+PLly5H99NNPOzGtZdOwd+/eyP7ss8+8bUsxFbB69WrHf/vttx3/tddei+yxY8cW/fzFJs20jObSpUuR/fzzzzuxgwcPOr6tg5O0q33syZMnnZg+1tbpSVNXtg72aeKNGzc6/vvvv+/4y5Yti+zRo0d7z5kvvBMTEjhMYkICh0lMSOBURBNv2LDB8desWRPZr7/+ermHUzZOnDjh+J9//nmFRpI7ue7WAbga9L333nNi9jV/5513nJhvBw59Dt9yTh3zPW6YtONGro9Oam2tdbl9zZN0eJrHNZ3jcm5JCKlKmMSEBE7ZyunNmzdH9syZM53YmTNninKOAQMGOL6vZOrYsaPjv/LKK7Ft9dTQc889F9n29FgSd955p+Pfd999OR9bjehyddu2bZG9aNEiJ3b27NnI1mWlr2TWJfLNN98c21b3q4995plnYs+5fft2x29qaops37JQHRs+fLjj29NKpXrCiXdiQgKHSUxI4DCJCQmcsmli+6v3QjTwnDlzIruurs6JPfXUU47fvXv3vM9jo7XMyy+/HNnHjx/PuR/9XUCfPn0KG1iJiNvNI0nLnjp1KrJtDazRjynq7y6mTp0a2b169XJi06dPd/zOnTvHjk9fN9+OHJq33norsltaWpyYbzfOhx56yPFvuOGGyC5k2aoP3okJCRwmMSGBwyQmJHAqsuxSa6IRI0ZEtta5+rG1gQMHRnanTp1KMDrgwIEDjv/GG284vk8H2zrcfgwNKM0WQKUmzYu6bfQc7ZAhQyK7d+/eTmz+/PmO369fv9h+9Lys/YhjmmWN+tFUWwMD7pLY6667zol17do1sh9//HEn1tjYGHvOYmlgDe/EhAQOk5iQwClbOT1y5MjI1iXyE088Ua5h5MTs2bMdf+vWrTkfu3LlysjWpVZ75/bbb4/sBQsWOLFHH300spPeI2yXzHpZq25r95U0xWSzePFix7d3zQSALl26RLaejpo3b15k690u07wjWcP3ExNSozCJCQkcJjEhgVM2TWzrJduuFuxpo0KWhU6cOLEIowkDPUUyaNCgyB48eLATs/Wq1q5aN9q+fouDnmKy40kvDj99+nRkX7hwwYnZGhgAunXrFtl6isl+c4eeAtOfxY4nLfWkJiakRmESExI4TGJCAqcq3gBRDdg7MO7evbtyA2kn+Lbc0drQt/wwSRPb88haa2st+9FHH0W2XnbZs2dPx7e3b9JjsHVvmqWeSZrY7iuNPuadmJDAYRITEjg1W07rqYBdu3bl1c+wYcMc337CpdbwvZjbfnJNP8XmK5HTLNHU5ao9TQS4U0x9+/Z1Yrr0tkvoW265xYnZ4/ftHgK4vwctG9IsE/XBOzEhgcMkJiRwmMSEBE7NamK97G7VqlU5H2trJL3rx0033VTYwKqAvHddtPSgfsOGvaxR78iiteGXX34ZOxbt2+exd74E/l+Xv/vuu5GtNbDW5T169IjspUuXOjHf9x6l2r3DB+/EhAQOk5iQwGESExI4NauJC9l5cvLkyZFt7+LY3knSe/b8qda9tl7V2lVj96PnXbXWtueN9Tn1Fjy2DrZ3yQSAc+fOOb79Bkt7900g/g0ZbY037jggeclmrvBOTEjgMIkJCZyaKaf15uAffPBBzsfaOzUCwP
Lly4syptDxTZ/oUtGewtFPBWl8O0TqqSG79NYvCrefWgLckvnixYtO7I477nB8e8dT38vBk57Iso/Nd1llErwTExI4TGJCAodJTEjg1Iwm1rpWv1nARr+cfNq0aY5vL8mrJZKmSHy7bNgx/Siir9+kNyjYx+o3iehppCtXrkS2XpL5wAMPOL59jfV47WkurYmTdty04QvVCCEAmMSEBE+7Kqf379/v+C+88EJk26VUEnpz+0mTJhU2sMCIKwGTSkXb9/2+fdMw2tdtjx496vhr166NbL3pv29Ter0jy4QJExzftxOJXUInTRvZ4+cUEyGkTZjEhAQOk5iQwGlXmviee+5x/C+++CKvflasWFGM4bQLfC/x9r0Yzadzk/rxLWucPn2647e0tMSOXetp+ymnhQsXOjG9M6aPUmnbfPUz78SEBA6TmJDAYRITEjjBaeJ9+/ZFtr3DBgCcOnUq537sOWQAuP/++yNb7+RAMiQtE8x1GWHS43uHDh2K7CeffNKJ+V4Ar/t99tlnHf/uu++ObH2NfS8LT9Lw+VIsbc07MSGBwyQmJHCCK6dfeumlyD5y5Eje/dTV1Tl+Q0ND3n21Z3wlsl6OaJeZhZScGzdujGwtkfQ5fcsjdcncv3//yNalrN483vdZ8i2DS7WRPO/EhAQOk5iQwGESExI4Va+J169f7/hNTU159fPwww87fq09XpiGuOV/SY8Q2nE9ZRPXDgC2bNni+Js2bYpsrXN9fY0bN86JjRkzxvFt3Zs0bVSsXSrTLKXM+0V2eR1FCKkamMSEBA6TmJDAqXpNXCzmzp3r+PpNAqRtfJrOp+HSxLR+tpdPptlNcurUqbHnBJLfPOE7TyngbpeEEABMYkKCp+rL6cbGRq9PykeaKaakY23Gjx/v9X3nzDWWllIskeSyS0JImzCJCQkcJjEhgSNpdISInASQ//N/pBQMMMbUJTfLDV7jqiX2OqdKYkJI9cFympDAYRITEjhMYkICh0lMSOAwiQkJHCYxIYHDJCYkcJjEhAQOk5iQwPkvxBBGQYvNKc4AAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 288x144 with 2 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"plot_distortion(img.detach(), eps, label)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "varying-commission",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
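The constrained maximization in the notebook above (maximize a loss subject to a unit-norm constraint on the perturbation) can also be approximated with plain projected gradient ascent. The following is a minimal standalone sketch in NumPy with a hypothetical toy quadratic objective standing in for the network's NLL; it illustrates the projection idea only, and is not the trust-constr method that `minimize_constr` uses:

```python
import numpy as np

# Toy stand-in for the notebook's NLL objective: f(eps) = eps^T A eps with A PSD.
# (A is hypothetical; the notebook maximizes a CNN's NLL instead.)
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T  # symmetric positive semi-definite

def f(eps):
    return eps @ A @ eps

def grad(eps):
    return 2.0 * A @ eps

# Projected gradient ascent: take an uphill step, then project back
# onto the unit sphere (the feasible set ||eps|| = 1).
eps = rng.standard_normal(8)
eps /= np.linalg.norm(eps)
for _ in range(500):
    eps = eps + 0.05 * grad(eps)
    eps /= np.linalg.norm(eps)

# For this toy objective the constrained maximum is the top eigenvalue of A.
top = np.linalg.eigvalsh(A)[-1]
print(np.linalg.norm(eps), f(eps), top)
```

For the quadratic $f(\epsilon) = \epsilon^T A \epsilon$ with $A$ positive semi-definite, the unit-norm maximizer is the top eigenvector of $A$, which gives a convenient correctness check that a trust-constr solution of the same problem should also satisfy.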
================================================
FILE: examples/rosen_minimize.ipynb
================================================
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"from matplotlib.colors import LogNorm\n",
"import torch\n",
"\n",
"from torchmin import minimize\n",
"from torchmin.benchmarks import rosen"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Rosenbrock func\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAYkAAAFECAYAAADSq8LXAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAAByz0lEQVR4nO29e9A12V3X+/n13vt5nvd9Z96ZzEyuk0BCiClHziGWMcHD0RMgwUkEIhRqRovDJeUYi1haSmkojmJJnQN6DiiYSBzJ1BBLE5HDZaIDIaCcARVIiAEnhMAQAnlnJpkkc3tvz2Xv/p0/ulfv1avX6tu+9d57fd/q9+nutXqt1b2712/97qKqRERERERE+JBsegAREREREcNFJBIREREREUFEIhEREREREUQkEhERERERQUQiERERERERRCQSERERERFBrJ1IiMi9IvK4iDxknbtFRD4gIr+b/31W4No7ReTjIvKwiLxtfaOOiIiI2E9sgpO4D7jTOfc24BdU9WXAL+THJYjICHgH8HrgDuAuEbljtUONiIiI2G+snUio6oPAE87pNwI/mu//KPDnPZe+CnhYVT+hqqfAe/PrIiIiIiJWhPGmB5Djuar6GICqPiYiz/HUuR34lHV8CXh1m8YP5EjPJRcAqRaK5xymqq9+Zcd7CNLyXGAc+aGaHd919rn8evWV2e3Z/Yjdj9seqJTP2ceaWPsjYJQyGacAnM0SmCaQ5k1p1oGYwam12V3Yzv/Wvre8DssKIhD6rQL11HPO3S+es8zLSs9VgESRcdbahYNTrp4eoNN8PZeCpNaztPfzDsR6tsU5ewD2szURF9xnVvlttFLmrx9qL/CjqKeyVnYC/Wrgtw6MITiO+bln0ic+p6rP9o61JW6T5+kpp52vu8yT71dVV8rSGyKSAN8DXAQ+pKo/2nCJF0MhEm3g+2SD04GI3A3cDXAkF/iyoz8HIkiS2JXAHJvJM5H8MMn2S5OqU1+kcl1x7F5v6lr1VGTOyyXJ/Bxk50VQ019xbF2fMD9Osmt1NCcqmmR1AHRUrq9Jfm6Ulaf5fjqa10/HkOZviFr76QHMDmF2lD3+6YWU5FmnPOeWywB89qkbmD11SHIt63x0LCRnkJzlQ52CTEFm+XFKMflBPgmmzCceBUm1dFye6DzHNtoSDecNUx/R9B2bZ106Zv7sE4rfOU0yoqrmWY4hnWQbwOyckp6fcXDrMQB/8oV/yK/+wRcy/fwRAKOrCePrQpLPQckpjM6y5wnZs02mILPsppNZ9pyT/FhmWdn8WWf7pn723K3ymWaEJNVSfTPZZr+LzhcEaVo+VlOen0itc5C1ax9bfaFpeVL31nXqWNerpvO2AvXVOf65a//6D1gQp5zyavmqztf9vP74bU11RORe4GuAx1X1S6zzdwI/CIyAH1HV7yOTtNxOJrm51HlAOYZi3fQZEXk+QP73cU+dS8CLrOMXAo+GGlTVe1T1lar6ygM5XOpgvUjbzkQLwl0l2ueXDNH55h2DZjOlpILOhFmabaNxio40e7sMx2FvlI/VmZxL9UyXIuX6AU4J2rXX3J+/bXOspfF7uEAPQVH3GVh1C6IioCNlMp4xGc84nk1IpwmkAmn2rKH62wR/qx7oTWRXhVTX930tC5J039rhPhydbo2+9uXAf1PVvw389b63MhQicT/wzfn+NwM/7anzQeBlIvISETkA3pRft1rYK5dV9+OBqM5FAi3Qqb5nNV5MNAHCYFa8RZ180zRhptk2Hs0gmTdWrKjtybIycGtzzzXdhkt4aNGHr7+u7YXa9x17RHelZ1EQC0VGytHBGUcHZzxzekg6S8rclW8r3UD5d3TFUW3Q+B55xFbB+usKIjqgYKWSSOcNuE1EPmRtd7vtBnS6IX3tJeDJvM6s772sXdwkIu8BXkP2QC4B3w18H/BjIvJm4A+Bv5DXfQEZ6/QGVZ2KyFuB95OxVPeq6kfXPf69g1Ka3JLTTN
wUERERgEgXzsDG51T1lT2uC+lrfxD45yLyp4EH+wwINkAkVPWuQFFFiKeqjwJvsI4fAB5Y0dD6waxeQgrwrm0tox0PzKqy01pLy39dxbNYimmdCWfTTKExGc+QkVqrZEVFSjp/tdU9AXGP6U9zSZM9+OKcc95tozTuBjRyCyGuwuGObC5BHCJbEZcVx1ockygXDjKlw7Wzg0zM5IiVvL/JELHM1f2AOIU6CBjOoCtuEpF7gPep6vs6dulCVfUa8OY+A7GxTYrr/UZKxj85sCdSL5zvqqgfKA9NNmIRmIoYI9dJTGfZ6un8ZApJWczkE73YYxb7HtzJtbjHXB7voXY2UckaK5fZ9+4iKJpqKPfqIgJtq33sexaFkjsTN904OQHgkcs3obNMHwFzkZN9L15xUsPvGiI6IdQS2rT+2r1EP07iaVWtiJhaoJO+tiuGopPYPPqsUlawsmmtU/DUqVxX047voy9PPFqcs/UP6Zj5JGVxFJLCbJYwmyWMRElGaaaXSLSivPYpnr3KZt++92bmbZRk/A7sFX6jfqRFe7Xjde/XqVsaQzLfSCAZp1w8OObiwTGnZyNkJmFdhKtLovweNP3OFTjvjPddbPF+dtWltUbf77TNdctSjguZFWPXLeckRORrO/a4Un1t5CQiOiE5myuvIyIifLBlqZ3QyEn4dLqq+q5V6msjkahDqnO/hh2ELULKVlru8p7wsS2iUCAVZrlOIhElGaeko6xCaaWMo4swf7V6bIuJbB2EIghaFn95bzBwHyE0/NQlPwiREsdRKq9r26OTKImfRsponHKQZMYo0+koN301g2iwVvLdawNnsVMYiKlsyR+rPRp1EiGd7ir1tXtHJDRN+/6A64VPB9FFse2Z830wk08xgRv5vyvb9kxMhXgjd4ZLZ1mHqQrJaK6TCCl2zfjEmShNWUjH4L+ReZ2SfqLN9TW6hFB5XRvu/flMad3nQZIT1NxH4jT3apxNR9m7YD9zV4fgOXaJgXTRG3SZZ31inC3RUWi6woGuiJPYBPaOSCwNK7REagtJKTymW41HsT5q38zotO/oh4VMJ1FRdloOdQAzFcajGacFJ5GNreAMPN030jTfRG8phV1Ft5dQtIRPD+F14HPbDpSXhu0jiMncEkwSzf0jMg/rdCbIbO5EB47+wR1KG86iSzmUOZEhEIChWznZkRm6oa9100oRiUREJyTTeXiOiHpUCFXE/iByEhFrQR9uxaz0XFFVqjCyl8blYpdrsM9Xyi2LGmDuCZyvdqezhMl4BqP5ctdwE+bYXU0Llg7CldNbYiRzuSIl6xnfhLwsPwn3vNFHBHUQDeIl1wTWWDgBMMp8JC6fZaFkdJY4XJu1mSY84qXScHzchRsTyUVfjmHoq/x1YYd0mVsgnI+w0T1MR80E4jVvxCvTtstK+6Z+Kmi+nU1HjEcpkmRbRiAobeoc14mDakU9eMoqk3q9Utlb3nTslDUqrZ17L+ktEgpz4WSUcuPkhKunB1w9PYCZ5AEOmUd/rVNcE9BbFPX9L0PXuE8rM3HdBQiIJJ03+pvArhSRkzDKK0eGqJqaH65V/UYUUS/z4zyq6yphrIJq4axINVBmCIdPJyFKFt1zZjiJEZPkBHGtm8yqOgt4W9ZR2DuuYttRoxj1gwmjbiarEufg0WG0ir9k9+PjIDzjc9v3OszhL7P9IyDzkbj58DqffPKWrDx3pLM9q93AfnVEo6tH9lrEY6VIr9qf+wgonlUDbNAqFdUlSF9OIoqbIrYfUSfRHo3K+IjdxYaNWpaJ+LmvEqnOBXqel0ZUi9VwEE2xoezyVcxKNWInd3VrdBKzaYIYr2vI/CUsPwmgFLvJ7Be34vbn4QoqsOpUYia1Xai6K37P+TbX2rAMwIp6FfNXIQutDozGKedGZ0WIEze3RtuwG0tD0V9DBy25gcZIsQPxc1gYO0Qkok5iVQixvC5yH4NlI/Qx1uaIsMqziV9L16HzpESlNs095JvOBNUsr8
RobHQSc71ERVQTmDh9vhPudYVjWx3a6DSaaLWVz6Lav9NO0/04OgkTr0lGyuFkyvXZhOnZiOlZ5kjnht+o84nIEvo4v3FAbxESLa1E19DlPde0/fezW4g6iYgthMOdbLW4yeVImjiUNhyMhc5OfBG7CaGvn0TUSUTUoMHctSKa6ipa0qoiu6qoDjTq4zqcFa3oXHGdzhJmqTDOxU0mbLgmlmjMVezax5bYyJS7k6/iWQlb1wd9FNzb891ug6ipUWntXhvgQIC55VcubjocT3nm9CjLRgeZdZOdxzpkuRQiRpWAfS7nUXNtCBXl+HJEUbsDS5a6A4hEogtaeTU36BAWbX/JqIQOD8G2ZsIjzkgtpUIqnM5Gma8EzMOGO3L5inUT83Lfqr8wcvIQh5LfRH6tSxBrb6+GeNhxmrzX+K51ymwiUiIySS5uSjKCeuHglKdPjwrv9UwfMe+gEBcWg6DVJL8xp75FCETPaMiDQCQSW4oBhNJYFYogeMYctJid2hEtSXPFsqWbUIsQFPTAfmMcPYapm+YJiG44yhLnyEjRRNHcLLBQWjv6BrGGLFCeaF0xDlWiUeSbMA1Y1/gIRq0znNVepQ2fDsO5n5LawKfHsHNtJJCMsytuOjzm0csX5/kjlLny2r5pt3uLYLvcQatQGjUhwnfaa3xVRGaH5pmouI7oBJluegTbg9Z+GRG7BWHd+SRWiv3iJNrAmOD1cYZZRmjxupW/Gxk2pEMoyuuLu8DmFNSVi5t9E4QuFWZpwigXoSQjZTaay0hKq2jmXEUp7IajU2iMg+RyG+Yc1fNNHETX83XtqannETsB2TJtpIxy0dzNB9f5xNmthbipYEsCOomuntKNaGrLXXmHuJRlrdAXMYndpDltjN0UsWq08qHog2Ky0crsVon66iq2nbKSiMMSichMmE0TkrzCaDxjmkyKcNgmIqw3r4Tnr+v34NI+V4mtUo7r1NrPwm7TJ2pydCp15UGlvKOTyMRwyihX8h8mU06no3nUVycUR9c4Tf7wKjU/7BIQw3UIPdOXDhL7TSQ65WdoqDsEfUfTGNwgf+YaoDpLev5qppMw+opSWA57tZtCOk1I85l0NEoLCydgHuzPJgKu3N7HFbSd6DVAKFrCDr/RCr56PiJizguWpVemszk8yOR4V2cHmX9ESXFdM/m7zx78z8k8C7esabU9hAl/mdZT67gfI27aEew3kYjoDJk6yuuIIGKo8D3GpheMS0T83IeMUHY6KF7CIrNcJTR4+drGYH82V0DDot0tNNZNRj6dCuksKUJLTMYzrjuZ6ko6CEcEUzANts7CGr8pD4qcivrGQqjdTF0SMYl9fj5m+9i1Wio4JcrXqnOtbQ6sicJIOTc5A+Cpk3N5UL/5jZWsk1yuIYDaFKd1dcGrZ6hYSIWe6V46SnsQiUTETsI7+Wu+K2UC4oqboJglJVU095WAjEgUIcMBTaSIBGsuc01iRebtmwm4ra+DGwW2iVhUwno0KaR9/foIhk+HUYib8uMEklHKxcNjAC6fHZJOk0InUegmahTXpXvz/IYR64YrP22NmJluJ7GI89wC7Ytqc8wiLM6g4/BCYcPTEZUJykc0jK8EwLnJNPeVMI1XV+EVj2T7r6M2cRXVISsm93yrGE+h83WKat/1IUW2ubYgEkoyTrnl8BoAv3Pt2fN4TWbsahOD8j1V9qGXiKtrLokK1iHrH4J+pA366yQGad00GBW8iLxcRD5ibc+IyN9y6rxGRJ626vyDDQ13b5HMNj2CiIgtgEj3baAYDCehqh8HXgEgIiPgEeAnPVV/SVW/ZkWDGPSPVQsz9oCxkkHIKqZYo4WsnUqVHGMjj06CVJjm4qZRkkWDnZl0prl1k6uTcC2c7KGUxE+m3L0nMxxXNOVe64OPUyBwzhVH5eMvuqjhOjTPQleI3kbKaJxy0+Q6AMenE3QmJLbHtaOHKDUfui9PzCa7vUbOocSpbMkK3ocNjb0Nl78tGAyRcPBVwO
+p6h9seiALOdctEz0IWEUkVfO9+HwiClNXp9w1gXVFIiiFr4TBaJQWfhJFsD8TpiOdE4qsg2wr3a1VLr57sStrgFC0RCUmk+9an3jMul6d+6nsm3vJw4MbnBkfCeNzYnJJhJ41ZSLim/xrCUJFVNVjUh0KERlSLorByGgWx1CJxJuA9wTK/pSI/AbwKPAdqvpRXyURuRu4G+CIC8vxhnaxCOfh81no3D+dJr8mFPGbTNtuuZbDhFdSaJpjK68EwCxNmIxnHFvpTF3rpmwn8NeMx9IxqJQOvZxDSZHdAnXRX10LpUodn17Frm/v50H9AEiUo8mUJ08vAFnCJpPXurg5n04C/3FxjSlftrXRpufhleS6WHKbwvZKJDwYHL0TkQPg64B/7yn+MPCFqvqlwD8HfirUjqreo6qvVNVXHsjhSsa6j0hi7Kb22J15ImKPMTgiAbwe+LCqfsYtUNVnVPVKvv8AMBGR29Y9wN4wSd8Lc0VF8m0tfbeAL99AMBuatbnHxlcinSWc5ZwEiRZb4SvgrrY9m1Je5Qf1Ba5FkVVWZ91VKXfEXrX6CXO9Vb/OsqloL8k2GSkXDk546vSIp0weCTvTn0d/UPFwp6WeIYQ1vH/Fe66ardzdb2GnkCvRdkRxPUQicRcBUZOIPE8ke5oi8iqy8X++aweapmi6Zq+fpo+hxXCaiEkljeWicJtQf/pSU1YoRVPJRU4CaWYKezCakYzSLO91PkGq2azJ2Ju8p7gpSpNvJTeDXc+tT1lXUNEbeNp326z05+ub8j2496b5vRtimYxTbjm6zjMnRzxzkuWRkFTKCmZ3K3XmnHMU1H3QJUx4q/es6d1eM6FYy/c/ACKRW4P+koi8U0Re07edQREJETkPvA74CevcW0TkLfnhNwIP5TqJHwLepLqTS5H1oePTi+KmMOoSF0U0YEhK5yVARTpvbSAi94rI4yLykHP+ThH5uIg8LCJvM8MArgBHwKW+9zIoxbWqXgNudc6909p/O/D2dY/LizRtl8e2i3VUH0V4nckqzImAW6xWf2bfqtQUAda1bhLPcZorrs+mI8bnjknG2eotCxuu1opcKqInwaPYtu/FM7iQx3VnE9g6biZU37nGcA7FfRT3midgypX448mMWw+u8ntn+WtvHOnS8m9RZ91UGVqF27B+zFAE2Kbn03ct1uW6toRi3VKArjDixNXgPrI58N1Fd5nLwDvIFtiXgA+KyP1k7gL/n4g8F/gB4K/06XBQRGJoUE2RRUL+akrnkMFNHtzm+wiJfZpgPkSfZVXN5CGqaEkQP991s9iJUSLkE90sj+E0yonEWaJeE9GidVeub7qziEDFuskakxtavLUVmCOass/7rJsqsZo8YqhC3GRyaOS6GMmJxMF4xrnRKSen2adofCRC1k0VaC7yqSn3YtGV+6pESLoYAdAFr18aVqRjUNUHReTFzulXAQ+r6ieyruW9wBtV9bfy8ieB3tY7kUgYtOUM1oW+5rU2keka5A+qk79bbIflsLu0VrvGBFaMCex0hKowznMmFCE6rInT9pMQh0BUOAGXs2hzvgsnUXfOnK8TLXmIXEmXYhT4wNHkjGemRwUhFRPczya4bjM+3YQpt4lLC4SJzwJEZGgS4E1wHv2IxG0i8iHr+B5VvafFdbcDn7KOLwGvFpFvAP4scDMLSGAikYjoBJlRjTgbAVgcjEEb4hSxc1Da6xgcnJGZ+XcN8OfrTFX1J7D0u30RicSqsOYQH5IuefJuSI1qe2PPr5n/NdwE5MH+ZqPMDBayVfRIUeNxbcRNjojGXn2XHqUlVsqLKw51S/G4to+let61tLLvxdVJzBMuUZi+Alw4OOWJkwvM8mCIGMsmK1S463G90hwVQ5DWDI0T6Yr+Oom+Af4uAS+yjl9I5my8FAxIvrIlWKNtd615YccxhMI1NMXysc0wQ3Ur7ZgtN4HVWRY2/Gg85Wg8zRTYiZbFMnWbuWWz+cQ9WJOxZ8JuQiiURsgU1/
hGlPQRnnEX47F8RBhlpq/JOOWmw2OePjkinQrpVEr+EdQ8d/d5l8o9cZvqfudGM9e2uSRoeGeXjSETk34msDeJyD0i8rUde/sg8DIReUnujPwm4P5l3UrkJJaNZcd6WlRRHYBPPxEKD146NZqXFaHCXZ2EkvtKZCc095W44fAEyPInzKzQFJkyV0oTtOR/zbHXWslSHBeyepw6ZrjWvTaGT/dwDL6cEZV9m6A5nFCJ00i0UOI/++gKj16+iKa5TsLykSjG6jzbWhGWh4BXbm8Vc+sqOJAtNovtKW5q5CRE5D3Aa8j0F5eA71bVd4nIW4H3k80U94bCFfVBJBIRnRB1EhERLdBvjdiYdEhV7wqcfwB4oFevDYhEYotRm3goX14XmeXEWnbXtWnSoTqCyJIOwteGRz9hVqzpTJjOEpL8xGicMh1pZaUtLudgmsqPbX+JksmszWFg6SfEKbfb8yHECTjlvpDhdUmUKpZcVuTXG8fHHJ9NYGYNPrUvro6/fDMBbqHRRNWpb4tuavpbqzhpm7EiTmIT2D8i0Vah3Cdq7CqV1W3aXmL/heipRlfhEo1SqHBrspM0Cxue5jPpeJRyMtJKOlM3vwTWcbaTN6/O/C0NiusAsbCvtxFSXJfO2dfl5SX/CVfEZOrmeomjPKf1STrmdDoqIuYWPhKeZ2r+ekVOjr5iaWhDFFZJOPq03VZM5ba9LB8LqS6ythk7dCtrwLavopag7PaJmgRKsvPCOsecm2XB/mZpth1Opkgyj+GEHezPJhCOXL/coaOodo+pWeH72nfrN7Xv4zLq7sG612SccuHglAsHp3z2+EamZyNkJiUfCVcp3RadCcSevdNrw3oV1yvF/nESEQsh6iQ6QGgU70XsJtpY1HkQxU0RC8InTnJFKhZEFa3ToCnzlZgKtb4Rauk3KvaflI/NCrhIZwrpNOE0T2d6MJplXtcmnaloYeFk7sXVUZTET1oybvLDmqBLMZ18txbQPYRQ3GaNjsL1sLbTlSbjlGcdZelKnzg5Nw8PTvbMxPK4Lp4l5WM7XEq9zkKd37n+3oI6h9B1Q13JbxRS/U63GFHctCy/h4F8LEGfB/dcSG7rTP5ue6kV8ynoQ+FskvtLnJ6NOT0bczCeMhqnxeTpCxVuH1dENz5dgCvqwalLub2mUOHeNj19u6KokrgrD4VeFjtl5q/PPrzCsw+vcDkPD258SirPziEWIVR+97r67m/vU3x3FHOtFDv2jdYgipsiFkexmu9pY9caAUJT+BjUXGPnlcjOKzrLfCUAEpTRKC28jou8EsYCyCzEnBV6CXWinHyuXUhx7emzuNwlLqYNnz4Dc2/Z1cay6ebJNQCOz8aZl3XBSUhlgvbt18ZoWsNcGK2cahB4f1ogipsith8yU3TR3Nz7gqiT2Fv0dKYbJCKR2BRUMzl0vnoWQFPp71ndEGvJoNHbONCm641tm8BW4jg5q1wjbjKRTlOEg8mU4+QgqzDKdS0hHYRHjCRY/RvOwem/ooto0jUEdBMhb2tXrOWawBZllp+EjFKOJlOuz7J7Pz0bZ+HBXc7A1UlYx76wKPObaEeVOomSFuEaUqM3ydtI3ZdlR7FDgvxIJNoi5DfRlP/BV7euftv2nDDg3nZMuHAa6jLXJVQ+X1d8XcdF2KIQpRSkTmaZrwRQBPubi5t0nvca5gTDmXRL/hI2VXAH7Zzvpbh2RVBQJRTW+KrKaiw/EM0IIZm46cbDYx4/uRGA6TTJTV/nxLiii2iDQL3WugXbRWAJqXYb2+hDKELXDCx8h9Kbk2j0uN4EIpGIKKEp50QUN1lwxUlNxxH7gaiT2HF04QxWPY4ljyEU1A/K5zNvabyhOczQfAH+7ONSkLo8sqnxKj6djTgYzYp0ptPEUlaT9SszalfuSFmJbkROxTjdR9dA/BrFSXZTjuirYvWU/9UEK8xJNrrxZMYth9d54uQcQG7+Oh+8pHNuoriXpmP3VtLq+boov0tHFy5haN7ay+
p6h9ZRkUisC2sI89E1p0Qj12CLn+q+Ny3rCFxb/iz8tRSRTs+mI248OCkioc7yTHW2dZNPJ1EeXPmc/agUh1C4RCOko7Dbs4mS25/nuoo1k70/0kK0NhqlPO/oGT51+ebs0cwyH4lg/ghPdxXYdRvmxa5EoTEGVGUsPSbmgYmLloJNLzKXiB1SrwwUmrJITBhRrXFwapL79iwr9V+eWDQxikgqk5l9jWvrLylZELvcFFZEGY9mjEezTF7v5pew/Asqfg2uLsB3ex7OoHTcRCACbQb9OOx2hTxGE9aW3ePhQWb+ev1szPXC/FXKTnT2M3UJb77vvhed/BoWmZMD71zte9qq3cW+k6Eh5JNTtxH9JCJ2AeIRQ0X4oYnOfUUi9gtRJ7Gl6CPyqW0vhWQ7ZszasOJtUSPXVmsla8Q9cwsdQVRJc53EbJagKhxMsnSm15MUTUZgJSFSkfl35pP3WzqJgrOw1UmWKM2ro/DdnsN9qLXvFUW5HIRVPhc3ZRyE0b9cODjlqbPznJxOsvKZifoqxVh9SYeC1k5LkNS0DRM+CKRL5jZWIeoSljvPbBj7RSQcaJoiq5zkl6l8Dpi8VvNFNCjeQ+Waz6p2LKdSP+X5o85voiJTz3USRr49m444mY05GGVEQkZKOpoTMdcvQpxJ2fXILmiAo8i2TWCbRDFeAuEQpeCxfc5cZ71WJl4TwMXDYz59fLHwGZFZJm6qiJhqxtuolFas39HaL9VpeiDl8lpR0jLn7RUrm3XZRCbUz+7QiP0mEm2guZxUZAAcwyryVYSUuHaZqaKQjubnS9ZNznWFHsK0oxS+AOk0y3l94eAUyJIQzSw/CXWtnRwiUSEi7j14jlt9tA116pznihhN1vjV8o0wSvrnHF3h0tWbCp+RLGZT+VmV8kU4x43+E13PL4oBOcbpkHQakUisBiLySeAyMAOmqvpKp1yAHwTeAFwDvkVVP7zuce4zkllOKCKakbCa3M8Rg0fkJFaLr1DVzwXKXg+8LN9eDfxw/nd4WLb+oydKAQFd2Xyq0OQYp3Wshl2P0mpXnWPjKwGgacLp2Zib83DZRbA/E5sikdzaybAK4Mr9S0Py3Fope53LYbhwJW92n3h0EJ7+y391Lm7KLZtMutIXHj3Jx554buYfATBroYNoI3pqu6IPRIAdVMC+bTeJnScR2gkMQIbSCW8E3q0ZfgW4WUSev7TWu8grN/VRhcImN8mp6VZe177PxNVbvxAz5fVyc0+siLAAB5Np2WRUtBDhFD4HtqmgLf/Hrx8whKM0x0pgc4ZbIQiB/tzx2NFsjcKaJPOROJpMOZpMmSSzIvKrMX8ldQbgeZ5uGPayYYDn+S8Ar47DxbLCd/dBl77XpIOwoczfjWgCu3wo8HMiosC/VNV7nPLbgU9Zx5fyc4+5DYnI3cDdAEdcWM1odwyuLLzida2EnfWslbtt6WTazZSzOVeTZhZOs9y57nA8JRmnzEwsJxHEk/Pa1lFUHOwsvYNPR2FPKUHGwuUoPITH7Lt5IyqKa0snMRqn3Hh4DMCjxzdzkgf1K56N+9ytv+IdaD0qnImnPGLF6MdIRBPYFvhyVX1URJ4DfEBEfltVH7TKW6hX85MZgbkH4Kbk1vhZLAkxfWl7yEgLYhARsa0YFJFQ1Ufzv4+LyE8CrwJsInEJeJF1/ELg0fWNsAXWFPvJm5q0Llx4Q9TYUIiOUJIhUc3SmNqFtk7CvdZwE1BEhD2eZr4Ch5NpYeEEc5GNOCtz1y/B5hxUyhxCaQUu5Rvwrhh84iVbJ0GVU6iIuErjz/aNZdNtR1mSoc8c38jMRH6FIoy6vfL3ipKsvz6/hlKcLPfWmpZIdRIZj1hnbfqLIelJOmKXFNeD0UmIyAURudHsA18NPORUux/43yXDl5GxZxVRU0QO7wfuEQW1+BaLCT8hKDMv913eTNDAIo7TLDODPZ2NmCSzQnktI83zS1BKb+qV4TpbRWfRc/PpHHz9lLbEql/oV7RQWr
/o/JO86PyTPFnktJYisJ+r42mlX7Lqtf39Wv3uWzwxDwYChfK6yzZQDImTeC7wk5mVK2Pg36rqz4rIWwBU9Z3AA2Tmrw+TmcB+64bGurfY6rAcDkdR4TCkYcJtuN41eY3ipv3FLnESgyESqvoJ4Es9599p7Svw7Wsb1EDMWINoSjzUhB4KUfsyWyRSEi/Z3EN+vhS+OgVNE07OstdvfD5lMp5xXJjAZhxEvmCYh9wOiHsKUZNzP96wHL6J3nNN+cZrRE2O6AujcM+9rFMSjiZnPGfyDABXTw8yTmKWX24sv2yuwMMl1CmiO2HR64fu9zEU89kBTxtdMRgisTWo0zl09Yg2KUwBknz+MgHhko5t5SjyF/ROg2qNrW455BMvWfveMB2pGWMmbpqadKaapTM1oStmySizaDIpPz3y/0oocYsQuDqJCofg++lq2i+dw19WKrd0Esk45cLBKZ87uwGA49PJ3PQV5t7WIR2Eu+8cB0NyLDhXdg4RXhrD/L0uUpeayburOKuu/oBFY1vLbXuwQ7fSANtlf5M23vYYllWn9kPCWpXashXPNSH/C/uypGY168rTPXoJs1LWXHk9myaczMYcjTPl9Wic6SbU5zfh6gkMPLoD33GtjXrDtY16iVJocC0U1qNxyq3nrvGp68/iU9efxdl0lImgbG7Lp7+xn6EHFZ2CR5eRna//nYt3InS9fc0y39lNwr2XZYcp97wvrbYVINf1/rqIfE3fNvaHSEQsBQutMHcchnswMF7WER5smlCsGD2d6RohIveKyOMi8pBz/k4R+biIPCwib7OK/h7wY4vcSxQ3GawieF6oH2jf17rSqdpy++A4XAH+/K8tXpK8umvWaQL8mQQ7JjTFydmYi0fHjEYZBTobacFBAGgimTjOEChXJwHl9KXO8Cqhwt2K5nRIz+B+xG7/peB+WnATR5MzAF547kl+6+nnATA7G809z6Ewf62E5bC7s1f5bbmFUnl98VLQZ9LfZVPa1X2v9wFvB94970pGwDuA15G5CXxQRO4HXgD8FnC0SIeRSCwLKyYyXr8It9zuv+14jKy4KYYTVGThwU9PnXJ7AoRMXj2TLHUncDIdM07SLDwHcJJM0FEyvx/jM2Hfnq2TMETAHZBFREr91xEHz/UVnYNbXxxCYUV9PZqc8eKjz/Orj784u/U8O1/lmYT0DJ6H3EYcVUEXhW6XMOEtyhfGlnEdSnvOwMFtIvIh6/geN+qEqj4oIi92rnsV8HBu/IOIvJcshNENwAXgDuC6iDygPULlRiKxTixqLbWANVNBZAITZbU+uOEsgJKfRBE8UKUkS/cSB3WOjZ8AWc7rVIXDcUYkro6VqVmRYwiEZhwFZMpe+x5kzk2YbsUeSE5Aaj9ch1OoxGpy+zOEgTn3oJZ1llFYAzw9O8e1IslQIKe188zCz859yM5tuLqKEEpEaYFJeFHx41CskZaNfp/559zI1y3hC1f0alV9K4CIfEvedq9fK+okIjoh6iTa49Zz1zY9hIhNoIc+YsEAfz6SNF+rqd6nqv+h7+1ETqILFuEENIU1Jy4KhdqoKy+4ASPKsfQApTVfzQLQDRVeWDaZtoxOwkpnejYbFYre0XjGbDTKPK8BZrlewphVJtm5ip+Evfqn42LOJ8pydRJ2g1YAQnJOp+B8LHHTreeu8fvXbuM09wlhZkV+Zb7yd0OD13IDTlnp2VrnvLfZsHDfSPC/RSyLhsqJ9Jsm+gb4W2m4oshJhDBUOWidKeKCQ24zgYSiirpiEQKTn31eZpLHcRpxPJ1wOJpyOJpmCmwr1HYh2rF0A675qs+ctWQ14k78nnOh6319zMNv6DyceR5SREYph5MpX3D+Cb7g/BM8fnwD02nCdJoU5q9zZX712fh8TuqefdNvthBC1w/BjDyEAYxtzZzEB4GXichLROQAeBNZCKOlIHISi8LEqw/lyl6VdVIfRbml03C5iCauo+g2sZTSwQnEc2xPcians5XO9Hg65l
m5eOZgMuVkPGGW5IPNFddF0LxUKit9+6/LRRSqiYA+JqRzcBXXIUU2dg4JIMnzR7z06HEAfu2zX1hYcmWWXfOLK8TVHbQLW1fREm7d0nGfRfwqJuA2bW4gN0RvrIiTEJH3AK8hU3JfAr5bVd8lIm8F3k/2hd+rqh/tNQIPIpGI6IStjt20ZlxOj7gxOd70MCI2gJ7WTTeJyD3A+1T1fd52Ve8KnH+ALLbd0hGJxK7BTTe6CCdTE5qjYgJrcQpmJa/WqteuX4oIC3k608zCCeBwPOOKiQgL86iqJpaTUCQiMrcqWHoUqtxEK+smS+zk85MoHRvvanND1njHk1mRZOhyepTFa5qZdKVStm6ynk/xHCmX+XQQFRTX91zlu9cNVZS0DXDFmu0Rkw5FLBdNvhOtUBFzWJprV2YTynfdIF4qlNj2RJiHDAdgJkzzsOGQ+RaMrEx1hZLYKIaTLJeFLT5SKJvAaploNELKREQ9RKPsPKdlvwmjQyHL2f2cc1f4navPBeDkdJwRB5jnjvDoasDR4dio1Q146lYm/cD1HTGoXNgDhvYTLzdyEpvA/hGJnhZKqinS1TppgFFkC9+GnjoSTaQyoRXTRo0cveRVXOS8zo7TlFISoouHx0zGM86SrEI6SiCRwjjMTNqldKZQVUoH6J13iDUKbVdHUSjSCz8JSg50hwdTvuj85/jIUy8EYDodFYEbk5yDqnijO8/Lu08NEemIwU72PayVepr/D9EyapCcRJQuR3SCDO/DGiyeOjm36SFEbArSYxso9p5IaJqi22Q14cKS7duoOL155dgLiCSsuuKIPEImsSVTTytmkcyEdJpw/WzC9bMJY8lMSJOxkoxzWX9isSO5Oaz6Vvi2PsHaGs0PnbpePYRT12TOY6RIkoUFN57Wf+Tcp4GMUJh0pTKz4jSZ3009z8bhvLJz2uht3fQ7Zceei11uxfc5BN6zbcK6vnOl+X1bsgnsSrF/4qZlYl1BAdeBVL3xm0qJghLmoTEgKApxxU+uc11BQKwgdzoTzqajvCvh3OSMK+MsM88sGZWd10xqUMuM1BYpGf2EPTxXie2edMVNPp2EHYbDjtVEojBKC3HTzYfXuXR6C1dPD7L7mSYlnYSd07piHtxEADzlpYROPuwS97dCMZku8zntkOJ67zmJiG7Ya3GTq69vCA1+crq/a7DB6jzWgT6ipgGvNff3LQ6hyTmuLfpwGTq3VdJUumWn8/W3AKdTBPgT5xz5eU/9EgfhKF9t8YmxPrLDdJDOOYnrZxOOxmdF6HAZpYXyGpinMrUV15ai2vfR+biKirLa3vdxFqbMcDLkXMVISXJLrMPJlNvPP83vXH0u0/x+sMJwuCE3SqI5e7B1im3Kim/3/EJe1m2TUdVd72alW8YY+mCDYuSefhKDROQkNg03vWOofAkoZ6Yz5wKTVmNb9hgDdfCIeQqdhL0vpUx1x9MJR6MpB5NsE+ODkG9zcU82WHV1FFT1DD79RF25acMWNRkdxDwsR7YZfcTRZMrRZMofu+ERPn39RiC3bMrjNWVb+TlURHahB+6K6xpQ+7uWzi9pQq57T1Nd6nu8FejHSUSdRISDLiv9FnUrOSVWAFsnUdJBOJOYq4NwkxCpzUnkhMKErjiejklEOZeLb66NU1KjvIbsOSRzHYFotj/nwrAtYLM6vptp0klYJq4limf6NuPJzV+NA92zx5d55uSImQnFYeWPMASyxAm4E7jnuBjamufZVkSkK6exB+jJSUSdRMQWwk1As886iQY859yV0rHOVkuwIwaMqJOICK7sV5hutI+HtRv6u6mut54JzxESK3l0FOXrKa+Ii+O5jkFnUkyqp2cjTtMR5yZZ0p7x+IhpMiI1eqJE8xhS5vqcg8rHbiyb7FvpopNww3JkznPqJBnKdBEASR71FTJC8ZGrX8C100mWYIg0s2YqUrdK4FnUPcAaDqK43l+hLefRl0NZqYI6GO144AsVafe9bQsikWiCWTkv23
O6r1LZiGhCGerM9xNq2rq+MfKrO9MW5wMX2iIVa+JTLBGUp7ywwc8n0ulsxPXphBsnJ0AeFXY0mcdyGgk6mz8+E5nW9pC2hyPOfRRTjKvctq+3V3dmP7GOE4rxZPqIM15+w2cA+OgzL+DEyUQXejaNimzffFhDFFoTmyadbtM8vIhOeFWT/JC43B0iElHctG6kupyXedkfmttcoP1SBNiS3qFmPE6RHZ5jvknBTcymCcdnE8ZJyjhJORzPMh8Eo7w2q3o7n4OlUC7pEHyrOg97X3GoS+zNVZRrKd/FaJxyw+EJf+L87/Mnzv8+nzs+z2yWzLmjXGFtNrCIgedZuvAZHNRiRXGbgu33wbK+g6FihxTXgyESIvIiEfnPIvIxEfmoiPxNT53XiMjTIvKRfPsHmxjrPiOmL+2AdIeWkxGd0NPj+mlVvXtIwf1gWOKmKfB3VPXDInIj8Osi8gFV/S2n3i+p6tdsYHzNWLcHdl1/6spYWo6tSVxVc03Jj8LmMkz3RvwUsugpkhAlHJ+Ni9Dh5yenXB4fMs2TEKUjQUZaBM2TQhyUNah5UqKQTsILi7uYm75qqayUZMgK6DcZz7jt6BoPnzwPgCsnh4Vlk+Ei7HSlFXNWtcbqPJveAf261O8SJnzd+oCh6x9C2KH1wWCIhKo+BjyW718WkY8BtwMukVigEyfP9Non9fXnuQ6hMJdtqWgv4g1BNmG6FVrIwiv5JQqxkxTKa8j+np6NuZZHhb0wOeVgMuV0nMv5p0bMJPN7sRTrYkROZmI2/4XGaBEHmIur1DaBtRTXJJnfRpITifMHp7zkwuf4zSu3A3B8NiHN4zXZ92uek++4Fp7yksiuowhqUN7Qi+S37tWfSxCX33+N5HArMYwZy4GIvBj448Cveor/lIj8hoj8jIj8sZo27haRD4nIh061ZXawIX08LVEK/OaWtQnyZ8OREVcCzQE6CpfZ/ZRWzA7HENwsBfZslnDt7IBrZwccjGa5XiLbKjoJo1TON020qleokQWrUxeZt2GU1CrMnflGc12E0Ue85uLH+PT1i3z6+kVOp6O5l7VRyJvNd9/Os6kER/T8JqHzlbImub9THBIn1r1ng8YmxtxHHzFgzmMwnISBiNwA/L/A31LVZ5ziDwNfqKpXROQNwE8BL/O1o6r3APcA3JTcuoVv9zAhszmhiGiACa0SsX8Y8KTfFYPiJERkQkYg/o2q/oRbrqrPqOqVfP8BYCIit611kH2tMla9otmgQrnOO9grU7dWzMWWUoSuMBZOJ9MxJ9NsHXPh4ITxeMZ4PCMZp3n4cObez6IlTiBrlLkuweYWHO7BrmNzERXrppxzMaKmw8mUw8mU246u8dnpRZ46PsdTx+eYno3mjnTGssle6buvQt2r4YjoNoJVv1t9v43ds44apHXTYDgJERHgXcDHVPUHAnWeB3xGVVVEXkU2TXx+ZYPapVDgPijFisf1mQj5UIS4iLoAf25YDrXOAXPFrjnOTUdPzrLX89p0wg2TU56cZKHDT8cp6XTuzCZppqy2/Say82YQUp6HfSaxMI/qavQR1rGtk5Bc1HSUO9C99IbP8quXv4hruW+E0UfYYUcqoTW0vO+G6VgkFIcv6F+BnZtXh4kYlmM1+HLgm4CvtExc3yAibxGRt+R1vhF4SER+A/gh4E2qAxGUDiFxUYsInk1y5bBnb0YFZFo+rm3b5SICE2PBVdj5JdKkiAp7NddLnJucATAap8gonTvXWSv+8nHevPFpMB+uKwc2vg82kbHyVxSJhSznufF4VsRqev3F3+SRazdzmhO1wjfCIoKFf4RJ3erjsDzH3mdqP3tz7KCJsFTbDVwwhM+ry7c1hPFC1EmsAqr6yzQ8KlV9O/D29Yxoh+BwRE3hPTp5YnsmtVLbjZNg+ZSkglpnDKEwODc54/rJZF5/pPOEPmQTuqT2sdNngIsIHbvPoSBMOQyhMDCEAsgIhaWTcJXCC4uPGglBU3lLQjEEDGER1g
ExLEdEe6wqrMe6MJfYNNexUJn8rXpG9GSH6cgsf/ITKZDrJSDLLwGZvwTAlfEh09GYWT5h60hz7sSSnYlYYqe867pZ0zJ5LVk3QcGZ2JzE4cGU552/DMCnpzfx+ePz5fwRM+b7dtwrRy9hVCeN4qWm+XvA83sr7JJ+YeCcQVdEIjFQiOo88RC01o2Y1aEbMjzEHZQCAFZW/VWdTNCyydVB2G07MnbX/NNWXkPGTaSzeehw4zNx8SCL5fTM5IiT8Zh0mo9tJEV6VXMzkmjh8VzEdgrNQ7biGgrRlBbiLM1zWmQDHOV5rO+44TEAfvmZP8K104OCqOlMSFIpiZgqyn3f87CeWW89QkDkV0sfvUS+46RtEfitNJVdNnaISAxJJxGxBZBZc519wV03/Xrp2BC1AnGu3Fv0DMsxSEQiMSTUZfBadmavBUS8QS9fe4XcorxiImtZORkLJ50Jp9NRprxOZhwkM84dnOWmsEoy1rmD2wgYMXeeK4LwgTZsRUpSEygwv86E4CA3ezWmrzcfXgcyQvGH157FcZ7PuvC0NpyRwzVUOJom0VJAlLcQlineb3pfd0mM1AVRcb2fUE0RN6xGXViLJWeeA0uc1PRW2crlPnCuL4X+tkT/pm5pKnAIhblWA8cuwZBU0HQuLppNE66fTpiey579jZMTrhwcFjqAdCqQjKxMcfnzKcJy5DfjKttt/bqlYC4RDShETSZW09FkyosuPMkvXnspwFwfYVlnzfNH+EVdwTAd7jN06tsWU6V2aq7tjBbXdxIpLauup0zXHdajLQYw6YvIHwX+JnAb8Auq+sN92tlPIuFRJmuaIskGGKu+SYpSwjkl6vqqCwjoKSvpLLDk/r56pi7UTzSeCVCtfBLkBMI4pKV56HATy+ni5CSzchplx7OxMku1SoTMl5pSmk3de8pOWsdOUiFJMi5iMs5kbTceHvPlF3+XX3r65QC5PmLuQDfnIMxDqxKF4POpIygBNMd+6jbxNqLPvLxBPYW6llGr5m5WKD4SkXuBrwEeV9Uvsc7fCfwg2azwI6r6far6MeAtkq1s/1XfPqO4KaITYqjw9ojPao+xOnHTfcCdpa5ERsA7gNcDdwB3icgdednXAb8M/ELfW4lEYpvRtDprdJzzlC8hGYzfAcz6HsyxVvd9YTrSWcLpdMTl00Munx6SSMoNByccHkw5PJgyGs8K5zop9BOKnZSoJFpyV3qFDiLf8jbE2kbjlPMHp5w/OOV55y/ztRce4ZFrN/HItZs4Ps0srWRmbbZlk6WbqNyv9XwW0T9URFaB37FRTLRoeUTBRK5Cca2qDwJPOKdfBTysqp9Q1VPgvcAb8/r3q+r/AvyVvvezn+KmNkhTWET81EbHsIywH13FVR55fAhBp7piUvPJbcrX211628DzN5fj2/GPZtOk8Jc4nk24ODnhmckRACenmc/ELDXiIckebaEUoRwRuqJUoaSDKHwi8mMjarpwkPlp3HHDY7zjyS/lqZNzAEynIzRNykmG7JSl7j3WzLNz3YOH0AKFt3WgjdZK7a5z/bKIQ5t2Fu1rCI53/T7r20TkQ9bxPXmg0ibcDnzKOr4EvFpEXgN8A3AIPNBrROwJkVBAU0W21aGtBUJe1GZF2+jfIDVEoag7z+Hgbcf6tm1nusJxzpHL2zoEO7ZTpqOQYiWsuc+E8Wa+cnbA885d5obcb+L62YRZmhSEIFXJ00fkOgJyYmxPtPk9F3/Fis2UEwiTL2I8mXH+4JRnn7sKwHfe+jt80x/8b1w9PQBgNh1lXt+5ebDkITkqegh73yEa4jy70HOtoG2YlSa9Rs28uut+D7oKHUW/qeaMLNL1+zpmp/Mu5VT1F4Ff7DUSC3tBJCKWB0lrCEVEGcogrFwi1oz+iuu+Af4uAS+yjl8IPNprBB5EIrEIFhEXpbr8UB19LJ4CqHAVXSa8wOq15IHtHBcWTlCE6ChiMM3megmAK6eHTI+uFh7YV04POZuOSE1mOxWUZG
7xKpZewHuzmU6isG629BAAB7mo6aU3fBaA9155Fp++diMnhW+EFF7W2QnmoieHc6iN0Frz7GrR1OYiWIXkZpGV+7ZwNf0+7ZtE5B66cxIfBF4mIi8BHgHeBPzlXiPwIBKJtljFpL5OdAzyV7oOas1jK34S+Tl7343l5IaicMVNklrzQZ7VzY7ldOXskFsOrwFww8EJJ9Mxs1keFkMl80vM7TIEQe0BuDoJycxcKcRNaRHpFeDcwRnPPneVN9/yXwH4/s98FZdPjgo/DU3nDnTAPH+EFSq8LixHGwc7b7iO0vNsP3luVWC/Ntgdh71GTkJE3gO8hkx/cQn4blV9l4i8FXg/2TLxXlX96LIGFYnEstGkSDbL5VXluu7pd1FwDk3fm+QhsNOynqPiJ1HaoUwU8mP1lNnHhpPQmRZ6CYDTs8zS6eaDzOv55oNjrk8nTHMikebmIuk0zZsTxAmyNx80hU6iCOA3UsaTGYcHWVz0Gw+PeekNn+XR6Q0APHLtZo7PxgXRYpY57onFPYTu3XQrdhlOXZy67rk2ea1DOo62WDXhaOMEt+3EqzsaOQlVvStw/gEWUE7XIRKJiG7Y9URMy0TUSewtdinpUCORyL31fhF4SlW/zjp/HvjvwH8C/jHw94GvAJ4PPAb8O+Afqer15Q87og8qeoYmHYYzyXk9lQP9YImzKqImu/38b8naKV+VG6sTSSXTS+Qr9+l0xNXTA65MDwG4+eAaNx4ccJqnOp2lSd52HrZjli9cQzKxXNyU5JzEaDIr9BAAzzl3hW991n/lnZ//MwA8cXyek7NxMR6d5WE4ivFLSbxUsWbyDKNRVxFAUziPCpad0yLCjx1aHDQSCVVNReRbgN8UkW9T1Xvzon+cX/8dwJ8m+yL/OvC7wB8F7gFuBQZHGTeGNa7CRbUSLnyx9sgD37Vv0/aTqOSP0HJ56dgtVwq9BGQmp8dnE545zYjExckxtxxe49gQCZWypEKSLFhgESbDlntlBMKkJAWYjLMggiaI3x+94dN8enYDn7x6KwBXTVhwE58pd54rpSt1w4Pbz6SGaGTipW4z97In+rWavO6qSGm9iuuVopW4SVU/ISLfAfwzEfkF4IvJCMJrVPUq8LP5ZvAJEfk/ge8hEolhYUFCtc0msMk4LYXzdo8n41kpE97t55/mkWs3FcefPz7PrUfXimOdSTlTXUo5hkEUN+0n1m8Cu1K01kmo6jtF5OuBfw28GPiBPOVoCBeBJxcbXsRCCE1SIUJhn081S+bT1L65zqcYduvlcJ3rfFFiJbV0/JrtzwP+5eHDc07iyuEhtx5c5ZajbOV/lo5QFYrEojkHlKaSOchZX3AyTkkSJRnNuYijg7MiLent55/m7936EG/7zKuAjFCcnLqiJms1byybnPtzHeQqznOhBXXBebhWATWwrX265q7e0YX92rFDi4Ouiuu3AL+Xb38/VElEvoBMDPV/9R/amuBaA7W1DlqDSWyRnQ7mYbCLwuX0HcpkNy8nL7dOGqJQB89kEwzT4YpgKE+smkqW7MhYO6VJHqYje32fOjni4uQ6zzrIVvmns9FctAScnMFslsw97nPiKXkHiSjjUcrBJLdmOjjhOeeu8GUXfw+An7p6K5+8eitXTjKiNJ2O5nkjoBCFuWG8vffWRvdQRzSK8nKF5kixS579C6KVU/eCoK+ByrQ1ed2gqW9PTmJ7xU0Wvg24TubR90XAx9wKIvJcMnvdDwD/dNEBLh2ars78tC7eU1ei0iAWKghIk/Nc8WEsgagowVDhofqhfbHO+ZzrMHoIchGXkpmaQqHEPjnNYjldnRzy1Ol5nn/uaQBuObzGVJOCCIgo01nCdDYq2hfJiANAkqQcjOaK6lvPXePlN3yGu258BIC/89iX8/nj80VioUIfYeeMSJnPDI5Oxb5/ce699jmtE4v4WSzQFlA/6a8yDtMqc1HskLip9ScvIn8SeBvwjWQE4L48RK1d53nAfwYeAr5JdVe1Uv
uLGP46jMqziW///qIS1rjFNlC04iRE5Ah4N3Cfqv6MiPx34KPA3wW+N6/zfDIC8VHgLlWdrmbIa0bfpEB94UmIVIu+43P1FUYEY4mXGgP+5fBZ7JReeZ+oxZHZuzoK2wPbDduheZgO42F9/Wyci5wyPcLNk2ukKiTMxUmn0zGzUeZBPcu5DFM+HqUcjc8KncZLLnyeu276IP/syVcA8Kmrz+Lq6YGVCS8pdBGl+4ci/If57tV3303Phvm9N84hNdc19tkF5j1ru0jYhBf0QNakQ89Z3RVtxU3fCxwBfxtAVT8tIt8O/KiIvI8svvkvkgWV+ltkLuPm2s+q6myJY95dLGJ51CSecjykF23feF2Dv13jKwHkHs/W3OT5lkse22oRDjP2dD4pkyrM5mE6Ts/GXD095ImT8wCcG51y2+GVQpw0TlKujSac5eKmNKeOo7zBo/GUi5MTXnDuKQBed/F/8GvHX8BDl28H4MmTc1w/mWTRXiHTP+R5I4pjyw/Dp7Sui+xaVmJr7aLSy8ktODk2coeLtD+QiXvt2CedhIj8GeBvAK9V1cvmvKq+V0S+gSxT0juBl+XbHzpNvAT4ZJvB+FLwOeWSl78BuAZ8i6p+uE3bG0Pblf4SfChax2PqWR866iSKfih8JWC+7/pNuKvqIsy5ylwvAVlq07TsXHd8Nubp0yy/xNHoBl5w7imefXgFgMNkyuXpIcezTIeR5su8gyRbu9wwOea5h5d51YVMUX2sB/yXZ17G49ezMBxXTg5zZbUJE+I6zwEqYULg7Fcsm6CWMLhYRDLR3QdjCZP8OnJIDA07pJNo40z3YKieqv5F6/BHFhmIlYLvdWShbz8oIver6m9Z1V7PnBi9Gvjh/G/EmiBpP0IREbFP2Edx0zpQpOADEBGTgs8mEm8E3p0rxH9FRG4Wkeer6mPrH66DbYhp1GgxZb3crqxcAvXstm3rnkrfzvV2k1peSBYBBI1HdB4Rdh7wj0zUMzIe2EkucsqSAD2RnONgNOX5R88A8OyDy1ycHHMyy173M01IRDk/OgPgOZNnePnRYxynGafxS5dfyiPXbubp4yzz3MnpmJnJPkfet+NRLSlzeb1HvFQbdqPueVliu+D1bhsBUZYX27CCX9cYl2ntNPCpoAuGRCS8Kfha1LmdLFbUWqD5iyTLMqNdtmLcvOdt9Q8d6gdjN5VmfOucS2g6TYyO4toKHW50FHPnuoTpKOHYmMQmKU+MZoVi+rlHl7k4us4k94MYoUxkxk2jzK/ixtF1Hjl7Fh+/9jwA/vDaLTx5co7j3A9jNksyUVNJB1E2gbXHW1HkW6g1f3Wfm+/ZuO21nT+7zn/LtmJbwUSvqzRhXQTCfukk1givb3CPOllFkbvJQ4IccX6xkW0jAlxD2yB9gNfrWt2wE63H49l35fTWsctZGL0EAEnGTRRjy/UTZ6OM0l0/GzNKjua3gXAyGXPzJCMK50cnTIAnZhcA+P2TZ/OZk4t89iTTQXz++DxXTg45yYnE9CzjIgxRKohCwTlU9RHiIxq+ezf7bZ9lU72WVkW1xGUbuIuho5/iaDt1EmtEmxR8rdP05QnE7wG4mNwa3/olIeokOsDHYUXsBaJOYjVok4LvfuCtub7i1WSUd/P6iBCa9BQrDu3RKxJsQPxV0UPU3FsRi8l3Hvy+Ax4LoLk1E6VYTmhucpobVmsipEnCbJpdcCKT0tBSFY5nE56ZZtzFYTIlkZRpbrt7dXrAlbPDwjrq6ulBroeYx2eiYvLqWjeVx1/Z76AzaPStahOSoyc3sPKwGk2cTuRiBofBEAlVnfpS8InIW/Lyd5JlXnoD8DCZCey3LnUMaYqEwmrsGnzpTNsSFM8KueTnINY505fKXAduFNeu6MsRxdiK4ZJznYm2aufEtkxiZ4kW+achIxKnsxHXxpnOYiwpiSjTXBF9PBtzPJ0UocaPTye5HsIQCcd5LldSl9KTWorsWp2Ee2/FeQ2cDxEBz7kAdi5daQ
foKsN67AkGQyQAbwq+nDiYfQW+fWkdNq3k62IxLYpVxpAq9ZP/7chQGM7Bnch9bHRrRz1Xz2APzeU81FPfiuWU+UrkZbOMmzBEbzbNPKpPz+axmqazhONpRiRGSZ7WNL+Zs9mI09moCBN+djZiejaqRHotFNWFDmV+XLo/30TvHNv32UZ83eTw5vO96IR10o1VKpybiMK6PMGjuCliX9E2VEdEfFb7jF363SORWBTrju3UF+qREbmnmhStzr3WmsTWWewELHxKK3JbJ6Hl8iJznfHITnPfhWRuEjuzxqQqpGnCKF9lmnAdxvM6TYWz6aiIBTWbjjI9hAnTnqdPlZJ4qyxusu83aNkUunf32dQsdqv+EU0y/objNm0MBdsyTqGvdVM0gd00NNV5ToE+6KJoXoVznfuRbIAwmax0VUU2YYIZUt7aE6Y4xy7RsKyqMh8FLWIYaz5pFyaqQh5Cw4ibJAv4lxMBE0LciJtSzcJ8zIoAflLoIcA4z5XzRbiKa3HHW0ccfM8hNJm3caRbN0wOCbO/ivbbYkHxka5K/LRDYTn2REsbEULnWD6biO45FKTlL7/W6zmiHtvCFfSF9NgGir3iJAaBjuKpInNcKhWSXspc18ZjOqBgDgb6c1e8ltVShYvogFJEWHNct7q2RTfkdW1FsmgWqiN/piqAJKRTo6BOstSleXY/l5NQhdTiHIzjnBTiJsqiJke0VHoG1tiL5+RRTnfiBjycRjCsR6Dd0GKgdX6QusxzaU/T2V0mFAOe9LsiEokho4vIakm6kU4e2U3tlCbOMiEqmcwyt3Aqju1ys28mqqR8nJnDlq2dENCcqqpmOaxnBVGx2s071FSK2EwYfYTtF2ETBmPu6ugkfOatPssmn/mrmz9iUSxVJNVlMt/lib81lvQjDgSRSAwVPR3tCs4jtJQJEZMUPzdixEujPCeEPaaAIhuX0yjGVr5s3k75b8FZ2JOuTXRSMp8Ok/c7zW11zaQtknEWpv0EUk3mnJDzAWsq2YCLAIJSBPEDCvPXUhgOxy/CVVAHFdbOcWguCYYQDznSdRUDNkzmsojOYZ9FkjB48VFXRCIR0QmSaplQREREVBBNYCMy+Fb7xpnH54TXhjtYlkltiDNoiyZRl6WjgLC+o/b6gM6jYibqqVsS9wiFyAlARRHmHt5oLqIqHAKlOG/+qhvV1VgwMecKSh7XPh2Ke2zuqa5uC3TOne2u/jcRAbYtB1LHdfgc47aFS4nipoitQEN8pWWtdoJtuZO8O/lb9WzltavIdsVP8/Ys5YIROZGZrGYakDkxsN1EfDqJIh4UzHUShSjHpCe1iIszHlcH0+SF3frZWM9gWdih+Wu4GAAnISJ/HvhzwHOAd6jqz/VpJ5rAGgwhvs2GlX6uhYov1ENmrdMwTtXG1a6R45f6YP5tlcq1fFxsqbtZgfdy3wljBWV8HcjzUKh1LKbcbDlRMO2Z9qkZi++eCutG5zk2TtJqnmFNRau8LiRHZ6ujZb+Dm/qO7H6bnuUKINJ9a9eu3Csij4vIQ875O0Xk4yLysIi8DUBVf0pV/yrwLcBf6nsvkUhEdEJchUZENMCsDLpu7XAfcGepu3nq59cDdwB3icgdVpX/Iy/vhShuWhc2lN60sHby9b3gmCrmsuY979ik10/C/mv27ePU6kclFznl7UlWrraSQ6zEQO7ATduWTqJq4mqH5fCM0x2fs9+buFY4g57tFO35G1h5iPA67KLZbL/P6jYR+ZB1fE+eF6eAqj4oIi92rvOmfhaRjwHfB/yMqn6414jYVyJRGNR3YKRWPcmvvH38L647sddN9G3KLFTDVMwnZ3UnVbcNj4zf1iHYfhUlgkFGECWZEw0l11cUdZwbcIiA5ATDDsNR8YOwxlcx0fURDee+5m1pRWTlvSaEUFnds+3S/jKwaiLQWaS2hvDh/T7lz6nqK3tcF0r9/DeA15LFhPpiO6J2F+wnkVgVNsQttIYvyF/jNc6hiZ+kNgdRve
9OinFn4jQhyu2Fv51PovDDKE3KMpfR51yEFH4UgEhwPOL0Xyiq3SRHjuw/pAvwhuvoMI/5CYU2lLdvP9TuIDH08S0XfQP8eZdtqvpDwA8tOqhIJCI6IaYvbY9lWpBFbBt6Ebe+Af5ap3Xug/i5bxqpDsL2u86yqXtbnpNa3UL6OiNKqngyu9cbq6Zif27dVCQJskJqVK2hHFGS8ZVwxlfoKBo217LJ9yyC7XieYd/fYGELp1VgIO/52iA9tpyTEJGv7dhbkfpZRA7IUj/fv4S7ACIn0RuqKdJGpzGwfBNFDoaujnapwkgw4v2SuMf4H7ht25N7ze2bibM2XIctjlJX3EVpsi24HVt85NqilvqRyvV2JrqqHqF8XFG0O39bm73af5mPxVev1F6Pybd1YL91oQMh03XoFBaBUAn90hKNnISIvAd4DZmS+xLw3ar6Ll/q5z4D8GE/iMQQVlLLhpG/Q/vc1O71TfGbfHUCE35FtLKIfsaaCO182UalUnKuk3zCy4lWRrykCGmuSU40hGw8iZZDfucK7oJQJYa7kKL/0uo/70fs/i2iURDPRV65Rbi6Rb2tAyhyoG/A52DlWEUq4X6vfqNOQlXvCpyvpH5eFvaDSMBqXoSBIBjqewdgK7CB6uTrEC2XWBUExCB1CaOjcHcmVa8Suua4VL+Bg9o2DEJstSoslTvRlXESm8Buzprbgq4vZkp4JWfK1oVab+DqqbLfkNbK+NvK8UtinIqeQSo6h4rntKXTQP3Xlq5XZyzW/XrHDDXj1or5a+UZhh7xor9z19DfvtfUvIdd59ahi4qWhfXqJFaK/eEkthwFu98EWwzVsIz1Wd+4zncVs1TPsqKio2jRTxMEay5zJsyKDkND9Y1znObiJut6G8Xk7zjXGbFX6hzbxMJqoyjvdqv+MbUsD5/3F7RZ4EpHkdJOcxg90VPSOkhOIhKJiE6IJrBzNBG/aAK7p6gzkthCxM/dgaYp6gtR3Hih56VoCtDWtt2+K7UNc/aNohRHbFQR53g4CF+5K9oxoqV5HSmJmyqb4SJssZPLTQTGVeEoqJa3uTf7HitttcAiJrMl9H1nurynXb+Lnt9Ar+94CchsJLTzRhQ3DRTGfDCU56EuP8Sq0TM7XQlNJrihciUsN8nLbFGUa81UtXZiPuF1FT35xEtWe8bSqbCINTYKVr4JFam9H7s9r4mrdewjIF5i0Qae66qEQqtldX15CXPD4JYlMtqkL0QTUVjj2KK4ackQkf8b+FrgFPg94FtV9SlPvU8Cl4EZMO0Z52Qv0ajTqCMKppx24pNKAiJP25ncOzupzqRbTMhGReCZlN38Ej5zWduNwjjo2ffgWiLZE7Bt5VRRVrtEwSFSvvolAuSDSyi6LIIXnPuiTmH56GndNEgMRdz0AeBLVPV/Bn4H+M6aul+hqq+IBGIz2KF3PyJidehn3TRIDIJIqOrPqeo0P/wVstgjw8S6wgusYnXXsck6glDxB8AjFw/IkpdFaIKyfsrHro7Cawrbpg33/nqPu870VZ26zljo8LsUbdYMZhWv8ro4k6GG+ZCok1g1vg34d4EyBX5Osif6L91Y61uBJh3IqsN4FB9wtf1Gpzwz9lEbU9xyFyUdgnu5R3zjhhL36SBK8iRr0jVFxeX5ieCoHTl/SCdRqm+Vh8RRlXIPvMSiy9zXMFEGRUmbDt891Al+Sej59e63TkJEfh54nqfou1T1p/M63wVMgX8TaObLVfVREXkO8AER+W1VfTDQ393A3QBHnC/Oa6rIosrgNkjTzSi7lw2HaKlNIKyyOnNPb5k9cfqusydPCegg1LrcOXZoVP28W+GAqopsn2K7NB48k32gr1ZWTaGyTU3668SarJJ0RYRK6O1xPUisjUio6mvrykXkm4GvAb5K1f/Gq+qj+d/HReQnyTIyeYlEzmXcA3BRbtmdX2zDkJmWCUVEGC6litgb7BKRGMRSV0TuBP4e8HWqei1Q54KI3Gj2ga8GHvLVXS
u2NARy0HrGTrYTurbpdt0VuH0+JHrx6QQ8/bbRQeDWMdfWheUgcG2gj9pn4Lumro5zj7XXWfVqx1DzOw4uAmxbbNF3JtJ9GyoGQSSAtwM3komQPiIi7wQQkReIiIls+Fzgl0XkN4BfA/6jqv7sZobbE+uyVW+LTqEXWrYXFIf42pzHL/L2V0c4HILhIyAh/4WQHiB0vduHVwzVQEDmbTmFDcSgDTotWof2ju2SmGxxRMV1CKr6xYHzjwJvyPc/AXzpOse1Dyg5xJXOB/wqjD7HEqX4dA4VXwlPv66znb2KrrSHNZ+Yvu1j668bRqomrFRpPK7Ow8ul4JzDL1GqW+k3Teqllb6JxeX2XXf9Ci3KItoh6ffA91txHbEjWIYXeETEDkNkt3QSkUjUYZHEOV3RZ/K1wzUAmkqWRKc4ueSx933v24TsqLF+Kupbxza3YcqDJrCe5qtjrLZvf+fuvjsHVDiPQNu+9rI6PR/uKuYiW2yY5pyJOe4zznXqEgYivopEYlfRRBTaEI1V+zkYUUTSPBZRzQgHlDPO+dA2C92oPNmDNYG74ifznYinPtXy0iTri/sUigVFed81gW31uTrirlpxk+9vC8LgG0zIRyIkXmprNFA+137CahWiY10T8TLG0iUt6hKJ2ZAV0V0RiURELSqr/pm2c6YbInyEpsOx+ywq3IpHl7KzSBmO2cvgoH11EoNE/JnXhSWsvhYKxNZqVRbq1znRY8XlNYWt6aNkWWTO5VvFughr5W+Litx9ynXs45LpqfPXK3Yy/Xru0yuKstFU3gaO6XVQUd6mbZNhrqtpbLrgO2n3v0MQiGE5dgYL5r1WTZFF82abdI5ryL/dOrtdV+Siqor4qejXv8KeT7jlcYUsn2wxlajVhTNhu1FefVKz6j1Ux1XxWXAJEv7j0rXFsVbbDNX3EqXVTKRriwC7pLSlumg7a0qfmvRTFkXrpogFsCwlunl3gyE0NK/m6ify/mvMXrv07VVe22Nrc2yPxb4n5x5LBCWAWk7HXaXXcB21f0N9dUQbpXfj5L9M2rBjnMDCGLhzXFdEIhHRDa2W5RER+wuht5/EIBGJxNDRFDW2L7Rhtk9ptoiyUBYH5eKnnLPX0fycfyzOUJxj18QVyatY4iRbi+xznis174ij5o3M9ysiJFc34Vwf8uCuWHL5YJsyp55zXeebJonKKlb+WxQyYx2IJrARy8E6/TA8aPKKrtS3dQ7QKLqqa6fJRNaepF0TWx8FcEVKJULh6kjMjmdiL8bh6iDs+h4CUqpnj6Oujz5o0H00YeNxm/ZCNLVb1k2RSCwTXl8DMyv0IAbrICI14/MpurVOvx7SOej8HHgmfbevEGdht28fqp+GVGhKiMj4iIGWj10uotKOy1WUjp0OfRxIiCvpSgQ2HUq8t1Pghse9ZEROImJvIWkDoYiYI+pv9hIiUSexvVjQ5HWX0claySPyWMSJLCR+CnEmlfr2NZ5xNqkFQmaplbYCq/3a6xcVL3n6C/bV9tqIOVZkEtvTBHapEJEvAr4LuElVv7FvO3HGDEDd7FjLzpbVpOgbCpvtyt8NasZf8S/AL0IpOYDVTIK+9lBK+eODznOe/tyt0r962sPqr+X9tb2XxmfTBqHfaRNYd+rSNC19n5qm1e93zejpTNeiXblXRB4XkYec83eKyMdF5GEReRuAqn5CVd+86L1EItEXu2jNUfNxV5TWXa61LXZ8oa8DfTVO4taxOSfWPj038bRXp3PwText7i+7Nn8eKWGlsvtsnXevdn4ZymJjmRh4oi/JFdddt5a4D7iz1J/ICHgH8HrgDuAuEbljWfcTiUREJ0TRRXuszZs5Ym+gqg8CTzinXwU8nHMOp8B7gTcuq89IJAyGsjKpy+7WcJ3YMXjydkz2t/bt0FpsUSsS6donlMfaRgRjVvyBNiuciI8DcMQ+FS4hh4CXowhxOvM61v20fSQtOLrmNjr0h/XszZa/R7LA+zgYLmYDnEdPTuI2Ef
mQtbUN0XE78Cnr+BJwu4jcmmf5/OMi8p1972W/FNfrRppCsmI63NbENqVVeHEDUcUbmqOmr1qRlAIhxXPAZLa2HKiYyVqnS/knTJWABjtohuojAObYrevW85Vb59uauTYShT4hOdzr24jv1zXhb1iXsAws4HF9BnwYeJ+qvq9jly5UVT8PvKXPQGxEIrEstE0atMx8Ez38KIrYTH37zwP51SYNyo8b9RjFmOqJhltuiEDRfjas0jxWEArPeLzwEQibyPi4pgAhaPR9qEEbRX6X9sL99GxgWcSiSztD4fLbor8JbN8Af5eAF1nHLwQe7TMAH6K4KaIbhiJCiIgYMBK080b/UOEfBF4mIi8RkQPgTcD9y7qXyEkMCV1SmC6a9EXNf8uL31Tfl8NZNHRtX1dJNWeXgb8dS1dR8akIoNFfwir3iou66Ava9tkVbeI2LdpXF4nQtnEBS4CxbuqBRk5CRN4DvIZMf3EJ+G5VfZeIvBV4P9kXe6+qfrTPAHzYSyJh0hTKsoPmrQELi4vc9uwgfG36s0NnlOqVxUoVkZSNUP6JQrzkhANxxVdQIgyic5GTad4N+Ff0471Hqx+rTzv8hk/UVKdXKIlzvEQhPIm4fdWJubz9+dpcsqh/my23lpmmNISeROImEbmHGp2Eqt4VOP8A8ECfTpuwl0QiYgFsOCjhRtGG+7FQSygjdhYLKK5j0qGIGrQJGbLKCbqp7VbiobBSvjbirE/R7fTn5VTs9rGkU2JxE572g2Ow9sVzrijrsNJvI06qXeUvkHa2Uxt9scUcxSqxS7GbouJ6VfB9PAP/oDq/1032566tvG0x1GLCDvpFtNEvOP2VfCJ8m92+25ZXvONc6463jijUiKmWHdJiEHNVXXyk0Hcy8G+lDsLaFdcrxSCIhIj8QxF5REQ+km9vCNSrxCdZOSoT3YZe4HUpAGvuT2zBv3u+w/G8L4JEoyLTD+gE7MnadYYrxVpqIkpq1W/Zfmh8rry+jU+ESxCbjr1tl8a0xvd0U8rpSgj2gRCWHo50OefxtKre3dFHYuUYkrjpn6rq/xMqtOKTvI7MLviDInK/qv7WugYYAZIquoUK/7XAFcm1EdFF7CSSjWd3Wh4GwUm0xErjkxTYclZ3aahdeS8mJw9yGu4qPSTOafPz2Ne57XlETaW5vGX73tV94LgtZ9C+/5r3NL6+GTb0HRvFdQ9OIoqbGvBWEfnNPBTuszzl3vgk6xlaGKopukhM+iaitC1Eq2mIHWTvbcRVrhgI9xgqRKBuq9T3tem233HMBWoy1fnrN5QPAWt4jxf6ztaMnjqJQYqb1kYkROTnReQhz/ZG4IeBlwKvAB4Dvt/XhOdc8K0TkbtNoKwzTtZiG71r8NnC26EyqvUb5OpOOOxaRW9I7t9hlV+5tqZ+xSKpLbcS0lEEjivExJn3WhPIALbZf2FT0FSXOj+sOFT42rE2nYSqvrZNPRH5V8B/8BR1ik+iqvcA9wBclFuG+wtsGaLtf3vEZ7W/6KmTaHSm2wQGobgWkeer6mP54dcDD3mqFfFJgEfI4pP85YU7NyzsqtOa9vFxWCSAXypZ1Fdz3j3Xtl3zrvt8HIIydWfclWO8fGHJTwLrXhAqPhNWE2a/GE5+0BT9NdQ/zl/DWdjlfg/saget9Q5tOYBQta5zkunPDQeeR4VdawDAdXE/axJXicAoOtMtHf9ERF5B9gl8EvhrACLyAuBHVPUNqjpdZXySpaFLlNcusZqW2S/ZJOBOvo31Q22nCqMF7sMlGvkEH3SeaxPzaQlhORphE5FAWaf2SvWdC2rTxXZoPEDMasexzEm8i1gnis4GgUEQCVX9psD5R4E3WMcri08yOLTlIhYNPV6kz5SwR3RRF7Qlw+X1kO5wbPrLCqvnncjiZc6iciLb7+JxbcZU9KfO+RrC0NVvpJPSO9BnHcQkoVoEXQnLniPZCmuDdhgEkYjYHkjanlDsPQJitYhdh0Y/iYiBYNnvYcsVYGuLG3
fFXTl2RBmu7L9oW1utmOtMYH2+EcFjLG6hZb9BEU6T7qGFWKqNJVl4bEte0e7O3Lcy7JqfROQkumAd6Ug3DFd5XK2Q//WJwwLioYpoyQn2FxQ1edrxdVOoIELiK9+KvmZy9op/PMSnTindKFJqMH0NjrMlBmxRuVwMNN3pqN8PFxXX2wZNU6SOKLRRPIcIS0jn0FKZXatIXgZC42shQmll+um2EyAKrRMUeepXCAU4ZlDlY7d+JxPWuvotuAUXtX03cQcr1gm0UnzXvce+69suwBoU3zoAoiE7Jm6KRGId6MqBLNPqacvR1degUSHeMGEvpER26i/qJ7H13EBXB7UBTPDLwpCd47pi/4hEm7wNQ8ai1kyd+qJ2Bd8omiq1VeVMqn4R826rXEVusksNMxDgTNpM1hUPbcrHJYsm59i7srZ1G6F+irrtJ5TGuWddc9MuWDCtyG9iAT+JQWL/iMQqsYMcQMmhDfxEIzTxNRCFprp9V+ElsRM0m78W46iOtW//1bb9Dbau2zCenQ3HsaXhdJId0vBHIhHRDS31A3uJ+GwiMDqJ7SRuPmyx3GUF2PSqpYn93eT4mhZGnrEFzTQ95+pMaBsD7tl1fGKepuudMXmtmBpX8tV2Gj2y2z6Hpt99k4vWprFtOnLrhr6ZEWnnjWgCG7HtKERPAX1IWJxEVcfgy3ltJlVfG5RFX5VYTZ7+KuNpO9GH6tqTvloingYCVunHN28G2mhakO6smGmLYfwkeiCawO4dvL4EPRXPfQIEpviD/JGdzybaGl2DbxgizWIVH1Hw6RgsnUGbNuoHZvlNOGazTQp2X7C+0vmu33uIgwq01Zbb6lQeqF8E80udYyvgX2csM6jfThA9ZRRNYCP2FSv3z4iI2HIIMXZTxBag82Su2i7IX9F+vfWRn3Pwc1FZhNcaayjKZq+CFiInU14JJe5GAKRmdV+Mr9q3XVYO+Kce8VOoXa0tc9EsYqovL9XtEdwvirAWR+QkdgSaKjIUk9U+4qRQO9C/LRMVFmnhNd3QVx1R8Ooc8sv6DN2e13ze1m08wJ1xLLoYrJvMuxCNTuWm7XVGfV1nWwtiHRkqRaLH9X6iKZHOKvuFdn0te0ye9nzciWj9fFqrj/D2S71i2mmjouj2Ka5bzA1BxbXhElrOL22ssWr7bhpbHZY9IbdOhrRGQlAJkrikvpdoidUzdtMgEYlERCdEnURExHZARC4A/wI4BX5RVf9Nn3ain8SSoJu2B1/VSs4196xDqmG79IApaCYzr2/WlvlXzE89Y3X7a7MK93IRAY7A6CPK4wnXt9vqYv4KZM+z8flofd+LYsPiItV0899XB2QmsGnnrVXbIveKyOMi8pBz/k4R+biIPCwib8tPfwPw46r6V4Gv63s/kUiEsIoPwxv9sm5ibTMxsz5nqpZ9hSbluhwIhRy9lzlls8OaaP3m1q+OrfuwzP300U3UPasSBvbbB3+/0Hu+qu9so4RNGfXYWuI+4E77hIiMgHcArwfuAO4SkTuAFwKfyqvN+t5NFDdFdEPSUuRUpyxu0J0UOo4GXwzJPywzlmCAwLqx2HXc9p1zrZznAtdW69RUaMX57I7Me9dgOIlVQFUfFJEXO6dfBTysqp8AEJH3Am8ELpERio+wAEMQiUREN6RE/jMiogGjfizebSLyIev4HlW9p8V1tzPnGCAjDq8Gfgh4u4j8OeB9fQYEkUisDmuwftqoErnF6rzOpLXJIqp0Pa6fRMBaqo576LLw9nAV3rFpeb9tpNleprEN41kX1sLBbDmXtECAv8+p6it7dVmFqupV4Fv7DMRGXBMOFY2B0zb0IVlvzFxxu9hY2yiva6932m/SMQThTvw1fXSCWkr6YJ0GOXoL/cba0PQsNh0ocwBYc4C/S8CLrOMXAo8u614iJ7FncLmPztzIPombmriTBiyamS5iOyH09rjuG+Dvg8DLROQlwCPAm4C/3GcAPuzL5w40eFtukYndohAT4M14V1shJv
q32aK8YWVvcya17WmgvRAn0MRROOWdstQF2zTPtqFeizoLcw+a/cZucL+9gabB73s1HthK0mOjBSchIu8B/hvwchG5JCJvVtUp8Fbg/cDHgB9T1Y8u624iJ7Eu+PJc94rsmr/UXcOJtPHczuM3AWjSY2wtv7fWK2wzZMqL+FAY8YIjyisvK1R4SNRUjuWEn0A19dXCUqozVOeireDYdV7eR5TWd3L1moHvFtXK0peuhpNQ1bsC5x8AHujTaRMGQSRE5N8BL88PbwaeUtVXeOp9ErhMZvM77ankWQzmhXYn/KHCvKs9hlsE3oM5wWjTTh4Tq5YQ6DxE3yJpSmFu8lrKd2FRlc6hwovzVkFbxXQD2uhwROk+CXfgXLzYxnl6wMSlZ/rSm0TkHuB9qtrbGmnZGASRUNW/ZPZF5PuBp2uqf4Wqfm71o4rwYp90EhERPSAoo5h0aDUQEQH+IvCVmx5La/QV/4RgZKey+Ey8VBNZtZzXWmcoMgNp0XYKam65hWmtuvVaKJXbB8lrPm5tqiodrbc6zi3L0CdV2lsGVqHj2xKrKaG3n0TkJFrgTwOfUdXfDZQr8HMiosC/rHM0EZG7gbsBjjjfe0Cas7TSVry0bKJRGkwHPcEq/TRSkKQ9AQrli2iqX4ijKPtJNGW9K4ijPad09LgOmtV6rqn4SdiK91bEpJtpq6xa+dyFUKzSFLsjUdABiZ96elzvNychIj8PPM9T9F2q+tP5/l3Ae2qa+XJVfVREngN8QER+W1Uf9FXMCcg9ABflluEtQVY1iS+aT6Kp+VK49Lwr2ukVgmEzeg/GNFw9rnBRPoLRpIto4iiWgLbEYVk5LlpjlZP/ljvLNUHQvpzEILE2IqGqr60rF5ExWdTCP1HTxqP538dF5CfJYpZ4iUT9YNKliHP2ETFUeEREM3rqJKK4qQGvBX5bVS/5CvPY6ImqXs73vxr4R717GyKhqOMu2nAe60qEtIS+vZZTrfuqci++THQlayfnev+Y1Fun6n0dbqMRRrTU5fpNrrzb9N3kKT40rNgnKstxvTvipiHNkm/CETWJyAtExNj+Phf4ZRH5DeDXgP+oqj+79FHUhe7ug42HLV4f7PwKXa8rha4o2tHSpBoMn93QX+FI1rM81Ec1n4QzXud+Oi0uXT3HrmMV38mWKLqHjsFwEqr6LZ5zjwJvyPc/AXzpmoe1PViTaWprUZOVP3xVoSlc/4dKutKKgrvlpGErpD39rRK9fCRWjd0Rr68J2teZbpAYEicRsQWIeQwiIuphTGDXGOBvpRgMJxERQIO1UlDu3rH9bPE9NzSVVPqF5vD20d4Cqg2a/CQK/cRCJrDhsmWLgRbScbgwITksCy1ZgiinWRwXFw82kn4/6CB1EpFIrBuhST/V1fhWBCCqaCqNvOTcbNWehXt2WhAk6WYGm4fwKD67kJ9EyQTWIUqdTGCr5Y1+El1gE5lFJ9dcF9J6DOkGuMF1pS0dCER2S9wUiUQXuBY8zrFqiqzTYqqrNZP9YfblEGROYCRZjIuxCZBNBIz3dV1Av/zCYky17fsSFNXUD6Ku3HGgs72s7XhNi3IgxpFu4YneDvDX57o1QV1LJLf/gRKbDjmrB49IJCK6waMQjoiImGMBE9hBYu8V15rqimLKLwHLHNdKwziUTUBXDo9OoE6H0CunQ42oaak6hCbYfhWr6nOZ78ZQvyVY43eeiZu6bkTF9R5ipfGT2ukwFlZsVxpsUUVXM5/Vio/c/BEOx9PZR8G9xnP9ykxiN+AfMRirtaGMYwFk1k1RcR2xr9hjcVNjsqQ9fjYRZSQ75AW5P0SiTxiOTYa5WAY2Pf6cvZdEGjPLeWFW9OCNItuUTKh1qHK3fg38Vk7+qK9tMvBVIshuWlzTZiW/6THWoQ8nsuQwHTHAX8RyseKorYOEY9JqiEApNLidX8LbBs1+EnYZhIlFkyipRl/hg8+yqZrqdMAT7SqwR/e7gLhpkIhEYp3w5bkOIcT5tCUqq+
IiNknLbD8LX3A/ixh4V/Etv9s2fhI7PdEvHNSvwyp6QDkgloldEjftvXVTRAMqdumbGUZExLbAcBJdN6J1U8RasArxlREPbZCLqOUMujrT1fRRi75OeCvGSriaXeWS1oRo3bSrGGKOiXVBTeSmecyKLH5TuU6GTKOgaVZXEluj0EExXTueXLZPrptoTFlKJZ+EXVY3rqbw462iwVrjXSa3NVds540ab+tCJFburIjbVOhE9nyyX3HuCB+EtUbYWTkikegLV7+wjthLTXqGPuHCzTfU9ro2HIobaykPG16rqM7PZeUNXEslPAqNfhLQYcXfxk+iYfKthOJw2qwoso3F0CoU24vkxG66bh1EyLWmGrweQ6PiOmKPsWmz2oiIgSMLy7E72KV7aYW1uea36ce3CltbaIsW2dgGBje7W2O2N2sF3woN9Tv3H+xnc899GWHDWyHUT1s/jDV8p4MNxzMwRE6iJTRNkbbmqxuAqLYLvbGFfhmFz0TJ98EvcoJ1+ElULb62LjJ0B0Ix9MWEDk38JDDans+rEZFIrAuLimmWIeZZRorTNRGXUvyntkrrdftJuPDpMoaOZcyvywpbviMQhNEOxWcZ7tI4YpjYsQ96ldgqYhGxVCQ9tnVBRL5IRN4lIj/epn4kEtuOljLmoYsMwJLxh8baUcfQVWfQWcdQN54lJRlaB1q9G+vSZewABBiJdN5atS1yr4g8LiIPOefvFJGPi8jDIvK2ujZU9ROq+ua29xPFTUPCmlOY9sIyc17bJq/ZTrhbXywnS7+yFj+JwCTZqI/QchtrzUexaeypcjhZnbjpPuDtwLvNCREZAe8AXgdcAj4oIvcDI+B7neu/TVUf79LhfhOJOue5VU/YbXUMdeNoM8Zlmazmvg+ooonkE32uLE/nDnWFPgDLSzv3A5Ckp9d2wOeha9huc41NFLxtNMxrTUSlCwofiXy8dkBA8+xMylISy5Eu/12X5lPR5X0MnW/7vayaI6kbyxqc67KwHKuZO1T1QRF5sXP6VcDDqvoJABF5L/BGVf1e4GsW7TOKm7YBi67GVhiyoSKqaFDwblr8MrTxlNA0Nnd+G1oojiFxDRseS4J03oDbRORD1tY2RMftwKes40v5OS9E5FYReSfwx0XkO5sa329OYtexCBeh2X+Sh96oDdu9CqjlkR3IRGfXLVbVdv0m09dKn+XDshlsPTGs1F8z5iFBtDdXU1wfsRCE9joGB2fAh4H3qer7OnVZRfCHVNXPA29p2/haP30R+Qsi8lERSUXklU7Zd+ZKl4+LyJ8NXH+LiHxARH43//us9Yy8Bo5CTzVFNxAvZinIHezqFJnrCPIXmmwlLcv/S4rmEnfjXNikA/CU+whExZnOGY9df9McSvE7bvGkX/mOtuhekh7/yAP8dSQQkHEOL7KOXwg8urx7WS8eAr4BeNA+KSJ3AG8C/hhwJ/AvcmWMi7cBv6CqLwN+IT+OWCM2PfltFeKzao8tIgBNyMJy9BI39Q0V/kHgZSLyEhE5IJtL71/W/ayVSKjqx1T1456iNwLvVdUTVf194GEyZYyv3o/m+z8K/PmVDHSZWNVqrqnNrn1umPkpLH6CJqUNx772fG34tjbX9h2LDoCwdv1tl/1uLavfrYEwkqTzRgtOQkTeA/w34OUicklE3qyqU+CtwPuBjwE/pqofXdbdDEUncTvwK9ZxSPHyXFV9DEBVHxOR54QazJU+dwMccX55I11VWAuf/mANocuNaKlVSI9lQueCVF/+ahtNlkxzS6Dyb7NwPok6EVbTtS48EWHXibX5yfhErXtGVDJOotd3e5OI3EONTkJV7wqcfwB4oE+nTVg6kRCRnwee5yn6LlX96dBlnnMLvQGqeg9wTz6myz+vP/7xosUtVRksCbcBn9v0IAaC+CwyxOcwx8sXbeDXf/Pk/aPn/+5tPS793F4kHVLV1/a4rK3i5TMi8vyci3g+0NYp5OOq+srmarsPEflQfBYZ4rPIEJ/DHC
LyoUXbUNU7lzGWoWAofhL3A28SkUMReQnwMuDXAvW+Od//ZiDEmURERERELAHrNoH9ehG5BPwp4D+KyPsBciXLjwG/Bfws8O2qOsuv+RHLXPb7gNeJyO+SuaB/3zrHHxEREbFvEB2o8meZEJG7cx3F3iM+iznis8gQn8Mc8VlUsRdEIiIiIiKiH4aik4iIiIiIGCB2mkgsGgZkVyEi/1BEHhGRj+TbGzY9pnWiS+z9XYeIfFJE/kf+Hixs2bNN8OVmGGTonw1jp4kEi4cB2WX8U1V9Rb6txAlniLBi778euAO4K38f9hlfkb8H+2YGex/Z928jhv5xsNNEYglhQCJ2D0XsfVU9Bd5L9j5E7BlU9UHgCef09oX+WTF2mkjUoFP89R3FW0XkN3OWe59Y6vjbl6HAz4nIr3fIX7DLKIX+AYKhf/YFQ4nd1BtDCQMyNNQ9F+CHge8hu+fvAb4f+Lb1jW6j2PnfviO+XFUfzeOgfUBEfjtfYUdEADtAJFYcBmRr0fa5iMi/Av7DioczJOz8b98Fqvpo/vdxEflJMnHcPhOJvqF/dhb7Km5qGwZkJ5G//AZfT6bg3xesNPb+NkFELojIjWYf+Gr2613wIYb+cbD1nEQdROTrgX8OPJssDMhHVPXPqupHRcSEAZlihQHZE/wTEXkFmZjlk8Bf2+ho1ghVnYqIib0/Au5dZuz9LcNzgZ+ULLT6GPi3qvqzmx3S+pDnZngNWW7pS8B3k4X6+TEReTPwh8Bf2NwIh4HocR0REREREcS+ipsiIiIiIlogEomIiIiIiCAikYiIiIiICCISiYiIiIiIICKRiIiIiIgIIhKJiIiIiIggIpGI2EmISCIiD4rI/c7583mY8B/Oj79LRP6LiFwVkWgPHhHhIBKJiJ2EqqbAtwBfKSJ2XKp/TOY49h358SHwE8A/W+f4IiK2BdGZLmKnISJvAf4J8D8BX0zmaf0aVf1lp943Av9eVX0BACMi9hY7HZYjIkJV35mHZ/nXwIuBH3AJRERERBhR3BSxD3gL8L8CJ8Df3/BYIiK2CpFIROwDvg24ThYW/Is2PJaIiK1CJBIROw0R+ZNkeYq/EfgAcN8e5jOPiOiNSCQidhYicgS8G7hPVX8GuJtMef13NzqwiIgtQiQSEbuM7wWOgL8NoKqfBr4d+Ici8iUAIvIFeW6NF+fHr8i3GzYy4oiIgSGawEbsJETkzwD/CXitqv6iU/ZjZLqJLwN+hHkmMhtf4V4XEbGPiEQiIiIiIiKIKG6KiIiIiAgiEomIiIiIiCAikYiIiIiICCISiYiIiIiIICKRiIiIiIgIIhKJiIiIiIggIpGIiIiIiAgiEomIiIiIiCAikYiIiIiICOL/B5NeRRX/wzbmAAAAAElFTkSuQmCC\n",
"text/plain": [
"<Figure size 432x360 with 2 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"x, y = torch.meshgrid(\n",
" torch.linspace(-10,10,200), \n",
" torch.linspace(-10,10,200)\n",
")\n",
"xy = torch.stack([x, y], -1)\n",
"z = rosen(xy, reduce=False)\n",
"\n",
"fig, ax = plt.subplots(figsize=(6,5))\n",
"c = ax.pcolormesh(x, y, z, shading='auto', cmap='viridis_r', \n",
" norm=LogNorm(vmin=z.min(), vmax=z.max()))\n",
"ax.set_xlabel('X1', fontsize=14)\n",
"ax.set_ylabel('X2', fontsize=14, rotation=0)\n",
"ax.yaxis.set_label_coords(-0.15, 0.5)\n",
"fig.colorbar(c, ax=ax)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1. Minimize (single point)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor(4900.)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x0 = torch.tensor([1., 8.])\n",
"\n",
"rosen(x0)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"initial fval: 4900.0000\n",
"iter 1 - fval: 119.3775\n",
"iter 2 - fval: 26.0475\n",
"iter 3 - fval: 2.2403\n",
"iter 4 - fval: 0.9742\n",
"iter 5 - fval: 0.9085\n",
"iter 6 - fval: 0.9070\n",
"iter 7 - fval: 0.8999\n",
"iter 8 - fval: 0.8847\n",
"iter 9 - fval: 0.8506\n",
"iter 10 - fval: 0.8048\n",
"iter 11 - fval: 0.7286\n",
"iter 12 - fval: 0.5654\n",
"iter 13 - fval: 0.4128\n",
"iter 14 - fval: 0.3506\n",
"iter 15 - fval: 0.2667\n",
"iter 16 - fval: 0.1814\n",
"iter 17 - fval: 0.1401\n",
"iter 18 - fval: 0.1074\n",
"iter 19 - fval: 0.0681\n",
"iter 20 - fval: 0.0385\n",
"iter 21 - fval: 0.0196\n",
"iter 22 - fval: 0.0157\n",
"iter 23 - fval: 0.0063\n",
"iter 24 - fval: 0.0030\n",
"iter 25 - fval: 0.0009\n",
"iter 26 - fval: 0.0002\n",
"iter 27 - fval: 0.0000\n",
"iter 28 - fval: 0.0000\n",
"iter 29 - fval: 0.0000\n",
"iter 30 - fval: 0.0000\n",
"Optimization terminated successfully.\n",
" Current function value: 0.000000\n",
" Iterations: 30\n",
" Function evaluations: 39\n",
"\n",
"final x: tensor([1.0000, 1.0000])\n"
]
}
],
"source": [
"# BFGS\n",
"res = minimize(\n",
" rosen, x0, \n",
" method='bfgs', \n",
" options=dict(line_search='strong-wolfe'),\n",
" max_iter=50,\n",
" disp=2\n",
")\n",
"print()\n",
"print('final x: {}'.format(res.x))"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"initial fval: 4900.0000\n",
"iter 1 - fval: 119.3775\n",
"iter 2 - fval: 2.7829\n",
"iter 3 - fval: 2.7823\n",
"iter 4 - fval: 2.7822\n",
"iter 5 - fval: 2.7818\n",
"iter 6 - fval: 2.7810\n",
"iter 7 - fval: 2.7785\n",
"iter 8 - fval: 2.7723\n",
"iter 9 - fval: 2.7563\n",
"iter 10 - fval: 2.7187\n",
"iter 11 - fval: 2.6477\n",
"iter 12 - fval: 2.5353\n",
"iter 13 - fval: 2.2997\n",
"iter 14 - fval: 1.8811\n",
"iter 15 - fval: 1.5526\n",
"iter 16 - fval: 1.1877\n",
"iter 17 - fval: 1.0779\n",
"iter 18 - fval: 0.9352\n",
"iter 19 - fval: 0.6669\n",
"iter 20 - fval: 0.5938\n",
"iter 21 - fval: 0.4380\n",
"iter 22 - fval: 0.3308\n",
"iter 23 - fval: 0.2343\n",
"iter 24 - fval: 0.1972\n",
"iter 25 - fval: 0.1279\n",
"iter 26 - fval: 0.0869\n",
"iter 27 - fval: 0.0695\n",
"iter 28 - fval: 0.0473\n",
"iter 29 - fval: 0.0298\n",
"iter 30 - fval: 0.0158\n",
"iter 31 - fval: 0.0065\n",
"iter 32 - fval: 0.0029\n",
"iter 33 - fval: 0.0004\n",
"iter 34 - fval: 0.0001\n",
"iter 35 - fval: 0.0000\n",
"iter 36 - fval: 0.0000\n",
"iter 37 - fval: 0.0000\n",
"iter 38 - fval: 0.0000\n",
"Optimization terminated successfully.\n",
" Current function value: 0.000000\n",
" Iterations: 39\n",
" Function evaluations: 48\n",
"\n",
"final x: tensor([1.0000, 1.0000])\n"
]
}
],
"source": [
"# L-BFGS\n",
"res = minimize(\n",
" rosen, x0, \n",
" method='l-bfgs', \n",
" options=dict(line_search='strong-wolfe'),\n",
" max_iter=50,\n",
" disp=2\n",
")\n",
"print()\n",
"print('final x: {}'.format(res.x))"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"initial fval: 4900.0000\n",
"iter 1 - fval: 6.0505\n",
"iter 2 - fval: 2.8156\n",
"iter 3 - fval: 2.8144\n",
"iter 4 - fval: 2.3266\n",
"iter 5 - fval: 2.1088\n",
"iter 6 - fval: 1.7060\n",
"iter 7 - fval: 1.5851\n",
"iter 8 - fval: 1.2548\n",
"iter 9 - fval: 1.1625\n",
"iter 10 - fval: 0.8967\n",
"iter 11 - fval: 0.8249\n",
"iter 12 - fval: 0.6160\n",
"iter 13 - fval: 0.5591\n",
"iter 14 - fval: 0.4051\n",
"iter 15 - fval: 0.3299\n",
"iter 16 - fval: 0.2217\n",
"iter 17 - fval: 0.1886\n",
"iter 18 - fval: 0.1167\n",
"iter 19 - fval: 0.0987\n",
"iter 20 - fval: 0.0543\n",
"iter 21 - fval: 0.0442\n",
"iter 22 - fval: 0.0210\n",
"iter 23 - fval: 0.0118\n",
"iter 24 - fval: 0.0035\n",
"iter 25 - fval: 0.0021\n",
"iter 26 - fval: 0.0005\n",
"iter 27 - fval: 0.0000\n",
"iter 28 - fval: 0.0000\n",
"iter 29 - fval: 0.0000\n",
"Optimization terminated successfully.\n",
" Current function value: 0.000000\n",
" Iterations: 29\n",
" Function evaluations: 84\n",
" CG iterations: 41\n"
]
}
],
"source": [
"# Newton CG\n",
"res = minimize(\n",
" rosen, x0, \n",
" method='newton-cg',\n",
" options=dict(line_search='strong-wolfe'),\n",
" max_iter=50, \n",
" disp=2\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"initial fval: 4900.0000\n",
"iter 1 - fval: 0.0000\n",
"iter 2 - fval: 0.0000\n",
"Optimization terminated successfully.\n",
" Current function value: 0.000000\n",
" Iterations: 2\n",
" Function evaluations: 3\n"
]
}
],
"source": [
"# Newton Exact\n",
"res = minimize(\n",
" rosen, x0, \n",
" method='newton-exact',\n",
" options=dict(line_search='strong-wolfe', tikhonov=1e-4),\n",
" max_iter=50, \n",
" disp=2\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 2. Minimize (batch of points)\n",
"\n",
"In addition to optimizing a single point, we can also optimize a batch of points.\n",
"\n",
"Results for batch inputs may differ from those of sequential point-wise optimization due to convergence stopping. Assuming that all points run for the full `max_iter` iterations, the two modes should be equivalent up to two conditions:\n",
"1. When using line search, the optimal step size at each iteration may differ across points. Batch mode selects a single step size for all points, whereas sequential optimization selects one per point.\n",
"2. When using conjugate gradient (e.g. Newton-CG), convergence stopping is also applied to the linear inverse sub-problems."
]
},
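{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch (assuming `minimize`, `rosen`, and `torch` from the cells above), the batch result can be compared against sequential per-point runs:\n",
"\n",
"```python\n",
"x0 = torch.randn(4, 2)\n",
"res_batch = minimize(rosen, x0, method='bfgs', max_iter=50)\n",
"res_seq = torch.stack([\n",
"    minimize(rosen, p, method='bfgs', max_iter=50).x for p in x0\n",
"])\n",
"# res_batch.x and res_seq agree up to the two conditions above\n",
"```"
]
},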
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor(602.9989)"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"torch.manual_seed(337)\n",
"x0 = torch.randn(4,2)\n",
"\n",
"rosen(x0)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"initial fval: 602.9989\n",
"iter 1 - fval: 339.5845\n",
"iter 2 - fval: 146.6088\n",
"iter 3 - fval: 92.8062\n",
"iter 4 - fval: 88.3703\n",
"iter 5 - fval: 79.7213\n",
"iter 6 - fval: 33.5407\n",
"iter 7 - fval: 31.6904\n",
"iter 8 - fval: 22.7846\n",
"iter 9 - fval: 7.9474\n",
"iter 10 - fval: 6.1061\n",
"iter 11 - fval: 4.0852\n",
"iter 12 - fval: 3.6830\n",
"iter 13 - fval: 3.5514\n",
"iter 14 - fval: 3.2786\n",
"iter 15 - fval: 2.9614\n",
"iter 16 - fval: 2.3316\n",
"iter 17 - fval: 1.9711\n",
"iter 18 - fval: 1.8731\n",
"iter 19 - fval: 1.5464\n",
"iter 20 - fval: 1.2185\n",
"iter 21 - fval: 1.1106\n",
"iter 22 - fval: 0.9249\n",
"iter 23 - fval: 0.7769\n",
"iter 24 - fval: 0.6506\n",
"iter 25 - fval: 0.6083\n",
"iter 26 - fval: 0.5571\n",
"iter 27 - fval: 0.5008\n",
"iter 28 - fval: 0.4542\n",
"iter 29 - fval: 0.4272\n",
"iter 30 - fval: 0.4088\n",
"iter 31 - fval: 0.3964\n",
"iter 32 - fval: 0.3924\n",
"iter 33 - fval: 0.3894\n",
"iter 34 - fval: 0.3876\n",
"iter 35 - fval: 0.3837\n",
"iter 36 - fval: 0.3795\n",
"iter 37 - fval: 0.3708\n",
"iter 38 - fval: 0.3569\n",
"iter 39 - fval: 0.3319\n",
"iter 40 - fval: 0.3180\n",
"iter 41 - fval: 0.3149\n",
"iter 42 - fval: 0.2971\n",
"iter 43 - fval: 0.2936\n",
"iter 44 - fval: 0.2769\n",
"iter 45 - fval: 0.2584\n",
"iter 46 - fval: 0.2416\n",
"iter 47 - fval: 0.2346\n",
"iter 48 - fval: 0.2293\n",
"iter 49 - fval: 0.2224\n",
"iter 50 - fval: 0.2204\n",
"Warning: Maximum number of iterations has been exceeded.\n",
" Current function value: 0.220413\n",
" Iterations: 50\n",
" Function evaluations: 91\n",
"\n",
"final x: \n",
"tensor([[1.3291, 1.7689],\n",
" [1.1972, 1.4332],\n",
" [1.2357, 1.5273],\n",
" [0.8715, 0.7573]])\n"
]
}
],
"source": [
"# BFGS\n",
"res = minimize(\n",
" rosen, x0, \n",
" method='bfgs', \n",
" options=dict(line_search='strong-wolfe'),\n",
" max_iter=50,\n",
" disp=2\n",
")\n",
"print()\n",
"print('final x: \\n{}'.format(res.x))"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"initial fval: 602.9989\n",
"iter 1 - fval: 339.5845\n",
"iter 2 - fval: 95.0195\n",
"iter 3 - fval: 21.5655\n",
"iter 4 - fval: 5.2430\n",
"iter 5 - fval: 4.9395\n",
"iter 6 - fval: 4.9042\n",
"iter 7 - fval: 4.8668\n",
"iter 8 - fval: 4.7709\n",
"iter 9 - fval: 4.6042\n",
"iter 10 - fval: 4.2993\n",
"iter 11 - fval: 3.7640\n",
"iter 12 - fval: 3.0566\n",
"iter 13 - fval: 3.0339\n",
"iter 14 - fval: 2.3893\n",
"iter 15 - fval: 2.2195\n",
"iter 16 - fval: 2.0010\n",
"iter 17 - fval: 1.5127\n",
"iter 18 - fval: 1.2743\n",
"iter 19 - fval: 1.0382\n",
"iter 20 - fval: 0.8332\n",
"iter 21 - fval: 0.7181\n",
"iter 22 - fval: 0.5824\n",
"iter 23 - fval: 0.4413\n",
"iter 24 - fval: 0.3279\n",
"iter 25 - fval: 0.2649\n",
"iter 26 - fval: 0.1784\n",
"iter 27 - fval: 0.1088\n",
"iter 28 - fval: 0.0634\n",
"iter 29 - fval: 0.0492\n",
"iter 30 - fval: 0.0307\n",
"iter 31 - fval: 0.0207\n",
"iter 32 - fval: 0.0144\n",
"iter 33 - fval: 0.0130\n",
"iter 34 - fval: 0.0120\n",
"iter 35 - fval: 0.0119\n",
"iter 36 - fval: 0.0118\n",
"iter 37 - fval: 0.0116\n",
"iter 38 - fval: 0.0112\n",
"iter 39 - fval: 0.0101\n",
"iter 40 - fval: 0.0078\n",
"iter 41 - fval: 0.0044\n",
"iter 42 - fval: 0.0030\n",
"iter 43 - fval: 0.0024\n",
"iter 44 - fval: 0.0010\n",
"iter 45 - fval: 0.0002\n",
"iter 46 - fval: 0.0000\n",
"iter 47 - fval: 0.0000\n",
"iter 48 - fval: 0.0000\n",
"iter 49 - fval: 0.0000\n",
"iter 50 - fval: 0.0000\n",
"Warning: Maximum number of iterations has been exceeded.\n",
" Current function value: 0.000027\n",
" Iterations: 50\n",
" Function evaluations: 56\n",
"\n",
"final x: \n",
"tensor([[1.0013, 1.0026],\n",
" [1.0032, 1.0064],\n",
" [0.9962, 0.9923],\n",
" [0.9997, 0.9995]])\n"
]
}
],
"source": [
"# L-BFGS\n",
"res = minimize(\n",
" rosen, x0, \n",
" method='l-bfgs', \n",
" options=dict(line_search='strong-wolfe'),\n",
" max_iter=50,\n",
" disp=2\n",
")\n",
"print()\n",
"print('final x: \\n{}'.format(res.x))"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"initial fval: 602.9989\n",
"iter 1 - fval: 367.2567\n",
"iter 2 - fval: 47.2528\n",
"iter 3 - fval: 17.9457\n",
"iter 4 - fval: 5.0277\n",
"iter 5 - fval: 4.5732\n",
"iter 6 - fval: 4.2605\n",
"iter 7 - fval: 3.5691\n",
"iter 8 - fval: 3.1882\n",
"iter 9 - fval: 3.0478\n",
"iter 10 - fval: 2.9619\n",
"iter 11 - fval: 2.5264\n",
"iter 12 - fval: 2.1677\n",
"iter 13 - fval: 1.8351\n",
"iter 14 - fval: 1.7114\n",
"iter 15 - fval: 1.3834\n",
"iter 16 - fval: 1.1907\n",
"iter 17 - fval: 0.8378\n",
"iter 18 - fval: 0.7662\n",
"iter 19 - fval: 0.5443\n",
"iter 20 - fval: 0.4112\n",
"iter 21 - fval: 0.2527\n",
"iter 22 - fval: 0.2030\n",
"iter 23 - fval: 0.1118\n",
"iter 24 - fval: 0.0870\n",
"iter 25 - fval: 0.0403\n",
"iter 26 - fval: 0.0301\n",
"iter 27 - fval: 0.0104\n",
"iter 28 - fval: 0.0071\n",
"iter 29 - fval: 0.0023\n",
"iter 30 - fval: 0.0000\n",
"iter 31 - fval: 0.0000\n",
"iter 32 - fval: 0.0000\n",
"Optimization terminated successfully.\n",
" Current function value: 0.000000\n",
" Iterations: 32\n",
" Function evaluations: 83\n",
" CG iterations: 44\n",
"\n",
"final x: \n",
"tensor([[1.0000, 1.0000],\n",
" [1.0000, 1.0000],\n",
" [1.0000, 1.0000],\n",
" [0.9999, 0.9999]])\n"
]
}
],
"source": [
"# Newton CG\n",
"res = minimize(\n",
" rosen, x0, \n",
" method='newton-cg', \n",
" options=dict(line_search='strong-wolfe'),\n",
" max_iter=50,\n",
" disp=2\n",
")\n",
"print()\n",
"print('final x: \\n{}'.format(res.x))"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"initial fval: 602.9989\n",
"iter 1 - fval: 5.2148\n",
"iter 2 - fval: 4.4287\n",
"iter 3 - fval: 3.6888\n",
"iter 4 - fval: 2.7710\n",
"iter 5 - fval: 1.9319\n",
"iter 6 - fval: 1.5444\n",
"iter 7 - fval: 1.0819\n",
"iter 8 - fval: 0.8638\n",
"iter 9 - fval: 0.5068\n",
"iter 10 - fval: 0.3768\n",
"iter 11 - fval: 0.2248\n",
"iter 12 - fval: 0.1487\n",
"iter 13 - fval: 0.0758\n",
"iter 14 - fval: 0.0412\n",
"iter 15 - fval: 0.0125\n",
"iter 16 - fval: 0.0055\n",
"iter 17 - fval: 0.0003\n",
"iter 18 - fval: 0.0000\n",
"iter 19 - fval: 0.0000\n",
"iter 20 - fval: 0.0000\n",
"Optimization terminated successfully.\n",
" Current function value: 0.000000\n",
" Iterations: 20\n",
" Function evaluations: 27\n",
"\n",
"final x: \n",
"tensor([[1., 1.],\n",
" [1., 1.],\n",
" [1., 1.],\n",
" [1., 1.]])\n"
]
}
],
"source": [
"# Newton Exact\n",
"res = minimize(\n",
" rosen, x0, \n",
" method='newton-exact', \n",
" options=dict(line_search='strong-wolfe', tikhonov=1e-4),\n",
" max_iter=50,\n",
" disp=2\n",
")\n",
"print()\n",
"print('final x: \\n{}'.format(res.x))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
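The last cell above passes `tikhonov=1e-4` to the newton-exact solver. Tikhonov regularization damps the Newton system by solving (H + lam*I) d = -g instead of H d = -g, which keeps the step well-defined when the Hessian is ill-conditioned. Below is a minimal standalone NumPy sketch of a damped Newton iteration on the 2-D Rosenbrock function, using the standard closed-form derivatives; it is an illustration of the idea, not the library's actual implementation.

```python
import numpy as np

def rosen_grad(x):
    # gradient of f(x) = 100*(x1 - x0^2)^2 + (1 - x0)^2
    g0 = -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0])
    g1 = 200.0 * (x[1] - x[0]**2)
    return np.array([g0, g1])

def rosen_hess(x):
    h00 = 1200.0 * x[0]**2 - 400.0 * x[1] + 2.0
    h01 = -400.0 * x[0]
    return np.array([[h00, h01], [h01, 200.0]])

def damped_newton_step(x, lam=1e-4):
    g = rosen_grad(x)
    H = rosen_hess(x)
    # Tikhonov damping: add lam*I to the Hessian before solving
    d = np.linalg.solve(H + lam * np.eye(2), -g)
    return x + d

x = np.array([-1.2, 1.0])
for _ in range(20):
    x = damped_newton_step(x)
print(np.round(x, 4))  # converges to the minimum at [1., 1.]
```

At the minimum the gradient vanishes, so the damping term does not shift the fixed point; it only conditions the linear solve along the way.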
================================================
FILE: examples/scipy_benchmark.py
================================================
"""
A comparison of pytorch-minimize solvers to the analogous solvers from
scipy.optimize.
Pytorch-minimize uses autograd to compute 1st- and 2nd-order derivatives
implicitly, so derivative functions need not be provided or known.
In contrast, scipy.optimize requires that they be provided, or else it will
fall back on imprecise numerical approximations. For a fair comparison,
derivative functions are provided to scipy.optimize in this script. In
general, however, we will not have access to these functions, so the
applications of scipy.optimize are far more limited.
"""
import torch
from torchmin import minimize
from torchmin.benchmarks import rosen
from scipy import optimize
# Many scipy optimizers convert the data to double-precision, so
# we will use double precision in torch for a fair comparison
torch.set_default_dtype(torch.float64)
def print_header(title, num_breaks=1):
print('\n'*num_breaks + '='*50)
print(' '*20 + title)
print('='*50 + '\n')
def main():
torch.manual_seed(991)
x0 = torch.randn(100)
x0_np = x0.numpy()
print('\ninitial loss: %0.4f\n' % rosen(x0))
# ---- BFGS ----
print_header('BFGS')
print('-'*19 + ' pytorch ' + '-'*19)
res = minimize(rosen, x0, method='bfgs', tol=1e-5, disp=True)
print('\n' + '-'*20 + ' scipy ' + '-'*20)
res = optimize.minimize(
optimize.rosen, x0_np,
method='bfgs',
jac=optimize.rosen_der,
tol=1e-5,
options=dict(disp=True)
)
# ---- Newton CG ----
print_header('Newton-CG')
print('-'*19 + ' pytorch ' + '-'*19)
res = minimize(rosen, x0, method='newton-cg', tol=1e-5, disp=True)
print('\n' + '-'*20 + ' scipy ' + '-'*20)
res = optimize.minimize(
optimize.rosen, x0_np,
method='newton-cg',
jac=optimize.rosen_der,
hessp=optimize.rosen_hess_prod,
tol=1e-5,
options=dict(disp=True)
)
# ---- Newton Exact ----
# NOTE: Scipy does not have a precise analogue to "newton-exact," but it
# has something very close called "trust-exact." Like newton-exact,
# trust-exact also uses Cholesky factorization of the explicit Hessian
# matrix. However, whereas newton-exact first computes the Newton direction
# and then uses line search to determine a step size, trust-exact first
# specifies a step size boundary and then solves for the optimal Newton
# step within this boundary (a constrained optimization problem).
print_header('Newton-Exact')
print('-'*19 + ' pytorch ' + '-'*19)
res = minimize(rosen, x0, method='newton-exact', tol=1e-5, disp=True)
print('\n' + '-'*20 + ' scipy ' + '-'*20)
res = optimize.minimize(
optimize.rosen, x0_np,
method='trust-exact',
jac=optimize.rosen_der,
hess=optimize.rosen_hess,
options=dict(gtol=1e-5, disp=True)
)
print()
if __name__ == '__main__':
main()
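The docstring's point about numerical approximations can be seen directly in scipy itself: omitting `jac` forces finite-difference gradients, which cost roughly n extra function evaluations per gradient step. A small scipy-only sketch (illustrative; the exact evaluation counts will vary by scipy version):

```python
import numpy as np
from scipy import optimize

x0 = np.zeros(8)

# with the analytic gradient supplied
res_jac = optimize.minimize(optimize.rosen, x0, jac=optimize.rosen_der,
                            method='bfgs')

# without: scipy approximates the gradient by finite differences,
# and those extra evaluations are counted in nfev
res_fd = optimize.minimize(optimize.rosen, x0, method='bfgs')

print(res_jac.nfev, res_fd.nfev)  # finite differences need many more evals
```

This is exactly the cost that pytorch-minimize avoids by computing derivatives with autograd.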
================================================
FILE: examples/train_mnist_Minimizer.py
================================================
import argparse
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets
from torchmin import Minimizer
def MLPClassifier(input_size, hidden_sizes, num_classes):
layers = []
for i, hidden_size in enumerate(hidden_sizes):
layers.append(nn.Linear(input_size, hidden_size))
layers.append(nn.ReLU())
input_size = hidden_size
layers.append(nn.Linear(input_size, num_classes))
layers.append(nn.LogSoftmax(-1))
return nn.Sequential(*layers)
@torch.no_grad()
def evaluate(model):
train_output = model(X_train)
test_output = model(X_test)
train_loss = F.nll_loss(train_output, y_train)
test_loss = F.nll_loss(test_output, y_test)
print('Loss (cross-entropy):\n train: {:.4f} - test: {:.4f}'.format(train_loss, test_loss))
train_accuracy = (train_output.argmax(-1) == y_train).float().mean()
test_accuracy = (test_output.argmax(-1) == y_test).float().mean()
print('Accuracy:\n train: {:.4f} - test: {:.4f}'.format(train_accuracy, test_accuracy))
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--mnist_root', type=str, required=True,
help='root path for the MNIST dataset')
parser.add_argument('--method', type=str, default='newton-cg',
help='optimization method to use')
parser.add_argument('--device', type=str, default='cpu',
help='device to use for training')
parser.add_argument('--quiet', action='store_true',
help='whether to train in quiet mode (no loss printing)')
parser.add_argument('--plot_weight', action='store_true',
help='whether to plot the learned weights')
args = parser.parse_args()
device = torch.device(args.device)
# --------------------------------------------
# Load MNIST dataset
# --------------------------------------------
train_data = datasets.MNIST(args.mnist_root, train=True)
X_train = (train_data.data.float().view(-1, 784) / 255.).to(device)
y_train = train_data.targets.to(device)
test_data = datasets.MNIST(args.mnist_root, train=False)
X_test = (test_data.data.float().view(-1, 784) / 255.).to(device)
y_test = test_data.targets.to(device)
# --------------------------------------------
# Initialize model
# --------------------------------------------
mlp = MLPClassifier(784, hidden_sizes=[50], num_classes=10)
mlp = mlp.to(device)
print('-------- Initial evaluation ---------')
evaluate(mlp)
# --------------------------------------------
# Fit model with Minimizer
# --------------------------------------------
optimizer = Minimizer(mlp.parameters(),
method=args.method,
tol=1e-6,
max_iter=200,
disp=0 if args.quiet else 2)
def closure():
optimizer.zero_grad()
output = mlp(X_train)
loss = F.nll_loss(output, y_train)
# loss.backward() <-- do not call backward!
return loss
loss = optimizer.step(closure)
# --------------------------------------------
# Evaluate fitted model
# --------------------------------------------
print('-------- Final evaluation ---------')
evaluate(mlp)
if args.plot_weight:
weight = mlp[0].weight.data.cpu().view(-1, 28, 28)
vmin, vmax = weight.min(), weight.max()
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
axes = axes.ravel()
for i in range(len(axes)):
axes[i].matshow(weight[i], cmap='gray', vmin=0.5 * vmin, vmax=0.5 * vmax)
axes[i].set_xticks(())
axes[i].set_yticks(())
plt.show()
================================================
FILE: pyproject.toml
================================================
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "pytorch-minimize"
dynamic = ["version"]
description = "Newton and Quasi-Newton optimization with PyTorch"
readme = "README.md"
requires-python = ">=3.7"
license = {text = "MIT License"}
authors = [
{name = "Reuben Feinman", email = "reuben.feinman@nyu.edu"}
]
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
]
dependencies = [
"numpy>=1.18.0",
"scipy>=1.6",
"torch>=1.9.0",
]
[project.optional-dependencies]
dev = [
"pytest",
]
docs = [
"sphinx==3.5.3",
"jinja2<3.1",
"sphinx_rtd_theme==0.5.2",
"readthedocs-sphinx-search==0.3.2",
]
[project.urls]
Documentation = "https://pytorch-minimize.readthedocs.io"
Homepage = "https://github.com/rfeinman/pytorch-minimize"
[tool.setuptools]
include-package-data = false # Only include .py files
[tool.setuptools.dynamic]
version = {attr = "torchmin._version.__version__"}
[tool.setuptools.packages.find]
exclude = ["tests*", "docs*", "examples*", "tmp*"]
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = [
"-v",
"--strict-markers",
"--tb=short",
]
markers = [
"slow: marks tests as slow (deselect with '-m \"not slow\"')",
"cuda: marks tests that require CUDA",
]
================================================
FILE: tests/__init__.py
================================================
================================================
FILE: tests/conftest.py
================================================
"""Shared pytest fixtures for torchmin tests."""
import pytest
import torch
from torchmin.benchmarks import rosen
@pytest.fixture
def random_seed():
"""Set random seed for reproducibility."""
torch.manual_seed(42)
yield 42
# =============================================================================
# Objective Function Fixtures
# =============================================================================
# To add a new test problem, create a fixture that returns a dict with:
# - 'objective': callable, the objective function
# - 'x0': Tensor, initial point
# - 'solution': Tensor, known optimal solution
# - 'name': str, descriptive name for the problem
@pytest.fixture(scope='session')
def least_squares_problem():
"""
Generate a least squares problem for testing optimization algorithms.
Creates a linear regression problem: min ||Y - X @ B||^2
where X is N x D, Y is N x M, and B is D x M.
This is a session-scoped fixture, so the same problem instance is used
across all tests for consistency.
Returns
-------
dict
Dictionary containing:
- objective: callable, the objective function
- x0: Tensor, initial parameter values (zeros)
- solution: Tensor, the true solution
- X: Tensor, design matrix
- Y: Tensor, target values
"""
torch.manual_seed(42)
N, D, M = 100, 7, 5
X = torch.randn(N, D)
Y = torch.randn(N, M)
def objective(B):
return torch.sum((Y - X @ B) ** 2)
# target B
#trueB = torch.linalg.inv(X.T @ X) @ X.T @ Y
trueB = torch.linalg.lstsq(X, Y).solution # XB = Y (solve for B)
# initial B
B0 = torch.zeros(D, M)
return {
'objective': objective,
'x0': B0,
'solution': trueB,
'X': X,
'Y': Y,
'name': 'least_squares',
}
@pytest.fixture(scope='session')
def rosenbrock_problem():
"""Rosenbrock function (banana function)."""
torch.manual_seed(42)
D = 10
x0 = torch.zeros(D)
x_sol = torch.ones(D)
return {
'objective': rosen,
'x0': x0,
'solution': x_sol,
'name': 'rosenbrock',
}
# =============================================================================
# Other Fixtures
# =============================================================================
@pytest.fixture(params=['cpu', 'cuda'])
def device(request):
"""
Parametrize tests across CPU and CUDA devices.
Automatically skips CUDA tests if CUDA is not available.
"""
if request.param == 'cuda' and not torch.cuda.is_available():
pytest.skip('CUDA not available')
return torch.device(request.param)
================================================
FILE: tests/test_imports.py
================================================
"""Test that all public APIs are importable and accessible."""
import pytest
def test_import_main_package():
"""Test importing the main torchmin package."""
import torchmin
assert hasattr(torchmin, '__version__')
def test_import_core_functions():
"""Test importing core minimize functions."""
from torchmin import minimize, minimize_constr, Minimizer
def test_import_benchmarks():
"""Test importing benchmark functions."""
from torchmin.benchmarks import rosen
@pytest.mark.parametrize('method', [
'bfgs',
'l-bfgs',
'cg',
'newton-cg',
'newton-exact',
'trust-ncg',
# 'trust-krylov',
'trust-exact',
'dogleg',
])
def test_method_available(method):
"""Test that all advertised methods are available and callable."""
import torch
from torchmin import minimize
# Simple quadratic objective: f(x) = ||x||^2
x0 = torch.zeros(2)
result = minimize(lambda x: x.square().sum(), x0, method=method, max_iter=1)
assert result is not None
================================================
FILE: tests/torchmin/__init__.py
================================================
================================================
FILE: tests/torchmin/test_bounds.py
================================================
import pytest
import torch
from scipy.optimize import Bounds
from torchmin import minimize, minimize_constr
from torchmin.benchmarks import rosen
@pytest.mark.parametrize(
'method',
['l-bfgs-b', 'trust-constr'],
)
def test_equivalent_bounds(method):
x0 = torch.tensor([-1.0, 1.5])
def minimize_with_bounds(bounds):
return minimize_constr(
rosen,
x0,
method=method,
bounds=bounds,
tol=1e-6,
)
def assert_equivalent(src_result, tgt_result):
return torch.testing.assert_close(
src_result.x,
tgt_result.x,
rtol=1e-5,
atol=1e-3,
msg=f"Solution {src_result.x} not close to expected {tgt_result.x}"
)
result_0 = minimize_with_bounds(
bounds=(torch.tensor([-2.0, -2.0]), torch.tensor([2.0, 2.0]))
)
equivalent_bounds_to_test = [
([-2.0, -2.0], [2.0, 2.0]),
(-2.0, 2.0),
Bounds(-2.0, 2.0),
]
for bounds in equivalent_bounds_to_test:
result = minimize_with_bounds(bounds)
assert_equivalent(result, result_0)
print(f'Test passed with bounds: {bounds}')
def test_invalid_bounds():
x0 = torch.tensor([-1.0, 1.5])
invalid_bounds_to_test = [
(torch.tensor([-2.0]), torch.tensor([2.0, 2.0])),
(-2.0,),
torch.tensor([-2.0, -2.0, 2.0, 2.0]),
]
for bounds in invalid_bounds_to_test:
with pytest.raises(Exception):
result = minimize_constr(
rosen,
x0,
method='l-bfgs-b',
bounds=bounds,
)
# TODO: remove this block
if __name__ == '__main__':
test_equivalent_bounds(method='l-bfgs-b')
test_invalid_bounds()
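The equivalent bound specifications exercised above mirror scipy's own conventions, where scalar `lb`/`ub` values broadcast across all variables. A scipy-only sketch of the same idea, using a simple shifted quadratic whose unconstrained minimum lies outside the box (illustrative, not part of this test suite):

```python
import numpy as np
from scipy.optimize import Bounds, minimize

# scalar bounds broadcast to every variable: -2 <= x_i <= 2
b = Bounds(-2.0, 2.0)

# unconstrained minimum is at [3, 3], outside the box,
# so the solution should sit on the upper bound
res = minimize(lambda x: ((x - 3.0)**2).sum(), np.zeros(2),
               method='L-BFGS-B', bounds=b)
print(np.round(res.x, 4))  # clipped at the upper bound [2., 2.]
```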
================================================
FILE: tests/torchmin/test_minimize.py
================================================
"""
Test unconstrained minimization methods on various objective functions.
This module tests all unconstrained optimization methods provided by torchmin
on a variety of test problems. To add a new test problem:
1. Create a fixture in conftest.py (or here) following the standard format
2. Add the fixture name to the PROBLEMS list below
NOTE: The problem fixtures are defined in `conftest.py`
"""
import pytest
import torch
from torchmin import minimize
# All unconstrained optimization methods
ALL_METHODS = [
'bfgs',
'l-bfgs',
'cg',
'newton-cg',
'newton-exact',
'trust-ncg',
# 'trust-krylov', # TODO: fix trust-krylov solver and add this back
'trust-exact',
'dogleg',
]
# All test problems - add new problem fixture names here
PROBLEMS = [
'least_squares_problem',
'rosenbrock_problem',
]
# =============================================================================
# Fixtures
# =============================================================================
@pytest.fixture
def problem(request):
"""
Indirect fixture that routes to specific problem fixtures.
This allows parametrizing over multiple problem fixtures without
duplicating test code.
"""
return request.getfixturevalue(request.param)
# =============================================================================
# Tests
# =============================================================================
@pytest.mark.parametrize('problem', PROBLEMS, indirect=True)
@pytest.mark.parametrize('method', ALL_METHODS)
def test_minimize(method, problem):
"""Test minimization methods on various optimization problems."""
result = minimize(problem['objective'], problem['x0'], method=method)
# TODO: should we check result.success??
# assert result.success, (
# f"Optimization failed for method {method} on {problem['name']}: "
# f"{result.message}"
# )
torch.testing.assert_close(
result.x, problem['solution'],
rtol=1e-4, atol=1e-3,
msg=f"Solution incorrect for method {method} on {problem['name']}"
)
================================================
FILE: tests/torchmin/test_minimize_constr.py
================================================
"""
Test constrained minimization methods.
This module tests the minimize_constr function on various types of constraints,
including inactive constraints (that don't affect the solution) and active
constraints (that bind at the optimum).
"""
import pytest
import torch
from torchmin import minimize, minimize_constr
# from torchmin.constrained.trust_constr import _minimize_trust_constr as minimize_constr
from torchmin.benchmarks import rosen
# Test constants
RTOL = 1e-2
ATOL = 1e-2
MAX_ITER = 50
TOLERANCE = 1e-6 # Numerical tolerance for constraint satisfaction
# =============================================================================
# Fixtures
# =============================================================================
@pytest.fixture(scope='session')
def rosen_start():
"""Starting point for Rosenbrock optimization tests."""
return torch.tensor([1., 8.])
@pytest.fixture(scope='session')
def rosen_unconstrained_solution(rosen_start):
"""Compute the unconstrained Rosenbrock solution for comparison."""
result = minimize(
rosen,
rosen_start,
method='l-bfgs',
options=dict(line_search='strong-wolfe'),
max_iter=MAX_ITER,
disp=0
)
return result
# =============================================================================
# Constraint Functions
# =============================================================================
def sum_constraint(x):
"""Sum constraint: sum(x)."""
return x.sum()
def norm_constraint(x):
"""L2 norm squared constraint: ||x||^2."""
return x.square().sum()
# =============================================================================
# Tests
# =============================================================================
class TestUnconstrainedBaseline:
"""Test unconstrained optimization as a baseline."""
def test_rosen_unconstrained(self, rosen_start):
"""Test unconstrained Rosenbrock minimization."""
result = minimize(
rosen,
rosen_start,
method='l-bfgs',
options=dict(line_search='strong-wolfe'),
max_iter=MAX_ITER,
disp=0
)
assert result.success
class TestInactiveConstraints:
"""
Test constraints that are inactive (non-binding) at the optimum.
When the constraint is loose enough, the constrained solution should
match the unconstrained solution.
"""
@pytest.mark.parametrize('constraint_fun,constraint_name', [
(sum_constraint, 'sum'),
(norm_constraint, 'norm'),
])
def test_loose_constraints(
self,
rosen_start,
rosen_unconstrained_solution,
constraint_fun,
constraint_name
):
"""Test that loose constraints don't affect the solution."""
# Upper bound of 10 is loose enough to not affect the solution
result = minimize_constr(
rosen,
rosen_start,
method='trust-constr',
constr=dict(fun=constraint_fun, ub=10.),
max_iter=MAX_ITER,
disp=0
)
torch.testing.assert_close(
result.x,
rosen_unconstrained_solution.x,
rtol=RTOL,
atol=ATOL,
msg=f"Loose {constraint_name} constraint affected the solution"
)
class TestActiveConstraints:
"""
Test constraints that are active (binding) at the optimum.
When the constraint is tight, it should bind at the specified bound
and produce a different solution than the unconstrained case.
"""
@pytest.mark.parametrize('constraint_fun,ub', [
(sum_constraint, 1.),
(norm_constraint, 1.),
])
def test_tight_constraints(self, rosen_start, constraint_fun, ub):
"""Test that tight constraints bind at the specified bound."""
result = minimize_constr(
rosen,
rosen_start,
method='trust-constr',
constr=dict(fun=constraint_fun, ub=ub),
max_iter=MAX_ITER,
disp=0
)
# Verify the constraint is satisfied (with numerical tolerance)
constraint_value = constraint_fun(result.x)
assert constraint_value <= ub + TOLERANCE, (
f"Constraint violated: {constraint_value:.6f} > {ub}"
)
def test_frankwolfe_birkhoff_polytope():
n, d = 5, 10
X = torch.randn(n, d)
Y = torch.flipud(torch.eye(n)) @ X
def fun(P):
return torch.sum((X @ X.T @ P - P @ Y @ Y.T) ** 2)
init_P = torch.eye(n)
init_err = torch.sum((X - init_P @ Y) ** 2)
res = minimize_constr(
fun,
init_P,
method='frank-wolfe',
constr='birkhoff',
)
est_P = res.x
final_err = torch.sum((X - est_P @ Y) ** 2)
torch.testing.assert_close(est_P.sum(0), torch.ones(n))
torch.testing.assert_close(est_P.sum(1), torch.ones(n))
assert final_err < 0.01 * init_err
def test_frankwolfe_tracenorm():
dim = 5
init_X = torch.zeros((dim, dim))
eye = torch.eye(dim)
def fun(X):
return torch.sum((X - eye) ** 2)
res = minimize_constr(
fun,
init_X,
method='frank-wolfe',
constr='tracenorm',
options=dict(t=5.0),
)
est_X = res.x
torch.testing.assert_close(est_X, eye, rtol=1e-2, atol=1e-2)
res = minimize_constr(
fun,
init_X,
method='frank-wolfe',
constr='tracenorm',
options=dict(t=1.0),
)
est_X = res.x
torch.testing.assert_close(est_X, 0.2 * eye, rtol=1e-2, atol=1e-2)
def test_lbfgsb_simple_quadratic():
"""Test L-BFGS-B on a simple bounded quadratic problem.
Minimize: f(x) = (x1 - 2)^2 + (x2 - 1)^2
Subject to: 0 <= x1 <= 1.5, 0 <= x2 <= 2
The unconstrained minimum is at (2, 1), but x1 is constrained,
so the optimal solution should be at (1.5, 1).
"""
def fun(x):
return (x[0] - 2)**2 + (x[1] - 1)**2
x0 = torch.tensor([0.5, 0.5])
lb = torch.tensor([0.0, 0.0])
ub = torch.tensor([1.5, 2.0])
result = minimize_constr(
fun,
x0,
method='l-bfgs-b',
bounds=(lb, ub),
options=dict(gtol=1e-6, ftol=1e-9),
)
# Check if close to expected solution
expected_x = torch.tensor([1.5, 1.0])
expected_f = 0.25
torch.testing.assert_close(
result.x,
expected_x,
rtol=1e-5,
atol=1e-4,
msg=f"Solution {result.x} not close to expected {expected_x}"
)
assert abs(result.fun - expected_f) < 1e-4, \
f"Function value {result.fun} not close to expected {expected_f}"
def test_lbfgsb_rosenbrock():
"""Test L-BFGS-B on Rosenbrock function with bounds.
Minimize: f(x,y) = (1-x)^2 + 100(y-x^2)^2
Subject to: -2 <= x <= 2, -2 <= y <= 2
The unconstrained minimum is at (1, 1).
"""
x0 = torch.tensor([-1.0, 1.5])
lb = torch.tensor([-2.0, -2.0])
ub = torch.tensor([2.0, 2.0])
result = minimize_constr(
rosen,
x0,
method='l-bfgs-b',
bounds=(lb, ub),
options=dict(gtol=1e-6, ftol=1e-9, max_iter=100),
)
# Check if close to expected solution
expected_x = torch.tensor([1.0, 1.0])
torch.testing.assert_close(
result.x,
expected_x,
rtol=1e-5,
atol=1e-3,
msg=f"Solution {result.x} not close to expected {expected_x}"
)
assert result.fun < 1e-6, \
f"Function value {result.fun} not close to 0"
def test_lbfgsb_active_constraints():
"""Test L-BFGS-B with multiple active constraints.
Minimize: f(x) = sum(x_i^2)
Subject to: x_i >= 1 for all i
The solution should be all ones (on the boundary).
"""
def fun(x):
return (x**2).sum()
n = 5
x0 = torch.ones(n) * 2.0
lb = torch.ones(n)
ub = torch.ones(n) * 10.0
result = minimize_constr(
fun,
x0,
method='l-bfgs-b',
bounds=(lb, ub),
options=dict(gtol=1e-6, ftol=1e-9),
)
# Check if close to expected solution
expected_x = torch.ones(n)
expected_f = float(n)
torch.testing.assert_close(
result.x,
expected_x,
rtol=1e-5,
atol=1e-4,
msg=f"Solution {result.x} not close to expected {expected_x}"
)
assert abs(result.fun - expected_f) < 1e-4, \
f"Function value {result.fun} not close to expected {expected_f}"
================================================
FILE: torchmin/__init__.py
================================================
from ._version import __version__
from .minimize import minimize
from .minimize_constr import minimize_constr
from .lstsq import least_squares
from .optim import Minimizer, ScipyMinimizer
__all__ = ['minimize', 'minimize_constr', 'least_squares',
'Minimizer', 'ScipyMinimizer']
================================================
FILE: torchmin/_optimize.py
================================================
# **** Optimization Utilities ****
#
# This module contains general utilities for optimization such as
# `_status_message` and `OptimizeResult` (coming soon).
# standard status messages of optimizers (derived from SciPy)
_status_message = {
'success': 'Optimization terminated successfully.',
'maxfev': 'Maximum number of function evaluations has been exceeded.',
'maxiter': 'Maximum number of iterations has been exceeded.',
'pr_loss': 'Desired error not necessarily achieved due to precision loss.',
'nan': 'NaN result encountered.',
'out_of_bounds': 'The result is outside of the provided bounds.',
'callback_stop': 'Stopped by the user through the callback function.',
}
================================================
FILE: torchmin/_version.py
================================================
__version__ = "0.1.0"
================================================
FILE: torchmin/benchmarks.py
================================================
import torch
__all__ = ['rosen', 'rosen_der', 'rosen_hess', 'rosen_hess_prod']
# =============================
# Rosenbrock function
# =============================
def rosen(x, reduce=True):
val = 100. * (x[...,1:] - x[...,:-1]**2)**2 + (1 - x[...,:-1])**2
if reduce:
return val.sum()
else:
# don't reduce batch dimensions
return val.sum(-1)
def rosen_der(x):
xm = x[..., 1:-1]
xm_m1 = x[..., :-2]
xm_p1 = x[..., 2:]
der = torch.zeros_like(x)
der[..., 1:-1] = (200 * (xm - xm_m1**2) -
400 * (xm_p1 - xm**2) * xm - 2 * (1 - xm))
der[..., 0] = -400 * x[..., 0] * (x[..., 1] - x[..., 0]**2) - 2 * (1 - x[..., 0])
der[..., -1] = 200 * (x[..., -1] - x[..., -2]**2)
return der
def rosen_hess(x):
H = torch.diag_embed(-400*x[..., :-1], 1) - \
torch.diag_embed(400*x[..., :-1], -1)
diagonal = torch.zeros_like(x)
diagonal[..., 0] = 1200*x[..., 0].square() - 400*x[..., 1] + 2
diagonal[..., -1] = 200
diagonal[..., 1:-1] = 202 + 1200*x[..., 1:-1].square() - 400*x[..., 2:]
H.diagonal(dim1=-2, dim2=-1).add_(diagonal)
return H
def rosen_hess_prod(x, p):
Hp = torch.zeros_like(x)
Hp[..., 0] = (1200 * x[..., 0]**2 - 400 * x[..., 1] + 2) * p[..., 0] - \
400 * x[..., 0] * p[..., 1]
Hp[..., 1:-1] = (-400 * x[..., :-2] * p[..., :-2] +
(202 + 1200 * x[..., 1:-1]**2 - 400 * x[..., 2:]) * p[..., 1:-1] -
400 * x[..., 1:-1] * p[..., 2:])
Hp[..., -1] = -400 * x[..., -2] * p[..., -2] + 200*p[..., -1]
return Hp
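The closed-form derivatives above can be sanity-checked against finite differences. Below is a standalone NumPy sketch (illustrative, not part of the library) that mirrors `rosen` and `rosen_der` for the unbatched case and compares the analytic gradient to central differences:

```python
import numpy as np

def rosen_np(x):
    # same formula as rosen() above, unbatched
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

def rosen_der_np(x):
    # same formula as rosen_der() above, unbatched
    der = np.zeros_like(x)
    xm, xm_m1, xm_p1 = x[1:-1], x[:-2], x[2:]
    der[1:-1] = (200*(xm - xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1 - xm))
    der[0] = -400*x[0]*(x[1] - x[0]**2) - 2*(1 - x[0])
    der[-1] = 200*(x[-1] - x[-2]**2)
    return der

rng = np.random.default_rng(0)
x = rng.standard_normal(6)

# central finite differences along each coordinate
eps = 1e-6
fd = np.array([
    (rosen_np(x + eps*e) - rosen_np(x - eps*e)) / (2*eps)
    for e in np.eye(len(x))
])
print(np.max(np.abs(fd - rosen_der_np(x))))  # small; only FD error remains
```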
================================================
FILE: torchmin/bfgs.py
================================================
from abc import ABC, abstractmethod
import torch
from torch import Tensor
from scipy.optimize import OptimizeResult
from ._optimize import _status_message
from .function import ScalarFunction
from .line_search import strong_wolfe
class HessianUpdateStrategy(ABC):
def __init__(self):
self.n_updates = 0
@abstractmethod
def solve(self, grad):
pass
@abstractmethod
def _update(self, s, y, rho_inv):
pass
def update(self, s, y):
rho_inv = y.dot(s)
if rho_inv <= 1e-10:
# curvature is negative; do not update
return
self._update(s, y, rho_inv)
self.n_updates += 1
class L_BFGS(HessianUpdateStrategy):
def __init__(self, x, history_size=100):
super().__init__()
self.y = []
self.s = []
self.rho = []
self.H_diag = 1.
self.alpha = x.new_empty(history_size)
self.history_size = history_size
def solve(self, grad):
mem_size = len(self.y)
d = grad.neg()
for i in reversed(range(mem_size)):
self.alpha[i] = self.s[i].dot(d) * self.rho[i]
d.add_(self.y[i], alpha=-self.alpha[i])
d.mul_(self.H_diag)
for i in range(mem_size):
beta_i = self.y[i].dot(d) * self.rho[i]
d.add_(self.s[i], alpha=self.alpha[i] - beta_i)
return d
def _update(self, s, y, rho_inv):
if len(self.y) == self.history_size:
self.y.pop(0)
self.s.pop(0)
self.rho.pop(0)
self.y.append(y)
self.s.append(s)
self.rho.append(rho_inv.reciprocal())
self.H_diag = rho_inv / y.dot(y)
class BFGS(HessianUpdateStrategy):
def __init__(self, x, inverse=True):
super().__init__()
self.inverse = inverse
if inverse:
self.I = torch.eye(x.numel(), device=x.device, dtype=x.dtype)
self.H = self.I.clone()
else:
self.B = torch.eye(x.numel(), device=x.device, dtype=x.dtype)
def solve(self, grad):
if self.inverse:
return torch.matmul(self.H, grad.neg())
else:
return torch.cholesky_solve(grad.neg().unsqueeze(1),
torch.linalg.cholesky(self.B)).squeeze(1)
def _update(self, s, y, rho_inv):
rho = rho_inv.reciprocal()
if self.inverse:
if self.n_updates == 0:
self.H.mul_(rho_inv / y.dot(y))
R = torch.addr(self.I, s, y, alpha=-rho)
torch.addr(
torch.linalg.multi_dot((R, self.H, R.t())),
s, s, alpha=rho, out=self.H)
else:
if self.n_updates == 0:
self.B.mul_(rho * y.dot(y))
Bs = torch.mv(self.B, s)
self.B.addr_(y, y, alpha=rho)
self.B.addr_(Bs, Bs, alpha=-1./s.dot(Bs))
@torch.no_grad()
def _minimize_bfgs_core(
fun, x0, lr=1., low_mem=False, history_size=100, inv_hess=True,
max_iter=None, line_search='strong-wolfe', gtol=1e-5, xtol=1e-8,
gtd_tol=1e-10, normp=float('inf'), callback=None, disp=0,
return_all=False):
"""Minimize a multivariate function with BFGS or L-BFGS.
We choose from BFGS/L-BFGS with the `low_mem` argument.
Parameters
----------
fun : callable
Scalar objective function to minimize
x0 : Tensor
Initialization point
lr : float
Step size for parameter updates. If using line search, this will be
used as the initial step size for the search.
low_mem : bool
Whether to use L-BFGS, the "low memory" variant of the BFGS algorithm.
history_size : int
History size for L-BFGS hessian estimates. Ignored if `low_mem=False`.
inv_hess : bool
Whether to parameterize the inverse hessian vs. the hessian with BFGS.
Ignored if `low_mem=True` (L-BFGS always parameterizes the inverse).
max_iter : int, optional
Maximum number of iterations to perform. Defaults to
``200 * x0.numel()``.
line_search : str
Line search specifier. Currently the available options are
{'none', 'strong-wolfe'}.
gtol : float
Termination tolerance on 1st-order optimality (gradient norm).
xtol : float
Termination tolerance on function/parameter changes.
gtd_tol : float
Tolerance used to verify that the search direction is a *descent
direction*. The directional derivative `gtd` should be negative for
descent; this check ensures that `gtd < -gtd_tol` (sufficiently negative).
normp : Number or str
The norm type to use for termination conditions. Can be any value
supported by `torch.norm` p argument.
callback : callable, optional
Function to call after each iteration with the current parameter
state, e.g. ``callback(x)``.
disp : int or bool
Display (verbosity) level. Set to >0 to print status messages.
return_all : bool, optional
Set to True to return a list of the best solution at each of the
iterations.
Returns
-------
result : OptimizeResult
Result of the optimization routine.
"""
lr = float(lr)
disp = int(disp)
if max_iter is None:
max_iter = x0.numel() * 200
if low_mem and not inv_hess:
raise ValueError('inv_hess=False is not available for L-BFGS.')
# construct scalar objective function
sf = ScalarFunction(fun, x0.shape)
closure = sf.closure
if line_search == 'strong-wolfe':
dir_evaluate = sf.dir_evaluate
# compute initial f(x) and f'(x)
x = x0.detach().view(-1).clone(memory_format=torch.contiguous_format)
f, g, _, _ = closure(x)
if disp > 1:
print('initial fval: %0.4f' % f)
if return_all:
allvecs = [x]
# initial settings
if low_mem:
hess = L_BFGS(x, history_size)
else:
hess = BFGS(x, inv_hess)
d = g.neg()
t = min(1., g.norm(p=1).reciprocal()) * lr
n_iter = 0
# BFGS iterations
for n_iter in range(1, max_iter+1):
# ==================================
# compute Quasi-Newton direction
# ==================================
if n_iter > 1:
d = hess.solve(g)
# directional derivative
gtd = g.dot(d)
# check if directional derivative is below tolerance
if gtd > -gtd_tol:
warnflag = 4
msg = 'A non-descent direction was encountered.'
break
# ======================
# update parameter
# ======================
if line_search == 'none':
# no line search, move with fixed-step
x_new = x + d.mul(t)
f_new, g_new, _, _ = closure(x_new)
elif line_search == 'strong-wolfe':
# Determine step size via strong-wolfe line search
f_new, g_new, t, ls_evals = \
strong_wolfe(dir_evaluate, x, t, d, f, g, gtd)
x_new = x + d.mul(t)
else:
raise ValueError('invalid line_search option {}.'.format(line_search))
if disp > 1:
print('iter %3d - fval: %0.4f' % (n_iter, f_new))
if return_all:
allvecs.append(x_new)
if callback is not None:
if callback(x_new):
warnflag = 5
msg = _status_message['callback_stop']
break
# ================================
# update hessian approximation
# ================================
s = x_new.sub(x)
y = g_new.sub(g)
hess.update(s, y)
# =========================================
# check conditions and update buffers
# =========================================
# convergence by insufficient progress
if (s.norm(p=normp) <= xtol) | ((f_new - f).abs() <= xtol):
warnflag = 0
msg = _status_message['success']
break
# update state
f[...] = f_new
x.copy_(x_new)
g.copy_(g_new)
t = lr
# convergence by 1st-order optimality
if g.norm(p=normp) <= gtol:
warnflag = 0
msg = _status_message['success']
break
# precision loss; exit
if ~f.isfinite():
warnflag = 2
msg = _status_message['pr_loss']
break
else:
# if we get to the end, the maximum num. iterations was reached
warnflag = 1
msg = _status_message['maxiter']
if disp:
print(msg)
print(" Current function value: %f" % f)
print(" Iterations: %d" % n_iter)
print(" Function evaluations: %d" % sf.nfev)
result = OptimizeResult(fun=f, x=x.view_as(x0), grad=g.view_as(x0),
status=warnflag, success=(warnflag==0),
message=msg, nit=n_iter, nfev=sf.nfev)
if not low_mem:
if inv_hess:
result['hess_inv'] = hess.H.view(2 * x0.shape)
else:
result['hess'] = hess.B.view(2 * x0.shape)
if return_all:
result['allvecs'] = allvecs
return result
def _minimize_bfgs(
fun, x0, lr=1., inv_hess=True, max_iter=None,
line_search='strong-wolfe', gtol=1e-5, xtol=1e-8, gtd_tol=1e-10,
normp=float('inf'), callback=None, disp=0, return_all=False):
"""Minimize a multivariate function with BFGS
Parameters
----------
fun : callable
Scalar objective function to minimize.
x0 : Tensor
Initialization point.
lr : float
Step size for parameter updates. If using line search, this will be
used as the initial step size for the search.
inv_hess : bool
Whether to parameterize the inverse hessian vs. the hessian with BFGS.
max_iter : int, optional
Maximum number of iterations to perform. Defaults to
``200 * x0.numel()``.
line_search : str
Line search specifier. Currently the available options are
{'none', 'strong-wolfe'}.
gtol : float
Termination tolerance on 1st-order optimality (gradient norm).
xtol : float
Termination tolerance on function/parameter changes.
gtd_tol : float
Tolerance used to verify that the search direction is a *descent
direction*. The directional derivative `gtd` should be negative for
descent; this check requires `gtd <= -gtd_tol` (sufficiently negative).
normp : Number or str
The norm type to use for termination conditions. Can be any value
supported by :func:`torch.norm`.
callback : callable, optional
Function to call after each iteration with the current parameter
state, e.g. ``callback(x)``.
disp : int or bool
Display (verbosity) level. Set to >0 to print status messages.
return_all : bool, optional
Set to True to return a list of the best solution at each of the
iterations.
Returns
-------
result : OptimizeResult
Result of the optimization routine.
"""
return _minimize_bfgs_core(
fun, x0, lr, low_mem=False, inv_hess=inv_hess, max_iter=max_iter,
line_search=line_search, gtol=gtol, xtol=xtol, gtd_tol=gtd_tol,
normp=normp, callback=callback, disp=disp, return_all=return_all)
def _minimize_lbfgs(
fun, x0, lr=1., history_size=100, max_iter=None,
line_search='strong-wolfe', gtol=1e-5, xtol=1e-8, gtd_tol=1e-10,
normp=float('inf'), callback=None, disp=0, return_all=False):
"""Minimize a multivariate function with L-BFGS
Parameters
----------
fun : callable
Scalar objective function to minimize.
x0 : Tensor
Initialization point.
lr : float
Step size for parameter updates. If using line search, this will be
used as the initial step size for the search.
history_size : int
History size for L-BFGS hessian estimates.
max_iter : int, optional
Maximum number of iterations to perform. Defaults to
``200 * x0.numel()``.
line_search : str
Line search specifier. Currently the available options are
{'none', 'strong-wolfe'}.
gtol : float
Termination tolerance on 1st-order optimality (gradient norm).
xtol : float
Termination tolerance on function/parameter changes.
gtd_tol : float
Tolerance used to verify that the search direction is a *descent
direction*. The directional derivative `gtd` should be negative for
descent; this check requires `gtd <= -gtd_tol` (sufficiently negative).
normp : Number or str
The norm type to use for termination conditions. Can be any value
supported by :func:`torch.norm`.
callback : callable, optional
Function to call after each iteration with the current parameter
state, e.g. ``callback(x)``.
disp : int or bool
Display (verbosity) level. Set to >0 to print status messages.
return_all : bool, optional
Set to True to return a list of the best solution at each of the
iterations.
Returns
-------
result : OptimizeResult
Result of the optimization routine.
"""
return _minimize_bfgs_core(
fun, x0, lr, low_mem=True, history_size=history_size,
max_iter=max_iter, line_search=line_search, gtol=gtol, xtol=xtol,
gtd_tol=gtd_tol, normp=normp, callback=callback, disp=disp,
return_all=return_all)
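The quasi-Newton loop above (direction ``d = -H @ g``, a step along ``d``, then a rank-two update of ``H`` from the pair ``(s, y)``) can be sketched without any torch machinery. Below is a minimal dense-BFGS sketch on a two-variable quadratic with an exact line search; the names (``bfgs_quadratic`` etc.) are illustrative and not part of the torchmin API:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def bfgs_quadratic(A, b, x, iters=10):
    """Minimize f(x) = 0.5*x'Ax - b'x with BFGS and exact line search."""
    n = len(x)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # inverse-Hessian estimate
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]           # gradient A@x - b
    for _ in range(iters):
        if dot(g, g) < 1e-20:
            break
        d = [-di for di in matvec(H, g)]        # quasi-Newton direction
        Ad = matvec(A, d)
        t = -dot(g, d) / dot(d, Ad)             # exact step size for a quadratic
        s = [t * di for di in d]
        x = [xi + si for xi, si in zip(x, s)]
        g_new = [gi - bi for gi, bi in zip(matvec(A, x), b)]
        y = [gn - gi for gn, gi in zip(g_new, g)]
        # BFGS inverse update: H <- (I - r*s*y')H(I - r*y*s') + r*s*s'
        r = 1.0 / dot(y, s)
        Hy = matvec(H, y)
        c = r * r * dot(y, Hy) + r
        for i in range(n):
            for j in range(n):
                H[i][j] += c * s[i] * s[j] - r * (s[i] * Hy[j] + Hy[i] * s[j])
        g = g_new
    return x

# minimizer of 0.5*x'Ax - b'x is A^{-1} b = (1.0, -0.5)
xmin = bfgs_quadratic([[2.0, 0.0], [0.0, 4.0]], [2.0, -2.0], [0.0, 0.0])
```

With an exact line search on a quadratic, BFGS terminates in at most ``n`` iterations, which is why no line-search machinery is needed in this sketch.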
================================================
FILE: torchmin/cg.py
================================================
import torch
from scipy.optimize import OptimizeResult
from ._optimize import _status_message
from .function import ScalarFunction
from .line_search import strong_wolfe
dot = lambda u,v: torch.dot(u.view(-1), v.view(-1))
@torch.no_grad()
def _minimize_cg(fun, x0, max_iter=None, gtol=1e-5, normp=float('inf'),
callback=None, disp=0, return_all=False):
"""Minimize a scalar function of one or more variables using
nonlinear conjugate gradient.
The algorithm is described in Nocedal & Wright (2006) chapter 5.2.
Parameters
----------
fun : callable
Scalar objective function to minimize.
x0 : Tensor
Initialization point.
max_iter : int
Maximum number of iterations to perform. Defaults to
``200 * x0.numel()``.
gtol : float
Termination tolerance on 1st-order optimality (gradient norm).
normp : float
The norm type to use for termination conditions. Can be any value
supported by :func:`torch.norm`.
callback : callable, optional
Function to call after each iteration with the current parameter
state, e.g. ``callback(x)``
disp : int or bool
Display (verbosity) level. Set to >0 to print status messages.
return_all : bool, optional
Set to True to return a list of the best solution at each of the
iterations.
"""
disp = int(disp)
if max_iter is None:
max_iter = x0.numel() * 200
# Construct scalar objective function
sf = ScalarFunction(fun, x_shape=x0.shape)
closure = sf.closure
dir_evaluate = sf.dir_evaluate
# initialize
x = x0.detach().flatten()
f, g, _, _ = closure(x)
if disp > 1:
print('initial fval: %0.4f' % f)
if return_all:
allvecs = [x]
d = g.neg()
grad_norm = g.norm(p=normp)
old_f = f + g.norm() / 2 # Sets the initial step guess to dx ~ 1
for niter in range(1, max_iter + 1):
# delta/gtd
delta = dot(g, g)
gtd = dot(g, d)
# compute initial step guess based on (f - old_f) / gtd
t0 = torch.clamp(2.02 * (f - old_f) / gtd, max=1.0)
if t0 <= 0:
warnflag = 4
msg = 'Initial step guess is non-positive.'
break
old_f = f
# buffer to store next direction vector
cached_step = [None]
def polak_ribiere_powell_step(t, g_next):
y = g_next - g
beta = torch.clamp(dot(y, g_next) / delta, min=0)
d_next = -g_next + d.mul(beta)
torch.norm(g_next, p=normp, out=grad_norm)
return t, d_next
def descent_condition(t, f_next, g_next):
# Polak-Ribiere+ needs an explicit check of a sufficient
# descent condition, which is not guaranteed by strong Wolfe.
cached_step[:] = polak_ribiere_powell_step(t, g_next)
t, d_next = cached_step
# Accept step if it leads to convergence.
cond1 = grad_norm <= gtol
# Accept step if sufficient descent condition applies.
cond2 = dot(d_next, g_next) <= -0.01 * dot(g_next, g_next)
return cond1 | cond2
# Perform CG step
f, g, t, ls_evals = \
strong_wolfe(dir_evaluate, x, t0, d, f, g, gtd,
c2=0.4, extra_condition=descent_condition)
# Update x and then update d (in that order)
x = x + d.mul(t)
if t == cached_step[0]:
# Reuse already computed results if possible
d = cached_step[1]
else:
d = polak_ribiere_powell_step(t, g)[1]
if disp > 1:
print('iter %3d - fval: %0.4f' % (niter, f))
if return_all:
allvecs.append(x)
if callback is not None:
if callback(x):
warnflag = 5
msg = _status_message['callback_stop']
break
# check optimality
if grad_norm <= gtol:
warnflag = 0
msg = _status_message['success']
break
else:
# if we get to the end, the maximum iterations was reached
warnflag = 1
msg = _status_message['maxiter']
if disp:
print("%s%s" % ("Warning: " if warnflag != 0 else "", msg))
print(" Current function value: %f" % f)
print(" Iterations: %d" % niter)
print(" Function evaluations: %d" % sf.nfev)
result = OptimizeResult(fun=f, x=x.view_as(x0), grad=g.view_as(x0),
status=warnflag, success=(warnflag == 0),
message=msg, nit=niter, nfev=sf.nfev)
if return_all:
result['allvecs'] = allvecs
return result
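The Polak-Ribiere+ update used in ``polak_ribiere_powell_step`` (the beta coefficient clamped at zero, then ``d = -g_next + beta * d``) reduces to the following dependency-free sketch when the line search is exact, as it can be on a quadratic. Names here are illustrative, not part of the torchmin API:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def pr_plus_cg(A, b, x, iters=20, tol=1e-16):
    """Minimize f(x) = 0.5*x'Ax - b'x with Polak-Ribiere+ CG."""
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]
    d = [-gi for gi in g]
    for _ in range(iters):
        if dot(g, g) < tol:
            break
        Ad = matvec(A, d)
        t = -dot(g, d) / dot(d, Ad)   # exact minimizer along d (quadratic case)
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = [gi - bi for gi, bi in zip(matvec(A, x), b)]
        y = [gn - gi for gn, gi in zip(g_new, g)]
        # PR+ coefficient: clamped at zero, as in polak_ribiere_powell_step
        beta = max(0.0, dot(y, g_new) / dot(g, g))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# A is symmetric positive definite; the solution of A x = b is (0.2, 0.4)
xmin = pr_plus_cg([[3.0, 1.0], [1.0, 2.0]], [1.0, 1.0], [0.0, 0.0])
```

On a quadratic with exact line search this recovers linear CG and converges in at most ``n`` steps; for general objectives the strong-Wolfe search plus the explicit descent condition above play the role of the exact step.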
================================================
FILE: torchmin/constrained/frankwolfe.py
================================================
import warnings
import numpy as np
import torch
from numbers import Number
from scipy.optimize import (
linear_sum_assignment,
OptimizeResult,
)
from scipy.sparse.linalg import svds
from .._optimize import _status_message
from ..function import ScalarFunction
@torch.no_grad()
def _minimize_frankwolfe(
fun, x0, constr='tracenorm', t=None, max_iter=None, gtol=1e-5,
normp=float('inf'), callback=None, disp=0):
"""Minimize a scalar function of a matrix with Frank-Wolfe (a.k.a.
conditional gradient).
The algorithm is described in [1]_. The following constraints are currently
supported:
- Trace norm. The matrix is constrained to have trace norm (a.k.a.
nuclear norm) less than t.
- Birkhoff polytope. The matrix is constrained to lie in the Birkhoff
polytope, i.e. over the space of doubly stochastic matrices. Requires
a square matrix.
Parameters
----------
fun : callable
Scalar objective function to minimize.
x0 : Tensor
Initialization point.
constr : str
Which constraint to use. Must be either 'tracenorm' or 'birkhoff'.
t : float, optional
Maximum allowed trace norm. Required when using the 'tracenorm' constr;
otherwise unused.
max_iter : int, optional
Maximum number of iterations to perform.
gtol : float
Termination tolerance on 1st-order optimality (gradient norm).
normp : float
The norm type to use for termination conditions. Can be any value
supported by :func:`torch.norm`.
callback : callable, optional
Function to call after each iteration with the current parameter
state, e.g. ``callback(x)``.
disp : int or bool
Display (verbosity) level. Set to >0 to print status messages.
Returns
-------
result : OptimizeResult
Result of the optimization routine.
References
----------
.. [1] Martin Jaggi, "Revisiting Frank-Wolfe: Projection-Free Sparse Convex
Optimization", ICML 2013.
"""
assert isinstance(constr, str)
constr = constr.lower()
if constr in {'tracenorm', 'trace-norm'}:
assert t is not None, \
'Argument `t` is required when using the trace-norm constraint.'
assert isinstance(t, Number), \
f'Argument `t` must be a Number but got {type(t)}'
constr = 'tracenorm'
elif constr in {'birkhoff', 'birkhoff-polytope'}:
if t is not None:
warnings.warn(
    'Argument `t` was provided but is unused for the '
    'birkhoff-polytope constraint.'
)
constr = 'birkhoff'
else:
raise ValueError(f'Invalid constr: "{constr}".')
if x0.ndim != 2:
raise ValueError(
    'Optimization variable `x` must be a matrix to use Frank-Wolfe.'
)
m, n = x0.shape
if constr == 'birkhoff':
if m != n:
raise RuntimeError('Initial iterate must be a square matrix.')
ones = x0.new_ones(n)
if not (torch.allclose(x0.sum(0), ones) and torch.allclose(x0.sum(1), ones)):
    raise RuntimeError('Initial iterate must be doubly stochastic.')
disp = int(disp)
if max_iter is None:
max_iter = m * 100
# Construct scalar objective function
sf = ScalarFunction(fun, x_shape=x0.shape)
closure = sf.closure
dir_evaluate = sf.dir_evaluate
x = x0.detach()
for niter in range(max_iter):
f, g, _, _ = closure(x)
if constr == 'tracenorm':
u, s, vh = svds(g.detach().cpu().numpy(), k=1)
uvh = x.new_tensor(u @ vh)
alpha = 2. / (niter + 2.)
x = torch.lerp(x, -t * uvh, weight=alpha)
elif constr == 'birkhoff':
row_ind, col_ind = linear_sum_assignment(g.detach().cpu().numpy())
alpha = 2. / (niter + 2.)
x = (1 - alpha) * x
x[row_ind, col_ind] += alpha
else:
raise ValueError
if disp > 1:
print('iter %3d - fval: %0.4f' % (niter, f))
if callback is not None:
if callback(x):
warnflag = 5
msg = _status_message['callback_stop']
break
# check optimality
grad_norm = g.norm(p=normp)
if grad_norm <= gtol:
warnflag = 0
msg = _status_message['success']
break
else:
# if we get to the end, the maximum iterations was reached
warnflag = 1
msg = _status_message['maxiter']
if disp:
print("%s%s" % ("Warning: " if warnflag != 0 else "", msg))
print(" Current function value: %f" % f)
print(" Iterations: %d" % niter)
print(" Function evaluations: %d" % sf.nfev)
result = OptimizeResult(fun=f, x=x.view_as(x0), grad=g.view_as(x0),
status=warnflag, success=(warnflag == 0),
message=msg, nit=niter, nfev=sf.nfev)
return result
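The loop above alternates a linear minimization oracle (a top singular pair for the trace-norm ball, or an assignment problem for the Birkhoff polytope) with the classic ``2/(k+2)`` step. The same scheme over a simpler atomic set, the probability simplex, makes the oracle a one-line argmin over gradient entries. A dependency-free sketch with illustrative names:

```python
def frank_wolfe_simplex(grad, x, iters=2000):
    """Frank-Wolfe over the probability simplex {x >= 0, sum(x) = 1}."""
    for k in range(iters):
        g = grad(x)
        # linear minimization oracle: the vertex e_i minimizing <g, s>
        i = min(range(len(x)), key=lambda j: g[j])
        alpha = 2.0 / (k + 2.0)            # classic step-size schedule
        x = [(1.0 - alpha) * xj for xj in x]
        x[i] += alpha                      # convex combination stays feasible
    return x

# project p onto the simplex by minimizing 0.5*||x - p||^2; since p is
# already in the simplex, the optimum is p itself
p = [0.1, 0.7, 0.2]
grad = lambda x: [xj - pj for xj, pj in zip(x, p)]
x = frank_wolfe_simplex(grad, [1.0, 0.0, 0.0])
```

Note that every iterate is a convex combination of simplex vertices, so feasibility is maintained without any projection step; this is the defining property Frank-Wolfe trades for its O(1/k) rate.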
================================================
FILE: torchmin/constrained/lbfgsb.py
================================================
import torch
from torch import Tensor
from scipy.optimize import OptimizeResult
from .._optimize import _status_message
from ..function import ScalarFunction
class L_BFGS_B:
"""Limited-memory BFGS Hessian approximation for bounded optimization.
This class maintains the L-BFGS history and provides methods for
computing search directions within bound constraints.
"""
def __init__(self, x, history_size=10):
self.y = []
self.s = []
self.rho = []
self.theta = 1.0 # scaling factor
self.history_size = history_size
self.n_updates = 0
def solve(self, grad, x, lb, ub, theta=None):
"""Compute search direction: -H * grad, respecting bounds.
Parameters
----------
grad : Tensor
Current gradient
x : Tensor
Current point
lb : Tensor
Lower bounds
ub : Tensor
Upper bounds
theta : float, optional
Scaling factor. If None, uses stored value.
Returns
-------
d : Tensor
Search direction
"""
if theta is not None:
self.theta = theta
mem_size = len(self.y)
if mem_size == 0:
# No history yet, use scaled steepest descent
return grad.neg() * self.theta
# Two-loop recursion
alpha = torch.zeros(mem_size, dtype=grad.dtype, device=grad.device)
q = grad.clone()
# First loop: backward pass
for i in reversed(range(mem_size)):
alpha[i] = self.rho[i] * self.s[i].dot(q)
q.add_(self.y[i], alpha=-alpha[i])
# Apply initial Hessian approximation
r = q * self.theta
# Second loop: forward pass
for i in range(mem_size):
beta = self.rho[i] * self.y[i].dot(r)
r.add_(self.s[i], alpha=alpha[i] - beta)
return -r
def update(self, s, y):
"""Update the L-BFGS history with new correction pair.
Parameters
----------
s : Tensor
Step vector (x_new - x)
y : Tensor
Gradient difference (g_new - g)
"""
# Check curvature condition
sy = s.dot(y)
if sy <= 1e-10:
# Skip update if curvature is too small
return False
yy = y.dot(y)
# Update scaling factor (theta = s'y / y'y)
if yy > 1e-10:
self.theta = sy / yy
# Update history
if len(self.y) >= self.history_size:
self.y.pop(0)
self.s.pop(0)
self.rho.pop(0)
self.y.append(y.clone())
self.s.append(s.clone())
self.rho.append(1.0 / sy)
self.n_updates += 1
return True
def _project_bounds(x, lb, ub):
"""Project x onto the box [lb, ub]."""
return torch.clamp(x, lb, ub)
def _gradient_projection(x, g, lb, ub):
"""Compute the projected gradient.
Returns the projected gradient and identifies the active set.
"""
# Project gradient: if at bound and gradient points out, set to zero
g_proj = g.clone()
# At lower bound with positive gradient
at_lb = (x <= lb + 1e-10) & (g > 0)
g_proj[at_lb] = 0
# At upper bound with negative gradient
at_ub = (x >= ub - 1e-10) & (g < 0)
g_proj[at_ub] = 0
return g_proj
@torch.no_grad()
def _minimize_lbfgsb(
fun, x0, bounds=None, lr=1.0, history_size=10,
max_iter=None, gtol=1e-5, ftol=1e-9,
normp=float('inf'), callback=None, disp=0, return_all=False):
"""Minimize a scalar function with L-BFGS-B.
L-BFGS-B [1]_ is a limited-memory quasi-Newton method for bound-constrained
optimization. This routine implements a simplified projected variant of
L-BFGS that enforces box constraints via gradient projection and a feasible
backtracking line search.
Parameters
----------
fun : callable
Scalar objective function to minimize.
x0 : Tensor
Initialization point.
bounds : tuple of Tensor, optional
Bounds for variables as (lb, ub) where lb and ub are Tensors
of the same shape as x0. Use float('-inf') and float('inf')
for unbounded variables. If None, equivalent to unbounded.
lr : float
Step size for parameter updates (used as initial step in line search).
history_size : int
History size for L-BFGS Hessian estimates.
max_iter : int, optional
Maximum number of iterations. Defaults to 200 * x0.numel().
gtol : float
Termination tolerance on projected gradient norm.
ftol : float
Termination tolerance on function/parameter changes.
normp : Number or str
Norm type for termination conditions. Can be any value
supported by torch.norm.
callback : callable, optional
Function to call after each iteration: callback(x).
disp : int or bool
Display (verbosity) level. Set to >0 to print status messages.
return_all : bool, optional
Set to True to return a list of the best solution at each iteration.
Returns
-------
result : OptimizeResult
Result of the optimization routine.
References
----------
.. [1] Byrd, R. H., Lu, P., Nocedal, J., & Zhu, C. (1995). A limited memory
algorithm for bound constrained optimization. SIAM Journal on
Scientific Computing, 16(5), 1190-1208.
"""
lr = float(lr)
disp = int(disp)
if max_iter is None:
max_iter = x0.numel() * 200
# Set up bounds
x = x0.detach().view(-1).clone(memory_format=torch.contiguous_format)
n = x.numel()
if bounds is None:
lb = torch.full_like(x, float('-inf'))
ub = torch.full_like(x, float('inf'))
else:
lb, ub = bounds
lb = lb.detach().view(-1).clone(memory_format=torch.contiguous_format)
ub = ub.detach().view(-1).clone(memory_format=torch.contiguous_format)
if lb.shape != x.shape or ub.shape != x.shape:
raise ValueError('Bounds must have the same shape as x0')
# Project initial point onto feasible region
x = _project_bounds(x, lb, ub)
# Construct scalar objective function
sf = ScalarFunction(fun, x0.shape)
closure = sf.closure
# Compute initial function and gradient
f, g, _, _ = closure(x)
if disp > 1:
print('initial fval: %0.4f' % f)
print('initial gnorm: %0.4e' % g.norm(p=normp))
if return_all:
allvecs = [x.clone()]
# Initialize L-BFGS approximation
hess = L_BFGS_B(x, history_size)
# Main iteration loop
for n_iter in range(1, max_iter + 1):
# ========================================
# Check projected gradient convergence
# ========================================
g_proj = _gradient_projection(x, g, lb, ub)
g_proj_norm = g_proj.norm(p=normp)
if disp > 1:
print('iter %3d - fval: %0.4f, gnorm: %0.4e' %
(n_iter, f, g_proj_norm))
if g_proj_norm <= gtol:
warnflag = 0
msg = _status_message['success']
break
# ========================================
# Compute search direction
# ========================================
# Use projected gradient for search direction computation
# This ensures we only move in directions away from active constraints
d = hess.solve(g_proj, x, lb, ub)
# Ensure direction is a descent direction w.r.t. original gradient
gtd = g.dot(d)
if gtd > -1e-10:
# Not a descent direction, use projected steepest descent
d = -g_proj
gtd = g.dot(d)
# Find maximum step length that keeps us feasible. Vectorized over
# coordinates; infinite bounds yield infinite ratios, which min() ignores.
alpha_max = 1.0
pos = d > 1e-10   # moving toward the upper bound
if pos.any():
    alpha_max = min(alpha_max, ((ub[pos] - x[pos]) / d[pos]).min().item())
neg = d < -1e-10  # moving toward the lower bound
if neg.any():
    alpha_max = min(alpha_max, ((lb[neg] - x[neg]) / d[neg]).min().item())
# Take a step with line search on the feasible segment
# Simple backtracking: try alpha_max, 0.5*alpha_max, etc.
alpha = alpha_max
for _ in range(10):
x_new = x + alpha * d
x_new = _project_bounds(x_new, lb, ub)
f_new, g_new, _, _ = closure(x_new)
# Armijo condition (sufficient decrease)
if f_new <= f + 1e-4 * alpha * gtd:
break
alpha *= 0.5
else:
# Line search failed, take a small step
x_new = x + 0.01 * alpha_max * d
x_new = _project_bounds(x_new, lb, ub)
f_new, g_new, _, _ = closure(x_new)
if return_all:
allvecs.append(x_new.clone())
if callback is not None:
if callback(x_new.view_as(x0)):
warnflag = 5
msg = _status_message['callback_stop']
break
# ========================================
# Update Hessian approximation
# ========================================
s = x_new - x
y = g_new - g
# Update L-BFGS (skip if curvature condition fails)
hess.update(s, y)
# ========================================
# Check convergence by small progress
# ========================================
# Convergence by insufficient progress; confirmed below with the
# projected gradient before declaring success
if (s.norm(p=normp) <= ftol) and ((f_new - f).abs() <= ftol):
# Double check with projected gradient
g_proj_new = _gradient_projection(x_new, g_new, lb, ub)
if g_proj_new.norm(p=normp) <= gtol:
warnflag = 0
msg = _status_message['success']
break
# Check for precision loss
if not f_new.isfinite():
warnflag = 2
msg = _status_message['pr_loss']
break
# Update state
f = f_new
x = x_new
g = g_new
else:
# Maximum iterations reached
warnflag = 1
msg = _status_message['maxiter']
if disp:
print(msg)
print(" Current function value: %f" % f)
print(" Iterations: %d" % n_iter)
print(" Function evaluations: %d" % sf.nfev)
result = OptimizeResult(
fun=f,
x=x.view_as(x0),
grad=g.view_as(x0),
status=warnflag,
success=(warnflag == 0),
message=msg,
nit=n_iter,
nfev=sf.nfev
)
if return_all:
result['allvecs'] = [v.view_as(x0) for v in allvecs]
return result
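The two-loop recursion in ``L_BFGS_B.solve`` is characterized by the secant condition: the implicit inverse Hessian it represents maps each stored ``y_i`` exactly onto ``s_i``. A stand-alone sketch of the recursion, plus a check of that identity for a single correction pair (pure Python; names are illustrative):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def two_loop(grad, S, Y, theta=1.0):
    """Return the L-BFGS direction -H*grad via the two-loop recursion."""
    rho = [1.0 / dot(s, y) for s, y in zip(S, Y)]
    q = list(grad)
    alpha = [0.0] * len(S)
    for i in reversed(range(len(S))):      # backward pass
        alpha[i] = rho[i] * dot(S[i], q)
        q = [qj - alpha[i] * yj for qj, yj in zip(q, Y[i])]
    r = [theta * qj for qj in q]           # initial Hessian guess H0 = theta*I
    for i in range(len(S)):                # forward pass
        beta = rho[i] * dot(Y[i], r)
        r = [rj + (alpha[i] - beta) * sj for rj, sj in zip(r, S[i])]
    return [-rj for rj in r]

# with no history the direction is plain scaled steepest descent ...
d0 = two_loop([1.0, 2.0], [], [], theta=0.5)
# ... and after one update, H maps y exactly onto s (secant condition),
# so solving against grad = y must return -s regardless of theta
s, y = [0.5, -1.0, 2.0], [1.0, -0.25, 0.5]   # s'y = 1.75 > 0 (valid curvature)
d1 = two_loop(y, [s], [y], theta=7.3)
```

The curvature check ``s'y > 0`` in ``L_BFGS_B.update`` is what guarantees every ``rho[i]`` above is positive and the implicit ``H`` stays positive definite.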
================================================
FILE: torchmin/constrained/trust_constr.py
================================================
import warnings
import numbers
import torch
import numpy as np
from scipy.optimize import minimize, Bounds, NonlinearConstraint
from scipy.sparse.linalg import LinearOperator
_constr_keys = {'fun', 'lb', 'ub', 'jac', 'hess', 'hessp', 'keep_feasible'}
_bounds_keys = {'lb', 'ub', 'keep_feasible'}
def _build_obj(f, x0):
numel = x0.numel()
def to_tensor(x):
return torch.tensor(x, dtype=x0.dtype, device=x0.device).view_as(x0)
def f_with_jac(x):
x = to_tensor(x).requires_grad_(True)
with torch.enable_grad():
fval = f(x)
grad, = torch.autograd.grad(fval, x)
return fval.detach().cpu().numpy(), grad.view(-1).cpu().numpy()
def f_hess(x):
x = to_tensor(x).requires_grad_(True)
with torch.enable_grad():
fval = f(x)
grad, = torch.autograd.grad(fval, x, create_graph=True)
def matvec(p):
p = to_tensor(p)
hvp, = torch.autograd.grad(grad, x, p, retain_graph=True)
return hvp.view(-1).cpu().numpy()
return LinearOperator((numel, numel), matvec=matvec)
return f_with_jac, f_hess
def _build_constr(constr, x0):
assert isinstance(constr, dict)
assert set(constr.keys()).issubset(_constr_keys)
assert 'fun' in constr
assert 'lb' in constr or 'ub' in constr
if 'lb' not in constr:
constr['lb'] = -np.inf
if 'ub' not in constr:
constr['ub'] = np.inf
f_ = constr['fun']
numel = x0.numel()
def to_tensor(x):
return torch.tensor(x, dtype=x0.dtype, device=x0.device).view_as(x0)
def f(x):
x = to_tensor(x)
return f_(x).cpu().numpy()
def f_jac(x):
x = to_tensor(x)
if 'jac' in constr:
grad = constr['jac'](x)
else:
x.requires_grad_(True)
with torch.enable_grad():
grad, = torch.autograd.grad(f_(x), x)
return grad.view(-1).cpu().numpy()
def f_hess(x, v):
x = to_tensor(x)
if 'hess' in constr:
hess = constr['hess'](x)
return v[0] * hess.view(numel, numel).cpu().numpy()
elif 'hessp' in constr:
def matvec(p):
p = to_tensor(p)
hvp = constr['hessp'](x, p)
return v[0] * hvp.view(-1).cpu().numpy()
return LinearOperator((numel, numel), matvec=matvec)
else:
x.requires_grad_(True)
with torch.enable_grad():
if 'jac' in constr:
grad = constr['jac'](x)
else:
grad, = torch.autograd.grad(f_(x), x, create_graph=True)
def matvec(p):
p = to_tensor(p)
if grad.grad_fn is None:
# If grad_fn is None, then grad is constant wrt x, and hess is 0.
hvp = torch.zeros_like(grad)
else:
hvp, = torch.autograd.grad(grad, x, p, retain_graph=True)
return v[0] * hvp.view(-1).cpu().numpy()
return LinearOperator((numel, numel), matvec=matvec)
return NonlinearConstraint(
fun=f, lb=constr['lb'], ub=constr['ub'],
jac=f_jac, hess=f_hess,
keep_feasible=constr.get('keep_feasible', False))
def _check_bound(val, x0):
if isinstance(val, numbers.Number):
return np.full(x0.numel(), val)
elif isinstance(val, torch.Tensor):
assert val.numel() == x0.numel()
return val.detach().cpu().numpy().flatten()
elif isinstance(val, np.ndarray):
assert val.size == x0.numel()
return val.flatten()
else:
raise ValueError('Bound value has unrecognized format.')
def _build_bounds(bounds, x0):
assert isinstance(bounds, dict)
assert set(bounds.keys()).issubset(_bounds_keys)
assert 'lb' in bounds or 'ub' in bounds
lb = _check_bound(bounds.get('lb', -np.inf), x0)
ub = _check_bound(bounds.get('ub', np.inf), x0)
keep_feasible = bounds.get('keep_feasible', False)
return Bounds(lb, ub, keep_feasible)
@torch.no_grad()
def _minimize_trust_constr(
f, x0, constr=None, bounds=None, max_iter=None, tol=None, callback=None,
disp=0, **kwargs):
"""Minimize a scalar function of one or more variables subject to
bounds and/or constraints.
.. note::
This is a wrapper for SciPy's
`'trust-constr' <https://docs.scipy.org/doc/scipy/reference/optimize.minimize-trustconstr.html>`_
method. It uses autograd behind the scenes to build Jacobian & Hessian
callables before invoking scipy. Inputs and objectives should use
PyTorch tensors like other routines. CUDA is supported; however,
data will be transferred back-and-forth between GPU/CPU.
Parameters
----------
f : callable
Scalar objective function to minimize.
x0 : Tensor
Initialization point.
constr : dict, optional
Constraint specifications. Should be a dictionary with the
following fields:
* fun (callable) - Constraint function
* lb (Tensor or float, optional) - Constraint lower bounds
* ub (Tensor or float, optional) - Constraint upper bounds
One of either `lb` or `ub` must be provided. When `lb` == `ub` it is
interpreted as an equality constraint.
bounds : dict, optional
Bounds on variables. Should be a dictionary with at least one
of the following fields:
* lb (Tensor or float) - Lower bounds
* ub (Tensor or float) - Upper bounds
Bounds of `-inf`/`inf` are interpreted as no bound. When `lb` == `ub`
it is interpreted as an equality constraint.
max_iter : int, optional
Maximum number of iterations to perform. If unspecified, this will
be set to the default of the selected method.
tol : float, optional
Tolerance for termination. For detailed control, use solver-specific
options.
callback : callable, optional
Function to call after each iteration with the current parameter
state, e.g. ``callback(x)``.
disp : int
Level of algorithm's verbosity:
* 0 : work silently (default).
* 1 : display a termination report.
* 2 : display progress during iterations.
* 3 : display progress during iterations (more complete report).
**kwargs
Additional keyword arguments passed to SciPy's trust-constr solver.
See options `here <https://docs.scipy.org/doc/scipy/reference/optimize.minimize-trustconstr.html>`_.
Returns
-------
result : OptimizeResult
Result of the optimization routine.
"""
if max_iter is None:
max_iter = 1000
x0 = x0.detach()
if x0.is_cuda:
warnings.warn('GPU is not recommended for trust-constr. '
'Data will be moved back-and-forth from CPU.')
# handle callbacks
if callback is not None:
callback_ = callback
def callback(x, state):
# x = state.x
x = x0.new_tensor(x).view_as(x0)
return callback_(x)
# handle bounds
if bounds is not None:
bounds = _build_bounds(bounds, x0)
# build objective function (and hessian)
f_with_jac, f_hess = _build_obj(f, x0)
# build constraints
if constr is not None:
constraints = [_build_constr(constr, x0)]
else:
constraints = []
# optimize
x0_np = x0.cpu().numpy().flatten().copy()
result = minimize(
f_with_jac, x0_np, method='trust-constr', jac=True,
hess=f_hess, callback=callback, tol=tol,
bounds=bounds,
constraints=constraints,
options=dict(verbose=int(disp), maxiter=max_iter, **kwargs)
)
# convert the important things to torch tensors
for key in ['fun', 'grad', 'x']:
result[key] = torch.tensor(result[key], dtype=x0.dtype, device=x0.device)
result['x'] = result['x'].view_as(x0)
return result
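The ``_check_bound`` helper above normalizes three input formats (scalar, tensor, ndarray) to a flat array of length ``x0.numel()``. Its broadcasting convention can be illustrated with a dependency-free mirror (hypothetical name, stdlib only; the real helper also accepts tensors and ndarrays):

```python
def check_bound(val, numel):
    """Mirror of _check_bound's convention: a scalar broadcasts to every
    variable; a sequence must match the variable count exactly."""
    if isinstance(val, (int, float)):
        return [float(val)] * numel
    vals = [float(v) for v in val]
    if len(vals) != numel:
        raise ValueError('Bound value has unrecognized format.')
    return vals
```

So a single float passed as ``lb`` in the ``bounds`` dict applies to every variable, which is how ``_build_bounds`` turns ``{'lb': 0.0}`` into an elementwise non-negativity constraint.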
================================================
FILE: torchmin/function.py
================================================
from typing import List, Optional
from torch import Tensor
from collections import namedtuple
import torch
import torch.autograd as autograd
from torch._vmap_internals import _vmap
from .optim.minimizer import Minimizer
__all__ = ['ScalarFunction', 'VectorFunction']
# scalar function result (value)
sf_value = namedtuple('sf_value', ['f', 'grad', 'hessp', 'hess'])
# directional evaluate result
de_value = namedtuple('de_value', ['f', 'grad'])
# vector function result (value)
vf_value = namedtuple('vf_value', ['f', 'jacp', 'jac'])
@torch.jit.script
class JacobianLinearOperator(object):
def __init__(self,
x: Tensor,
f: Tensor,
gf: Optional[Tensor] = None,
gx: Optional[Tensor] = None,
symmetric: bool = False) -> None:
self.x = x
self.f = f
self.gf = gf
self.gx = gx
self.symmetric = symmetric
# tensor-like properties
self.shape = (f.numel(), x.numel())
self.dtype = x.dtype
self.device = x.device
def mv(self, v: Tensor) -> Tensor:
if self.symmetric:
return self.rmv(v)
assert v.shape == self.x.shape
gx, gf = self.gx, self.gf
assert (gx is not None) and (gf is not None)
outputs: List[Tensor] = [gx]
inputs: List[Tensor] = [gf]
grad_outputs: List[Optional[Tensor]] = [v]
jvp = autograd.grad(outputs, inputs, grad_outputs, retain_graph=True)[0]
if jvp is None:
    raise RuntimeError('jacobian-vector product is disconnected from the graph.')
return jvp
def rmv(self, v: Tensor) -> Tensor:
assert v.shape == self.f.shape
outputs: List[Tensor] = [self.f]
inputs: List[Tensor] = [self.x]
grad_outputs: List[Optional[Tensor]] = [v]
vjp = autograd.grad(outputs, inputs, grad_outputs, retain_graph=True)[0]
if vjp is None:
    raise RuntimeError('vector-jacobian product is disconnected from the graph.')
return vjp
def jacobian_linear_operator(x, f, symmetric=False):
if symmetric:
# Use vector-jacobian product (more efficient)
gf = gx = None
else:
# Apply the "double backwards" trick to get true
# jacobian-vector product
with torch.enable_grad():
gf = torch.zeros_like(f, requires_grad=True)
gx = autograd.grad(f, x, gf, create_graph=True)[0]
return JacobianLinearOperator(x, f, gf, gx, symmetric)
class ScalarFunction(object):
"""Scalar-valued objective function with autograd backend.
This class provides a general-purpose objective wrapper which will
compute first- and second-order derivatives via autograd as specified
by the parameters of __init__.
"""
def __new__(cls, fun, x_shape, hessp=False, hess=False, twice_diffable=True):
if isinstance(fun, Minimizer):
assert fun._hessp == hessp
assert fun._hess == hess
return fun
return super(ScalarFunction, cls).__new__(cls)
def __init__(self, fun, x_shape, hessp=False, hess=False, twice_diffable=True):
self._fun = fun
self._x_shape = x_shape
self._hessp = hessp
self._hess = hess
self._I = None
self._twice_diffable = twice_diffable
self.nfev = 0
def fun(self, x):
if x.shape != self._x_shape:
x = x.view(self._x_shape)
f = self._fun(x)
if f.numel() != 1:
raise RuntimeError('ScalarFunction was supplied a function '
'that does not return scalar outputs.')
self.nfev += 1
return f
def closure(self, x):
"""Evaluate the function, gradient, and hessian/hessian-product
This method represents the core function call. It is used for
computing newton/quasi newton directions, etc.
"""
x = x.detach().requires_grad_(True)
with torch.enable_grad():
f = self.fun(x)
grad = autograd.grad(f, x, create_graph=self._hessp or self._hess)[0]
if (self._hessp or self._hess) and grad.grad_fn is None:
raise RuntimeError('A 2nd-order derivative was requested but '
'the objective is not twice-differentiable.')
hessp = None
hess = None
if self._hessp:
hessp = jacobian_linear_operator(x, grad, symmetric=self._twice_diffable)
if self._hess:
if self._I is None:
self._I = torch.eye(x.numel(), dtype=x.dtype, device=x.device)
hvp = lambda v: autograd.grad(grad, x, v, retain_graph=True)[0]
hess = _vmap(hvp)(self._I)
return sf_value(f=f.detach(), grad=grad.detach(), hessp=hessp, hess=hess)
def dir_evaluate(self, x, t, d):
"""Evaluate a direction and step size.
We define a separate "directional evaluate" function to be used
for strong-wolfe line search. Only the function value and gradient
are needed for this use case, so we avoid computational overhead.
"""
x = x + d.mul(t)
x = x.detach().requires_grad_(True)
with torch.enable_grad():
f = self.fun(x)
grad = autograd.grad(f, x)[0]
return de_value(f=float(f), grad=grad)
class VectorFunction(object):
"""Vector-valued objective function with autograd backend."""
def __init__(self, fun, x_shape, jacp=False, jac=False):
self._fun = fun
self._x_shape = x_shape
self._jacp = jacp
self._jac = jac
self._I = None
self.nfev = 0
def fun(self, x):
if x.shape != self._x_shape:
x = x.view(self._x_shape)
f = self._fun(x)
if f.dim() == 0:
raise RuntimeError('VectorFunction expected vector outputs but '
'received a scalar.')
elif f.dim() > 1:
f = f.view(-1)
self.nfev += 1
return f
def closure(self, x):
x = x.detach().requires_grad_(True)
with torch.enable_grad():
f = self.fun(x)
jacp = None
jac = None
if self._jacp:
jacp = jacobian_linear_operator(x, f)
if self._jac:
if self._I is None:
self._I = torch.eye(f.numel(), dtype=x.dtype, device=x.device)
vjp = lambda v: autograd.grad(f, x, v, retain_graph=True)[0]
jac = _vmap(vjp)(self._I)
return vf_value(f=f.detach(), jacp=jacp, jac=jac)
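The `closure` above materializes a dense Hessian by mapping Hessian-vector products over the columns of the identity. A minimal self-contained sketch of the same idea, using a plain loop in place of `torch._vmap`; the `full_hessian` helper and the quadratic objective are hypothetical, for illustration only:

```python
import torch

def full_hessian(fun, x):
    # One Hessian-vector product per identity column, stacked into a matrix.
    x = x.detach().requires_grad_(True)
    f = fun(x)
    (grad,) = torch.autograd.grad(f, x, create_graph=True)
    eye = torch.eye(x.numel(), dtype=x.dtype)
    cols = [torch.autograd.grad(grad, x, v, retain_graph=True)[0] for v in eye]
    return torch.stack(cols)

A = torch.tensor([[2.0, 1.0], [1.0, 3.0]])
H = full_hessian(lambda x: 0.5 * x @ A @ x, torch.tensor([1.0, -1.0]))
# for the symmetric quadratic 0.5 * x.T A x, the Hessian is A itself
```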
================================================
FILE: torchmin/line_search.py
================================================
import warnings
import torch
from torch.optim.lbfgs import _strong_wolfe, _cubic_interpolate
from scipy.optimize import minimize_scalar
__all__ = ['strong_wolfe', 'brent', 'backtracking']
def _strong_wolfe_extra(
obj_func, x, t, d, f, g, gtd, c1=1e-4, c2=0.9,
tolerance_change=1e-9, max_ls=25, extra_condition=None):
"""A modified variant of pytorch's strong-wolfe line search that supports
an "extra_condition" argument (callable).
This is required for methods such as Conjugate Gradient (polak-ribiere)
where the strong wolfe conditions do not guarantee that we have a
descent direction.
Code borrowed from pytorch::
Copyright (c) 2016 Facebook, Inc.
All rights reserved.
"""
# ported from https://github.com/torch/optim/blob/master/lswolfe.lua
if extra_condition is None:
extra_condition = lambda *args: True
d_norm = d.abs().max()
g = g.clone(memory_format=torch.contiguous_format)
# evaluate objective and gradient using initial step
f_new, g_new = obj_func(x, t, d)
ls_func_evals = 1
gtd_new = g_new.dot(d)
# bracket an interval containing a point satisfying the Wolfe criteria
t_prev, f_prev, g_prev, gtd_prev = 0, f, g, gtd
done = False
ls_iter = 0
while ls_iter < max_ls:
# check conditions
if f_new > (f + c1 * t * gtd) or (ls_iter > 1 and f_new >= f_prev):
bracket = [t_prev, t]
bracket_f = [f_prev, f_new]
bracket_g = [g_prev, g_new.clone(memory_format=torch.contiguous_format)]
bracket_gtd = [gtd_prev, gtd_new]
break
if abs(gtd_new) <= -c2 * gtd and extra_condition(t, f_new, g_new):
bracket = [t]
bracket_f = [f_new]
bracket_g = [g_new]
done = True
break
if gtd_new >= 0:
bracket = [t_prev, t]
bracket_f = [f_prev, f_new]
bracket_g = [g_prev, g_new.clone(memory_format=torch.contiguous_format)]
bracket_gtd = [gtd_prev, gtd_new]
break
# interpolate
min_step = t + 0.01 * (t - t_prev)
max_step = t * 10
tmp = t
t = _cubic_interpolate(
t_prev,
f_prev,
gtd_prev,
t,
f_new,
gtd_new,
bounds=(min_step, max_step))
# next step
t_prev = tmp
f_prev = f_new
g_prev = g_new.clone(memory_format=torch.contiguous_format)
gtd_prev = gtd_new
f_new, g_new = obj_func(x, t, d)
ls_func_evals += 1
gtd_new = g_new.dot(d)
ls_iter += 1
# reached max number of iterations?
if ls_iter == max_ls:
bracket = [0, t]
bracket_f = [f, f_new]
bracket_g = [g, g_new]
# zoom phase: we now have a point satisfying the criteria, or
# a bracket around it. We refine the bracket until we find the
# exact point satisfying the criteria
insuf_progress = False
# find high and low points in bracket
low_pos, high_pos = (0, 1) if bracket_f[0] <= bracket_f[-1] else (1, 0)
while not done and ls_iter < max_ls:
# line-search bracket is so small
if abs(bracket[1] - bracket[0]) * d_norm < tolerance_change:
break
# compute new trial value
t = _cubic_interpolate(bracket[0], bracket_f[0], bracket_gtd[0],
bracket[1], bracket_f[1], bracket_gtd[1])
# test that we are making sufficient progress:
# in case `t` is so close to boundary, we mark that we are making
# insufficient progress, and if
# + we have made insufficient progress in the last step, or
# + `t` is at one of the boundary,
# we will move `t` to a position which is `0.1 * len(bracket)`
# away from the nearest boundary point.
eps = 0.1 * (max(bracket) - min(bracket))
if min(max(bracket) - t, t - min(bracket)) < eps:
# interpolation close to boundary
if insuf_progress or t >= max(bracket) or t <= min(bracket):
# evaluate at 0.1 away from boundary
if abs(t - max(bracket)) < abs(t - min(bracket)):
t = max(bracket) - eps
else:
t = min(bracket) + eps
insuf_progress = False
else:
insuf_progress = True
else:
insuf_progress = False
# Evaluate new point
f_new, g_new = obj_func(x, t, d)
ls_func_evals += 1
gtd_new = g_new.dot(d)
ls_iter += 1
if f_new > (f + c1 * t * gtd) or f_new >= bracket_f[low_pos]:
# Armijo condition not satisfied or not lower than lowest point
bracket[high_pos] = t
bracket_f[high_pos] = f_new
bracket_g[high_pos] = g_new.clone(memory_format=torch.contiguous_format)
bracket_gtd[high_pos] = gtd_new
low_pos, high_pos = (0, 1) if bracket_f[0] <= bracket_f[1] else (1, 0)
else:
if abs(gtd_new) <= -c2 * gtd and extra_condition(t, f_new, g_new):
# Wolfe conditions satisfied
done = True
elif gtd_new * (bracket[high_pos] - bracket[low_pos]) >= 0:
# old high becomes new low
bracket[high_pos] = bracket[low_pos]
bracket_f[high_pos] = bracket_f[low_pos]
bracket_g[high_pos] = bracket_g[low_pos]
bracket_gtd[high_pos] = bracket_gtd[low_pos]
# new point becomes new low
bracket[low_pos] = t
bracket_f[low_pos] = f_new
bracket_g[low_pos] = g_new.clone(memory_format=torch.contiguous_format)
bracket_gtd[low_pos] = gtd_new
# return stuff
t = bracket[low_pos]
f_new = bracket_f[low_pos]
g_new = bracket_g[low_pos]
return f_new, g_new, t, ls_func_evals
def strong_wolfe(fun, x, t, d, f, g, gtd=None, **kwargs):
"""
Expects `fun` to take arguments {x, t, d} and return {f(x1), f'(x1)},
where x1 is the new location after taking a step from x in direction d
with step size t.
"""
if gtd is None:
gtd = g.mul(d).sum()
# use python floats for scalars as per torch.optim.lbfgs
f, t = float(f), float(t)
if 'extra_condition' in kwargs:
f, g, t, ls_nevals = _strong_wolfe_extra(
fun, x.view(-1), t, d.view(-1), f, g.view(-1), gtd, **kwargs)
else:
# in theory we shouldn't need to use pytorch's native _strong_wolfe,
# since the custom implementation above is equivalent when
# extra_condition is None. But we keep it in case upstream makes any
# changes.
f, g, t, ls_nevals = _strong_wolfe(
fun, x.view(-1), t, d.view(-1), f, g.view(-1), gtd, **kwargs)
# convert back to torch scalar
f = torch.as_tensor(f, dtype=x.dtype, device=x.device)
return f, g.view_as(x), t, ls_nevals
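As a quick self-contained illustration of what these routines enforce, here is a 1-D check of the strong Wolfe conditions (sufficient decrease plus curvature); the helper name and quadratic objective are hypothetical, not part of the library:

```python
def satisfies_strong_wolfe(f, fprime, x, d, t, c1=1e-4, c2=0.9):
    fx = f(x)
    gtd = fprime(x) * d  # directional derivative at x (negative for descent)
    # Armijo (sufficient decrease) condition
    armijo = f(x + t * d) <= fx + c1 * t * gtd
    # curvature condition: |f'(x + t*d) . d| <= -c2 * gtd
    curvature = abs(fprime(x + t * d) * d) <= -c2 * gtd
    return armijo and curvature

# f(x) = x^2, stepping from x=1 in the descent direction d=-1
ok = satisfies_strong_wolfe(lambda x: x * x, lambda x: 2 * x, 1.0, -1.0, t=1.0)
```

A tiny step such as `t=1e-8` passes Armijo but fails the curvature test, which is exactly why the curvature condition is needed to rule out vanishing steps.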
def brent(fun, x, d, bounds=(0,10)):
"""
Expects `fun` to take arguments {x} and return {f(x)}
"""
def line_obj(t):
return float(fun(x + t * d))
res = minimize_scalar(line_obj, bounds=bounds, method='bounded')
return res.x
def backtracking(fun, x, t, d, f, g, mu=0.1, decay=0.98, max_ls=500, tmin=1e-5):
"""
Expects `fun` to take arguments {x, t, d} and return {f(x1), x1},
where x1 is the new location after taking a step from x in direction d
with step size t.
We use a generalized variant of the Armijo condition that supports
arbitrary step functions x' = step(x,t,d). When step(x,t,d) = x + t * d,
this is equivalent to the standard condition.
"""
x_new = x
f_new = f
success = False
for i in range(max_ls):
f_new, x_new = fun(x, t, d)
if f_new <= f + mu * g.mul(x_new-x).sum():
success = True
break
if t <= tmin:
warnings.warn('step size has reached the minimum threshold.')
break
t = t.mul(decay)
else:
warnings.warn('backtracking did not converge.')
return x_new, f_new, t, success
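The backtracking routine above can be summarized by a pure-Python sketch, assuming the standard step x' = x + t*d and a 1-D objective; `backtracking_armijo` is a hypothetical name for illustration:

```python
def backtracking_armijo(f, fprime, x, d, t=1.0, mu=1e-4, decay=0.5, max_ls=50):
    # Shrink the step until the Armijo sufficient-decrease condition holds.
    fx = f(x)
    gtd = fprime(x) * d
    for _ in range(max_ls):
        if f(x + t * d) <= fx + mu * t * gtd:
            return t, True
        t *= decay
    return t, False

# f(x) = x^2 from x=1 along the descent direction d=-1
t, success = backtracking_armijo(lambda x: x * x, lambda x: 2 * x, 1.0, -1.0)
```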
================================================
FILE: torchmin/lstsq/__init__.py
================================================
"""
This module represents a pytorch re-implementation of scipy's
`scipy.optimize._lsq` module. Some of the code is borrowed directly
from the scipy library (all rights reserved).
"""
from .least_squares import least_squares
================================================
FILE: torchmin/lstsq/cg.py
================================================
import torch
from .linear_operator import aslinearoperator, TorchLinearOperator
def cg(A, b, x0=None, max_iter=None, tol=1e-5):
if max_iter is None:
max_iter = 20 * b.numel()
if x0 is None:
x = torch.zeros_like(b)
r = b.clone()
else:
x = x0.clone()
r = b - A.mv(x)
p = r.clone()
rs = r.dot(r)
rs_new = b.new_tensor(0.)
alpha = b.new_tensor(0.)
for n_iter in range(1, max_iter+1):
Ap = A.mv(p)
torch.div(rs, p.dot(Ap), out=alpha)
x.add_(p, alpha=alpha)
r.sub_(Ap, alpha=alpha)
torch.dot(r, r, out=rs_new)
p.mul_(rs_new / rs).add_(r)
if n_iter % 10 == 0:
r_norm = rs_new.sqrt()  # test the freshly computed residual, not the stale one
if r_norm < tol:
break
rs.copy_(rs_new, non_blocking=True)
return x
def cgls(A, b, alpha=0., **kwargs):
A = aslinearoperator(A)
m, n = A.shape
Atb = A.rmv(b)
AtA = TorchLinearOperator(shape=(n,n),
matvec=lambda x: A.rmv(A.mv(x)) + alpha * x,
rmatvec=None)
return cg(AtA, Atb, **kwargs)
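For reference, the conjugate-gradient recursion implemented above (minus the in-place torch ops) reduces to the textbook algorithm. A plain-Python sketch on a small symmetric positive-definite system; `cg_minimal` is a hypothetical name, no torch required:

```python
def cg_minimal(A, b, tol=1e-10, max_iter=100):
    # A: list-of-lists SPD matrix, b: list. Classic conjugate gradient.
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]          # residual b - A x with x = 0
    p = r[:]          # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# [[4, 1], [1, 3]] x = [1, 2] has the exact solution [1/11, 7/11]
x = cg_minimal([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```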
================================================
FILE: torchmin/lstsq/common.py
================================================
import numpy as np
import torch
from scipy.sparse.linalg import LinearOperator
from .linear_operator import TorchLinearOperator
EPS = torch.finfo(float).eps
def in_bounds(x, lb, ub):
"""Check if a point lies within bounds."""
return torch.all((x >= lb) & (x <= ub))
def find_active_constraints(x, lb, ub, rtol=1e-10):
"""Determine which constraints are active in a given point.
The threshold is computed using `rtol` and the absolute value of the
closest bound.
Returns
-------
active : ndarray of int with shape of x
Each component shows whether the corresponding constraint is active:
* 0 - a constraint is not active.
* -1 - a lower bound is active.
* 1 - an upper bound is active.
"""
active = torch.zeros_like(x, dtype=torch.long)
if rtol == 0:
active[x <= lb] = -1
active[x >= ub] = 1
return active
lower_dist = x - lb
upper_dist = ub - x
lower_threshold = rtol * lb.abs().clamp(1, None)
upper_threshold = rtol * ub.abs().clamp(1, None)
lower_active = (lb.isfinite() &
(lower_dist <= torch.minimum(upper_dist, lower_threshold)))
active[lower_active] = -1
upper_active = (ub.isfinite() &
(upper_dist <= torch.minimum(lower_dist, upper_threshold)))
active[upper_active] = 1
return active
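Assuming finite bounds, the classification rule above reduces to the following element-wise pure-Python sketch (`active_constraints` is a hypothetical helper for illustration):

```python
def active_constraints(x, lb, ub, rtol=1e-10):
    # -1: lower bound active, 1: upper bound active, 0: inactive.
    out = []
    for xi, l, u in zip(x, lb, ub):
        lo_thr = rtol * max(1.0, abs(l))   # threshold relative to the bound
        up_thr = rtol * max(1.0, abs(u))
        if xi - l <= min(u - xi, lo_thr):
            out.append(-1)
        elif u - xi <= min(xi - l, up_thr):
            out.append(1)
        else:
            out.append(0)
    return out

flags = active_constraints([0.0, 0.5, 1.0], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```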
def make_strictly_feasible(x, lb, ub, rstep=1e-10):
"""Shift a point to the interior of a feasible region.
Each element of the returned vector is at least at a relative distance
`rstep` from the closest bound. If ``rstep=0`` then `torch.nextafter` is used.
"""
x_new = x.clone()
active = find_active_constraints(x, lb, ub, rstep)
lower_mask = torch.eq(active, -1)
upper_mask = torch.eq(active, 1)
if rstep == 0:
# boolean indexing returns a copy, so assign instead of writing via `out=`
x_new[lower_mask] = torch.nextafter(lb[lower_mask], ub[lower_mask])
x_new[upper_mask] = torch.nextafter(ub[upper_mask], lb[upper_mask])
else:
x_new[lower_mask] = lb[lower_mask].add(lb[lower_mask].abs().clamp(1,None), alpha=rstep)
x_new[upper_mask] = ub[upper_mask].sub(ub[upper_mask].abs().clamp(1,None), alpha=rstep)
tight_bounds = (x_new < lb) | (x_new > ub)
x_new[tight_bounds] = 0.5 * (lb[tight_bounds] + ub[tight_bounds])
return x_new
def solve_lsq_trust_region(n, m, uf, s, V, Delta, initial_alpha=None,
rtol=0.01, max_iter=10):
"""Solve a trust-region problem arising in least-squares minimization.
This function implements a method described by J. J. More [1]_ and used
in MINPACK, but it relies on a single SVD of Jacobian instead of series
of Cholesky decompositions. Before running this function, compute:
``U, s, VT = svd(J, full_matrices=False)``.
"""
def phi_and_derivative(alpha, suf, s, Delta):
"""Function of which to find zero.
It is defined as "norm of regularized (by alpha) least-squares
solution minus `Delta`". Refer to [1]_.
"""
denom = s.pow(2) + alpha
p_norm = (suf / denom).norm()
phi = p_norm - Delta
phi_prime = -(suf.pow(2) / denom.pow(3)).sum() / p_norm
return phi, phi_prime
def set_alpha(alpha_lower, alpha_upper):
new_alpha = (alpha_lower * alpha_upper).sqrt()
return new_alpha.clamp_(0.001 * alpha_upper, None)
suf = s * uf
# Check if J has full rank and try Gauss-Newton step.
eps = torch.finfo(s.dtype).eps
full_rank = m >= n and s[-1] > eps * m * s[0]
if full_rank:
p = -V.mv(uf / s)
if p.norm() <= Delta:
return p, 0.0, 0
phi, phi_prime = phi_and_derivative(0., suf, s, Delta)
alpha_lower = -phi / phi_prime
else:
alpha_lower = s.new_tensor(0.)
alpha_upper = suf.norm() / Delta
if initial_alpha is None or not full_rank and initial_alpha == 0:
alpha = set_alpha(alpha_lower, alpha_upper)
else:
alpha = initial_alpha.clone()
for it in range(max_iter):
# if alpha is outside of bounds, set new value (5.5)(a)
alpha = torch.where((alpha < alpha_lower) | (alpha > alpha_upper),
set_alpha(alpha_lower, alpha_upper),
alpha)
# compute new phi and phi' (5.5)(b)
phi, phi_prime = phi_and_derivative(alpha, suf, s, Delta)
# if phi is negative, update our upper bound (5.5)(b)
alpha_upper = torch.where(phi < 0, alpha, alpha_upper)
# update lower bound (5.5)(b)
ratio = phi / phi_prime
alpha_lower.clamp_(alpha-ratio, None)
# compute new alpha (5.5)(c)
alpha.addcdiv_((phi + Delta) * ratio, Delta, value=-1)
if phi.abs() < rtol * Delta:
break
p = -V.mv(suf / (s.pow(2) + alpha))
# Make the norm of p equal to Delta; p is changed only slightly during
# this. It is done to prevent p from lying outside the trust region
# (which can cause problems later).
p.mul_(Delta / p.norm())
return p, alpha, it + 1
def right_multiplied_operator(J, d):
"""Return J diag(d) as LinearOperator."""
if isinstance(J, LinearOperator):
if torch.is_tensor(d):
d = d.data.cpu().numpy()
return LinearOperator(J.shape,
matvec=lambda x: J.matvec(np.ravel(x) * d),
matmat=lambda X: J.matmat(X * d[:, np.newaxis]),
rmatvec=lambda x: d * J.rmatvec(x))
elif isinstance(J, TorchLinearOperator):
return TorchLinearOperator(J.shape,
matvec=lambda x: J.matvec(x.view(-1) * d),
rmatvec=lambda x: d * J.rmatvec(x))
else:
raise ValueError('Expected J to be a LinearOperator or '
'TorchLinearOperator but found {}'.format(type(J)))
def build_quadratic_1d(J, g, s, diag=None, s0=None):
"""Parameterize a multivariate quadratic function along a line.
The resulting univariate quadratic function is given as follows:
::
f(t) = 0.5 * (s0 + s*t).T * (J.T*J + diag) * (s0 + s*t) +
g.T * (s0 + s*t)
"""
v = J.mv(s)
a = v.dot(v)
if diag is not None:
a += s.dot(s * diag)
a *= 0.5
b = g.dot(s)
if s0 is not None:
u = J.mv(s0)
b += u.dot(v)
c = 0.5 * u.dot(u) + g.dot(s0)
if diag is not None:
b += s.dot(s0 * diag)
c += 0.5 * s0.dot(s0 * diag)
return a, b, c
else:
return a, b
def minimize_quadratic_1d(a, b, lb, ub, c=0):
"""Minimize a 1-D quadratic function subject to bounds.
The free term `c` is 0 by default. Bounds must be finite.
"""
t = [lb, ub]
if a != 0:
extremum = -0.5 * b / a
if lb < extremum < ub:
t.append(extremum)
t = a.new_tensor(t)
y = t * (a * t + b) + c
min_index = torch.argmin(y)
return t[min_index], y[min_index]
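Concretely, the rule above evaluates the quadratic at both bounds plus the interior extremum -b/(2a) whenever that extremum is feasible. A plain-Python sketch (hypothetical name, no torch required):

```python
def min_quad_1d(a, b, lb, ub, c=0.0):
    # Candidate points: the two bounds and, if strictly inside, the
    # unconstrained extremum of f(t) = a*t^2 + b*t + c.
    ts = [lb, ub]
    if a != 0:
        extremum = -0.5 * b / a
        if lb < extremum < ub:
            ts.append(extremum)
    ys = [t * (a * t + b) + c for t in ts]
    i = min(range(len(ys)), key=ys.__getitem__)
    return ts[i], ys[i]

# f(t) = t^2 - 2t on [0, 3]: interior minimum at t = 1 with value -1
t_min, y_min = min_quad_1d(1.0, -2.0, 0.0, 3.0)
```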
def evaluate_quadratic(J, g, s, diag=None):
"""Compute values of a quadratic function arising in least squares.
The function is 0.5 * s.T * (J.T * J + diag) * s + g.T * s.
"""
if s.dim() == 1:
Js = J.mv(s)
q = Js.dot(Js)
if diag is not None:
q += s.dot(s * diag)
else:
Js = J.matmul(s.T)
q = Js.square().sum(0)
if diag is not None:
q += (diag * s.square()).sum(1)
l = s.matmul(g)
return 0.5 * q + l
def solve_trust_region_2d(B, g, Delta):
"""Solve a general trust-region problem in 2 dimensions.
The problem is reformulated as a 4th order algebraic equation,
the solution of which is found by numpy.roots.
"""
try:
L = torch.linalg.cholesky(B)
p = - torch.cholesky_solve(g.unsqueeze(1), L).squeeze(1)
if p.dot(p) <= Delta**2:
return p, True
except RuntimeError as exc:
if 'cholesky' not in exc.args[0]:
raise
# move things to numpy
device = B.device
dtype = B.dtype
B = B.data.cpu().numpy()
g = g.data.cpu().numpy()
Delta = float(Delta)
a = B[0, 0] * Delta**2
b = B[0, 1] * Delta**2
c = B[1, 1] * Delta**2
d = g[0] * Delta
f = g[1] * Delta
coeffs = np.array([-b + d, 2 * (a - c + f), 6 * b, 2 * (-a + c + f), -b - d])
t = np.roots(coeffs) # Can handle leading zeros.
t = np.real(t[np.isreal(t)])
p = Delta * np.vstack((2 * t / (1 + t**2), (1 - t**2) / (1 + t**2)))
value = 0.5 * np.sum(p * B.dot(p), axis=0) + np.dot(g, p)
p = p[:, np.argmin(value)]
# convert back to torch
p = torch.tensor(p, device=device, dtype=dtype)
return p, False
def update_tr_radius(Delta, actual_reduction, predicted_reduction,
step_norm, bound_hit):
"""Update the radius of a trust region based on the cost reduction.
"""
if predicted_reduction > 0:
ratio = actual_reduction / predicted_reduction
elif predicted_reduction == actual_reduction == 0:
ratio = 1
else:
ratio = 0
if ratio < 0.25:
Delta = 0.25 * step_norm
elif ratio > 0.75 and bound_hit:
Delta *= 2.0
return Delta, ratio
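A sketch of the same radius rule in plain Python (hypothetical name), which makes the shrink/grow thresholds explicit: shrink toward the step when the model fit is poor (ratio below 0.25), double only when the model is accurate and the step hit the trust-region boundary:

```python
def update_radius(delta, actual, predicted, step_norm, bound_hit):
    # Agreement between actual and model-predicted cost reduction.
    if predicted > 0:
        ratio = actual / predicted
    elif predicted == actual == 0:
        ratio = 1.0
    else:
        ratio = 0.0
    if ratio < 0.25:
        delta = 0.25 * step_norm       # poor fit: shrink toward the step
    elif ratio > 0.75 and bound_hit:
        delta *= 2.0                   # good fit at the boundary: grow
    return delta, ratio

d, r = update_radius(1.0, 0.9, 1.0, 1.0, True)
```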
def check_termination(dF, F, dx_norm, x_norm, ratio, ftol, xtol):
"""Check termination condition for nonlinear least squares."""
ftol_satisfied = dF < ftol * F and ratio > 0.25
xtol_satisfied = dx_norm < xtol * (xtol + x_norm)
if ftol_satisfied and xtol_satisfied:
return 4
elif ftol_satisfied:
return 2
elif xtol_satisfied:
return 3
else:
return None
================================================
FILE: torchmin/lstsq/least_squares.py
================================================
"""
Generic interface for nonlinear least-squares minimization.
"""
from warnings import warn
import numbers
import torch
from .trf import trf
from .common import EPS, in_bounds, make_strictly_feasible
__all__ = ['least_squares']
TERMINATION_MESSAGES = {
-1: "Improper input parameters status returned from `leastsq`",
0: "The maximum number of function evaluations is exceeded.",
1: "`gtol` termination condition is satisfied.",
2: "`ftol` termination condition is satisfied.",
3: "`xtol` termination condition is satisfied.",
4: "Both `ftol` and `xtol` termination conditions are satisfied."
}
def prepare_bounds(bounds, x0):
n = x0.shape[0]
def process(b):
if isinstance(b, numbers.Number):
return x0.new_full((n,), b)
elif isinstance(b, torch.Tensor):
if b.dim() == 0:
return x0.new_full((n,), b)
SYMBOL INDEX (182 symbols across 32 files)
FILE: docs/source/conf.py
function setup (line 96) | def setup(app):
FILE: examples/scipy_benchmark.py
function print_header (line 24) | def print_header(title, num_breaks=1):
function main (line 30) | def main():
FILE: examples/train_mnist_Minimizer.py
function MLPClassifier (line 11) | def MLPClassifier(input_size, hidden_sizes, num_classes):
function evaluate (line 24) | def evaluate(model):
function closure (line 84) | def closure():
FILE: tests/conftest.py
function random_seed (line 9) | def random_seed():
function least_squares_problem (line 26) | def least_squares_problem():
function rosenbrock_problem (line 72) | def rosenbrock_problem():
function device (line 94) | def device(request):
FILE: tests/test_imports.py
function test_import_main_package (line 5) | def test_import_main_package():
function test_import_core_functions (line 11) | def test_import_core_functions():
function test_import_benchmarks (line 16) | def test_import_benchmarks():
function test_method_available (line 32) | def test_method_available(method):
FILE: tests/torchmin/test_bounds.py
function test_equivalent_bounds (line 13) | def test_equivalent_bounds(method):
function test_invalid_bounds (line 49) | def test_invalid_bounds():
FILE: tests/torchmin/test_minimize.py
function problem (line 42) | def problem(request):
function test_minimize (line 58) | def test_minimize(method, problem):
FILE: tests/torchmin/test_minimize_constr.py
function rosen_start (line 28) | def rosen_start():
function rosen_unconstrained_solution (line 34) | def rosen_unconstrained_solution(rosen_start):
function sum_constraint (line 51) | def sum_constraint(x):
function norm_constraint (line 56) | def norm_constraint(x):
class TestUnconstrainedBaseline (line 65) | class TestUnconstrainedBaseline:
method test_rosen_unconstrained (line 68) | def test_rosen_unconstrained(self, rosen_start):
class TestInactiveConstraints (line 81) | class TestInactiveConstraints:
method test_loose_constraints (line 93) | def test_loose_constraints(
class TestActiveConstraints (line 120) | class TestActiveConstraints:
method test_tight_constraints (line 132) | def test_tight_constraints(self, rosen_start, constraint_fun, ub):
function test_frankwolfe_birkhoff_polytope (line 150) | def test_frankwolfe_birkhoff_polytope():
function test_frankwolfe_tracenorm (line 173) | def test_frankwolfe_tracenorm():
function test_lbfgsb_simple_quadratic (line 202) | def test_lbfgsb_simple_quadratic():
function test_lbfgsb_rosenbrock (line 243) | def test_lbfgsb_rosenbrock():
function test_lbfgsb_active_constraints (line 279) | def test_lbfgsb_active_constraints():
FILE: torchmin/benchmarks.py
function rosen (line 11) | def rosen(x, reduce=True):
function rosen_der (line 20) | def rosen_der(x):
function rosen_hess (line 32) | def rosen_hess(x):
function rosen_hess_prod (line 43) | def rosen_hess_prod(x, p):
FILE: torchmin/bfgs.py
class HessianUpdateStrategy (line 11) | class HessianUpdateStrategy(ABC):
method __init__ (line 12) | def __init__(self):
method solve (line 16) | def solve(self, grad):
method _update (line 20) | def _update(self, s, y, rho_inv):
method update (line 23) | def update(self, s, y):
class L_BFGS (line 32) | class L_BFGS(HessianUpdateStrategy):
method __init__ (line 33) | def __init__(self, x, history_size=100):
method solve (line 42) | def solve(self, grad):
method _update (line 55) | def _update(self, s, y, rho_inv):
class BFGS (line 66) | class BFGS(HessianUpdateStrategy):
method __init__ (line 67) | def __init__(self, x, inverse=True):
method solve (line 76) | def solve(self, grad):
method _update (line 83) | def _update(self, s, y, rho_inv):
function _minimize_bfgs_core (line 101) | def _minimize_bfgs_core(
function _minimize_bfgs (line 293) | def _minimize_bfgs(
function _minimize_lbfgs (line 347) | def _minimize_lbfgs(
FILE: torchmin/cg.py
function _minimize_cg (line 13) | def _minimize_cg(fun, x0, max_iter=None, gtol=1e-5, normp=float('inf'),
FILE: torchmin/constrained/frankwolfe.py
function _minimize_frankwolfe (line 16) | def _minimize_frankwolfe(
FILE: torchmin/constrained/lbfgsb.py
class L_BFGS_B (line 9) | class L_BFGS_B:
method __init__ (line 15) | def __init__(self, x, history_size=10):
method solve (line 23) | def solve(self, grad, x, lb, ub, theta=None):
method update (line 71) | def update(self, s, y):
function _project_bounds (line 107) | def _project_bounds(x, lb, ub):
function _gradient_projection (line 112) | def _gradient_projection(x, g, lb, ub):
function _minimize_lbfgsb (line 132) | def _minimize_lbfgsb(
FILE: torchmin/constrained/trust_constr.py
function _build_obj (line 12) | def _build_obj(f, x0):
function _build_constr (line 39) | def _build_constr(constr, x0):
function _check_bound (line 102) | def _check_bound(val, x0):
function _build_bounds (line 115) | def _build_bounds(bounds, x0):
function _minimize_trust_constr (line 127) | def _minimize_trust_constr(
FILE: torchmin/function.py
class JacobianLinearOperator (line 25) | class JacobianLinearOperator(object):
method __init__ (line 26) | def __init__(self,
method mv (line 42) | def mv(self, v: Tensor) -> Tensor:
method rmv (line 56) | def rmv(self, v: Tensor) -> Tensor:
function jacobian_linear_operator (line 67) | def jacobian_linear_operator(x, f, symmetric=False):
class ScalarFunction (line 81) | class ScalarFunction(object):
method __new__ (line 88) | def __new__(cls, fun, x_shape, hessp=False, hess=False, twice_diffable...
method __init__ (line 95) | def __init__(self, fun, x_shape, hessp=False, hess=False, twice_diffab...
method fun (line 104) | def fun(self, x):
method closure (line 115) | def closure(self, x):
method dir_evaluate (line 140) | def dir_evaluate(self, x, t, d):
class VectorFunction (line 156) | class VectorFunction(object):
method __init__ (line 158) | def __init__(self, fun, x_shape, jacp=False, jac=False):
method fun (line 166) | def fun(self, x):
method closure (line 179) | def closure(self, x):
FILE: torchmin/line_search.py
function _strong_wolfe_extra (line 9) | def _strong_wolfe_extra(
function strong_wolfe (line 163) | def strong_wolfe(fun, x, t, d, f, g, gtd=None, **kwargs):
function brent (line 192) | def brent(fun, x, d, bounds=(0,10)):
function backtracking (line 202) | def backtracking(fun, x, t, d, f, g, mu=0.1, decay=0.98, max_ls=500, tmi...
FILE: torchmin/lstsq/cg.py
function cg (line 6) | def cg(A, b, x0=None, max_iter=None, tol=1e-5):
function cgls (line 35) | def cgls(A, b, alpha=0., **kwargs):
FILE: torchmin/lstsq/common.py
function in_bounds (line 10) | def in_bounds(x, lb, ub):
function find_active_constraints (line 15) | def find_active_constraints(x, lb, ub, rtol=1e-10):
function make_strictly_feasible (line 50) | def make_strictly_feasible(x, lb, ub, rstep=1e-10):
function solve_lsq_trust_region (line 74) | def solve_lsq_trust_region(n, m, uf, s, V, Delta, initial_alpha=None,
function right_multiplied_operator (line 151) | def right_multiplied_operator(J, d):
function build_quadratic_1d (line 169) | def build_quadratic_1d(J, g, s, diag=None, s0=None):
function minimize_quadratic_1d (line 197) | def minimize_quadratic_1d(a, b, lb, ub, c=0):
function evaluate_quadratic (line 213) | def evaluate_quadratic(J, g, s, diag=None):
function solve_trust_region_2d (line 232) | def solve_trust_region_2d(B, g, Delta):
function update_tr_radius (line 273) | def update_tr_radius(Delta, actual_reduction, predicted_reduction,
function check_termination (line 292) | def check_termination(dF, F, dx_norm, x_norm, ratio, ftol, xtol):
FILE: torchmin/lstsq/least_squares.py
function prepare_bounds (line 24) | def prepare_bounds(bounds, x0):
function check_tolerance (line 41) | def check_tolerance(ftol, xtol, gtol, method):
function check_x_scale (line 65) | def check_x_scale(x_scale, x0):
function least_squares (line 87) | def least_squares(
FILE: torchmin/lstsq/linear_operator.py
function jacobian_dense (line 6) | def jacobian_dense(fun, x, vectorize=True):
function jacobian_linop (line 11) | def jacobian_linop(fun, x, return_f=False):
class TorchLinearOperator (line 38) | class TorchLinearOperator(object):
method __init__ (line 40) | def __init__(self, shape, matvec, rmatvec):
method matvec (line 45) | def matvec(self, x):
method rmatvec (line 48) | def rmatvec(self, x):
method matmat (line 51) | def matmat(self, X):
method transpose (line 57) | def transpose(self):
function aslinearoperator (line 68) | def aslinearoperator(A):
FILE: torchmin/lstsq/lsmr.py
function _sym_ortho (line 11) | def _sym_ortho(a, b, out):
function lsmr (line 19) | def lsmr(A, b, damp=0., atol=1e-6, btol=1e-6, conlim=1e8, maxiter=None,
FILE: torchmin/lstsq/trf.py
function trf (line 18) | def trf(fun, x0, f0, lb, ub, ftol, xtol, gtol, max_nfev, x_scale,
function trf_no_bounds (line 32) | def trf_no_bounds(fun, x0, f0=None, ftol=1e-8, xtol=1e-8, gtol=1e-8,
FILE: torchmin/minimize.py
function minimize (line 22) | def minimize(
FILE: torchmin/minimize_constr.py
function _maybe_to_number (line 18) | def _maybe_to_number(val):
function _check_bound (line 27) | def _check_bound(val, x0, numpy=False):
function _check_bounds (line 57) | def _check_bounds(bounds, x0, method):
function minimize_constr (line 81) | def minimize_constr(
FILE: torchmin/newton.py
function _cg_iters (line 15) | def _cg_iters(grad, hess, max_iter, normp=1):
function _minimize_newton_cg (line 73) | def _minimize_newton_cg(
function _minimize_newton_exact (line 226) | def _minimize_newton_exact(
FILE: torchmin/optim/minimizer.py
class LinearOperator (line 6) | class LinearOperator:
method __init__ (line 8) | def __init__(self, matvec, shape, dtype=torch.float, device=None):
class Minimizer (line 16) | class Minimizer(Optimizer):
method __init__ (line 41) | def __init__(self,
method nfev (line 71) | def nfev(self):
method _numel (line 74) | def _numel(self):
method _gather_flat_param (line 79) | def _gather_flat_param(self):
method _gather_flat_grad (line 89) | def _gather_flat_grad(self):
method _set_flat_param (line 101) | def _set_flat_param(self, value):
method closure (line 109) | def closure(self, x):
method dir_evaluate (line 143) | def dir_evaluate(self, x, t, d):
method step (line 156) | def step(self, closure):
FILE: torchmin/optim/scipy_minimizer.py
function _build_bounds (line 13) | def _build_bounds(bounds, params, numel_total):
function _jacobian (line 55) | def _jacobian(inputs, outputs):
class ScipyMinimizer (line 93) | class ScipyMinimizer(Optimizer):
method __init__ (line 131) | def __init__(self,
method _numel (line 161) | def _numel(self):
method _bounds (line 166) | def _bounds(self):
method _gather_flat_param (line 174) | def _gather_flat_param(self):
method _gather_flat_grad (line 184) | def _gather_flat_grad(self):
method _set_flat_param (line 196) | def _set_flat_param(self, value):
method _build_constraints (line 205) | def _build_constraints(self, constraints):
method step (line 241) | def step(self, closure):
FILE: torchmin/trustregion/base.py
class BaseQuadraticSubproblem (line 26) | class BaseQuadraticSubproblem(ABC):
method __init__ (line 32) | def __init__(self, x, closure):
method __call__ (line 47) | def __call__(self, p):
method fun (line 51) | def fun(self):
method jac (line 56) | def jac(self):
method hess (line 61) | def hess(self):
method hessp (line 68) | def hessp(self, p):
method jac_mag (line 78) | def jac_mag(self):
method get_boundaries_intersections (line 84) | def get_boundaries_intersections(self, z, d, trust_radius):
method solve (line 105) | def solve(self, trust_radius):
method hess_prod (line 110) | def hess_prod(self):
function _minimize_trust_region (line 117) | def _minimize_trust_region(fun, x0, subproblem=None, initial_trust_radiu...
FILE: torchmin/trustregion/dogleg.py
function _minimize_dogleg (line 15) | def _minimize_dogleg(
class DoglegSubproblem (line 58) | class DoglegSubproblem(BaseQuadraticSubproblem):
method cauchy_point (line 62) | def cauchy_point(self):
method newton_point (line 72) | def newton_point(self):
method solve (line 82) | def solve(self, trust_radius):
FILE: torchmin/trustregion/exact.py
function _minimize_trust_exact (line 18) | def _minimize_trust_exact(fun, x0, **trust_region_options):
function solve_triangular (line 66) | def solve_triangular(A, b, *, upper=True, transpose=False, **kwargs):
function solve_cholesky (line 77) | def solve_cholesky(A, b, **kwargs):
function estimate_smallest_singular_value (line 82) | def estimate_smallest_singular_value(U) -> Tuple[Tensor, Tensor]:
function gershgorin_bounds (line 130) | def gershgorin_bounds(H):
function singular_leading_submatrix (line 144) | def singular_leading_submatrix(A, U, k):
class IterativeSubproblem (line 165) | class IterativeSubproblem(BaseQuadraticSubproblem):
method __init__ (line 174) | def __init__(self, x, fun, k_easy=0.1, k_hard=0.2):
method _initial_values (line 219) | def _initial_values(self, tr_radius):
method solve (line 250) | def solve(self, tr_radius):
FILE: torchmin/trustregion/krylov.py
function _minimize_trust_krylov (line 11) | def _minimize_trust_krylov(fun, x0, **trust_region_options):
class KrylovSubproblem (line 63) | class KrylovSubproblem(BaseQuadraticSubproblem):
method __init__ (line 84) | def __init__(self, x, fun, k_easy=0.1, k_hard=0.2, tol=1e-5, ortho=True,
method tridiag_subproblem (line 94) | def tridiag_subproblem(self, Ta, Tb, tr_radius):
method solve (line 158) | def solve(self, tr_radius):
FILE: torchmin/trustregion/ncg.py
function _minimize_trust_ncg (line 15) | def _minimize_trust_ncg(
class CGSteihaugSubproblem (line 55) | class CGSteihaugSubproblem(BaseQuadraticSubproblem):
method solve (line 59) | def solve(self, trust_radius):
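The symbol index above lists `BaseQuadraticSubproblem.get_boundaries_intersections(z, d, trust_radius)` in `torchmin/trustregion/base.py`. As in the SciPy routine this module was ported from, it finds the two scalars t at which the line z + t*d crosses the trust-region boundary ||z + t*d|| = trust_radius, by solving the quadratic that the norm condition expands into. Below is a minimal pure-Python sketch of that computation; the standalone function name and plain-list vectors are illustrative, not torchmin's tensor-based API.

```python
import math

def boundaries_intersections(z, d, trust_radius):
    """Solve ||z + t*d|| == trust_radius for the scalar t.

    Squaring the norm gives a quadratic a*t^2 + b*t + c = 0 with
        a = d.d,  b = 2*(z.d),  c = z.z - trust_radius^2.
    Returns the two roots sorted ascending (t_low, t_high).
    Assumes d is nonzero and the line actually meets the sphere.
    """
    a = sum(di * di for di in d)
    b = 2.0 * sum(zi * di for zi, di in zip(z, d))
    c = sum(zi * zi for zi in z) - trust_radius ** 2
    sqrt_disc = math.sqrt(b * b - 4.0 * a * c)
    ta = (-b - sqrt_disc) / (2.0 * a)
    tb = (-b + sqrt_disc) / (2.0 * a)
    return sorted([ta, tb])

# Example: starting at z = (1, 0) and moving along d = (1, 0),
# the boundary of a radius-3 trust region is hit at t = -4 and t = 2.
print(boundaries_intersections([1.0, 0.0], [1.0, 0.0], 3.0))
```

The dogleg and CG-Steihaug subproblems use exactly this root-finding step to clip a proposed step back to the trust-region boundary; the larger root t_high is the one taken when stepping forward along d.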
About this extraction: this page contains the full source code of the rfeinman/pytorch-minimize GitHub repository, extracted as plain text (67 files, 296.5 KB, approximately 100.3k tokens, with a symbol index of 182 functions, classes, methods, constants, and types). Extracted by GitExtract, built by Nikandr Surkov.