Full Code of naszilla/bananas for AI

Repository: naszilla/bananas
Branch: main
Commit: e2c12ade9e29
Files: 51
Total size: 167.7 KB

Directory structure:
bananas/

├── .gitignore
├── LICENSE
├── README.md
├── acquisition_functions.py
├── bo/
│   ├── __init__.py
│   ├── acq/
│   │   ├── __init__.py
│   │   ├── acqmap.py
│   │   ├── acqopt.py
│   │   └── acquisition.py
│   ├── bo/
│   │   ├── __init__.py
│   │   └── probo.py
│   ├── dom/
│   │   ├── __init__.py
│   │   ├── list.py
│   │   └── real.py
│   ├── ds/
│   │   ├── __init__.py
│   │   └── makept.py
│   ├── fn/
│   │   ├── __init__.py
│   │   └── functionhandler.py
│   ├── pp/
│   │   ├── __init__.py
│   │   ├── gp/
│   │   │   ├── __init__.py
│   │   │   └── gp_utils.py
│   │   ├── pp_core.py
│   │   ├── pp_gp_george.py
│   │   ├── pp_gp_my_distmat.py
│   │   ├── pp_gp_stan.py
│   │   ├── pp_gp_stan_distmat.py
│   │   └── stan/
│   │       ├── __init__.py
│   │       ├── compile_stan.py
│   │       ├── gp_distmat.py
│   │       ├── gp_distmat_fixedsig.py
│   │       ├── gp_hier2.py
│   │       ├── gp_hier2_matern.py
│   │       └── gp_hier3.py
│   └── util/
│       ├── __init__.py
│       ├── datatransform.py
│       └── print_utils.py
├── darts/
│   ├── __init__.py
│   └── arch.py
├── data.py
├── meta_neural_net.py
├── meta_neuralnet.ipynb
├── metann_runner.py
├── nas_algorithms.py
├── nas_bench/
│   ├── __init__.py
│   └── cell.py
├── nas_bench_201/
│   ├── __init__.py
│   └── cell.py
├── params.py
├── run_experiments_parallel.sh
├── run_experiments_sequential.py
└── train_arch_runner.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
*.DS_Store
*.sw*
*.pyc
*.Rapp.history
*.*~
*.out
*hide
*hide_*
*save
*save_*
*saved
*saved_*
*pylintrc*
nasbench_only108.tfrecord
.ipynb_checkpoints/*

*aux.pkl
*config.pkl
*nextpt.pkl
*data.pkl
*log.txt



================================================
FILE: LICENSE
================================================
   Copyright (c) 2019, naszilla.
   All rights reserved.

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

================================================
FILE: README.md
================================================
# BANANAS

**Note: our naszilla/bananas repo has been extended and renamed to [naszilla/naszilla](https://github.com/naszilla/naszilla), and this repo is deprecated and not maintained. Please use [naszilla/naszilla](https://github.com/naszilla/naszilla), which has more functionality.**

[BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search](https://arxiv.org/abs/1910.11858)\
Colin White, Willie Neiswanger, and Yash Savani.\
_arXiv:1910.11858_.

## A new method for neural architecture search
BANANAS is a neural architecture search (NAS) algorithm that uses Bayesian optimization with a meta neural network to predict the validation accuracy of neural architectures. We use a path-based encoding scheme to featurize the neural architectures used to train the meta neural network. After training on just 200 architectures, we are able to predict the validation accuracy of new architectures to within one percent on average. The full NAS algorithm beats the state of the art on the NASBench and DARTS search spaces. On the NASBench search space, BANANAS is over 100x more efficient than random search, and 3.8x more efficient than the next-best algorithm we tried. On the DARTS search space, BANANAS finds an architecture with a test error of 2.57%.
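
The sketch below illustrates the path-based encoding idea on a toy op set (the `OPS` list and function name here are illustrative only; the actual implementation lives in `nas_bench/cell.py`): each directed path from the input to the output of a cell, read as a sequence of ops, sets one bit of a fixed-length binary feature vector.

```python
import numpy as np

OPS = ['conv1x1-bn-relu', 'conv3x3-bn-relu', 'maxpool3x3']  # toy op set

def encode_paths_sketch(paths, max_path_len=5):
    """One-hot encode a cell's set of op-paths (each path = tuple of op indices)."""
    num_encodings = sum(len(OPS) ** i for i in range(max_path_len + 1))
    encoding = np.zeros(num_encodings)
    for path in paths:
        index = sum(len(OPS) ** i for i in range(len(path)))  # offset for this path length
        for i, op in enumerate(path):
            index += op * len(OPS) ** i
        encoding[index] = 1
    return encoding
```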

<p align="center">
<img src="img/bananas_fig.png" alt="bananas_fig" width="70%">
</p>

## Requirements
- jupyter
- tensorflow == 1.14.0 (used for all experiments)
- nasbench (follow the installation instructions [here](https://github.com/google-research/nasbench))
- nas-bench-201 (follow the installation instructions [here](https://github.com/D-X-Y/NAS-Bench-201))
- pytorch == 1.2.0, torchvision == 0.4.0 (used for experiments on the DARTS search space)
- pybnn (used only for the DNGO baseline algorithm; installation instructions [here](https://github.com/automl/pybnn))

If you run experiments on DARTS, you will need our fork of the darts repo:
- Download the repo: https://github.com/naszilla/darts
- If the repo is not in your home directory, i.e., `~/darts`, then update line 5 of `bananas/darts/arch.py` and line 8 of `bananas/train_arch_runner.py` with the correct path to this repo


## Train a meta neural network with a notebook on the NASBench dataset
- Download the nasbench_only108 tfrecord file (size 499MB) [here](https://storage.googleapis.com/nasbench/nasbench_only108.tfrecord)
- Place `nasbench_only108.tfrecord` in the top level folder of this repo
- Open and run `meta_neuralnet.ipynb` to reproduce Table 1 and Figure A.1 of our paper

<p align="center">
  <img src="img/metann_adj_train.png" alt="bananas_fig" width="24%">
  <img src="img/metann_adj_test.png" alt="bananas_fig" width="24%">
  <img src="img/metann_path_train.png" alt="bananas_fig" width="24%">
  <img src="img/metann_path_test.png" alt="bananas_fig" width="24%">
</p>

## Evaluate pretrained BANANAS architecture
The best architecture found by BANANAS on the DARTS search space achieved 2.57% test error. To evaluate our pretrained neural architecture, download the weights file [bananas.pt](https://drive.google.com/file/d/1d8jnI0R9fvXBjkIY7CRogyxynEh6TWu_/view?usp=sharing) and place it inside the folder `<path-to-darts>/cnn`

```bash
cd <path-to-darts>/cnn; python test.py --model_path bananas.pt
```

The error on the test set should be 2.57%. This can be run on a CPU or GPU, but it will be faster on a GPU.

<p align="center">
<img src="img/bananas_normal.png" alt="bananas_normal" width="42%">
<img src="img/bananas_reduction.png" alt="bananas_reduction" width="47%">
</p>
<p align="center">
The best neural architecture found by BANANAS on CIFAR-10. Convolutional cell (left), and reduction cell (right).
</p>

## Train BANANAS architecture
Train the best architecture found by BANANAS.

```bash
cd <path-to-darts>/cnn; python train.py --auxiliary --cutout
```

This will train the architecture from scratch, which takes about 34 hours on an NVIDIA V100 GPU. 
The final test error should be 2.59%.
Setting the random seed to 4 by adding `--seed 4` will result in a test error of 2.57%.
We report the random seeds and hardware used in Table 2 of our paper [here](https://docs.google.com/spreadsheets/d/1z6bHUgX8r0y9Bh9Zxot_B9nT_9qLWJoD0Um0fTYdpus/edit?usp=sharing).

## Run BANANAS on the NASBench search space
To run BANANAS on NASBench, download `nasbench_only108.tfrecord` and place it in the top level folder of this repo.

```bash
python run_experiments_sequential.py
```

This will test the BANANAS algorithm against several other NAS algorithms on the NASBench search space.
To customize your experiment, open `params.py`. There, you can change the hyperparameters and the algorithms to run.
To run experiments with NAS-Bench-201, download `NAS-Bench-201-v1_0-e61699.pth` and place it in the top level folder of this repo.
Choose between cifar10, cifar100, and imagenet. For example,

```bash
python run_experiments_sequential.py --search_space nasbench_201_cifar10
```

<p align="center">
<img src="img/nasbench_plot.png" alt="nasbench_plot" width="70%">
</p>

## Run BANANAS on the DARTS search space
We highly recommend using multiple GPUs to run BANANAS on the DARTS search space. You can run BANANAS in parallel on GCP using the shell script:

```bash
run_experiments_parallel.sh
```

## Contributions
We welcome community contributions to this repo!

## Citation
Please cite [our paper](https://arxiv.org/abs/1910.11858) if you use code from this repo:

```bibtex
@inproceedings{white2019bananas,
  title={BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search},
  author={White, Colin and Neiswanger, Willie and Savani, Yash},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2021}
}
```



================================================
FILE: acquisition_functions.py
================================================
import numpy as np
import sys
from scipy.stats import norm

# Different acquisition functions that can be used by BANANAS.
# predictions: array of shape (num_ensemble_models, num_candidates) holding the
# predicted validation errors (lower is better). ytrain: validation errors of
# the architectures queried so far; required by the 'ei' and 'pi' strategies.
def acq_fn(predictions, ytrain=None, explore_type='its'):
    predictions = np.array(predictions)

    # Upper confidence bound (UCB) acquisition function. Since validation
    # error is minimized, this is effectively a lower confidence bound.
    if explore_type == 'ucb':
        explore_factor = 0.5
        mean = np.mean(predictions, axis=0)
        std = np.sqrt(np.var(predictions, axis=0))
        ucb = mean - explore_factor * std
        sorted_indices = np.argsort(ucb)

    # Expected improvement (EI) acquisition function
    elif explore_type == 'ei':
        ei_calibration_factor = 5.
        mean = list(np.mean(predictions, axis=0))
        std = list(np.sqrt(np.var(predictions, axis=0)) /
                   ei_calibration_factor)

        min_y = ytrain.min()
        gam = [(min_y - mean[i]) / std[i] for i in range(len(mean))]
        ei = [-1 * std[i] * (gam[i] * norm.cdf(gam[i]) + norm.pdf(gam[i]))
              for i in range(len(mean))]
        sorted_indices = np.argsort(ei)

    # Probability of improvement (PI) acquisition function
    elif explore_type == 'pi':
        mean = list(np.mean(predictions, axis=0))
        std = list(np.sqrt(np.var(predictions, axis=0)))
        min_y = ytrain.min()
        pi = [-1 * norm.cdf(min_y, loc=mean[i], scale=std[i]) for i in range(len(mean))]
        sorted_indices = np.argsort(pi)

    # Thompson sampling (TS) acquisition function
    elif explore_type == 'ts':
        rand_ind = np.random.randint(predictions.shape[0])
        ts = predictions[rand_ind,:]
        sorted_indices = np.argsort(ts)

    # Exploitation: rank by the minimum (best-case) prediction across the ensemble
    elif explore_type == 'percentile':
        min_prediction = np.min(predictions, axis=0)
        sorted_indices = np.argsort(min_prediction)

    # Exploitation: rank by the mean prediction across the ensemble
    elif explore_type == 'mean':
        mean = np.mean(predictions, axis=0)
        sorted_indices = np.argsort(mean)

    elif explore_type == 'confidence':
        confidence_factor = 2
        mean = np.mean(predictions, axis=0)
        std = np.sqrt(np.var(predictions, axis=0))
        conf = mean + confidence_factor * std
        sorted_indices = np.argsort(conf)

    # Independent Thompson sampling (ITS) acquisition function
    elif explore_type == 'its':
        mean = np.mean(predictions, axis=0)
        std = np.sqrt(np.var(predictions, axis=0))
        samples = np.random.normal(mean, std)
        sorted_indices = np.argsort(samples)

    else:
        print('Invalid exploration type in meta neuralnet search', explore_type)
        sys.exit()

    return sorted_indices
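
# Example usage (a minimal sketch): `predictions` is an (M, num_candidates)
# array of predicted validation errors from an ensemble of M meta neural
# networks; acq_fn returns candidate indices sorted from most to least
# promising under the chosen acquisition function.
if __name__ == '__main__':
    predictions = np.random.rand(5, 100)  # 5 ensemble members, 100 candidates
    ytrain = np.random.rand(20)           # errors of architectures queried so far
    ranked = acq_fn(predictions, ytrain=ytrain, explore_type='its')
    print('most promising candidate index:', ranked[0])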

================================================
FILE: bo/__init__.py
================================================
"""
Code for running Bayesian Optimization (BO) in NASzilla.
"""


================================================
FILE: bo/acq/__init__.py
================================================
"""
Code for acquisition strategies.
"""


================================================
FILE: bo/acq/acqmap.py
================================================
"""
Classes to manage acqmap (acquisition maps from xin to acquisition value).
"""

from argparse import Namespace
import numpy as np
import copy
from bo.acq.acquisition import Acquisitioner
from bo.util.datatransform import DataTransformer
#from bo.pp.pp_gp_george import GeorgeGpPP
#from bo.pp.pp_gp_stan import StanGpPP
from bo.pp.pp_gp_my_distmat import MyGpDistmatPP

class AcqMapper(object):
  """ Class to manage acqmap (acquisition map). """

  def __init__(self, data, amp, print_flag=True):
    """ Constructor
        Parameters:
          amp - Namespace of acqmap params
          print_flag - True or False
    """
    self.data = data
    self.set_am_params(amp)
    #self.setup_acqmap()
    if print_flag: self.print_str()

  def set_am_params(self, amp):
    """ Set the acqmap params.
        Inputs:
          amp - Namespace of acqmap parameters """
    self.amp = amp

  def get_acqmap(self, xin_is_list=True):
    """ Return acqmap.
        Inputs: xin_is_list True if input to acqmap is a list of xin """
    # Potentially do acqmap setup here. Could include inference,
    # caching/computing quantities, instantiating objects used in acqmap
    # definition. This becomes important when we do sequential opt of acqmaps.
    return self.acqmap_list if xin_is_list else self.acqmap_single

  def acqmap_list(self, xin_list):
    """ Acqmap defined on a list of xin. """

    def get_trans_data():
      """ Returns transformed data. """
      dt = DataTransformer(self.data.y, False)
      return Namespace(X=self.data.X, y=dt.transform_data(self.data.y))

    def apply_acq_to_pmlist(pmlist, acq_str, trans_data):
      """ Apply acquisition to pmlist. """
      acqp = Namespace(acq_str=acq_str, pmout_str='sample')
      acq = Acquisitioner(trans_data, acqp, False)
      acqfn = acq.acq_method
      return [acqfn(p) for p in pmlist]

    def georgegp_acqmap(acq_str):
      """ Acqmaps for GeorgeGpPP """
      trans_data = get_trans_data()
      pp = GeorgeGpPP(trans_data, self.amp.modelp, False)
      pmlist = pp.sample_pp_pred(self.amp.nppred, xin_list) if acq_str=='ts' \
        else pp.sample_pp_post_pred(self.amp.nppred, xin_list)
      return apply_acq_to_pmlist(pmlist, acq_str, trans_data)

    def stangp_acqmap(acq_str):
      """ Acqmaps for StanGpPP """
      trans_data = get_trans_data()
      pp = StanGpPP(trans_data, self.amp.modelp, False)
      pp.infer_post_and_update_samples(print_result=True)
      pmlist, _ = pp.sample_pp_pred(self.amp.nppred, xin_list) if acq_str=='ts' \
        else pp.sample_pp_post_pred(self.amp.nppred, xin_list, full_cov=True, \
        nloop=np.min([50,self.amp.nppred]))
      return apply_acq_to_pmlist(pmlist, acq_str, trans_data)

    def mygpdistmat_acqmap(acq_str):
      """ Acqmaps for MyGpDistmatPP """
      trans_data = get_trans_data()
      pp = MyGpDistmatPP(trans_data, self.amp.modelp, False)
      pp.infer_post_and_update_samples(print_result=True)
      pmlist, _ = pp.sample_pp_pred(self.amp.nppred, xin_list) if acq_str=='ts' \
        else pp.sample_pp_post_pred(self.amp.nppred, xin_list, full_cov=True)
      return apply_acq_to_pmlist(pmlist, acq_str, trans_data)

    # Mapping of am_str to acqmap
    if self.amp.am_str=='georgegp_ei':
      return georgegp_acqmap('ei')
    elif self.amp.am_str=='georgegp_pi':
      return georgegp_acqmap('pi')
    elif self.amp.am_str=='georgegp_ucb':
      return georgegp_acqmap('ucb')
    elif self.amp.am_str=='georgegp_ts':
      return georgegp_acqmap('ts')
    elif self.amp.am_str=='stangp_ei':
      return stangp_acqmap('ei')
    elif self.amp.am_str=='stangp_pi':
      return stangp_acqmap('pi')
    elif self.amp.am_str=='stangp_ucb':
      return stangp_acqmap('ucb')
    elif self.amp.am_str=='stangp_ts':
      return stangp_acqmap('ts')
    elif self.amp.am_str=='mygpdistmat_ei':
      return mygpdistmat_acqmap('ei')
    elif self.amp.am_str=='mygpdistmat_pi':
      return mygpdistmat_acqmap('pi')
    elif self.amp.am_str=='mygpdistmat_ucb':
      return mygpdistmat_acqmap('ucb')
    elif self.amp.am_str=='mygpdistmat_ts':
      return mygpdistmat_acqmap('ts')
    elif self.amp.am_str=='null':
      return [0. for xin in xin_list]

  def acqmap_single(self, xin):
    """ Acqmap defined on a single xin. Returns acqmap(xin) value, not list. """
    return self.acqmap_list([xin])[0]

  def print_str(self):
    """ Print a description string """
    print('*AcqMapper with amp='+str(self.amp)
      +'.\n-----')
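
if __name__ == '__main__':
  # Minimal usage sketch with the 'null' acquisition map, which scores every
  # candidate as 0.0 (the GP-based maps additionally need data and model params).
  amp = Namespace(am_str='null', modelp=Namespace(), nppred=10)
  am = AcqMapper(data=None, amp=amp, print_flag=False)
  print(am.get_acqmap()(['arch_a', 'arch_b']))  # -> [0.0, 0.0]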


================================================
FILE: bo/acq/acqopt.py
================================================
"""
Classes to perform acquisition function optimization.
"""

from argparse import Namespace
import numpy as np

class AcqOptimizer(object):
  """ Class to perform acquisition function optimization """

  def __init__(self, optp=None, print_flag=True):
    """ Constructor
        Inputs:
          optp - Namespace of opt parameters
          print_flag - True or False
    """
    self.set_opt_params(optp)
    if print_flag: self.print_str()

  def set_opt_params(self, optp):
    """ Set the optimizer params.
        Inputs:
          optp - Namespace of optimizer parameters """
    if optp is None:
      optp = Namespace(opt_str='rand', max_iter=1000)
    self.optp = optp

  def optimize(self, dom, am):
    """ Optimize acqfn(probmap(x)) over x in domain """
    if self.optp.opt_str=='rand':
      return self.optimize_rand(dom, am)

  def optimize_rand(self, dom, am):
    """ Optimize acqmap(x) over domain via random search """
    xin_list = dom.unif_rand_sample(self.optp.max_iter)
    amlist = am.acqmap_list(xin_list)
    return xin_list[np.argmin(amlist)]

  # Utilities 
  def print_str(self):
    """ print a description string """
    print('*AcqOptimizer with optp='+str(self.optp)
      +'.\n-----')
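
if __name__ == '__main__':
  # Minimal usage sketch: random-search acquisition optimization on a toy 1-d
  # problem. The stub classes below only mimic the interfaces that optimize()
  # uses; real runs pass a RealDomain/ListDomain and an AcqMapper.
  class _ToyDomain:
    def unif_rand_sample(self, n):
      return list(np.random.rand(n))
  class _ToyAcqMap:
    def acqmap_list(self, xin_list):
      return [(x - 0.3) ** 2 for x in xin_list]  # minimized near x = 0.3
  ao = AcqOptimizer(print_flag=False)
  print('best x found:', ao.optimize(_ToyDomain(), _ToyAcqMap()))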


================================================
FILE: bo/acq/acquisition.py
================================================
"""
Classes to manage acquisition functions.
"""

from argparse import Namespace
import numpy as np
from scipy.stats import norm

class Acquisitioner(object):
  """ Class to manage acquisition functions """

  def __init__(self, data, acqp=None, print_flag=True):
    """ Constructor
        Parameters:
          acqp - Namespace of acquisition parameters
          print_flag - True or False
    """
    self.data = data
    self.set_acq_params(acqp)
    self.set_acq_method()
    if print_flag: self.print_str()

  def set_acq_params(self, acqp):
    """ Set the acquisition params.
        Parameters:
          acqp - Namespace of acquisition parameters """
    if acqp is None:
      acqp = Namespace(acq_str='ei', pmout_str='sample')
    self.acqp = acqp

  def set_acq_method(self):
    """ Set the acquisition method """
    if self.acqp.acq_str=='ei': self.acq_method = self.ei
    if self.acqp.acq_str=='pi': self.acq_method = self.pi
    if self.acqp.acq_str=='ts': self.acq_method = self.ts
    if self.acqp.acq_str=='ucb': self.acq_method = self.ucb
    if self.acqp.acq_str=='rand': self.acq_method = self.rand
    if self.acqp.acq_str=='null': self.acq_method = self.null
    #if self.acqp.acqStr=='map': return self.map

  def ei(self, pmout):
    """ Expected improvement (EI) """
    if self.acqp.pmout_str=='sample':
      return self.bbacq_ei(pmout)

  def pi(self, pmout):
    """ Probability of improvement (PI) """
    if self.acqp.pmout_str=='sample':
      return self.bbacq_pi(pmout)

  def ucb(self, pmout):
    """ Upper (lower) confidence bound (UCB) """
    if self.acqp.pmout_str=='sample':
      return self.bbacq_ucb(pmout)

  def ts(self, pmout):
    """ Thompson sampling (TS) """
    if self.acqp.pmout_str=='sample':
      return self.bbacq_ts(pmout)

  def rand(self, pmout):
    """ Uniform random sampling """
    return np.random.random()

  def null(self, pmout):
    """ Return constant 0. """
    return 0.

  # Black Box Acquisition Functions
  def bbacq_ei(self, pmout_samp, normal=False):
    """ Black box acquisition: BB-EI
        Input: pmout_samp: post-pred samples - np array (shape=(nsamp,1))
        Returns: EI acq value """
    youts = np.array(pmout_samp).flatten()
    nsamp = youts.shape[0]
    if normal:
      mu = np.mean(youts)
      sig = np.std(youts)
      gam = (self.data.y.min() - mu) / sig
      eiVal = -1*sig*(gam*norm.cdf(gam) + norm.pdf(gam))
    else:
      diffs = self.data.y.min() - youts
      ind_below_min = np.argwhere(diffs>0)
      eiVal = -1*np.sum(diffs[ind_below_min])/float(nsamp) if \
        len(ind_below_min)>0 else 0
    return eiVal

  def bbacq_pi(self, pmout_samp, normal=False):
    """ Black box acquisition: BB-PI
        Input: pmout_samp: post-pred samples - np array (shape=(nsamp,1))
        Returns: PI acq value """
    youts = np.array(pmout_samp).flatten()
    nsamp = youts.shape[0]
    if normal:
      mu = np.mean(youts)
      sig = np.sqrt(np.var(youts))
      piVal = -1*norm.cdf(self.data.y.min(),loc=mu,scale=sig)
    else:
      piVal = -1*len(np.argwhere(youts<self.data.y.min()))/float(nsamp)
    return piVal

  def bbacq_ucb(self, pmout_samp, beta=0.5, normal=True):
    """ Black box acquisition: BB-UCB
        Input: pmout_samp: post-pred samples - np array (shape=(nsamp,1))
        Returns: UCB acq value """
    youts = np.array(pmout_samp).flatten()
    nsamp = youts.shape[0]
    if normal:
      ucbVal = np.mean(youts) - beta*np.sqrt(np.var(youts))
    else:
      # TODO replace below with nonparametric ucb estimate
      ucbVal = np.mean(youts) - beta*np.sqrt(np.var(youts))
    return ucbVal

  def bbacq_ts(self, pmout_samp):
    """ Black box acquisition: BB-TS
        Input: pmout_samp: post-pred samples - np array (shape=(nsamp,1))
        Returns: TS acq value """
    return pmout_samp.mean()

  # Utilities
  def print_str(self):
    """ print a description string """
    print('*Acquisitioner with acqp='+str(self.acqp)+'.')
    print('-----')
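
if __name__ == '__main__':
  # Minimal usage sketch: score a batch of posterior-predictive samples with
  # the sample-based ("black box") EI acquisition. data.y holds observed
  # values; lower is better throughout this codebase, so more-negative
  # acquisition values are more promising.
  data = Namespace(y=np.array([[0.30], [0.25], [0.40]]))
  acq = Acquisitioner(data, Namespace(acq_str='ei', pmout_str='sample'), print_flag=False)
  samples = np.random.normal(loc=0.28, scale=0.05, size=(200, 1))
  print('BB-EI value:', acq.acq_method(samples))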


================================================
FILE: bo/bo/__init__.py
================================================
"""
Code for Bayesian optimization.
"""


================================================
FILE: bo/bo/probo.py
================================================
"""
Classes for ProBO (probabilistic programming BO) using makept strategy.
"""

import time
from argparse import Namespace
import subprocess
import os
import pickle
import numpy as np
from bo.fn.functionhandler import get_fh
from bo.ds.makept import main

class ProBO(object):
  """ Class to carry out ProBO (probabilistic programming BO) """

  def __init__(self, fn, search_space, aux_file_path, data=None, probop=None, printFlag=True):
    """ Constructor
        Parameters:
          fn - Function to query (experiment)
          search_space - search space object (used to build list domains)
          aux_file_path - path for pickling the sorted (x, y) pairs after each iteration
          data - Initial dataset Namespace (with keys: X, y)
          probop - probo parameters Namespace
    """
    self.data = data
    self.search_space = search_space
    self.set_probo_params(probop)
    self.set_fh(fn)
    self.set_tmpdir()
    self.auxpkl = aux_file_path
    if printFlag:
      self.print_str()

  def set_probo_params(self, probop):
    """ Set ProBO parameters """
    self.probop = probop

  def set_fh(self, fn):
    """ Set function handler """
    self.fh = get_fh(fn, self.data, self.probop.fhp)

  def set_tmpdir(self):
    """ Set tmp directory and files """
    if not os.path.exists(self.probop.tmpdir):
      os.makedirs(self.probop.tmpdir)
    self.configpkl = os.path.join(self.probop.tmpdir, 'config.pkl')
    self.datapkl = os.path.join(self.probop.tmpdir, 'data.pkl')
    self.nextptpkl = os.path.join(self.probop.tmpdir, 'nextpt.pkl')

  def run_bo(self, verbose=False):
    """ Main BO loop. """
    # Serialize makerp 
    with open(self.configpkl, 'wb') as f:
      pickle.dump(self.probop.makerp, f)
    print('*Saved self.probop.makerp as ' + self.configpkl + '.\n-----')
    # Iterate
    for iteridx in range(self.probop.niter):
      starttime = time.time()
      # Serialize current data
      with open(self.datapkl, 'wb') as f:
        pickle.dump(self.data, f)

      if not hasattr(self.probop, 'mode') or self.probop.mode == 'subprocess':
        subseed = np.random.randint(111111)
        subprocess.call(['python3', 'bo/ds/makept.py', '--configpkl', self.configpkl,
                         '--datapkl', self.datapkl, '--nextptpkl',
                         self.nextptpkl, '--seed', str(subseed)])
      elif self.probop.mode == 'single_process':
        args = Namespace(configpkl=self.configpkl, datapkl=self.datapkl, nextptpkl=self.nextptpkl,
            mode=self.probop.mode, iteridx=iteridx)
        main(args, self.search_space)

      # Call fn on nextpt
      nextpt = pickle.load(open(self.nextptpkl, 'rb'))
      self.fh.call_fn_and_add_data(nextpt)
      print('FINISHED QUERY', iteridx)
      if verbose and iteridx % 10 == 0:
        print('iter', iteridx)
        print('Data is:')
        print(self.data.y)
      itertime = time.time()-starttime
      if iteridx % 10 == 0:
        self.print_iter_info(iteridx, itertime)
      self.post_iteration()

  def print_iter_info(self, iteridx, itertime):
    """ Print information at end of an iteration. """
    print('*Last query results: xin = ' + str(self.data.X[-1]) +
          ', yout = ' + str(self.data.y[-1]) + '.')
    print('*Timing: iteration took ' + str(itertime) + ' seconds.')
    print('*Finished ProBO iter = ' + str(iteridx+1) +
          '.\n' + '==='*20)

  def print_str(self):
    """ print a description string """
    print('*ProBO (using makept) with probop='+str(self.probop)
          + '.\n-----')

  def post_iteration(self):
    """ Save all (x, y) pairs seen so far, sorted by y, to the aux pickle. """
    pairs = [(self.data.X[i], self.data.y[i]) for i in range(len(self.data.y))]
    pairs.sort(key=lambda pair: pair[1])
    with open(self.auxpkl, 'wb') as f:
      pickle.dump(pairs, f)
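
if __name__ == '__main__':
  # Minimal end-to-end sketch on a 1-d real domain with the 'null' acquisition
  # map (hypothetical settings; real experiments are configured by the caller,
  # e.g. metann_runner.py).
  probop = Namespace(
    niter=3, tmpdir='tmp_probo', mode='single_process',
    fhp=Namespace(fhstr='basic', namestr='noname'),
    makerp=Namespace(
      domp=Namespace(dom_str='real', ndimx=1, min_max=[(0., 1.)]),
      amp=Namespace(am_str='null', modelp=Namespace(), nppred=10),
      optp=Namespace(opt_str='rand', max_iter=100)))
  data = Namespace(X=np.array([[0.5]]), y=np.array([[0.25]]))
  fn = lambda x: float(np.sum(np.square(x)))
  probo = ProBO(fn, search_space=None, aux_file_path='tmp_probo/aux.pkl',
                data=data, probop=probop, printFlag=False)
  probo.run_bo()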





================================================
FILE: bo/dom/__init__.py
================================================
"""
Code for domain classes.
"""


================================================
FILE: bo/dom/list.py
================================================
"""
Classes for list (discrete set) domains.
"""

from argparse import Namespace
import numpy as np


class ListDomain(object):
  """ Class for defining sets defined by a list of elements """

  def __init__(self, search_space, domp=None, printFlag=True):
    """ Constructor
        Parameters:
          domp - domain parameters Namespace
    """
    self.set_domain_params(domp)
    self.search_space = search_space
    self.init_domain_list()
    if printFlag:
      self.print_str()

  def set_domain_params(self, domp):
    """ Set self.domp Namespace """
    self.domp = domp

  def init_domain_list(self):
    """ Initialize self.domain_list. """
    if self.domp.set_domain_list_auto:
      self.set_domain_list_auto()
    else:
      self.domain_list = None

  def set_domain_list_auto(self):
    self.domain_list = self.search_space.get_arch_list(self.domp.aux_file_path)

  def set_domain_list(self, domain_list):
    """ Set self.domain_list, containing elements of domain """
    self.domain_list = domain_list

  def is_in_domain(self, pt):
    """ Check if pt is in domain, and return True or False """
    return pt in self.domain_list

  def unif_rand_sample(self, n=1, replace=True):
    """ Draws a sample uniformly at random from domain, returns as a list of
        len n, with (default) or without replacement. """
    if replace:
      randind = np.random.randint(len(self.domain_list), size=n)
    else:
      randind = np.arange(min(n, len(self.domain_list)))
    return [self.domain_list[i] for i in randind]

  def print_str(self):
    """ Print a description string """
    print('*ListDomain with domp = ' + str(self.domp) + '.')
    print('-----')
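
if __name__ == '__main__':
  # Minimal usage sketch with a manually supplied domain list (no search
  # space object is needed when set_domain_list_auto is False).
  domp = Namespace(set_domain_list_auto=False)
  dom = ListDomain(search_space=None, domp=domp, printFlag=False)
  dom.set_domain_list(['arch_a', 'arch_b', 'arch_c'])
  print(dom.unif_rand_sample(n=2), dom.is_in_domain('arch_a'))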


================================================
FILE: bo/dom/real.py
================================================
"""
Classes for real coordinate space domains.
"""

from argparse import Namespace
import numpy as np

class RealDomain(object):
  """ Class for defining sets in real coordinate (Euclidean) space """

  def __init__(self, domp=None, printFlag=True):
    """ Constructor
        Parameters:
          domp - domain parameters Namespace
    """
    self.set_domain_params(domp)
    self.ndimx = self.domp.ndimx
    if printFlag:
      self.print_str()

  def set_domain_params(self, domp):
    """ Set self.domp Namespace """
    if domp is None:
      domp = Namespace()
      domp.ndimx = 1
      domp.min_max = [(0,1)]*domp.ndimx
    self.domp = domp

  def is_in_domain(self, pt):
    """ Check if pt is in domain, and return True or False """
    pt = np.array(pt).reshape(-1)
    if pt.shape[0] != self.ndimx:
      ret=False
    else:
      bool_list = [pt[i]>=self.domp.min_max[i][0] and
        pt[i]<=self.domp.min_max[i][1] for i in range(self.ndimx)]
      ret=False if False in bool_list else True
    return ret

  def unif_rand_sample(self, n=1):
    """ Draws a sample uniformly at random from domain """
    li = [np.random.uniform(mm[0], mm[1], n) for mm in self.domp.min_max]
    return np.array(li).T

  def print_str(self):
    """ Print a description string """
    print('*RealDomain with domp = ' + str(self.domp) + '.')
    print('-----')
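
if __name__ == '__main__':
  # Minimal usage sketch: a 2-d box domain and uniform samples drawn from it.
  domp = Namespace(ndimx=2, min_max=[(0., 1.), (-1., 1.)])
  dom = RealDomain(domp, printFlag=False)
  pts = dom.unif_rand_sample(n=3)        # np array of shape (3, 2)
  print(pts, dom.is_in_domain(pts[0]))   # every sampled point lies inside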


================================================
FILE: bo/ds/__init__.py
================================================
"""
Code for makept (serializing and subprocesses) strategy.
"""


================================================
FILE: bo/ds/makept.py
================================================
"""
Make a point in a domain, and serialize it.
"""

import sys
import os
sys.path.append(os.path.expanduser('./'))
from argparse import Namespace, ArgumentParser
import pickle
import time
import numpy as np
from bo.dom.real import RealDomain
from bo.dom.list import ListDomain
from bo.acq.acqmap import AcqMapper
from bo.acq.acqopt import AcqOptimizer

def main(args, search_space, printinfo=False):
  starttime = time.time()
  
  # Load config and data
  makerp = pickle.load(open(args.configpkl, 'rb'))
  data = pickle.load(open(args.datapkl, 'rb'))

  if hasattr(args, 'mode') and args.mode == 'single_process':
    makerp.domp.mode = args.mode
    makerp.domp.iteridx = args.iteridx
    makerp.amp.modelp.mode = args.mode
  else:
    np.random.seed(args.seed)
  # Instantiate Domain, AcqMapper, AcqOptimizer
  dom = get_domain(makerp.domp, search_space)
  am = AcqMapper(data, makerp.amp, False)
  ao = AcqOptimizer(makerp.optp, False)
  # Optimize over domain to get nextpt 
  nextpt = ao.optimize(dom, am)
  # Serialize nextpt
  with open(args.nextptpkl, 'wb') as f:
    pickle.dump(nextpt, f)
  # Print
  itertime = time.time()-starttime
  if printinfo: print_info(nextpt, itertime, args.nextptpkl)

def get_domain(domp, search_space):
  """ Return Domain object. """
  if not hasattr(domp, 'dom_str'):
    domp.dom_str = 'real'
  if domp.dom_str=='real':
    return RealDomain(domp, False)
  elif domp.dom_str=='list':
    return ListDomain(search_space, domp, False)

def print_info(nextpt, itertime, nextptpkl):
  print('*Found nextpt = ' + str(nextpt) + '.')
  print('*Saved nextpt as ' + nextptpkl + '.')
  print('*Timing: makept took ' + str(itertime) + ' seconds.')
  print('-----')

if __name__ == "__main__":
  parser = ArgumentParser(description='Args for a single instance of acquisition optimization.')
  parser.add_argument('--seed', dest='seed', type=int, default=1111)
  parser.add_argument('--configpkl', dest='configpkl', type=str, default='config.pkl')
  parser.add_argument('--datapkl', dest='datapkl', type=str, default='data.pkl')
  parser.add_argument('--nextptpkl', dest='nextptpkl', type=str, default='nextpt.pkl')
  args = parser.parse_args()
  # The CLI path builds a real domain by default; search_space is only
  # needed for list domains, so None is passed here.
  main(args, search_space=None, printinfo=False)


================================================
FILE: bo/fn/__init__.py
================================================
"""
Code for synthetic functions to query (perform experiment on).
"""


================================================
FILE: bo/fn/functionhandler.py
================================================
"""
Classes to handle functions.
"""

from argparse import Namespace
import numpy as np

def get_fh(fn, data=None, fhp=None, print_flag=True):
  """ Returns a function handler object """
  if fhp is None:
    fhp=Namespace(fhstr='basic', namestr='noname')
  # Return FH object
  if fhp.fhstr=='basic':
    return BasicFH(fn, data, fhp, print_flag)
  elif fhp.fhstr=='extrainfo':
    return ExtraInfoFH(fn, data, fhp, print_flag)
  elif fhp.fhstr=='nannn':
    return NanNNFH(fn, data, fhp, print_flag)
  elif fhp.fhstr=='replacenannn':
    return ReplaceNanNNFH(fn, data, fhp, print_flag)
  elif fhp.fhstr=='object':
    return ObjectFH(fn, data, fhp, print_flag)


class BasicFH(object):
  """ Class to handle basic functions, which map from an array xin to a real
      value yout. """

  def __init__(self, fn, data=None, fhp=None, print_flag=True):
    """ Constructor.
        Inputs:
          fhp - Namespace of function handler params
          print_flag - True or False
    """
    self.fn = fn
    self.data = data
    self.fhp = fhp
    if print_flag: self.print_str()

  def call_fn_and_add_data(self, xin):
    """ Call self.fn(xin), and update self.data """
    yout = self.fn(xin)
    print('new datapoint score', yout)
    self.add_data_single(xin, yout)

  def add_data_single(self, xin, yout):
    """ Update self.data with a single xin yout pair.
        Inputs:
          xin: np.array size=(1, -1)
          yout: np.array size=(1, 1) """
    xin = np.array(xin).reshape(1, -1)
    yout = np.array(yout).reshape(1, 1)
    newdata = Namespace(X=xin, y=yout)
    self.add_data(newdata)

  def add_data(self, newdata):
    """ Update self.data with newdata Namespace.
        Inputs:
          newdata: Namespace with fields X and y """
    if self.data is None:
      self.data = newdata
    else:
      self.data.X = np.concatenate((self.data.X, newdata.X), 0)
      self.data.y = np.concatenate((self.data.y, newdata.y), 0)

  def print_str(self):
    """ Print a description string. """
    print('*BasicFH with fhp='+str(self.fhp)
      +'.\n-----')


class ExtraInfoFH(BasicFH):
  """ Class to handle functions that map from an array xin to a real
      value yout, but also return extra info """

  def __init__(self, fn, data=None, fhp=None, print_flag=True):
    """ Constructor.
        Inputs:
          fhp - Namespace of function handler params
          print_flag - True or False
    """
    super(ExtraInfoFH, self).__init__(fn, data, fhp, False)
    self.extrainfo = []
    if print_flag: self.print_str()

  def call_fn_and_add_data(self, xin):
    """ Call self.fn(xin), and update self.data """
    yout, exinf = self.fn(xin)
    self.add_data_single(xin, yout)
    self.extrainfo.append(exinf)

  def print_str(self):
    """ Print a description string. """
    print('*ExtraInfoFH with fhp='+str(self.fhp)
      +'.\n-----')


class NanNNFH(BasicFH):
  """ Class to handle NN functions that map from an array xin to either
      a real value yout or np.NaN, but also return extra info """

  def __init__(self, fn, data=None, fhp=None, print_flag=True):
    """ Constructor.
        Inputs:
          fhp - Namespace of function handler params
          print_flag - True or False
    """
    super(NanNNFH, self).__init__(fn, data, fhp, False)
    self.extrainfo = []
    if print_flag: self.print_str()

  def call_fn_and_add_data(self, xin):
    """ Call self.fn(xin), and update self.data """
    timethresh = 60.
    yout, walltime = self.fn(xin)
    if walltime > timethresh:
      self.add_data_single_nan(xin)
    else:
      self.add_data_single(xin, yout)
      self.possibly_init_xnan()
    exinf = Namespace(xin=xin, yout=yout, walltime=walltime)
    self.extrainfo.append(exinf)

  def add_data_single_nan(self, xin):
    """ Update self.data.X_nan with a single xin.
        Inputs:
          xin: np.array size=(1, -1) """
    xin = xin.reshape(1,-1)
    newdata = Namespace(X = np.ones((0, xin.shape[1])),
                        y = np.ones((0, 1)),
                        X_nan = xin)
    self.add_data_nan(newdata)

  def add_data_nan(self, newdata):
    """ Update self.data with newdata Namespace.
        Inputs:
          newdata: Namespace with fields X, y, X_nan """
    if self.data is None:
      self.data = newdata
    else:
      self.data.X_nan = np.concatenate((self.data.X_nan, newdata.X_nan), 0)

  def possibly_init_xnan(self):
    """ If self.data doesn't have X_nan, then create it. """
    if not hasattr(self.data, 'X_nan'):
      self.data.X_nan = np.ones((0, self.data.X.shape[1]))

  def print_str(self):
    """ Print a description string. """
    print('*NanNNFH with fhp='+str(self.fhp)
      +'.\n-----')


class ReplaceNanNNFH(BasicFH):
  """ Class to handle NN functions that map from an array xin to either
      a real value yout or np.NaN. If np.NaN, we replace it with a large
      positive value. We also return extra info """

  def __init__(self, fn, data=None, fhp=None, print_flag=True):
    """ Constructor.
        Inputs:
          fhp - Namespace of function handler params
          print_flag - True or False
    """
    super(ReplaceNanNNFH, self).__init__(fn, data, fhp, False)
    self.extrainfo = []
    if print_flag: self.print_str()

  def call_fn_and_add_data(self, xin):
    """ Call self.fn(xin), and update self.data """
    timethresh = 60.
    replace_nan_val = 5.
    yout, walltime = self.fn(xin)
    if walltime > timethresh:
      yout = replace_nan_val
    self.add_data_single(xin, yout)
    exinf = Namespace(xin=xin, yout=yout, walltime=walltime)
    self.extrainfo.append(exinf)

  def print_str(self):
    """ Print a description string. """
    print('*ReplaceNanNNFH with fhp='+str(self.fhp)
      +'.\n-----')


class ObjectFH(object):
  """ Class to handle basic functions, which map from some object xin to a real
      value yout. """

  def __init__(self, fn, data=None, fhp=None, print_flag=True):
    """ Constructor.
        Inputs:
          fhp - Namespace of function handler params
          print_flag - True or False
    """
    self.fn = fn
    self.data = data
    self.fhp = fhp
    if print_flag: self.print_str()

  def call_fn_and_add_data(self, xin):
    """ Call self.fn(xin), and update self.data """
    yout = self.fn(xin)
    self.add_data_single(xin, yout)

  def add_data_single(self, xin, yout):
    """ Update self.data with a single xin yout pair. """
    newdata = Namespace(X=[xin], y=np.array(yout).reshape(1, 1))
    self.add_data(newdata)

  def add_data(self, newdata):
    """ Update self.data with newdata Namespace.
        Inputs:
          newdata: Namespace with fields X and y """
    if self.data is None:
      self.data = newdata
    else:
      self.data.X.extend(newdata.X)
      self.data.y = np.concatenate((self.data.y, newdata.y), 0)

  def print_str(self):
    """ Print a description string. """
    print('*ObjectFH with fhp='+str(self.fhp)
      +'.\n-----')
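
if __name__ == '__main__':
  # Minimal usage sketch: wrap a plain function in the default BasicFH handler
  # and record a single query.
  fh = get_fh(lambda x: float(np.sum(x)), print_flag=False)
  fh.call_fn_and_add_data(np.array([1., 2., 3.]))
  print(fh.data.X, fh.data.y)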


================================================
FILE: bo/pp/__init__.py
================================================
"""
Code for defining and running probabilistic programs.
"""


================================================
FILE: bo/pp/gp/__init__.py
================================================
"""
Code for Gaussian process (GP) utilities and functions.
"""


================================================
FILE: bo/pp/gp/gp_utils.py
================================================
"""
Utilities for Gaussian process (GP) inference
"""

import numpy as np
from scipy.linalg import solve_triangular
from scipy.spatial.distance import cdist 
#import GPy as gpy


def kern_gibbscontext(xmatcon1, xmatcon2, xmatact1, xmatact2, theta, alpha,
  lscon, whichlsfn=1):
  """ Gibbs kernel (ls_fn of context only) """
  actdim = xmatact1.shape[1]
  lsarr1 = ls_fn(xmatcon1, theta, whichlsfn).flatten()
  lsarr2 = ls_fn(xmatcon2, theta, whichlsfn).flatten()
  sum_sq_ls = np.add.outer(lsarr1, lsarr2)
  inexp = -1. * np.divide(cdist(xmatact1, xmatact2, 'sqeuclidean'), sum_sq_ls)
  prod_ls = np.outer(lsarr1, lsarr2)
  #coef = np.power(np.divide(2*prod_ls, sum_sq_ls), actdim/2.) # Correct
  coef = 1.
  kern_gibbscontext_only_ns = np.multiply(coef, np.exp(inexp))
  kern_expquad_ns = kern_exp_quad_noscale(xmatcon1, xmatcon2, lscon)
  return alpha**2 * np.multiply(kern_gibbscontext_only_ns, kern_expquad_ns)

def kern_gibbs1d(xmat1, xmat2, theta, alpha):
  """ Gibbs kernel in 1d """
  lsarr1 = ls_fn(xmat1, theta).flatten()
  lsarr2 = ls_fn(xmat2, theta).flatten()
  sum_sq_ls = np.add.outer(lsarr1, lsarr2)
  prod_ls = np.outer(lsarr1, lsarr2) #TODO product of this for each dim
  coef = np.sqrt(np.divide(2*prod_ls, sum_sq_ls))
  inexp = cdist(xmat1, xmat2, 'sqeuclidean') / sum_sq_ls #TODO sum of this for each dim
  return alpha**2 * coef * np.exp(-1 * inexp)

def ls_fn(xmat, theta, whichlsfn=1):
  theta = np.array(theta).reshape(-1,1)
  if theta.shape[0]==2:
    if whichlsfn==1 or whichlsfn==2:
      return np.log(1 + np.exp(theta[0][0] + np.matmul(xmat,theta[1])))   # softplus transform
    elif whichlsfn==3:
      return np.exp(theta[0][0] + np.matmul(xmat,theta[1]))               # exp transform
  elif theta.shape[0]==3:
    if whichlsfn==1:
      return np.log(1 + np.exp(theta[0][0] + np.matmul(xmat,theta[1]) +
        np.matmul(np.power(xmat,2),theta[2])))                            # softplus transform
    elif whichlsfn==2:
      return np.log(1 + np.exp(theta[0][0] + np.matmul(xmat,theta[1]) +
        np.matmul(np.abs(xmat),theta[2])))                                # softplus on abs transform
    elif whichlsfn==3:
      return np.exp(theta[0][0] + np.matmul(xmat,theta[1]) +
        np.matmul(np.power(xmat,2),theta[2]))                             # exp transform
  else:
    print('ERROR: theta parameter is incorrect.')

def kern_matern32(xmat1, xmat2, ls, alpha):
  """ Matern 3/2 kernel, currently using GPy """
  kern = gpy.kern.Matern32(input_dim=xmat1.shape[1], variance=alpha**2,
    lengthscale=ls)
  return kern.K(xmat1,xmat2)

def kern_exp_quad(xmat1, xmat2, ls, alpha):
  """ Exponentiated quadratic kernel function aka squared exponential kernel
      aka RBF kernel """
  return alpha**2 * kern_exp_quad_noscale(xmat1, xmat2, ls)

def kern_exp_quad_noscale(xmat1, xmat2, ls):
  """ Exponentiated quadratic kernel function aka squared exponential kernel
      aka RBF kernel, without scale parameter. """
  sq_norm = (-1/(2 * ls**2)) * cdist(xmat1, xmat2, 'sqeuclidean')
  return np.exp(sq_norm)

def squared_euc_distmat(xmat1, xmat2, coef=1.):
  """ Distance matrix of squared euclidean distance (multiplied by coef)
      between points in xmat1 and xmat2. """
  return coef * cdist(xmat1, xmat2, 'sqeuclidean')

def kern_distmat(xmat1, xmat2, ls, alpha, distfn):
  """ Kernel for a given distmat, via passed-in distfn (which is assumed to be
      fn of xmat1 and xmat2 only) """
  distmat = distfn(xmat1, xmat2)
  sq_norm = -distmat / ls**2
  return alpha**2 * np.exp(sq_norm)

def get_cholesky_decomp(k11_nonoise, sigma, psd_str):
  """ Returns cholesky decomposition """
  if psd_str == 'try_first':
    k11 = k11_nonoise + sigma**2 * np.eye(k11_nonoise.shape[0])
    try:
      return stable_cholesky(k11, False)
    except np.linalg.linalg.LinAlgError:
      return get_cholesky_decomp(k11_nonoise, sigma, 'project_first')
  elif psd_str == 'project_first':
    k11_nonoise = project_symmetric_to_psd_cone(k11_nonoise)
    return get_cholesky_decomp(k11_nonoise, sigma, 'is_psd')
  elif psd_str == 'is_psd':
    k11 = k11_nonoise + sigma**2 * np.eye(k11_nonoise.shape[0])
    return stable_cholesky(k11)

def stable_cholesky(mmat, make_psd=True):
  """ Returns a 'stable' cholesky decomposition of mmat """
  if mmat.size == 0:
    return mmat
  try:
    lmat = np.linalg.cholesky(mmat)
  except np.linalg.linalg.LinAlgError as e:
    if not make_psd:
      raise e
    diag_noise_power = -11
    max_mmat = np.diag(mmat).max()
    diag_noise = np.diag(mmat).max() * 1e-11
    break_loop = False
    while not break_loop:
      try:
        lmat = np.linalg.cholesky(mmat + ((10**diag_noise_power) * max_mmat)  *
          np.eye(mmat.shape[0]))
        break_loop = True
      except np.linalg.linalg.LinAlgError:
        if diag_noise_power > -9:
          print('stable_cholesky failed with diag_noise_power=%d.'%(diag_noise_power))
        diag_noise_power += 1
      if diag_noise_power >= 5:
        print('***** stable_cholesky failed: added diag noise = %e'%(diag_noise))
  return lmat

def project_symmetric_to_psd_cone(mmat, is_symmetric=True, epsilon=0):
  """ Project symmetric matrix mmat to the PSD cone """
  if is_symmetric:
    try:
      eigvals, eigvecs = np.linalg.eigh(mmat)
    except np.linalg.LinAlgError:
      print('LinAlgError encountered with np.eigh. Defaulting to eig.')
      eigvals, eigvecs = np.linalg.eig(mmat)
      eigvals = np.real(eigvals)
      eigvecs = np.real(eigvecs)
  else:
    eigvals, eigvecs = np.linalg.eig(mmat)
  clipped_eigvals = np.clip(eigvals, epsilon, np.inf)
  return (eigvecs * clipped_eigvals).dot(eigvecs.T)

def solve_lower_triangular(amat, b):
  """ Solves amat*x=b when amat is lower triangular """
  return solve_triangular_base(amat, b, lower=True)

def solve_upper_triangular(amat, b):
  """ Solves amat*x=b when amat is upper triangular """
  return solve_triangular_base(amat, b, lower=False)

def solve_triangular_base(amat, b, lower):
  """ Solves amat*x=b when amat is a triangular matrix. """
  if amat.size == 0 and b.shape[0] == 0:
    return np.zeros((b.shape))
  else:
    return solve_triangular(amat, b, lower=lower)

def sample_mvn(mu, covmat, nsamp):
  """ Sample from multivariate normal distribution with mean mu and covariance
      matrix covmat """
  mu = mu.reshape(-1,)
  ndim = len(mu)
  lmat = stable_cholesky(covmat)
  umat = np.random.normal(size=(ndim, nsamp))
  return lmat.dot(umat).T + mu
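
if __name__ == '__main__':
  # Minimal usage sketch: draw three samples from a zero-mean GP prior with
  # the squared exponential kernel, using the utilities above.
  xgrid = np.linspace(0, 1, 50).reshape(-1, 1)
  covmat = kern_exp_quad(xgrid, xgrid, ls=0.2, alpha=1.0)
  draws = sample_mvn(np.zeros(50), covmat, nsamp=3)  # shape (3, 50)
  print(draws.shape)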


================================================
FILE: bo/pp/pp_core.py
================================================
"""
Base classes for probabilistic programs.
"""

import pickle

class DiscPP(object):
  """ Parent class for discriminative probabilistic programs """

  def __init__(self):
    """ Constructor """
    self.sample_list = []
    if not hasattr(self,'data'):
      raise NotImplementedError('Implement var data in a child class')
    #if not hasattr(self,'ndimx'):
      #raise NotImplementedError('Implement var ndimx in a child class')
    #if not hasattr(self,'ndataInit'):
      #raise NotImplementedError('Implement var ndataInit in a child class')

  def infer_post_and_update_samples(self,nsamp):
    """ Run an inference algorithm (given self.data), draw samples from the
        posterior, and store in self.sample_list. """
    raise NotImplementedError('Implement method in a child class')

  def sample_pp_post_pred(self,nsamp,input_list):
    """ Sample nsamp times from PP posterior predictive, for each x-input in
    input_list """
    raise NotImplementedError('Implement method in a child class')

  def sample_pp_pred(self,nsamp,input_list,lv_list=None):
    """ Sample nsamp times from PP predictive for parameter lv, for each
    x-input in input_list. If lv is None, draw it uniformly at random
    from self.sample_list. """
    raise NotImplementedError('Implement method in a child class')

  def add_new_data(self,newData):
    """ Add data (newData) to self.data """
    raise NotImplementedError('Implement method in a child class')

  def get_namespace_to_save(self):
    """ Return namespace containing object info (to save to file) """
    raise NotImplementedError('Implement method in a child class')

  def save_namespace_to_file(self,fileStr,printFlag):
    """ Saves results from get_namespace_to_save in fileStr """
    ppNamespaceToSave = self.get_namespace_to_save()
    ff = open(fileStr,'wb')
    pickle.dump(ppNamespaceToSave,ff)
    ff.close()
    if printFlag:
      print('*Saved DiscPP Namespace in pickle file: ' +fileStr+'\n-----')
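
# A minimal sketch, assuming only the interface above, of what a concrete
# child class must provide. ConstantPP is hypothetical (used nowhere else in
# the repo); it treats every input as the same fixed normal.
from argparse import Namespace
import numpy as np

class ConstantPP(DiscPP):
  """ Toy child class: posterior predictive is one fixed normal """

  def __init__(self, data):
    self.data = data                    # must exist before DiscPP.__init__
    super(ConstantPP, self).__init__()

  def infer_post_and_update_samples(self, nsamp=1):
    self.sample_list = [Namespace(mu=self.data.y.mean(),
                                  sig=self.data.y.std())]

  def sample_pp_post_pred(self, nsamp, input_list):
    s = self.sample_list[0]
    return [np.random.normal(s.mu, s.sig, size=(nsamp, 1))
            for _ in input_list]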


================================================
FILE: bo/pp/pp_gp_george.py
================================================
"""
Classes for hierarchical GP models with George PP
"""

from argparse import Namespace
import numpy as np
import scipy.optimize as spo
import george
import emcee
from bo.pp.pp_core import DiscPP

class GeorgeGpPP(DiscPP):
  """ Hierarchical GPs implemented with George """

  def __init__(self,data=None,modelp=None,printFlag=True):
    """ Constructor """
    self.set_data(data)
    self.set_model_params(modelp)
    self.ndimx = self.modelp.ndimx
    self.set_kernel()
    self.set_model()
    super(GeorgeGpPP,self).__init__()
    if printFlag:
      self.print_str()

  def set_data(self,data):
    if data is None:
      pass #TODO: handle case where there's no data
    self.data = data

  def set_model_params(self,modelp):
    if modelp is None:
      modelp = Namespace(ndimx=1, noiseVar=1e-3, kernLs=1.5, kernStr='mat',
        fitType='mle')
    self.modelp = modelp

  def set_kernel(self):
    """ Set kernel for GP """
    if self.modelp.kernStr=='mat':
      self.kernel = self.data.y.var() * \
        george.kernels.Matern52Kernel(self.modelp.kernLs, ndim=self.ndimx)
    if self.modelp.kernStr=='rbf': # NOTE: periodically produces errors
      self.kernel = self.data.y.var() * \
        george.kernels.ExpSquaredKernel(self.modelp.kernLs, ndim=self.ndimx)

  def set_model(self):
    """ Set GP regression model """
    self.model = self.get_model()
    self.model.compute(self.data.X)
    self.fit_hyperparams(printOut=False)

  def get_model(self):
    """ Returns GPRegression model """
    return george.GP(kernel=self.kernel,fit_mean=True)
  
  def fit_hyperparams(self,printOut=False):
    if self.modelp.fitType=='mle':
      spo.minimize(self.neg_log_like, self.model.get_parameter_vector(),
        jac=True)
    elif self.modelp.fitType=='bayes':
      self.nburnin = 200
      nsamp = 200
      nwalkers = 36
      gpdim = len(self.model)
      self.sampler = emcee.EnsembleSampler(nwalkers, gpdim, self.log_post)
      p0 = self.model.get_parameter_vector() + 1e-4*np.random.randn(nwalkers,
        gpdim)
      print('Running burn-in.')
      p0, _, _ = self.sampler.run_mcmc(p0, self.nburnin)
      print('Running main chain.')
      self.sampler.run_mcmc(p0, nsamp)
    if printOut:
      print('Final GP hyperparam (in opt or MCMC chain):')
      print(self.model.get_parameter_dict())

  def infer_post_and_update_samples(self):
    """ Update self.sample_list """
    self.sample_list = [None] #TODO: need to not-break ts fn in maker_bayesopt.py

  def sample_pp_post_pred(self,nsamp,input_list):
    """ Sample from posterior predictive of PP.
        Inputs:
          input_list - list of np arrays size=(-1,)
        Returns:
          list (len input_list) of np arrays (size=(nsamp,1))."""
    inputArray = np.array(input_list)
    if self.modelp.fitType=='mle':
      ppredArray = self.model.sample_conditional(self.data.y.flatten(),
        inputArray, nsamp).T
    elif self.modelp.fitType=='bayes':
      ppredArray = np.zeros(shape=[len(input_list),nsamp])
      for s in range(nsamp):
        walkidx = np.random.randint(self.sampler.chain.shape[0])
        sampidx = np.random.randint(self.nburnin, self.sampler.chain.shape[1])
        hparamSamp = self.sampler.chain[walkidx, sampidx]
        print('hparamSamp = ' + str(hparamSamp)) # TODO: remove print statement
        self.model.set_parameter_vector(hparamSamp)
        ppredArray[:,s] = self.model.sample_conditional(self.data.y.flatten(),
          inputArray, 1).flatten()
    return list(ppredArray) # each element is row in ppredArray matrix

  def sample_pp_pred(self,nsamp,input_list,lv=None):
    """ Sample from predictive of PP for parameter lv.
        Returns: list (len input_list) of np arrays (size (nsamp,1))."""
    if self.modelp.fitType=='bayes':
      print('*WARNING: fitType=bayes not implemented for sample_pp_pred. '
            'Reverting to fitType=mle')
      # TODO: Equivalent algo for fitType=='bayes':
      #   - draw posterior sample path over all xin in input_list
      #   - draw pred samples around sample path pt, based on noise model
    inputArray = np.array(input_list)
    samplePath = self.model.sample_conditional(self.data.y.flatten(),
      inputArray).reshape(-1,)
    return [np.random.normal(s,np.sqrt(self.modelp.noiseVar),nsamp).reshape(-1,)
      for s in samplePath]

  def neg_log_like(self,hparams):
    """ Compute and return the negative log likelihood for model
        hyperparameters hparams, as well as its gradient. """
    self.model.set_parameter_vector(hparams)
    g = self.model.grad_log_likelihood(self.data.y.flatten(), quiet=True)
    return -self.model.log_likelihood(self.data.y.flatten(), quiet=True), -g

  def log_post(self,hparams):
    """ Compute and return the log posterior density (up to constant of
        proportionality) for the model hyperparameters hparams. """
    # Uniform prior between -100 and 100, for each hyperparam
    if np.any((-100 > hparams[1:]) + (hparams[1:] > 100)):
      return -np.inf
    self.model.set_parameter_vector(hparams)
    return self.model.log_likelihood(self.data.y.flatten(), quiet=True)

  # Utilities
  def print_str(self):
    """ Print a description string """
    print('*GeorgeGpPP with modelp='+str(self.modelp)+'.')
    print('-----')
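
# A minimal usage sketch, assuming the george and emcee packages are
# installed; the 1-d toy data below is a hypothetical illustration.
if __name__ == '__main__':
  X = np.random.uniform(-2, 2, size=(15, 1))
  y = np.sin(X).flatten() + 0.03 * np.random.randn(15)
  gp = GeorgeGpPP(data=Namespace(X=X, y=y),
                  modelp=Namespace(ndimx=1, noiseVar=1e-3, kernLs=1.5,
                                   kernStr='mat', fitType='mle'))
  xtest = list(np.linspace(-2, 2, 5).reshape(-1, 1))
  ppred = gp.sample_pp_post_pred(10, xtest)  # 5 arrays of 10 samples each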


================================================
FILE: bo/pp/pp_gp_my_distmat.py
================================================
"""
Classes for GP models without any PP backend, using a given distance matrix.
"""

from argparse import Namespace
import time
import copy
import numpy as np
from scipy.spatial.distance import cdist 
from bo.pp.pp_core import DiscPP
from bo.pp.gp.gp_utils import kern_exp_quad, kern_matern32, \
  get_cholesky_decomp, solve_upper_triangular, solve_lower_triangular, \
  sample_mvn, squared_euc_distmat, kern_distmat
from bo.util.print_utils import suppress_stdout_stderr


class MyGpDistmatPP(DiscPP):
  """ GPs using a kernel specified by a given distance matrix, without any PP
      backend """

  def __init__(self, data=None, modelp=None, printFlag=True):
    """ Constructor """
    self.set_model_params(modelp)
    self.set_data(data)
    self.set_model()
    super(MyGpDistmatPP,self).__init__()
    if printFlag:
      self.print_str()

  def set_model_params(self, modelp):
    """ Set self.modelp """
    if modelp is None:
      pass #TODO
    self.modelp = modelp

  def set_data(self, data):
    """ Set self.data """
    if data is None:
      pass #TODO
    self.data_init = copy.deepcopy(data)
    self.data = copy.deepcopy(self.data_init)

  def set_model(self):
    """ Set GP regression model """
    self.model = self.get_model()

  def get_model(self):
    """ Returns model object """
    return None

  def infer_post_and_update_samples(self, print_result=False):
    """ Update self.sample_list """
    self.sample_list = [Namespace(ls=self.modelp.kernp.ls,
                                  alpha=self.modelp.kernp.alpha,
                                  sigma=self.modelp.kernp.sigma)]
    if print_result: self.print_inference_result()

  def get_distmat(self, xmat1, xmat2):
    """ Get distance matrix """
    # Deferred import: data.py loads search-space backends at import time
    from data import Data
    self.distmat = Data.generate_distance_matrix
    return self.distmat(xmat1, xmat2, self.modelp.distance)

  def print_inference_result(self):
    """ Print results of stan inference """
    print('*ls pt est = '+str(self.sample_list[0].ls)+'.')
    print('*alpha pt est = '+str(self.sample_list[0].alpha)+'.')
    print('*sigma pt est = '+str(self.sample_list[0].sigma)+'.')
    print('-----')

  def sample_pp_post_pred(self, nsamp, input_list, full_cov=False):
    """ Sample from posterior predictive of PP.
        Inputs:
          input_list - list of np arrays size=(-1,)
        Returns:
          list (len input_list) of np arrays (size=(nsamp,1))."""
    samp = self.sample_list[0]
    postmu, postcov = self.gp_post(self.data.X, self.data.y, input_list,
                                   samp.ls, samp.alpha, samp.sigma, full_cov)
    if full_cov:
      ppred_list = list(sample_mvn(postmu, postcov, nsamp))
    else:
      ppred_list = list(np.random.normal(postmu.reshape(-1,),
                                         postcov.reshape(-1,),
                                         size=(nsamp, len(input_list))))
    return list(np.stack(ppred_list).T), ppred_list

  def sample_pp_pred(self, nsamp, input_list, lv=None):
    """ Sample from predictive of PP for parameter lv.
        Returns: list (len input_list) of np arrays (size (nsamp,1))."""
    if lv is None:
      lv = self.sample_list[0]
    postmu, postcov = self.gp_post(self.data.X, self.data.y, input_list, lv.ls,
                                   lv.alpha, lv.sigma)
    pred_list = list(sample_mvn(postmu, postcov, 1))  # TODO: sample from this mean nsamp times
    return list(np.stack(pred_list).T), pred_list

  def gp_post(self, x_train_list, y_train_arr, x_pred_list, ls, alpha, sigma,
              full_cov=True):
    """ Compute parameters of GP posterior """
    kernel = lambda a, b, c, d: kern_distmat(a, b, c, d, self.get_distmat)
    k11_nonoise = kernel(x_train_list, x_train_list, ls, alpha)
    lmat = get_cholesky_decomp(k11_nonoise, sigma, 'try_first')
    smat = solve_upper_triangular(lmat.T, solve_lower_triangular(lmat,
                                  y_train_arr))
    k21 = kernel(x_pred_list, x_train_list, ls, alpha)
    mu2 = k21.dot(smat)
    k22 = kernel(x_pred_list, x_pred_list, ls, alpha)
    vmat = solve_lower_triangular(lmat, k21.T)
    k2 = k22 - vmat.T.dot(vmat)
    if full_cov is False:
      k2 = np.sqrt(np.diag(k2))
    return mu2, k2

  # Utilities
  def print_str(self):
    """ Print a description string """
    print('*MyGpDistmatPP with modelp='+str(self.modelp)+'.')
    print('-----')
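
# A minimal usage sketch; the hyperparameter values are assumptions, and
# 'path' stands in for whichever distance names Data.generate_distance_matrix
# (in data.py) accepts. xtrain, ytrain, and xtest are hypothetical
# placeholders, so the lines are left commented.
#
#   modelp = Namespace(kernp=Namespace(ls=1.0, alpha=1.0, sigma=1e-2),
#                      distance='path')
#   gp = MyGpDistmatPP(data=Namespace(X=xtrain, y=ytrain), modelp=modelp)
#   gp.infer_post_and_update_samples(print_result=True)
#   means, ppred_list = gp.sample_pp_post_pred(10, xtest)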


================================================
FILE: bo/pp/pp_gp_stan.py
================================================
"""
Classes for GP models with Stan
"""

from argparse import Namespace
import time
import numpy as np
import copy
from bo.pp.pp_core import DiscPP
import bo.pp.stan.gp_hier2 as gpstan2
import bo.pp.stan.gp_hier3 as gpstan3
import bo.pp.stan.gp_hier2_matern as gpstan2_matern
from bo.pp.gp.gp_utils import kern_exp_quad, kern_matern32, \
  get_cholesky_decomp, solve_upper_triangular, solve_lower_triangular, \
  sample_mvn
from bo.util.print_utils import suppress_stdout_stderr

class StanGpPP(DiscPP):
  """ Hierarchical GPs implemented with Stan """

  def __init__(self, data=None, modelp=None, printFlag=True):
    """ Constructor """
    self.set_model_params(modelp)
    self.set_data(data)
    self.ndimx = self.modelp.ndimx
    self.set_model()
    super(StanGpPP,self).__init__()
    if printFlag:
      self.print_str()

  def set_model_params(self,modelp):
    if modelp is None:
      modelp = Namespace(ndimx=1, model_str='optfixedsig',
        gp_mean_transf_str='constant')
      if modelp.model_str=='optfixedsig':
        modelp.kernp = Namespace(u1=.1, u2=5., n1=10., n2=10., sigma=1e-5)
        modelp.infp = Namespace(niter=1000)
      elif modelp.model_str=='opt' or modelp.model_str=='optmatern32':
        modelp.kernp = Namespace(ig1=1., ig2=5., n1=10., n2=20., n3=.01,
          n4=.01)
        modelp.infp = Namespace(niter=1000)
      elif modelp.model_str=='samp' or modelp.model_str=='sampmatern32':
        modelp.kernp = Namespace(ig1=1., ig2=5., n1=10., n2=20., n3=.01,
          n4=.01)
        modelp.infp = Namespace(niter=1500, nwarmup=500)
    self.modelp = modelp

  def set_data(self, data):
    """ Set self.data """
    if data is None:
      pass #TODO: handle case where there's no data
    self.data_init = copy.deepcopy(data)
    self.data = self.get_transformed_data(self.data_init,
      self.modelp.gp_mean_transf_str)

  def get_transformed_data(self, data, transf_str='linear'):
    """ Transform data, for non-zero-mean GP """
    newdata = Namespace(X=data.X)
    if transf_str=='linear':
      mmat,_,_,_ = np.linalg.lstsq(np.concatenate([data.X,
        np.ones((data.X.shape[0],1))],1), data.y.flatten(), rcond=None)
      self.gp_mean_vec = lambda x: np.matmul(np.concatenate([x,
        np.ones((x.shape[0],1))],1), mmat)
      newdata.y = data.y - self.gp_mean_vec(data.X).reshape(-1,1)
    if transf_str=='constant':
      yconstant = data.y.mean()
      #yconstant = 0. 
      self.gp_mean_vec = lambda x: np.array([yconstant for xcomp in x])
      newdata.y = data.y - self.gp_mean_vec(data.X).reshape(-1,1)
    return newdata

  def set_model(self):
    """ Set GP regression model """
    self.model = self.get_model()

  def get_model(self):
    """ Returns GPRegression model """
    if self.modelp.model_str=='optfixedsig':
      return gpstan3.get_model(print_status=False)
    elif self.modelp.model_str=='opt' or self.modelp.model_str=='samp':
      return gpstan2.get_model(print_status=False)
    elif self.modelp.model_str=='optmatern32' or \
      self.modelp.model_str=='sampmatern32':
      return gpstan2_matern.get_model(print_status=False)

  def infer_post_and_update_samples(self, seed=5000012, print_result=False):
    """ Update self.sample_list """
    data_dict = self.get_stan_data_dict()
    with suppress_stdout_stderr():
      if self.modelp.model_str=='optfixedsig' or self.modelp.model_str=='opt' \
        or self.modelp.model_str=='optmatern32':
        stanout = self.model.optimizing(data_dict, iter=self.modelp.infp.niter,
          #seed=seed, as_vector=True, algorithm='Newton')
          seed=seed, as_vector=True, algorithm='LBFGS')
      elif self.modelp.model_str=='samp' or self.modelp.model_str=='sampmatern32':
        stanout = self.model.sampling(data_dict, iter=self.modelp.infp.niter +
          self.modelp.infp.nwarmup, warmup=self.modelp.infp.nwarmup, chains=1,
          seed=seed, refresh=1000)
      print('-----')
    self.sample_list = self.get_sample_list_from_stan_out(stanout)
    if print_result: self.print_inference_result()

  def get_stan_data_dict(self):
    """ Return data dict for stan sampling method """
    if self.modelp.model_str=='optfixedsig':
      return {'u1':self.modelp.kernp.u1, 'u2':self.modelp.kernp.u2,
              'n1':self.modelp.kernp.n1, 'n2':self.modelp.kernp.n2,
              'sigma':self.modelp.kernp.sigma, 'D':self.ndimx,
              'N':len(self.data.X), 'x':self.data.X, 'y':self.data.y.flatten()}
    elif self.modelp.model_str=='opt' or self.modelp.model_str=='samp':
      return {'ig1':self.modelp.kernp.ig1, 'ig2':self.modelp.kernp.ig2,
              'n1':self.modelp.kernp.n1, 'n2':self.modelp.kernp.n2,
              'n3':self.modelp.kernp.n3, 'n4':self.modelp.kernp.n4,
              'D':self.ndimx, 'N':len(self.data.X), 'x':self.data.X,
              'y':self.data.y.flatten()}
    elif self.modelp.model_str=='optmatern32' or \
      self.modelp.model_str=='sampmatern32':
      return {'ig1':self.modelp.kernp.ig1, 'ig2':self.modelp.kernp.ig2,
              'n1':self.modelp.kernp.n1, 'n2':self.modelp.kernp.n2,
              'n3':self.modelp.kernp.n3, 'n4':self.modelp.kernp.n4,
              'D':self.ndimx, 'N':len(self.data.X), 'x':self.data.X,
              'y':self.data.y.flatten(), 'covid':2}

  def get_sample_list_from_stan_out(self, stanout):
    """ Convert stan output to sample_list """
    if self.modelp.model_str=='optfixedsig':
      return [Namespace(ls=stanout['rho'], alpha=stanout['alpha'],
        sigma=self.modelp.kernp.sigma)]
    elif self.modelp.model_str=='opt' or self.modelp.model_str=='optmatern32':
      return [Namespace(ls=stanout['rho'], alpha=stanout['alpha'],
        sigma=stanout['sigma'])]
    elif self.modelp.model_str=='samp' or \
      self.modelp.model_str=='sampmatern32':
      sdict = stanout.extract(['rho','alpha','sigma'])
      return [Namespace(ls=sdict['rho'][i], alpha=sdict['alpha'][i],
        sigma=sdict['sigma'][i]) for i in range(sdict['rho'].shape[0])]

  def print_inference_result(self):
    """ Print results of stan inference """
    if self.modelp.model_str=='optfixedsig' or self.modelp.model_str=='opt' or \
      self.modelp.model_str=='optmatern32':
      print('*ls pt est = '+str(self.sample_list[0].ls)+'.')
      print('*alpha pt est = '+str(self.sample_list[0].alpha)+'.')
      print('*sigma pt est = '+str(self.sample_list[0].sigma)+'.')
    elif self.modelp.model_str=='samp' or \
      self.modelp.model_str=='sampmatern32':
      ls_arr = np.array([ns.ls for ns in self.sample_list])
      alpha_arr = np.array([ns.alpha for ns in self.sample_list])
      sigma_arr = np.array([ns.sigma for ns in self.sample_list])
      print('*ls mean = '+str(ls_arr.mean())+'.')
      print('*ls std = '+str(ls_arr.std())+'.')
      print('*alpha mean = '+str(alpha_arr.mean())+'.')
      print('*alpha std = '+str(alpha_arr.std())+'.')
      print('*sigma mean = '+str(sigma_arr.mean())+'.')
      print('*sigma std = '+str(sigma_arr.std())+'.')
    print('-----')

  def sample_pp_post_pred(self, nsamp, input_list, full_cov=False, nloop=None):
    """ Sample from posterior predictive of PP.
        Inputs:
          input_list - list of np arrays size=(-1,)
        Returns:
          list (len input_list) of np arrays (size=(nsamp,1))."""
    if self.modelp.model_str=='optfixedsig' or self.modelp.model_str=='opt' or \
      self.modelp.model_str=='optmatern32':
      nloop = 1
      sampids = [0]
    elif self.modelp.model_str=='samp' or \
      self.modelp.model_str=='sampmatern32':
      if nloop is None: nloop=nsamp
      nsamp = int(nsamp/nloop)
      sampids = np.random.randint(len(self.sample_list), size=(nloop,))
    ppred_list = []
    for i in range(nloop):
      samp = self.sample_list[sampids[i]]
      postmu, postcov = self.gp_post(self.data.X, self.data.y,
        np.stack(input_list), samp.ls, samp.alpha, samp.sigma, full_cov)
      if full_cov:
        ppred_list.extend(list(sample_mvn(postmu, postcov, nsamp)))
      else:
        ppred_list.extend(list(np.random.normal(postmu.reshape(-1,),
          postcov.reshape(-1,), size=(nsamp, len(input_list)))))
    return self.get_reverse_transform(list(np.stack(ppred_list).T), ppred_list,
      input_list)

  def sample_pp_pred(self, nsamp, input_list, lv=None):
    """ Sample from predictive of PP for parameter lv.
        Returns: list (len input_list) of np arrays (size (nsamp,1))."""
    x_pred = np.stack(input_list)
    if lv is None:
      if self.modelp.model_str=='optfixedsig' or self.modelp.model_str=='opt' \
        or self.modelp.model_str=='optmatern32':
        lv = self.sample_list[0]
      elif self.modelp.model_str=='samp' or \
        self.modelp.model_str=='sampmatern32':
        lv = self.sample_list[np.random.randint(len(self.sample_list))]
    postmu, postcov = self.gp_post(self.data.X, self.data.y, x_pred, lv.ls,
      lv.alpha, lv.sigma)
    pred_list = list(sample_mvn(postmu, postcov, 1))  # TODO: sample from this mean nsamp times
    return self.get_reverse_transform(list(np.stack(pred_list).T), pred_list,
      input_list)

  def get_reverse_transform(self, pp1, pp2, input_list):
    """ Apply reverse of data transform to ppred or pred """
    pp1 = [pp1[i] + self.gp_mean_vec(input_list[i].reshape(1,-1)) for i in
           range(len(input_list))]
    pp2 = [psamp + self.gp_mean_vec(np.array(input_list)) for psamp in pp2]
    return pp1, pp2

  def gp_post(self, x_train, y_train, x_pred, ls, alpha, sigma, full_cov=True):
    """ Compute parameters of GP posterior """
    if self.modelp.model_str=='optmatern32' or \
      self.modelp.model_str=='sampmatern32':
      kernel = kern_matern32
    else:
      kernel = kern_exp_quad
    k11_nonoise = kernel(x_train, x_train, ls, alpha)
    lmat = get_cholesky_decomp(k11_nonoise, sigma, 'try_first')
    smat = solve_upper_triangular(lmat.T, solve_lower_triangular(lmat, y_train))
    k21 = kernel(x_pred, x_train, ls, alpha)
    mu2 = k21.dot(smat)
    k22 = kernel(x_pred, x_pred, ls, alpha)
    vmat = solve_lower_triangular(lmat, k21.T)
    k2 = k22 - vmat.T.dot(vmat)
    if full_cov is False:
      k2 = np.sqrt(np.diag(k2))
    return mu2, k2

  # Utilities
  def print_str(self):
    """ Print a description string """
    print('*StanGpPP with modelp='+str(self.modelp)+'.')
    print('-----')
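
# A minimal usage sketch, assuming pystan is installed and the pickled model
# under bo/pp/stan/ has already been compiled (see compile_stan.py). The 1-d
# toy data is hypothetical; modelp=None selects the 'optfixedsig' defaults
# defined above.
if __name__ == '__main__':
  X = np.random.uniform(0, 1, size=(12, 1))
  y = np.cos(4 * X)                      # shape (12, 1)
  gp = StanGpPP(data=Namespace(X=X, y=y))
  gp.infer_post_and_update_samples(print_result=True)
  xtest = list(np.linspace(0, 1, 5).reshape(-1, 1))
  ppred, ppred_list = gp.sample_pp_post_pred(8, xtest)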


================================================
FILE: bo/pp/pp_gp_stan_distmat.py
================================================
"""
Classes for GP models with Stan, using a given distance matrix.
"""

from argparse import Namespace
import time
import copy
import numpy as np
from scipy.spatial.distance import cdist 
from bo.pp.pp_core import DiscPP
import bo.pp.stan.gp_distmat as gpstan
import bo.pp.stan.gp_distmat_fixedsig as gpstan_fixedsig
from bo.pp.gp.gp_utils import kern_exp_quad, kern_matern32, \
  get_cholesky_decomp, solve_upper_triangular, solve_lower_triangular, \
  sample_mvn, squared_euc_distmat, kern_distmat
from bo.util.print_utils import suppress_stdout_stderr

class StanGpDistmatPP(DiscPP):
  """ Hierarchical GPs using a given distance matrix, implemented with Stan """

  def __init__(self, data=None, modelp=None, printFlag=True):
    """ Constructor """
    self.set_model_params(modelp)
    self.set_data(data)
    self.ndimx = self.modelp.ndimx
    self.set_model()
    super(StanGpDistmatPP,self).__init__()
    if printFlag:
      self.print_str()

  def set_model_params(self, modelp):
    """ Set self.modelp """
    if modelp is None:
      pass #TODO
    self.modelp = modelp

  def set_data(self, data):
    """ Set self.data """
    if data is None:
      pass #TODO
    self.data_init = copy.deepcopy(data)
    self.data = copy.deepcopy(self.data_init)

  def set_model(self):
    """ Set GP regression model """
    self.model = self.get_model()

  def get_model(self):
    """ Returns GPRegression model """
    if self.modelp.model_str=='optfixedsig' or \
      self.modelp.model_str=='sampfixedsig':
      return gpstan_fixedsig.get_model(print_status=True)
    elif self.modelp.model_str=='opt' or self.modelp.model_str=='samp':
      return gpstan.get_model(print_status=True)
    elif self.modelp.model_str=='fixedparam':
      return None

  def infer_post_and_update_samples(self, seed=543210, print_result=False):
    """ Update self.sample_list """
    data_dict = self.get_stan_data_dict()
    with suppress_stdout_stderr():
      if self.modelp.model_str=='optfixedsig' or self.modelp.model_str=='opt':
        stanout = self.model.optimizing(data_dict, iter=self.modelp.infp.niter,
          #seed=seed, as_vector=True, algorithm='Newton')
          seed=seed, as_vector=True, algorithm='LBFGS')
      elif self.modelp.model_str=='samp' or self.modelp.model_str=='sampfixedsig':
        stanout = self.model.sampling(data_dict, iter=self.modelp.infp.niter +
          self.modelp.infp.nwarmup, warmup=self.modelp.infp.nwarmup, chains=1,
          seed=seed, refresh=1000)
      elif self.modelp.model_str=='fixedparam':
        stanout = None
      print('-----')
    self.sample_list = self.get_sample_list_from_stan_out(stanout)
    if print_result: self.print_inference_result()

  def get_stan_data_dict(self):
    """ Return data dict for stan sampling method """
    if self.modelp.model_str=='optfixedsig' or \
      self.modelp.model_str=='sampfixedsig':
      return {'ig1':self.modelp.kernp.ig1, 'ig2':self.modelp.kernp.ig2,
              'n1':self.modelp.kernp.n1, 'n2':self.modelp.kernp.n2,
              'sigma':self.modelp.kernp.sigma, 'D':self.ndimx,
              'N':len(self.data.X), 'y':self.data.y.flatten(),
              'distmat':self.get_distmat(self.data.X, self.data.X)}
    elif self.modelp.model_str=='opt' or self.modelp.model_str=='samp':
      return {'ig1':self.modelp.kernp.ig1, 'ig2':self.modelp.kernp.ig2,
              'n1':self.modelp.kernp.n1, 'n2':self.modelp.kernp.n2,
              'n3':self.modelp.kernp.n3, 'n4':self.modelp.kernp.n4,
              'D':self.ndimx, 'N':len(self.data.X), 'y':self.data.y.flatten(),
              'distmat':self.get_distmat(self.data.X, self.data.X)}

  def get_distmat(self, xmat1, xmat2):
    """ Get distance matrix """
    # For now, will compute squared euc distance * .5, on self.data.X
    return squared_euc_distmat(xmat1, xmat2, .5)

  def get_sample_list_from_stan_out(self, stanout):
    """ Convert stan output to sample_list """
    if self.modelp.model_str=='optfixedsig':
      return [Namespace(ls=stanout['rho'], alpha=stanout['alpha'],
        sigma=self.modelp.kernp.sigma)]
    elif self.modelp.model_str=='opt':
      return [Namespace(ls=stanout['rho'], alpha=stanout['alpha'],
        sigma=stanout['sigma'])]
    elif self.modelp.model_str=='sampfixedsig':
      sdict = stanout.extract(['rho','alpha'])
      return [Namespace(ls=sdict['rho'][i], alpha=sdict['alpha'][i],
        sigma=self.modelp.kernp.sigma) for i in range(sdict['rho'].shape[0])]
    elif self.modelp.model_str=='samp':
      sdict = stanout.extract(['rho','alpha','sigma'])
      return [Namespace(ls=sdict['rho'][i], alpha=sdict['alpha'][i],
        sigma=sdict['sigma'][i]) for i in range(sdict['rho'].shape[0])]
    elif self.modelp.model_str=='fixedparam':
      return [Namespace(ls=self.modelp.kernp.ls, alpha=self.modelp.kernp.alpha,
        sigma=self.modelp.kernp.sigma)]

  def print_inference_result(self):
    """ Print results of stan inference """
    if self.modelp.model_str=='optfixedsig' or self.modelp.model_str=='opt' or \
      self.modelp.model_str=='fixedparam':
      print('*ls pt est = '+str(self.sample_list[0].ls)+'.')
      print('*alpha pt est = '+str(self.sample_list[0].alpha)+'.')
      print('*sigma pt est = '+str(self.sample_list[0].sigma)+'.')
    elif self.modelp.model_str=='samp' or \
      self.modelp.model_str=='sampfixedsig':
      ls_arr = np.array([ns.ls for ns in self.sample_list])
      alpha_arr = np.array([ns.alpha for ns in self.sample_list])
      sigma_arr = np.array([ns.sigma for ns in self.sample_list])
      print('*ls mean = '+str(ls_arr.mean())+'.')
      print('*ls std = '+str(ls_arr.std())+'.')
      print('*alpha mean = '+str(alpha_arr.mean())+'.')
      print('*alpha std = '+str(alpha_arr.std())+'.')
      print('*sigma mean = '+str(sigma_arr.mean())+'.')
      print('*sigma std = '+str(sigma_arr.std())+'.')
    print('-----')

  def sample_pp_post_pred(self, nsamp, input_list, full_cov=False, nloop=None):
    """ Sample from posterior predictive of PP.
        Inputs:
          input_list - list of np arrays size=(-1,)
        Returns:
          list (len input_list) of np arrays (size=(nsamp,1))."""
    if self.modelp.model_str=='optfixedsig' or self.modelp.model_str=='opt' or \
        self.modelp.model_str=='fixedparam':
      nloop = 1
      sampids = [0]
    elif self.modelp.model_str=='samp' or \
      self.modelp.model_str=='sampfixedsig':
      if nloop is None: nloop=nsamp
      nsamp = int(nsamp/nloop)
      sampids = np.random.randint(len(self.sample_list), size=(nloop,))
    ppred_list = []
    for i in range(nloop):
      samp = self.sample_list[sampids[i]]
      postmu, postcov = self.gp_post(self.data.X, self.data.y,
        np.stack(input_list), samp.ls, samp.alpha, samp.sigma, full_cov)
      if full_cov:
        ppred_list.extend(list(sample_mvn(postmu, postcov, nsamp)))
      else:
        ppred_list.extend(list(np.random.normal(postmu.reshape(-1,),
          postcov.reshape(-1,), size=(nsamp, len(input_list)))))
    return list(np.stack(ppred_list).T), ppred_list

  def sample_pp_pred(self, nsamp, input_list, lv=None):
    """ Sample from predictive of PP for parameter lv.
        Returns: list (len input_list) of np arrays (size (nsamp,1))."""
    x_pred = np.stack(input_list)
    if lv is None:
      if self.modelp.model_str=='optfixedsig' or self.modelp.model_str=='opt' \
        or self.modelp.model_str=='fixedparam':
        lv = self.sample_list[0]
      elif self.modelp.model_str=='samp' or \
        self.modelp.model_str=='sampfixedsig':
        lv = self.sample_list[np.random.randint(len(self.sample_list))]
    postmu, postcov = self.gp_post(self.data.X, self.data.y, x_pred, lv.ls,
      lv.alpha, lv.sigma)
    pred_list = list(sample_mvn(postmu, postcov, 1))  # TODO: sample from this mean nsamp times
    return list(np.stack(pred_list).T), pred_list

  def gp_post(self, x_train, y_train, x_pred, ls, alpha, sigma, full_cov=True):
    """ Compute parameters of GP posterior """
    kernel = lambda a, b, c, d: kern_distmat(a, b, c, d, self.get_distmat)
    k11_nonoise = kernel(x_train, x_train, ls, alpha)
    lmat = get_cholesky_decomp(k11_nonoise, sigma, 'try_first')
    smat = solve_upper_triangular(lmat.T, solve_lower_triangular(lmat, y_train))
    k21 = kernel(x_pred, x_train, ls, alpha)
    mu2 = k21.dot(smat)
    k22 = kernel(x_pred, x_pred, ls, alpha)
    vmat = solve_lower_triangular(lmat, k21.T)
    k2 = k22 - vmat.T.dot(vmat)
    if full_cov is False:
      k2 = np.sqrt(np.diag(k2))
    return mu2, k2

  # Utilities
  def print_str(self):
    """ Print a description string """
    print('*StanGpDistmatPP with modelp='+str(self.modelp)+'.')
    print('-----')
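
# A minimal usage sketch using the 'fixedparam' option, which skips Stan
# inference entirely; the hyperparameter values and toy data are assumptions.
if __name__ == '__main__':
  modelp = Namespace(ndimx=1, model_str='fixedparam',
                     kernp=Namespace(ls=1.0, alpha=1.0, sigma=1e-2))
  X = np.random.uniform(0, 1, size=(10, 1))
  y = np.sin(5 * X)
  gp = StanGpDistmatPP(data=Namespace(X=X, y=y), modelp=modelp)
  gp.infer_post_and_update_samples(print_result=True)
  xtest = list(np.linspace(0, 1, 4).reshape(-1, 1))
  ppred, ppred_list = gp.sample_pp_post_pred(6, xtest)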


================================================
FILE: bo/pp/stan/__init__.py
================================================
"""
Code for defining and compiling models in Stan.
"""


================================================
FILE: bo/pp/stan/compile_stan.py
================================================
"""
Script to compile stan models
"""

#import bo.pp.stan.gp_hier2 as gpstan
#import bo.pp.stan.gp_hier3 as gpstan
#import bo.pp.stan.gp_hier2_matern as gpstan
import bo.pp.stan.gp_distmat as gpstan
#import bo.pp.stan.gp_distmat_fixedsig as gpstan


# Recompile model and return it
model = gpstan.get_model(recompile=True)


================================================
FILE: bo/pp/stan/gp_distmat.py
================================================
"""
Functions to define and compile PPs in Stan, for model:
hierarchical GP (prior on rho, alpha, sigma) using a given distance matrix.
"""

import time
import pickle
import pystan

def get_model(recompile=False, print_status=True):
  model_file_str = 'bo/pp/stan/hide_model/gp_distmat.pkl'

  if recompile:
    starttime = time.time()
    model = pystan.StanModel(model_code=get_model_code())
    buildtime = time.time()-starttime
    with open(model_file_str,'wb') as f:
      pickle.dump(model, f)
    if print_status:
      print('*Time taken to compile = '+ str(buildtime) +' seconds.\n-----')
      print('*Model saved in file ' + model_file_str + '.\n-----')
  else:
    model = pickle.load(open(model_file_str,'rb'))
    if print_status:
      print('*Model loaded from file ' + model_file_str + '.\n-----')
  return model


def get_model_code():
  """ Parse modelp and return stan model code """
  return """
  data {
    int<lower=1> N;
    matrix[N, N] distmat;
    vector[N] y;
    real<lower=0> ig1;
    real<lower=0> ig2;
    real<lower=0> n1;
    real<lower=0> n2;
    real<lower=0> n3;
    real<lower=0> n4;
  }

  parameters {
    real<lower=0> rho;
    real<lower=0> alpha;
    real<lower=0.0001> sigma;
  }

  model {
    matrix[N, N] cov = square(alpha) * exp(-distmat / square(rho))
                       + diag_matrix(rep_vector(square(sigma), N));
    matrix[N, N] L_cov = cholesky_decompose(cov);
    rho ~ inv_gamma(ig1, ig2);
    alpha ~ normal(n1, n2);
    sigma ~ normal(n3, n4);
    y ~ multi_normal_cholesky(rep_vector(0, N), L_cov);
  }
  """

if __name__ == '__main__':
  get_model()


================================================
FILE: bo/pp/stan/gp_distmat_fixedsig.py
================================================
"""
Functions to define and compile PPs in Stan, for model:
hierarchical GP (prior on rho, alpha) and fixed sigma, using a given
distance matrix.
"""

import time
import pickle
import pystan

def get_model(recompile=False, print_status=True):
  model_file_str = 'bo/pp/stan/hide_model/gp_distmat_fixedsig.pkl'

  if recompile:
    starttime = time.time()
    model = pystan.StanModel(model_code=get_model_code())
    buildtime = time.time()-starttime
    with open(model_file_str,'wb') as f:
      pickle.dump(model, f)
    if print_status:
      print('*Time taken to compile = '+ str(buildtime) +' seconds.\n-----')
      print('*Model saved in file ' + model_file_str + '.\n-----')
  else:
    model = pickle.load(open(model_file_str,'rb'))
    if print_status:
      print('*Model loaded from file ' + model_file_str + '.\n-----')
  return model


def get_model_code():
  """ Parse modelp and return stan model code """
  return """
  data {
    int<lower=1> N;
    matrix[N, N] distmat;
    vector[N] y;
    real<lower=0> ig1;
    real<lower=0> ig2;
    real<lower=0> n1;
    real<lower=0> n2;
    real<lower=0> sigma;
  }

  parameters {
    real<lower=0> rho;
    real<lower=0> alpha;
  }

  model {
    matrix[N, N] cov = square(alpha) * exp(-distmat / square(rho))
                       + diag_matrix(rep_vector(square(sigma), N));
    matrix[N, N] L_cov = cholesky_decompose(cov);
    rho ~ inv_gamma(ig1, ig2);
    alpha ~ normal(n1, n2);
    y ~ multi_normal_cholesky(rep_vector(0, N), L_cov);
  }
  """

if __name__ == '__main__':
  get_model()


================================================
FILE: bo/pp/stan/gp_hier2.py
================================================
"""
Functions to define and compile PPs in Stan, for model:
hierarchical GP (prior on rho, alpha, sigma)
"""

import time
import pickle
import pystan

def get_model(recompile=False, print_status=True):
  model_file_str = 'bo/pp/stan/hide_model/gp_hier2.pkl'

  if recompile:
    starttime = time.time()
    model = pystan.StanModel(model_code=get_model_code())
    buildtime = time.time()-starttime
    with open(model_file_str,'wb') as f:
      pickle.dump(model, f)
    if print_status:
      print('*Time taken to compile = '+ str(buildtime) +' seconds.\n-----')
      print('*Model saved in file ' + model_file_str + '.\n-----')
  else:
    model = pickle.load(open(model_file_str,'rb'))
    if print_status:
      print('*Model loaded from file ' + model_file_str + '.\n-----')
  return model


def get_model_code():
  """ Parse modelp and return stan model code """
  return """
  data {
    int<lower=1> D;
    int<lower=1> N;
    vector[D] x[N];
    vector[N] y;
    real<lower=0> ig1;
    real<lower=0> ig2;
    real<lower=0> n1;
    real<lower=0> n2;
    real<lower=0> n3;
    real<lower=0> n4;
  }

  parameters {
    real<lower=0> rho;
    real<lower=0> alpha;
    real<lower=0.0001> sigma;
  }

  model {
    matrix[N, N] cov =   cov_exp_quad(x, alpha, rho)
                       + diag_matrix(rep_vector(square(sigma), N));
    matrix[N, N] L_cov = cholesky_decompose(cov);
    rho ~ inv_gamma(ig1, ig2);
    alpha ~ normal(n1, n2);
    sigma ~ normal(n3, n4);
    y ~ multi_normal_cholesky(rep_vector(0, N), L_cov);
  }
  """

if __name__ == '__main__':
  get_model()


================================================
FILE: bo/pp/stan/gp_hier2_matern.py
================================================
"""
Functions to define and compile PPs in Stan, for model: hierarchical GP (prior
on rho, alpha, sigma), with matern kernel
"""

import time
import pickle
import pystan

def get_model(recompile=False, print_status=True):
  model_file_str = 'bo/pp/stan/hide_model/gp_hier2_matern.pkl'

  if recompile:
    starttime = time.time()
    model = pystan.StanModel(model_code=get_model_code())
    buildtime = time.time()-starttime
    with open(model_file_str,'wb') as f:
      pickle.dump(model, f)
    if print_status:
      print('*Time taken to compile = '+ str(buildtime) +' seconds.\n-----')
      print('*Model saved in file ' + model_file_str + '.\n-----')
  else:
    model = pickle.load(open(model_file_str,'rb'))
    if print_status:
      print('*Model loaded from file ' + model_file_str + '.\n-----')
  return model


def get_model_code():
  """ Parse modelp and return stan model code """
  return """
  functions {
    matrix distance_matrix_single(int N, vector[] x) {
      matrix[N, N] distmat;
      for(i in 1:(N-1)) {
        for(j in (i+1):N) {
          distmat[i, j] = distance(x[i], x[j]);
        }
      }
      return distmat;
    }

    matrix matern_covariance(int N, matrix dist, real ls, real alpha_sq, int COVFN) {
      matrix[N,N] S;
      real dist_ls; 
      real sqrt3;
      real sqrt5;
      sqrt3=sqrt(3.0);
      sqrt5=sqrt(5.0);
      
      // exponential == Matern nu=1/2 , (p=0; nu=p+1/2)
      if (COVFN==1) {
        for(i in 1:(N-1)) {
          for(j in (i+1):N) {
            dist_ls = fabs(dist[i,j])/ls;
            S[i,j] = alpha_sq * exp(- dist_ls ); 
          }
        }
      }

      // Matern nu= 3/2 covariance
      else if (COVFN==2) {
        for(i in 1:(N-1)) {
          for(j in (i+1):N) {
           dist_ls = fabs(dist[i,j])/ls;
           S[i,j] = alpha_sq * (1 + sqrt3 * dist_ls) * exp(-sqrt3 * dist_ls);
          }
        }
      }
      
      // Matern nu=5/2 covariance
      else if (COVFN==3) { 
        for(i in 1:(N-1)) {
          for(j in (i+1):N) {
            dist_ls = fabs(dist[i,j])/ls;
            S[i,j] = alpha_sq * (1 + sqrt5 *dist_ls + 5* pow(dist_ls,2)/3) * exp(-sqrt5 *dist_ls);
          }
        }
      }

      // Matern as nu->Inf become Gaussian (aka squared exponential cov)
      else if (COVFN==4) {
        for(i in 1:(N-1)) {
          for(j in (i+1):N) {
            dist_ls = fabs(dist[i,j])/ls;
            S[i,j] = alpha_sq * exp( -pow(dist_ls,2)/2 ) ;
          }
        }
      } 

      // fill upper triangle
      for(i in 1:(N-1)) {
        for(j in (i+1):N) {
          S[j,i] = S[i,j];
        }
      }

      // diagonal: marginal (spatial) variance; the observation-noise term
      // square(sigma) is added on the diagonal in the model block
      for(i in 1:N) {
        S[i,i] = alpha_sq;
      }

      return S;
    }
  }

  data {
    int<lower=1> D;
    int<lower=1> N;
    vector[D] x[N];
    vector[N] y;
    real<lower=0> ig1;
    real<lower=0> ig2;
    real<lower=0> n1;
    real<lower=0> n2;
    real<lower=0> n3;
    real<lower=0> n4;
    int covid;
  }

  parameters {
    real<lower=0> rho;
    real<lower=0> alpha;
    real<lower=0.0001> sigma;
  }

  model {
    matrix[N, N] distmat = distance_matrix_single(N, x);
    matrix[N, N] cov = matern_covariance(N, distmat, rho, square(alpha), covid)
                       + diag_matrix(rep_vector(square(sigma), N));
    matrix[N, N] L_cov = cholesky_decompose(cov);
    rho ~ inv_gamma(ig1, ig2);
    alpha ~ normal(n1, n2);
    sigma ~ normal(n3, n4);
    y ~ multi_normal_cholesky(rep_vector(0, N), L_cov);
  }
  """

if __name__ == '__main__':
  get_model()
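
# A small numpy cross-check (an illustrative sketch, independent of Stan) of
# the Matern 5/2 branch (COVFN==3) defined above, for one pair of points at
# distance d.
import numpy as np

def matern52_entry(d, ls, alpha_sq):
  """ Mirrors S[i,j] from the COVFN==3 branch for a single distance d """
  dist_ls = abs(d) / ls
  return alpha_sq * (1 + np.sqrt(5)*dist_ls + 5*dist_ls**2/3) * \
    np.exp(-np.sqrt(5)*dist_ls)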


================================================
FILE: bo/pp/stan/gp_hier3.py
================================================
"""
Functions to define and compile PPs in Stan, for model:
hierarchical GP with uniform prior on rho, normal prior on alpha,
and fixed sigma
"""

import time
import pickle
import pystan

def get_model(recompile=False, print_status=True):
  model_file_str = 'bo/pp/stan/hide_model/gp_hier3.pkl'

  if recompile:
    starttime = time.time()
    model = pystan.StanModel(model_code=get_model_code())
    buildtime = time.time()-starttime
    with open(model_file_str,'wb') as f:
      pickle.dump(model, f)
    if print_status:
      print('*Time taken to compile = '+ str(buildtime) +' seconds.\n-----')
      print('*Model saved in file ' + model_file_str + '.\n-----')
  else:
    model = pickle.load(open(model_file_str,'rb'))
    if print_status:
      print('*Model loaded from file ' + model_file_str + '.\n-----')
  return model


def get_model_code():
  """ Parse modelp and return stan model code """
  return """
  data {
    int<lower=1> D;
    int<lower=1> N;
    vector[D] x[N];
    vector[N] y;
    real<lower=0> u1;
    real<lower=0> u2;
    real<lower=0> n1;
    real<lower=0> n2;
    real<lower=0> sigma;
  }

  parameters {
    real<lower=u1, upper=u2> rho;
    real<lower=0> alpha;
  }

  model {
    matrix[N, N] cov =   cov_exp_quad(x, alpha, rho)
                       + diag_matrix(rep_vector(square(sigma), N));
    matrix[N, N] L_cov = cholesky_decompose(cov);
    rho ~ uniform(u1, u2);
    alpha ~ normal(n1, n2);
    y ~ multi_normal_cholesky(rep_vector(0, N), L_cov);
  }
  """

if __name__ == '__main__':
  get_model()


================================================
FILE: bo/util/__init__.py
================================================
"""
Miscellaneous utilities.
"""


================================================
FILE: bo/util/datatransform.py
================================================
"""
Classes for transforming data.
"""

from argparse import Namespace
import numpy as np
from sklearn.preprocessing import StandardScaler
#import sklearn.preprocessing as sklp 

class DataTransformer(object):
  """ Class for transforming data """

  def __init__(self, datamat, printflag=True):
    """ Constructor
        Parameters:
          datamat - numpy array (n x d) of data to be transformed
    """
    self.datamat = datamat
    self.set_transformers()
    if printflag:
      self.print_str()

  def set_transformers(self):
    """ Set transformers using self.datamat """
    self.ss = StandardScaler()
    self.ss.fit(self.datamat)

  def transform_data(self, datamat=None):
    """ Return transformed datamat (default self.datamat) """
    if datamat is None:
      datamat = self.datamat
    return self.ss.transform(datamat)
 
  def inv_transform_data(self, datamat):
    """ Return inverse transform of datamat """
    return self.ss.inverse_transform(datamat)

  def print_str(self):
    """ Print a description string """
    print('*DataTransformer with self.datamat.shape = ' +
      str(self.datamat.shape) + '.')
    print('-----')
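
# A minimal usage sketch with a hypothetical 100 x 3 data matrix: columns are
# standardized to zero mean and unit variance, and the inverse transform
# recovers the original values up to floating-point error.
if __name__ == '__main__':
  datamat = np.random.randn(100, 3) * [1., 10., 100.] + [0., 5., -2.]
  dt = DataTransformer(datamat, printflag=False)
  z = dt.transform_data()
  assert np.allclose(dt.inv_transform_data(z), datamat)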


================================================
FILE: bo/util/print_utils.py
================================================
"""
Utilities for printing and output
"""

import os

class suppress_stdout_stderr(object):
    ''' A context manager for doing a "deep suppression" of stdout and stderr in
    Python, i.e. will suppress all print, even if the print originates in a
    compiled C/Fortran sub-function.
       This will not suppress raised exceptions, since exceptions are printed
    to stderr just before a script exits, and after the context manager has
    exited (at least, I think that is why it lets exceptions through). '''
    def __init__(self):
        # Open a pair of null files
        self.null_fds = [os.open(os.devnull, os.O_RDWR) for x in range(2)]
        # Save the actual stdout (1) and stderr (2) file descriptors.
        self.save_fds = [os.dup(1), os.dup(2)]

    def __enter__(self):
        # Assign the null pointers to stdout and stderr.
        os.dup2(self.null_fds[0], 1)
        os.dup2(self.null_fds[1], 2)

    def __exit__(self, *_):
        # Re-assign the real stdout/stderr back to (1) and (2)
        os.dup2(self.save_fds[0], 1)
        os.dup2(self.save_fds[1], 2)
        # Close the null files
        for fd in self.null_fds + self.save_fds:
            os.close(fd)
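
# A minimal usage sketch: output inside the context manager is dropped at the
# file-descriptor level (so output from compiled extensions, e.g. Stan, is
# suppressed too), and stdout/stderr are restored on exit.
if __name__ == '__main__':
    with suppress_stdout_stderr():
        print('this never reaches the terminal')
    print('printing works again here')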


================================================
FILE: darts/__init__.py
================================================



================================================
FILE: darts/arch.py
================================================
import numpy as np
import sys
import os
import copy
import random

sys.path.append(os.path.expanduser('~/darts/cnn'))
from train_class import Train

OPS = ['none',
       'max_pool_3x3',
       'avg_pool_3x3',
       'skip_connect',
       'sep_conv_3x3',
       'sep_conv_5x5',
       'dil_conv_3x3',
       'dil_conv_5x5'
       ]
NUM_VERTICES = 4
INPUT_1 = 'c_k-2'
INPUT_2 = 'c_k-1'


class Arch:

    def __init__(self, arch):
        self.arch = arch

    def serialize(self):
        return self.arch

    def query(self, epochs=50):
        trainer = Train()
        val_losses, test_losses = trainer.main(self.arch, epochs=epochs)
        val_loss = 100 - np.mean(val_losses)
        test_loss = 100 - test_losses[-1]        
        return val_loss, test_loss

    @classmethod
    def random_arch(cls):
        # output a uniformly random architecture spec
        # from the DARTS repository
        # https://github.com/quark0/darts

        normal = []
        reduction = []
        for i in range(NUM_VERTICES):
            ops = np.random.choice(range(len(OPS)), NUM_VERTICES)

            #input nodes for conv
            nodes_in_normal = np.random.choice(range(i+2), 2, replace=False)
            #input nodes for reduce
            nodes_in_reduce = np.random.choice(range(i+2), 2, replace=False)

            normal.extend([(nodes_in_normal[0], ops[0]), (nodes_in_normal[1], ops[1])])
            reduction.extend([(nodes_in_reduce[0], ops[2]), (nodes_in_reduce[1], ops[3])])

        return (normal, reduction)

    def get_arch_list(self):
        # convert tuple to list so that it is mutable
        arch_list = []
        for cell in self.arch:
            arch_list.append([])
            for pair in cell:
                arch_list[-1].append([])
                for num in pair:
                    arch_list[-1][-1].append(num)
        return arch_list

    def mutate(self, edits):
        """ mutate a single arch """
        # first convert tuple to array so that it is mutable
        mutation = self.get_arch_list()

        #make mutations
        for _ in range(edits):
            cell = np.random.choice(2)
            # there are 2 * NUM_VERTICES (input, op) pairs per cell; this
            # equals len(OPS) only by coincidence, so index pairs directly
            pair = np.random.choice(2 * NUM_VERTICES)
            num = np.random.choice(2)
            if num == 1:
                mutation[cell][pair][num] = np.random.choice(len(OPS))
            else:
                inputs = pair // 2 + 2
                choice = np.random.choice(inputs)
                if pair % 2 == 0 and mutation[cell][pair+1][num] != choice:
                    mutation[cell][pair][num] = choice
                elif pair % 2 != 0 and mutation[cell][pair-1][num] != choice:
                    mutation[cell][pair][num] = choice
                      
        return mutation

    def get_paths(self):
        """ return all paths from input to output """

        path_builder = [[[], [], [], []], [[], [], [], []]]
        paths = [[], []]

        for i, cell in enumerate(self.arch):
            for j in range(2 * NUM_VERTICES):  # one (input, op) pair per edge
              if cell[j][0] == 0:
                  path = [INPUT_1, OPS[cell[j][1]]]
                  path_builder[i][j//2].append(path)
                  paths[i].append(path)
              elif cell[j][0] == 1:
                  path = [INPUT_2, OPS[cell[j][1]]]
                  path_builder[i][j//2].append(path)
                  paths[i].append(path)
              else:
                  for path in path_builder[i][cell[j][0] - 2]:
                      path = [*path, OPS[cell[j][1]]]
                      path_builder[i][j//2].append(path)
                      paths[i].append(path)

        # check if there are paths of length >=5
        contains_long_path = [False, False]
        if max([len(path) for path in paths[0]]) >= 5:
            contains_long_path[0] = True
        if max([len(path) for path in paths[1]]) >= 5:
            contains_long_path[1] = True

        return paths, contains_long_path

    def get_path_indices(self, long_paths=True):
        """
        compute the index of each path
        There are 4 * (8^0 + ... + 8^4) paths total
        If long_paths = False, we give a single boolean to all paths of
        size 4, so there are only 4 * (1 + 8^0 + ... + 8^3) paths
        """
        paths, contains_long_path = self.get_paths()
        normal_paths, reduce_paths = paths
        num_ops = len(OPS)
        """
        Compute the max number of paths per input per cell.
        Since there are two cells and two inputs per cell, 
        total paths = 4 * max_paths
        """
        if not long_paths:
            max_paths = 1 + sum([num_ops ** i for i in range(NUM_VERTICES)])
        else:
            max_paths = sum([num_ops ** i for i in range(NUM_VERTICES + 1)])    
        path_indices = []

        # set the base index based on the cell and the input
        for i, paths in enumerate((normal_paths, reduce_paths)):
            for path in paths:
                index = i * 2 * max_paths
                if path[0] == INPUT_2:
                    index += max_paths

                # recursively compute the index of the path
                for j in range(NUM_VERTICES + 1):
                    if j == len(path) - 1:
                        path_indices.append(index)
                        break
                    elif j == (NUM_VERTICES - 1) and not long_paths:
                        path_indices.append(2 * (i + 1) * max_paths - 1)
                        break
                    else:
                        index += num_ops ** j * (OPS.index(path[j + 1]) + 1)

        return (tuple(path_indices), contains_long_path)

    def encode_paths(self, long_paths=True):
        # output one-hot encoding of paths
        path_indices, _ = self.get_path_indices(long_paths=long_paths)
        num_ops = len(OPS)

        if not long_paths:
            max_paths = 1 + sum([num_ops ** i for i in range(NUM_VERTICES)])
        else:
            max_paths = sum([num_ops ** i for i in range(NUM_VERTICES + 1)])    

        path_encoding = np.zeros(4 * max_paths)
        for index in path_indices:
            path_encoding[index] = 1
        return path_encoding

    def path_distance(self, other):
        # compute the distance between two architectures
        # by comparing their path encodings
        return np.sum(self.encode_paths() != other.encode_paths())
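
# A quick arithmetic check (an illustrative sketch, not repo code) of the
# encoding sizes used by get_path_indices and encode_paths above:
# 4 * (8^0 + ... + 8^4) = 18724 entries with long paths, and
# 4 * (1 + 8^0 + ... + 8^3) = 2344 without.
if __name__ == '__main__':
    num_ops = len(OPS)                                                 # 8
    long_len = 4 * sum(num_ops ** i for i in range(NUM_VERTICES + 1))
    short_len = 4 * (1 + sum(num_ops ** i for i in range(NUM_VERTICES)))
    assert (long_len, short_len) == (18724, 2344)
    arch = Arch(Arch.random_arch())
    assert arch.encode_paths().shape == (long_len,)
    assert arch.encode_paths(long_paths=False).shape == (short_len,)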







================================================
FILE: data.py
================================================
import numpy as np
import pickle
import sys
import os

if 'search_space' not in os.environ or os.environ['search_space'] == 'nasbench':
    from nasbench import api
    from nas_bench.cell import Cell

elif os.environ['search_space'] == 'darts':
    from darts.arch import Arch

elif os.environ['search_space'][:12] == 'nasbench_201':
    from nas_201_api import NASBench201API as API
    from nas_bench_201.cell import Cell

else:
    print('Invalid search space environ in data.py')
    sys.exit()


class Data:

    def __init__(self, 
                 search_space, 
                 dataset='cifar10', 
                 nasbench_folder='./', 
                 loaded_nasbench=None):
        self.search_space = search_space
        self.dataset = dataset

        if loaded_nasbench:
            self.nasbench = loaded_nasbench
        elif search_space == 'nasbench':
            self.nasbench = api.NASBench(nasbench_folder + 'nasbench_only108.tfrecord')
        elif search_space == 'nasbench_201':
            self.nasbench = API(os.path.expanduser('~/nas-bench-201/NAS-Bench-201-v1_0-e61699.pth'))
        elif search_space != 'darts':
            print(search_space, 'is not a valid search space')
            sys.exit()

    def get_type(self):
        return self.search_space

    def query_arch(self, 
                   arch=None, 
                   train=True, 
                   encoding_type='path', 
                   cutoff=-1,
                   deterministic=True, 
                   epochs=0):

        arch_dict = {}
        arch_dict['epochs'] = epochs
        if self.search_space in ['nasbench', 'nasbench_201']:
            if arch is None:
                arch = Cell.random_cell(self.nasbench)

            arch_dict['spec'] = arch

            if encoding_type == 'adj':
                encoding = Cell(**arch).encode_standard()
            elif encoding_type == 'path':
                encoding = Cell(**arch).encode_paths()
            elif encoding_type == 'trunc_path':
                encoding = Cell(**arch).encode_paths()[:cutoff]
            else:
                print('invalid encoding type')
                sys.exit()

            arch_dict['encoding'] = encoding

            if train:
                arch_dict['val_loss'] = Cell(**arch).get_val_loss(self.nasbench, 
                                                                    deterministic=deterministic,
                                                                    dataset=self.dataset)
                arch_dict['test_loss'] = Cell(**arch).get_test_loss(self.nasbench,
                                                                    dataset=self.dataset)
                arch_dict['num_params'] = Cell(**arch).get_num_params(self.nasbench)
                arch_dict['val_per_param'] = (arch_dict['val_loss'] - 4.8) * (arch_dict['num_params'] ** 0.5) / 100

        else:
            if arch is None:
                arch = Arch.random_arch()

            arch_dict['spec'] = arch

            if encoding_type == 'path':
                encoding = Arch(arch).encode_paths()
            elif encoding_type == 'trunc_path':
                encoding = Arch(arch).encode_paths()[:cutoff]
            else:
                encoding = arch

            arch_dict['encoding'] = encoding

            if train:
                if epochs == 0:
                    epochs = 50
                arch_dict['val_loss'], arch_dict['test_loss'] = Arch(arch).query(epochs=epochs)
        
        return arch_dict           

    def mutate_arch(self, 
                    arch, 
                    mutation_rate=1.0):
        if self.search_space in ['nasbench', 'nasbench_201']:
            return Cell(**arch).mutate(self.nasbench, 
                                       mutation_rate=mutation_rate)
        else:
            return Arch(arch).mutate(int(mutation_rate))

    def get_hash(self, arch):
        # return the path indices of the architecture, used as a hash
        if self.search_space == 'nasbench':
            return Cell(**arch).get_path_indices()
        elif self.search_space == 'darts':
            return Arch(arch).get_path_indices()[0]
        else:
            return Cell(**arch).get_string()

    def generate_random_dataset(self,
                                num=10, 
                                train=True,
                                encoding_type='path', 
                                cutoff=-1,
                                random='standard',
                                allow_isomorphisms=False, 
                                deterministic_loss=True,
                                patience_factor=5):
        """
        create a dataset of randomly sampled architectures
        test for isomorphisms using a hash map of path indices
        use patience_factor to avoid infinite loops
        """
        data = []
        dic = {}
        tries_left = num * patience_factor
        while len(data) < num:
            tries_left -= 1
            if tries_left <= 0:
                break
            arch_dict = self.query_arch(train=train,
                                        encoding_type=encoding_type,
                                        cutoff=cutoff,
                                        deterministic=deterministic_loss)

            h = self.get_hash(arch_dict['spec'])
            if allow_isomorphisms or h not in dic:
                dic[h] = 1
                data.append(arch_dict)

        return data

    def get_candidates(self, 
                       data, 
                       num=100,
                       acq_opt_type='mutation',
                       encoding_type='path',
                       cutoff=-1,
                       loss='val_loss',
                       patience_factor=5, 
                       deterministic_loss=True,
                       num_arches_to_mutate=1,
                       max_mutation_rate=1,
                       allow_isomorphisms=False):
        """
        Creates a set of candidate architectures with mutated and/or random architectures
        """

        candidates = []
        # set up hash map
        dic = {}
        for d in data:
            arch = d['spec']
            h = self.get_hash(arch)
            dic[h] = 1

        if acq_opt_type in ['mutation', 'mutation_random']:
            # mutate architectures with the lowest loss
            best_arches = [arch['spec'] for arch in sorted(data, key=lambda i:i[loss])[:num_arches_to_mutate * patience_factor]]

            # stop when candidates is size num
            # use patience_factor instead of a while loop to avoid long or infinite runtime
            for arch in best_arches:
                if len(candidates) >= num:
                    break
                for i in range(num // num_arches_to_mutate // max_mutation_rate):
                    for rate in range(1, max_mutation_rate + 1):
                        mutated = self.mutate_arch(arch, mutation_rate=rate)
                        arch_dict = self.query_arch(mutated,
                                                    train=False,
                                                    encoding_type=encoding_type,
                                                    cutoff=cutoff)
                        h = self.get_hash(mutated)

                        if allow_isomorphisms or h not in dic:
                            dic[h] = 1    
                            candidates.append(arch_dict)

        if acq_opt_type in ['random', 'mutation_random']:
            # add randomly sampled architectures to the set of candidates
            for _ in range(num * patience_factor):
                if len(candidates) >= 2 * num:
                    break

                arch_dict = self.query_arch(train=False, 
                                            encoding_type=encoding_type,
                                            cutoff=cutoff)
                h = self.get_hash(arch_dict['spec'])

                if allow_isomorphisms or h not in dic:
                    dic[h] = 1
                    candidates.append(arch_dict)

        return candidates

    def remove_duplicates(self, candidates, data):
        # input: two sets of architectures: candidates and data
        # output: candidates with arches from data removed

        dic = {}
        for d in data:
            dic[self.get_hash(d['spec'])] = 1
        unduplicated = []
        for candidate in candidates:
            if self.get_hash(candidate['spec']) not in dic:
                dic[self.get_hash(candidate['spec'])] = 1
                unduplicated.append(candidate)
        return unduplicated

    def encode_data(self, dicts):
        """
        method used by metann_runner.py (for Arch)
        input: list of arch dictionary objects
        output: xtrain (encoded architectures), ytrain (val loss)
        """
        data = []

        for dic in dicts:
            arch = dic['spec']
            encoding = Arch(arch).encode_paths()
            data.append((arch, encoding, dic['val_loss_avg'], None))

        return data

    def get_arch_list(self,
                      aux_file_path, 
                      iteridx=0, 
                      num_top_arches=5,
                      max_edits=20, 
                      num_repeats=5,
                      verbose=1):
        # Method used for gp_bayesopt

        if self.search_space == 'darts':
            print('get_arch_list only supported for nasbench and nasbench_201')
            sys.exit()

        # load the list of architectures chosen by bayesopt so far
        base_arch_list = pickle.load(open(aux_file_path, 'rb'))
        top_arches = [archtuple[0] for archtuple in base_arch_list[:num_top_arches]]
        if verbose:
            top_5_loss = [archtuple[1][0] for archtuple in base_arch_list[:min(5, len(base_arch_list))]]
            print('top 5 val losses {}'.format(top_5_loss))

        # perturb the best k architectures    
        dic = {}
        for archtuple in base_arch_list:
            path_indices = Cell(**archtuple[0]).get_path_indices()
            dic[path_indices] = 1

        new_arch_list = []
        for arch in top_arches:
            for edits in range(1, max_edits):
                for _ in range(num_repeats):
                    perturbation = Cell(**arch).perturb(self.nasbench, edits)
                    path_indices = Cell(**perturbation).get_path_indices()
                    if path_indices not in dic:
                        dic[path_indices] = 1
                        new_arch_list.append(perturbation)

        # make sure new_arch_list is not empty
        while len(new_arch_list) == 0:
            for _ in range(100):
                arch = Cell.random_cell(self.nasbench)
                path_indices = Cell(**arch).get_path_indices()
                if path_indices not in dic:
                    dic[path_indices] = 1
                    new_arch_list.append(arch)

        return new_arch_list

    @classmethod
    def generate_distance_matrix(cls, arches_1, arches_2, distance):
        # Method used for gp_bayesopt for nasbench
        matrix = np.zeros([len(arches_1), len(arches_2)])
        for i, arch_1 in enumerate(arches_1):
            for j, arch_2 in enumerate(arches_2):
                if distance == 'edit_distance':
                    matrix[i][j] = Cell(**arch_1).edit_distance(Cell(**arch_2))
                elif distance == 'path_distance':
                    matrix[i][j] = Cell(**arch_1).path_distance(Cell(**arch_2))        
                elif distance == 'trunc_path_distance':
                    matrix[i][j] = Cell(**arch_1).trunc_path_distance(Cell(**arch_2))
                elif distance == 'nasbot_distance':
                    matrix[i][j] = Cell(**arch_1).nasbot_distance(Cell(**arch_2))  
                else:
                    print('{} is an invalid distance'.format(distance))
                    sys.exit()
        return matrix
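
# Illustrative usage sketch (editor's example; assumes the NASBench tfrecord
# has been downloaded so that Data('nasbench') can load it):
#
#   search_space = Data('nasbench')
#   data = search_space.generate_random_dataset(num=10, train=False)
#   hashes = [search_space.get_hash(d['spec']) for d in data]
#   assert len(hashes) == len(set(hashes))  # no isomorphic duplicates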


================================================
FILE: meta_neural_net.py
================================================
import argparse
import itertools
import os
import random
import sys

import numpy as np
from matplotlib import pyplot as plt
from tensorflow import keras
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

def mle_loss(y_true, y_pred):
    # Gaussian negative log-likelihood loss (maximum likelihood estimation):
    # NLL = 0.5 * log(2*pi*var) + (y - mean)^2 / (2*var),
    # where the network predicts a mean and a variance for each example
    mean = tf.slice(y_pred, [0, 0], [-1, 1])
    var = tf.slice(y_pred, [0, 1], [-1, 1])
    return 0.5 * tf.log(2*np.pi*var) + tf.square(y_true - mean) / (2*var)


def mape_loss(y_true, y_pred):
    # Mean absolute percentage error loss, computed relative to a lower bound
    lower_bound = 4.5
    fraction = tf.math.divide(tf.subtract(y_pred, lower_bound), \
        tf.subtract(y_true, lower_bound))
    return tf.abs(tf.subtract(fraction, 1))


class MetaNeuralnet:

    def get_dense_model(self, 
                        input_dims, 
                        num_layers,
                        layer_width,
                        loss,
                        regularization):
        input_layer = keras.layers.Input(input_dims)
        model = keras.models.Sequential()

        for _ in range(num_layers):
            model.add(keras.layers.Dense(layer_width, activation='relu'))

        model = model(input_layer)
        if loss == 'mle':
            mean = keras.layers.Dense(1)(model)
            var = keras.layers.Dense(1)(model)
            var = keras.layers.Activation(tf.math.softplus)(var)
            output = keras.layers.concatenate([mean, var])
        else:
            if regularization == 0:
                output = keras.layers.Dense(1)(model)
            else:
                reg = keras.regularizers.l1(regularization)
                output = keras.layers.Dense(1, kernel_regularizer=reg)(model)

        dense_net = keras.models.Model(inputs=input_layer, outputs=output)
        return dense_net

    def fit(self, xtrain, ytrain, 
            num_layers=10,
            layer_width=20,
            loss='mae',
            epochs=200, 
            batch_size=32, 
            lr=.01, 
            verbose=0, 
            regularization=0,
            **kwargs):

        if loss == 'mle':
            loss_fn = mle_loss
        elif loss == 'mape':
            loss_fn = mape_loss
        else:
            loss_fn = 'mae'

        # pass the loss string (not the function) so that get_dense_model's
        # 'mle' branch can build the mean/variance output head
        self.model = self.get_dense_model((xtrain.shape[1],), 
                                            loss=loss,
                                            num_layers=num_layers,
                                            layer_width=layer_width,
                                            regularization=regularization)
        optimizer = keras.optimizers.Adam(lr=lr, beta_1=.9, beta_2=.99)

        self.model.compile(optimizer=optimizer, loss=loss_fn)
        #print(self.model.summary())
        self.model.fit(xtrain, ytrain, 
                        batch_size=batch_size, 
                        epochs=epochs, 
                        verbose=verbose)

        train_pred = np.squeeze(self.model.predict(xtrain))
        train_error = np.mean(abs(train_pred-ytrain))
        return train_error

    def predict(self, xtest):
        return self.model.predict(xtest)
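

# Illustrative sketch (editor's example): fit the meta neural net on synthetic
# data, mirroring the shapes used by nas_algorithms.py. Runs under the
# TF 1.x-style Keras API imported above.
if __name__ == '__main__':
    xtrain = np.random.rand(100, 40)   # e.g. truncated path encodings
    ytrain = 10 * np.random.rand(100)  # e.g. validation losses (percent)
    net = MetaNeuralnet()
    print('train MAE: {}'.format(net.fit(xtrain, ytrain, num_layers=3, epochs=10)))
    print('sample predictions: {}'.format(np.squeeze(net.predict(xtrain))[:5]))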


================================================
FILE: meta_neuralnet.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Train a Meta Neural Network on NASBench\n",
    "## Predict the accuracy of neural networks to within one percent!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%load_ext autoreload\n",
    "%autoreload 2\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from matplotlib import pyplot as plt\n",
    "from nasbench import api\n",
    "\n",
    "from data import Data\n",
    "from meta_neural_net import MetaNeuralnet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# define a function to plot the meta neural networks\n",
    "\n",
    "def plot_meta_neuralnet(ytrain, train_pred, ytest, test_pred, max_disp=500, title=None):\n",
    "    \n",
    "    plt.scatter(ytrain[:max_disp], train_pred[:max_disp], label='training data', alpha=0.7, s=64)\n",
    "    plt.scatter(ytest[:max_disp], test_pred[:max_disp], label = 'test data', alpha=0.7, marker='^')\n",
    "\n",
    "    # axis limits\n",
    "    plt.xlim((5, 15))\n",
    "    plt.ylim((5, 15))\n",
    "    ax_lim = np.array([np.min([plt.xlim()[0], plt.ylim()[0]]),\n",
    "                    np.max([plt.xlim()[1], plt.ylim()[1]])])\n",
    "    plt.xlim(ax_lim)\n",
    "    plt.ylim(ax_lim)\n",
    "    \n",
    "    # 45-degree line\n",
    "    plt.plot(ax_lim, ax_lim, 'k:') \n",
    "     \n",
    "    plt.gca().set_aspect('equal', adjustable='box')\n",
    "    plt.title(title)\n",
    "    plt.legend(loc='best')\n",
    "    plt.xlabel('true percent error')\n",
    "    plt.ylabel('predicted percent error')\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# load the NASBench dataset\n",
    "# takes about 1 minute to load the nasbench dataset\n",
    "search_space = Data('nasbench')\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# method which runs a meta neural network experiment\n",
    "def meta_neuralnet_experiment(params, \n",
    "                              ns=[100, 500], \n",
    "                              num_ensemble=3, \n",
    "                              test_size=500,\n",
    "                              cutoff=40,\n",
    "                              plot=True):\n",
    "    \n",
    "    for n in ns:\n",
    "        for encoding_type in ['adj', 'path']:\n",
    "\n",
    "            train_data = search_space.generate_random_dataset(num=n, \n",
    "                                                encoding_type=encoding_type,\n",
    "                                                cutoff=cutoff)\n",
    "            \n",
    "            test_data = search_space.generate_random_dataset(num=test_size, \n",
    "                                                encoding_type=encoding_type,\n",
    "                                                cutoff=cutoff)\n",
    "            \n",
    "            print(len(test_data))\n",
    "            test_data = search_space.remove_duplicates(test_data, train_data)\n",
    "            print(len(test_data))\n",
    "            \n",
    "            xtrain = np.array([d['encoding'] for d in train_data])\n",
    "            ytrain = np.array([d['val_loss'] for d in train_data])\n",
    "\n",
    "            xtest = np.array([d['encoding'] for d in test_data])\n",
    "            ytest = np.array([d['val_loss'] for d in test_data])\n",
    "\n",
    "            train_errors = []\n",
    "            test_errors = []\n",
    "            meta_neuralnet = MetaNeuralnet()\n",
    "            for _ in range(num_ensemble):            \n",
    "                meta_neuralnet.fit(xtrain, ytrain, **params)\n",
    "                train_pred = np.squeeze(meta_neuralnet.predict(xtrain))\n",
    "                train_error = np.mean(abs(train_pred-ytrain))\n",
    "                train_errors.append(train_error)\n",
    "                test_pred = np.squeeze(meta_neuralnet.predict(xtest))        \n",
    "                test_error = np.mean(abs(test_pred-ytest))\n",
    "                test_errors.append(test_error)\n",
    "\n",
    "            train_error = np.round(np.mean(train_errors, axis=0), 3)\n",
    "            test_error = np.round(np.mean(test_errors, axis=0), 3)\n",
    "            print('Meta neuralnet training size: {}, encoding type: {}'.format(n, encoding_type))\n",
    "            print('Train error: {}, test error: {}'.format(train_error, test_error))\n",
    "\n",
    "            if plot:\n",
    "                if encoding_type == 'path':\n",
    "                    title = 'Path encoding, training set size {}'.format(n)\n",
    "                else:\n",
    "                    title = 'Adjacency list encoding, training set size {}'.format(n)            \n",
    "\n",
    "                plot_meta_neuralnet(ytrain, train_pred, ytest, test_pred, title=title)\n",
    "                plt.show()          \n",
    "            print('correlation', np.corrcoef(ytest, test_pred)[1,0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "meta_neuralnet_params = {'loss':'mae', 'num_layers':10, 'layer_width':20, 'epochs':200, \\\n",
    "                         'batch_size':32, 'lr':.01, 'regularization':0, 'verbose':0}\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "meta_neuralnet_experiment(meta_neuralnet_params)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}


================================================
FILE: metann_runner.py
================================================
import argparse
import time
import logging
import sys
import os
import pickle
import numpy as np

from acquisition_functions import acq_fn
from data import Data
from meta_neural_net import MetaNeuralnet


"""
meta neural net runner is used in run_experiments_parallel

 - loads data by opening k*i pickle files from previous iterations
 - trains a meta neural network and predicts accuracy of all candidates
 - outputs k pickle files of the architecture to be trained next
"""

def run_meta_neuralnet(search_space, dicts,
                        k=10,
                        verbose=1, 
                        num_ensemble=5, 
                        epochs=10000,
                        lr=0.00001,
                        loss='scaled',
                        explore_type='its',
                        explore_factor=0.5):

    # data: list of arch dictionary objects
    # trains a meta neural network
    # returns list of k arch dictionary objects - the k best predicted

    data = search_space.encode_data(dicts)
    xtrain = np.array([d[1] for d in data])
    ytrain = np.array([d[2] for d in data])

    # get_candidates takes the raw arch dictionaries (keyed by 'spec' and a
    # loss key) and returns candidate arch dicts with 'spec'/'encoding' keys
    candidates = search_space.get_candidates(dicts,
                                             acq_opt_type='mutation_random',
                                             encoding_type='path',
                                             loss='val_loss_avg',
                                             allow_isomorphisms=True,
                                             deterministic_loss=None)

    xcandidates = np.array([c['encoding'] for c in candidates])
    candidates_specs = [c['spec'] for c in candidates]
    predictions = []

    # train an ensemble of neural networks
    train_error = 0
    for _ in range(num_ensemble):
        meta_neuralnet = MetaNeuralnet()
        train_error += meta_neuralnet.fit(xtrain, ytrain,
                                            loss=loss,
                                            epochs=epochs,
                                            lr=lr)
        predictions.append(np.squeeze(meta_neuralnet.predict(xcandidates)))
    train_error /= num_ensemble
    if verbose:
        print('Meta neural net train error: {}'.format(train_error))

    sorted_indices = acq_fn(predictions, explore_type)

    top_k_candidates = [candidates_specs[i] for i in sorted_indices[:k]]
    candidates_dict = []
    for candidate in top_k_candidates:
        d = {}
        d['spec'] = candidate
        candidates_dict.append(d)

    return candidates_dict


def run(args):

    save_dir = '{}/'.format(args.experiment_name)
    if not os.path.exists(save_dir):
        os.mkdir(save_dir)

    query = args.query
    k = args.k
    trained_prefix = args.trained_filename
    untrained_prefix = args.untrained_filename
    threshold = args.threshold

    search_space = Data('darts')

    # if it's the first iteration, choose k arches at random to train
    if query == 0:
        print('about to generate {} random architectures'.format(k))
        data = search_space.generate_random_dataset(num=k, train=False)
        arches = [d['spec'] for d in data]

        next_arches = []
        for arch in arches:
            d = {}
            d['spec'] = arch
            next_arches.append(d)

    else:
        # get the data from prior iterations from pickle files
        data = []
        for i in range(query):

            filepath = '{}{}_{}.pkl'.format(save_dir, trained_prefix, i)
            with open(filepath, 'rb') as f:
                arch = pickle.load(f)
            data.append(arch)

        print('Iteration {}'.format(query))
        print('Data from last round')
        print(data)

        # run the meta neural net to output the next arches
        next_arches = run_meta_neuralnet(search_space, data, k=k)

    print('next batch')
    print(next_arches)

    # output the new arches to pickle files
    for i in range(k):
        index = query + i
        filepath = '{}{}_{}.pkl'.format(save_dir, untrained_prefix, index)
        next_arches[i]['index'] = index
        next_arches[i]['filepath'] = filepath
        with open(filepath, 'wb') as f:
            pickle.dump(next_arches[i], f)


def main(args):

    #set up save dir
    save_dir = './'

    #set up logging
    log_format = '%(asctime)s %(message)s'
    logging.basicConfig(stream=sys.stdout, level=logging.INFO,
        format=log_format, datefmt='%m/%d %I:%M:%S %p')
    fh = logging.FileHandler(os.path.join(save_dir, 'log.txt'))
    fh.setFormatter(logging.Formatter(log_format))
    logging.getLogger().addHandler(fh)
    logging.info(args)

    run(args)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Args for meta neural net')
    parser.add_argument('--experiment_name', type=str, default='darts_test', help='Folder for input/output files')
    parser.add_argument('--params', type=str, default='test', help='Which set of params to use')
    parser.add_argument('--query', type=int, default=0, help='Which query is Neural BayesOpt on')
    parser.add_argument('--trained_filename', type=str, default='trained_spec', help='name of input files')
    parser.add_argument('--untrained_filename', type=str, default='untrained_spec', help='name of output files')
    parser.add_argument('--k', type=int, default=10, help='number of arches to train per iteration')
    parser.add_argument('--threshold', type=int, default=20, help='throw out arches with val loss above threshold')

    args = parser.parse_args()
    main(args)

================================================
FILE: nas_algorithms.py
================================================
import itertools
import os
import pickle
import sys
import copy
import numpy as np
import tensorflow as tf
from argparse import Namespace

from data import Data


def run_nas_algorithm(algo_params, search_space, mp):

    # run nas algorithm
    ps = copy.deepcopy(algo_params)
    algo_name = ps.pop('algo_name')

    if algo_name == 'random':
        data = random_search(search_space, **ps)
    elif algo_name == 'evolution':
        data = evolution_search(search_space, **ps)
    elif algo_name == 'bananas':
        data = bananas(search_space, mp, **ps)
    elif algo_name == 'gp_bayesopt':
        data = gp_bayesopt_search(search_space, **ps)
    elif algo_name == 'dngo':
        data = dngo_search(search_space, **ps)
    else:
        print('invalid algorithm name')
        sys.exit()

    k = 10
    if 'k' in ps:
        k = ps['k']
    total_queries = 150
    if 'total_queries' in ps:
        total_queries = ps['total_queries']
    loss = 'val_loss'
    if 'loss' in ps:
        loss = ps['loss']

    return compute_best_test_losses(data, k, total_queries, loss), data


def compute_best_test_losses(data, k, total_queries, loss):
    """
    Given full data from a completed nas algorithm,
    output the test error of the arch with the best val error 
    after every multiple of k
    """
    results = []
    for query in range(k, total_queries + k, k):
        best_arch = sorted(data[:query], key=lambda i:i[loss])[0]
        test_error = best_arch['test_loss']
        results.append((query, test_error))

    return results


def random_search(search_space,
                  total_queries=150, 
                  loss='val_loss',
                  deterministic=True,
                  verbose=1):
    """ 
    random search
    """
    data = search_space.generate_random_dataset(num=total_queries, 
                                                encoding_type='adj',
                                                deterministic_loss=deterministic)
    
    if verbose:
        top_5_loss = sorted([d[loss] for d in data])[:min(5, len(data))]
        print('random, query {}, top 5 losses {}'.format(total_queries, top_5_loss))    
    return data


def evolution_search(search_space,
                     total_queries=150,
                     num_init=10,
                     k=10,
                     loss='val_loss',
                     population_size=30,                       
                     tournament_size=10,
                     mutation_rate=1.0, 
                     deterministic=True,
                     regularize=True,
                     verbose=1):
    """
    regularized evolution
    """
    data = search_space.generate_random_dataset(num=num_init, 
                                                deterministic_loss=deterministic)

    losses = [d[loss] for d in data]
    query = num_init
    population = [i for i in range(min(num_init, population_size))]

    while query <= total_queries:

        # evolve the population by mutating the best architecture
        # from a random subset of the population
        sample = np.random.choice(population, tournament_size)
        best_index = sorted([(i, losses[i]) for i in sample], key=lambda i:i[1])[0][0]
        mutated = search_space.mutate_arch(data[best_index]['spec'],
                                           mutation_rate=mutation_rate)
        arch_dict = search_space.query_arch(mutated, deterministic=deterministic)
        data.append(arch_dict)        
        losses.append(arch_dict[loss])
        population.append(len(data) - 1)

        # kill the oldest (or worst) from the population
        if len(population) >= population_size:
            if regularize:
                oldest_index = sorted([i for i in population])[0]
                population.remove(oldest_index)
            else:
                worst_index = sorted([(i, losses[i]) for i in population], key=lambda i:i[1])[-1][0]
                population.remove(worst_index)

        if verbose and (query % k == 0):
            top_5_loss = sorted([d[loss] for d in data])[:min(5, len(data))]
            print('evolution, query {}, top 5 losses {}'.format(query, top_5_loss))

        query += 1

    return data


def bananas(search_space, 
            metann_params,
            num_init=10, 
            k=10, 
            loss='val_loss',
            total_queries=150, 
            num_ensemble=5, 
            acq_opt_type='mutation',
            num_arches_to_mutate=1,
            explore_type='its',
            encoding_type='trunc_path',
            cutoff=40,
            deterministic=True,
            verbose=1):
    """
    Bayesian optimization with a neural network model
    """
    from acquisition_functions import acq_fn
    from meta_neural_net import MetaNeuralnet

    data = search_space.generate_random_dataset(num=num_init, 
                                                encoding_type=encoding_type, 
                                                cutoff=cutoff,
                                                deterministic_loss=deterministic)

    query = num_init + k

    while query <= total_queries:

        xtrain = np.array([d['encoding'] for d in data])
        ytrain = np.array([d[loss] for d in data])

        if (query == num_init + k) and verbose:
            print('bananas xtrain shape', xtrain.shape)
            print('bananas ytrain shape', ytrain.shape)

        # get a set of candidate architectures
        candidates = search_space.get_candidates(data, 
                                                 acq_opt_type=acq_opt_type,
                                                 encoding_type=encoding_type, 
                                                 cutoff=cutoff,
                                                 num_arches_to_mutate=num_arches_to_mutate,
                                                 loss=loss,
                                                 deterministic_loss=deterministic)

        xcandidates = np.array([c['encoding'] for c in candidates])
        candidate_predictions = []

        # train an ensemble of neural networks
        train_error = 0
        for _ in range(num_ensemble):
            meta_neuralnet = MetaNeuralnet()
            train_error += meta_neuralnet.fit(xtrain, ytrain, **metann_params)

            # predict the validation loss of the candidate architectures
            candidate_predictions.append(np.squeeze(meta_neuralnet.predict(xcandidates)))

            # clear the tensorflow graph
            tf.reset_default_graph()

        tf.keras.backend.clear_session()

        train_error /= num_ensemble
        if verbose:
            print('query {}, Meta neural net train error: {}'.format(query, train_error))

        # compute the acquisition function for all the candidate architectures
        candidate_indices = acq_fn(candidate_predictions, explore_type)

        # add the k arches with the minimum acquisition function values
        for i in candidate_indices[:k]:

            arch_dict = search_space.query_arch(candidates[i]['spec'],
                                                encoding_type=encoding_type,
                                                cutoff=cutoff,
                                                deterministic=deterministic)
            data.append(arch_dict)

        if verbose:
            top_5_loss = sorted([(d[loss], d['epochs']) for d in data], key=lambda d: d[0])[:min(5, len(data))]
            print('bananas, query {}, top 5 (loss, epochs): {}'.format(query, top_5_loss))
            recent_10_loss = [(d[loss], d['epochs']) for d in data[-10:]]
            print('bananas, query {}, most recent 10 (loss, epochs): {}'.format(query, recent_10_loss))

        query += k

    return data
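

# Illustrative sketch (editor's example): running BANANAS directly, mirroring
# run_experiments_sequential.py (assumes the NASBench tfrecord is available):
#
#   from params import meta_neuralnet_params
#   mp = meta_neuralnet_params('nasbench')
#   ss = Data(mp.pop('search_space'), dataset=mp.pop('dataset'))
#   results = bananas(ss, mp, num_init=10, k=10, total_queries=50)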


def gp_bayesopt_search(search_space,
                        num_init=10,
                        k=10,
                        total_queries=150,
                        distance='edit_distance',
                        deterministic=True,
                        tmpdir='./temp',
                        max_iter=200,
                        mode='single_process',
                        nppred=1000):
    """
    Bayesian optimization with a GP prior
    """
    from bo.bo.probo import ProBO

    # set up the path for auxiliary pickle files
    if not os.path.exists(tmpdir):
        os.mkdir(tmpdir)
    aux_file_path = os.path.join(tmpdir, 'aux.pkl')

    num_iterations = total_queries - num_init

    # black-box function that bayesopt will optimize
    def fn(arch):
        return search_space.query_arch(arch, deterministic=deterministic)['val_loss']

    # set all the parameters for the various BayesOpt classes
    fhp = Namespace(fhstr='object', namestr='train')
    domp = Namespace(dom_str='list', set_domain_list_auto=True,
                     aux_file_path=aux_file_path,
                     distance=distance)
    modelp = Namespace(kernp=Namespace(ls=3., alpha=1.5, sigma=1e-5),
                       infp=Namespace(niter=num_iterations, nwarmup=500),
                       distance=distance, search_space=search_space.get_type())
    amp = Namespace(am_str='mygpdistmat_ucb', nppred=nppred, modelp=modelp)
    optp = Namespace(opt_str='rand', max_iter=max_iter)
    makerp = Namespace(domp=domp, amp=amp, optp=optp)
    probop = Namespace(niter=num_iterations, fhp=fhp,
                       makerp=makerp, tmpdir=tmpdir, mode=mode)
    data = Namespace()

    # Set up initial data
    init_data = search_space.generate_random_dataset(num=num_init, 
                                                     deterministic_loss=deterministic)
    data.X = [d['spec'] for d in init_data]
    data.y = np.array([[d['val_loss']] for d in init_data])

    # initialize aux file
    pairs = [(data.X[i], data.y[i]) for i in range(len(data.y))]
    pairs.sort(key=lambda x: x[1])
    with open(aux_file_path, 'wb') as f:
        pickle.dump(pairs, f)

    # run Bayesian Optimization
    bo = ProBO(fn, search_space, aux_file_path, data, probop, True)
    bo.run_bo()

    # get the validation and test loss for all architectures chosen by BayesOpt
    results = []
    for arch in data.X:
        archtuple = search_space.query_arch(arch)
        results.append(archtuple)

    return results


def dngo_search(search_space,
                num_init=10,
                k=10,
                loss='val_loss',
                total_queries=150,
                encoding_type='path',
                cutoff=40,
                acq_opt_type='mutation',
                explore_type='ucb',
                deterministic=True,
                verbose=True):

    import torch
    from pybnn import DNGO
    from pybnn.util.normalization import zero_mean_unit_var_normalization, zero_mean_unit_var_denormalization
    from acquisition_functions import acq_fn

    def fn(arch):
        return search_space.query_arch(arch, deterministic=deterministic)[loss]

    # set up initial data
    data = search_space.generate_random_dataset(num=num_init, 
                                                encoding_type=encoding_type,
                                                cutoff=cutoff,
                                                deterministic_loss=deterministic)

    query = num_init + k

    while query <= total_queries:

        # set up data
        x = np.array([d['encoding'] for d in data])
        y = np.array([d[loss] for d in data])

        # get a set of candidate architectures
        candidates = search_space.get_candidates(data, 
                                                 acq_opt_type=acq_opt_type,
                                                 encoding_type=encoding_type, 
                                                 cutoff=cutoff,
                                                 deterministic_loss=deterministic)

        xcandidates = np.array([d['encoding'] for d in candidates])

        # train the model
        model = DNGO(do_mcmc=False)
        model.train(x, y, do_optimize=True)

        predictions = model.predict(xcandidates)
        candidate_indices = acq_fn(np.array(predictions), explore_type)

        # add the k arches with the minimum acquisition function values
        for i in candidate_indices[:k]:
            arch_dict = search_space.query_arch(candidates[i]['spec'],
                                                encoding_type=encoding_type,
                                                cutoff=cutoff,
                                                deterministic=deterministic)
            data.append(arch_dict)

        if verbose:
            top_5_loss = sorted([(d[loss], d['epochs']) for d in data], key=lambda d: d[0])[:min(5, len(data))]
            print('dngo, query {}, top 5 (val loss, epochs): {}'.format(query, top_5_loss))
            recent_10_loss = [(d[loss], d['epochs']) for d in data[-10:]]
            print('dngo, query {}, most recent 10 (val loss, epochs): {}'.format(query, recent_10_loss))

        query += k

    return data


================================================
FILE: nas_bench/__init__.py
================================================



================================================
FILE: nas_bench/cell.py
================================================
import numpy as np
import copy
import itertools
import random
import sys
import os
import pickle

from nasbench import api


INPUT = 'input'
OUTPUT = 'output'
CONV3X3 = 'conv3x3-bn-relu'
CONV1X1 = 'conv1x1-bn-relu'
MAXPOOL3X3 = 'maxpool3x3'
OPS = [CONV3X3, CONV1X1, MAXPOOL3X3]

NUM_VERTICES = 7
OP_SPOTS = NUM_VERTICES - 2
MAX_EDGES = 9


class Cell:

    def __init__(self, matrix, ops):

        self.matrix = matrix
        self.ops = ops

    def serialize(self):
        return {
            'matrix': self.matrix,
            'ops': self.ops
        }

    def modelspec(self):
        return api.ModelSpec(matrix=self.matrix, ops=self.ops)

    @classmethod
    def random_cell(cls, nasbench):
        """ 
        From the NASBench repository 

        one-hot adjacency matrix
        draw [0,1] for each slot in the adjacency matrix
        """
        while True:
            matrix = np.random.choice(
                [0, 1], size=(NUM_VERTICES, NUM_VERTICES))
            matrix = np.triu(matrix, 1)
            ops = np.random.choice(OPS, size=NUM_VERTICES).tolist()
            ops[0] = INPUT
            ops[-1] = OUTPUT
            spec = api.ModelSpec(matrix=matrix, ops=ops)
            if nasbench.is_valid(spec):
                return {
                    'matrix': matrix,
                    'ops': ops
                }

    def get_val_loss(self, nasbench, deterministic=1, patience=50, epochs=None, dataset=None):
        if not deterministic:
            # output one of the three validation accuracies at random
            if epochs:
                return (100*(1 - nasbench.query(api.ModelSpec(matrix=self.matrix, ops=self.ops), epochs=epochs)['validation_accuracy']))
            else:
                return (100*(1 - nasbench.query(api.ModelSpec(matrix=self.matrix, ops=self.ops))['validation_accuracy']))
        else:        
            # query the api until we see all three accuracies, then average them
            # a few architectures only have two accuracies, so we use patience to avoid an infinite loop
            accs = []
            while len(accs) < 3 and patience > 0:
                patience -= 1
                if epochs:
                    acc = nasbench.query(api.ModelSpec(matrix=self.matrix, ops=self.ops), epochs=epochs)['validation_accuracy']
                else:
                    acc = nasbench.query(api.ModelSpec(matrix=self.matrix, ops=self.ops))['validation_accuracy']
                if acc not in accs:
                    accs.append(acc)
            return round(100*(1-np.mean(accs)), 4)            


    def get_test_loss(self, nasbench, patience=50, epochs=None, dataset=None):
        """
        query the api until we see all three accuracies, then average them
        a few architectures only have two accuracies, so we use patience to avoid an infinite loop
        """
        accs = []
        while len(accs) < 3 and patience > 0:
            patience -= 1
            if epochs:
                acc = nasbench.query(api.ModelSpec(matrix=self.matrix, ops=self.ops), epochs=epochs)['test_accuracy']
            else:
                acc = nasbench.query(api.ModelSpec(matrix=self.matrix, ops=self.ops))['test_accuracy']
            if acc not in accs:
                accs.append(acc)
        return round(100*(1-np.mean(accs)), 4)

    def get_num_params(self, nasbench):
        return nasbench.query(api.ModelSpec(matrix=self.matrix, ops=self.ops))['trainable_parameters']

    def perturb(self, nasbench, edits=1):
        """ 
        create a new perturbed cell
        inspired by https://github.com/google-research/nasbench
        """
        new_matrix = copy.deepcopy(self.matrix)
        new_ops = copy.deepcopy(self.ops)
        for _ in range(edits):
            while True:
                if np.random.random() < 0.5:
                    for src in range(0, NUM_VERTICES - 1):
                        for dst in range(src+1, NUM_VERTICES):
                            new_matrix[src][dst] = 1 - new_matrix[src][dst]
                else:
                    for ind in range(1, NUM_VERTICES - 1):
                        available = [op for op in OPS if op != new_ops[ind]]
                        new_ops[ind] = np.random.choice(available)

                new_spec = api.ModelSpec(new_matrix, new_ops)
                if nasbench.is_valid(new_spec):
                    break
        return {
            'matrix': new_matrix,
            'ops': new_ops
        }

    def mutate(self, 
               nasbench, 
               mutation_rate=1.0, 
               patience=5000):
        """
        A stochastic approach to perturbing the cell
        inspired by https://github.com/google-research/nasbench
        """
        p = 0
        while p < patience:
            p += 1
            new_matrix = copy.deepcopy(self.matrix)
            new_ops = copy.deepcopy(self.ops)

            edge_mutation_prob = mutation_rate / (NUM_VERTICES * (NUM_VERTICES - 1) / 2)
            # flip each edge with probability mutation_rate / 21 so the
            # expected number of edge flips is mutation_rate; op mutations
            # below are scaled the same way
            for src in range(0, NUM_VERTICES - 1):
                for dst in range(src + 1, NUM_VERTICES):
                    if random.random() < edge_mutation_prob:
                        new_matrix[src, dst] = 1 - new_matrix[src, dst]

            op_mutation_prob = mutation_rate / OP_SPOTS
            for ind in range(1, OP_SPOTS + 1):
                if random.random() < op_mutation_prob:
                    available = [o for o in OPS if o != new_ops[ind]]
                    new_ops[ind] = random.choice(available)

            new_spec = api.ModelSpec(new_matrix, new_ops)
            if nasbench.is_valid(new_spec):
                return {
                    'matrix': new_matrix,
                    'ops': new_ops
                }
        return self.mutate(nasbench, mutation_rate+1)

    def encode_standard(self):
        """ 
        compute the "standard" encoding,
        i.e. adjacency matrix + op list encoding 
        """
        encoding_length = (NUM_VERTICES ** 2 - NUM_VERTICES) // 2 + OP_SPOTS
        encoding = np.zeros((encoding_length))
        dic = {CONV1X1: 0., CONV3X3: 0.5, MAXPOOL3X3: 1.0}
        n = 0
        for i in range(NUM_VERTICES - 1):
            for j in range(i+1, NUM_VERTICES):
                encoding[n] = self.matrix[i][j]
                n += 1
        for i in range(1, NUM_VERTICES - 1):
            encoding[-i] = dic[self.ops[i]]
        return tuple(encoding)

    def get_paths(self):
        """ 
        return all paths from input to output
        """
        paths = []
        for j in range(0, NUM_VERTICES):
            paths.append([[]] if self.matrix[0][j] else [])
        
        # create paths sequentially
        for i in range(1, NUM_VERTICES - 1):
            for j in range(1, NUM_VERTICES):
                if self.matrix[i][j]:
                    for path in paths[i]:
                        paths[j].append([*path, self.ops[i]])
        return paths[-1]

    def get_path_indices(self):
        """
        compute the index of each path
        There are 3^0 + ... + 3^5 paths total.
        (Paths can be length 0 to 5, and for each path, for each node, there
        are three choices for the operation.)
        """
        paths = self.get_paths()
        mapping = {CONV3X3: 0, CONV1X1: 1, MAXPOOL3X3: 2}
        path_indices = []

        for path in paths:
            index = 0
            for i in range(NUM_VERTICES - 1):
                if i == len(path):
                    path_indices.append(index)
                    break
                else:
                    index += len(OPS) ** i * (mapping[path[i]] + 1)

        path_indices.sort()
        return tuple(path_indices)

    def encode_paths(self):
        """ output one-hot encoding of paths """
        num_paths = sum([len(OPS) ** i for i in range(OP_SPOTS + 1)])
        path_indices = self.get_path_indices()
        encoding = np.zeros(num_paths)
        for index in path_indices:
            encoding[index] = 1
        return encoding

    def path_distance(self, other):
        """ 
        compute the distance between two architectures
        by comparing their path encodings
        """
        return np.sum(np.array(self.encode_paths()) != np.array(other.encode_paths()))

    def trunc_path_distance(self, other, cutoff=40):
        """ 
        compute the distance between two architectures
        by comparing their truncated path encodings
        """
        encoding = self.encode_paths()[:cutoff]
        other_encoding = other.encode_paths()[:cutoff]
        return np.sum(np.array(encoding) != np.array(other_encoding))

    def edit_distance(self, other):
        """
        compute the distance between two architectures
        by comparing their adjacency matrices and op lists
        """
        graph_dist = np.sum(np.array(self.matrix) != np.array(other.matrix))
        ops_dist = np.sum(np.array(self.ops) != np.array(other.ops))
        return (graph_dist + ops_dist)

    def nasbot_distance(self, other):
        # distance based on optimal transport between row sums, column sums, and ops

        row_sums = sorted(np.array(self.matrix).sum(axis=0))
        col_sums = sorted(np.array(self.matrix).sum(axis=1))

        other_row_sums = sorted(np.array(other.matrix).sum(axis=0))
        other_col_sums = sorted(np.array(other.matrix).sum(axis=1))

        row_dist = np.sum(np.abs(np.subtract(row_sums, other_row_sums)))
        col_dist = np.sum(np.abs(np.subtract(col_sums, other_col_sums)))

        counts = [self.ops.count(op) for op in OPS]
        other_counts = [other.ops.count(op) for op in OPS]

        ops_dist = np.sum(np.abs(np.subtract(counts, other_counts)))

        return (row_dist + col_dist + ops_dist)
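

# Illustrative sketch (editor's example; assumes a loaded `nasbench` API object):
#
#   c1 = Cell(**Cell.random_cell(nasbench))
#   c2 = Cell(**Cell.random_cell(nasbench))
#   print(c1.edit_distance(c2), c1.path_distance(c2), c1.nasbot_distance(c2))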



================================================
FILE: nas_bench_201/__init__.py
================================================



================================================
FILE: nas_bench_201/cell.py
================================================
import numpy as np
import copy
import itertools
import random
import sys
import os
import pickle


OPS = ['avg_pool_3x3', 'nor_conv_1x1', 'nor_conv_3x3', 'none', 'skip_connect']
NUM_OPS = len(OPS)
OP_SPOTS = 6
LONGEST_PATH_LENGTH = 3

class Cell:

    def __init__(self, string):
        self.string = string

    def get_string(self):
        return self.string

    def serialize(self):
        return {
            'string':self.string
        }

    @classmethod
    def random_cell(cls, nasbench, max_nodes=4):
        """
        From the AutoDL-Projects repository
        """
        ops = []
        for i in range(OP_SPOTS):
            op = random.choice(OPS)
            ops.append(op)
        return {'string':cls.get_string_from_ops(ops)}


    def get_runtime(self, nasbench, dataset='cifar100'):
        index = nasbench.query_index_by_arch(self.string)
        return nasbench.query_by_index(index, dataset).get_eval('x-valid')['time']

    def get_val_loss(self, nasbench, deterministic=1, dataset='cifar100'):
        index = nasbench.query_index_by_arch(self.string)
        if dataset == 'cifar10':
            results = nasbench.query_by_index(index, 'cifar10-valid')
        else:
            results = nasbench.query_by_index(index, dataset)

        accs = []
        for key in results.keys():
            accs.append(results[key].get_eval('x-valid')['accuracy'])

        if deterministic:
            return round(100-np.mean(accs), 10)   
        else:
            return round(100-np.random.choice(accs), 10)

    def get_test_loss(self, nasbench, dataset='cifar100', deterministic=1):
        index = nasbench.query_index_by_arch(self.string)
        results = nasbench.query_by_index(index, dataset)

        accs = []
        for key in results.keys():
            accs.append(results[key].get_eval('ori-test')['accuracy'])

        if deterministic:
            return round(100-np.mean(accs), 4)   
        else:
            return round(100-np.random.choice(accs), 4)

    def get_op_list(self):
        # given a string, get the list of operations
        tokens = self.string.split('|')
        ops = [t.split('~')[0] for i,t in enumerate(tokens) if i not in [0,2,5,9]]
        return ops

    def get_num(self):
        # compute the unique number of the architecture, in [0, 15624]
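        # (5 ops at each of 6 spots gives 5**6 = 15625 distinct cells)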
        ops = self.get_op_list()
        index = 0
        for i, op in enumerate(ops):
            index += OPS.index(op) * NUM_OPS ** i
        return index

    @classmethod
    def get_string_from_ops(cls, ops):
        # given a list of operations, get the string
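        # e.g. six 'skip_connect' ops yield
        # '|skip_connect~0|+|skip_connect~0|skip_connect~1|+|skip_connect~0|skip_connect~1|skip_connect~2|'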
        strings = ['|']
        nodes = [0, 0, 1, 0, 1, 2]
        for i, op in enumerate(ops):
            strings.append(op+'~{}|'.format(nodes[i]))
            if i < len(nodes) - 1 and nodes[i+1] == 0:
                strings.append('+|')
        return ''.join(strings)

    def perturb(self, nasbench,
                mutation_rate=1):
        # more deterministic version of mutate
        ops = self.get_op_list()
        new_ops = []
        num = np.random.choice(len(ops))
        for i, op in enumerate(ops):
            if i == num:
                available = [o for o in OPS if o != op]
                new_ops.append(np.random.choice(available))
            else:
                new_ops.append(op)
        return {'string':self.get_string_from_ops(new_ops)}

    def mutate(self, 
               nasbench, 
               mutation_rate=1.0, 
               patience=5000):

        ops = self.get_op_list()
        new_ops = []
        # keeping mutation_prob consistent with nasbench_101
        mutation_prob = mutation_rate / (OP_SPOTS - 2)

        for i, op in enumerate(ops):
            if random.random() < mutation_prob:
                available = [o for o in OPS if o != op]
                new_ops.append(random.choice(available))
            else:
                new_ops.append(op)

        return {'string':self.get_string_from_ops(new_ops)}

    def encode_standard(self):
        """ 
        compute the standard encoding
        """
        ops = self.get_op_list()
        encoding = []
        for op in ops:
            encoding.append(OPS.index(op))

        return encoding

    def get_num_params(self, nasbench):
        # todo update to the newer nasbench-201 dataset
        return 100

    def get_paths(self):
        """ 
        return all paths from input to output
        """
        path_blueprints = [[3], [0,4], [1,5], [0,2,5]]
        ops = self.get_op_list()
        paths = []
        for blueprint in path_blueprints:
            paths.append([ops[node] for node in blueprint])

        return paths

    def get_path_indices(self):
        """
        compute the index of each path
        """
        paths = self.get_paths()
        path_indices = []

        for i, path in enumerate(paths):
            if i == 0:
                index = 0
            elif i in [1, 2]:
                index = NUM_OPS
            else:
                index = NUM_OPS + NUM_OPS ** 2
            for j, op in enumerate(path):
                index += OPS.index(op) * NUM_OPS ** j
            path_indices.append(index)

        return tuple(path_indices)

    def encode_paths(self):
        """ output one-hot encoding of paths """
        num_paths = sum([NUM_OPS ** i for i in range(1, LONGEST_PATH_LENGTH + 1)])
        path_indices = self.get_path_indices()
        encoding = np.zeros(num_paths)
        for index in path_indices:
            encoding[index] = 1
        return encoding

    def path_distance(self, other):
        """ 
        compute the distance between two architectures
        by comparing their path encodings
        """
        return np.sum(np.array(self.encode_paths()) != np.array(other.encode_paths()))

    def trunc_path_distance(self, other, cutoff=30):
        """ 
        compute the distance between two architectures
        by comparing their truncated path encodings
        """
        paths = np.array(self.encode_paths()[:cutoff])
        other_paths = np.array(other.encode_paths()[:cutoff])
        return np.sum(paths != other_paths)

    def edit_distance(self, other):

        ops = self.get_op_list()
        other_ops = other.get_op_list()
        return np.sum([1 for i in range(len(ops)) if ops[i] != other_ops[i]])

    def nasbot_distance(self, other):
        # distance based on optimal transport between row sums, column sums, and ops

        ops = self.get_op_list()
        other_ops = other.get_op_list()

        counts = [ops.count(op) for op in OPS]
        other_counts = [other_ops.count(op) for op in OPS]
        ops_dist = np.sum(np.abs(np.subtract(counts, other_counts)))

        return ops_dist + self.edit_distance(other)


================================================
FILE: params.py
================================================
import sys


def algo_params(param_str):
    """
      Return params list based on param_str.
      These are the parameters used to produce the figures in the paper
      For AlphaX and Reinforcement Learning, we used the corresponding github repos:
      https://github.com/linnanwang/AlphaX-NASBench101
      https://github.com/automl/nas_benchmarks
    """
    params = []

    if param_str == 'test':
        params.append({'algo_name':'random', 'total_queries':30})
        params.append({'algo_name':'evolution', 'total_queries':30})
        params.append({'algo_name':'bananas', 'total_queries':30})   
        params.append({'algo_name':'gp_bayesopt', 'total_queries':30})
        params.append({'algo_name':'dngo', 'total_queries':30})

    elif param_str == 'test_simple': 
        params.append({'algo_name':'random', 'total_queries':30})
        params.append({'algo_name':'evolution', 'total_queries':30})

    elif param_str == 'random': 
        params.append({'algo_name':'random', 'total_queries':10})

    elif param_str == 'bananas':
        params.append({'algo_name':'bananas', 'total_queries':150, 'verbose':0})

    elif param_str == 'main_experiments':
        params.append({'algo_name':'random', 'total_queries':150})
        params.append({'algo_name':'evolution', 'total_queries':150})
        params.append({'algo_name':'bananas', 'total_queries':150})  
        params.append({'algo_name':'gp_bayesopt', 'total_queries':150})        
        params.append({'algo_name':'dngo', 'total_queries':150})

    elif param_str == 'ablation':
        params.append({'algo_name':'bananas', 'total_queries':150})   
        params.append({'algo_name':'bananas', 'total_queries':150, 'encoding_type':'adj'})
        params.append({'algo_name':'gp_bayesopt', 'total_queries':150, 'distance':'path_distance'})
        params.append({'algo_name':'gp_bayesopt', 'total_queries':150, 'distance':'edit_distance'})
        params.append({'algo_name':'bananas', 'total_queries':150, 'acq_opt_type':'random'})

    else:
        print('invalid algorithm params: {}'.format(param_str))
        sys.exit()

    print('\n* Running experiment: ' + param_str)
    return params


def meta_neuralnet_params(param_str):

    if param_str == 'nasbench':
        params = {'search_space':'nasbench', 'dataset':'cifar10', 'loss':'mae', 'num_layers':10, 'layer_width':20, \
            'epochs':150, 'batch_size':32, 'lr':.01, 'regularization':0, 'verbose':0}

    elif param_str == 'darts':
        params = {'search_space':'darts', 'dataset':'cifar10', 'loss':'mape', 'num_layers':10, 'layer_width':20, \
            'epochs':10000, 'batch_size':32, 'lr':.00001, 'regularization':0, 'verbose':0}

    elif param_str == 'nasbench_201_cifar10':
        params = {'search_space':'nasbench_201', 'dataset':'cifar10', 'loss':'mae', 'num_layers':10, 'layer_width':20, \
            'epochs':150, 'batch_size':32, 'lr':.01, 'regularization':0, 'verbose':0}

    elif param_str == 'nasbench_201_cifar100':
        params = {'search_space':'nasbench_201', 'dataset':'cifar100', 'loss':'mae', 'num_layers':10, 'layer_width':20, \
            'epochs':150, 'batch_size':32, 'lr':.01, 'regularization':0, 'verbose':0}

    elif param_str == 'nasbench_201_imagenet':
        params = {'search_space':'nasbench_201', 'dataset':'ImageNet16-120', 'loss':'mae', 'num_layers':10, 'layer_width':20, \
            'epochs':150, 'batch_size':32, 'lr':.01, 'regularization':0, 'verbose':0}

    else:
        print('invalid meta neural net params: {}'.format(param_str))
        sys.exit()

    return params


================================================
FILE: run_experiments_parallel.sh
================================================

param_str=fifty_epochs
experiment_name=bananas

# set all instance names and zones
instances=(bananas-t4-1-vm bananas-t4-2-vm bananas-t4-3-vm bananas-t4-4-vm \
	bananas-t4-5-vm bananas-t4-6-vm bananas-t4-7-vm bananas-t4-8-vm \
	bananas-t4-9-vm bananas-t4-10-vm)

zones=(us-west1-b us-west1-b us-west1-b us-west1-b us-west1-b us-west1-b \
	us-west1-b us-west1-b us-west1-b us-west1-b)

# set parameters based on the param string
if [ $param_str = test ]; then
	start_iteration=0
	end_iteration=1
	k=10
	untrained_filename=untrained_spec
	trained_filename=trained_spec
	epochs=1
fi
if [ $param_str = fifty_epochs ]; then
	start_iteration=0
	end_iteration=9
	k=10
	untrained_filename=untrained_spec
	trained_filename=trained_spec
	epochs=50
fi

# start bananas
for i in $(seq $start_iteration $end_iteration)
do 
	start=$((i*k))
	end=$(( (i+1)*k - 1 ))

	# train the neural net
	# input: all pickle files with index from 0 to i*k-1
	# output: k pickle files for the architectures to train next (indices i*k to (i+1)*k-1)
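	# worked example: with k=10 and i=2, start=20 and end=29, so this
	# iteration reads trained_spec_0.pkl..trained_spec_19.pkl and asks the
	# meta neural net for queries 20..29
	# note (assumption): $nas_params must be set before this call; it is
	# not defined anywhere in this script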
	echo about to run meta neural network in iteration $i
	python3 metann_runner.py --experiment_name $experiment_name --params $nas_params --k $k \
		--untrained_filename $untrained_filename --trained_filename $trained_filename --query $start
	echo outputted architectures to train in iteration $i

	# train the k architectures
	max_j=$((k-1))
	for j in $(seq 0 $max_j )
	do
		query=$((i*k+j))
		instance=${instances[$j]}
		zone=${zones[$j]}
		untrained_filepath=$experiment_name/$untrained_filename\_$query.pkl
		trained_filepath=$experiment_name/$trained_filename\_$query.pkl

		echo about to copy file $untrained_filepath to instance $instance
		gcloud compute scp $untrained_filepath $instance:~/naszilla/$experiment_name/ --zone $zone

		echo about to ssh into instance $instance
		gcloud compute ssh $instance --zone $zone --command="cd naszilla; \
		python3 train_arch_runner.py --untrained_filepath $untrained_filepath \
		--trained_filepath $trained_filepath --epochs $epochs" &
	done
	wait
	echo all architectures trained in iteration $i

	# copy results of trained architectures to the master CPU
	max_j=$((k-1))
	for j in $(seq 0 $max_j )
	do
		query=$((i*k+j))
		instance=${instances[$j]}
		zone=${zones[$j]}
		trained_filepath=$experiment_name/$trained_filename\_$query.pkl
		gcloud compute scp $instance:~/naszilla/$trained_filepath $experiment_name --zone $zone
	done
	echo finished iteration $i
done
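
# assumed setup (sketch): every instance in $instances already exists in its
# matching zone in $zones, has this repo cloned at ~/naszilla, and can run
# train_arch_runner.py; the meta neural net (metann_runner.py) runs on the
# local master machine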



================================================
FILE: run_experiments_sequential.py
================================================
import argparse
import time
import logging
import sys
import os
import pickle
import numpy as np
import copy

from params import *


def run_experiments(args, save_dir):

    os.environ['search_space'] = args.search_space

    from nas_algorithms import run_nas_algorithm
    from data import Data

    trials = args.trials
    out_file = args.output_filename
    save_specs = args.save_specs
    metann_params = meta_neuralnet_params(args.search_space)
    algorithm_params = algo_params(args.algo_params)
    num_algos = len(algorithm_params)
    logging.info(algorithm_params)

    # set up search space
    mp = copy.deepcopy(metann_params)
    ss = mp.pop('search_space')
    dataset = mp.pop('dataset')
    search_space = Data(ss, dataset=dataset)

    for i in range(trials):
        results = []
        walltimes = []
        run_data = []

        for j in range(num_algos):
            # run NAS algorithm
            print('\n* Running algorithm: {}'.format(algorithm_params[j]))
            starttime = time.time()
            algo_result, run_datum = run_nas_algorithm(algorithm_params[j], search_space, mp)
            algo_result = np.round(algo_result, 5)

            # remove unnecessary dict entries that take up space
            for d in run_datum:
                if not save_specs:
                    d.pop('spec')
                for key in ['encoding', 'adjacency', 'path', 'dist_to_min']:
                    if key in d:
                        d.pop(key)

            # add walltime, results, run_data
            walltimes.append(time.time()-starttime)
            results.append(algo_result)
            run_data.append(run_datum)

        # print and pickle results
        filename = os.path.join(save_dir, '{}_{}.pkl'.format(out_file, i))
        print('\n* Trial summary: (params, results, walltimes)')
        print(algorithm_params)
        print(metann_params)
        print(results)
        print(walltimes)
        print('\n* Saving to file {}'.format(filename))
        with open(filename, 'wb') as f:
            pickle.dump([algorithm_params, metann_params, results, walltimes, run_data], f)


def main(args):

    # make save directory
    save_dir = args.save_dir
    if not os.path.exists(save_dir):
        os.mkdir(save_dir)

    algo_params = args.algo_params
    save_path = save_dir + '/' + algo_params + '/'
    if not os.path.exists(save_path):
        os.mkdir(save_path)

    # set up logging
    log_format = '%(asctime)s %(message)s'
    logging.basicConfig(stream=sys.stdout, level=logging.INFO,
        format=log_format, datefmt='%m/%d %I:%M:%S %p')
    fh = logging.FileHandler(os.path.join(save_dir, 'log.txt'))
    fh.setFormatter(logging.Formatter(log_format))
    logging.getLogger().addHandler(fh)
    logging.info(args)

    run_experiments(args, save_path)
    

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Args for BANANAS experiments')
    parser.add_argument('--trials', type=int, default=500, help='Number of trials')
    parser.add_argument('--search_space', type=str, default='nasbench', \
        help='nasbench, darts, or a nasbench_201 setting (e.g. nasbench_201_cifar10)')
    parser.add_argument('--algo_params', type=str, default='main_experiments', help='which parameters to use')
    parser.add_argument('--output_filename', type=str, default='round', help='name of output files')
    parser.add_argument('--save_dir', type=str, default='results_output', help='name of save directory')
    parser.add_argument('--save_specs', action='store_true', help='save the architecture specs')

    args = parser.parse_args()
    main(args)
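
# Example (hedged sketch): each trial i writes
#   <save_dir>/<algo_params>/<output_filename>_<i>.pkl
# containing [algorithm_params, metann_params, results, walltimes, run_data].
# With the defaults above, trial 0 can be read back with:
#
#   import pickle
#   with open('results_output/main_experiments/round_0.pkl', 'rb') as f:
#       algo_ps, metann_ps, results, walltimes, run_data = pickle.load(f)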


================================================
FILE: train_arch_runner.py
================================================
import argparse
import time
import logging
import sys
import os
import pickle

sys.path.append(os.path.expanduser('~/darts/cnn'))
from train_class import Train

"""
train arch runner is used in run_experiments_parallel

 - loads data by opening a pickle file containing an architecture spec
 - trains that architecture for e epochs
 - outputs a new pickle file with the architecture spec and its validation loss
"""

def run(args):

    untrained_filepath = os.path.expanduser(args.untrained_filepath)
    trained_filepath = os.path.expanduser(args.trained_filepath)
    epochs = args.epochs
    gpu = args.gpu
    train_portion = args.train_portion
    seed = args.seed
    save = args.save

    # load the arch spec that will be trained
    dic = pickle.load(open(untrained_filepath, 'rb'))
    arch = dic['spec']
    print('loaded arch', arch)

    # train the arch
    trainer = Train()
    val_accs, test_accs = trainer.main(arch, 
                                        epochs=epochs, 
                                        gpu=gpu, 
                                        train_portion=train_portion, 
                                        seed=seed, 
                                        save=save)

    val_sum = 0
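    # val_accs and test_accs are lists of (epoch, accuracy) pairs returned by
    # Train.main; losses are stored below as 100 - accuracy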
    for epoch, val_acc in val_accs:
        key = 'val_loss_' + str(epoch)
        dic[key] = 100 - val_acc
        val_sum += dic[key]
    for epoch, test_acc in test_accs:
        key = 'test_loss_' + str(epoch)
        dic[key] = 100 - test_acc

    val_loss_avg = val_sum / len(val_accs)

    dic['val_loss_avg'] = val_loss_avg
    dic['val_loss'] = 100 - val_accs[-1][-1]
    dic['test_loss'] = 100 - test_accs[-1][-1]
    dic['filepath'] = args.trained_filepath

    print('arch {}'.format(arch))
    print('val loss: {}'.format(dic['val_loss']))
    print('test loss: {}'.format(dic['test_loss']))
    print('val loss avg: {}'.format(dic['val_loss_avg']))

    with open(trained_filepath, 'wb') as f:
        pickle.dump(dic, f)

def main(args):

    # set up save dir
    save_dir = './'

    # set up logging
    log_format = '%(asctime)s %(message)s'
    logging.basicConfig(stream=sys.stdout, level=logging.INFO,
        format=log_format, datefmt='%m/%d %I:%M:%S %p')
    fh = logging.FileHandler(os.path.join(save_dir, 'log.txt'))
    fh.setFormatter(logging.Formatter(log_format))
    logging.getLogger().addHandler(fh)
    logging.info(args)

    run(args)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Args for training a darts arch')
    parser.add_argument('--untrained_filepath', type=str, default='darts_test/untrained_spec_0.pkl', help='name of input files')
    parser.add_argument('--trained_filepath', type=str, default='darts_test/trained_spec_0.pkl', help='name of output files')
    parser.add_argument('--epochs', type=int, default=50, help='number of training epochs')
    parser.add_argument('--gpu', type=int, default=0, help='which gpu to use')
    parser.add_argument('--train_portion', type=float, default=0.7, help='portion of training data used for training')
    parser.add_argument('--seed', type=int, default=0, help='random seed to use')
    parser.add_argument('--save', type=str, default='EXP', help='directory to save to')

    args = parser.parse_args()
    main(args)


================================================
SYMBOL INDEX (270 symbols across 32 files)
================================================

FILE: acquisition_functions.py
  function acq_fn (line 5) | def acq_fn(predictions, explore_type='its'):

FILE: bo/acq/acqmap.py
  class AcqMapper (line 14) | class AcqMapper(object):
    method __init__ (line 17) | def __init__(self, data, amp, print_flag=True):
    method set_am_params (line 28) | def set_am_params(self, amp):
    method get_acqmap (line 34) | def get_acqmap(self, xin_is_list=True):
    method acqmap_list (line 42) | def acqmap_list(self, xin_list):
    method acqmap_single (line 112) | def acqmap_single(self, xin):
    method print_str (line 116) | def print_str(self):

FILE: bo/acq/acqopt.py
  class AcqOptimizer (line 8) | class AcqOptimizer(object):
    method __init__ (line 11) | def __init__(self, optp=None, print_flag=True):
    method set_opt_params (line 20) | def set_opt_params(self, optp):
    method optimize (line 28) | def optimize(self, dom, am):
    method optimize_rand (line 33) | def optimize_rand(self, dom, am):
    method print_str (line 40) | def print_str(self):

FILE: bo/acq/acquisition.py
  class Acquisitioner (line 9) | class Acquisitioner(object):
    method __init__ (line 12) | def __init__(self, data, acqp=None, print_flag=True):
    method set_acq_params (line 23) | def set_acq_params(self, acqp):
    method set_acq_method (line 31) | def set_acq_method(self):
    method ei (line 41) | def ei(self, pmout):
    method pi (line 46) | def pi(self, pmout):
    method ucb (line 51) | def ucb(self, pmout):
    method ts (line 56) | def ts(self, pmout):
    method rand (line 61) | def rand(self, pmout):
    method null (line 65) | def null(self, pmout):
    method bbacq_ei (line 70) | def bbacq_ei(self, pmout_samp, normal=False):
    method bbacq_pi (line 88) | def bbacq_pi(self, pmout_samp, normal=False):
    method bbacq_ucb (line 102) | def bbacq_ucb(self, pmout_samp, beta=0.5, normal=True):
    method bbacq_ts (line 115) | def bbacq_ts(self, pmout_samp):
    method print_str (line 122) | def print_str(self):

FILE: bo/bo/probo.py
  class ProBO (line 14) | class ProBO(object):
    method __init__ (line 17) | def __init__(self, fn, search_space, aux_file_path, data=None, probop=...
    method set_probo_params (line 33) | def set_probo_params(self, probop):
    method set_fh (line 37) | def set_fh(self, fn):
    method set_tmpdir (line 41) | def set_tmpdir(self):
    method run_bo (line 49) | def run_bo(self, verbose=False):
    method print_iter_info (line 85) | def print_iter_info(self, iteridx, itertime):
    method print_str (line 93) | def print_str(self):
    method post_iteration (line 98) | def post_iteration(self):

FILE: bo/dom/list.py
  class ListDomain (line 9) | class ListDomain(object):
    method __init__ (line 12) | def __init__(self, search_space, domp=None, printFlag=True):
    method set_domain_params (line 23) | def set_domain_params(self, domp):
    method init_domain_list (line 27) | def init_domain_list(self):
    method set_domain_list_auto (line 34) | def set_domain_list_auto(self):
    method set_domain_list (line 37) | def set_domain_list(self, domain_list):
    method is_in_domain (line 41) | def is_in_domain(self, pt):
    method unif_rand_sample (line 45) | def unif_rand_sample(self, n=1, replace=True):
    method print_str (line 54) | def print_str(self):

FILE: bo/dom/real.py
  class RealDomain (line 8) | class RealDomain(object):
    method __init__ (line 11) | def __init__(self, domp=None, printFlag=True):
    method set_domain_params (line 21) | def set_domain_params(self, domp):
    method is_in_domain (line 29) | def is_in_domain(self, pt):
    method unif_rand_sample (line 40) | def unif_rand_sample(self, n=1):
    method print_str (line 45) | def print_str(self):

FILE: bo/ds/makept.py
  function main (line 17) | def main(args, search_space, printinfo=False):
  function get_domain (line 43) | def get_domain(domp, search_space):
  function print_info (line 52) | def print_info(nextpt, itertime, nextptpkl):

FILE: bo/fn/functionhandler.py
  function get_fh (line 8) | def get_fh(fn, data=None, fhp=None, print_flag=True):
  class BasicFH (line 25) | class BasicFH(object):
    method __init__ (line 29) | def __init__(self, fn, data=None, fhp=None, print_flag=True):
    method call_fn_and_add_data (line 40) | def call_fn_and_add_data(self, xin):
    method add_data_single (line 46) | def add_data_single(self, xin, yout):
    method add_data (line 56) | def add_data(self, newdata):
    method print_str (line 66) | def print_str(self):
  class ExtraInfoFH (line 72) | class ExtraInfoFH(BasicFH):
    method __init__ (line 76) | def __init__(self, fn, data=None, fhp=None, print_flag=True):
    method call_fn_and_add_data (line 86) | def call_fn_and_add_data(self, xin):
    method print_str (line 92) | def print_str(self):
  class NanNNFH (line 98) | class NanNNFH(BasicFH):
    method __init__ (line 102) | def __init__(self, fn, data=None, fhp=None, print_flag=True):
    method call_fn_and_add_data (line 112) | def call_fn_and_add_data(self, xin):
    method add_data_single_nan (line 124) | def add_data_single_nan(self, xin):
    method add_data_nan (line 134) | def add_data_nan(self, newdata):
    method possibly_init_xnan (line 143) | def possibly_init_xnan(self):
    method print_str (line 148) | def print_str(self):
  class ReplaceNanNNFH (line 154) | class ReplaceNanNNFH(BasicFH):
    method __init__ (line 159) | def __init__(self, fn, data=None, fhp=None, print_flag=True):
    method call_fn_and_add_data (line 169) | def call_fn_and_add_data(self, xin):
    method print_str (line 180) | def print_str(self):
  class ObjectFH (line 186) | class ObjectFH(object):
    method __init__ (line 190) | def __init__(self, fn, data=None, fhp=None, print_flag=True):
    method call_fn_and_add_data (line 201) | def call_fn_and_add_data(self, xin):
    method add_data_single (line 206) | def add_data_single(self, xin, yout):
    method add_data (line 211) | def add_data(self, newdata):
    method print_str (line 221) | def print_str(self):

FILE: bo/pp/gp/gp_utils.py
  function kern_gibbscontext (line 11) | def kern_gibbscontext(xmatcon1, xmatcon2, xmatact1, xmatact2, theta, alpha,
  function kern_gibbs1d (line 26) | def kern_gibbs1d(xmat1, xmat2, theta, alpha):
  function ls_fn (line 36) | def ls_fn(xmat, theta, whichlsfn=1):
  function kern_matern32 (line 56) | def kern_matern32(xmat1, xmat2, ls, alpha):
  function kern_exp_quad (line 62) | def kern_exp_quad(xmat1, xmat2, ls, alpha):
  function kern_exp_quad_noscale (line 67) | def kern_exp_quad_noscale(xmat1, xmat2, ls):
  function squared_euc_distmat (line 73) | def squared_euc_distmat(xmat1, xmat2, coef=1.):
  function kern_distmat (line 78) | def kern_distmat(xmat1, xmat2, ls, alpha, distfn):
  function get_cholesky_decomp (line 85) | def get_cholesky_decomp(k11_nonoise, sigma, psd_str):
  function stable_cholesky (line 100) | def stable_cholesky(mmat, make_psd=True):
  function project_symmetric_to_psd_cone (line 126) | def project_symmetric_to_psd_cone(mmat, is_symmetric=True, epsilon=0):
  function solve_lower_triangular (line 141) | def solve_lower_triangular(amat, b):
  function solve_upper_triangular (line 145) | def solve_upper_triangular(amat, b):
  function solve_triangular_base (line 149) | def solve_triangular_base(amat, b, lower):
  function sample_mvn (line 156) | def sample_mvn(mu, covmat, nsamp):

FILE: bo/pp/pp_core.py
  class DiscPP (line 7) | class DiscPP(object):
    method __init__ (line 10) | def __init__(self):
    method infer_post_and_update_samples (line 20) | def infer_post_and_update_samples(self,nsamp):
    method sample_pp_post_pred (line 25) | def sample_pp_post_pred(self,nsamp,input_list):
    method sample_pp_pred (line 30) | def sample_pp_pred(self,nsamp,input_list,lv_list=None):
    method add_new_data (line 36) | def add_new_data(self,newData):
    method get_namespace_to_save (line 40) | def get_namespace_to_save(self):
    method save_namespace_to_file (line 44) | def save_namespace_to_file(self,fileStr,printFlag):

FILE: bo/pp/pp_gp_george.py
  class GeorgeGpPP (line 12) | class GeorgeGpPP(DiscPP):
    method __init__ (line 15) | def __init__(self,data=None,modelp=None,printFlag=True):
    method set_data (line 26) | def set_data(self,data):
    method set_model_params (line 31) | def set_model_params(self,modelp):
    method set_kernel (line 37) | def set_kernel(self):
    method set_model (line 46) | def set_model(self):
    method get_model (line 52) | def get_model(self):
    method fit_hyperparams (line 56) | def fit_hyperparams(self,printOut=False):
    method infer_post_and_update_samples (line 76) | def infer_post_and_update_samples(self):
    method sample_pp_post_pred (line 80) | def sample_pp_post_pred(self,nsamp,input_list):
    method sample_pp_pred (line 103) | def sample_pp_pred(self,nsamp,input_list,lv=None):
    method neg_log_like (line 118) | def neg_log_like(self,hparams):
    method log_post (line 125) | def log_post(self,hparams):
    method print_str (line 135) | def print_str(self):

FILE: bo/pp/pp_gp_my_distmat.py
  class MyGpDistmatPP (line 17) | class MyGpDistmatPP(DiscPP):
    method __init__ (line 21) | def __init__(self, data=None, modelp=None, printFlag=True):
    method set_model_params (line 30) | def set_model_params(self, modelp):
    method set_data (line 36) | def set_data(self, data):
    method set_model (line 43) | def set_model(self):
    method get_model (line 47) | def get_model(self):
    method infer_post_and_update_samples (line 51) | def infer_post_and_update_samples(self, print_result=False):
    method get_distmat (line 58) | def get_distmat(self, xmat1, xmat2):
    method print_inference_result (line 68) | def print_inference_result(self):
    method sample_pp_post_pred (line 75) | def sample_pp_post_pred(self, nsamp, input_list, full_cov=False):
    method sample_pp_pred (line 92) | def sample_pp_pred(self, nsamp, input_list, lv=None):
    method gp_post (line 102) | def gp_post(self, x_train_list, y_train_arr, x_pred_list, ls, alpha, s...
    method print_str (line 120) | def print_str(self):

FILE: bo/pp/pp_gp_stan.py
  class StanGpPP (line 18) | class StanGpPP(DiscPP):
    method __init__ (line 21) | def __init__(self, data=None, modelp=None, printFlag=True):
    method set_model_params (line 31) | def set_model_params(self,modelp):
    method set_data (line 48) | def set_data(self, data):
    method get_transformed_data (line 56) | def get_transformed_data(self, data, transf_str='linear'):
    method set_model (line 72) | def set_model(self):
    method get_model (line 76) | def get_model(self):
    method infer_post_and_update_samples (line 86) | def infer_post_and_update_samples(self, seed=5000012, print_result=Fal...
    method get_stan_data_dict (line 103) | def get_stan_data_dict(self):
    method get_sample_list_from_stan_out (line 124) | def get_sample_list_from_stan_out(self, stanout):
    method print_inference_result (line 138) | def print_inference_result(self):
    method sample_pp_post_pred (line 158) | def sample_pp_post_pred(self, nsamp, input_list, full_cov=False, nloop...
    method sample_pp_pred (line 186) | def sample_pp_pred(self, nsamp, input_list, lv=None):
    method get_reverse_transform (line 203) | def get_reverse_transform(self, pp1, pp2, input_list):
    method gp_post (line 210) | def gp_post(self, x_train, y_train, x_pred, ls, alpha, sigma, full_cov...
    method print_str (line 230) | def print_str(self):

FILE: bo/pp/pp_gp_stan_distmat.py
  class StanGpDistmatPP (line 18) | class StanGpDistmatPP(DiscPP):
    method __init__ (line 21) | def __init__(self, data=None, modelp=None, printFlag=True):
    method set_model_params (line 31) | def set_model_params(self, modelp):
    method set_data (line 37) | def set_data(self, data):
    method set_model (line 44) | def set_model(self):
    method get_model (line 48) | def get_model(self):
    method infer_post_and_update_samples (line 58) | def infer_post_and_update_samples(self, seed=543210, print_result=False):
    method get_stan_data_dict (line 76) | def get_stan_data_dict(self):
    method get_distmat (line 92) | def get_distmat(self, xmat1, xmat2):
    method get_sample_list_from_stan_out (line 97) | def get_sample_list_from_stan_out(self, stanout):
    method print_inference_result (line 117) | def print_inference_result(self):
    method sample_pp_post_pred (line 137) | def sample_pp_post_pred(self, nsamp, input_list, full_cov=False, nloop...
    method sample_pp_pred (line 164) | def sample_pp_pred(self, nsamp, input_list, lv=None):
    method gp_post (line 180) | def gp_post(self, x_train, y_train, x_pred, ls, alpha, sigma, full_cov...
    method print_str (line 196) | def print_str(self):

FILE: bo/pp/stan/gp_distmat.py
  function get_model (line 10) | def get_model(recompile=False, print_status=True):
  function get_model_code (line 29) | def get_model_code():

FILE: bo/pp/stan/gp_distmat_fixedsig.py
  function get_model (line 11) | def get_model(recompile=False, print_status=True):
  function get_model_code (line 30) | def get_model_code():

FILE: bo/pp/stan/gp_hier2.py
  function get_model (line 10) | def get_model(recompile=False, print_status=True):
  function get_model_code (line 29) | def get_model_code():

FILE: bo/pp/stan/gp_hier2_matern.py
  function get_model (line 10) | def get_model(recompile=False, print_status=True):
  function get_model_code (line 29) | def get_model_code():

FILE: bo/pp/stan/gp_hier3.py
  function get_model (line 11) | def get_model(recompile=False, print_status=True):
  function get_model_code (line 30) | def get_model_code():

FILE: bo/util/datatransform.py
  class DataTransformer (line 10) | class DataTransformer(object):
    method __init__ (line 13) | def __init__(self, datamat, printflag=True):
    method set_transformers (line 23) | def set_transformers(self):
    method transform_data (line 28) | def transform_data(self, datamat=None):
    method inv_transform_data (line 34) | def inv_transform_data(self, datamat):
    method print_str (line 38) | def print_str(self):

FILE: bo/util/print_utils.py
  class suppress_stdout_stderr (line 7) | class suppress_stdout_stderr(object):
    method __init__ (line 14) | def __init__(self):
    method __enter__ (line 20) | def __enter__(self):
    method __exit__ (line 25) | def __exit__(self, *_):

FILE: darts/arch.py
  class Arch (line 24) | class Arch:
    method __init__ (line 26) | def __init__(self, arch):
    method serialize (line 29) | def serialize(self):
    method query (line 32) | def query(self, epochs=50):
    method random_arch (line 40) | def random_arch(cls):
    method get_arch_list (line 60) | def get_arch_list(self):
    method mutate (line 71) | def mutate(self, edits):
    method get_paths (line 93) | def get_paths(self):
    method get_path_indices (line 124) | def get_path_indices(self, long_paths=True):
    method encode_paths (line 165) | def encode_paths(self, long_paths=True):
    method path_distance (line 180) | def path_distance(self, other):

FILE: data.py
  class Data (line 22) | class Data:
    method __init__ (line 24) | def __init__(self,
    method get_type (line 42) | def get_type(self):
    method query_arch (line 45) | def query_arch(self,
    method mutate_arch (line 103) | def mutate_arch(self,
    method get_hash (line 112) | def get_hash(self, arch):
    method generate_random_dataset (line 121) | def generate_random_dataset(self,
    method get_candidates (line 154) | def get_candidates(self,
    method remove_duplicates (line 217) | def remove_duplicates(self, candidates, data):
    method encode_data (line 231) | def encode_data(self, dicts):
    method get_arch_list (line 246) | def get_arch_list(self,
    method generate_distance_matrix (line 294) | def generate_distance_matrix(cls, arches_1, arches_2, distance):

FILE: meta_neural_net.py
  function mle_loss (line 15) | def mle_loss(y_true, y_pred):
  function mape_loss (line 22) | def mape_loss(y_true, y_pred):
  class MetaNeuralnet (line 30) | class MetaNeuralnet:
    method get_dense_model (line 32) | def get_dense_model(self,
    method fit (line 60) | def fit(self, xtrain, ytrain,
    method predict (line 96) | def predict(self, xtest):

FILE: metann_runner.py
  function run_meta_neuralnet (line 22) | def run_meta_neuralnet(search_space, dicts,
  function run (line 77) | def run(args):
  function main (line 133) | def main(args):

FILE: nas_algorithms.py
  function run_nas_algorithm (line 13) | def run_nas_algorithm(algo_params, search_space, mp):
  function compute_best_test_losses (line 46) | def compute_best_test_losses(data, k, total_queries, loss):
  function random_search (line 61) | def random_search(search_space,
  function evolution_search (line 79) | def evolution_search(search_space,
  function bananas (line 131) | def bananas(search_space,
  function gp_bayesopt_search (line 220) | def gp_bayesopt_search(search_space,
  function dngo_search (line 286) | def dngo_search(search_space,

FILE: nas_bench/cell.py
  class Cell (line 24) | class Cell:
    method __init__ (line 26) | def __init__(self, matrix, ops):
    method serialize (line 31) | def serialize(self):
    method modelspec (line 37) | def modelspec(self):
    method random_cell (line 41) | def random_cell(cls, nasbench):
    method get_val_loss (line 62) | def get_val_loss(self, nasbench, deterministic=1, patience=50, epochs=...
    method get_test_loss (line 84) | def get_test_loss(self, nasbench, patience=50, epochs=None, dataset=No...
    method get_num_params (line 100) | def get_num_params(self, nasbench):
    method perturb (line 103) | def perturb(self, nasbench, edits=1):
    method mutate (line 129) | def mutate(self,
    method encode_standard (line 164) | def encode_standard(self):
    method get_paths (line 181) | def get_paths(self):
    method get_path_indices (line 197) | def get_path_indices(self):
    method encode_paths (line 220) | def encode_paths(self):
    method path_distance (line 229) | def path_distance(self, other):
    method trunc_path_distance (line 236) | def trunc_path_distance(self, other, cutoff=40):
    method edit_distance (line 245) | def edit_distance(self, other):
    method nasbot_distance (line 254) | def nasbot_distance(self, other):

FILE: nas_bench_201/cell.py
  class Cell (line 15) | class Cell:
    method __init__ (line 17) | def __init__(self, string):
    method get_string (line 20) | def get_string(self):
    method serialize (line 23) | def serialize(self):
    method random_cell (line 29) | def random_cell(cls, nasbench, max_nodes=4):
    method get_runtime (line 40) | def get_runtime(self, nasbench, dataset='cifar100'):
    method get_val_loss (line 43) | def get_val_loss(self, nasbench, deterministic=1, dataset='cifar100'):
    method get_test_loss (line 59) | def get_test_loss(self, nasbench, dataset='cifar100', deterministic=1):
    method get_op_list (line 72) | def get_op_list(self):
    method get_num (line 78) | def get_num(self):
    method get_string_from_ops (line 87) | def get_string_from_ops(cls, ops):
    method perturb (line 97) | def perturb(self, nasbench,
    method mutate (line 111) | def mutate(self,
    method encode_standard (line 131) | def encode_standard(self):
    method get_num_params (line 142) | def get_num_params(self, nasbench):
    method get_paths (line 146) | def get_paths(self):
    method get_path_indices (line 158) | def get_path_indices(self):
    method encode_paths (line 178) | def encode_paths(self):
    method path_distance (line 187) | def path_distance(self, other):
    method trunc_path_distance (line 194) | def trunc_path_distance(self, other, cutoff=30):
    method edit_distance (line 203) | def edit_distance(self, other):
    method nasbot_distance (line 209) | def nasbot_distance(self, other):

FILE: params.py
  function algo_params (line 4) | def algo_params(param_str):
  function meta_neuralnet_params (line 53) | def meta_neuralnet_params(param_str):

FILE: run_experiments_sequential.py
  function run_experiments (line 13) | def run_experiments(args, save_dir):
  function main (line 71) | def main(args):

FILE: train_arch_runner.py
  function run (line 19) | def run(args):
  function main (line 67) | def main(args):

================================================
ABOUT THIS EXTRACTION
================================================

This page contains the full source code of the naszilla/bananas GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 51 files (167.7 KB, approximately 44.4k tokens) and a symbol index with 270 extracted functions, classes, methods, constants, and types. Use it with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
