Repository: google/qkeras
Branch: master
Commit: 5e0cd30c20b1
Files: 140
Total size: 1.4 MB

Directory structure:
qkeras/

├── .github/
│   └── workflows/
│       └── ci.yml
├── CHANGELOG
├── CONTRIBUTING.md
├── LICENSE
├── MANIFEST.in
├── README.md
├── examples/
│   ├── example_act.py
│   ├── example_b2t.py
│   ├── example_cifar10_po2.py
│   ├── example_keras_to_qkeras.py
│   ├── example_mnist.py
│   ├── example_mnist_ae.py
│   ├── example_mnist_b2t.py
│   ├── example_mnist_bn.py
│   ├── example_mnist_po2.py
│   ├── example_mnist_prune.py
│   ├── example_qdense.py
│   ├── example_qoctave.py
│   └── example_ternary.py
├── experimental/
│   └── lo/
│       ├── __init__.py
│       ├── compress.py
│       ├── conv2d.py
│       ├── dense.py
│       ├── generate_rf_code.py
│       ├── optimizer.py
│       ├── random_forest/
│       │   ├── __init__.py
│       │   ├── gen_random_tree.py
│       │   ├── parser.py
│       │   ├── random_forest.py
│       │   ├── random_tree.py
│       │   └── utils.py
│       ├── receptive.py
│       ├── table/
│       │   ├── __init__.py
│       │   ├── parser.py
│       │   └── utils.py
│       └── utils.py
├── notebook/
│   ├── AutoQKeras.ipynb
│   ├── CodebookQuantization.ipynb
│   ├── QKerasTutorial.ipynb
│   └── QRNNTutorial.ipynb
├── qkeras/
│   ├── __init__.py
│   ├── autoqkeras/
│   │   ├── __init__.py
│   │   ├── autoqkeras_internal.py
│   │   ├── examples/
│   │   │   └── run/
│   │   │       ├── get_data.py
│   │   │       ├── get_model.py
│   │   │       ├── networks/
│   │   │       │   ├── __init__.py
│   │   │       │   └── conv_block.py
│   │   │       └── plot_history.py
│   │   ├── forgiving_metrics/
│   │   │   ├── __init__.py
│   │   │   ├── forgiving_bits.py
│   │   │   ├── forgiving_energy.py
│   │   │   └── forgiving_factor.py
│   │   ├── quantization_config.py
│   │   ├── tests/
│   │   │   └── test_forgiving_factor.py
│   │   └── utils.py
│   ├── b2t.py
│   ├── base_quantizer.py
│   ├── bn_folding_utils.py
│   ├── callbacks.py
│   ├── codebook.py
│   ├── estimate.py
│   ├── experimental/
│   │   └── quantizers/
│   │       ├── __init__.py
│   │       └── quantizers_po2.py
│   ├── qconv2d_batchnorm.py
│   ├── qconvolutional.py
│   ├── qdepthwise_conv2d_transpose.py
│   ├── qdepthwiseconv2d_batchnorm.py
│   ├── qlayers.py
│   ├── qmac.py
│   ├── qmodel.proto
│   ├── qnormalization.py
│   ├── qoctave.py
│   ├── qpooling.py
│   ├── qrecurrent.py
│   ├── qseparable_conv2d_transpose.py
│   ├── qtools/
│   │   ├── DnC/
│   │   │   ├── divide_and_conquer.py
│   │   │   └── dnc_layer_cost_ace.py
│   │   ├── __init__.py
│   │   ├── config_public.py
│   │   ├── examples/
│   │   │   ├── example_generate_json.py
│   │   │   └── example_get_energy.py
│   │   ├── generate_layer_data_type_map.py
│   │   ├── interface.py
│   │   ├── qenergy/
│   │   │   ├── __init__.py
│   │   │   └── qenergy.py
│   │   ├── qgraph.py
│   │   ├── qtools_util.py
│   │   ├── quantized_operators/
│   │   │   ├── __init__.py
│   │   │   ├── accumulator_factory.py
│   │   │   ├── accumulator_impl.py
│   │   │   ├── adder_factory.py
│   │   │   ├── adder_impl.py
│   │   │   ├── divider_factory.py
│   │   │   ├── divider_impl.py
│   │   │   ├── fused_bn_factory.py
│   │   │   ├── merge_factory.py
│   │   │   ├── multiplier_factory.py
│   │   │   ├── multiplier_impl.py
│   │   │   ├── qbn_factory.py
│   │   │   ├── quantizer_factory.py
│   │   │   ├── quantizer_impl.py
│   │   │   └── subtractor_factory.py
│   │   ├── run_qtools.py
│   │   └── settings.py
│   ├── quantizer_imports.py
│   ├── quantizer_registry.py
│   ├── quantizers.py
│   ├── registry.py
│   ├── safe_eval.py
│   └── utils.py
├── requirements.txt
├── setup.cfg
├── setup.py
└── tests/
    ├── automatic_conversion_test.py
    ├── autoqkeras_test.py
    ├── bn_folding_test.py
    ├── callbacks_test.py
    ├── codebook_test.py
    ├── leakyrelu_test.py
    ├── min_max_test.py
    ├── print_qstats_test.py
    ├── qactivation_test.py
    ├── qadaptiveactivation_test.py
    ├── qalpha_test.py
    ├── qconvolutional_test.py
    ├── qdepthwise_conv2d_transpose_test.py
    ├── qlayers_test.py
    ├── qmac_test.py
    ├── qnoise_test.py
    ├── qpooling_test.py
    ├── qrecurrent_test.py
    ├── qseparable_conv2d_transpose_test.py
    ├── qtools_model_test.py
    ├── qtools_util_test.py
    ├── quantizer_impl_test.py
    ├── quantizer_registry_test.py
    ├── range_test.py
    ├── registry_test.py
    ├── safe_eval_test.py
    └── utils_test.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/workflows/ci.yml
================================================
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: CI tests

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up Python 3.7
      uses: actions/setup-python@v2
      with:
        python-version: 3.7
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
        pip install .
        python setup.py install 
    - name: Test with pytest
      run: |
        pytest


================================================
FILE: CHANGELOG
================================================
v0.5, 2019/07 -- Initial release.
v0.6, 2020/03 -- Support tensorflow 2.0, tf.keras and python3.
v0.7, 2020/03 -- Enhancement of binary and ternary quantization.


================================================
FILE: CONTRIBUTING.md
================================================
# How to Contribute

We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.

## Contributor License Agreement

Contributions to this project must be accompanied by a Contributor License
Agreement. You (or your employer) retain the copyright to your contribution;
this simply gives us permission to use and redistribute your contributions as
part of the project. Head over to <https://cla.developers.google.com/> to see
your current agreements on file or to sign a new one.

You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.

## Code reviews

All submissions, including submissions by project members, require review. We
use GitHub pull requests for this purpose. Consult
[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
information on using pull requests.

## Community Guidelines

This project follows
[Google's Open Source Community Guidelines](https://opensource.google.com/conduct/).


================================================
FILE: LICENSE
================================================
Copyright 2019 The QKeras Authors.  All rights reserved.

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: MANIFEST.in
================================================
include *.txt
recursive-include docs *.txt


================================================
FILE: README.md
================================================
# QKeras

[github.com/google/qkeras](https://github.com/google/qkeras)

## Introduction

QKeras is a quantization extension to Keras that provides drop-in
replacements for some of the Keras layers, especially the ones that
create parameters and activation layers and perform arithmetic
operations, so that we can quickly create a deep quantized version of
a Keras network.

According to Tensorflow documentation, Keras is a high-level API to
build and train deep learning models. It's used for fast prototyping,
advanced research, and production, with three key advantages:

- User friendly

Keras has a simple, consistent interface optimized for common use
cases. It provides clear and actionable feedback for user errors.

- Modular and composable

Keras models are made by connecting configurable building blocks
together, with few restrictions.

- Easy to extend

Write custom building blocks to express new ideas for research. Create
new layers, loss functions, and develop state-of-the-art models.

QKeras is being designed to extend the functionality of Keras using
Keras' design principle, i.e. being user friendly, modular and
extensible, adding to it being "minimally intrusive" of Keras native
functionality.

In order to successfully quantize a model, users need to replace the
variable-creating layers (Dense, Conv2D, etc.) with their counterparts
(QDense, QConv2D, etc.), and any layers that perform math operations
need to be quantized afterwards.

## Publications

- Claudionor N. Coelho Jr, Aki Kuusela, Shan Li, Hao Zhuang, Jennifer Ngadiuba, Thea Klaeboe Aarrestad, Vladimir Loncar, Maurizio Pierini, Adrian Alan Pol, Sioni Summers, "Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors", Nature Machine Intelligence (2021), https://www.nature.com/articles/s42256-021-00356-5

- Claudionor N. Coelho Jr., Aki Kuusela, Hao Zhuang, Thea Aarrestad, Vladimir Loncar, Jennifer Ngadiuba, Maurizio Pierini, Sioni Summers, "Ultra Low-latency, Low-area Inference Accelerators using Heterogeneous Deep Quantization with QKeras and hls4ml", http://arxiv.org/abs/2006.10159v1

- Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Claudionor Coelho, Satrajit Chatterjee, Peter Y. K. Cheung, George A. Constantinides, "Enabling Binary Neural Network Training on the Edge", https://arxiv.org/abs/2102.04270

## Layers Implemented in QKeras

- QDense

- QConv1D

- QConv2D

- QDepthwiseConv2D

- QSeparableConv1D (depthwise + pointwise convolution, without
quantizing the activation values after the depthwise step)

- QSeparableConv2D (depthwise + pointwise convolution, without
quantizing the activation values after the depthwise step)

- QMobileNetSeparableConv2D (extended from MobileNet SeparableConv2D
implementation, quantizes the activation values after the depthwise step)

- QConv2DTranspose

- QActivation

- QAdaptiveActivation

- QAveragePooling2D (in fact, an AveragePooling2D stacked with a 
QActivation layer for quantization of the result)

- QBatchNormalization (still in its experimental stage, as we have
not yet seen the need for it due to the normalization and
regularization effects of stochastic activation functions)

- QOctaveConv2D

- QSimpleRNN, QSimpleRNNCell

- QLSTM, QLSTMCell

- QGRU, QGRUCell

- QBidirectional

It is worth noting that not all functionality is currently safe to use
with other high-level operations, such as layer wrappers (for example,
the Bidirectional wrapper used with RNNs). If this is required, we
encourage users to invoke quantization functions as strings instead of
passing the actual function objects as a way around this, but we may
change that implementation in the future.

A first attempt to create a safe mechanism in QKeras is the adoption
of QActivation as a wrapper that encapsulates the activation functions,
so that we can save and restore the network architecture and duplicate
it using the Keras interface, but this interface has not been fully
tested yet.

## Activation Layers Implemented in QKeras

- smooth_sigmoid(x)

- hard_sigmoid(x)

- binary_sigmoid(x)

- binary_tanh(x)

- smooth_tanh(x)

- hard_tanh(x)

- quantized_bits(bits=8, integer=0, symmetric=0, keep_negative=1)(x)

- bernoulli(alpha=1.0)(x)

- stochastic_ternary(alpha=1.0, threshold=0.33)(x)

- ternary(alpha=1.0, threshold=0.33)(x)

- stochastic_binary(alpha=1.0)(x)

- binary(alpha=1.0)(x)

- quantized_relu(bits=8, integer=0, use_sigmoid=0, negative_slope=0.0)(x)

- quantized_ulaw(bits=8, integer=0, symmetric=0, u=255.0)(x)

- quantized_tanh(bits=8, integer=0, symmetric=0)(x)

- quantized_po2(bits=8, max_value=-1)(x)

- quantized_relu_po2(bits=8, max_value=-1)(x)

The stochastic_* functions and bernoulli, as well as quantized_relu
and quantized_tanh, rely on stochastic versions of the activation
functions. They draw a random number with uniform distribution from
the _hard_sigmoid of the input x, and the result is based on the
expected value of the activation function. Please refer to the papers
if you want to understand the underlying theory, or to the
documentation in qkeras/qlayers.py.
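The expected-value idea behind these quantizers can be illustrated in plain Python (this is a sketch of the principle, not QKeras internals): stochastically rounding a value x to {0, 1}, rounding up with probability x, yields an output whose mean converges to x itself.

```python
import random

random.seed(0)
x = 0.3
n = 100_000

# Stochastic rounding of x to {0, 1}: round up with probability x,
# so the expected value of the quantized output equals x.
mean = sum(1 for _ in range(n) if random.random() < x) / n
print(mean)  # close to 0.3
```

This is why, averaged over many draws, a stochastic quantizer introduces no systematic bias even though each individual output is heavily quantized.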

The parameter "bits" specifies the number of bits for the
quantization, and "integer" specifies how many of those bits are to
the left of the decimal point. Finally, in our experience training
networks with QSeparableConv2D, both quantized_bits and
quantized_tanh, which generate values in [-1, 1), required symmetric
versions of the range in order to properly converge and eliminate bias.

Every time we use a quantization for weights and biases that can
generate numbers outside the range [-1.0, 1.0], we need to adjust the
*_range to match. For example, if we have
quantized_bits(bits=6, integer=2) in a weight of a layer, we need to
set the weight range to 2**2, which is equivalent to Catapult HLS
ac_fixed<6, 3, true>. Similarly, for quantization functions that accept
an alpha parameter, we need to specify a range for alpha,
and for po2-type quantizers, we need to specify the range of
max_value.
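For instance, the grid of values spanned by quantized_bits(bits=6, integer=2) with keep_negative=1 can be sketched in plain Python (an illustration of the signed fixed-point format, not QKeras code):

```python
def fixed_point_grid(bits, integer):
    # Signed fixed-point grid with `integer` bits left of the binary
    # point (plus sign bit): step size 2**(integer - bits + 1), covering
    # [-2**integer, 2**integer - step] -- hence a weight range of 2**integer.
    step = 2.0 ** (integer - bits + 1)
    n = 2 ** (bits - 1)
    return [i * step for i in range(-n, n)]

grid = fixed_point_grid(6, 2)
print(min(grid), max(grid), grid[1] - grid[0])  # -4.0 3.875 0.125
```

The maximum magnitude of 4 = 2**2 is why the weight range must be set to 2**2 in this case, matching ac_fixed<6, 3, true>.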


### Example

An example of a very simple network is given below in Keras.


```python
from keras.layers import *

x = x_in = Input(shape)
x = Conv2D(18, (3, 3), name="first_conv2d")(x)
x = Activation("relu")(x)
x = SeparableConv2D(32, (3, 3))(x)
x = Activation("relu")(x)
x = Flatten()(x)
x = Dense(NB_CLASSES)(x)
x = Activation("softmax")(x)
```

You can easily quantize this network as follows:

```python
from keras.layers import *
from qkeras import *

x = x_in = Input(shape)
x = QConv2D(18, (3, 3),
        kernel_quantizer="stochastic_ternary",
        bias_quantizer="ternary", name="first_conv2d")(x)
x = QActivation("quantized_relu(3)")(x)
x = QSeparableConv2D(32, (3, 3),
        depthwise_quantizer=quantized_bits(4, 0, 1),
        pointwise_quantizer=quantized_bits(3, 0, 1),
        bias_quantizer=quantized_bits(3),
        depthwise_activation=quantized_tanh(6, 2, 1))(x)
x = QActivation("quantized_relu(3)")(x)
x = Flatten()(x)
x = QDense(NB_CLASSES,
        kernel_quantizer=quantized_bits(3),
        bias_quantizer=quantized_bits(3))(x)
x = QActivation("quantized_bits(20, 5)")(x)
x = Activation("softmax")(x)
```

The last QActivation is advisable if you want to compare results later on. 
Please find more cases under the directory examples.


## QTools
The purpose of QTools is to assist the hardware implementation of a
quantized model and to estimate its energy consumption. QTools has two
functions: data type map generation and energy consumption estimation.

- Data Type Map Generation:
QTools automatically generates the data type map for the weights, biases,
multipliers, adders, etc. of each layer. The data type map includes operation
type, variable size, quantizer type and bits, etc. The inputs to QTools are:
1) a given quantized model;
2) a list of input quantizers
for the model. The output of QTools is a json file that lists the data type
map of each layer (stored in qtools_instance._output_dict).
Output methods include qtools_stats_to_json, which writes the data type
map to a json file, and qtools_stats_print, which prints it out.

- Energy Consumption Estimation:
Another function of QTools is to estimate the model's energy consumption in
picojoules (pJ). It provides a tool for QKeras users to quickly estimate the
energy consumed by memory accesses and MAC operations in a quantized model
derived from QKeras, especially when comparing the power consumption of two
models running on the same device.

As with any high-level model, it should be used with caution when attempting
to estimate the absolute energy consumption of a model for a given technology,
or when attempting to compare different technologies.

This tool also provides a measure for model tuning, which needs to consider
both accuracy and model energy consumption. The energy cost provided by this
tool can be integrated into a total loss function that combines energy
cost and accuracy.
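As a sketch of that idea (illustrative only; the function, its parameter names, and the pJ-to-loss weight below are assumptions, not a QKeras API):

```python
def total_loss(task_loss, energy_pj, energy_weight=1e-6):
    # Fold the estimated energy (assumed here to be in pJ) into a single
    # scalar objective; energy_weight trades accuracy off against energy.
    return task_loss + energy_weight * energy_pj

print(total_loss(0.42, 1.5e6))  # 1.92
```

In practice the weight would be tuned so that neither term dominates the search.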

- Energy Model:
The energy numbers most often referenced in the literature were first
presented by Horowitz, M.: "1.1 Computing's energy problem (
and what we can do about it)"; IEEE International Solid-State Circuits
Conference Digest of Technical Papers (ISSCC), 2014.

In this work, the author estimated the energy consumption of
accelerators, and for a 45 nm process, the data points he presented
have since been used whenever someone wants to compare accelerator
performance. QTools energy consumption on a 45 nm process is based on
the data published in this work.

- Examples:
An example of how to generate a data type map can be found in
qkeras/qtools/examples/example_generate_json.py. An example of how to generate
an energy consumption estimate can be found in qkeras/qtools/examples/example_get_energy.py.


## AutoQKeras

AutoQKeras allows the automatic quantization and rebalancing of deep neural
networks by treating the quantization and rebalancing of an existing deep
neural network as a hyperparameter search in Keras Tuner, using random
search, Hyperband, or Gaussian processes.

In order to contain the explosion of hyperparameters, users can group tasks
by patterns, and perform distributed training using available resources.

Extensive documentation is present in notebook/AutoQKeras.ipynb.


## Related Work

QKeras has been implemented based on the work of B. Moons et al.,
"Minimum Energy Quantized Neural Networks", Asilomar Conference on
Signals, Systems and Computers, 2017, and Zhou, S. et al.,
"DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with
Low Bitwidth Gradients", but the framework should be easily
extensible. The original code from QNN can be found below.

https://github.com/BertMoons/QuantizedNeuralNetworks-Keras-Tensorflow

QKeras extends QNN by providing a richer set of layers (including
SeparableConv2D, DepthwiseConv2D, and ternary and stochastic ternary
quantizations), along with functions to aid in estimating accumulator
sizes and in converting non-quantized networks to quantized ones.
Finally, our main goal is ease of use, so we attempt to make QKeras
layers a true drop-in replacement for Keras, so that users can easily
exchange non-quantized layers for quantized ones.

### Acknowledgements

Portions of QKeras were derived from QNN.

https://github.com/BertMoons/QuantizedNeuralNetworks-Keras-Tensorflow

Copyright (c) 2017, Bert Moons where it applies



================================================
FILE: examples/example_act.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Example of the usage of activation functions in qkeras."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import warnings
import numpy as np

import tensorflow as tf
import tensorflow.keras.backend as K

from qkeras import binary
from qkeras import bernoulli
from qkeras import hard_sigmoid
from qkeras import hard_tanh
from qkeras import quantized_bits
from qkeras import quantized_relu
from qkeras import quantized_tanh
from qkeras import quantized_po2
from qkeras import quantized_relu_po2
from qkeras import set_internal_sigmoid
from qkeras import smooth_sigmoid
from qkeras import smooth_tanh
from qkeras import stochastic_binary
from qkeras import stochastic_ternary
from qkeras import ternary


def main():
  # check the mean value of samples from stochastic_rounding for po2
  np.random.seed(42)
  count = 100000
  val = 42
  a = K.constant([val] * count)
  b = quantized_po2(use_stochastic_rounding=True)(a)
  res = np.sum(K.eval(b)) / count
  print(res, "should be close to ", val)
  b = quantized_relu_po2(use_stochastic_rounding=True)(a)
  res = np.sum(K.eval(b)) / count
  print(res, "should be close to ", val)
  a = K.constant([-1] * count)
  b = quantized_relu_po2(use_stochastic_rounding=True)(a)
  res = np.sum(K.eval(b)) / count
  print(res, "should be all ", 0)

  # non-stochastic rounding quantizer.
  # a = K.constant([-3.0, -2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 3.0])
  a = K.constant([0.194336])
  print(" a =", K.eval(a).astype(np.float16))
  print("qa =", K.eval(quantized_relu(6,2)(a)).astype(np.float16))
  print("ss =", K.eval(smooth_sigmoid(a)).astype(np.float16))
  print("hs =", K.eval(hard_sigmoid(a)).astype(np.float16))
  print("ht =", K.eval(hard_tanh(a)).astype(np.float16))
  print("st =", K.eval(smooth_tanh(a)).astype(np.float16))
  c = K.constant(np.arange(-1.5, 1.51, 0.3))
  print(" c =", K.eval(c).astype(np.float16))
  print("qb_111 =", K.eval(quantized_bits(1,1,1)(c)).astype(np.float16))
  print("qb_210 =", K.eval(quantized_bits(2,1,0)(c)).astype(np.float16))
  print("qb_211 =", K.eval(quantized_bits(2,1,1)(c)).astype(np.float16))
  print("qb_300 =", K.eval(quantized_bits(3,0,0)(c)).astype(np.float16))
  print("qb_301 =", K.eval(quantized_bits(3,0,1)(c)).astype(np.float16))
  c_1000 = K.constant(np.array([list(K.eval(c))] * 1000))
  b = np.sum(K.eval(bernoulli()(c_1000)).astype(np.int32), axis=0) / 1000.0
  print("       hs =", K.eval(hard_sigmoid(c)).astype(np.float16))
  print("    b_all =", b.astype(np.float16))
  t = K.eval(stochastic_ternary(alpha="auto")(c_1000))
  for i in range(10):
    print("stochastic_ternary({}) =".format(i), t[i])
  print("   st_all =", np.round(
      np.sum(t.astype(np.float32), axis=0).astype(np.float16) /
      1000.0, 2).astype(np.float16))
  print("  ternary =", K.eval(ternary(threshold=0.5)(c)).astype(np.int32))
  c = K.constant(np.arange(-1.5, 1.51, 0.3))
  print(" c =", K.eval(c).astype(np.float16))
  print(" b_10 =", K.eval(binary(1)(c)).astype(np.float16))
  print("qr_10 =", K.eval(quantized_relu(1,0)(c)).astype(np.float16))
  print("qr_11 =", K.eval(quantized_relu(1,1)(c)).astype(np.float16))
  print("qr_20 =", K.eval(quantized_relu(2,0)(c)).astype(np.float16))
  print("qr_21 =", K.eval(quantized_relu(2,1)(c)).astype(np.float16))
  print("qr_101 =", K.eval(quantized_relu(1,0,1)(c)).astype(np.float16))
  print("qr_111 =", K.eval(quantized_relu(1,1,1)(c)).astype(np.float16))
  print("qr_201 =", K.eval(quantized_relu(2,0,1)(c)).astype(np.float16))
  print("qr_211 =", K.eval(quantized_relu(2,1,1)(c)).astype(np.float16))
  print("qt_200 =", K.eval(quantized_tanh(2,0)(c)).astype(np.float16))
  print("qt_210 =", K.eval(quantized_tanh(2,1)(c)).astype(np.float16))
  print("qt_201 =", K.eval(quantized_tanh(2,0,1)(c)).astype(np.float16))
  print("qt_211 =", K.eval(quantized_tanh(2,1,1)(c)).astype(np.float16))
  set_internal_sigmoid("smooth"); print("with smooth sigmoid")
  print("qr_101 =", K.eval(quantized_relu(1,0,1)(c)).astype(np.float16))
  print("qr_111 =", K.eval(quantized_relu(1,1,1)(c)).astype(np.float16))
  print("qr_201 =", K.eval(quantized_relu(2,0,1)(c)).astype(np.float16))
  print("qr_211 =", K.eval(quantized_relu(2,1,1)(c)).astype(np.float16))
  print("qt_200 =", K.eval(quantized_tanh(2,0)(c)).astype(np.float16))
  print("qt_210 =", K.eval(quantized_tanh(2,1)(c)).astype(np.float16))
  print("qt_201 =", K.eval(quantized_tanh(2,0,1)(c)).astype(np.float16))
  print("qt_211 =", K.eval(quantized_tanh(2,1,1)(c)).astype(np.float16))
  set_internal_sigmoid("real"); print("with real sigmoid")
  print("qr_101 =", K.eval(quantized_relu(1,0,1)(c)).astype(np.float16))
  print("qr_111 =", K.eval(quantized_relu(1,1,1)(c)).astype(np.float16))
  print("qr_201 =", K.eval(quantized_relu(2,0,1)(c)).astype(np.float16))
  print("qr_211 =", K.eval(quantized_relu(2,1,1)(c)).astype(np.float16))
  print("qt_200 =", K.eval(quantized_tanh(2,0)(c)).astype(np.float16))
  print("qt_210 =", K.eval(quantized_tanh(2,1)(c)).astype(np.float16))
  print("qt_201 =", K.eval(quantized_tanh(2,0,1)(c)).astype(np.float16))
  print("qt_211 =", K.eval(quantized_tanh(2,1,1)(c)).astype(np.float16))
  set_internal_sigmoid("hard")
  print(" c =", K.eval(c).astype(np.float16))
  print("q2_31 =", K.eval(quantized_po2(3,1)(c)).astype(np.float16))
  print("q2_32 =", K.eval(quantized_po2(3,2)(c)).astype(np.float16))
  print("qr2_21 =", K.eval(quantized_relu_po2(2,1)(c)).astype(np.float16))
  print("qr2_22 =", K.eval(quantized_relu_po2(2,2)(c)).astype(np.float16))
  print("qr2_44 =", K.eval(quantized_relu_po2(4,1)(c)).astype(np.float16))

  # stochastic rounding
  c = K.constant(np.arange(-1.5, 1.51, 0.3))
  print("q2_32_2 =", K.eval(quantized_relu_po2(32,2)(c)).astype(np.float16))
  b = K.eval(stochastic_binary()(c_1000)).astype(np.int32)
  for i in range(5):
    print("sbinary({}) =".format(i), b[i])
  print("sbinary =", np.round(np.sum(b, axis=0) / 1000.0, 2).astype(np.float16))
  print(" binary =", K.eval(binary()(c)).astype(np.int32))
  print(" c      =", K.eval(c).astype(np.float16))
  for i in range(10):
    print(" s_bin({}) =".format(i),
          K.eval(binary(use_stochastic_rounding=1)(c)).astype(np.int32))
  for i in range(10):
    print(" s_po2({}) =".format(i),
          K.eval(quantized_po2(use_stochastic_rounding=1)(c)).astype(np.int32))
  for i in range(10):
    print(
        " s_relu_po2({}) =".format(i),
        K.eval(quantized_relu_po2(use_stochastic_rounding=1)(c)).astype(
            np.int32))


if __name__ == '__main__':
  main()


================================================
FILE: examples/example_b2t.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Implements total/partial Binary to Thermometer decoder."""

import numpy as np
from qkeras import BinaryToThermometer

if __name__ == "__main__":
  np.random.seed(42)
  x = np.array(range(8))
  b = BinaryToThermometer(x, 2, 8)
  print(b)
  b = BinaryToThermometer(x, 2, 8, 1)
  print(b)
  b = BinaryToThermometer(x, 2, 8, 1, use_two_hot_encoding=1)
  print(b)
  b = BinaryToThermometer(x, 4, 8)
  print(b)
  b = BinaryToThermometer(x, 4, 8, 1)
  print(b)
  b = BinaryToThermometer(x, 4, 8, 1, use_two_hot_encoding=1)
  print(b)
  x = np.random.randint(0, 255, (100, 28, 28, 1))
  print(x[0, 0, 0:5])
  b = BinaryToThermometer(x, 8, 256, 0)
  print(x.shape, b.shape)
  print(b[0, 0, 0:5])
  b = BinaryToThermometer(x, 8, 256, 1)
  print(b[0, 0, 0:5])
  x = np.random.randint(0, 255, (100, 28, 28, 2))
  b = BinaryToThermometer(x, 8, 256, 0, 1)
  print(x.shape, b.shape)
  print(x[0, 0, 0, 0:2])
  print(b[0, 0, 0, 0:8])
  print(b[0, 0, 0, 8:16])


================================================
FILE: examples/example_cifar10_po2.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests qcore model with po2."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
from collections import defaultdict

import tensorflow.keras.backend as K
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import *
from tensorflow.keras.utils import to_categorical
import numpy as np

from qkeras import *

np.random.seed(42)

NB_EPOCH = 50
BATCH_SIZE = 64
VERBOSE = 1
NB_CLASSES = 10
OPTIMIZER = Adam(lr=0.0001)
VALIDATION_SPLIT = 0.1

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

x_train = x_train.astype("float32")
x_test = x_test.astype("float32")

x_train /= 255.0
x_test /= 255.0

print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")

print(y_train[0:10])

y_train = to_categorical(y_train, NB_CLASSES)
y_test = to_categorical(y_test, NB_CLASSES)

x = x_in = Input(x_train.shape[1:], name="input")
x = QActivation("quantized_relu_po2(4,4)", name="acti")(x)
x = QConv2D(
    128, (3, 3),
    strides=1,
    kernel_quantizer=quantized_po2(4, 1),
    bias_quantizer=quantized_po2(4, 4),
    bias_range=4,
    name="conv2d_0_m")(
        x)
x = QActivation("ternary()", name="act0_m")(x)
x = MaxPooling2D(2, 2, name="mp_0")(x)
x = QConv2D(
    256, (3, 3),
    strides=1,
    kernel_quantizer=quantized_po2(4, 1),
    bias_quantizer=quantized_po2(4, 4),
    bias_range=4,
    name="conv2d_1_m")(
        x)
x = QActivation("quantized_relu(6,2)", name="act1_m")(x)
x = MaxPooling2D(2, 2, name="mp_1")(x)
x = QConv2D(
    128, (3, 3),
    strides=1,
    kernel_quantizer=quantized_bits(4, 0, 1),
    bias_quantizer=quantized_bits(4, 0, 1),
    name="conv2d_2_m")(
        x)
x = QActivation("quantized_relu(4,2)", name="act2_m")(x)
x = MaxPooling2D(2, 2, name="mp_2")(x)
x = Flatten()(x)
x = QDense(
    NB_CLASSES,
    kernel_quantizer=quantized_ulaw(4, 0, 1),
    bias_quantizer=quantized_bits(4, 0, 1),
    name="dense")(
        x)
x = Activation("softmax", name="softmax")(x)

model = Model(inputs=[x_in], outputs=[x])
model.summary()

model.compile(
    loss="categorical_crossentropy", optimizer=OPTIMIZER, metrics=["accuracy"])

if int(os.environ.get("TRAIN", 0)):

  history = model.fit(
      x_train, y_train, batch_size=BATCH_SIZE,
      epochs=NB_EPOCH, initial_epoch=1, verbose=VERBOSE,
      validation_split=VALIDATION_SPLIT)

  outputs = []
  output_names = []

  for layer in model.layers:
    if layer.__class__.__name__ in [
        "QActivation", "Activation", "QDense", "QConv2D", "QDepthwiseConv2D"
    ]:
      output_names.append(layer.name)
      outputs.append(layer.output)

  model_debug = Model(inputs=[x_in], outputs=outputs)

  outputs = model_debug.predict(x_train)

  print("{:30} {: 8.4f} {: 8.4f}".format(
      "input", np.min(x_train), np.max(x_train)))

  for n, p in zip(output_names, outputs):
    print("{:30} {: 8.4f} {: 8.4f}".format(n, np.min(p), np.max(p)), end="")
    layer = model.get_layer(n)
    for i, weights in enumerate(layer.get_weights()):
      weights = K.eval(layer.get_quantizers()[i](K.constant(weights)))
      print(" ({: 8.4f} {: 8.4f})".format(np.min(weights), np.max(weights)),
            end="")
      print("")

  score = model.evaluate(x_test, y_test, verbose=VERBOSE)
  print("Test score:", score[0])
  print("Test accuracy:", score[1])

model.summary()

print_qstats(model)


================================================
FILE: examples/example_keras_to_qkeras.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests automatic conversion of keras model to qkeras."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from collections import defaultdict

from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model

from qkeras.estimate import print_qstats
from qkeras.utils import model_quantize
from qkeras.utils import quantized_model_dump

x0 = x_in0 = Input((28, 28, 1), name="input0")
x1 = x_in1 = Input((28, 28, 1), name="input1")
x = Concatenate(name="concat")([x0, x1])
x = Conv2D(128, (3, 3), strides=1, name="conv2d_0_m")(x)
x = Activation("relu", name="act0_m")(x)
x = MaxPooling2D(2, 2, name="mp_0")(x)
x = Conv2D(256, (3, 3), strides=1, name="conv2d_1_m")(x)
x = Activation("relu", name="act1_m")(x)
x = MaxPooling2D(2, 2, name="mp_1")(x)
x = Conv2D(128, (3, 3), strides=1, name="conv2d_2_m")(x)
x = Activation("relu", name="act2_m")(x)
x = MaxPooling2D(2, 2, name="mp_2")(x)
x = Flatten()(x)
x = Dense(10, name="dense")(x)
x = Activation("softmax", name="softmax")(x)

model = Model(inputs=[x_in0, x_in1], outputs=[x])
model.summary()

q_dict = {
    "conv2d_0_m": {
        "kernel_quantizer": "binary()",
        "bias_quantizer": "quantized_bits(4,0,1)"
    },
    "conv2d_1_m": {
        "kernel_quantizer": "ternary()",
        "bias_quantizer": "quantized_bits(4,0,1)"
    },
    "act2_m": "quantized_relu(6,2)",
    "QActivation": {
        "relu": "quantized_relu(4,0)"
    },
    "QConv2D": {
        "kernel_quantizer": "quantized_bits(4,0,1)",
        "bias_quantizer": "quantized_bits(4,0,1)"
    },
    "QDense": {
        "kernel_quantizer": "quantized_bits(3,0,1)",
        "bias_quantizer": "quantized_bits(3,0,1)"
    }
}

qmodel = model_quantize(model, q_dict, 4)

qmodel.summary()

print_qstats(qmodel)

(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_test_arr = [x_test[0:10,:], x_test[0:10,:]]

quantized_model_dump(
    qmodel, x_test_arr,
    layers_to_dump=["input0", "input1", "act2_m", "act1_m", "act0_m"])



================================================
FILE: examples/example_mnist.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""uses po2."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
from collections import defaultdict

import tensorflow.keras.backend as K
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.utils import to_categorical

from qkeras import *
from qkeras.utils import model_save_quantized_weights


import numpy as np
import tensorflow.compat.v1 as tf

np.random.seed(42)

NB_EPOCH = 100
BATCH_SIZE = 64
VERBOSE = 1
NB_CLASSES = 10
OPTIMIZER = Adam(lr=0.0001, decay=0.000025)
VALIDATION_SPLIT = 0.1

train = 1

(x_train, y_train), (x_test, y_test) = mnist.load_data()

RESHAPED = 784

x_test_orig = x_test

x_train = x_train.astype("float32")
x_test = x_test.astype("float32")

x_train = x_train[..., np.newaxis]
x_test = x_test[..., np.newaxis]

x_train /= 256.0
x_test /= 256.0

print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")

print(y_train[0:10])

y_train = to_categorical(y_train, NB_CLASSES)
y_test = to_categorical(y_test, NB_CLASSES)

x = x_in = Input(
    x_train.shape[1:-1] + (1,), name="input")
x = QConv2D(
    32, (2, 2), strides=(2,2),
    kernel_quantizer=quantized_bits(4,0,1),
    bias_quantizer=quantized_bits(4,0,1),
    name="conv2d_0_m")(x)
x = QActivation("quantized_relu(4,0)", name="act0_m")(x)
x = QConv2D(
    64, (3, 3), strides=(2,2),
    kernel_quantizer=quantized_bits(4,0,1),
    bias_quantizer=quantized_bits(4,0,1),
    name="conv2d_1_m")(x)
x = QActivation("quantized_relu(4,0)", name="act1_m")(x)
x = QConv2D(
    64, (2, 2), strides=(2,2),
    kernel_quantizer=quantized_bits(4,0,1),
    bias_quantizer=quantized_bits(4,0,1),
    name="conv2d_2_m")(x)
x = QActivation("quantized_relu(4,0)", name="act2_m")(x)
x = Flatten()(x)
x = QDense(NB_CLASSES, kernel_quantizer=quantized_bits(4,0,1),
           bias_quantizer=quantized_bits(4,0,1),
           name="dense")(x)
x_out = x
x = Activation("softmax", name="softmax")(x)

model = Model(inputs=[x_in], outputs=[x])
mo = Model(inputs=[x_in], outputs=[x_out])
model.summary()

model.compile(
    loss="categorical_crossentropy", optimizer=OPTIMIZER, metrics=["accuracy"])

if train:

  history = model.fit(
      x_train, y_train, batch_size=BATCH_SIZE,
      epochs=NB_EPOCH, initial_epoch=1, verbose=VERBOSE,
      validation_split=VALIDATION_SPLIT)

  outputs = []
  output_names = []

  for layer in model.layers:
    if layer.__class__.__name__ in ["QActivation", "Activation",
                                  "QDense", "QConv2D", "QDepthwiseConv2D"]:
      output_names.append(layer.name)
      outputs.append(layer.output)

  model_debug = Model(inputs=[x_in], outputs=outputs)

  outputs = model_debug.predict(x_train)

  print("{:30} {: 8.4f} {: 8.4f}".format(
      "input", np.min(x_train), np.max(x_train)))

  for n, p in zip(output_names, outputs):
    print("{:30} {: 8.4f} {: 8.4f}".format(n, np.min(p), np.max(p)), end="")
    layer = model.get_layer(n)
    for i, weights in enumerate(layer.get_weights()):
      weights = K.eval(layer.get_quantizers()[i](K.constant(weights)))
      print(" ({: 8.4f} {: 8.4f})".format(np.min(weights), np.max(weights)),
            end="")
      print("")

  p_test = mo.predict(x_test)
  p_test.tofile("p_test.bin")

  score = model.evaluate(x_test, y_test, verbose=VERBOSE)
  print("Test score:", score[0])
  print("Test accuracy:", score[1])

  all_weights = []
  model_save_quantized_weights(model)

  for layer in model.layers:
    for w, weights in enumerate(layer.get_weights()):
      print(layer.name, w)
      all_weights.append(weights.flatten())

  all_weights = np.concatenate(all_weights).astype(np.float32)
  print(all_weights.size)


for layer in model.layers:
  for w, weight in enumerate(layer.get_weights()):
    print(layer.name, w, weight.shape)

print_qstats(model)


================================================
FILE: examples/example_mnist_ae.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""uses po2."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
from collections import defaultdict

import tensorflow.keras.backend as K
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.utils import to_categorical

from qkeras import *
from qkeras.utils import model_save_quantized_weights


import numpy as np
import tensorflow.compat.v1 as tf

np.random.seed(42)

NB_EPOCH = 100
BATCH_SIZE = 64
VERBOSE = 1
NB_CLASSES = 10
OPTIMIZER = Adam(lr=0.0001, decay=0.000025)
VALIDATION_SPLIT = 0.1

train = 1

(x_train, y_train), (x_test, y_test) = mnist.load_data()

RESHAPED = 784

x_train = x_train.astype("float32")
x_test = x_test.astype("float32")

x_train = x_train[..., np.newaxis]
x_test = x_test[..., np.newaxis]

x_train /= 256.0
x_test /= 256.0

print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")

print(y_train[0:10])

y_train = to_categorical(y_train, NB_CLASSES)
y_test = to_categorical(y_test, NB_CLASSES)

x = x_in = Input(
    x_train.shape[1:-1] + (1,))
x = QConv2D(
    32,
    kernel_size=(3, 3),
    kernel_quantizer=quantized_bits(4,0,1),
    bias_quantizer=quantized_bits(4,0,1))(x)
x = QActivation("quantized_relu(4,0)")(x)
x = QConv2D(
    16,
    kernel_size=(3, 3),
    kernel_quantizer=quantized_bits(4,0,1),
    bias_quantizer=quantized_bits(4,0,1))(x)
x = QActivation("quantized_relu(4,0)")(x)
x = QConv2D(
    8,
    kernel_size=(3, 3),
    kernel_quantizer=quantized_bits(4,0,1),
    bias_quantizer=quantized_bits(4,0,1))(x)
x = QActivation("quantized_relu(4,0)")(x)
x = QConv2DTranspose(
    8,
    kernel_size=(3, 3),
    kernel_quantizer=quantized_bits(4,0,1),
    bias_quantizer=quantized_bits(4,0,1))(x)
x = QActivation("quantized_relu(4,0)")(x)
x = QConv2DTranspose(
    16,
    kernel_size=(3, 3),
    kernel_quantizer=quantized_bits(4,0,1),
    bias_quantizer=quantized_bits(4,0,1))(x)
x = QActivation("quantized_relu(4,0)")(x)
x = QConv2DTranspose(
    32,
    kernel_size=(3, 3),
    kernel_quantizer=quantized_bits(4,0,1),
    bias_quantizer=quantized_bits(4,0,1))(x)
x = QActivation("quantized_relu(4,0)")(x)
x = QConv2D(
    1,
    kernel_size=(3, 3),
    padding="same",
    kernel_quantizer=quantized_bits(4,0,1),
    bias_quantizer=quantized_bits(4,0,1))(x)
x_out = x
x = Activation("sigmoid")(x)

model = Model(inputs=[x_in], outputs=[x])
mo = Model(inputs=[x_in], outputs=[x_out])
model.summary()

model.compile(
    loss="binary_crossentropy", optimizer=OPTIMIZER, metrics=["accuracy"])

if train:

  history = model.fit(
      x_train, x_train, batch_size=BATCH_SIZE,
      epochs=NB_EPOCH, initial_epoch=1, verbose=VERBOSE,
      validation_split=VALIDATION_SPLIT)

  # Generate reconstructions
  num_reco = 8
  samples = x_test[:num_reco]
  targets = y_test[:num_reco]
  reconstructions = model.predict(samples)


for layer in model.layers:
  for w, weight in enumerate(layer.get_weights()):
    print(layer.name, w, weight.shape)

print_qstats(model)


================================================
FILE: examples/example_mnist_b2t.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests qcore model with BinaryToThermometer."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os

import tensorflow.keras.backend as K
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.utils import to_categorical
import numpy as np

from qkeras import *

np.random.seed(42)

NB_EPOCH = 20
BATCH_SIZE = 32
VERBOSE = 1
NB_CLASSES = 10
OPTIMIZER = Adam(lr=0.0001)
N_HIDDEN = 100
VALIDATION_SPLIT = 0.1

T_CLASSES = 256
T_WITH_RESIDUE = 0

(x_train, y_train), (x_test, y_test) = mnist.load_data()

RESHAPED = 784

x_train = x_train.astype("float32")
x_test = x_test.astype("float32")

x_train = x_train[..., np.newaxis]
x_test = x_test[..., np.newaxis]

if T_CLASSES == 1:
  x_train /= 256.0
  x_test /= 256.0

print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")

print(y_train[0:10])

# x_train = x_train[0:1000]
# y_train = y_train[0:1000]
# x_test = x_test[0:100]
# y_test = y_test[0:100]

y_train = to_categorical(y_train, NB_CLASSES)
y_test = to_categorical(y_test, NB_CLASSES)

# we ran out of memory here, so we split x_train/x_test into smaller groups

x = x_in = Input(
    x_train.shape[1:-1] + (T_CLASSES,), name="input")

# Number is represented as 1.bbb, where number of bits of bbb is
# log2(256/T_CLASSES) if T_WITH_RESIDUE == 1

bits = (
    (T_WITH_RESIDUE == 1) * int(np.ceil(np.log2(256/T_CLASSES))) +
    (T_CLASSES > 1)
)
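
# Worked example (illustrative): with the defaults above (T_CLASSES=256,
# T_WITH_RESIDUE=0), bits = 0 + int(256 > 1) = 1, i.e. a 1-bit thermometer
# input. With T_CLASSES=8 and T_WITH_RESIDUE=1, bits = ceil(log2(256/8)) + 1
# = 6: the leading "1" plus a 5-bit residue "bbb" per thermometer channel.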

print("Input quantizer: quantized_relu({},{})".format(bits, int(T_CLASSES > 1)))
x = QActivation("quantized_relu({},{})".format(bits, int(T_CLASSES > 1)))(x)
x = QConv2D(
    64, (3, 3), strides=1, padding="same",
    kernel_quantizer=quantized_po2(4,1),
    bias_quantizer=quantized_bits(4,2,1),
    bias_range=4,
    name="conv2d_0_m")(x)
x = QActivation("quantized_relu(4,0)", name="act0_m")(x)
x = MaxPooling2D(2,2,name="mp_0")(x)
x = QConv2D(
    32, (3, 3), strides=1, padding="same",
    kernel_quantizer=stochastic_ternary(),
    bias_quantizer=quantized_bits(8,5,1),
    bias_range=32,
    name="conv2d_1_m")(x)
x = QActivation("quantized_relu(4,0)", name="act1_m")(x)
x = MaxPooling2D(2,2,name="mp_1")(x)
x = QConv2D(
    16, (3, 3), strides=1, padding="same",
    kernel_quantizer=quantized_bits(4,0,1),
    bias_quantizer=quantized_bits(8,5,1),
    bias_range=32,
    name="conv2d_2_m")(x)
x = QActivation("quantized_relu(6,2)", name="act2_m")(x)
x = MaxPooling2D(2,2,name="mp_2")(x)
x = Flatten()(x)
x = QDense(NB_CLASSES, kernel_quantizer=quantized_bits(4,0,1),
           bias_quantizer=quantized_bits(4,0,1),
           name="dense2")(x)
x = Activation("softmax", name="softmax")(x)

model = Model(inputs=[x_in], outputs=[x])
model.summary()

model.compile(
    loss="categorical_crossentropy", optimizer=OPTIMIZER, metrics=["accuracy"])

outputs = []
output_names = []

for layer in model.layers:
  if layer.__class__.__name__ in ["QActivation", "Activation",
                                  "QDense", "QConv2D", "QDepthwiseConv2D"]:
    output_names.append(layer.name)
    outputs.append(layer.output)

model_debug = Model(inputs=[x_in], outputs=outputs)

batch_size = 1000 * BATCH_SIZE
n_batches = x_train.shape[0] // batch_size

if T_CLASSES > 1:
  x_test = BinaryToThermometer(x_test, T_CLASSES, 256, T_WITH_RESIDUE)

if int(os.environ.get("TRAIN", 0)):

  for i in range(NB_EPOCH):
    for b in range(n_batches):

      min_b = b * batch_size
      max_b = (b + 1) * batch_size
      if max_b > x_train.shape[0]:
        max_b = x_train.shape[0]

      if T_CLASSES > 1:
        x = BinaryToThermometer(
            x_train[min_b:max_b], T_CLASSES, 256, T_WITH_RESIDUE)
      else:
        x = x_train[min_b:max_b]

      history = model.fit(
          x, y_train[min_b:max_b], batch_size=BATCH_SIZE,
          epochs=i+1, initial_epoch=i, verbose=VERBOSE,
          validation_split=VALIDATION_SPLIT)

  if T_CLASSES > 1:
    x = BinaryToThermometer(x_train[0:100], T_CLASSES, 256, T_WITH_RESIDUE)
  else:
    x = x_train[0:100]

  outputs = model_debug.predict(x)

  print("{:30} {: 8.4f} {: 8.4f}".format("input", np.min(x), np.max(x)))
  for n, p in zip(output_names, outputs):
    print("{:30} {: 8.4f} {: 8.4f}".format(n, np.min(p), np.max(p)), end="")
    layer = model.get_layer(n)
    for i, weights in enumerate(layer.get_weights()):
      weights = K.eval(layer.get_quantizers()[i](K.constant(weights)))
      print(" ({: 8.4f} {: 8.4f})".format(np.min(weights), np.max(weights)),
            end="")
    print("")

  score = model.evaluate(x_test, y_test, verbose=VERBOSE)
  print("Test score:", score[0])
  print("Test accuracy:", score[1])

print_qstats(model)

acc = analyze_accumulator_from_sample(model, x_test, mode="sampled")

print(acc)




================================================
FILE: examples/example_mnist_bn.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests mnist batchnormalization used as learned scale factor."""

# to run, THRESHOLD=0.05 WITH_BN=1 EPOCHS=5 TRAIN=1 python example_mnist_bn.py

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from collections import defaultdict
import os

import numpy as np
from six.moves import zip
from tensorflow.keras import callbacks
import tensorflow.keras.backend as K
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import *
from tensorflow.keras.utils import to_categorical

from qkeras import *

np.random.seed(42)

TRAIN = 1
NB_EPOCH = 2
BATCH_SIZE = 64
VERBOSE = 1
NB_CLASSES = 10
OPTIMIZER = Adam(lr=0.0001)
VALIDATION_SPLIT = 0.1
WITH_BN = 1
THRESHOLD = 0.1


class LearningRateAdjuster(callbacks.Callback):
  def __init__(self):
    super().__init__()
    self.learning_rate_factor = 1.0

  def on_epoch_end(self, epochs, logs):
    max_variance = -1

    for layer in self.model.layers:
      if layer.__class__.__name__ in [
          "BatchNormalization",
          "QBatchNormalization"
      ]:
        variance = np.max(layer.get_weights()[-1])
        if variance > max_variance:
          max_variance = variance

    if max_variance > 32 and self.learning_rate_factor < 100:
      learning_rate = K.get_value(self.model.optimizer.learning_rate)
      self.learning_rate_factor /= 2.0
      print("***** max_variance is {} / lr is {} *****".format(
          max_variance, learning_rate))
      K.eval(K.update(
          self.model.optimizer.learning_rate, learning_rate / 2.0
      ))

lra = LearningRateAdjuster()

(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape(x_train.shape + (1,)).astype("float32")
x_test = x_test.reshape(x_test.shape + (1,)).astype("float32")

x_train /= 256.0
x_test /= 256.0

print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")

print(y_train[0:10])

y_train = to_categorical(y_train, NB_CLASSES)
y_test = to_categorical(y_test, NB_CLASSES)

x = x_in = Input(x_train.shape[1:], name="input")
#x = QActivation("quantized_relu_po2(4,1)", name="acti")(x)
x = QConv2D(
    128, (3, 3),
    strides=1,
    kernel_quantizer=ternary(threshold=THRESHOLD), #quantized_po2(4, 1),
    bias_quantizer=quantized_bits(4,2,0) if not WITH_BN else None,
    bias_range=4 if not WITH_BN else None,
    use_bias=not WITH_BN,
    name="conv2d_0_m")(x)
if WITH_BN:
  x = QBatchNormalization(
      gamma_quantizer=quantized_relu_po2(4,8),
      variance_quantizer=quantized_relu_po2(6),
      beta_quantizer=quantized_po2(4, 4),
      gamma_range=8,
      beta_range=4,
      name="bn0")(x)
x = QActivation("quantized_relu(3,1)", name="act0_m")(x)
x = MaxPooling2D(2, 2, name="mp_0")(x)
x = QConv2D(
    256, (3, 3),
    strides=1,
    kernel_quantizer=ternary(threshold=THRESHOLD), #quantized_bits(2,0,1),
    bias_quantizer=quantized_bits(4,2,1) if not WITH_BN else None,
    bias_range=4 if not WITH_BN else None,
    use_bias=not WITH_BN,
    name="conv2d_1_m")(x)
if WITH_BN:
  x = QBatchNormalization(
      gamma_quantizer=quantized_relu_po2(4,8),
      variance_quantizer=quantized_relu_po2(6),
      beta_quantizer=quantized_po2(4, 4),
      gamma_range=8,
      beta_range=4,
      name="bn1")(x)
x = QActivation("quantized_relu(3,1)", name="act1_m")(x)
x = MaxPooling2D(2, 2, name="mp_1")(x)
x = QConv2D(
    128, (3, 3),
    strides=1,
    kernel_quantizer=ternary(threshold=THRESHOLD), #quantized_bits(2,0,1),
    bias_quantizer=quantized_bits(4,2,1) if not WITH_BN else None,
    bias_range=4 if not WITH_BN else None,
    use_bias=not WITH_BN,
    name="conv2d_2_m")(x)
if WITH_BN:
  x = QBatchNormalization(
      gamma_quantizer=quantized_relu_po2(4,8),
      variance_quantizer=quantized_relu_po2(6),
      beta_quantizer=quantized_po2(4, 4),
      gamma_range=8,
      beta_range=4,
      name="bn2")(x)
x = QActivation("quantized_relu(3,1)", name="act2_m")(x)
x = MaxPooling2D(2, 2, name="mp_2")(x)
x = Flatten()(x)
x = QDense(
    NB_CLASSES,
    kernel_quantizer=quantized_ulaw(4, 0, 1),
    bias_quantizer=quantized_bits(4, 0, 1),
    name="dense")(
        x)
x = Activation("softmax", name="softmax")(x)

model = Model(inputs=[x_in], outputs=[x])
model.summary()

model.compile(
    loss="categorical_crossentropy", optimizer=OPTIMIZER, metrics=["accuracy"])


if TRAIN:
  history = model.fit(
      x_train, y_train, batch_size=BATCH_SIZE,
      epochs=NB_EPOCH, initial_epoch=1, verbose=VERBOSE,
      validation_split=VALIDATION_SPLIT,
      callbacks=[]) #lra])

  outputs = []
  output_names = []

  for layer in model.layers:
    if layer.__class__.__name__ in [
        "QActivation", "QBatchNormalization", "Activation", "QDense",
        "QConv2D", "QDepthwiseConv2D"
    ]:
      output_names.append(layer.name)
      outputs.append(layer.output)

  model_debug = Model(inputs=[x_in], outputs=outputs)

  outputs = model_debug.predict(x_train)

  print("{:30} {: 8.4f} {: 8.4f}".format(
      "input", np.min(x_train), np.max(x_train)))

  for n, p in zip(output_names, outputs):
    print("{:30} {: 8.4f} {: 8.4f}".format(n, np.min(p), np.max(p)), end="")
    layer = model.get_layer(n)
    for i, weights in enumerate(layer.get_weights()):
      if layer.get_quantizers()[i]:
        weights = K.eval(layer.get_quantizers()[i](K.constant(weights)))
      print(" ({: 8.4f} {: 8.4f})".format(np.min(weights), np.max(weights)),
            end="")
    print("")

  score = model.evaluate(x_test, y_test, verbose=False)
  print("Test score:", score[0])
  print("Test accuracy:", score[1])

print_qstats(model)
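The convolutions above use `ternary(threshold=THRESHOLD)` as their kernel quantizer, which maps each weight to {-1, 0, +1} by zeroing everything inside a threshold band. A minimal numpy sketch of that mapping (illustrative only; qkeras's `ternary` additionally handles scaling and optional stochastic rounding):

```python
import numpy as np

def ternary_sketch(w, threshold=0.1):
    # Weights inside the threshold band become 0; the rest keep only
    # their sign, as in ternary(threshold=THRESHOLD) above.
    return np.where(np.abs(w) < threshold, 0.0, np.sign(w))

print(ternary_sketch(np.array([-0.5, -0.05, 0.0, 0.08, 0.3])))
# [-1.  0.  0.  0.  1.]
```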


================================================
FILE: examples/example_mnist_po2.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests qlayers model with po2."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow.keras.backend as K
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
import numpy as np

from qkeras import *   # pylint: disable=wildcard-import

np.random.seed(42)

NB_EPOCH = 5
BATCH_SIZE = 64
VERBOSE = 1
NB_CLASSES = 10
OPTIMIZER = Adam(learning_rate=0.0001, decay=0.000025)
N_HIDDEN = 100
VALIDATION_SPLIT = 0.1

QUANTIZED = 1
CONV2D = 1

(x_train, y_train), (x_test, y_test) = mnist.load_data()

RESHAPED = 784

x_train = x_train.astype("float32")
x_test = x_test.astype("float32")

x_train = x_train[..., np.newaxis]
x_test = x_test[..., np.newaxis]

x_train /= 256.0
x_test /= 256.0

train = False

print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")

print(y_train[0:10])

y_train = to_categorical(y_train, NB_CLASSES)
y_test = to_categorical(y_test, NB_CLASSES)

# on low-memory machines, split x_train/x_test into smaller groups for predict

x = x_in = Input(x_train.shape[1:-1] + (1,), name="input")
x = QActivation("quantized_relu_po2(4)", name="acti")(x)
x = QConv2D(
    32, (2, 2),
    strides=(2, 2),
    kernel_quantizer=quantized_po2(4, 1),
    bias_quantizer=quantized_po2(4, 1),
    name="conv2d_0_m")(
        x)
x = QActivation("quantized_relu_po2(4,4)", name="act0_m")(x)
x = QConv2D(
    64, (3, 3),
    strides=(2, 2),
    kernel_quantizer=quantized_po2(4, 1),
    bias_quantizer=quantized_po2(4, 1),
    name="conv2d_1_m")(
        x)
x = QActivation("quantized_relu_po2(4,4,use_stochastic_rounding=True)",
                name="act1_m")(x)
x = QConv2D(
    64, (2, 2),
    strides=(2, 2),
    kernel_quantizer=quantized_po2(4, 1, use_stochastic_rounding=True),
    bias_quantizer=quantized_po2(4, 1),
    name="conv2d_2_m")(
        x)
x = QActivation("quantized_relu(4,1)", name="act2_m")(x)
x = Flatten()(x)
x = QDense(
    NB_CLASSES,
    kernel_quantizer=quantized_bits(4, 0, 1),
    bias_quantizer=quantized_bits(4, 0, 1),
    name="dense")(
        x)
x = Activation("softmax", name="softmax")(x)

model = Model(inputs=[x_in], outputs=[x])
model.summary()

model.compile(
    loss="categorical_crossentropy", optimizer=OPTIMIZER, metrics=["accuracy"])

if train:
  history = model.fit(
      x_train, y_train, batch_size=BATCH_SIZE,
      epochs=NB_EPOCH, initial_epoch=1, verbose=VERBOSE,
      validation_split=VALIDATION_SPLIT)

  outputs = []
  output_names = []

  for layer in model.layers:
    if layer.__class__.__name__ in [
        "QActivation", "Activation", "QDense", "QConv2D", "QDepthwiseConv2D"
    ]:
      output_names.append(layer.name)
      outputs.append(layer.output)

  model_debug = Model(inputs=[x_in], outputs=outputs)

  outputs = model_debug.predict(x_train)

  print("{:30} {: 8.4f} {: 8.4f}".format(
      "input", np.min(x_train), np.max(x_train)))

  for n, p in zip(output_names, outputs):
    print("{:30} {: 8.4f} {: 8.4f}".format(n, np.min(p), np.max(p)), end="")
    layer = model.get_layer(n)
    for i, weights in enumerate(layer.get_weights()):
      if layer.get_quantizers()[i]:
        weights = K.eval(layer.get_quantizers()[i](K.constant(weights)))
      print(" ({: 8.4f} {: 8.4f})".format(np.min(weights), np.max(weights)),
            end="")
    print("")

  score = model.evaluate(x_test, y_test, verbose=VERBOSE)
  print("Test score:", score[0])
  print("Test accuracy:", score[1])

model.summary()

print_qstats(model)
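The `quantized_po2` quantizers used above constrain weights to signed powers of two, which turns multiplications into shifts in hardware. A rough numpy sketch of the rounding step, assuming a 4-bit code and a maximum exponent of 1 (the real `quantized_po2` differs in range handling and gradient behavior):

```python
import numpy as np

def po2_sketch(w, bits=4, max_exp=1):
    # Round |w| to the nearest power of two and keep the sign;
    # exponents are clipped to what a (bits-1)-bit code can hold.
    sign = np.sign(w)
    exp = np.round(np.log2(np.maximum(np.abs(w), 2.0 ** -16)))
    n_codes = 2 ** (bits - 1)
    exp = np.clip(exp, max_exp - n_codes + 1, max_exp)
    return sign * 2.0 ** exp

print(po2_sketch(np.array([0.3, -0.7])))  # -> 0.25 and -0.5
```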


================================================
FILE: examples/example_mnist_prune.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Example of mnist model with pruning.
   Adapted from TF model optimization example."""

import tempfile
import numpy as np

import tensorflow.keras.backend as K
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.models import Sequential
from tensorflow.keras.models import save_model
from tensorflow.keras.utils import to_categorical

from qkeras import QActivation
from qkeras import QDense
from qkeras import QConv2D
from qkeras import quantized_bits
from qkeras.utils import load_qmodel
from qkeras.utils import print_model_sparsity

from tensorflow_model_optimization.python.core.sparsity.keras import prune
from tensorflow_model_optimization.python.core.sparsity.keras import pruning_callbacks
from tensorflow_model_optimization.python.core.sparsity.keras import pruning_schedule


batch_size = 128
num_classes = 10
epochs = 12

prune_whole_model = True # Prune whole model or just specified layers


def build_model(input_shape):
    x = x_in = Input(shape=input_shape, name="input")
    x = QConv2D(
        32, (2, 2), strides=(2,2),
        kernel_quantizer=quantized_bits(4,0,1),
        bias_quantizer=quantized_bits(4,0,1),
        name="conv2d_0_m")(x)
    x = QActivation("quantized_relu(4,0)", name="act0_m")(x)
    x = QConv2D(
        64, (3, 3), strides=(2,2),
        kernel_quantizer=quantized_bits(4,0,1),
        bias_quantizer=quantized_bits(4,0,1),
        name="conv2d_1_m")(x)
    x = QActivation("quantized_relu(4,0)", name="act1_m")(x)
    x = QConv2D(
        64, (2, 2), strides=(2,2),
        kernel_quantizer=quantized_bits(4,0,1),
        bias_quantizer=quantized_bits(4,0,1),
        name="conv2d_2_m")(x)
    x = QActivation("quantized_relu(4,0)", name="act2_m")(x)
    x = Flatten()(x)
    x = QDense(num_classes, kernel_quantizer=quantized_bits(4,0,1),
               bias_quantizer=quantized_bits(4,0,1),
               name="dense")(x)
    x = Activation("softmax", name="softmax")(x)

    model = Model(inputs=[x_in], outputs=[x])
    return model


def build_layerwise_model(input_shape, **pruning_params):
    return Sequential([
        prune.prune_low_magnitude(
            QConv2D(
                32, (2, 2), strides=(2,2),
                kernel_quantizer=quantized_bits(4,0,1),
                bias_quantizer=quantized_bits(4,0,1),
                name="conv2d_0_m"),
            input_shape=input_shape,
            **pruning_params),
        QActivation("quantized_relu(4,0)", name="act0_m"),
        prune.prune_low_magnitude(
            QConv2D(
                64, (3, 3), strides=(2,2),
                kernel_quantizer=quantized_bits(4,0,1),
                bias_quantizer=quantized_bits(4,0,1),
                name="conv2d_1_m"),
            **pruning_params),
        QActivation("quantized_relu(4,0)", name="act1_m"),
        prune.prune_low_magnitude(
            QConv2D(
                64, (2, 2), strides=(2,2),
                kernel_quantizer=quantized_bits(4,0,1),
                bias_quantizer=quantized_bits(4,0,1),
                name="conv2d_2_m"),
            **pruning_params),
        QActivation("quantized_relu(4,0)", name="act2_m"),
        Flatten(),
        prune.prune_low_magnitude(
            QDense(
                num_classes, kernel_quantizer=quantized_bits(4,0,1),
                bias_quantizer=quantized_bits(4,0,1),
                name="dense"),
            **pruning_params),
        Activation("softmax", name="softmax")
  ])


def train_and_save(model, x_train, y_train, x_test, y_test):
    model.compile(
        loss="categorical_crossentropy",
        optimizer="adam",
        metrics=["accuracy"])

    # Print the model summary.
    model.summary()

    # Add a pruning step callback to peg the pruning step to the optimizer's
    # step. Also add a callback to add pruning summaries to tensorboard
    callbacks = [
        pruning_callbacks.UpdatePruningStep(),
        #pruning_callbacks.PruningSummaries(log_dir=tempfile.mkdtemp())
        pruning_callbacks.PruningSummaries(log_dir="/tmp/mnist_prune")
    ]

    model.fit(
        x_train,
        y_train,
        batch_size=batch_size,
        epochs=epochs,
        verbose=1,
        callbacks=callbacks,
        validation_data=(x_test, y_test))
    score = model.evaluate(x_test, y_test, verbose=0)
    print("Test loss:", score[0])
    print("Test accuracy:", score[1])

    print_model_sparsity(model)

    # Export and import the model. Check that accuracy persists.
    _, keras_file = tempfile.mkstemp(".h5")
    print("Saving model to: ", keras_file)
    save_model(model, keras_file)
    
    print("Reloading model")
    with prune.prune_scope():
        loaded_model = load_qmodel(keras_file)
    score = loaded_model.evaluate(x_test, y_test, verbose=0)
    print("Test loss:", score[0])
    print("Test accuracy:", score[1])


def main():
    # input image dimensions
    img_rows, img_cols = 28, 28

    # the data, shuffled and split between train and test sets
    (x_train, y_train), (x_test, y_test) = mnist.load_data()

    if K.image_data_format() == "channels_first":
      x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
      x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
      input_shape = (1, img_rows, img_cols)
    else:
      x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
      x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
      input_shape = (img_rows, img_cols, 1)

    x_train = x_train.astype("float32")
    x_test = x_test.astype("float32")
    x_train /= 255
    x_test /= 255
    print("x_train shape:", x_train.shape)
    print(x_train.shape[0], "train samples")
    print(x_test.shape[0], "test samples")

    # convert class vectors to binary class matrices
    y_train = to_categorical(y_train, num_classes)
    y_test = to_categorical(y_test, num_classes)

    pruning_params = {
        "pruning_schedule":
            pruning_schedule.ConstantSparsity(0.75, begin_step=2000, frequency=100)
    }
    
    if prune_whole_model:
        model = build_model(input_shape)
        model = prune.prune_low_magnitude(model, **pruning_params)
    else:
        model = build_layerwise_model(input_shape, **pruning_params)

    train_and_save(model, x_train, y_train, x_test, y_test)


if __name__ == "__main__":
    main()
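`ConstantSparsity(0.75, ...)` drives each pruned layer toward 75% zeros by repeatedly zeroing the smallest-magnitude weights. A minimal numpy sketch of that target (the tfmot schedule ramps the sparsity up across training steps rather than applying it at once):

```python
import numpy as np

def magnitude_prune(w, sparsity=0.75):
    # Zero the smallest-magnitude `sparsity` fraction of the weights.
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    cutoff = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= cutoff, 0.0, w)

w = np.arange(1.0, 9.0)          # eight weights: 1..8
pruned = magnitude_prune(w)
print((pruned == 0).mean())      # 0.75
```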

================================================
FILE: examples/example_qdense.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests qdense model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse

from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
import numpy as np

from qkeras import print_qstats
from qkeras import QActivation
from qkeras import QDense
from qkeras import quantized_bits
from qkeras import ternary


np.random.seed(42)
OPTIMIZER = Adam()
NB_EPOCH = 1
BATCH_SIZE = 32
VERBOSE = 1
NB_CLASSES = 10
N_HIDDEN = 100
VALIDATION_SPLIT = 0.1
RESHAPED = 784


def QDenseModel(weights_f, load_weights=False):
  """Construct QDenseModel."""

  x = x_in = Input((RESHAPED,), name="input")
  x = QActivation("quantized_relu(4)", name="act_i")(x)
  x = QDense(N_HIDDEN, kernel_quantizer=ternary(),
             bias_quantizer=quantized_bits(4, 0, 1), name="dense0")(x)
  x = QActivation("quantized_relu(2)", name="act0")(x)
  x = QDense(
      NB_CLASSES,
      kernel_quantizer=quantized_bits(4, 0, 1),
      bias_quantizer=quantized_bits(4, 0, 1),
      name="dense2")(
          x)
  x = Activation("softmax", name="softmax")(x)

  model = Model(inputs=[x_in], outputs=[x])
  model.summary()
  model.compile(loss="categorical_crossentropy",
                optimizer=OPTIMIZER, metrics=["accuracy"])

  if load_weights and weights_f:
    model.load_weights(weights_f)

  print_qstats(model)
  return model


def UseNetwork(weights_f, load_weights=False):
  """Uses QDenseModel.

  Args:
    weights_f: weight file location.
    load_weights: if True, load weights from weights_f instead of training.
  """
  model = QDenseModel(weights_f, load_weights)

  batch_size = BATCH_SIZE
  (x_train_, y_train_), (x_test_, y_test_) = mnist.load_data()

  x_train_ = x_train_.reshape(60000, RESHAPED)
  x_test_ = x_test_.reshape(10000, RESHAPED)
  x_train_ = x_train_.astype("float32")
  x_test_ = x_test_.astype("float32")

  x_train_ /= 255
  x_test_ /= 255

  print(x_train_.shape[0], "train samples")
  print(x_test_.shape[0], "test samples")

  y_train_ = to_categorical(y_train_, NB_CLASSES)
  y_test_ = to_categorical(y_test_, NB_CLASSES)

  if not load_weights:
    model.fit(
        x_train_,
        y_train_,
        batch_size=batch_size,
        epochs=NB_EPOCH,
        verbose=VERBOSE,
        validation_split=VALIDATION_SPLIT)

    if weights_f:
      model.save_weights(weights_f)

  score = model.evaluate(x_test_, y_test_, verbose=VERBOSE)
  print_qstats(model)
  print("Test score:", score[0])
  print("Test accuracy:", score[1])


def ParserArgs():
  parser = argparse.ArgumentParser()
  parser.add_argument("-l", "--load_weight", default="0",
                      help="""load weights directly from file.
                            0 is to disable and train the network.""")
  parser.add_argument("-w", "--weight_file", default=None)
  a = parser.parse_args()
  return a


if __name__ == "__main__":
  args = ParserArgs()
  lw = args.load_weight != "0"
  UseNetwork(args.weight_file, load_weights=lw)


================================================
FILE: examples/example_qoctave.py
================================================
# Copyright 2019 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""QOctave example."""
import numpy as np
import sys
from tensorflow.keras import activations
from tensorflow.keras import initializers
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from functools import partial
from qkeras import *   # pylint: disable=wildcard-import


def create_model():
  """use qocatve in network."""
  kernel_initializer=initializers.he_normal(seed=42)

  x = x_in = Input(shape=(256, 256, 3))

  # Block 1
  high, low = QOctaveConv2D(
      32, (3, 3),
      alpha=0.5,
      strides=(2, 2),
      padding='valid',
      kernel_initializer=kernel_initializer,
      bias_initializer="zeros",
      bias_quantizer="quantized_bits(4,1)",
      depthwise_quantizer="quantized_bits(4,1)",
      depthwise_activation="quantized_bits(6,2,1)",
      pointwise_quantizer="quantized_bits(4,1)",
      acc_quantizer="quantized_bits(16,7,1)",
      activation="quantized_relu(6,2)",
      use_separable=True,
      name='block1_conv1')([x, None])

  # Block 2
  high, low = QOctaveConv2D(
      64, (3, 3),
      alpha=0.4,
      strides=(2, 2),
      padding='same',
      kernel_initializer=kernel_initializer,
      bias_initializer="zeros",
      bias_quantizer="quantized_bits(4,1)",
      depthwise_quantizer="quantized_bits(4,1)",
      depthwise_activation="quantized_bits(6,2,1)",
      pointwise_quantizer="quantized_bits(4,1)",
      acc_quantizer="quantized_bits(16,7,1)",
      activation="quantized_relu(6,2)",
      use_separable=True,
      name='block2_conv1')([high, low])

  # Block 3
  high, low = QOctaveConv2D(
      64, (3, 3),
      alpha=0.4,
      strides=(2, 2),
      padding='same',
      kernel_initializer=kernel_initializer,
      bias_initializer="zeros",
      bias_quantizer="quantized_bits(4,1)",
      depthwise_quantizer="quantized_bits(4,1)",
      depthwise_activation="quantized_bits(6,2,1)",
      pointwise_quantizer="quantized_bits(4,1)",
      acc_quantizer="quantized_bits(16,7,1)",
      activation="quantized_relu(6,2)",
      use_separable=True,
      name='block3_conv1')([high, low])

  high, low = QOctaveConv2D(
      32, (3, 3),
      alpha=0.4,
      strides=(1, 1),
      padding='same',
      kernel_initializer=kernel_initializer,
      bias_initializer='zeros',
      bias_quantizer="quantized_bits(4,1)",
      depthwise_quantizer="quantized_bits(4,1)",
      depthwise_activation="quantized_bits(6,2,1)",
      pointwise_quantizer="quantized_bits(4,1)",
      acc_quantizer="quantized_bits(16,7,1)",
      activation="quantized_relu(6,2)",
      use_separable=True,
      name='block3_conv2')([high, low])

  high, low = QOctaveConv2D(
      32, (3, 3),
      alpha=0.3,
      strides=(1, 1),
      padding='same',
      kernel_initializer=kernel_initializer,
      bias_initializer='zeros',
      bias_quantizer="quantized_bits(4,1)",
      depthwise_quantizer="quantized_bits(4,1)",
      depthwise_activation="quantized_bits(6,2,1)",
      pointwise_quantizer="quantized_bits(4,1)",
      acc_quantizer="quantized_bits(16,7,1)",
      activation="quantized_relu(6,2)",
      use_separable=True,
      name='block3_conv3')([high, low])

  x, _ = QOctaveConv2D(
      32, (3, 3),
      alpha=0.0,
      strides=(2, 2),
      padding='same',
      kernel_initializer=kernel_initializer,
      bias_initializer='zeros',
      bias_quantizer="quantized_bits(4,1)",
      depthwise_quantizer="quantized_bits(4,1)",
      depthwise_activation="quantized_bits(6,2,1)",
      pointwise_quantizer="quantized_bits(4,1)",
      acc_quantizer="quantized_bits(16,7,1)",
      activation="quantized_relu(6,2)",
      use_separable=True,
      name='block3_conv_down')([high, low])

  # Upsample
  x = UpSampling2D(size=(2, 2), data_format="channels_last")(x)

  x = QConv2D(
      2, (2, 2),
      strides=(1, 1),
      kernel_initializer=kernel_initializer,
      bias_initializer="ones",
      kernel_quantizer=quantized_bits(4, 0, 1),
      bias_quantizer=quantized_bits(4, 0, 1),
      padding="same",
      name="conv_up")(
          x)

  x = Activation("softmax", name="softmax")(x)
  output = x

  model = Model(x_in, output, name='qoctave_network')
  return model


# Custom focal-style cross-entropy loss.
def customLoss(y_true, y_pred):
  log1 = 1.5 * y_true * K.log(y_pred + 1e-9) * K.pow(1 - y_pred, 2)
  log0 = 0.5 * (1 - y_true) * K.log((1 - y_pred) + 1e-9) * K.pow(y_pred, 2)
  return -K.sum(K.mean(log0 + log1, axis=0))

if __name__ == '__main__':
  model = create_model()
  model.compile(optimizer="Adam", loss=customLoss, metrics=['acc'])
  model.summary(line_length=100)
  print_qstats(model)


================================================
FILE: examples/example_ternary.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import  # Not necessary in a Python 3-only module
from __future__ import division  # Not necessary in a Python 3-only module
from __future__ import print_function  # Not necessary in a Python 3-only module

from absl import app
from absl import flags
import matplotlib
import numpy as np

matplotlib.use('TkAgg')
import matplotlib.pyplot as plt


FLAGS = flags.FLAGS


def _stochastic_rounding(x, precision, resolution, delta):
  """Stochastic rounding for ternary.

  Args:
    x: a numpy array of values to be rounded.
    precision: A float. Width of the band around the threshold where
      stochastic rounding applies: [delta - precision, delta + precision].
    resolution: controls the quantization resolution.
    delta: the discontinuity point (a positive number).

  Returns:
    An array with stochastically rounded numbers.
  """
  delta_left = delta - precision
  delta_right = delta + precision
  scale = 1 / resolution
  scale_delta_left = delta_left * scale
  scale_delta_right = delta_right * scale
  scale_2_delta = scale_delta_right - scale_delta_left
  scale_x = x * scale
  fraction = scale_x - scale_delta_left
  # print(precision, scale, x[0], np.floor(scale_x[0]), scale_x[0], fraction[0])

  # we use uniform distribution
  random_selector = np.random.uniform(0, 1, size=x.shape) * scale_2_delta

  # print(precision, scale, x[0], delta_left[0], delta_right[0])
  # print('x', scale_x[0], fraction[0], random_selector[0], scale_2_delta[0])
  # rounddown = fraction < random_selector
  result = np.where(fraction < random_selector,
                    scale_delta_left / scale,
                    scale_delta_right / scale)
  return result


def _ternary(x, sto=False):
  m = np.amax(np.abs(x), keepdims=True)
  scale = 2 * m / 3.0
  thres = scale / 2.0

  if sto:
    sign_bit = np.sign(x)
    x = np.abs(x)
    x = (
        sign_bit * scale * _stochastic_rounding(
            x / scale,
            # precision and resolution are both in normalized units.
            precision=0.3, resolution=0.01,
            delta=thres / scale))
  return np.where(np.abs(x) < thres, np.zeros_like(x), np.sign(x))


def main(argv):
  if len(argv) > 1:
    raise app.UsageError('Too many command-line arguments.')

  # x = np.arange(-3.0, 3.0, 0.01)
  # x = np.random.uniform(-0.01, 0.01, size=1000)
  x = np.random.uniform(-10.0, 10.0, size=1000)
  # x = np.random.uniform(-1, 1, size=1000)
  x = np.sort(x)
  tr = np.zeros_like(x)
  t = np.zeros_like(x)
  iter_count = 500
  for _ in range(iter_count):
    y = _ternary(x)
    yr = _ternary(x, sto=True)
    t = t + y
    tr = tr + yr

  plt.plot(x, t/iter_count)
  plt.plot(x, tr/iter_count)
  plt.ylabel('mean (%s samples)' % iter_count)
  plt.show()


if __name__ == '__main__':
  app.run(main)
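The averaging loop in `main()` exists to show that stochastic rounding is unbiased: over many draws, the mean of the rounded values tracks the input. A simplified one-bit version of the same check with a fixed seed (this is not the `_stochastic_rounding` above, which operates in a band around the ternary threshold):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round_mean(x, n_samples=20000):
    # Round x in [0, 1] up with probability x, down otherwise;
    # the empirical mean of the results then approximates x itself.
    u = rng.uniform(0.0, 1.0, size=n_samples)
    return np.where(u < x, 1.0, 0.0).mean()

m = stochastic_round_mean(0.3)
print(abs(m - 0.3) < 0.02)  # True
```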


================================================
FILE: experimental/lo/__init__.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Exports logic optimization module."""
from .utils import *  # pylint: disable=wildcard-import
from .receptive import model_to_receptive_field
from .conv2d import optimize_conv2d_logic
from .dense import optimize_dense_logic
from .optimizer import run_rf_optimizer
from .optimizer import run_abc_optimizer
from .optimizer import mp_rf_optimizer_func
from .table import load
from .compress import Compressor
from .generate_rf_code import *
# __version__ = "0.5.0"


================================================
FILE: experimental/lo/compress.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Implements faster version of set on multiple strings."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function


class Compressor:
  """Implements a hierarchical set class with better performance than a set."""

  def __init__(self, hash_only_input=False):
    self.n_dict = {}
    self.hash_only_input = hash_only_input

  def add_entry(self, table_in, table_out=""):
    """Adds entry (table_in, table_out) to the set."""
    line = (table_in, table_out)

    if self.hash_only_input:
      h_line = hash(table_in)
    else:
      h_line = hash(line)

    if self.n_dict.get(h_line, None):
      self.n_dict[h_line] = self.n_dict[h_line].union([line])
    else:
      self.n_dict[h_line] = set([line])

  def has_entry(self, table_in, table_out=""):
    """Checks if table_in is already stored in the set."""

    line = (table_in, table_out)

    if self.hash_only_input:
      h_line = hash(table_in)
    else:
      h_line = hash(line)

    if not self.n_dict.get(h_line, None):
      return None

    set_h_line = self.n_dict[h_line]

    for (ti, to) in set_h_line:
      if table_in == ti:
        return to

    return None

  def __call__(self):
    for key in self.n_dict:
      for line in self.n_dict[key]:
        yield line


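For reference, `Compressor` behaves like a dict-of-sets keyed by hash: `add_entry` stores an (input, output) pair and `has_entry` returns the stored output for a matching input, or `None`. A minimal standalone sketch of that behavior (the class compacted here, for illustration only, so the snippet runs on its own):

```python
# Compact re-statement of the Compressor class above, for illustration only.
class Compressor:
  def __init__(self, hash_only_input=False):
    self.n_dict = {}                       # hash -> set of (input, output)
    self.hash_only_input = hash_only_input

  def add_entry(self, table_in, table_out=""):
    h = hash(table_in) if self.hash_only_input else hash((table_in, table_out))
    self.n_dict.setdefault(h, set()).add((table_in, table_out))

  def has_entry(self, table_in, table_out=""):
    h = hash(table_in) if self.hash_only_input else hash((table_in, table_out))
    for ti, to in self.n_dict.get(h, ()):
      if ti == table_in:
        return to                          # stored output for this input
    return None                            # input never seen

c = Compressor(hash_only_input=True)
c.add_entry("0101", "1")
print(c.has_entry("0101"))  # 1
print(c.has_entry("1111"))  # None
```

One caveat worth noting: when an entry is added with the default empty `table_out`, `has_entry` returns `""`, which is falsy, so callers that test its result for truthiness (rather than `is not None`) will treat a stored entry as absent.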

================================================
FILE: experimental/lo/conv2d.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Implements convolutional (?, h, w, c) facing input layer optimization."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import multiprocessing as mp
import os
import shutil

from .compress import Compressor
import numpy as np
import six
from tensorflow.keras.models import Model
from .utils import get_padding_value

DEBUG = int(os.getenv("DEBUG", 0))

OG_IS_SYMBOLIC = 0


def parallel_index_table(
    p, ni, size, idx_height, idx_width, i_dict, o_dict,
    kernel, strides, padding, generate_pla):
  """Processes the table in parallel and use espresso to optimize it."""

  print("... indexing table from {} to {} ({} => {})".format(
      ni, ni+size, p[0].shape, p[1].shape))

  table_ins = []
  table_ous = []

  table_set = Compressor(hash_only_input=True)

  if DEBUG:
    table_set_line = {}

  for n in range(size):

    # we need to traverse the outputs to compute the input coordinates

    for ho in idx_height:
      min_hi = strides[0]*ho - 2*padding[0]
      max_hi = strides[0]*ho - 2*padding[0] + kernel[0]

      if min_hi < 0 or max_hi > p[0].shape[0]:
        continue

      for wo in idx_width:
        min_wi = strides[1]*wo - 2*padding[1]
        max_wi = strides[1]*wo - 2*padding[1] + kernel[1]

        if min_wi < 0 or max_wi > p[0].shape[1]:
          continue

        i_values = p[0][n, min_hi:max_hi, min_wi:max_wi].flatten()

        # o_values has dimension (1, 1, C_O)

        o_values = p[1][n, ho, wo]

        # if we generate a pla entry, we care about a list of
        # bits. Otherwise, we care about a list of floating point
        # values.

        table_i = "".join([i_dict[v] for v in i_values])
        table_o = "".join([o_dict[v] for v in o_values])

        if generate_pla:
          table_s = "".join([str(v) for v in table_i])
          bit_str = table_s
        else:
          table_s = ",".join([str(v) for v in table_i])
          table_i = table_s
          bit_str = "".join(i_dict[v] for v in i_values)
        is_table_zero = bit_str == "0" * len(bit_str)

        if table_set.has_entry(table_s) is not None and not is_table_zero:

          # if table is already stored, we do not store it again.
          # from time to time, we may want to check if we have found
          # diverging output values.

          if DEBUG:

            (table_o_old, (old_n, old_ho, old_wo)) = table_set_line[table_s]

            if table_o != table_o_old:
              print(
                  "contradicting outputs n={} old_n={} out_p={} out={}".format(
                      (n, ho, wo), (old_n, old_ho, old_wo), table_o_old,
                      table_o))
              print(" I:", table_s)
              print(" I:", i_values)
              print("<<<", table_o_old)
              print(">>>", table_o)
              return (None, None)

          continue

        # these are unique table entries

        table_ins.append(table_i)
        table_ous.append(table_o)

        # we store this information in order to be able to debug
        # and discard information.

        table_set.add_entry(table_s)

        if DEBUG:
          table_set_line[table_s] = (table_o, (n, ho, wo))

  print("... indexing table from {} to {} completed".format(ni, ni+size))

  return (table_ins, table_ous)
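The window arithmetic above maps each output coordinate back to its input patch and skips windows that fall outside the input. It can be checked in isolation; a sketch with hypothetical sizes (stride 2, 3x3 kernel, padding value 0, input height 8):

```python
# Sketch of the receptive-field indexing used in parallel_index_table above,
# with hypothetical sizes: stride 2, 3x3 kernel, padding value 0, input height 8.
strides, kernel, padding = (2, 2), (3, 3), (0, 0)
input_h = 8

valid = []
for ho in range(4):  # candidate output rows
  min_hi = strides[0] * ho - 2 * padding[0]
  max_hi = strides[0] * ho - 2 * padding[0] + kernel[0]
  # windows that fall outside the input are skipped, as in the loop above
  if min_hi < 0 or max_hi > input_h:
    continue
  valid.append((ho, min_hi, max_hi))

print(valid)  # [(0, 0, 3), (1, 2, 5), (2, 4, 7)]
```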


def parallel_compress_output_table(
    filename, header, table_ins, table_ous, output_group, generate_pla,
    n_bits_og, o, o_bits):
  """Processes in parallel compression of table and writes it to a disk."""

  f = open(filename, "w")

  f.write("".join(header))

  c = Compressor()

  for n in range(len(table_ins)):
    for og in range(output_group):

      if output_group > 1:
        if generate_pla:
          if OG_IS_SYMBOLIC:
            og_l = ["0"] * n_bits_og
            og_l[n_bits_og - 1 - og] = "1"
            og_b = "".join(og_l)
            table_i_suffix = " " + og_b
          else:
            og_b = bin(og)[2:]
            table_i_suffix = " " + "0" * (n_bits_og - len(og_b)) + og_b
        else:
          table_i_suffix = "," + str(og)
      else:
        table_i_suffix = ""
      table_i = table_ins[n] + table_i_suffix
      table_o = table_ous[n][(o+og)*o_bits:(o+og+1)*o_bits]

      if generate_pla:
        c.add_entry(table_i + " " + table_o)
      else:
        c.add_entry(table_i + "," + str(table_o[0]))

  for line in c():
    f.write("{}\n".format(line[0]))

  if generate_pla:
    f.write(".e\n")

  f.close()

  print("... file {} generated".format(filename))


def optimize_conv2d_logic(
    model, i_name, o_name, x_train,
    i_dict=None, o_dict=None,
    kernel=None, strides=None, padding=None,
    output_group=1, samples=2000,
    randomize=None, generate_pla=True, prefix=""):
  """Generates table for logic synthesis for conv2d or conv2d-like shape.

  Generates table in either espresso format or csv format to be optimized
  for logic synthesis. The parameters kernel, strides and padding usually
  do not require any values, unless we want to embed maxpooling layer or
  multiple convolutional layers between i_name and o_name. In that case,
  we require the user to compute the proper kernel, strides, and padding
  that will correspond to the combined layer, as Keras and tensorflow do not
  provide a way to compute the receptive field between two layers.

  Arguments:
    model: Keras model
    i_name: name of convolutional layer (input to this layer must be
      quantized).
    o_name: name of quantized output layer.
    x_train: training set to be used to dump table.
    i_dict: dictionary of floating point values to encoding for inputs.
    o_dict: dictionary of floating point values to encoding for outputs.
    kernel: kernel size, to be specified if we want to override convolution
      kernel.
    strides: strides, to be specified if we want to override first convolution
      strides.
    padding: padding, to be specified if we want to override first convolution
      padding.
    output_group: by default, we compute one PE per channel output. The user
      can override that by specifying how many output channels should be
      bundled into the same PE.
    samples: how many images from x_train should be sampled when generating the
      tables.
    randomize: if specified, it should be the number of coordinates within the
      same image we will use to derive the convolution table.
    generate_pla: if true, we generate table in pla format. Otherwise, we
      generate a csv file.
    prefix: prefix name to create directory.

  Returns:
    list of files generated.
  """

  # if no i_dict or no o_dict, we do not know how to encode, so we generate
  # csv file.

  if not i_dict or not o_dict:
    generate_pla = False

  # extract layer from i_name and o_name

  i_layer = model.get_layer(i_name)
  o_layer = model.get_layer(o_name)

  # if kernel is not specified, use the kernel size from i_layer

  if not kernel:
    kernel = i_layer.kernel_size

  # if strides is not specified, use the strides from i_layer

  if not strides:
    strides = i_layer.strides

  # if padding is not specified, use the padding from i_layer

  if not padding:
    padding = i_layer.padding

  # for conv2d, we want a list for kernel, strides and padding

  if not isinstance(kernel, list) and not isinstance(kernel, tuple):
    kernel = [kernel, kernel]

  if not isinstance(strides, list) and not isinstance(strides, tuple):
    strides = [strides, strides]

  if not isinstance(padding, list) and not isinstance(padding, tuple):
    padding = [padding, padding]

  # compute the padding value

  padding[0] = get_padding_value(padding[0], kernel[0])
  padding[1] = get_padding_value(padding[1], kernel[1])

  # resample inputs

  skip = min(2000, samples)

  indexes = np.array(range(x_train.shape[0]))
  np.random.shuffle(indexes)
  x_train = x_train[indexes[:samples]]

  # we want to create a smaller model that, from the inputs, generates the
  # i_layer.output + o_layer.output tensors, so that we can predict
  # their values.

  outputs = []

  x = i_layer.input
  y = o_layer.output

  if not isinstance(x, list):
    x = [x]

  outputs = x + [y]

  mo = Model(inputs=model.inputs, outputs=outputs)
  p = mo.predict(x_train)

  # in csv mode, each entry has "1" value, for PLA,
  # we encode the floating point into multiple bits.

  if not generate_pla:
    i_bits = 1
    # i_dict = {v:v for v in i_dict.keys()}
  else:
    i_bits = len(six.next(six.itervalues(i_dict)))

  if not generate_pla:
    o_bits = 1
    # o_dict = {v:v for v in o_dict.keys()}
  else:
    o_bits = len(six.next(six.itervalues(o_dict)))

  # if randomize is specified, we will sample sqrt(randomize)
  # coordinates from each image, as conv2d applies the filter everywhere
  # in the image. Because the same image may contain a lot of
  # redundant information, we may want to restrict the number of
  # samples.

  if randomize:
    idx_height = np.random.choice(
        p[-1].shape[1],
        int(np.round(np.sqrt(randomize))))

    idx_width = np.random.choice(
        p[-1].shape[2],
        int(np.round(np.sqrt(randomize))))
  else:
    idx_height = range(p[-1].shape[1])
    idx_width = range(p[-1].shape[2])

  # this is just to inspect that the inputs and outputs are really quantized.

  print("inputs:")
  for i in range(len(x)):
    print(i, np.min(p[i]), np.max(p[i]))
  print("outputs:")
  print(np.min(p[-1]), np.max(p[-1]))

  # i_size and o_size are the channel sizes of the inputs and outputs

  o_size = y.shape[-1]
  i_size = p[0].shape[-1]

  if generate_pla:
    suffix = "pla"
  else:
    suffix = "csv"

  prefix = prefix + "/" if prefix else ""

  # let's try to remove the directory and create a new one

  try:
    shutil.rmtree(prefix + i_layer.name + "." + suffix)
  except OSError:
    pass

  try:
    os.makedirs(prefix + i_layer.name + "." + suffix)
  except OSError:
    pass

  table_ins = list()
  table_ous = list()

  print("...indexing inputs")

  # for each image in sampled x_train

  # on Intel processors, mp.cpu_count() returns the number of logical CPUs
  # (hardware threads), not physical cores

  number_of_processes = mp.cpu_count() // 2
  pool = mp.Pool(number_of_processes)

  results = []

  for n in range(0, x_train.shape[0], skip):

    res = pool.apply_async(
        parallel_index_table,
        args=((p[0][n:n+skip], p[1][n:n+skip]), n, skip, idx_height,
              idx_width, i_dict, o_dict, kernel, strides, padding,
              generate_pla))
    results.append(res)

  pool.close()
  pool.join()

  all_pools = [res.get(timeout=1) for res in results]

  table_ins = sum([ap[0] for ap in all_pools], [])
  table_ous = sum([ap[1] for ap in all_pools], [])

  # input and output size

  ni = len(table_ins[0])
  no = len(table_ous[0])

  print("... generating tables {} outputs, {} entries".format(
      o_size, len(table_ins)))

  # this step should be very fast

  files = []

  if OG_IS_SYMBOLIC:
    if output_group > 1:
      n_bits_og = output_group
    else:
      n_bits_og = 1
  else:
    if output_group == 2:
      n_bits_og = 1
    else:
      n_bits_og = int(np.ceil(np.log2(output_group)))

  # sometimes Linux gets very grumpy when too many files are open.
  # let's limit it to 20.

  number_of_processes = min(20, mp.cpu_count() // 2)
  pool = mp.Pool(number_of_processes)

  for o in range(0, o_size, output_group):

    filename = "{}{}.{}/{}_{}.raw.{}".format(
        prefix, i_name, suffix, i_name, o, suffix)

    files.append(filename)

    header = []

    if generate_pla:
      header.append(".i {}\n".format(ni + n_bits_og))
      header.append(".o {}\n".format(no // o_size))
      header.append(".type fr\n")

      if OG_IS_SYMBOLIC and output_group > 1:
        header.append(".mv {} {} {} {}\n".format(
            3, ni, n_bits_og, no // o_size))

      # let's generate some labels

      header.append(".ob " + " ".join([
          "o_" + str(o) + "_" + str(o_bits - 1 - v)
          for v in range(o_bits)]) + "\n")

      i_names = []

      # name is i_<channel>_<kernel_row>_<kernel_col>_bit

      assert ni == (i_size * kernel[0] * kernel[1] * i_bits)

      for channel in range(i_size):
        for row in range(kernel[0]):
          for col in range(kernel[1]):
            for bit in range(i_bits):
              i_names.append("i_{}_{}_{}_{}".format(
                  channel, row, col, (i_bits - 1 - bit)))

      # if we are grouping multiple channels, these will be the inputs

      for c in range(n_bits_og):
        i_names.append("og_{}".format(n_bits_og - 1 - c))

      header.append(".ilb " + " ".join(i_names) + "\n")

    pool.apply_async(
        parallel_compress_output_table,
        args=((filename, header, table_ins, table_ous, output_group,
               generate_pla, n_bits_og, o, o_bits)))

  pool.close()
  pool.join()

  return files
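The `.i`/`.o`/`.type` lines assembled above follow the Berkeley PLA (espresso) header format. A small sketch with hypothetical sizes shows what gets emitted (here `.o` is the per-group output width, which the code computes as `no // o_size`):

```python
import numpy as np

# Hypothetical sizes: 2 input channels, 1x1 kernel, 2 bits per input value,
# outputs grouped in pairs (output_group=2), 2 bits per output value.
i_size, kernel, i_bits = 2, (1, 1), 2
output_group, o_bits = 2, 2

ni = i_size * kernel[0] * kernel[1] * i_bits      # total encoded input bits
n_bits_og = 1 if output_group == 2 else int(np.ceil(np.log2(output_group)))

header = [".i {}".format(ni + n_bits_og),          # inputs incl. group-select bits
          ".o {}".format(o_bits),                  # bits of one grouped output
          ".type fr"]
print("\n".join(header))
```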


================================================
FILE: experimental/lo/dense.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Implements dense (?, features) fancing input layer optimization."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import multiprocessing as mp
import os
import shutil

from .compress import Compressor
import numpy as np
import six
from tensorflow.keras.models import Model

DEBUG = int(os.getenv("DEBUG", 0))

OG_IS_SYMBOLIC = 0


def parallel_index_table(
    p, ni, size, i_dict, o_dict, generate_pla):
  """Processes the table in parallel and use espresso to optimize it."""

  print("... indexing table from {} to {} ({} => {})".format(
      ni, ni+size, p[0].shape, p[1].shape))

  table_ins = []
  table_ous = []

  table_set = Compressor(hash_only_input=True)

  if DEBUG:
    table_set_line = {}

  for n in range(size):

    i_values = p[0][n].flatten()
    o_values = p[1][n].flatten()

    # if we generate a pla entry, we care about a list of
    # bits. Otherwise, we care about a list of floating point
    # values.

    table_i = "".join([i_dict[v] for v in i_values])
    table_o = "".join([o_dict[v] for v in o_values])

    if generate_pla:
      table_s = "".join([str(v) for v in table_i])
      bit_str = table_s
    else:
      table_s = ",".join([str(v) for v in table_i])
      table_i = table_s
      bit_str = "".join(str(i_dict[v]) for v in i_values)
    is_table_zero = bit_str == "0" * len(bit_str)

    if table_set.has_entry(table_s) is not None and not is_table_zero:

      # if table is already stored, we do not store it again.
      # from time to time, we may want to check if we have found
      # diverging output values.

      if DEBUG:

        (table_o_old, old_n) = table_set_line[table_s]

        if table_o != table_o_old:
          print("contradicting outputs n={} old_n={} out_p={} out={}".format(
              n, old_n, table_o_old, table_o))
          print(" I:", table_s)
          print(" I:", i_values)
          print("<<<", table_o_old)
          print(">>>", table_o)
          return (None, None)

      continue

    # these are unique table entries

    table_ins.append(table_i)
    table_ous.append(table_o)

    # we store this information in order to be able to debug
    # and discard information.

    table_set.add_entry(table_s)

    if DEBUG:
      table_set_line[table_s] = (table_o, n)

  print("... indexing table from {} to {} completed".format(ni, ni+size))

  return (table_ins, table_ous)


def parallel_compress_output_table(
    filename, header, table_ins, table_ous, output_group, generate_pla,
    n_bits_og, o, o_bits):
  """Processes in parallel compression of table and writes it to a disk."""

  f = open(filename, "w")

  f.write("".join(header))

  c = Compressor()

  for n in range(len(table_ins)):
    for og in range(output_group):

      if output_group > 1:
        if generate_pla:
          if OG_IS_SYMBOLIC:
            og_l = ["0"] * n_bits_og
            og_l[n_bits_og - 1 - og] = "1"
            og_b = "".join(og_l)
            table_i_suffix = " " + og_b
          else:
            og_b = bin(og)[2:]
            table_i_suffix = " " + "0"*(n_bits_og - len(og_b)) + og_b
        else:
          table_i_suffix = "," + str(og)
      else:
        table_i_suffix = ""
      table_i = table_ins[n] + table_i_suffix
      table_o = table_ous[n][(o+og)*o_bits:(o+og+1)*o_bits]

      if generate_pla:
        c.add_entry(table_i + " " + table_o)
      else:
        c.add_entry(table_i + "," + str(table_o[0]))

  for line in c():
    f.write("{}\n".format(line[0]))

  if generate_pla:
    f.write(".e\n")
  f.close()
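When `output_group > 1` and `OG_IS_SYMBOLIC` is off, the group index is appended to each table row as a zero-padded binary string; the suffix construction used above can be checked in isolation:

```python
# Sketch of the non-symbolic output-group suffix built above: the group
# index og becomes a zero-padded binary string of n_bits_og bits.
def og_suffix(og, n_bits_og):
  og_b = bin(og)[2:]
  return "0" * (n_bits_og - len(og_b)) + og_b

print([og_suffix(og, 3) for og in range(4)])  # ['000', '001', '010', '011']
```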


def optimize_dense_logic(
    model, i_name, o_name, x_train, i_dict, o_dict,
    output_group=1, samples=2000,
    generate_pla=True, prefix=""):

  """Generates table for logic synthesis for dense or flattened layer.

  Generates table in either espresso format or csv format to be optimized
  for logic synthesis.

  Arguments:
    model: Keras model
    i_name: name of dense layer (input to this layer must be
      quantized).
    o_name: name of quantized output layer.
    x_train: training set to be used to dump table.
    i_dict: dictionary of floating point values to encoding for inputs.
    o_dict: dictionary of floating point values to encoding for outputs.
    output_group: by default, we compute one PE per channel output. The user
      can override that by specifying how many output channels should be
      bundled into the same PE.
    samples: how many images from x_train should be sampled when generating the
      tables.
    generate_pla: if true, we generate table in pla format. Otherwise, we
      generate a csv file.
    prefix: prefix name to create a directory.
  Returns:
    list of files generated.
  """

  i_layer = model.get_layer(i_name)
  o_layer = model.get_layer(o_name)

  # resample inputs

  skip = min(2000, samples)

  indexes = np.array(range(x_train.shape[0]))
  np.random.shuffle(indexes)

  x_train = x_train[indexes[:samples]]

  outputs = []

  x = i_layer.input
  y = o_layer.output

  if not isinstance(x, list):
    x = [x]

  outputs = x + [y]

  mo = Model(inputs=model.inputs, outputs=outputs)
  p = mo.predict(x_train)

  # in csv mode, each entry has "1" value, for PLA,
  # we encode the floating point into multiple bits.

  if not generate_pla:
    i_bits = 1
    # i_dict = {v:v for v in i_dict.keys()}
  else:
    i_bits = len(six.next(six.itervalues(i_dict)))

  if not generate_pla:
    o_bits = 1
    # o_dict = {v:v for v in o_dict.keys()}
  else:
    o_bits = len(six.next(six.itervalues(o_dict)))

  print("inputs:")
  for i in range(len(x)):
    print(i, np.min(p[i]), np.max(p[i]))
  print("outputs:")
  print(0, np.min(p[-1]), np.max(p[-1]))

  o_size = y.shape[-1]
  i_size = p[0].shape[-1]

  if generate_pla:
    suffix = "pla"
  else:
    suffix = "csv"

  prefix = prefix + "/" if prefix else ""

  # let's try to remove the directory and create a new one

  try:
    shutil.rmtree(prefix + i_layer.name + "." + suffix)
  except OSError:
    pass

  try:
    os.makedirs(prefix + i_layer.name + "." + suffix)
  except OSError:
    pass

  print("...indexing inputs")

  # for each image in sampled x_train

  # on Intel processors, mp.cpu_count() returns the number of logical CPUs
  # (hardware threads), not physical cores

  number_of_processes = mp.cpu_count() // 2
  pool = mp.Pool(number_of_processes)

  results = []

  for n in range(0, x_train.shape[0], skip):

    res = pool.apply_async(
        parallel_index_table,
        args=((p[0][n:n+skip], p[1][n:n+skip]), n, skip, i_dict, o_dict,
              generate_pla))
    results.append(res)

  pool.close()
  pool.join()

  all_pools = [res.get(timeout=1) for res in results]

  table_ins = sum([ap[0] for ap in all_pools], [])
  table_ous = sum([ap[1] for ap in all_pools], [])

  # input and output size

  ni = len(table_ins[0])
  no = len(table_ous[0])

  print("... generating tables {} outputs, {} entries".format(
      o_size, len(table_ins)))

  # this step should be very fast

  files = []

  if OG_IS_SYMBOLIC:
    if output_group > 1:
      n_bits_og = output_group
    else:
      n_bits_og = 1
  else:
    if output_group == 2:
      n_bits_og = 1
    else:
      n_bits_og = int(np.ceil(np.log2(output_group)))

  # sometimes Linux gets very grumpy when too many files are open.
  # let's limit it to 20.

  number_of_processes = min(20, mp.cpu_count() // 2)
  pool = mp.Pool(number_of_processes)

  for o in range(0, o_size, output_group):

    filename = "{}{}.{}/{}_{}.raw.{}".format(
        prefix, i_name, suffix, i_name, o, suffix)

    files.append(filename)

    header = []

    if generate_pla:
      header.append(".i {}\n".format(ni + n_bits_og))
      header.append(".o {}\n".format(no // o_size))
      header.append(".type fr\n")

      if OG_IS_SYMBOLIC and output_group > 1:
        header.append(".mv {} {} {} {}\n".format(
            3, ni, n_bits_og, no // o_size))

      # let's generate some labels

      header.append(".ob " + " ".join([
          "o_" + str(o) + "_" + str(o_bits - 1 - v)
          for v in range(o_bits)]) + "\n")

      i_names = []

      # name is i_<features>_bit

      assert ni == (i_size * i_bits)

      for feature in range(i_size):
        for bit in range(i_bits):
          i_names.append("i_{}_{}".format(
              feature, (i_bits - 1 - bit)))

      # if we are grouping multiple channels, these will be the inputs

      for c in range(n_bits_og):
        i_names.append("og_{}".format(n_bits_og - 1 - c))

      header.append(".ilb " + " ".join(i_names) + "\n")

    pool.apply_async(
        parallel_compress_output_table,
        args=((filename, header, table_ins, table_ous, output_group,
               generate_pla, n_bits_og, o, o_bits)))

  pool.close()
  pool.join()

  return files




================================================
FILE: experimental/lo/generate_rf_code.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Generates expressions for random trees."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os

import numpy as np

DEBUG = int(os.environ.get("DEBUG", 0))
PRINT_DEBUG = int(os.environ.get("PRINT_DEBUG", 0))


def gen_random_tree_regressor(
    tree, code, bits, o_bits, o_decimal_digits, o_is_neg, bdd, offset, is_cc=True):
  """Generates HLS friendly C++ code for random tree regressor.

  Generates HLS friendly C++ code for Catapult.

  Arguments:
    tree: decision tree regressor from SkLearn.
    code: list of code lines to be append to.
    bits: list containing number of bits for each of the inputs.
    o_bits: number of bits for output.
    o_decimal_digits: number of decimal digits (to the right of the decimal
        point) of o_bits for approximation of the regressor in
        RandomTreeRegressor.
    o_is_neg: True or 1 if output can be negative.
    bdd: we cache (i, v, n1, n0) entries so that if they appear again, we
        reuse previously computed nodes.
    offset: each variable created in this function call is incremented by
        offset.
    is_cc: if True, generates C++, else Verilog.

  Returns:
    Tuple containing last variable name and current number of variables.

  """

  # extract information from tree

  n_nodes = tree.node_count
  children_left = tree.children_left
  children_right = tree.children_right
  feature = tree.feature
  threshold = tree.threshold
  values = np.copy(tree.value)

  o_suffix = ""
  if DEBUG:
    o_type = "float"
  elif is_cc:
    o_type = "ac_fixed<{},{},{}>".format(
        o_bits + o_decimal_digits,
        o_bits + o_is_neg,
        o_is_neg)
  else:
    o_sign = " signed" if o_is_neg else ""
    if o_bits + o_decimal_digits > 1:
      o_suffix = "[{}:0]".format(o_bits + o_decimal_digits - 1)
    o_type = "wire" + o_sign + " " + o_suffix


  def round_digits(x, decimal_digits):
    """Rounds to decimal_digits to the right of the decimal point."""

    if DEBUG:
      return x
    factor = (1 << decimal_digits) * 1.0
    x = x * factor
    return np.round(x) / factor

  is_leaves = np.zeros(shape=n_nodes, dtype=bool)

  stack = [(0, -1)]

  while stack:
    node_id, parent_depth = stack.pop()

    if children_left[node_id] != children_right[node_id]:
      stack.append((children_left[node_id], parent_depth+1))
      stack.append((children_right[node_id], parent_depth+1))
    else:
      is_leaves[node_id] = True
      values[node_id] = round_digits(tree.value[node_id], o_decimal_digits)
      if (
          values[node_id].flatten()[0] != tree.value[node_id].flatten()[0] and
          DEBUG
      ):
        print(node_id, values[node_id].flatten()[0],
              tree.value[node_id].flatten()[0])

  v_name = {}
  n_vars = offset

  bdd = {}

  def round_value_to_int(x):
    v = hex(int(np.round(x * (1 << (o_decimal_digits)))))
    if is_cc:
      if DEBUG:
        return str(x)
      else:
        return x
      #v + " /* {} */".format(x)
    else:
      return (
          str(o_bits + o_decimal_digits) + "'h" + v[2:] + " /* {} */".format(x)
      )

  if is_leaves[0]:
    v_name[0] = round_value_to_int(values[0].flatten()[0])
    code.append("  {} n_{} = {};".format(o_type, n_vars, v_name[0]))
    last_var = "n_{}".format(n_vars)
    n_vars += 1
  else:
    for i in range(n_nodes-1, -1, -1):
      if is_leaves[i]:
        continue

      if v_name.get(children_left[i], None) is not None:
        n1 = v_name[children_left[i]]
      elif is_leaves[children_left[i]]:
        n1 = round_value_to_int(values[children_left[i]].flatten()[0])
        v_name[children_left[i]] = n1
      else:
        n1 = "n_" + str(n_vars)
        n_vars += 1
        v_name[children_left[i]] = n1
        raise ValueError((children_left[i], n1, is_leaves[children_left[i]]))

      if v_name.get(children_right[i], None) is not None:
        n0 = v_name[children_right[i]]
      elif is_leaves[children_right[i]]:
        n0 = round_value_to_int(values[children_right[i]].flatten()[0])
        v_name[children_right[i]] = n0
      else:
        n0 = "n_" + str(n_vars)
        n_vars += 1
        v_name[children_right[i]] = n0
        raise ValueError((children_right[i], n0, is_leaves[children_right[i]]))

      if v_name.get(i, None) is not None:
        n = v_name[i]
        last_var = v_name[i]
      elif bdd.get((feature[i], threshold[i], n1, n0), None) is not None:
        n = bdd[(feature[i], threshold[i], n1, n0)]
        v_name[i] = n
        last_var = n
      elif n1 == n0:
        # store intermediate results so that we can build a dag, not a tree
        bdd[(feature[i], threshold[i], n1, n0)] = n1
        v_name[i] = n1
        last_var = n1
      else:
        n = "n_" + str(n_vars)
        n_vars += 1
        v_name[i] = n
        # store intermediate results so that we can build a dag, not a tree
        bdd[(feature[i], threshold[i], n1, n0)] = n
        t = int(threshold[i])
        if bits[feature[i]] == 1:
          if t == 0:
            n1, n0 = n0, n1
          code.append(
              "  {} {} = (i_{}) ? {} : {}; // x_{} {}".format(
                  o_type, v_name[i], feature[i], n1, n0, i,
                  threshold[i]))
        else:
          code.append(
              "  {} {} = (i_{} <= {}) ? {} : {}; // x_{} {}".format(
                  o_type, v_name[i], feature[i], t, n1, n0, i,
                  threshold[i]))
        last_var = v_name[i]

  return (last_var, n_vars)
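The `round_digits` helper above quantizes leaf values to a fixed number of fractional bits by scaling, rounding, and rescaling; copied standalone (ignoring the DEBUG short-circuit):

```python
import numpy as np

# Standalone copy of the round_digits helper above: quantize x to
# decimal_digits fractional bits, i.e. multiples of 1 / 2**decimal_digits.
def round_digits(x, decimal_digits):
  factor = (1 << decimal_digits) * 1.0
  return np.round(x * factor) / factor

print(round_digits(0.30, 3))  # 0.25   (nearest multiple of 1/8)
print(round_digits(0.30, 4))  # 0.3125 (nearest multiple of 1/16)
```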


def entry_to_hex(entry, max_value, size, is_cc):
  """Converts class instance to hexa number."""

  e_vector = [np.power(max_value+1, i) for i in range(len(entry)-1, -1, -1)]
  entry = np.array(entry)
  v = hex(np.sum(entry * e_vector))

  if is_cc:
    return v
  else:
    return str(size) + "'h" + v[2:] + " /* {} */".format(entry)
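`entry_to_hex` packs a tuple of per-class counts into one integer in base `max_value + 1`, most significant class first; the packing can be sketched standalone (the `pack_entry` name and the values below are hypothetical):

```python
import numpy as np

# Sketch of the packing inside entry_to_hex above: per-class counts become
# digits of one base-(max_value + 1) integer, most significant class first.
def pack_entry(entry, max_value):
  e_vector = [np.power(max_value + 1, i) for i in range(len(entry) - 1, -1, -1)]
  return hex(int(np.sum(np.array(entry) * e_vector)))

print(pack_entry((1, 2, 3), 3))  # 0x1b  (1*16 + 2*4 + 3 = 27)
```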


def gen_random_tree_classifier(
    tree, code, bits, bdd, max_value, values_rom, offset, is_cc=True):
  """Generates C++ or Verilog friendly code for random tree classifier.

  Generates HLS Catapult friendly code or RTL in Verilog for random tree
  classifier from SkLearn.

  Arguments:
    tree: RandomTreeClassifier from sklearn.
    code: list of strings containing code generated.
    bits: list containing number of bits for each of the inputs.
    bdd: we cache (i, v, n1, n0) entries so that if they appear again, we
        reuse previously computed nodes.
    max_value: the random tree classifier returns a vector of per-class
        instance counts found in the terminal leaf node. This variable
        specifies a clipping factor for each class type so that we have
        a bounded problem to synthesize.
    values_rom: to save space in classifier, we store class values in
        values_rom.
    offset: each variable created in this function call is incremented by
        offset.
    is_cc: if True, generates C++ code; otherwise, Verilog.

  Returns:
    Tuple containing last variable name and current number of variables.
  """

  # extract information from tree

  n_nodes = tree.node_count
  children_left = tree.children_left
  children_right = tree.children_right
  feature = tree.feature
  threshold = tree.threshold

  values = {}

  is_leaves = np.zeros(shape=n_nodes, dtype=bool)

  stack = [(0, -1)]

  rom_l = []

  use_rom = max_value >= 7

  n_classes = len(tree.value[0].flatten())

  max_bits = int(np.ceil(np.log2(max_value + 1)))

  while stack:
    node_id, parent_depth = stack.pop()

    if children_left[node_id] != children_right[node_id]:
      stack.append((children_left[node_id], parent_depth+1))
      stack.append((children_right[node_id], parent_depth+1))
    else:
      # is leaf node
      is_leaves[node_id] = True
      # get tree node output
      p_input_tuple = tree.value[node_id].flatten().astype(np.int32)
      max_input_value = np.max(p_input_tuple)
      min_input_value = np.min(p_input_tuple)
      # if max_value == 1, only keep top ones
      if max_value == 1:
        input_tuple = (p_input_tuple == max_input_value).astype(np.int32)
        tree.value[node_id] = (tree.value[node_id] == max_input_value).astype(
            tree.value[node_id].dtype)
      else:
        # The sklearn classifier computes a probability for each entry
        # (count / leaf total) instead of summing them all; do the same
        # before quantizing to max_value levels.
        max_input_value = np.sum(p_input_tuple)
        min_input_value = 0
        # Just update tree.value to number so that we can compare accuracy of
        # quantization later.
        tree.value[node_id] = np.round(
            max_value *
            (tree.value[node_id] - min_input_value) /
            (max_input_value - min_input_value))
        input_tuple = tree.value[node_id].flatten()
      input_tuple = tuple(list(input_tuple.astype(np.int32)))

      # stores values in rom - we will use rom to store values if use_rom is
      # true.
      if values_rom.get(input_tuple, None) is None:
        values_rom[input_tuple] = len(values_rom)
        rom_l.append(input_tuple)
        if DEBUG:
          print(values_rom[input_tuple], input_tuple)

      if use_rom:
        values[node_id] = values_rom[input_tuple]
      else:
        values[node_id] = entry_to_hex(
            input_tuple, max_value, max_bits * n_classes, is_cc)

  # t_bits: entry type
  # l_bits: table line type
  if use_rom:
    t_bits = int(np.ceil(np.log2(len(values_rom))))
    l_bits = max_bits * n_classes
  else:
    t_bits = max_bits * n_classes

  # we only store the index here, as we read from a rom
  if is_cc:
    if DEBUG:
      t_type = "int"
    else:
      t_type = "ac_int<{},false>".format(t_bits)
  else:
    t_type = "wire [{}:0]".format(t_bits-1)

  v_name = {}
  n_vars = offset

  # Reset the node cache: variable names below are local to this tree.
  bdd = {}

  if is_leaves[0]:
    v_name[0] = t_type + "(" + str(values[0]) + ")"
    code.append("  {} n_{} = {};".format(
        t_type, n_vars, values[0]))
    last_var = "n_{}".format(n_vars)
    n_vars += 1
  else:
    for i in range(n_nodes-1, -1, -1):
      if is_leaves[i]:
        continue

      if v_name.get(children_left[i], None) is not None:
        n1 = v_name[children_left[i]]
      elif is_leaves[children_left[i]]:
        if is_cc:
          n1 = t_type + "(" + str(values[children_left[i]]) + ")"
        else:
          n1 = str(values[children_left[i]])
        v_name[children_left[i]] = n1
      else:
        n1 = "n_" + str(n_vars)
        n_vars += 1
        v_name[children_left[i]] = n1
        raise ValueError((children_left[i], n1, is_leaves[children_left[i]]))

      if v_name.get(children_right[i], None) is not None:
        n0 = v_name[children_right[i]]
      elif is_leaves[children_right[i]]:
        if is_cc:
          n0 = t_type + "(" + str(values[children_right[i]]) + ")"
        else:
          n0 = str(values[children_right[i]])
        v_name[children_right[i]] = n0
      else:
        n0 = "n_" + str(n_vars)
        n_vars += 1
        v_name[children_right[i]] = n0
        raise ValueError((children_right[i], n0, is_leaves[children_right[i]]))

      if v_name.get(i, None) is not None:
        n = v_name[i]
        last_var = v_name[i]
      elif bdd.get((feature[i], threshold[i], n1, n0), None) is not None:
        n = bdd[(feature[i], threshold[i], n1, n0)]
        v_name[i] = n
        last_var = n
      elif n1 == n0:
        # store intermediate results so that we can build a dag, not a tree
        bdd[(feature[i], threshold[i], n1, n0)] = n1
        v_name[i] = n1
        last_var = n1
      else:
        n = "n_" + str(n_vars)
        n_vars += 1
        v_name[i] = n
        # store intermediate results so that we can build a dag, not a tree
        bdd[(feature[i], threshold[i], n1, n0)] = n
        t = int(threshold[i])
        if bits[feature[i]] == 1:
          if t == 0:
            n1, n0 = n0, n1
          code.append(
              "  {} {} = (i_{}) ? {} : {}; // x_{} {}".format(
                  t_type, v_name[i], feature[i], n1, n0, i,
                  threshold[i]))
        else:
          code.append(
              "  {} {} = (i_{} <= {}) ? {} : {}; // x_{} {}".format(
                  t_type, v_name[i], feature[i], t, n1, n0, i,
                  threshold[i]))
        last_var = v_name[i]

  if use_rom:
    if is_cc:
      if DEBUG:
        l_type = "int"
      else:
        l_type = "ac_int<{},false>".format(l_bits)

      code.append("  {} {}_rom[{}]".format(l_type, last_var, len(values_rom)) +
                  " {")
      for i in range(len(values_rom)):
        code_s = "    " + entry_to_hex(rom_l[i], max_value, l_bits, is_cc)
        if i < len(values_rom)-1:
          code_s = code_s + ","
        code.append(code_s)
      code.append("  };")

    else:
      l_type = "wire [{}:0]".format(l_bits - 1)
      code.append("  function [{}:0] {}_rom;".format(l_bits-1, last_var))
      code.append("  input [{}:0] address;".format(t_bits-1))
      code.append("  begin")
      code.append("    case (address)")
      for i in range(len(values_rom)):
        code.append("    {}'d{}: {}_rom = {};".format(
            l_bits, i, last_var, entry_to_hex(rom_l[i], max_value, l_bits, is_cc)))
      code.append("    default: {}_rom = 0;".format(last_var))
      code.append("    endcase")
      code.append("  end")
      code.append("  endfunction")

    code.append("  {} v_{} = {}_rom[{}];".format(
        l_type, last_var, last_var, last_var))

    last_var = "v_" + last_var

  return last_var, n_vars
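The leaf handling above rescales each leaf's class-count vector into the clipped range [0, max_value] before packing it into the ROM. A standalone sketch of just that quantization step (the helper name is hypothetical, not part of this module):

```python
import numpy as np

def quantize_leaf(counts, max_value):
    # Mirrors the leaf re-quantization in gen_random_tree_classifier:
    # for max_value == 1 keep only the winning class(es); otherwise
    # scale each count by its share of the leaf total.
    counts = np.asarray(counts, dtype=np.int32)
    if max_value == 1:
        return (counts == counts.max()).astype(np.int32)
    return np.round(max_value * counts / counts.sum()).astype(np.int32)
```

For example, `quantize_leaf([3, 9, 0], 7)` fits each class count into the 3 bits implied by `max_bits = ceil(log2(max_value + 1))`.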


def gen_random_forest(
    rf, name, bits, is_neg, o_bits, o_is_neg, is_regressor=True,
    is_top_level=False, is_cc=True):
  """Generates HLS based C++ or SystemVerilog code for random forest."""

  # TODO(nunescoelho): need to take care of multiple outputs for classifier.
  # we can get better result if we do not look at the winning classifier,
  # but sum how many of them appear in each classifier for leaf nodes.

  bdd = {}
  values_rom = {}
  offset = 0
  code = []

  max_value = (1 << int(os.environ.get("MAX_BITS", 1))) - 1
  decimal_digits = int(os.environ.get("MAX_BITS", 5))

  assert max_value > 0

  o_list = []
  for i in range(len(rf.estimators_)):
    tree = rf.estimators_[i].tree_
    code.append("  //----- TREE {}".format(i))
    if is_regressor:
      last_var, offset = gen_random_tree_regressor(
          tree, code, bits, o_bits, decimal_digits, o_is_neg, bdd, offset, is_cc)
    else:
      values_rom = {}
      last_var, offset = gen_random_tree_classifier(
          tree, code, bits, bdd, max_value, values_rom, offset, is_cc)

    o_list.append(last_var)

  if is_cc:
    header = [
        "#include <ac_int.h>",
        "#include <ac_fixed.h>",
        "#include <iostream>",
        "using namespace std;",
        "//#define _PRINT_DEBUG_",
        "#define PB(n) cout << #n << \":\" << n << endl;",
        "#define PS(n) \\",
        "  cout << #n << \":\" << n.to_double() << \" \"; \\",
        "  for(int i=n.width-1; i>=0; i--) cout << n[i]; cout << endl;"
    ]

    if DEBUG:
      header = header + [
          "static inline float round_even(float x) {",
          "  int x_int = truncf(x);",
          "  float x_dec = x - x_int;",
          "  if ((x_dec == 0.5) && (x_int % 2 == 0)) {",
          "    return truncf(x);",
          "  } else {",
          "    return truncf(x + 0.5);"
          "  }",
          "}"
      ]
      if is_top_level:
        header.append("#pragma hls_design top")
      header.append("void {}(int in[{}], int &out)".format(
          name, np.sum(bits), o_bits) + " {")
    else:
      n_bits = int(np.ceil(np.log2(len(o_list))))
      header = header + [
          "static inline ac_int<{},{}> round_even(ac_fixed<{},{},{}> x)".format(
              o_bits, o_is_neg,
              n_bits + o_bits + decimal_digits, n_bits + o_bits + o_is_neg,
              o_is_neg
          ) + " {",
          "  bool x_int_is_even = x[{}] == 0;".format(decimal_digits + n_bits),
          "  bool x_frac_is_0_5 = x[{}] && (x.slc<{}>(0) == 0);".format(
              n_bits + decimal_digits-1, n_bits + decimal_digits-1),
          "  if (x_frac_is_0_5 && x_int_is_even) {",
          "    return x.slc<{}>({});".format(o_bits, n_bits + decimal_digits),
          "  } else {",
          "    ac_int<{},{}> r = x.slc<{}>({}) + 1;".format(
              o_bits + 1, o_is_neg,
              o_bits + 1, n_bits + decimal_digits - 1),
          "    return r.slc<{}>(1);".format(o_bits + 1),
          #"    return (x + ac_fixed<{},{},{}>({})).slc<{}>({});".format(
          #    n_bits + o_bits + decimal_digits, n_bits + o_bits + o_is_neg,
          #    o_is_neg, 1<<(n_bits+decimal_digits-1),
          #    o_bits, n_bits + decimal_digits),
          #    #o_is_neg, len(o_list)/2, o_bits, n_bits + decimal_digits),
          "  }",
          "}"
      ]
      if is_top_level:
        header.append("#pragma hls_design top")
      header.append("void {}(ac_int<{},0> in, ac_int<{},{}> &out)".format(
          name, np.sum(bits), o_bits, o_is_neg) + " {")
  else:
    n_bits = int(np.ceil(np.log2(len(o_list))))
    i_decl = "  input [{}:0] in;".format(np.sum(bits)-1)
    o_sign = "signed " if o_is_neg else ""
    o_decl = "  output " + o_sign + "[{}:0] out;".format(o_bits-1)
    header = [
        "module " + name + "(in, out);",
        i_decl,
        o_decl,
        "",
        "  function {}[{}:0] round_even;".format(o_sign, o_bits),
        "  input {}[{}:0] x;".format(o_sign, n_bits + o_bits + decimal_digits - 1),
        "  reg x_int_is_even;",
        "  reg x_frac_is_0_5;",
        "  reg {}[{}:0] round_sum;".format(o_sign, o_bits + 1),
        "  begin",
        "    x_int_is_even = x[{}] == 0;".format(decimal_digits + n_bits),
        "    x_frac_is_0_5 = x[{}] && (x[{}:0] == 0);".format(
            n_bits + decimal_digits-1, n_bits + decimal_digits - 2),
        "    if (x_frac_is_0_5 && x_int_is_even)",
        "      round_even = x[{}:{}];".format(
            n_bits + decimal_digits + o_bits - 1, n_bits + decimal_digits),
        "    else",
        "    begin",
        "      round_sum = x[{}:{}] + 1;".format(
            n_bits + decimal_digits + o_bits - 1, n_bits + decimal_digits - 1),
        "      round_even = round_sum[{}:1];".format(o_bits + 1),
        "    end",
        #"      round_even = (x + {})[{}:{}];".format(
        #    #(1 << (n_bits + decimal_digits - 1)),
        #    n_bits + decimal_digits + o_bits - 1, n_bits + decimal_digits),
        "  end",
        "  endfunction"
    ]


  all_bits = np.sum(bits)
  sum_i = 0
  for i in range(bits.shape[0]):
    if is_cc:
      if bits[i] > 1:
        if DEBUG:
          header.append("  int i_{} = in[{}];".format(i, i))
        else:
          header.append("  ac_int<{},{}> i_{} = in.slc<{}>({});".format(
              bits[i], is_neg[i], i, bits[i], sum_i))
      else:
        header.append("  bool i_{} = in[{}];".format(i, sum_i))
    else:
      if bits[i] == 1:
        header.append("  wire i_{} = in[{}];".format(i, all_bits - sum_i - 1))
      else:
        header.append("  wire i_{}[{}:0] = in[{}:{}];".format(
            i, bits[i], sum_i + bits[i] - 1, all_bits - sum_i - 1))
    sum_i += bits[i]

  footer = []

  if is_regressor:
    n_bits = int(np.ceil(np.log2(len(o_list))))
    assert 1 << n_bits == len(o_list)

    if is_cc:

      if DEBUG:
        tmp_type = "float"
      else:
        tmp_type = "ac_fixed<{},{},{}>".format(
            n_bits + o_bits + decimal_digits, n_bits + o_bits + o_is_neg,
            o_is_neg)
      avg_o = "  {} o_tmp = {};".format(tmp_type, " + ".join(o_list))

      # rnd_o = "  o_tmp += {}({});".format(tmp_type, len(o_list)/2)

      if DEBUG:
        out = "  out = round_even(o_tmp / {});".format(len(o_list))
      else:
        out = "  out = round_even(o_tmp);"

      footer.append("  #ifdef _PRINT_DEBUG_")
      for o_name in o_list:
        footer.append("  PS({});".format(o_name))
      footer.append("  #endif")
      closing = "}"

    else:
      tmp_sign = "signed " if o_is_neg else ""
      avg_o = "  wire " + tmp_sign + "[{}:0] o_tmp = {};".format(
          n_bits + o_bits + decimal_digits - 1, " + ".join(o_list))

      for n in o_list:
        footer.append("  // always @({}) $display(\"{} = %f (%b)\", {} / 32.0, {});".format(n,n,n,n))
      footer.append("  // always @(o_tmp) $display(\"o_tmp = %b\", o_tmp);")

      out = "  assign out = round_even(o_tmp);"

      closing = "endmodule"

    footer = footer + [avg_o, out, closing]

  else:

    assert not o_is_neg

    footer = []

    o_suffix = ""
    if DEBUG:
      o_type = "int"
    elif is_cc:
      o_type = "ac_int<{},{}>".format(o_bits, o_is_neg)
    else:
      o_sign = " signed" if o_is_neg else ""
      o_suffix = "[{}:0]".format(o_bits)
      o_type = "wire" + o_sign + " " + o_suffix

    if is_cc:
      n_classes = 1 << o_bits
      max_bits = int(np.ceil(np.log2(max_value + 1)))
      log2_o_list = int(np.ceil(np.log2(len(o_list))))
      if DEBUG:
        log2_o_type = "int"
      else:
        log2_o_type = "ac_int<{},false>".format(log2_o_list + max_bits)
      sum_v = (
          "  {} sum[{}] = ".format(
              log2_o_type, 1 << o_bits) + "{" +
          ",".join("0" * (1 << o_bits)) + "};"
      )
      footer = [sum_v]
      for o_name in o_list:
        for i in range(n_classes):
          if DEBUG:
            footer.append("  sum[{}] += ({} >> {}) & {};".format(
                i, o_name, (n_classes - i) * max_bits - max_bits,
                hex((1 << max_bits) - 1)))
          else:
            footer.append("  sum[{}] += {}.slc<{}>({});".format(
                i, o_name, max_bits, (n_classes - i) * max_bits - max_bits))
        debug_print = []
        for i in range(n_classes):
          debug_print.append("{}.slc<{}>({}).to_string(AC_DEC)".format(
              o_name, max_bits, (n_classes - i) * max_bits - max_bits))
        footer_s = (
            "  cout << \"{} \" <<".format(o_name) +
            " << \" \" << ".join(debug_print) + " << endl;"
        )
        footer.append("  #ifdef _PRINT_DEBUG_")
        footer.append(footer_s)
        footer.append("  #endif")
      footer.append("  {} max_tmp = sum[0];".format(log2_o_type))
      footer.append("  {} max_id = 0;".format(o_type))
      footer.append("  for(int i=1; i<{}; i++)".format(1 << o_bits))
      footer.append(
        "    if (sum[i] >= max_tmp) { max_tmp = sum[i]; max_id = i; }")
      out = "  out = max_id;"

      footer.append(out)
      footer += ["}"]
    else:
      n_classes = 1 << o_bits
      max_bits = int(np.ceil(np.log2(max_value + 1)))
      log2_o_list = int(np.ceil(np.log2(len(o_list))))
      log2_o_type = "wire [{}:0]".format(log2_o_list + max_bits)
      footer = []
      for i in range(n_classes):
        code_s = "  {} sum_{} = ".format(log2_o_type, i)
        code_term = []
        for o_name in o_list:
          code_term.append("{}[{}:{}]".format(
              o_name, (n_classes - i) * max_bits - 1,
              (n_classes - i) * max_bits - max_bits))
        code_s += " + ".join(code_term) + ";"
        footer.append(code_s)
        footer.append("  // always @(sum_{}) $display(\"sum_{} = %d\", sum_{});".format(
            i, i, i))
      footer.append("  reg [{}:0] max_tmp;".format(
          log2_o_list + max_bits - 1))
      footer.append("  reg [{}:0] max_id;".format(o_bits-1))
      footer.append("  integer i;")
      footer.append("  always @(" +
                    " or ".join(
                        ["sum_" + str(i) for i in range(n_classes)]) + ")")
      footer.append("  begin")
      footer.append("    max_tmp = sum_0; max_id = 0;")
      for i in range(1, n_classes):
        footer.append(
            "    if (sum_{} >= max_tmp) begin max_tmp = sum_{}; max_id = {}; end".format(
                i, i, i))
      footer.append("  end")
      footer.append("  assign out = max_id;")
      footer.append("endmodule")

  return header + code + footer
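For the regressor path, the generated round_even helpers divide the summed tree outputs by the number of trees (a power of two) and round half-to-even. A Python reference of that fixed-point rule, assuming x is an integer carrying n fractional bits (a sketch; this helper does not exist in the module):

```python
def round_even(x, n):
    # Drop n fractional bits from the fixed-point integer x, rounding
    # ties to the even integer, as the generated C++/Verilog does.
    int_part = x >> n
    frac = x & ((1 << n) - 1)
    half = 1 << (n - 1)
    if frac == half:  # exact .5: round toward the even neighbor
        return int_part if int_part % 2 == 0 else int_part + 1
    return int_part + (1 if frac > half else 0)
```

Averaging 4 trees means n = 2, so 2.5 (binary 1010) rounds down to 2 while 3.5 (1110) rounds up to 4.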


def gen_testbench_sv(rf, name, bits, is_neg, o_bits, o_is_neg, x, y, p, code):
  code.append("module tb;")
  x_0, x_1 = x.shape
  x_0_log2 = int(np.ceil(np.log2(x_0)))
  code.append("reg [{}:0] x_rom[{}:0];".format(x_1-1, x_0-1))
  code.append("initial $readmemb(\"x.rom\", x_rom, 0, {});".format(x_0-1))
  with open("x.rom", "w") as f:
    for i in range(len(x)):
      f.write("".join([str(int(v)) for v in x[i]]) + "\n")

  o_sign = "signed " if o_is_neg else ""
  o_type = o_sign + "[{}:0]".format(o_bits - 1)
  code.append("reg {} y_rom[{}:0];".format(o_type,x_0-1))
  code.append("reg {} p_rom[{}:0];".format(o_type,x_0-1))
  with open("y.rom","w") as f:
    for i in range(len(y)):
      f.write(hex(int(y[i]))+ "\n")
  with open("p.rom","w") as f:
    for i in range(len(y)):
      f.write(hex(int(p[i]))+ "\n")
  code.append("initial $readmemh(\"y.rom\", y_rom, 0, {});".format(x_0-1))
  code.append("initial $readmemh(\"p.rom\", p_rom, 0, {});".format(x_0-1))
  code.append("integer i;")
  code.append("integer cnt;")
  code.append("reg [{}:0] in;".format(x_1-1))
  code.append("wire {} out;".format(o_type))
  code.append("{} {}(in, out);".format(name, name))
  code.append("initial")
  code.append("begin")
  code.append("  cnt = 0;")
  code.append("  in = x_rom[i];")
  code.append("  for (i=0; i<{}; i=i+1)".format(x_0))
  code.append("  begin")
  code.append("    in = x_rom[i];")
  code.append("    #1000;")
  code.append("    if (p_rom[i] != out && y_rom[i] != out)")
  code.append("    begin")
  code.append("      $display(\"%d: %b y=%d p=%d -> %d\", i, x_rom[i], y_rom[i], p_rom[i], out);")
  code.append("    end")
  code.append("    else")
  code.append("    begin")
  code.append("      cnt = cnt + 1;")
  code.append("    end")
  code.append("  end")
  code.append("  $display(\"acc = %f\", 100.0 * cnt / {});".format(x_0))
  code.append("end")
  code.append("endmodule")


def gen_testbench_cc(rf, name, bits, is_neg, o_bits, o_is_neg, x, y, p, code):
  code.append("int x[{}][{}] = ".format(*x.shape) + "{")
  for i in range(len(x)):
    code_s = "  {" + ",".join([str(int(v)) for v in x[i]]) + "}"
    if i < len(x) - 1:
      code_s = code_s + ","
    code.append(code_s)
  code.append("};")
  code_s = (
      "int y[{}] = ".format(y.shape[0]) + "{" +
      ",".join([str(int(v)) for v in y]) + "};"
  )
  code.append(code_s)
  code_s = (
      "int p[{}] = ".format(p.shape[0]) + "{" +
      ",".join([str(int(v)) for v in p]) + "};"
  )
  code.append(code_s)

  code.append("int main()")
  code.append("{")
  code.append("  double acc = 0.0;")
  if DEBUG:
    code.append("  int in[{}];".format(x.shape[1]))
    code.append("  int out;")
  else:
    code.append("  ac_int<{},0> in;".format(x.shape[1]))
    code.append("  ac_int<{},{}> out;".format(o_bits, o_is_neg))

  code.append("  for (int i=0; i<{}; i++)".format(x.shape[0]) + "{")
  code.append("    for (int j=0; j<{}; j++) in[j] = x[i][j];".format(
      x.shape[1]))
  code.append("    {}(in, out);".format(name))
  code.append("    if (p[i] != out && y[i] != out) {")
  code.append("      cout << i << \": \";")
  code.append("      for (int j=0; j<{}; j++) cout << in[j];".format(
      x.shape[1]))
  if DEBUG:
    code.append("      cout << \" y=\" << y[i] << \" p=\" << p[i] << \" \" << out << endl;")
    code.append("    }")
    code.append("    acc += (y[i] == out);")
  else:
    code.append("      cout << \" y=\" << y[i] << \" p=\" << p[i] << \" \" << out.to_int() << endl;")
    code.append("      #ifdef _PRINT_DEBUG_")
    code.append("        exit(1);")
    code.append("      #endif")
    code.append("    }")
    code.append("    acc += (y[i] == out.to_int());")
  code.append("  }")
  code.append("  cout << \"acc = \" << 100.0 * acc  / {} << endl;".format(
      x.shape[0]))
  code.append("}")



================================================
FILE: experimental/lo/optimizer.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Implements random forest or logic otimizer function."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import multiprocessing as mp
import os
import pickle
import random
import shutil
import subprocess
import sys
import time
import warnings

import numpy as np
import six

from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor

from .compress import Compressor
from .generate_rf_code import gen_random_forest
from .table import load


def file_compress(fin, fout):
  """Compresses table using hash set."""
  c = Compressor()
  n_lines = 0
  for line in open(fin):
    n_lines += 1
    line = line.strip()
    c.add_entry(line)

  f = open(fout, "w")
  n_compressed = 0
  for line in c():
    n_compressed += 1
    f.write(line + "\n")
  f.close()
  print("... random forrest for {} reduced from {} to {} entries".format(
      os.path.basename(fin), n_lines, n_compressed))
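file_compress relies on Compressor from .compress; at a minimum it collapses duplicate table lines. A stand-in sketch of that minimal behavior using a plain set (an assumption for illustration; the real Compressor may merge entries more aggressively):

```python
def dedup_lines(lines):
    # Order-preserving removal of duplicate (stripped) lines.
    seen = set()
    out = []
    for line in lines:
        line = line.strip()
        if line not in seen:
            seen.add(line)
            out.append(line)
    return out
```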


def mp_rf_optimizer_func(fn_tuple):
  """Executes in parallel creation of random forrest creation."""

  fn, flags, file_suffix = fn_tuple

  n_trees = flags["n_trees"]
  is_regressor = flags["is_regressor"]
  sample_size = flags["sample_size"]
  n_features = flags["n_features"]
  max_depth = flags["max_depth"]

  if not file_suffix:
    file_suffix = "none"

  path_split = fn.split("/")
  path = "/".join(path_split[:-1]) + "/"
  fn_split = path_split[-1].split(".")
  # o_file = path + ".".join(fn_split[0:-2] + [fn_split[-1]])
  cv_file = path + ".".join(fn_split[0:-2] + [file_suffix])
  rfb_file = path + ".".join(fn_split[0:-2] + ["rb", "bin"])

  # let's compress the table first to make the job easier for random forest.
  # compression can usually achieve a ratio of 50x or more.

  # compress(fn, o_file)
  train = load(fn)

  n_features = "auto" if not n_features else float(n_features)

  # min_size = 1

  if max_depth:
    max_depth = int(max_depth)

  print("... creating random forrest for " + os.path.basename(fn) + " with " +
        str(sample_size) + " samples")

  if is_regressor:
    rf = RandomForestRegressor(
        n_estimators=n_trees,
        max_depth=max_depth,
        # min_samples_split=2,
        # min_samples_leaf=min_size,
        max_features=n_features,
        # max_leaf_nodes=100,
        # oob_score=True,
        # warm_start=True,
        bootstrap=True,
        random_state=42,
        n_jobs=1)
  else:
    rf = RandomForestClassifier(
        n_estimators=n_trees,
        max_depth=max_depth,
        # min_samples_split=2,
        # min_samples_leaf=min_size,
        max_features=n_features,
        # max_leaf_nodes=100,
        # oob_score=True,
        # warm_start=True,
        bootstrap=True,
        random_state=42,
        n_jobs=1)

  if sample_size and train.shape[0] >= 10000:
    sample_size = int(sample_size)
    np.random.seed(42)
    idx = np.random.choice(train.shape[0], train.shape[0], replace=False)

    x = train[idx[sample_size:], 0:-1]
    y = train[idx[sample_size:], -1]

    x_test = train[idx[0:sample_size], 0:-1]
    y_test = train[idx[0:sample_size], -1]
  else:
    x = train[:, 0:-1]
    y = train[:, -1]

    x_test = x
    y_test = y

  with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    rf.fit(x, y)

  func_name = fn_split[0]

  bits = np.ceil(
      np.log2(
          np.abs(
              np.amax(x, axis=0) -
              np.amin(x, axis=0) + 1))).astype(np.int32)
  is_neg = (np.amin(x, axis=0) < 0).astype(np.int8)

  o_bits = np.ceil(
      np.log2(
          np.abs(
              np.amax(y, axis=0) -
              np.amin(y, axis=0) + 1))).astype(np.int32)
  o_is_neg = (np.amin(y, axis=0) < 0).astype(np.int8)

  rf.bits = bits
  rf.is_neg = is_neg
  rf.o_bits = o_bits
  rf.o_is_neg = o_is_neg

  code = gen_random_forest(
      rf, func_name, bits, is_neg, o_bits, o_is_neg,
      is_regressor=is_regressor, is_top_level=False,
      is_cc=file_suffix == "cc")

  open(cv_file, "w").write("\n".join(code))

  p = 1.0 * np.round(rf.predict(x_test))

  dy = np.max(train[:, -1]) - np.min(train[:, -1])

  error = np.sum(np.abs(y_test - p)) / (1.0 * p.shape[0] * dy)
  score = np.sum(y_test == p) / p.shape[0]

  print("y:", np.max(y_test), y_test[0:30].astype(np.int32))
  print("p:", np.max(p), p[0:30].astype(np.int32))

  print("... model {} with score of {:.2f}% and error of {:.2f}%".format(
      func_name, 100.0*score, 100.0*error))

  print("... saving model in {}".format(rfb_file))
  pickle.dump(rf, open(rfb_file, "wb"))
  return rfb_file
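The bits/o_bits computations above size each input and the output from the value range seen in the training table, using ceil(log2(max - min + 1)). The rule in isolation:

```python
import numpy as np

def infer_bits(col):
    # Bits needed for the value range of one column, as in
    # mp_rf_optimizer_func; a constant column would yield 0 bits.
    span = np.amax(col) - np.amin(col) + 1
    return int(np.ceil(np.log2(span)))
```

A column spanning 0..255 needs 8 bits; -4..3 also spans 8 values, so 3 bits (the sign is tracked separately in is_neg).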


def mp_abc_optimizer_func(fn):
  """Performs espresso and abc optimization on a single espresso input."""

  fn_split = fn.split(".")
  o_file = ".".join(fn_split[0:-2] + [fn_split[-1]])
  v_file = ".".join(fn_split[0:-2] + ["v"])
  b_file = ".".join(fn_split[0:-2] + ["blif"])

  print("...running espresso in " + fn)

  espresso_flags = os.environ.get("ESPRESSO_FLAGS", "-Dexpand")

  cmd = "espresso {} {} > {}".format(fn, espresso_flags, o_file)

  output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True)

  output = output.strip()
  if output:
    print(output)
    sys.stdout.flush()

  # check if network is empty

  for line in open(o_file):
    line = line.strip()
    if line[0:2] == ".p":
      terms = int(line[2:])
      # empty : espresso optimized away all the logic
      if terms == 0:
        shutil.copyfile(fn, o_file)
      break

  print("...running abc in " + o_file)

  abc_flags = os.environ.get("ABC_FLAGS", "")

  abc_flags_list = abc_flags.split(";") if abc_flags else []

  abc_cmds_list = (
      ["read_pla " + o_file] + abc_flags_list +
      ["strash",
       "dc2",
       "strash",
       "if -K 3",
       "write_verilog " + v_file,
       "write_blif " + b_file
       ])

  abc_cmds = ";".join(abc_cmds_list)

  cmd = "abc -c '" + abc_cmds + "'"

  output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True)

  output = output.strip()
  if output:
    print(output)
    sys.stdout.flush()

  print("...generated " + v_file)


def run_abc_optimizer(files):
  """Implements logic optimizer using espresso/abc."""

  # intel processors sometimes return number of threads, not processors

  cpus = mp.cpu_count() // 2

  start_time = time.time()
  pool = mp.Pool(cpus)
  pool.map(mp_abc_optimizer_func, files)
  pool.close()
  print("Optimizer ran in {} seconds.".format(time.time() - start_time))


def run_rf_optimizer(files, flags, file_suffix="cc"):
  """Implements random forest main optimizer."""

  # intel processors sometimes return number of threads, not processors

  cpus = mp.cpu_count() // 2

  start_time = time.time()
  pool = mp.Pool(cpus)
  pool.map(mp_rf_optimizer_func, zip(
      files, [flags]*len(files), [file_suffix]*len(files)))
  pool.close()
  print("Optimizer ran in {} seconds.".format(time.time() - start_time))

  # generates header file

  # .../.../.../conv2d_0_m.csv/conv2d_0_m_0.csv
  #
  # returns conv2d_0_m for module_name

  module_name = files[0].split("/")[-2].split(".")[0]

  path_split = files[0].split("/")
  path = "/".join(path_split[:-1]) + "/"
  fn_split = path_split[-1].split(".")
  rfb_file = path + ".".join(fn_split[0:-2] + ["rb", "bin"])

  rf = pickle.load(open(rfb_file, "rb"))

  f = open(path + module_name + "." + file_suffix, "w")

  if file_suffix == "cc":
    f.write("#include <ac_int.h>\n\n")

  modules = []

  for fn in files:
    path_split = fn.split("/")
    path = "/".join(path_split[:-1]) + "/"
    fn_split = path_split[-1].split(".")
    v_file = ".".join(fn_split[0:-2] + [file_suffix])

    func_name = fn_split[0]

    if file_suffix == "v":
      f.write("`include \"" + v_file + "\"\n")
    else:
      f.write("#include \"" + v_file + "\"\n")

    modules.append(func_name)

  f.write("\n\n")

  if file_suffix == "v":
    f.write("module " + module_name + "(")
    f.write("input [" + str(np.sum(rf.bits)-1) + ":0] in, ")
    o_sign = " signed " if rf.o_is_neg else ""
    f.write("output " + o_sign + "[" + str(len(modules)*rf.o_bits-1) +
            ":0] out);\n")
  else:
    f.write("void " + module_name + "(")
    f.write("ac_int<" + str(np.sum(rf.bits)) + ",false> in, ")
    f.write("ac_int<" + str(len(modules)*rf.o_bits) + "," +
            ("true" if rf.o_is_neg else "false") +
            "> &out)\n")
    f.write("{\n")

  for o in range(len(modules)):
    if file_suffix == "v":
      f.write("  wire " + ("signed " if rf.o_is_neg else "") +
              "[" + str(rf.bits[-1]-1) + ":0] "
              "o_" + str(o) + ";\n")
      f.write("  " + modules[o] + "(in, o_" + str(o) + ");\n")
      f.write("  assign out[" + str(rf.o_bits*(o+1)-1) + ":" +
              str(rf.bits[-1]*o) + "] = o_" + str(o) + ";\n")
    else:
      f.write("  ac_int<" + str(rf.o_bits) + "," +
              ("true" if rf.o_is_neg else "false") +
              "> o_" + str(o) + "; " + modules[o] +
              "(in, o_" + str(o) + "); out.set_slc<" +
              str(rf.o_bits) + ">(" +
              str(rf.o_bits*o) + "," +
              "o_" + str(o) + ");\n")

  if file_suffix == "cc":
    f.write("}")

  f.close()
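The module name above is recovered from the penultimate path component, per the .../conv2d_0_m.csv/conv2d_0_m_0.csv convention noted in the comment. The convention in isolation (helper name hypothetical):

```python
def module_name_from(path):
    # directory name minus its extension, e.g.
    # out/conv2d_0_m.csv/conv2d_0_m_0.csv -> conv2d_0_m
    return path.split("/")[-2].split(".")[0]
```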


================================================
FILE: experimental/lo/random_forest/__init__.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from .utils import load
from .utils import load_csv
from .utils import load_pla
# from .random_forest import RandomForest
# from .random_tree import RandomTree


================================================
FILE: experimental/lo/random_forest/gen_random_tree.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Generates expressions for random trees."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import DecisionTreeRegressor

def gen_random_tree_cc(tree):
  """Prints a C-like conditional expression for each decision tree node."""
  n_nodes = tree.node_count
  children_left = tree.children_left
  children_right = tree.children_right
  feature = tree.feature
  threshold = tree.threshold

  node_depth = np.zeros(shape=n_nodes, dtype=np.int64)
  is_leaves = np.zeros(shape=n_nodes, dtype=bool)

  stack = [(0, -1)]

  while stack:
    node_id, parent_depth = stack.pop()
    node_depth[node_id] = parent_depth + 1

    if children_left[node_id] != children_right[node_id]:
      stack.append((children_left[node_id], parent_depth+1))
      stack.append((children_right[node_id], parent_depth+1))
    else:
      is_leaves[node_id] = True

  for i in range(n_nodes):
    if is_leaves[i]:
      print("{}n_{} leaf node.".format("  "*node_depth[i], i))
    else:
      print("{}n_{} (i_{} <= {}) ? n_{} : n_{}".format(
          "  "*node_depth[i], i, feature[i], threshold[i],
          children_left[i], children_right[i]))


================================================
FILE: experimental/lo/random_forest/parser.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Parses PLA format usig ply."""
from ply import yacc
from ply import lex
import numpy as np

_1 = 1
_0 = 2
_X = 3
_U = 0

NOT = {_0: _1, _1: _0, _X: _U, _U: _U}

class PLA:
  def __init__(self):
    self.pla_i = []
    self.pla_o = []

pla = PLA()

tokens = [
  "I",
  "O",
  "MV",
  "ILB",
  "OB",
  "P",
  "L",
  "E",
  "TYPE",
  "SYMBOL",
  "NUMBER",
  "NEWLINE"
]

t_ignore = " \t|"
t_I = r"\.[iI]"
t_O = r"\.[oO]"
t_MV = r"\.[mM][vV]"
t_ILB = r"\.[iI][lL][bB]"
t_OB = r"\.[oO][bB]"
t_P = r"\.[pP]"
t_L = r"\.[lL]"
t_E = r"\.[eE]"
t_TYPE = r"\.type"
t_SYMBOL = r"[a-zA-Z_][a-zA-Z0-9_\<\>\-\$]*"

def t_NUMBER(t):
  r"[\d\-]+"
  return t

def t_NEWLINE(t):
  r"\n+"
  t.lexer.lineno += t.value.count("\n")
  return t

def t_error(t):
  print("Illegal character '{}'".format(t.value))
  t.lexer.skip(1)

lex.lex()

def p_pla(p):
  """pla : pla_declarations pla_table pla_end"""

def p_pla_declarations(p):
  """pla_declarations : pla_declarations pla_declaration
                      | pla_declaration"""

def p_pla_declaration(p):
  """pla_declaration : I NUMBER NEWLINE
                     | O NUMBER NEWLINE
                     | P NUMBER NEWLINE
                     | MV number_list NEWLINE
                     | ILB symbol_list NEWLINE
                     | OB symbol_list NEWLINE
                     | L NUMBER symbol_list NEWLINE
                     | TYPE SYMBOL NEWLINE
  """
  token = p[1].lower()
  if token == ".i":
    pla.ni = int(p[2])
  elif token == ".o":
    pla.no = int(p[2])
  elif token == ".mv":
    pla.mv = [int(v) for v in p[2]]
  elif token == ".ilb":
    pla.ilb = p[2]
  elif token == ".ob":
    pla.ob = p[2]
  elif token == ".l":
    pla.label = p[2]
  elif token == ".type":
    pla.set_type = p[2]


def p_pla_table(p):
  """pla_table : pla_table number_symbol_list NEWLINE
               | number_symbol_list NEWLINE"""
  if len(p[1:]) == 3:
    line = "".join(p[2])
  else:
    line = "".join(p[1])

  assert hasattr(pla, "ni") and hasattr(pla, "no")

  # right now we only process binary functions

  line = [_1 if v == "1" else _0 if v == "0" else _X for v in line]

  pla.pla_i.append(line[0:pla.ni])
  pla.pla_o.append(line[pla.ni:])


def p_pla_end(p):
  """pla_end : E opt_new_line"""
  pass


def p_opt_new_line(p):
  """opt_new_line : NEWLINE
                  |
  """
  pass


def p_number_list(p):
  """number_list : number_list NUMBER
                 | NUMBER
  """
  if len(p[1:]) == 2:
    p[0] = p[1] + [p[2]]
  else:
    p[0] = [p[1]]


def p_symbol_list(p):
  """symbol_list : symbol_list SYMBOL
                 | SYMBOL
  """
  if len(p[1:]) == 2:
    p[0] = p[1] + [p[2]]
  else:
    p[0] = [p[1]]


def p_number_symbol_list(p):
  """number_symbol_list : number_symbol_list number_or_symbol
                        | number_or_symbol
  """
  if len(p[1:]) == 2:
    p[0] = p[1] + [p[2]]
  else:
    p[0] = [p[1]]


def p_number_or_symbol(p):
  """number_or_symbol : NUMBER
                      | SYMBOL
  """
  p[0] = p[1]


def p_error(p):
  print("Error text at {}".format(p)) #p.value))

yacc.yacc()

def get_tokens(fn):
  lex.input("".join(open(fn).readlines()))
  return lex.token

def parse(fn):
  yacc.parse("".join(open(fn).readlines()))

  pla.pla_i = np.array(pla.pla_i)
  pla.pla_o = np.array(pla.pla_o)

  return pla
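The table rows parsed above are stored with the `_1`/`_0`/`_X` codes defined at the top of the file. A minimal standalone sketch of that encoding (`encode_cube` is an illustrative name; the constant values 1/2/3 mirror the module-level definitions):

```python
# Mirror of the module-level constants: '1' -> _1, '0' -> _0, don't care -> _X.
_1, _0, _X = 1, 2, 3


def encode_cube(line, ni):
  """Encodes a PLA row like '10-1' into input and output code lists."""
  codes = [_1 if v == "1" else _0 if v == "0" else _X for v in line]
  # the first ni positions are the input plane, the rest the output plane
  return codes[:ni], codes[ni:]
```

For a 3-input cube `"10-1"` this yields input codes `[_1, _0, _X]` and output codes `[_1]`, the same split `p_pla_table` performs with `pla.ni`.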


================================================
FILE: experimental/lo/random_forest/random_forest.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Creates a random forest to generate hardware for it."""

import numpy as np
import pickle
import os

from .random_tree import RandomTree

def fit_parallel(max_depth, min_size, sample, mask_stuck_at_values):

  tree = RandomTree(max_depth, min_size)
  tree.fit(sample, mask_stuck_at_values)

  return tree


class RandomForest:
  def __init__(
      self, max_depth, min_size, n_trees, use_mean=False,
      sample_size=None):
    self.max_depth = max_depth
    self.min_size = min_size
    self.use_mean = use_mean
    self.sample_size = sample_size
    self.n_trees = n_trees
    self.inputs = None
    self.bits = None
    self.is_neg = None

    self.trees = None

  @staticmethod
  def save(model, filename):
    """Saves model to disk."""
    print("... saving model in {}".format(filename))
    f = open(filename, "wb")
    pickle.dump(model, f)
    f.close()


  @staticmethod
  def load(filename):
    """Loads model from disk."""
    print("... loading model from {}".format(filename))
    f = open(filename, "rb")
    random_forest = pickle.load(f)
    f.close()

    return random_forest


  def subsample(self, dataset):
    """Subsamples dataset if we do not want to use entire dataset."""
    sample_idx = np.random.choice(
        dataset.shape[0], self.sample_size, replace=True)
    sample = dataset[sample_idx,...]
    return sample


  def fit(self, dataset, verbose=False):
    """Fits random tree to model."""
    self.inputs = dataset.shape[1]-1
    self.bits = np.ceil(
        np.log2(
            np.abs(
                np.amax(dataset, axis=0) -
                np.amin(dataset, axis=0)))).astype(np.int32)
    self.is_neg = (np.amin(dataset, axis=0) < 0).astype(np.int8)

    self.trees = []

    for i in range(self.n_trees):
      if verbose:
        print("... creating tree {}".format(i))

      # as subsample is an expensive operation, we will only perform it if it
      # reduces the dataset substantially

      if self.sample_size and self.sample_size < 0.3 * dataset.shape[0]:
        if verbose:
          print("... generated subsample of size {}".format(self.sample_size))
        sample = self.subsample(dataset)
      else:
        sample = dataset

      self.trees.append(fit_parallel(
          self.max_depth, self.min_size, sample, True))


  def predict_row(self, row):
    """Predicts output for single row."""
    result = [tree.predict_row(row) for tree in self.trees]
    if self.use_mean:
      return int(np.round(np.mean(result)))
    else:
      return max(set(result), key=result.count)


  def predict(self, data):
    """Predicts class based on data."""

    assert self.trees is not None

    return np.array([self.predict_row(data[i]) for i in range(data.shape[0])])
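`predict_row` combines the per-tree votes either by majority or by rounding their mean. A standalone sketch of those two combination rules (`combine_votes` is an illustrative helper, not part of the class):

```python
import numpy as np


def combine_votes(result, use_mean=False):
  """Combines per-tree predictions the same two ways predict_row does."""
  if use_mean:
    # regression-style combination: round the average prediction
    return int(np.round(np.mean(result)))
  # classification-style combination: the most frequent prediction wins
  return max(set(result), key=result.count)
```

Majority voting is the usual choice for class labels; the mean is only meaningful when the leaf values are on a numeric scale.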


  def gen_code(self, filename, func_name):
    """Generates code for model."""

    assert self.bits is not None

    vd_list = []
    n_vars = 0
    for tree in self.trees:
      vd_list.append(tree.gen_code(n_vars))
      n_vars += len(vd_list[-1])

    # checks the type by the suffix

    is_v = filename.split(".")[-1] == "v"

    assert self.inputs

    f = open(filename, "w")

    i_bits = np.sum(self.bits[:-1])
    o_bits = self.bits[-1]
    o_sign = self.is_neg[-1]

    if is_v:
      f.write("module {}(input [{}:0] i, output [{}:0] o);\n".format(
          func_name, i_bits-1, o_bits-1))
    else:
      f.write("#include<ac_int.h>\n\n")
      f.write("void {}(ac_int<{},false> i, ac_int<{},{}> &o)\n".format(
          func_name, i_bits, o_bits, o_sign))
      f.write("{\n")


    # write function headline
    s_in_line = []

    i_bits = self.bits[0]
    i_sign = self.is_neg[0]

    if is_v:
      i_datatype = "  wire {}[{}:0] ".format(
          "signed " if i_sign else "", i_bits-1)
    else:
      i_datatype = "  ac_int<{},{}> ".format(i_bits, i_sign)

    len_s = len(i_datatype)

    for i in range(self.inputs):
      if is_v:
        s = (
            "i_" + str(i) + " = " + "i[" + str(i_bits*(i+1)-1) + ":" +
            str(i_bits*i) + "]"
        )
      else:
        s = (
            "i_" + str(i) + " = " + "i.slc<" + str(i_bits) + ">(" +
            str(i_bits*i) + ")"
        )
      if (
          len_s + len(s) + 2 > 70 or i_bits != self.bits[i] or
          i_sign != self.is_neg[i]
      ):
        f.write(i_datatype + ", ".join(s_in_line) + ";\n")

        s_in_line = []
        if is_v:
          i_datatype = "  wire {}[{}:0] ".format(
              "signed " if i_sign else "", i_bits-1)
        else:
          i_datatype = "  ac_int<{},{}> ".format(i_bits, i_sign)

        len_s = len(i_datatype)

      s_in_line.append(s)
      len_s += len(s) + 2

    if s_in_line:
      f.write(i_datatype + ", ".join(s_in_line) + ";\n")

    if is_v:
      o_datatype = "  wire {}[{}:0] ".format(
          "signed " if o_sign else "", o_bits)
    else:
      o_datatype = "  ac_int<{},{}> ".format(o_bits, o_sign)

    o_list = []
    for i in range(len(vd_list)):
      for v in vd_list[i]:
        f.write(o_datatype + v + " = " + vd_list[i][v] + ";\n")
      f.write("\n")
      # the last variable emitted for each tree is that tree's output
      o_list.append(v)

    assert len(o_list) <= 3

    if is_v:
      f.write("  assign ")
    else:
      f.write("  ")

    if len(o_list) == 1:
      f.write("o = " + o_list[0] + ";")
    elif len(o_list) == 2:
      cond = "( " + o_list[0] + " == " + o_list[1] + " ) "
      n1 = o_list[0]
      n0 = "( ( " + " + ".join(o_list) + " ) >> 1 )"
      f.write("o = " + cond + "? " + n1 + ": " + n0)
    elif len(o_list) == 3:
      cond = (
          "( " +
          "( " + " == ".join(o_list[0:2]) + " )?" + o_list[0] + ":" +
          "( " + " == ".join(o_list[1:]) + " )?" + o_list[1] + ":" +
          "( " + " == ".join([o_list[0], o_list[2]]) + " )?" + o_list[0] +
          ":" + "( " + " < ".join(o_list[0:2]) + " ) ?" +
          "( ( " + " < ".join(o_list[1:]) + " ) ?" + o_list[1] + ":" +
          o_list[2] + " ) : " +
          "( ( " + " < ".join([o_list[0], o_list[2]]) + " ) ?" + o_list[0] +
          ":" + o_list[2] + " )"
      )
      f.write("o = " + cond + ";\n")
    if is_v:
      f.write("endmodule")
    else:
      f.write("}")

    f.close()


================================================
FILE: experimental/lo/random_forest/random_tree.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Implements Random Forest for quantized netlist."""

from csv import reader
from math import sqrt
import os
import pprint
from random import seed
from random import randrange
import sys

import numpy as np
from .parser import parse, _X, _0, _1

class RandomTree:
  def __init__(self, max_depth, min_size):
    self.min_size = min_size
    self.max_depth = max_depth
    self.n_features = None

  def split_into_groups(self, index, value, dataset):
    mask_l = dataset[:, index] < value
    mask_r = np.logical_not(mask_l)
    left = dataset[mask_l,...]
    right = dataset[mask_r,...]
    return left, right

  def gini_index(self, groups, classes):
    # count all samples at split point
    n_instances = float(sum([len(group) for group in groups]))
    # sum weighted Gini index for each group
    gini = 0.0
    for group in groups:
      size = float(len(group))
      # avoid divide by zero
      if size == 0:
        continue
      # score the group by the summed squared class proportions
      p = np.array([np.sum(group[:, -1] == class_val) / size
                    for class_val in classes])
      score = np.sum(np.power(p, 2))

      # weight the group score by its relative size
      gini += (1.0 - score) * (size / n_instances)
    return gini
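A quick way to sanity-check the impurity measure: a split that separates two classes perfectly scores 0, and a split that leaves both groups evenly mixed scores 0.5. A standalone mirror of the weighted-Gini computation (`weighted_gini` is an illustrative name; the last column of each group holds the class label, as in the datasets above):

```python
import numpy as np


def weighted_gini(groups, classes):
  """Weighted Gini impurity of a split over numpy groups."""
  n_instances = float(sum(len(g) for g in groups))
  gini = 0.0
  for group in groups:
    size = float(len(group))
    if size == 0:  # avoid divide by zero for an empty side
      continue
    # class proportions within this group, taken from the label column
    p = np.array([np.sum(group[:, -1] == c) / size for c in classes])
    # weight the group impurity by its share of the samples
    gini += (1.0 - np.sum(p ** 2)) * (size / n_instances)
  return gini
```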

  def select_best_split(self, dataset):
    class_values = list(set(list(dataset[:,-1].flatten())))

    b_index, b_value, b_score, b_groups = 9999, 9999, 9999, None

    # because several of the entries may be don't cares, we will select the
    # whole set and restrict to only the ones that are not don't cares

    features = list(
        np.random.choice(len(dataset[0])-1, self.n_features, p=self.probs,
                         replace=False))

    for index in features:
      assert self.mask[index]
      b_values = list(set(list(dataset[:, index])))
      for b in b_values:
        groups = self.split_into_groups(index, b, dataset)
        gini = self.gini_index(groups, class_values)
        if gini < b_score:
          b_index, b_value, b_score, b_groups = index, b, gini, groups

    return {'index': b_index, 'value': b_value, 'groups': b_groups}

  def select_terminal(self, group):
    outcomes = list(group[:,-1].flatten())
    return max(set(outcomes), key=outcomes.count)

  def split_node(self, node, depth):
    left, right = node['groups']
    del(node['groups'])

    # check for a no split
    if left.shape[0] == 0:
      node['left'] = node['right'] = self.select_terminal(right)
      return
    elif right.shape[0] == 0:
      node['left'] = node['right'] = self.select_terminal(left)
      return

    # check for max depth
    if depth >= self.max_depth:
      node['left'], node['right'] = (self.select_terminal(left),
                                     self.select_terminal(right))
      return

    # process left child
    if len(set(list(
        left[:, -1].flatten()))) == 1 or left.shape[0] <= self.min_size:
      node['left'] = self.select_terminal(left)
    else:
      node['left'] = self.select_best_split(left)
      self.split_node(node['left'], depth + 1)

    # process right child
    if len(set(list(
        right[:, -1].flatten()))) == 1 or right.shape[0] <= self.min_size:
      node['right'] = self.select_terminal(right)
    else:
      node['right'] = self.select_best_split(right)
      self.split_node(node['right'], depth+1)

  def create_mask(self, dataset):
    self.mask = np.amin(dataset, axis=0) != np.amax(dataset, axis=0)

  def fit(self, dataset, mask_stuck_at_values=False):
    if mask_stuck_at_values:
      self.create_mask(dataset)
    else:
      self.mask = np.ones(dataset.shape[1], dtype=bool)

    self.probs = self.mask[:-1].astype(np.float32) / np.sum(self.mask[:-1])

    if not self.n_features:
      self.n_features = int(np.sqrt(dataset.shape[1] - 1))

    self.root = self.select_best_split(dataset)
    self.split_node(self.root, 1)

  def predict_internal(self, node, data):
    if data[node['index']] < node['value']:
      if isinstance(node['left'], dict):
        return self.predict_internal(node['left'], data)
      else:
        return node['left']
    else:
      if isinstance(node['right'], dict):
        return self.predict_internal(node['right'], data)
      else:
        return node['right']


  def predict_row(self, row):
    return self.predict_internal(self.root, row)


  def predict(self, data):
    return np.array([self.predict_row(data[i]) for i in range(data.shape[0])])

  def gen_code_internal(self, node, var_dict, n_offset):
    # traverse left
    cond = '( i_' + str(node['index']) + ' < ' + str(node['value']) + ' )'
    if isinstance(node['left'], dict):
      n0 = self.gen_code_internal(node['left'], var_dict, n_offset)
    else:
      n0 = str(node['left'])

    if isinstance(node['right'], dict):
      n1 = self.gen_code_internal(node['right'], var_dict, n_offset)
    else:
      n1 = str(node['right'])

    index = len(var_dict) + n_offset
    r = 'n_' + str(index)
    stmt = cond + '? ' + n0 + ' : ' + n1
    var_dict[r] = stmt

    return r

  def gen_code(self, n_offset=0):
    var_dict = {}

    self.gen_code_internal(self.root, var_dict, n_offset)

    return var_dict
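`gen_code` flattens the tree into one ternary assignment per internal node, with later assignments referring to earlier ones by their `n_<index>` names. A standalone mirror of that flattening (`tree_to_exprs` is an illustrative name), usable on a hand-built node dictionary:

```python
def tree_to_exprs(node, var_dict, n_offset=0):
  """Emits one 'cond ? left : right' expression per internal node."""
  cond = "( i_" + str(node["index"]) + " < " + str(node["value"]) + " )"
  # leaves become literal values; internal children become variable names
  n0 = (tree_to_exprs(node["left"], var_dict, n_offset)
        if isinstance(node["left"], dict) else str(node["left"]))
  n1 = (tree_to_exprs(node["right"], var_dict, n_offset)
        if isinstance(node["right"], dict) else str(node["right"]))
  r = "n_" + str(len(var_dict) + n_offset)
  var_dict[r] = cond + "? " + n0 + " : " + n1
  return r
```

For a root that splits on `i_0` with a right child splitting on `i_1`, this yields an entry like `n_0 = ( i_1 < 2 )? 1 : 0` for the child and a root variable `n_1` that references it.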


================================================
FILE: experimental/lo/random_forest/utils.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Reads and processes tables of PLAs and CSVs."""

from csv import reader
from math import sqrt
import os
import pprint
from random import seed
from random import randrange
import sys

import numpy as np
from .parser import parse, _X, _0, _1


def str_column_to_float(dataset, column):
  """Converts string column to float."""
  for row in dataset:
    row[column] = float(row[column].strip())

def str_column_to_int(dataset, column):
  """Converts string column to int."""
  for row in dataset:
    row[column] = int(row[column].strip())

def str_column_to_number(dataset, column):
  """Converts output to integer if possible or float."""

  class_values = [row[column] for row in dataset]
  unique = set(class_values)
  lookup = dict()
  is_symbolic = False
  for value in unique:
    try:
      # try int first
      lookup[value] = int(value)
    except ValueError:
      try:
        # if it fails, try float
        lookup[value] = float(value)
      except ValueError:
        # if it fails, it is symbolic
        is_symbolic = True
        break

  # best we can do is to assign unique numbers to the classes
  if is_symbolic:
    for i, value in enumerate(unique):
      lookup[value] = i

  # convert output to unique number
  for row in dataset:
    row[column] = lookup[row[column]]

  return lookup


def load_csv(filename):
  """Loads CSV file."""
  dataset = list()
  with open(filename, 'r') as file:
    csv_reader = reader(file)
    for row in csv_reader:
      if not row:
        continue
      dataset.append(row)

  # converts data to int's
  for i in range(0, len(dataset[0])-1):
    str_column_to_int(dataset, i)

  # converts output to int or float
  str_column_to_number(dataset, len(dataset[0])-1)
  dataset = np.array(dataset)

  return dataset


def load_pla(filename):
  """Loads PLA file."""
  dataset = list()
  pla = parse(filename)
  for i, o in zip(pla.pla_i, pla.pla_o):
    i_s = [1 if v == _1 else 0 for v in i]  # don't-care inputs map to 0
    o_s = [sum([(1 << (len(o)-1-oo)) if o[oo] == _1 else 0
                for oo in range(len(o))])]
    dataset.append(i_s + o_s)
  dataset = np.array(dataset)
  return dataset
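`load_pla` packs the output plane of each cube into a single integer, MSB first. A standalone sketch of that packing (`pack_output_bits` is an illustrative name; `one_code` defaults to the value of the `_1` constant):

```python
def pack_output_bits(codes, one_code=1):
  """Packs a list of PLA output codes into one integer, MSB first."""
  return sum((1 << (len(codes) - 1 - i)) if codes[i] == one_code else 0
             for i in range(len(codes)))
```

With the `_1`/`_0` encoding used above, an output plane `[_1, _0, _1]` packs to the integer 5 (binary `101`).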


def load(filename):
  """Loads and decides if we will load PLA or CSV file based on suffix."""

  suffix_split = filename.split(".")

  if suffix_split[-1] == "pla":
    print("... loading pla")
    dataset = load_pla(filename)
  else:
    dataset = load_csv(filename)
  return dataset



================================================
FILE: experimental/lo/receptive.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import math

from .utils import get_padding_value


def print_rf(layer_name, x):
  print("Layer {}:".format(layer_name))
  print(
      "\theight/width: {}\n\tstride: {}\n\teq_kernel_size: {}\n\tstart: {}\n".format(
          *x)
  )


def rf_computation_for_layer(layer, layer_in):
  k, s, p = layer
  n_in, j_in, r_in, start_in = layer_in

  n_out = int(math.floor((n_in + 2*p - k)/s)) + 1

  if s == 1 and p == 1:
    n_out = n_in

  actual_p = (n_out-1)*s - n_in + k
  p_r = math.ceil(actual_p/2)
  p_l = math.floor(actual_p/2)

  j_out = j_in * s

  r_out = r_in + (k-1)*j_in

  start_out = start_in + (int((k-1)/2) - p_l) * j_in

  return n_out, j_out, r_out, start_out


def model_to_receptive_field(model, i_name, o_name):
  layers_h = []
  layers_w = []

  i_layer = model.get_layer(i_name)
  o_layer = model.get_layer(o_name)

  # right now this only works for sequential layers

  i_index = model.layers.index(i_layer)
  o_index = model.layers.index(o_layer)

  for i in range(i_index, o_index+1):
    k_h, k_w = (1, 1)
    s_h, s_w = (1, 1)
    p_h, p_w = (0, 0)

    if hasattr(model.layers[i], "kernel_size"):
      kernel = model.layers[i].kernel_size

      if isinstance(kernel, int):
        kernel = [kernel, kernel]

      k_h, k_w = kernel[0], kernel[1]

    if hasattr(model.layers[i], "strides"):
      strides = model.layers[i].strides

      if isinstance(strides, int):
        strides = [strides, strides]

      s_h, s_w = strides[0], strides[1]

    if hasattr(model.layers[i], "padding"):
      padding = model.layers[i].padding

      if isinstance(padding, str):
        padding = [padding, padding]

      p_h = get_padding_value(padding[0], k_h)
      p_w = get_padding_value(padding[1], k_w)

    layers_h.append((k_h, s_h, p_h))
    layers_w.append((k_w, s_w, p_w))

  x_h = (i_layer.input.shape[1], 1, 1, 0.5)
  x_w = (i_layer.input.shape[2], 1, 1, 0.5)

  for l_h, l_w in zip(layers_h, layers_w):
    x_h = rf_computation_for_layer(l_h, x_h)
    x_w = rf_computation_for_layer(l_w, x_w)

  strides = (x_h[1], x_w[1])
  kernel = (x_h[2], x_w[2])
  padding = ("valid", "valid")

  return (strides, kernel, padding)
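The recurrence in `rf_computation_for_layer` tracks, per layer, the feature-map size, the jump between adjacent outputs, the receptive-field size, and the center of the first output. A standalone mirror (`rf_step` is an illustrative name) that can be checked by hand for a 3x3, stride-1, padding-1 convolution on a 32-pixel input, which keeps the size at 32 and grows the receptive field to 3:

```python
import math


def rf_step(layer, layer_in):
  """One step of the receptive-field recurrence, mirroring the function above."""
  k, s, p = layer
  n_in, j_in, r_in, start_in = layer_in
  n_out = int(math.floor((n_in + 2 * p - k) / s)) + 1
  if s == 1 and p == 1:  # 'same'-style shortcut used above
    n_out = n_in
  actual_p = (n_out - 1) * s - n_in + k
  p_l = math.floor(actual_p / 2)
  j_out = j_in * s                 # jump between adjacent outputs
  r_out = r_in + (k - 1) * j_in    # receptive-field size
  start_out = start_in + (int((k - 1) / 2) - p_l) * j_in
  return n_out, j_out, r_out, start_out
```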



================================================
FILE: experimental/lo/table/__init__.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from .utils import load
from .utils import load_csv
from .utils import load_pla


================================================
FILE: experimental/lo/table/parser.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Parses PLA format usig ply."""
from ply import yacc
from ply import lex
import numpy as np

_1 = 1
_0 = 2
_X = 3
_U = 0

NOT = {_0: _1, _1: _0, _X: _U, _U: _U}

class PLA:
  def __init__(self):
    self.pla_i = []
    self.pla_o = []

pla = PLA()

tokens = [
  "I",
  "O",
  "MV",
  "ILB",
  "OB",
  "P",
  "L",
  "E",
  "TYPE",
  "SYMBOL",
  "NUMBER",
  "NEWLINE"
]

t_ignore = " \t|"
t_I = r"\.[iI]"
t_O = r"\.[oO]"
t_MV = r"\.[mM][vV]"
t_ILB = r"\.[iI][lL][bB]"
t_OB = r"\.[oO][bB]"
t_P = r"\.[pP]"
t_L = r"\.[lL]"
t_E = r"\.[eE]"
t_TYPE = r"\.type"
t_SYMBOL = r"[a-zA-Z_][a-zA-Z0-9_\<\>\-\$]*"

def t_NUMBER(t):
  r"[\d\-]+"
  return t

def t_NEWLINE(t):
  r"\n+"
  t.lexer.lineno += t.value.count("\n")
  return t

def t_error(t):
  print("Illegal character '{}'".format(t.value))
  t.lexer.skip(1)

lex.lex()

def p_pla(p):
  """pla : pla_declarations pla_table pla_end"""

def p_pla_declarations(p):
  """pla_declarations : pla_declarations pla_declaration
                      | pla_declaration"""

def p_pla_declaration(p):
  """pla_declaration : I NUMBER NEWLINE
                     | O NUMBER NEWLINE
                     | P NUMBER NEWLINE
                     | MV number_list NEWLINE
                     | ILB symbol_list NEWLINE
                     | OB symbol_list NEWLINE
                     | L NUMBER symbol_list NEWLINE
                     | TYPE SYMBOL NEWLINE
  """
  token = p[1].lower()
  if token == ".i":
    pla.ni = int(p[2])
  elif token == ".o":
    pla.no = int(p[2])
  elif token == ".mv":
    pla.mv = [int(v) for v in p[2]]
  elif token == ".ilb":
    pla.ilb = p[2]
  elif token == ".ob":
    pla.ob = p[2]
  elif token == ".l":
    pla.label = p[2]
  elif token == ".type":
    pla.set_type = p[2]


def p_pla_table(p):
  """pla_table : pla_table number_symbol_list NEWLINE
               | number_symbol_list NEWLINE"""
  if len(p[1:]) == 3:
    line = "".join(p[2])
  else:
    line = "".join(p[1])

  assert hasattr(pla, "ni") and hasattr(pla, "no")

  # right now we only process binary functions

  line = [_1 if v == "1" else _0 if v == "0" else _X for v in line]

  pla.pla_i.append(line[0:pla.ni])
  pla.pla_o.append(line[pla.ni:])


def p_pla_end(p):
  """pla_end : E opt_new_line"""
  pass


def p_opt_new_line(p):
  """opt_new_line : NEWLINE
                  |
  """
  pass


def p_number_list(p):
  """number_list : number_list NUMBER
                 | NUMBER
  """
  if len(p[1:]) == 2:
    p[0] = p[1] + [p[2]]
  else:
    p[0] = [p[1]]


def p_symbol_list(p):
  """symbol_list : symbol_list SYMBOL
                 | SYMBOL
  """
  if len(p[1:]) == 2:
    p[0] = p[1] + [p[2]]
  else:
    p[0] = [p[1]]


def p_number_symbol_list(p):
  """number_symbol_list : number_symbol_list number_or_symbol
                        | number_or_symbol
  """
  if len(p[1:]) == 2:
    p[0] = p[1] + [p[2]]
  else:
    p[0] = [p[1]]


def p_number_or_symbol(p):
  """number_or_symbol : NUMBER
                      | SYMBOL
  """
  p[0] = p[1]


def p_error(p):
  print("Error text at {}".format(p)) #p.value))

yacc.yacc()

def get_tokens(fn):
  lex.input("".join(open(fn).readlines()))
  return lex.token

def parse(fn):
  yacc.parse("".join(open(fn).readlines()))

  pla.pla_i = np.array(pla.pla_i)
  pla.pla_o = np.array(pla.pla_o)

  return pla


================================================
FILE: experimental/lo/table/utils.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Reads and processes tables of PLAs and CSVs."""

from csv import reader
from csv import QUOTE_NONNUMERIC
from math import sqrt
import os
import pprint
from random import seed
from random import randrange
import sys

import numpy as np
from .parser import parse, _X, _0, _1


def str_column_to_float(dataset, column):
  """Converts string column to float."""
  for row in dataset:
    row[column] = float(row[column].strip())

def str_column_to_int(dataset, column, d_values):
  """Converts string column to int."""
  for row in dataset:
    v = int(row[column].strip())
    row[column] = v if not d_values else d_values[v]

def str_column_to_number(dataset, column):
  """Converts output to integer if possible or float."""

  class_values = [row[column] for row in dataset]
  unique = set(class_values)
  lookup = dict()
  is_symbolic = False
  for value in unique:
    try:
      # try int first
      lookup[value] = int(value)
    except ValueError:
      try:
        # if it fails, try float
        lookup[value] = float(value)
      except ValueError:
        # if it fails, it is symbolic
        is_symbolic = True
        break

  # best we can do is to assign unique numbers to the classes
  if is_symbolic:
    for i, value in enumerate(unique):
      lookup[value] = i

  # convert output to unique number
  for row in dataset:
    row[column] = lookup[row[column]]

  return lookup


def int2bin(v, bits):
  str_v = format((v & ((1<<bits)-1)), "#0" + str(bits+2) + "b")[2:]
  return [int(b) for b in str_v]
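`int2bin` returns the fixed-width two's-complement encoding of `v` as a bit list, MSB first. A standalone copy for illustration (renamed `to_bits` so it does not clash with the helper above):

```python
def to_bits(v, bits):
  """Fixed-width two's-complement bit list, MSB first, as int2bin computes it."""
  # mask to the requested width, then format with a zero-padded binary literal
  str_v = format(v & ((1 << bits) - 1), "#0" + str(bits + 2) + "b")[2:]
  return [int(b) for b in str_v]
```

For example, 5 in 4 bits is `0101`, and -3 wraps to `1101` (13 after masking with `0b1111`).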


def load_csv(filename):
  """Loads CSV file."""
  dataset = list()

  with open(filename, 'r') as file:
    csv_reader = reader(file, quoting=QUOTE_NONNUMERIC)
    for row in csv_reader:
      if not row:
        continue
      dataset.append(row)

  return np.array(dataset)


def load_pla(filename):
  """Loads PLA file."""
  dataset = list()
  pla = parse(filename)
  for i, o in zip(pla.pla_i, pla.pla_o):
    # Input cube: _1 maps to 1; _0 and don't-care (_X) both map to 0.
    i_s = [1 if v == _1 else 0 for v in i]
    # Pack the output bit-vector into a single integer, MSB first.
    o_s = [sum((1 << (len(o) - 1 - oo)) if o[oo] == _1 else 0
               for oo in range(len(o)))]
    dataset.append(i_s + o_s)
  dataset = np.array(dataset)
  return dataset


def load(filename):
  """Loads and decides if we will load PLA or CSV file based on suffix."""

  suffix_split = filename.split(".")

  if suffix_split[-1] == "pla":
    print("... loading pla")
    dataset = load_pla(filename)
  else:
    dataset = load_csv(filename)
  return dataset



================================================
FILE: experimental/lo/utils.py
================================================
# Copyright 2020 Google LLC
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Computes padding and quantization dictionary values."""

import numpy as np


def get_padding_value(padding, kernel):
  """Returns padding value for kernel."""

  if padding == "valid":
    return 0
  elif padding == "same":
    return kernel // 2
  elif padding == "full":
    return kernel - 1

  raise ValueError("accepted paddings are 'valid', 'same' or 'full', found " +
                   padding)
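The three padding values correspond to the usual stride-1 convolution conventions. A standalone sketch relating each value to the output length (the helper is restated, and `out_size` is a hypothetical companion added here for illustration):

```python
def get_padding_value(padding, kernel):
  # "valid": no padding; "same": preserve length for odd kernels;
  # "full": include every partial overlap of kernel and input.
  if padding == "valid":
    return 0
  elif padding == "same":
    return kernel // 2
  elif padding == "full":
    return kernel - 1
  raise ValueError("unknown padding: " + padding)

def out_size(n, kernel, pad):
  # Stride-1 convolution output length with `pad` zeros on each side.
  return n + 2 * pad - kernel + 1

assert out_size(8, 3, get_padding_value("valid", 3)) == 6
assert out_size(8, 3, get_padding_value("same", 3)) == 8
assert out_size(8, 3, get_padding_value("full", 3)) == 10
```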


def get_quantized_bits_dict(bits, ibits, sign=False, mode="bin"):
  """Returns map from floating values to bit encoding."""

  o_dict = {}

  n_bits = bits

  for b in range(1 << (bits - sign)):
    v = (1.0 * b) * (1 << ibits) / (1 << bits)
    if mode == "bin":
      b_str = bin(b)[2:]
      b_str = "0" * (n_bits - len(b_str)) + b_str
    else:  # mode == "dec":
      b_str = str(b)

    o_dict[v] = b_str

    if b > 0 and sign:
      if mode == "bin":
        b_str = bin(-b & ((1 << n_bits) - 1))[2:]
      else:  # mode == "dec"
        b_str = str(-b)

      o_dict[-v] = b_str

  if sign:
    v = (1.0 * (1 << (bits - sign))) * (1 << ibits) / (1 << bits)
    if mode == "bin":
      b_str = bin(-(1 << (bits - sign)) & ((1 << bits) - 1))[2:]
    else:
      b_str = str(-(1 << (bits - sign)))
    o_dict[-v] = b_str
  return o_dict
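The loop above implements plain two's-complement fixed point: code `b` represents `b * 2**ibits / 2**bits`, so the step is `2**(ibits - bits)`. A standalone sketch for `bits=3, ibits=1, sign=True`, hand-derived from the loop rather than imported from the module:

```python
# Each code b maps to b * 2**ibits / 2**bits; negatives use two's complement.
bits, ibits = 3, 1
step = 2.0 ** ibits / 2 ** bits           # quantization step: 0.25
codes = {}
for b in range(1 << (bits - 1)):          # non-negative codes 0..3
  codes[b * step] = format(b, "03b")
  if b > 0:
    codes[-b * step] = format(-b & 0b111, "03b")
codes[-(1 << (bits - 1)) * step] = "100"  # most negative value, -1.0

assert codes[0.0] == "000"
assert codes[0.25] == "001"
assert codes[-0.25] == "111"
assert min(codes) == -1.0 and max(codes) == 0.75
```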


def get_quantized_po2_dict(
    bits, max_exp, sign=False, make_smaller_zero=True, mode="bin"):
  """Returns map from floating values to bit encoding."""

  # If make_smaller_zero is set, the smallest magnitude is encoded as 000...0.
  # In "bin" mode this is required: mode == "bin" implies make_smaller_zero.
  assert mode != "bin" or make_smaller_zero

  o_dict = {}

  if max_exp > 0:
    v = 1.0
    if mode == "bin":
      b_str = "0" * bits
    else:
      b_str = "1"

    o_dict[v] = b_str

    if sign:
      v = -1.0
      if mode == "bin":
        b_str = "1" + "0"*(bits-sign)
      else:
        b_str = "-1"

      o_dict[v] = b_str

  for b in range(1, 1<<(bits - sign - 1)):
    v = np.power(2.0, -b)
    if mode == "bin":
      b_sign = "0" if sign else ""
      b_str = b_sign + bin((-b) & ((1 << (bits - sign + 1)) - 1))[3:]
    else:
      b_str = str(v)
    o_dict[v] = b_str

    if b <= max_exp:
      v = np.power(2.0, b)
      if mode == "bin":
        b_str = bin(b)[2:]
        b_str = b_sign + "0"*(bits - sign - len(b_str)) + b_str
      else:
        b_str = str(v)
      o_dict[v] = b_str

    if sign:
      v = -np.power(2.0, -b)
      if mode == "bin":
        b_sign = "1" if sign else ""
        b_str = b_sign + bin((-b) & ((1 << (bits - sign + 1)) - 1))[3:]
      else:
        b_str = str(v)
      o_dict[v] = b_str

      if b <= max_exp:
        v = -np.power(2.0, b)
        if mode == "bin":
          b_str = bin(b)[2:]
          b_str = b_sign + "0"*(bits - sign - len(b_str)) + b_str
        else:
          b_str = str(v)
        o_dict[v] = b_str

  b = 1 << (bits - sign - 1)
  v = np.power(2.0, -b)
  if mode == "bin":
    b_sign = "0" if sign else ""
    b_str = b_sign + bin((-b) & ((1 << (bits - sign + 1)) - 1))[3:]
  else:
    b_str = str(v)
  o_dict[v] = b_str

  smaller_mask = b_str

  if sign:
    v = -np.power(2.0, -b)
    if mode == "bin":
      b_sign = "1" if sign else ""
      b_str = b_sign + bin((-b) & ((1 << (bits - sign + 1)) - 1))[3:]
    else:
      b_str = str(v)
    o_dict[v] = b_str

  def invert_bit(bit, mask):
    """Inverts bits if mask is 1."""

    if mask == "0":
      return bit
    else:
      return "0" if bit == "1" else "1"

  if mode == "bin":
    if make_smaller_zero:
      for v in o_dict:
        o_dict[v] = "".join(
            invert_bit(bit, mask_bit)
            for bit, mask_bit in zip(o_dict[v], smaller_mask))
  else:
    keys_sorted = list(sorted(o_dict.keys()))
    if make_smaller_zero:
      min_positive_key = min([abs(v) for v in keys_sorted])
      min_positive_index = keys_sorted.index(min_positive_key)
    else:
      min_positive_index = 0
    for i, k in enumerate(keys_sorted):
      o_dict[k] = str(i - min_positive_index)

  return o_dict
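The po2 dictionary covers signed powers of two. A standalone sketch of the value set (magnitudes only, not the bit encodings) hand-derived from the loop bounds above; treat the exact set as an assumption rather than an authoritative restatement:

```python
def po2_values(bits, max_exp, sign=True):
  # Magnitudes generated by the loops above: 1.0 (if max_exp > 0),
  # 2**-b for b in 1..2**(bits-sign-1)-1, 2**b for b <= max_exp,
  # plus the smallest magnitude 2**-(2**(bits-sign-1)).
  mags = set()
  if max_exp > 0:
    mags.add(1.0)
  for b in range(1, 1 << (bits - sign - 1)):
    mags.add(2.0 ** -b)
    if b <= max_exp:
      mags.add(2.0 ** b)
  mags.add(2.0 ** -(1 << (bits - sign - 1)))
  return sorted(mags | {-m for m in mags}) if sign else sorted(mags)

vals = po2_values(bits=4, max_exp=1)
assert 2.0 in vals and -2.0 in vals  # 2**max_exp and its negation
assert 0.0625 in vals                # smallest magnitude, 2**-4
assert 0.0 not in vals               # zero is not representable in po2
```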


def get_ternary_dict(mode="bin"):
  """Returns map from floating values to bit encoding."""

  if mode == "bin":
    return {-1.0: "11", 0.0: "00", 1.0: "01"}
  else:
    return {-1.0: "-1", 0.0: "0", 1.0: "1"}


def get_binary_dict(symmetric=False, mode="bin"):
  """Returns map from floating values to bit encoding."""

  if mode == "bin":
    if symmetric:
      return {-1.0: "10", 1.0: "01"}
    else:
      return {0.0: "0", 1.0: "1"}
  else:
    if symmetric:
      return {-1.0: "-1", 1.0: "1"}
    else:
      return {0.0: "0", 1.0: "1"}


================================================
FILE: notebook/AutoQKeras.ipynb
================================================
{
 "cells": [
   {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "##### Copyright 2020 Google LLC\n",
    "#\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "# https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "QC9sVuNrzT-f"
   },
   "source": [
    "# Introduction\n",
    "\n",
    "In this notebook, we show how to quantize a model using AutoQKeras.\n",
    "\n",
    "As usual, let's first make sure we are using Python 3."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "height": 51
    },
    "colab_type": "code",
    "executionInfo": {
     "elapsed": 926,
     "status": "ok",
     "timestamp": 1591840345558,
     "user": {
      "displayName": "Claudionor Coelho",
      "photoUrl": "",
      "userId": "01084525977535968041"
     },
     "user_tz": 420
    },
    "id": "0sY-O2IfzdB3",
    "outputId": "1c5a4e7a-1003-4b56-a30a-ca6bc196f18b"
   },
   "outputs": [],
   "source": [
    "import sys\n",
    "print(sys.version)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "6V7FxYH0zfY0"
   },
   "source": [
    "Now, let's load some packages we will need to run AutoQKeras."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "wuVqOAcbz3Go"
   },
   "outputs": [],
   "source": [
    "import warnings\n",
    "warnings.filterwarnings(\"ignore\")\n",
    "\n",
    "import json\n",
    "import pprint\n",
    "import numpy as np\n",
    "import six\n",
    "import tempfile\n",
    "import tensorflow.compat.v2 as tf\n",
    "# V2 Behavior is necessary to use TF2 APIs before TF2 is default TF version internally.\n",
    "tf.enable_v2_behavior()\n",
    "from tensorflow.keras.optimizers import *\n",
    "\n",
    "from qkeras.autoqkeras import *\n",
    "from qkeras import *\n",
    "from qkeras.utils import model_quantize\n",
    "from qkeras.qtools import run_qtools\n",
    "from qkeras.qtools import settings as qtools_settings\n",
    "\n",
    "from tensorflow.keras.utils import to_categorical\n",
    "import tensorflow_datasets as tfds\n",
    "\n",
    "print(\"using tensorflow\", tf.__version__)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's define `get_data` and `get_model`, as you may not have standalone access to the examples directory inside autoqkeras."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_data(dataset_name, fast=False):\n",
    "  \"\"\"Returns dataset from tfds.\"\"\"\n",
    "  ds_train = tfds.load(name=dataset_name, split=\"train\", batch_size=-1)\n",
    "  ds_test = tfds.load(name=dataset_name, split=\"test\", batch_size=-1)\n",
    "\n",
    "  dataset = tfds.as_numpy(ds_train)\n",
    "  x_train, y_train = dataset[\"image\"].astype(np.float32), dataset[\"label\"]\n",
    "\n",
    "  dataset = tfds.as_numpy(ds_test)\n",
    "  x_test, y_test = dataset[\"image\"].astype(np.float32), dataset[\"label\"]\n",
    "\n",
    "  if len(x_train.shape) == 3:\n",
    "    x_train = x_train.reshape(x_train.shape + (1,))\n",
    "    x_test = x_test.reshape(x_test.shape + (1,))\n",
    "\n",
    "  x_train /= 256.0\n",
    "  x_test /= 256.0\n",
    "\n",
    "  x_mean = np.mean(x_train, axis=0)\n",
    "\n",
    "  x_train -= x_mean\n",
    "  x_test -= x_mean\n",
    "\n",
    "  nb_classes = np.max(y_train) + 1\n",
    "  y_train = to_categorical(y_train, nb_classes)\n",
    "  y_test = to_categorical(y_test, nb_classes)\n",
    "\n",
    "  print(x_train.shape[0], \"train samples\")\n",
    "  print(x_test.shape[0], \"test samples\")\n",
    "  return (x_train, y_train), (x_test, y_test)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from tensorflow.keras.initializers import *\n",
    "from tensorflow.keras.layers import *\n",
    "from tensorflow.keras.models import Model\n",
    "from tensorflow.keras.optimizers import *\n",
    "\n",
    "class ConvBlockNetwork(object):\n",
    "  \"\"\"Creates Convolutional block type of network.\"\"\"\n",
    "\n",
    "  def __init__(\n",
    "      self,\n",
    "      shape,\n",
    "      nb_classes,\n",
    "      kernel_size,\n",
    "      filters,\n",
    "      dropout_rate=0.0,\n",
    "      with_maxpooling=True,\n",
    "      with_batchnorm=True,\n",
    "      kernel_initializer=\"he_normal\",\n",
    "      bias_initializer=\"zeros\",\n",
    "      use_separable=False,\n",
    "      use_xnornet_trick=False,\n",
    "      all_conv=False\n",
    "  ):\n",
    "    \"\"\"Creates class.\n",
    "\n",
    "    Args:\n",
    "      shape: shape of inputs.\n",
    "      nb_classes: number of output classes.\n",
    "      kernel_size: kernel_size of network.\n",
    "      filters: sizes of filters (if entry is a list, we create a block).\n",
    "      dropout_rate: dropout rate if > 0.\n",
    "      with_maxpooling: if true, use maxpooling.\n",
    "      with_batchnorm: with BatchNormalization.\n",
    "      kernel_initializer: kernel_initializer.\n",
    "      bias_initializer: bias and beta initializer.\n",
    "      use_separable: if \"dsp\", do conv's 1x3 + 3x1. If \"mobilenet\",\n",
    "        use MobileNet separable convolution. If False or \"none\", perform single\n",
    "        conv layer.\n",
    "      use_xnornet_trick: if true, use bn+act after max pool (XNOR-Net\n",
    "        ordering) so binary activations do not saturate at the largest value.\n",
    "      all_conv: if true, implements all convolutional network.\n",
    "    \"\"\"\n",
    "    self.shape = shape\n",
    "    self.nb_classes = nb_classes\n",
    "    self.kernel_size = kernel_size\n",
    "    self.filters = filters\n",
    "    self.dropout_rate = dropout_rate\n",
    "    self.with_maxpooling = with_maxpooling\n",
    "    self.with_batchnorm = with_batchnorm\n",
    "    self.kernel_initializer = kernel_initializer\n"
SYMBOL INDEX (1181 symbols across 104 files)

FILE: examples/example_act.py
  function main (line 43) | def main():

FILE: examples/example_mnist_bn.py
  class LearningRateAdjuster (line 52) | class LearningRateAdjuster(callbacks.Callback):
    method __init__ (line 53) | def __init__(self):
    method on_epoch_end (line 57) | def on_epoch_end(self, epochs, logs):

FILE: examples/example_mnist_prune.py
  function build_model (line 51) | def build_model(input_shape):
  function build_layerwise_model (line 81) | def build_layerwise_model(input_shape, **pruning_params):
  function train_and_save (line 119) | def train_and_save(model, x_train, y_train, x_test, y_test):
  function main (line 163) | def main():

FILE: examples/example_qdense.py
  function QDenseModel (line 49) | def QDenseModel(weights_f, load_weights=False):
  function UseNetwork (line 77) | def UseNetwork(weights_f, load_weights=False):
  function ParserArgs (line 121) | def ParserArgs():

FILE: examples/example_qoctave.py
  function create_model (line 30) | def create_model():
  function customLoss (line 157) | def customLoss(y_true,y_pred):

FILE: examples/example_ternary.py
  function _stochastic_rounding (line 32) | def _stochastic_rounding(x, precision, resolution, delta):
  function _ternary (line 67) | def _ternary(x, sto=False):
  function main (line 87) | def main(argv):

FILE: experimental/lo/compress.py
  class Compressor (line 23) | class Compressor:
    method __init__ (line 26) | def __init__(self, hash_only_input=False):
    method add_entry (line 30) | def add_entry(self, table_in, table_out=""):
    method has_entry (line 44) | def has_entry(self, table_in, table_out=""):
    method __call__ (line 65) | def __call__(self):

FILE: experimental/lo/conv2d.py
  function parallel_index_table (line 37) | def parallel_index_table(
  function parallel_compress_output_table (line 134) | def parallel_compress_output_table(
  function optimize_conv2d_logic (line 181) | def optimize_conv2d_logic(

FILE: experimental/lo/dense.py
  function parallel_index_table (line 37) | def parallel_index_table(
  function parallel_compress_output_table (line 112) | def parallel_compress_output_table(
  function optimize_dense_logic (line 156) | def optimize_dense_logic(

FILE: experimental/lo/generate_rf_code.py
  function gen_random_tree_regressor (line 30) | def gen_random_tree_regressor(
  function entry_to_hex (line 194) | def entry_to_hex(entry, max_value, size, is_cc):
  function gen_random_tree_classifier (line 207) | def gen_random_tree_classifier(
  function gen_random_forest (line 434) | def gen_random_forest(
  function gen_testbench_sv (line 721) | def gen_testbench_sv(rf, name, bits, is_neg, o_bits, o_is_neg, x, y, p, ...
  function gen_testbench_cc (line 770) | def gen_testbench_cc(rf, name, bits, is_neg, o_bits, o_is_neg, x, y, p, ...

FILE: experimental/lo/optimizer.py
  function file_compress (line 43) | def file_compress(fin, fout):
  function mp_rf_optimizer_func (line 62) | def mp_rf_optimizer_func(fn_tuple):
  function mp_abc_optimizer_func (line 194) | def mp_abc_optimizer_func(fn):
  function run_abc_optimizer (line 256) | def run_abc_optimizer(files):
  function run_rf_optimizer (line 270) | def run_rf_optimizer(files, flags, file_suffix="cc"):

FILE: experimental/lo/random_forest/gen_random_tree.py
  function gen_random_tree_cc (line 26) | def gen_random_tree_cc(tree):

FILE: experimental/lo/random_forest/parser.py
  class PLA (line 28) | class PLA:
    method __init__ (line 29) | def __init__(self):
  function t_NUMBER (line 62) | def t_NUMBER(t):
  function t_NEWLINE (line 66) | def t_NEWLINE(t):
  function t_error (line 71) | def t_error(t):
  function p_pla (line 77) | def p_pla(p):
  function p_pla_declarations (line 80) | def p_pla_declarations(p):
  function p_pla_declaration (line 84) | def p_pla_declaration(p):
  function p_pla_table (line 111) | def p_pla_table(p):
  function p_pla_end (line 129) | def p_pla_end(p):
  function p_opt_new_line (line 134) | def p_opt_new_line(p):
  function p_number_list (line 141) | def p_number_list(p):
  function p_symbol_list (line 151) | def p_symbol_list(p):
  function p_number_symbol_list (line 161) | def p_number_symbol_list(p):
  function p_number_or_symbol (line 171) | def p_number_or_symbol(p):
  function p_error (line 178) | def p_error(p):
  function get_tokens (line 183) | def get_tokens(fn):
  function parse (line 187) | def parse(fn):

FILE: experimental/lo/random_forest/random_forest.py
  function fit_parallel (line 24) | def fit_parallel(max_depth, min_size, sample, mask_stuck_at_values):
  class RandomForest (line 32) | class RandomForest:
    method __init__ (line 33) | def __init__(
    method save (line 48) | def save(model, filename):
    method load (line 57) | def load(filename):
    method subsample (line 67) | def subsample(self, dataset):
    method fit (line 75) | def fit(self, dataset, verbose=False):
    method predict_row (line 105) | def predict_row(self, row):
    method predict (line 114) | def predict(self, data):
    method gen_code (line 122) | def gen_code(self, filename, func_name):

FILE: experimental/lo/random_forest/random_tree.py
  class RandomTree (line 29) | class RandomTree:
    method __init__ (line 30) | def __init__(self, max_depth, min_size):
    method split_into_groups (line 35) | def split_into_groups(self, index, value, dataset):
    method gini_index (line 42) | def gini_index(self, groups, classes):
    method select_best_split (line 63) | def select_best_split(self, dataset):
    method select_terminal (line 86) | def select_terminal(self, group):
    method split_node (line 90) | def split_node(self, node, depth):
    method create_mask (line 124) | def create_mask(self, dataset):
    method fit (line 127) | def fit(self, dataset, mask_stuck_at_values=False):
    method predict_internal (line 141) | def predict_internal(self, node, data):
    method predict_row (line 154) | def predict_row(self, row):
    method predict (line 158) | def predict(self, data):
    method gen_code_internal (line 161) | def gen_code_internal(self, node, var_dict, n_offset):
    method gen_code (line 181) | def gen_code(self, n_offset=0):

FILE: experimental/lo/random_forest/utils.py
  function str_column_to_float (line 30) | def str_column_to_float(dataset, column):
  function str_column_to_int (line 35) | def str_column_to_int(dataset, column):
  function str_column_to_number (line 40) | def str_column_to_number(dataset, column):
  function load_csv (line 72) | def load_csv(filename):
  function load_pla (line 93) | def load_pla(filename):
  function load (line 106) | def load(filename):

FILE: experimental/lo/receptive.py
  function print_rf (line 25) | def print_rf(layer_name, x):
  function rf_computation_for_layer (line 33) | def rf_computation_for_layer(layer, layer_in):
  function model_to_receptive_field (line 55) | def model_to_receptive_field(model, i_name, o_name):

FILE: experimental/lo/table/parser.py
  class PLA (line 28) | class PLA:
    method __init__ (line 29) | def __init__(self):
  function t_NUMBER (line 62) | def t_NUMBER(t):
  function t_NEWLINE (line 66) | def t_NEWLINE(t):
  function t_error (line 71) | def t_error(t):
  function p_pla (line 77) | def p_pla(p):
  function p_pla_declarations (line 80) | def p_pla_declarations(p):
  function p_pla_declaration (line 84) | def p_pla_declaration(p):
  function p_pla_table (line 111) | def p_pla_table(p):
  function p_pla_end (line 129) | def p_pla_end(p):
  function p_opt_new_line (line 134) | def p_opt_new_line(p):
  function p_number_list (line 141) | def p_number_list(p):
  function p_symbol_list (line 151) | def p_symbol_list(p):
  function p_number_symbol_list (line 161) | def p_number_symbol_list(p):
  function p_number_or_symbol (line 171) | def p_number_or_symbol(p):
  function p_error (line 178) | def p_error(p):
  function get_tokens (line 183) | def get_tokens(fn):
  function parse (line 187) | def parse(fn):

FILE: experimental/lo/table/utils.py
  function str_column_to_float (line 31) | def str_column_to_float(dataset, column):
  function str_column_to_int (line 36) | def str_column_to_int(dataset, column, d_values):
  function str_column_to_number (line 42) | def str_column_to_number(dataset, column):
  function int2bin (line 74) | def int2bin(v, bits):
  function load_csv (line 79) | def load_csv(filename):
  function load_pla (line 94) | def load_pla(filename):
  function load (line 107) | def load(filename):

FILE: experimental/lo/utils.py
  function get_padding_value (line 21) | def get_padding_value(padding, kernel):
  function get_quantized_bits_dict (line 35) | def get_quantized_bits_dict(bits, ibits, sign=False, mode="bin"):
  function get_quantized_po2_dict (line 70) | def get_quantized_po2_dict(
  function get_ternary_dict (line 183) | def get_ternary_dict(mode="bin"):
  function get_binary_dict (line 192) | def get_binary_dict(symmetric=False, mode="bin"):

FILE: qkeras/autoqkeras/autoqkeras_internal.py
  class AutoQKHyperModel (line 80) | class AutoQKHyperModel(HyperModel):
    method __init__ (line 112) | def __init__(
    method _adjust_limit (line 177) | def _adjust_limit(self, default):
    method _n (line 195) | def _n(self, name, s_list):
    method _get_quantizer (line 199) | def _get_quantizer(self, hp, head, layer_name, layer_class_name,
    method quantize_model (line 327) | def quantize_model(self, hp):
    method build (line 563) | def build(self, hp):
    method adjusted_score (line 694) | def adjusted_score(hyper_model, delta, metric_function=None):
    method trial_size_metric (line 726) | def trial_size_metric(trial_size):
  class AutoQKeras (line 732) | class AutoQKeras:
    method __init__ (line 765) | def __init__(
    method _has_earlystopping (line 891) | def _has_earlystopping(self, callbacks):
    method history (line 901) | def history(self, number_of_trials=-1):
    method fit (line 947) | def fit(self, *fit_args, **fit_kwargs):
    method get_best_lr (line 970) | def get_best_lr(qmodel):
    method get_best_model (line 974) | def get_best_model(self):
    method get_learning_rate (line 983) | def get_learning_rate(self):
  class AutoQKerasScheduler (line 987) | class AutoQKerasScheduler:
    method __init__ (line 1019) | def __init__(
    method get_next_block (line 1115) | def get_next_block(self, overwrite):
    method get_limit (line 1129) | def get_limit(self, model, pattern):
    method fit (line 1160) | def fit(self, *fit_args, **fit_kwargs):
    method compute_block_costs (line 1263) | def compute_block_costs(self, patterns, model):
    method retrieve_max_block (line 1301) | def retrieve_max_block(self):
    method get_history (line 1305) | def get_history(self):
    method get_best_model (line 1309) | def get_best_model(self):
    method get_learning_rate (line 1321) | def get_learning_rate(self):

FILE: qkeras/autoqkeras/examples/run/get_data.py
  function get_data (line 24) | def get_data(dataset_name, fast=False):

FILE: qkeras/autoqkeras/examples/run/get_model.py
  function get_model (line 20) | def get_model(dataset):

FILE: qkeras/autoqkeras/examples/run/networks/conv_block.py
  class ConvBlockNetwork (line 34) | class ConvBlockNetwork(object):
    method __init__ (line 37) | def __init__(
    method build (line 82) | def build(self):

FILE: qkeras/autoqkeras/forgiving_metrics/forgiving_bits.py
  class ForgivingFactorBits (line 25) | class ForgivingFactorBits(ForgivingFactor):
    method __init__ (line 28) | def __init__(
    method _param_size (line 40) | def _param_size(self, layer):
    method _act_size (line 80) | def _act_size(self, layer):
    method compute_model_size (line 149) | def compute_model_size(self, model):
    method get_reference (line 177) | def get_reference(self, model):
    method get_reference_stats (line 187) | def get_reference_stats(self):
    method get_trial (line 190) | def get_trial(self, model):
    method get_total_factor (line 201) | def get_total_factor(self):
    method print_stats (line 207) | def print_stats(self):

FILE: qkeras/autoqkeras/forgiving_metrics/forgiving_energy.py
  class ForgivingFactorPower (line 26) | class ForgivingFactorPower(ForgivingFactor):
    method __init__ (line 29) | def __init__(self, delta_p, delta_n, rate, stress=1.0, **kwargs):
    method get_reference (line 107) | def get_reference(self, model):
    method get_trial (line 136) | def get_trial(self, model):
    method get_total_factor (line 164) | def get_total_factor(self):
    method get_reference_stats (line 168) | def get_reference_stats(self):
    method get_trial_stats (line 171) | def get_trial_stats(self):
    method print_stats (line 174) | def print_stats(self, verbosity=0):

FILE: qkeras/autoqkeras/forgiving_metrics/forgiving_factor.py
  class ForgivingFactor (line 22) | class ForgivingFactor:
    method __init__ (line 25) | def __init__(self, delta_p, delta_n, rate):
    method get_reference (line 30) | def get_reference(self, model):
    method get_trial (line 35) | def get_trial(self, model, schema):
    method delta (line 40) | def delta(self):

FILE: qkeras/autoqkeras/tests/test_forgiving_factor.py
  function get_model (line 26) | def get_model():
  function test_forgiving_factor_bits (line 48) | def test_forgiving_factor_bits():
  function test_new_forgiving_factor (line 90) | def test_new_forgiving_factor():

FILE: qkeras/autoqkeras/utils.py
  function print_qmodel_summary (line 25) | def print_qmodel_summary(q_model):
  function get_quantization_dictionary (line 69) | def get_quantization_dictionary(q_model):
  function save_quantization_dict (line 80) | def save_quantization_dict(fn, q_model):

FILE: qkeras/b2t.py
  function BinaryToThermometer (line 22) | def BinaryToThermometer(

FILE: qkeras/base_quantizer.py
  function _create_variable_name (line 20) | def _create_variable_name(attr_name, var_name=None):
  class BaseQuantizer (line 40) | class BaseQuantizer(tf.Module):
    method __init__ (line 46) | def __init__(self):
    method build (line 49) | def build(self, var_name=None, use_variables=False):
    method _set_trainable_parameter (line 60) | def _set_trainable_parameter(self):
    method update_qnoise_factor (line 63) | def update_qnoise_factor(self, qnoise_factor):
    method variables (line 82) | def variables(self):
    method trainable_variables (line 87) | def trainable_variables(self):
    method non_trainable_variables (line 92) | def non_trainable_variables(self):

FILE: qkeras/bn_folding_utils.py
  function convert_folded_layer_to_unfolded (line 35) | def convert_folded_layer_to_unfolded(layer):
  function unfold_model (line 80) | def unfold_model(model):
  function populate_bias_quantizer_from_accumulator (line 143) | def populate_bias_quantizer_from_accumulator(model, source_quantizers):

FILE: qkeras/callbacks.py
  class QNoiseScheduler (line 26) | class QNoiseScheduler(tf.keras.callbacks.Callback):
    method __init__ (line 35) | def __init__(self,
    method calculate_qnoise_factor (line 84) | def calculate_qnoise_factor(self, freq):
    method set_qnoise_factor (line 104) | def set_qnoise_factor(self, quantizer, qnoise_factor):
    method set_quantizers (line 112) | def set_quantizers(self):
    method get_quantizers (line 132) | def get_quantizers(self, model):
    method update_qnoise_factor (line 154) | def update_qnoise_factor(self, freq):
    method on_train_begin (line 171) | def on_train_begin(self, logs=None):
    method on_epoch_begin (line 177) | def on_epoch_begin(self, epoch, logs=None):
    method on_epoch_end (line 181) | def on_epoch_end(self, epoch, logs=None):
    method on_train_batch_begin (line 186) | def on_train_batch_begin(self, batch, logs=None):

FILE: qkeras/codebook.py
  function create_in_out_table (line 28) | def create_in_out_table(km, quantizer):
  function activation_compression (line 47) | def activation_compression(model, compile_config, activation_indexes, bits,
  function weight_compression (line 120) | def weight_compression(weights, bits, axis=0, quantizer=None):
  function two_tier_embedding_compression (line 159) | def two_tier_embedding_compression(embeddings, bits, quantizer=None):

FILE: qkeras/estimate.py
  function analyze_accumulator (line 57) | def analyze_accumulator(in_model, x, verbose=False):
  function analyze_accumulator_from_sample (line 155) | def analyze_accumulator_from_sample(
  function get_quant_mode (line 226) | def get_quant_mode(quant):
  function get_operation_type (line 274) | def get_operation_type(layer, output_cache):
  function create_activation_cache (line 330) | def create_activation_cache(model):
  function extract_model_operations (line 373) | def extract_model_operations(in_model):
  function print_qstats (line 625) | def print_qstats(model):

FILE: qkeras/experimental/quantizers/quantizers_po2.py
  function _update_ema_variable (line 48) | def _update_ema_variable(variable, new_val, ema_decay, is_initialized,
  function _get_scaling_axis (line 75) | def _get_scaling_axis(scale_axis, len_axis):
  function _get_msqe_scale (line 100) | def _get_msqe_scale(x,
  class BaseQuantizerPO2 (line 157) | class BaseQuantizerPO2(Layer):  # pylint: disable=invalid-name
    method __init__ (line 190) | def __init__(self,
    method build (line 245) | def build(self, input_shape):
    method call (line 330) | def call(self, inputs, msqe_weight=None):
    method _quantize (line 355) | def _quantize(self, inputs, msqe_weight=None):
    method _update_second_moments_msqe_weight (line 392) | def _update_second_moments_msqe_weight(self, input_quantized, inputs):
    method _get_scale (line 427) | def _get_scale(self, inputs=None, reduce_axes=None, msqe_weight=None):
    method _get_init_scale_exponent (line 444) | def _get_init_scale_exponent(self, inputs):
    method _get_outlier_mask (line 457) | def _get_outlier_mask(self, inputs):
    method _get_msqe_weight (line 469) | def _get_msqe_weight(self, inputs=None):
    method _get_stable_scale (line 506) | def _get_stable_scale(self, scale):
    method _update_stable_scale_exponent (line 537) | def _update_stable_scale_exponent(self, scale, should_update, is_initi...
    method _initialize_scale_exponent (line 562) | def _initialize_scale_exponent(self, inputs):
    method _get_clipped_inputs_mask (line 579) | def _get_clipped_inputs_mask(self, inputs, scale):
    method _get_scale_axis (line 597) | def _get_scale_axis(self, input_shape):
    method _get_scaled_axes (line 612) | def _get_scaled_axes(self, scale_axis, input_shape):
    method _clip_quant (line 628) | def _clip_quant(self, inputs):
    method _round_quant (line 639) | def _round_quant(self, inputs):
    method _simple_quantize (line 650) | def _simple_quantize(self, inputs, scale, should_return_q=False):
    method _get_po2_scale (line 669) | def _get_po2_scale(self, scale):
    method _get_po2_scale_exponent (line 680) | def _get_po2_scale_exponent(self, scale):
    method _calculate_msqe (line 692) | def _calculate_msqe(self, x, xq, reduce_axes=None, msqe_weight=None):
    method _calculate_msqe_inputs (line 713) | def _calculate_msqe_inputs(self,
    method _least_squares_msqe_scale (line 735) | def _least_squares_msqe_scale(self,
    method _line_search_msqe_scale (line 785) | def _line_search_msqe_scale(self,
    method _optimize_msqe_scale (line 832) | def _optimize_msqe_scale(self,
    method max (line 889) | def max(self):
    method min (line 896) | def min(self):
  class quantized_bits_learnable_po2 (line 907) | class quantized_bits_learnable_po2(BaseQuantizerPO2):  # pylint: disable...
    method __init__ (line 948) | def __init__(self,
    method __str__ (line 990) | def __str__(self):
    method build (line 1021) | def build(self, input_shape):
    method _get_init_scale_exponent (line 1029) | def _get_init_scale_exponent(self, inputs):
    method _get_outlier_mask (line 1049) | def _get_outlier_mask(self, inputs):
    method _get_scale (line 1070) | def _get_scale(self, inputs=None, reduce_axes=None, msqe_weight=None):
    method msqe_round (line 1108) | def msqe_round(self,
    method get_config (line 1153) | def get_config(self):
  class quantized_bits_msqe_po2 (line 1177) | class quantized_bits_msqe_po2(BaseQuantizerPO2):  # pylint: disable=inva...
    method __init__ (line 1211) | def __init__(self,
    method __str__ (line 1249) | def __str__(self):
    method _get_init_scale_exponent (line 1280) | def _get_init_scale_exponent(self, inputs):
    method _get_outlier_mask (line 1294) | def _get_outlier_mask(self, inputs):
    method _get_scale (line 1312) | def _get_scale(self, inputs=None, reduce_axes=None, msqe_weight=None):
    method get_config (line 1347) | def get_config(self):

FILE: qkeras/qconv2d_batchnorm.py
  class QConv2DBatchnorm (line 37) | class QConv2DBatchnorm(QConv2D):
    method __init__ (line 40) | def __init__(
    method build (line 150) | def build(self, input_shape):
    method call (line 160) | def call(self, inputs, training=None):
    method get_config (line 303) | def get_config(self):
    method get_quantization_config (line 318) | def get_quantization_config(self):
    method get_quantizers (line 326) | def get_quantizers(self):
    method get_folded_weights (line 329) | def get_folded_weights(self):

FILE: qkeras/qconvolutional.py
  function deconv_output_length (line 47) | def deconv_output_length(
  class QConv1D (line 100) | class QConv1D(Conv1D, PrunableLayer):
    method __init__ (line 118) | def __init__(self,
    method call (line 193) | def call(self, inputs):
    method get_config (line 220) | def get_config(self):
    method get_quantization_config (line 234) | def get_quantization_config(self):
    method get_quantizers (line 245) | def get_quantizers(self):
    method get_prunable_weights (line 248) | def get_prunable_weights(self):
  class QConv2D (line 252) | class QConv2D(Conv2D, PrunableLayer):
    method __init__ (line 271) | def __init__(
    method convolution_op (line 366) | def convolution_op(self, inputs, kernel):
    method _jit_compiled_convolution_op (line 377) | def _jit_compiled_convolution_op(self, inputs, kernel):
    method call (line 380) | def call(self, inputs):
    method get_config (line 419) | def get_config(self):
    method from_config (line 435) | def from_config(cls, config):
    method get_quantization_config (line 442) | def get_quantization_config(self):
    method get_quantizers (line 453) | def get_quantizers(self):
    method get_prunable_weights (line 456) | def get_prunable_weights(self):
  class QConv2DTranspose (line 460) | class QConv2DTranspose(Conv2DTranspose, PrunableLayer):
    method __init__ (line 474) | def __init__(self,
    method call (line 543) | def call(self, inputs):
    method get_config (line 613) | def get_config(self):
    method get_quantizers (line 625) | def get_quantizers(self):
    method get_prunable_weights (line 628) | def get_prunable_weights(self):
  class QSeparableConv1D (line 632) | class QSeparableConv1D(SeparableConv1D, PrunableLayer):
    method __init__ (line 647) | def __init__(self,
    method call (line 734) | def call(self, inputs):
    method get_config (line 789) | def get_config(self):
    method get_quantizers (line 804) | def get_quantizers(self):
    method get_prunable_weights (line 807) | def get_prunable_weights(self):
  class QSeparableConv2D (line 811) | class QSeparableConv2D(SeparableConv2D, PrunableLayer):
    method __init__ (line 826) | def __init__(self,
    method call (line 913) | def call(self, inputs):
    method get_config (line 951) | def get_config(self):
    method get_quantizers (line 966) | def get_quantizers(self):
    method get_prunable_weights (line 969) | def get_prunable_weights(self):
  class QDepthwiseConv2D (line 973) | class QDepthwiseConv2D(DepthwiseConv2D, PrunableLayer):
    method __init__ (line 991) | def __init__(self,
    method build (line 1068) | def build(self, input_shape):
    method call (line 1105) | def call(self, inputs, training=None):
    method get_config (line 1132) | def get_config(self):
    method get_quantization_config (line 1158) | def get_quantization_config(self):
    method get_quantizers (line 1169) | def get_quantizers(self):
    method get_prunable_weights (line 1172) | def get_prunable_weights(self):
  function QMobileNetSeparableConv2D (line 1176) | def QMobileNetSeparableConv2D(

FILE: qkeras/qdepthwise_conv2d_transpose.py
  class QDepthwiseConv2DTranspose (line 31) | class QDepthwiseConv2DTranspose(Conv2DTranspose):
    method __init__ (line 44) | def __init__(
    method _get_input_axis (line 97) | def _get_input_axis(self):
    method _get_input_dims (line 105) | def _get_input_dims(self, input_shape):
    method _get_output_size (line 115) | def _get_output_size(
    method build (line 157) | def build(self, input_shape):
    method compute_final_output_shape (line 198) | def compute_final_output_shape(self, input_shape, kernel_size, strides):
    method conv_transpose_op (line 234) | def conv_transpose_op(
    method call (line 330) | def call(self, inputs):
    method get_config (line 351) | def get_config(self):
    method get_quantizers (line 374) | def get_quantizers(self):
    method get_prunable_weights (line 381) | def get_prunable_weights(self):

FILE: qkeras/qdepthwiseconv2d_batchnorm.py
  class QDepthwiseConv2DBatchnorm (line 31) | class QDepthwiseConv2DBatchnorm(QDepthwiseConv2D):
    method __init__ (line 34) | def __init__(
    method build (line 151) | def build(self, input_shape):
    method call (line 161) | def call(self, inputs, training=None):
    method get_config (line 308) | def get_config(self):
    method get_quantization_config (line 323) | def get_quantization_config(self):
    method get_quantizers (line 331) | def get_quantizers(self):
    method get_folded_weights (line 334) | def get_folded_weights(self):

FILE: qkeras/qlayers.py
  function get_auto_range_constraint_initializer (line 61) | def get_auto_range_constraint_initializer(quantizer, constraint, initial...
  class QInitializer (line 89) | class QInitializer(Initializer):
    method __init__ (line 92) | def __init__(self, initializer, use_scale, quantizer):
    method __call__ (line 102) | def __call__(self, shape, dtype=None):
    method get_config (line 129) | def get_config(self):
    method from_config (line 137) | def from_config(cls, config):
  class QActivation (line 150) | class QActivation(Layer, PrunableLayer):
    method __init__ (line 156) | def __init__(self, activation, **kwargs):
    method call (line 179) | def call(self, inputs):
    method get_config (line 182) | def get_config(self):
    method from_config (line 188) | def from_config(cls, config):
    method get_quantization_config (line 203) | def get_quantization_config(self):
    method compute_output_shape (line 206) | def compute_output_shape(self, input_shape):
    method get_prunable_weights (line 209) | def get_prunable_weights(self):
  class QAdaptiveActivation (line 213) | class QAdaptiveActivation(Layer, PrunableLayer):
    method __init__ (line 221) | def __init__(self,
    method build (line 345) | def build(self, input_shape):
    method call (line 391) | def call(self, inputs, training=False):
    method get_weights (line 473) | def get_weights(self):
    method set_weights (line 477) | def set_weights(self, weights):
    method get_config (line 480) | def get_config(self):
    method get_quantization_config (line 496) | def get_quantization_config(self):
    method compute_output_shape (line 500) | def compute_output_shape(self, input_shape):
    method get_prunable_weights (line 503) | def get_prunable_weights(self):
  class Clip (line 512) | class Clip(Constraint):
    method __init__ (line 523) | def __init__(self, min_value=0.0, max_value=1.0,
    method __call__ (line 535) | def __call__(self, w):
    method get_config (line 544) | def get_config(self):
    method from_config (line 549) | def from_config(cls, config):
  class QDense (line 563) | class QDense(Dense, PrunableLayer):
    method __init__ (line 580) | def __init__(self,
    method call (line 647) | def call(self, inputs):
    method compute_output_shape (line 664) | def compute_output_shape(self, input_shape):
    method get_config (line 671) | def get_config(self):
    method get_quantization_config (line 711) | def get_quantization_config(self):
    method get_quantizers (line 722) | def get_quantizers(self):
    method get_prunable_weights (line 725) | def get_prunable_weights(self):
  function get_constraint (line 729) | def get_constraint(identifier, quantizer):
  function get_initializer (line 748) | def get_initializer(identifier):

FILE: qkeras/qmac.py
  class QScaleShift (line 31) | class QScaleShift(tf.keras.layers.Layer, PrunableLayer):
    method __init__ (line 49) | def __init__(self,
    method build (line 93) | def build(self, input_shape):
    method call (line 108) | def call(self, inputs):
    method get_config (line 125) | def get_config(self):
    method get_quantization_config (line 154) | def get_quantization_config(self):
    method get_quantizers (line 164) | def get_quantizers(self):
    method get_prunable_weights (line 167) | def get_prunable_weights(self):

FILE: qkeras/qnormalization.py
  class QBatchNormalization (line 45) | class QBatchNormalization(BatchNormalization, PrunableLayer):
    method __init__ (line 54) | def __init__(
    method call (line 178) | def call(self, inputs, training=None):
    method get_config (line 304) | def get_config(self):
    method compute_output_shape (line 356) | def compute_output_shape(self, input_shape):
    method get_quantizers (line 359) | def get_quantizers(self):
    method get_prunable_weights (line 362) | def get_prunable_weights(self):

FILE: qkeras/qoctave.py
  function GetActivationSuffix (line 36) | def GetActivationSuffix(activation):
  function QOctaveConv2D (line 57) | def QOctaveConv2D(
  function OctaveConv2D (line 369) | def OctaveConv2D(

FILE: qkeras/qpooling.py
  class QAveragePooling2D (line 28) | class QAveragePooling2D(AveragePooling2D):
    method __init__ (line 31) | def __init__(self, pool_size=(2, 2),
    method call (line 56) | def call(self, inputs):
    method get_config (line 108) | def get_config(self):
    method get_quantization_config (line 120) | def get_quantization_config(self):
    method get_quantizers (line 128) | def get_quantizers(self):
  class QGlobalAveragePooling2D (line 132) | class QGlobalAveragePooling2D(GlobalAveragePooling2D):
    method __init__ (line 135) | def __init__(self, data_format=None,
    method compute_pooling_area (line 151) | def compute_pooling_area(self, input_shape):
    method call (line 159) | def call(self, inputs):
    method get_config (line 205) | def get_config(self):
    method get_quantization_config (line 217) | def get_quantization_config(self):
    method get_quantizers (line 225) | def get_quantizers(self):

FILE: qkeras/qrecurrent.py
  class QSimpleRNNCell (line 46) | class QSimpleRNNCell(SimpleRNNCell):
    method __init__ (line 63) | def __init__(self,
    method call (line 142) | def call(self, inputs, states, training=None):
    method get_config (line 186) | def get_config(self):
  class QSimpleRNN (line 205) | class QSimpleRNN(RNN, PrunableLayer):
    method __init__ (line 225) | def __init__(self,
    method call (line 293) | def call(self, inputs, mask=None, training=None, initial_state=None):
    method get_quantizers (line 298) | def get_quantizers(self):
    method get_prunable_weights (line 301) | def get_prunable_weights(self):
    method units (line 305) | def units(self):
    method activation (line 309) | def activation(self):
    method use_bias (line 313) | def use_bias(self):
    method kernel_initializer (line 317) | def kernel_initializer(self):
    method recurrent_initializer (line 321) | def recurrent_initializer(self):
    method bias_initializer (line 325) | def bias_initializer(self):
    method kernel_regularizer (line 329) | def kernel_regularizer(self):
    method recurrent_regularizer (line 333) | def recurrent_regularizer(self):
    method bias_regularizer (line 337) | def bias_regularizer(self):
    method kernel_constraint (line 341) | def kernel_constraint(self):
    method recurrent_constraint (line 345) | def recurrent_constraint(self):
    method bias_constraint (line 349) | def bias_constraint(self):
    method kernel_quantizer_internal (line 353) | def kernel_quantizer_internal(self):
    method recurrent_quantizer_internal (line 357) | def recurrent_quantizer_internal(self):
    method bias_quantizer_internal (line 361) | def bias_quantizer_internal(self):
    method state_quantizer_internal (line 365) | def state_quantizer_internal(self):
    method kernel_quantizer (line 369) | def kernel_quantizer(self):
    method recurrent_quantizer (line 373) | def recurrent_quantizer(self):
    method bias_quantizer (line 377) | def bias_quantizer(self):
    method state_quantizer (line 381) | def state_quantizer(self):
    method dropout (line 385) | def dropout(self):
    method recurrent_dropout (line 389) | def recurrent_dropout(self):
    method get_config (line 392) | def get_config(self):
    method get_quantization_config (line 448) | def get_quantization_config(self):
    method from_config (line 463) | def from_config(cls, config):
  class QLSTMCell (line 469) | class QLSTMCell(LSTMCell):
    method __init__ (line 488) | def __init__(self,
    method _compute_carry_and_output (line 577) | def _compute_carry_and_output(self, x, h_tm1, c_tm1, quantized_recurre...
    method _compute_carry_and_output_fused (line 591) | def _compute_carry_and_output_fused(self, z, c_tm1):
    method call (line 600) | def call(self, inputs, states, training=None):
    method get_config (line 680) | def get_config(self):
  class QLSTM (line 699) | class QLSTM(RNN, PrunableLayer):
    method __init__ (line 718) | def __init__(self,
    method call (line 796) | def call(self, inputs, mask=None, training=None, initial_state=None):
    method get_quantizers (line 801) | def get_quantizers(self):
    method get_prunable_weights (line 804) | def get_prunable_weights(self):
    method units (line 808) | def units(self):
    method activation (line 812) | def activation(self):
    method recurrent_activation (line 816) | def recurrent_activation(self):
    method use_bias (line 820) | def use_bias(self):
    method kernel_initializer (line 824) | def kernel_initializer(self):
    method recurrent_initializer (line 828) | def recurrent_initializer(self):
    method bias_initializer (line 832) | def bias_initializer(self):
    method unit_forget_bias (line 836) | def unit_forget_bias(self):
    method kernel_regularizer (line 840) | def kernel_regularizer(self):
    method recurrent_regularizer (line 844) | def recurrent_regularizer(self):
    method bias_regularizer (line 848) | def bias_regularizer(self):
    method kernel_constraint (line 852) | def kernel_constraint(self):
    method recurrent_constraint (line 856) | def recurrent_constraint(self):
    method bias_constraint (line 860) | def bias_constraint(self):
    method kernel_quantizer_internal (line 864) | def kernel_quantizer_internal(self):
    method recurrent_quantizer_internal (line 868) | def recurrent_quantizer_internal(self):
    method bias_quantizer_internal (line 872) | def bias_quantizer_internal(self):
    method state_quantizer_internal (line 876) | def state_quantizer_internal(self):
    method kernel_quantizer (line 880) | def kernel_quantizer(self):
    method recurrent_quantizer (line 884) | def recurrent_quantizer(self):
    method bias_quantizer (line 888) | def bias_quantizer(self):
    method state_quantizer (line 892) | def state_quantizer(self):
    method dropout (line 896) | def dropout(self):
    method recurrent_dropout (line 900) | def recurrent_dropout(self):
    method implementation (line 904) | def implementation(self):
    method get_config (line 907) | def get_config(self):
    method get_quantization_config (line 968) | def get_quantization_config(self):
    method from_config (line 985) | def from_config(cls, config):
  class QGRUCell (line 991) | class QGRUCell(GRUCell):
    method __init__ (line 1010) | def __init__(self,
    method call (line 1100) | def call(self, inputs, states, training=None):
    method get_config (line 1221) | def get_config(self):
  class QGRU (line 1240) | class QGRU(RNN, PrunableLayer):
    method __init__ (line 1260) | def __init__(self,
    method call (line 1338) | def call(self, inputs, mask=None, training=None, initial_state=None):
    method get_quantizers (line 1343) | def get_quantizers(self):
    method get_prunable_weights (line 1346) | def get_prunable_weights(self):
    method units (line 1350) | def units(self):
    method activation (line 1354) | def activation(self):
    method recurrent_activation (line 1358) | def recurrent_activation(self):
    method use_bias (line 1362) | def use_bias(self):
    method kernel_initializer (line 1366) | def kernel_initializer(self):
    method recurrent_initializer (line 1370) | def recurrent_initializer(self):
    method bias_initializer (line 1374) | def bias_initializer(self):
    method kernel_regularizer (line 1378) | def kernel_regularizer(self):
    method recurrent_regularizer (line 1382) | def recurrent_regularizer(self):
    method bias_regularizer (line 1386) | def bias_regularizer(self):
    method kernel_constraint (line 1390) | def kernel_constraint(self):
    method recurrent_constraint (line 1394) | def recurrent_constraint(self):
    method bias_constraint (line 1398) | def bias_constraint(self):
    method kernel_quantizer_internal (line 1402) | def kernel_quantizer_internal(self):
    method recurrent_quantizer_internal (line 1406) | def recurrent_quantizer_internal(self):
    method bias_quantizer_internal (line 1410) | def bias_quantizer_internal(self):
    method state_quantizer_internal (line 1414) | def state_quantizer_internal(self):
    method kernel_quantizer (line 1418) | def kernel_quantizer(self):
    method recurrent_quantizer (line 1422) | def recurrent_quantizer(self):
    method bias_quantizer (line 1426) | def bias_quantizer(self):
    method state_quantizer (line 1430) | def state_quantizer(self):
    method dropout (line 1434) | def dropout(self):
    method recurrent_dropout (line 1438) | def recurrent_dropout(self):
    method implementation (line 1442) | def implementation(self):
    method reset_after (line 1446) | def reset_after(self):
    method get_config (line 1449) | def get_config(self):
    method get_quantization_config (line 1510) | def get_quantization_config(self):
    method from_config (line 1527) | def from_config(cls, config):
  class QBidirectional (line 1533) | class QBidirectional(Bidirectional):
    method get_quantizers (line 1544) | def get_quantizers(self):
    method activation (line 1551) | def activation(self):
    method get_quantization_config (line 1554) | def get_quantization_config(self):

FILE: qkeras/qseparable_conv2d_transpose.py
  class QSeparableConv2DTranspose (line 30) | class QSeparableConv2DTranspose(Conv2DTranspose):
    method __init__ (line 45) | def __init__(self,
    method _get_input_axis (line 100) | def _get_input_axis(self):
    method _get_input_dims (line 108) | def _get_input_dims(self, input_shape):
    method _get_output_size (line 115) | def _get_output_size(self, inputs, output_padding, padding, strides,
    method build (line 150) | def build(self, input_shape):
    method compute_final_output_shape (line 204) | def compute_final_output_shape(
    method conv_transpose_op (line 245) | def conv_transpose_op(self, inputs, filters, strides, padding,
    method call (line 324) | def call(self, inputs):
    method get_config (line 367) | def get_config(self):
    method get_quantizers (line 390) | def get_quantizers(self):
    method get_prunable_weights (line 399) | def get_prunable_weights(self):

FILE: qkeras/qtools/DnC/divide_and_conquer.py
  class CostMode (line 43) | class CostMode(enum.Enum):
  class DivideConquerGraph (line 50) | class DivideConquerGraph:
    method __init__ (line 53) | def __init__(
    method idx_to_layer (line 77) | def idx_to_layer(self, idx: int):
    method layer_to_idx (line 81) | def layer_to_idx(self, layer: tf.keras.layers.Layer):
    method get_first_node (line 85) | def get_first_node(self):
    method is_first_node (line 89) | def is_first_node(self, node: Union[int, tf.keras.layers.Layer]):
    method get_last_node (line 95) | def get_last_node(self):
    method is_last_node (line 99) | def is_last_node(self, node: Union[int, tf.keras.layers.Layer]):
    method get_prev_nodes (line 105) | def get_prev_nodes(self, node: Union[int, tf.keras.layers.Layer]):
    method get_next_nodes (line 111) | def get_next_nodes(self, node: Union[int, tf.keras.layers.Layer]):
    method get_layer_quantizer_bitwidth (line 117) | def get_layer_quantizer_bitwidth(
    method get_layer_mac_count (line 158) | def get_layer_mac_count(self, node: Union[int, tf.keras.layers.Layer]):
    method get_layer_shapes (line 166) | def get_layer_shapes(self, node: Union[int, tf.keras.layers.Layer]):
  class Choice (line 184) | class Choice:
    method __init__ (line 187) | def __init__(self, l: float = 0, k: float = 0, cin_unroll: int = 0,
    method __str__ (line 207) | def __str__(self):
  function get_valid_unrolls (line 213) | def get_valid_unrolls(layer: tf.keras.layers.Layer, cout_unroll: int,
  function get_per_layer_cost (line 255) | def get_per_layer_cost(layer_quantizer_bitwidth, layer_mac_count, layer_...
  function get_valid_candidates (line 288) | def get_valid_candidates(input_value, output_to_input_ratio_max):
  function get_InBufferThru (line 299) | def get_InBufferThru(InElementPerClk, input_channel):
  function get_OutBufferThru (line 303) | def get_OutBufferThru(OutElementPerClk, output_channel, kernel_height,
  function is_bufferThru_greater_than_targetThru (line 312) | def is_bufferThru_greater_than_targetThru(
  function set_best_global_cost_in_paths (line 336) | def set_best_global_cost_in_paths(
  function backtrack (line 384) | def backtrack(graph, paths):
  function update_cur_best_choices (line 429) | def update_cur_best_choices(
  function get_ComputeInElementPerClk (line 459) | def get_ComputeInElementPerClk(layer_type, cin_unroll,
  function get_InElementPerClk_base (line 472) | def get_InElementPerClk_base(ComputInElementPerClk, kh_unroll, kw_unroll):
  function get_pe_throughput (line 476) | def get_pe_throughput(layer_type, cin_unroll, cout_unroll, kh_unroll, kw...
  function get_target_throughputs (line 494) | def get_target_throughputs(layer, target_out_throughput):
  function calc_hw_params (line 519) | def calc_hw_params(graph, target_OutElementPerClk, target_out_throughput,
  function estimate_model_cost (line 731) | def estimate_model_cost(

FILE: qkeras/qtools/DnC/dnc_layer_cost_ace.py
  function mac_gates_polynomial_3d (line 80) | def mac_gates_polynomial_3d(xyz, a, b, c):
  function gen_mac_gate_model (line 100) | def gen_mac_gate_model(do_plot=False):
  function get_ace_mac_gates (line 213) | def get_ace_mac_gates(xbit, wbit, abit, regen_params=False):

FILE: qkeras/qtools/examples/example_generate_json.py
  function hybrid_model (line 30) | def hybrid_model():
  function generate_json (line 49) | def generate_json(in_model):

FILE: qkeras/qtools/examples/example_get_energy.py
  function hybrid_model (line 31) | def hybrid_model():

FILE: qkeras/qtools/generate_layer_data_type_map.py
  class TagMissingError (line 31) | class TagMissingError(ValueError):
  function get_bn_quantizers (line 73) | def get_bn_quantizers(layer, quantizer_factory, cfg, keras_quantizer,
  function update_output_quantizer_in_graph (line 168) | def update_output_quantizer_in_graph(graph, node_id, quantizer_factory,
  function generate_layer_data_type_map (line 190) | def generate_layer_data_type_map(

FILE: qkeras/qtools/interface.py
  function print_qstats (line 27) | def print_qstats(graph):
  function populate_quantizer (line 62) | def populate_quantizer(quantizer, shape=None, implemented_as=None):
  function map_to_json (line 117) | def map_to_json(mydict):

FILE: qkeras/qtools/qenergy/qenergy.py
  function get_op_type (line 65) | def get_op_type(quantizer):
  function memory_read_energy (line 74) | def memory_read_energy(is_input_layer, tensor_shape, mode, min_sram_size,
  function parameter_read_energy (line 118) | def parameter_read_energy(
  function memory_write_energy (line 162) | def memory_write_energy(is_output_layer, tensor_shape, mode, min_sram_size,
  function energy_estimate (line 205) | def energy_estimate(model, layer_map, weights_on_memory,

FILE: qkeras/qtools/qgraph.py
  class WrongInputQuantizerError (line 35) | class WrongInputQuantizerError(ValueError):
  function GraphRemoveNode (line 39) | def GraphRemoveNode(graph, v):
  function GraphRemoveNodeWithNodeType (line 58) | def GraphRemoveNodeWithNodeType(graph, node_type):
  function GraphAddHiddenInputLayer (line 69) | def GraphAddHiddenInputLayer(model, graph, input_quantizer_map):
  function GraphAddSingleSourceSingleSink (line 116) | def GraphAddSingleSourceSingleSink(graph):
  function GenerateInputQuantizerList (line 147) | def GenerateInputQuantizerList(input_quantizers,
  function AddToNodeDict (line 181) | def AddToNodeDict(layer_items,
  function GenerateGraphFromModel (line 199) | def GenerateGraphFromModel(model,
  function GraphGetInputs (line 289) | def GraphGetInputs(graph):
  function GraphGetOutputs (line 307) | def GraphGetOutputs(graph):
  function GraphPropagateActivationsToEdges (line 325) | def GraphPropagateActivationsToEdges(graph, debug=False):
  function PrintGraph (line 395) | def PrintGraph(graph, msg=""):
  function CreateGraph (line 411) | def CreateGraph(model, input_quantizers=None,
  function GraphUpdateEdge (line 437) | def GraphUpdateEdge(graph, node_id, quantizer_on_edge):

FILE: qkeras/qtools/qtools_util.py
  function get_val (line 30) | def get_val(feature, key, default_val=None):
  function is_shape_alternation_layers (line 39) | def is_shape_alternation_layers(layer):
  function is_merge_layers (line 46) | def is_merge_layers(layer):
  function get_input_quantizers (line 56) | def get_input_quantizers(graph, node_id, quantizer_factory, debug=False):
  function get_input_quantizers_advanced (line 78) | def get_input_quantizers_advanced(graph, node_id,
  function get_operation_count (line 115) | def get_operation_count(layer, input_shape):
  function get_weights (line 227) | def get_weights(layer, model_weights_already_quantized=True):
  function get_scale_from_quantized_bits_with_auto_po2 (line 251) | def get_scale_from_quantized_bits_with_auto_po2(quantizer):
  function adjust_multiplier_for_auto_po2 (line 261) | def adjust_multiplier_for_auto_po2(multiplier, qkeras_weight_quantizer):
  function adjust_accumulator_for_auto_po2 (line 318) | def adjust_accumulator_for_auto_po2(
  function find_divisors (line 354) | def find_divisors(num):
  function get_layer_info (line 358) | def get_layer_info(layer: tf.keras.layers.Layer, attr_name: str):
  function is_upsampled (line 388) | def is_upsampled(layer: tf.keras.layers.Layer):

FILE: qkeras/qtools/quantized_operators/accumulator_factory.py
  class AccumulatorFactory (line 28) | class AccumulatorFactory:
    method make_accumulator (line 31) | def make_accumulator(

FILE: qkeras/qtools/quantized_operators/accumulator_impl.py
  function po2_to_qbits (line 30) | def po2_to_qbits(quantizer: quantizer_impl.IQuantizer):
  class IAccumulator (line 44) | class IAccumulator(abc.ABC):
    method implemented_as (line 49) | def implemented_as():
  class FloatingPointAccumulator (line 53) | class FloatingPointAccumulator(IAccumulator):
    method __init__ (line 56) | def __init__(
    method implemented_as (line 72) | def implemented_as():
  class FixedPointAccumulator (line 76) | class FixedPointAccumulator(IAccumulator):
    method __init__ (line 79) | def __init__(
    method implemented_as (line 116) | def implemented_as():
  class Po2Accumulator (line 120) | class Po2Accumulator(FixedPointAccumulator):
    method __init__ (line 126) | def __init__(
    method implemented_as (line 144) | def implemented_as():
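
  NOTE: FixedPointAccumulator presumably sizes its output using the standard bit-growth rule for a sum of products: accumulating N terms of a given width needs ceil(log2(N)) extra integer bits to be overflow-free. A minimal sketch of that rule (hypothetical helper name, not the QKeras API):

```python
import math

def accumulator_bits(product_bits, n_terms):
    """Bits needed to sum n_terms fixed-point products without overflow."""
    # Each doubling of the term count can add at most one carry bit.
    return product_bits + math.ceil(math.log2(n_terms))
```

  For example, summing 1024 products of 16-bit width needs a 26-bit accumulator under this rule.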

FILE: qkeras/qtools/quantized_operators/adder_factory.py
  class IAdder (line 30) | class IAdder(abc.ABC):
    method __init__ (line 33) | def __init__(self):
    method make_quantizer (line 85) | def make_quantizer(self, quantizer_1: quantizer_impl.IQuantizer,

FILE: qkeras/qtools/quantized_operators/adder_impl.py
  function po2_qbits_converter (line 28) | def po2_qbits_converter(po2_quantizer: quantizer_impl.IQuantizer):
  class IAdderImpl (line 41) | class IAdderImpl(abc.ABC):
    method implemented_as (line 46) | def implemented_as():
  class FixedPointAdder (line 50) | class FixedPointAdder(IAdderImpl):
    method __init__ (line 53) | def __init__(self, quantizer_1, quantizer_2):
    method implemented_as (line 70) | def implemented_as():
  class FloatingPointAdder (line 74) | class FloatingPointAdder(IAdderImpl):
    method __init__ (line 77) | def __init__(self, quantizer_1, quantizer_2):
    method implemented_as (line 83) | def implemented_as():
  class Po2FixedPointAdder (line 87) | class Po2FixedPointAdder(IAdderImpl):
    method __init__ (line 90) | def __init__(self, quantizer_1, quantizer_2):
    method implemented_as (line 107) | def implemented_as():
  class Po2Adder (line 111) | class Po2Adder(IAdderImpl):
    method __init__ (line 114) | def __init__(self, quantizer_1, quantizer_2):
    method implemented_as (line 121) | def implemented_as():

FILE: qkeras/qtools/quantized_operators/divider_factory.py
  class UnacceptedQuantizerError (line 30) | class UnacceptedQuantizerError(ValueError):
  class IDivider (line 34) | class IDivider(abc.ABC):
    method __init__ (line 37) | def __init__(self):
    method make_quantizer (line 106) | def make_quantizer(self, numerator_quantizer: quantizer_impl.IQuantizer,

FILE: qkeras/qtools/quantized_operators/divider_impl.py
  class IDividerImpl (line 26) | class IDividerImpl(abc.ABC):
    method __init__ (line 29) | def __init__(self, numerator_quantizer, denominator_quantizer,
    method implemented_as (line 37) | def implemented_as():
  class FloatingPointDivider (line 41) | class FloatingPointDivider(IDividerImpl):
    method __init__ (line 44) | def __init__(self, numerator_quantizer, denominator_quantizer,
    method implemented_as (line 63) | def implemented_as():
  class Shifter (line 68) | class Shifter(IDividerImpl):
    method __init__ (line 72) | def __init__(self, numerator_quantizer, denominator_quantizer,
    method implemented_as (line 117) | def implemented_as():
  class Subtractor (line 121) | class Subtractor(IDividerImpl):
    method __init__ (line 127) | def __init__(self, numerator_quantizer, denominator_quantizer,
    method implemented_as (line 159) | def implemented_as():

FILE: qkeras/qtools/quantized_operators/fused_bn_factory.py
  class FusedBNFactory (line 34) | class FusedBNFactory:
    method make_quantizer (line 50) | def make_quantizer(

FILE: qkeras/qtools/quantized_operators/merge_factory.py
  class MergeFactory (line 29) | class MergeFactory:
    method make_quantizer (line 32) | def make_quantizer(self, input_qe_list, layer_type):
  class IMerger (line 51) | class IMerger(abc.ABC):
    method __init__ (line 54) | def __init__(self, input_qe_list):
  class Add (line 63) | class Add(IMerger):
    method __init__ (line 69) | def __init__(self, input_qe_list):
    method implemented_as (line 112) | def implemented_as(self):
  class Multiply (line 116) | class Multiply(IMerger):
    method __init__ (line 122) | def __init__(self, input_qe_list):
    method implemented_as (line 138) | def implemented_as(self):
  class Maximum (line 142) | class Maximum(IMerger):
    method __init__ (line 148) | def __init__(self, input_qe_list):
    method implemented_as (line 204) | def implemented_as():
  class Minimum (line 208) | class Minimum(Maximum):
  class Average (line 216) | class Average(Maximum):
    method __init__ (line 221) | def __init__(self, input_qe_list):
  class Concatenate (line 228) | class Concatenate(Maximum):
    method __init__ (line 234) | def __init__(self, input_qe_list):
  class Dot (line 242) | class Dot(IMerger):

FILE: qkeras/qtools/quantized_operators/multiplier_factory.py
  class MultiplierFactory (line 28) | class MultiplierFactory:
    method __init__ (line 31) | def __init__(self):
    method make_multiplier (line 118) | def make_multiplier(

FILE: qkeras/qtools/quantized_operators/multiplier_impl.py
  class IMultiplier (line 27) | class IMultiplier(abc.ABC):
    method __init__ (line 34) | def __init__(self, weight_quantizer: quantizer_impl.IQuantizer,
    method implemented_as (line 44) | def implemented_as():
    method name (line 47) | def name(self) -> str:
    method output_quantizer (line 50) | def output_quantizer(self):
  function assert_neither_input_and_weights_is_floating_point (line 54) | def assert_neither_input_and_weights_is_floating_point(
  class Mux (line 62) | class Mux(IMultiplier):
    method __init__ (line 66) | def __init__(self, weight_quantizer: quantizer_impl.IQuantizer,
    method implemented_as (line 115) | def implemented_as():
  class XorGate (line 119) | class XorGate(IMultiplier):
    method __init__ (line 122) | def __init__(self, weight_quantizer: quantizer_impl.IQuantizer,
    method implemented_as (line 138) | def implemented_as():
  class Shifter (line 142) | class Shifter(IMultiplier):
    method __init__ (line 183) | def __init__(
    method implemented_as (line 228) | def implemented_as():
  class AndGate (line 232) | class AndGate(IMultiplier):
    method __init__ (line 236) | def __init__(
    method implemented_as (line 273) | def implemented_as():
  class Adder (line 277) | class Adder(IMultiplier):
    method __init__ (line 280) | def __init__(self, weight_quantizer: quantizer_impl.IQuantizer,
    method implemented_as (line 310) | def implemented_as():
  class FloatingPointMultiplier (line 314) | class FloatingPointMultiplier(IMultiplier):
    method __init__ (line 317) | def __init__(self, weight_quantizer: quantizer_impl.IQuantizer,
    method implemented_as (line 337) | def implemented_as():
  class FixedPointMultiplier (line 341) | class FixedPointMultiplier(IMultiplier):
    method __init__ (line 344) | def __init__(self, weight_quantizer: quantizer_impl.IQuantizer,
    method implemented_as (line 373) | def implemented_as():

FILE: qkeras/qtools/quantized_operators/qbn_factory.py
  class QBNFactory (line 32) | class QBNFactory:
    method make_quantizer (line 40) | def make_quantizer(

FILE: qkeras/qtools/quantized_operators/quantizer_factory.py
  class QuantizerFactory (line 29) | class QuantizerFactory:
    method __init__ (line 32) | def __init__(self):
    method _make_quantizer_util (line 94) | def _make_quantizer_util(self, quantizer) -> quantizer_impl.IQuantizer:
    method make_quantizer (line 110) | def make_quantizer(self, quantizer) -> quantizer_impl.IQuantizer:
    method is_quantizer_supported (line 123) | def is_quantizer_supported(self, quantizer) -> bool:
    method make_default_quantizer (line 130) | def make_default_quantizer(self, mode) -> quantizer_impl.IQuantizer:
    method clone_quantizer (line 162) | def clone_quantizer(

FILE: qkeras/qtools/quantized_operators/quantizer_impl.py
  function get_np_value (line 30) | def get_np_value(val):
  function get_exp (line 41) | def get_exp(quantizer):
  class IQuantizer (line 67) | class IQuantizer(abc.ABC):
    method __init__ (line 70) | def __init__(self):
  class QuantizedBits (line 82) | class QuantizedBits(IQuantizer):
    method __init__ (line 94) | def __init__(self):
    method convert_qkeras_quantizer (line 100) | def convert_qkeras_quantizer(
    method convert_to_qkeras_quantizer (line 107) | def convert_to_qkeras_quantizer(
  class QuantizedTanh (line 122) | class QuantizedTanh(QuantizedBits):
    method __init__ (line 125) | def __init__(self):
    method convert_qkeras_quantizer (line 129) | def convert_qkeras_quantizer(
    method convert_to_qkeras_quantizer (line 135) | def convert_to_qkeras_quantizer(
  class QuantizedUlaw (line 144) | class QuantizedUlaw(QuantizedBits):
    method __init__ (line 148) | def __init__(self):
    method convert_qkeras_quantizer (line 152) | def convert_qkeras_quantizer(
    method convert_to_qkeras_quantizer (line 159) | def convert_to_qkeras_quantizer(self, symmetric=0, u=255.0):
  class Binary (line 166) | class Binary(IQuantizer):
    method __init__ (line 169) | def __init__(self, use_01=False):
    method convert_qkeras_quantizer (line 183) | def convert_qkeras_quantizer(self, quantizer: quantizers.binary):
    method convert_to_qkeras_quantizer (line 193) | def convert_to_qkeras_quantizer(self, alpha=None,
  class StochasticBinary (line 201) | class StochasticBinary(Binary):
    method __init__ (line 205) | def __init__(self):
    method convert_qkeras_quantizer (line 209) | def convert_qkeras_quantizer(
    method convert_to_qkeras_quantizer (line 215) | def convert_to_qkeras_quantizer(self, alpha=None, temperature=6.0,
  class Bernoulli (line 223) | class Bernoulli(Binary):
    method __init__ (line 226) | def __init__(self):
    method convert_qkeras_quantizer (line 230) | def convert_qkeras_quantizer(self, quantizer: quantizers.bernoulli):
    method convert_to_qkeras_quantizer (line 233) | def convert_to_qkeras_quantizer(self, alpha=None, temperature=6.0,
  class QuantizedRelu (line 241) | class QuantizedRelu(IQuantizer):
    method __init__ (line 244) | def __init__(self):
    method convert_qkeras_quantizer (line 249) | def convert_qkeras_quantizer(
    method convert_to_qkeras_quantizer (line 267) | def convert_to_qkeras_quantizer(
  class Ternary (line 281) | class Ternary(IQuantizer):
    method __init__ (line 284) | def __init__(self):
    method convert_qkeras_quantizer (line 292) | def convert_qkeras_quantizer(
    method convert_to_qkeras_quantizer (line 296) | def convert_to_qkeras_quantizer(
  class StochasticTernary (line 307) | class StochasticTernary(Ternary):
    method __init__ (line 310) | def __init__(self):
    method convert_qkeras_quantizer (line 315) | def convert_qkeras_quantizer(
    method convert_to_qkeras_quantizer (line 319) | def convert_to_qkeras_quantizer(
  class FloatingPoint (line 330) | class FloatingPoint(IQuantizer):
    method __init__ (line 333) | def __init__(self, bits):
    method convert_qkeras_quantizer (line 342) | def convert_qkeras_quantizer(self, bits):
    method convert_to_qkeras_quantizer (line 345) | def convert_to_qkeras_quantizer(self, bits):
  class PowerOfTwo (line 349) | class PowerOfTwo(IQuantizer):
    method __init__ (line 352) | def __init__(self, is_signed=True):
    method convert_qkeras_quantizer (line 364) | def convert_qkeras_quantizer(self, quantizer):
    method convert_to_qkeras_quantizer (line 387) | def convert_to_qkeras_quantizer(
    method get_min_max_exp (line 408) | def get_min_max_exp(self):
    method quantizer_bits_calculator (line 411) | def quantizer_bits_calculator(self, val):
    method update_quantizer (line 451) | def update_quantizer(self, val, reset=False):
    method update_inference_values (line 482) | def update_inference_values(self, weights):
  class ReluPowerOfTwo (line 489) | class ReluPowerOfTwo(PowerOfTwo):
    method __init__ (line 492) | def __init__(self):
    method convert_qkeras_quantizer (line 499) | def convert_qkeras_quantizer(
    method convert_to_qkeras_quantizer (line 509) | def convert_to_qkeras_quantizer(
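
  NOTE: PowerOfTwo and ReluPowerOfTwo model quantizers whose values are signed powers of two, so downstream multiplications can be implemented as shifts. The core rounding idea can be sketched in NumPy as below (the generic po2 rounding step only; QKeras's exact clipping and exponent-range handling is more involved):

```python
import numpy as np

def po2_round(x):
    """Round each value to the nearest-in-log-domain signed power of two."""
    x = np.asarray(x, dtype=float)
    mag = np.where(x == 0.0, 1.0, np.abs(x))  # avoid log2(0)
    q = np.sign(x) * 2.0 ** np.round(np.log2(mag))
    return np.where(x == 0.0, 0.0, q)
```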

FILE: qkeras/qtools/quantized_operators/subtractor_factory.py
  class ISubtractor (line 26) | class ISubtractor(adder_factory.IAdder):
    method make_quantizer (line 33) | def make_quantizer(self, quantizer_1: quantizer_impl.IQuantizer,

FILE: qkeras/qtools/run_qtools.py
  class QTools (line 34) | class QTools:
    method __init__ (line 37) | def __init__(self, model, process, source_quantizers=None,
    method qtools_stats_to_json (line 68) | def qtools_stats_to_json(self, json_name):
    method qtools_stats_print (line 74) | def qtools_stats_print(self):
    method pe (line 80) | def pe(self, weights_on_memory="dram",
    method extract_energy_sum (line 100) | def extract_energy_sum(self, cfg_setting, energy_dict):
    method extract_energy_profile (line 114) | def extract_energy_profile(self, cfg_setting, energy_dict):
    method calculate_ace (line 131) | def calculate_ace(self, default_float_bits):
    method calculate_output_bytes (line 162) | def calculate_output_bytes(self, include_model_input_size,
    method calculate_weight_bytes (line 193) | def calculate_weight_bytes(self, default_float_bits):
    method get_roofline_numbers (line 225) | def get_roofline_numbers(self, include_model_input_size=True,
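
  NOTE: calculate_weight_bytes boils down to summing per-layer parameter counts weighted by their quantizer bit widths. A hedged sketch of that accounting (hypothetical helper, not the QTools API):

```python
def weight_bytes(layers):
    """Total weight storage in bytes.

    layers: iterable of (num_params, bits_per_param) pairs.
    """
    total_bits = sum(n * bits for n, bits in layers)
    return total_bits / 8.0
```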

FILE: qkeras/qtools/settings.py
  class ConfigClass (line 25) | class ConfigClass:
    method __init__ (line 28) | def __init__(self):
    method update (line 59) | def update(self, process, cfg_setting):

FILE: qkeras/quantizer_registry.py
  function register_quantizer (line 24) | def register_quantizer(quantizer):
  function lookup_quantizer (line 32) | def lookup_quantizer(name):

FILE: qkeras/quantizers.py
  function get_weight_scale (line 43) | def get_weight_scale(quantizer, x=None):
  function _get_integer_bits (line 59) | def _get_integer_bits(min_value,
  function _get_scaling_axis (line 127) | def _get_scaling_axis(scale_axis: Any, len_axis: int) -> List[int]:
  function _get_unrolled_shape (line 157) | def _get_unrolled_shape(input_shape: List[int], unroll_factor: Any,
  function _get_rolled_back_shape (line 219) | def _get_rolled_back_shape(input_shape: List[int], roll_axis: Any) -> Li...
  function _validate_axis_and_eps (line 262) | def _validate_axis_and_eps(x_shape: List[int], scale_axis: Any,
  function _repeat_along_axis (line 356) | def _repeat_along_axis(x: tf.Tensor, axis: int, repeats: int) -> tf.Tensor:
  function _repeat_along_axes (line 361) | def _repeat_along_axes(x: tf.Tensor, axis: Any, repeats: Any) -> tf.Tensor:
  function _get_scale_mean (line 371) | def _get_scale_mean(
  function _clip_po2_scale (line 429) | def _clip_po2_scale(scale: tf.Tensor, min_po2_exponent: Any,
  function _get_least_squares_scale (line 439) | def _get_least_squares_scale(
  function _get_scale (line 508) | def _get_scale(*args, **kwargs):
  function smooth_sigmoid (line 512) | def smooth_sigmoid(x):
  function hard_sigmoid (line 522) | def hard_sigmoid(x):
  function binary_sigmoid (line 528) | def binary_sigmoid(x):
  function set_internal_sigmoid (line 541) | def set_internal_sigmoid(mode):
  function binary_tanh (line 560) | def binary_tanh(x):
  function hard_tanh (line 565) | def hard_tanh(x):
  function smooth_tanh (line 570) | def smooth_tanh(x):
  function stochastic_round (line 575) | def stochastic_round(x, precision=0.5):
  function stochastic_round_po2 (line 586) | def stochastic_round_po2(x):
  function _round_through (line 616) | def _round_through(x, use_stochastic_rounding=False, precision=0.5):
  function _sign_through (line 651) | def _sign_through(x):
  function _ceil_through (line 662) | def _ceil_through(x):
  function _floor_through (line 668) | def _floor_through(x):
  class quantized_linear (line 683) | class quantized_linear(base_quantizer.BaseQuantizer):
    method __init__ (line 837) | def __init__(
    method _check_bits (line 874) | def _check_bits(self, bits):
    method _check_alpha (line 880) | def _check_alpha(self, alpha):
    method bits (line 898) | def bits(self):
    method integer (line 902) | def integer(self):
    method keep_negative (line 906) | def keep_negative(self):
    method use_stochastic_rounding (line 910) | def use_stochastic_rounding(self):
    method scale_axis (line 914) | def scale_axis(self):
    method use_variables (line 918) | def use_variables(self):
    method scale (line 922) | def scale(self):
    method data_type_scale (line 926) | def data_type_scale(self):
    method auto_alpha (line 933) | def auto_alpha(self):
    method use_sign_function (line 939) | def use_sign_function(self):
    method default_quantization_scale (line 945) | def default_quantization_scale(self):
    method get_clip_bounds (line 957) | def get_clip_bounds(self):
    method __call__ (line 972) | def __call__(self, x):
    method _scale_clip_and_round (line 998) | def _scale_clip_and_round(self, x, quantization_scale):
    method _get_auto_quantization_scale (line 1021) | def _get_auto_quantization_scale(self, x):
    method _get_quantization_scale_from_max_data (line 1038) | def _get_quantization_scale_from_max_data(self, x):
    method _po2_autoscale (line 1058) | def _po2_autoscale(self, x, quantization_scale):
    method _build (line 1103) | def _build(self):
    method max (line 1109) | def max(self):
    method min (line 1114) | def min(self):
    method range (line 1119) | def range(self):
    method __str__ (line 1134) | def __str__(self):
    method _set_trainable_parameter (line 1155) | def _set_trainable_parameter(self):
    method from_config (line 1161) | def from_config(cls, config):
    method get_config (line 1164) | def get_config(self):
  class quantized_bits (line 1179) | class quantized_bits(base_quantizer.BaseQuantizer):  # pylint: disable=i...
    method __init__ (line 1250) | def __init__(self,
    method __str__ (line 1299) | def __str__(self):
    method __call__ (line 1320) | def __call__(self, x):
    method _set_trainable_parameter (line 1454) | def _set_trainable_parameter(self):
    method max (line 1460) | def max(self):
    method min (line 1472) | def min(self):
    method range (line 1485) | def range(self):
    method from_config (line 1500) | def from_config(cls, config):
    method get_config (line 1507) | def get_config(self):
  class bernoulli (line 1535) | class bernoulli(base_quantizer.BaseQuantizer):  # pylint: disable=invali...
    method __init__ (line 1565) | def __init__(self, alpha=None, temperature=6.0, use_real_sigmoid=True):
    method __str__ (line 1574) | def __str__(self):
    method __call__ (line 1587) | def __call__(self, x):
    method _set_trainable_parameter (line 1625) | def _set_trainable_parameter(self):
    method max (line 1629) | def max(self):
    method min (line 1636) | def min(self):
    method from_config (line 1641) | def from_config(cls, config):
    method get_config (line 1644) | def get_config(self):
  class ternary (line 1650) | class ternary(base_quantizer.BaseQuantizer):  # pylint: disable=invalid-...
    method __init__ (line 1670) | def __init__(self, alpha=None, threshold=None, use_stochastic_rounding...
    method __str__ (line 1682) | def __str__(self):
    method __call__ (line 1699) | def __call__(self, x):
    method _set_trainable_parameter (line 1767) | def _set_trainable_parameter(self):
    method max (line 1771) | def max(self):
    method min (line 1778) | def min(self):
    method from_config (line 1786) | def from_config(cls, config):
    method get_config (line 1789) | def get_config(self):
  class stochastic_ternary (line 1800) | class stochastic_ternary(ternary):  # pylint: disable=invalid-name
    method __init__ (line 1820) | def __init__(
    method __str__ (line 1843) | def __str__(self):
    method __call__ (line 1860) | def __call__(self, x):
    method _set_trainable_parameter (line 1923) | def _set_trainable_parameter(self):
    method max (line 1927) | def max(self):
    method min (line 1934) | def min(self):
    method from_config (line 1942) | def from_config(cls, config):
    method get_config (line 1945) | def get_config(self):
  class binary (line 1957) | class binary(base_quantizer.BaseQuantizer):  # pylint: disable=invalid-name
    method __init__ (line 2011) | def __init__(self, use_01=False, alpha=None, use_stochastic_rounding=F...
    method __str__ (line 2026) | def __str__(self):
    method __call__ (line 2058) | def __call__(self, x):
    method _set_trainable_parameter (line 2123) | def _set_trainable_parameter(self):
    method max (line 2127) | def max(self):
    method min (line 2134) | def min(self):
    method from_config (line 2144) | def from_config(cls, config):
    method get_config (line 2147) | def get_config(self):
  class stochastic_binary (line 2157) | class stochastic_binary(binary):  # pylint: disable=invalid-name
    method __init__ (line 2175) | def __init__(self, alpha=None, temperature=6.0, use_real_sigmoid=True):
    method __str__ (line 2184) | def __str__(self):
    method __call__ (line 2197) | def __call__(self, x):
    method _set_trainable_parameter (line 2232) | def _set_trainable_parameter(self):
    method max (line 2236) | def max(self):
    method min (line 2243) | def min(self):
    method from_config (line 2251) | def from_config(cls, config):
    method get_config (line 2254) | def get_config(self):
  function fast_relu_quantize (line 2264) | def fast_relu_quantize(p, m_i, factor):
  class quantized_relu (line 2269) | class quantized_relu(base_quantizer.BaseQuantizer):  # pylint: disable=i...
    method __init__ (line 2320) | def __init__(self,
    method __str__ (line 2350) | def __str__(self):
    method __call__ (line 2367) | def __call__(self, x):
    method max (line 2434) | def max(self):
    method min (line 2447) | def min(self):
    method range (line 2461) | def range(self):
    method from_config (line 2474) | def from_config(cls, config):
    method get_config (line 2477) | def get_config(self):
  class quantized_ulaw (line 2500) | class quantized_ulaw(base_quantizer.BaseQuantizer):  # pylint: disable=i...
    method __init__ (line 2514) | def __init__(self, bits=8, integer=0, symmetric=0, u=255.0):
    method __str__ (line 2521) | def __str__(self):
    method __call__ (line 2529) | def __call__(self, x):
    method max (line 2541) | def max(self):
    method min (line 2550) | def min(self):
    method from_config (line 2560) | def from_config(cls, config):
    method get_config (line 2563) | def get_config(self):
  class quantized_tanh (line 2574) | class quantized_tanh(base_quantizer.BaseQuantizer):  # pylint: disable=i...
    method __init__ (line 2593) | def __init__(self, bits=8, use_stochastic_rounding=False,
    method __str__ (line 2601) | def __str__(self):
    method __call__ (line 2611) | def __call__(self, x):
    method max (line 2621) | def max(self):
    method min (line 2625) | def min(self):
    method from_config (line 2630) | def from_config(cls, config):
    method get_config (line 2633) | def get_config(self):
  class quantized_sigmoid (line 2644) | class quantized_sigmoid(base_quantizer.BaseQuantizer):  # pylint: disabl...
    method __init__ (line 2658) | def __init__(self, bits=8, symmetric=False,
    method __str__ (line 2667) | def __str__(self):
    method __call__ (line 2677) | def __call__(self, x):
    method max (line 2687) | def max(self):
    method min (line 2691) | def min(self):
    method from_config (line 2696) | def from_config(cls, config):
    method get_config (line 2699) | def get_config(self):
  function _clip_power_of_two (line 2709) | def _clip_power_of_two(x_abs,
  function _need_exponent_sign_bit_check (line 2781) | def _need_exponent_sign_bit_check(max_value):
  function _get_min_max_exponents (line 2810) | def _get_min_max_exponents(non_sign_bits, need_exponent_sign_bit,
  class quantized_po2 (line 2833) | class quantized_po2(base_quantizer.BaseQuantizer):  # pylint: disable=in...
    method __init__ (line 2859) | def __init__(self,
    method __str__ (line 2887) | def __str__(self):
    method __call__ (line 2898) | def __call__(self, x):
    method max (line 2918) | def max(self):
    method min (line 2925) | def min(self):
    method from_config (line 2933) | def from_config(cls, config):
    method get_config (line 2936) | def get_config(self):
  class quantized_relu_po2 (line 2970) | class quantized_relu_po2(base_quantizer.BaseQuantizer):  # pylint: disab...
    method __init__ (line 2996) | def __init__(self,
    method __str__ (line 3031) | def __str__(self):
    method __call__ (line 3044) | def __call__(self, x):
    method max (line 3084) | def max(self):
    method min (line 3091) | def min(self):
    method from_config (line 3103) | def from_config(cls, config):
    method get_config (line 3106) | def get_config(self):
  class quantized_hswish (line 3144) | class quantized_hswish(quantized_bits):  # pylint: disable=invalid-name
    method __init__ (line 3186) | def __init__(
    method __str__ (line 3216) | def __str__(self):
    method __call__ (line 3251) | def __call__(self, x):
    method min (line 3270) | def min(self):
    method get_config (line 3288) | def get_config(self):
  function get_quantizer (line 3303) | def get_quantizer(identifier):
  function get_quantized_initializer (line 3334) | def get_quantized_initializer(w_initializer, w_range):
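
  NOTE: the workhorses in this file, quantized_bits and quantized_linear, perform uniform fixed-point fake-quantization with a straight-through round. The basic arithmetic can be sketched as below (a simplification that ignores alpha autoscaling, symmetric mode, and stochastic rounding; an illustrative formula, not QKeras's exact code path):

```python
import numpy as np

def fake_quantize(x, bits=8, integer=0, keep_negative=True):
    """Round to a uniform fixed-point grid, then clip to the representable range."""
    step = 2.0 ** (integer - bits + int(keep_negative))  # grid spacing
    lo = -(2.0 ** integer) if keep_negative else 0.0
    hi = 2.0 ** integer - step
    return np.clip(np.round(np.asarray(x, dtype=float) / step) * step, lo, hi)
```

  With bits=4, integer=0, keep_negative=True the grid spacing is 1/8 and the range is [-1, 0.875], so 0.3 snaps to 0.25 and out-of-range inputs saturate.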

FILE: qkeras/registry.py
  class Registry (line 45) | class Registry(object):
    method __init__ (line 48) | def __init__(self):
    method register (line 52) | def register(self, item, name=None):
    method lookup (line 64) | def lookup(self, name):

FILE: qkeras/safe_eval.py
  function Num (line 31) | def Num(s):
  function Str (line 47) | def Str(s):
  function IsNum (line 50) | def IsNum(s):
  function IsBool (line 61) | def IsBool(s):
  function IsNone (line 67) | def IsNone(s):
  function Bool (line 70) | def Bool(s):
  function ListofNums (line 73) | def ListofNums(s):
  function IsListofNums (line 79) | def IsListofNums(s):
  function GetArg (line 92) | def GetArg(s):
  function GetParams (line 105) | def GetParams(s):
  function safe_eval (line 137) | def safe_eval(eval_str, op_dict, *params, **kwparams):  # pylint: disabl...

FILE: qkeras/utils.py
  function find_bn_fusing_layer_pair (line 102) | def find_bn_fusing_layer_pair(model, custom_objects={}):
  function add_bn_fusing_weights (line 147) | def add_bn_fusing_weights(prev_layer, bn_layer, saved_weights):
  function model_save_quantized_weights (line 223) | def model_save_quantized_weights(model, filename=None, custom_objects={}):
  function quantize_activation (line 416) | def quantize_activation(layer_config, activation_bits):
  function get_config (line 442) | def get_config(quantizer_config, layer, layer_class, parameter=None):
  function is_TFOpLambda_layer (line 453) | def is_TFOpLambda_layer(layer):
  function get_y_from_TFOpLambda (line 457) | def get_y_from_TFOpLambda(model_cfg, layer):
  function convert_to_folded_model (line 483) | def convert_to_folded_model(model):
  function model_quantize (line 579) | def model_quantize(model,
  function _add_supported_quantized_objects (line 1029) | def _add_supported_quantized_objects(custom_objects):
  function clone_model (line 1072) | def clone_model(model, custom_objects=None):
  function quantized_model_from_json (line 1089) | def quantized_model_from_json(json_string, custom_objects=None):
  function load_qmodel (line 1103) | def load_qmodel(filepath, custom_objects=None, compile=True):
  function print_model_sparsity (line 1137) | def print_model_sparsity(model):
  function get_model_sparsity (line 1165) | def get_model_sparsity(model, per_layer=False, allow_list=None):
  function quantized_model_debug (line 1234) | def quantized_model_debug(model, X_test, plot=False, plt_instance=None):
  function quantized_model_dump (line 1316) | def quantized_model_dump(model,
  function clone_model_and_freeze_auto_po2_scale (line 1357) | def clone_model_and_freeze_auto_po2_scale(
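
  NOTE: get_model_sparsity reduces to counting exact zeros across the model's weight tensors. A NumPy sketch of that computation (hypothetical helper operating on raw arrays, not the QKeras signature):

```python
import numpy as np

def sparsity(weights):
    """Fraction of exactly-zero entries across a list of weight arrays."""
    total = sum(w.size for w in weights)
    zeros = sum(int(np.count_nonzero(w == 0)) for w in weights)
    return zeros / total if total else 0.0
```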

FILE: tests/automatic_conversion_test.py
  function create_network (line 27) | def create_network():
  function create_network_with_bn (line 36) | def create_network_with_bn():
  function create_network_sequential (line 47) | def create_network_sequential():
  function test_linear_activation (line 57) | def test_linear_activation():
  function test_linear_activation_conversion (line 63) | def test_linear_activation_conversion():
  function test_no_activation_conversion_to_quantized (line 78) | def test_no_activation_conversion_to_quantized():
  function test_automatic_conversion_from_relu_to_qr (line 86) | def test_automatic_conversion_from_relu_to_qr():
  function test_conversion_from_relu_activation_to_qr_qactivation (line 97) | def test_conversion_from_relu_activation_to_qr_qactivation():
  function test_conversion_from_relu_activation_to_qadaptiveactivation (line 114) | def test_conversion_from_relu_activation_to_qadaptiveactivation():
  function test_conversion_qadaptiveactivation_with_preference (line 131) | def test_conversion_qadaptiveactivation_with_preference():
  function test_sequential_model_conversion (line 156) | def test_sequential_model_conversion():
  function test_folded_layer_conversion (line 167) | def test_folded_layer_conversion():

FILE: tests/autoqkeras_test.py
  function dense_model (line 38) | def dense_model():
  function test_autoqkeras (line 58) | def test_autoqkeras():

FILE: tests/bn_folding_test.py
  function get_sgd_optimizer (line 42) | def get_sgd_optimizer(learning_rate):
  function get_qconv2d_model (line 49) | def get_qconv2d_model(input_shape, kernel_size, kernel_quantizer=None):
  function get_qconv2d_batchnorm_model (line 92) | def get_qconv2d_batchnorm_model(input_shape, kernel_size, folding_mode,
  function get_models_with_one_layer (line 113) | def get_models_with_one_layer(kernel_quantizer, folding_mode, ema_freeze...
  function get_debug_model (line 167) | def get_debug_model(model):
  function generate_dataset (line 177) | def generate_dataset(train_size=10,
  function run_training (line 200) | def run_training(model, epochs, loss_fn, loss_metric, optimizer,
  function test_unfold_model (line 243) | def test_unfold_model():
  function test_loading (line 353) | def test_loading():
  function test_same_training_and_prediction (line 402) | def test_same_training_and_prediction():
  function test_populate_bias_quantizer_from_accumulator (line 612) | def test_populate_bias_quantizer_from_accumulator():

FILE: tests/callbacks_test.py
  function qconv_model (line 34) | def qconv_model():
  function test_QNoiseScheduler (line 49) | def test_QNoiseScheduler():

FILE: tests/codebook_test.py
  function test_codebook_weights (line 62) | def test_codebook_weights(bits, axis, quantizer, weights, expected_result):

FILE: tests/leakyrelu_test.py
  function test_quantized_relu (line 74) | def test_quantized_relu(bits, integer, use_sigmoid, negative_slope, test...
  function test_quantized_relu_po2 (line 131) | def test_quantized_relu_po2(bits, negative_slope, test_values, expected_...

FILE: tests/min_max_test.py
  function test_binary (line 26) | def test_binary():
  function test_ternary (line 36) | def test_ternary():
  function test_quantized_bits (line 46) | def test_quantized_bits():
  function test_po2 (line 81) | def test_po2():
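The binary/ternary/po2 range tests above exercise quantizers that collapse values onto a handful of levels. As an illustrative sketch only (not QKeras's implementation, which additionally supports trained scales and stochastic variants), a threshold-based ternary quantizer mapping inputs to {-1, 0, +1} can be written as:

```python
import numpy as np

def ternary_sketch(x, threshold=0.33):
    # Hypothetical minimal ternary quantizer: values with magnitude below
    # `threshold` snap to 0, everything else to its sign (-1 or +1).
    # The threshold value here is an assumption for illustration.
    return np.where(np.abs(x) < threshold, 0.0, np.sign(x))
```

A binary quantizer is the degenerate case with no zero band, i.e. `np.sign(x)` with a convention for exact zeros.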

FILE: tests/print_qstats_test.py
  function create_network (line 33) | def create_network():
  function create_mix_network (line 42) | def create_mix_network():
  function create_network_with_bn (line 52) | def create_network_with_bn():
  function test_conversion_print_qstats (line 65) | def test_conversion_print_qstats():

FILE: tests/qactivation_test.py
  function disable_test_quantized_po2 (line 118) | def disable_test_quantized_po2(
  function disable_test_quantized_relu_po2 (line 193) | def disable_test_quantized_relu_po2(bits, max_value, use_stochastic_roun...
  function test_smooth_sigmoid (line 207) | def test_smooth_sigmoid():
  function test_hard_sigmoid (line 226) | def test_hard_sigmoid():
  function test_quantized_sigmoid (line 281) | def test_quantized_sigmoid(bits, sigmoid_type, use_real_sigmoid,
  function test_quantized_sigmoid_limits (line 330) | def test_quantized_sigmoid_limits(
  function test_quantized_tanh (line 373) | def test_quantized_tanh(bits, use_real_tanh, test_values, expected_values):
  function test_quantized_tanh_limits (line 420) | def test_quantized_tanh_limits(bits, sigmoid_type, use_real_tanh, test_v...
  function test_quantized_relu (line 469) | def test_quantized_relu(bits, integer, use_sigmoid, test_values, expecte...
  function test_quantized_bits (line 527) | def test_quantized_bits(bits, integer, symmetric, keep_negative, test_va...
  function test_quantized_bits_with_auto_po2_scale (line 547) | def test_quantized_bits_with_auto_po2_scale(
  function test_quantized_bits_with_post_training_scale (line 561) | def test_quantized_bits_with_post_training_scale():
  function test_ternary (line 592) | def test_ternary(alpha, threshold, test_values, expected_values):
  function test_binary (line 613) | def test_binary(use_01, alpha, test_values, expected_values):
  function test_stochastic_round_quantized_po2 (line 630) | def test_stochastic_round_quantized_po2(test_values, expected_values):
  function test_stochastic_round_quantized_relu_po2 (line 648) | def test_stochastic_round_quantized_relu_po2(test_values, expected_values):
  function test_stochastic_binary (line 659) | def test_stochastic_binary():
  function test_stochastic_binary_inference_mode (line 706) | def test_stochastic_binary_inference_mode(alpha, test_values, expected_v...
  function test_stochastic_ternary (line 738) | def test_stochastic_ternary(bound, alpha, temperature, expected_values,
  function test_stochastic_ternary_inference_mode (line 772) | def test_stochastic_ternary_inference_mode(alpha, threshold, test_values,
  function test_quantized_hswish (line 806) | def test_quantized_hswish(bits, integer, symmetric, relu_shift,
  function test_quantized_relu_fast_inference (line 816) | def test_quantized_relu_fast_inference():
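The `test_quantized_bits` cases above are parameterized by `bits`, `integer`, `symmetric`, and `keep_negative`. As a hedged sketch of the underlying fixed-point idea (not QKeras's exact code, which also handles `alpha` scaling and straight-through gradients), a round-and-clip quantizer over a `bits`-wide fixed-point grid might look like:

```python
import numpy as np

def fake_quantize(x, bits=4, integer=0, keep_negative=True):
    # Illustrative fixed-point quantizer sketch. One bit is spent on the
    # sign when keep_negative is True; `integer` bits cover the integer
    # part; the rest set the step size. Parameter semantics are an
    # assumption modeled loosely on the quantized_bits signature.
    frac_bits = bits - integer - (1 if keep_negative else 0)
    step = 2.0 ** -frac_bits
    lo = -(2.0 ** integer) if keep_negative else 0.0
    hi = 2.0 ** integer - step
    return np.clip(np.round(x / step) * step, lo, hi)
```

For example, with `bits=4, integer=0` the step is 0.125, so 0.3 rounds to 0.25 and 1.5 saturates at the top of the representable range.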

FILE: tests/qadaptiveactivation_test.py
  function run_qadaptiveactivation_test (line 31) | def run_qadaptiveactivation_test(input_val, kwargs):
  function test_qadaptiveact_ema (line 121) | def test_qadaptiveact_ema(momentum, ema_freeze_delay, total_steps,
  function test_qadaptiveactivation (line 169) | def test_qadaptiveactivation():

FILE: tests/qalpha_test.py
  function test_binary_auto (line 37) | def test_binary_auto():
  function test_binary_auto_po2 (line 58) | def test_binary_auto_po2():
  function test_ternary_auto (line 83) | def test_ternary_auto():
  function test_ternary_auto_po2 (line 103) | def test_ternary_auto_po2():
  function test_get_integer_bits (line 128) | def test_get_integer_bits():

FILE: tests/qconvolutional_test.py
  function test_qnetwork (line 52) | def test_qnetwork():
  function test_sequential_qnetwork (line 162) | def test_sequential_qnetwork():
  function test_qconv1d (line 206) | def test_qconv1d(layer_cls):
  function test_qconv2dtranspose (line 288) | def test_qconv2dtranspose():
  function test_masked_qconv2d_creates_correct_parameters (line 312) | def test_masked_qconv2d_creates_correct_parameters():
  function test_qconv2d_masks_weights (line 328) | def test_qconv2d_masks_weights():
  function test_masked_qconv2d_load_restore_works (line 355) | def test_masked_qconv2d_load_restore_works():
  function test_qconv2d_groups_works (line 385) | def test_qconv2d_groups_works():

FILE: tests/qdepthwise_conv2d_transpose_test.py
  function create_model (line 118) | def create_model(group_size=1):
  function create_quantized_model (line 137) | def create_quantized_model(group_size=1):
  function test_qseparable_conv2d_transpose (line 156) | def test_qseparable_conv2d_transpose():
  function test_quantization_in_separable_conv2d_transpose (line 198) | def test_quantization_in_separable_conv2d_transpose():
  function test_qseparable_conv2d_transpose_with_groups (line 241) | def test_qseparable_conv2d_transpose_with_groups():
  function test_save_and_load_model (line 284) | def test_save_and_load_model():

FILE: tests/qlayers_test.py
  function qdense_util (line 45) | def qdense_util(layer_cls,
  function test_qdense (line 93) | def test_qdense(layer_kwargs, input_data, weight_data, bias_data,
  function test_qactivation_loads (line 103) | def test_qactivation_loads():

FILE: tests/qmac_test.py
  function create_qmac_model (line 37) | def create_qmac_model(layer_cls,
  function test_qmac (line 64) | def test_qmac(layer_kwargs, input_data, weight_data, bias_data,

FILE: tests/qnoise_test.py
  function test_qnoise_quantized_bits (line 30) | def test_qnoise_quantized_bits():
  function test_qnoise_quantized_relu (line 69) | def test_qnoise_quantized_relu():

FILE: tests/qpooling_test.py
  function test_q_average_pooling (line 81) | def test_q_average_pooling(pooling, input_size, pool_size, strides, padd...
  function test_qpooling_in_model_quantize (line 145) | def test_qpooling_in_model_quantize():
  function test_qpooling_in_qtools (line 175) | def test_qpooling_in_qtools():
  function test_QAveragePooling_output (line 263) | def test_QAveragePooling_output():
  function test_QGlobalAveragePooling_output (line 280) | def test_QGlobalAveragePooling_output():

FILE: tests/qrecurrent_test.py
  function test_qrnn (line 101) | def test_qrnn(rnn, all_weights_signature, expected_output):
  function test_qbidirectional (line 204) | def test_qbidirectional(rnn, all_weights_signature, expected_output):
  function create_network_rnn (line 267) | def create_network_rnn(rnn):
  function test_rnn_conversion (line 274) | def test_rnn_conversion(rnn):
  function create_network_birnn (line 300) | def create_network_birnn(rnn):
  function test_birnn_conversion (line 307) | def test_birnn_conversion(rnn):
  function test_birnn_subrnn (line 343) | def test_birnn_subrnn():

FILE: tests/qseparable_conv2d_transpose_test.py
  function create_model (line 33) | def create_model():
  function create_quantized_model (line 53) | def create_quantized_model():
  function test_qseparable_conv2d_transpose (line 73) | def test_qseparable_conv2d_transpose():
  function test_quantization_in_separable_conv2d_transpose (line 139) | def test_quantization_in_separable_conv2d_transpose():
  function test_save_and_load_model (line 179) | def test_save_and_load_model():

FILE: tests/qtools_model_test.py
  function qdense_model_fork (line 44) | def qdense_model_fork():
  function qconv_model (line 71) | def qconv_model():
  function po2_qbits_model (line 98) | def po2_qbits_model():
  function float_po2_model (line 111) | def float_po2_model():
  function qbn_model (line 133) | def qbn_model(
  function qbn_model_inference (line 154) | def qbn_model_inference():
  function add_qmodel (line 210) | def add_qmodel(quantizer1, quantizer2, quantizer3):
  function multiply_qmodel (line 245) | def multiply_qmodel():
  function pooling_qmodel (line 281) | def pooling_qmodel():
  function maximum_qmodel (line 294) | def maximum_qmodel(quantizer1, quantizer2, quantizer3):
  function concatenate_qmodel (line 327) | def concatenate_qmodel(quantizer1, quantizer2, quantizer3):
  function run (line 361) | def run(model, input_quantizers, is_inference=False,
  function test_wrong_input_quantizers (line 382) | def test_wrong_input_quantizers():
  function test_qbn_inference (line 410) | def test_qbn_inference():
  function test_invalid_denominator_qbn (line 477) | def test_invalid_denominator_qbn():
  function test_conv2d (line 489) | def test_conv2d():
  function test_qdense_model_fork (line 522) | def test_qdense_model_fork():
  function test_util_layers (line 542) | def test_util_layers():
  function test_merge_layers (line 583) | def test_merge_layers():
  function test_pooling (line 627) | def test_pooling():
  function test_qenergy (line 643) | def test_qenergy():
  function test_quntized_reference_energy_same_as_floating_trial (line 745) | def test_quntized_reference_energy_same_as_floating_trial():
  function test_auto_po2 (line 839) | def test_auto_po2():
  function test_big_bias_quantizer (line 901) | def test_big_bias_quantizer():
  function test_qdepthwiseconv2d (line 914) | def test_qdepthwiseconv2d():
  function test_divide_and_conquer_sequential_conv2d (line 950) | def test_divide_and_conquer_sequential_conv2d():

FILE: tests/qtools_util_test.py
  function test_adjust_multiplier_for_auto_po2 (line 42) | def test_adjust_multiplier_for_auto_po2(

FILE: tests/quantizer_impl_test.py
  function test_QuantizedBits (line 34) | def test_QuantizedBits():
  function test_QuantizedBits_ElementsPerScale (line 49) | def test_QuantizedBits_ElementsPerScale():
  function test_QuantizedTanh (line 99) | def test_QuantizedTanh():
  function test_QuantizedUlaw (line 112) | def test_QuantizedUlaw():
  function test_Binary (line 125) | def test_Binary():
  function test_StochasticBinary (line 138) | def test_StochasticBinary():
  function test_Bernoulli (line 152) | def test_Bernoulli():
  function test_QuantizedRelu (line 165) | def test_QuantizedRelu():
  function test_Ternary (line 182) | def test_Ternary():
  function test_StochasticTernary (line 197) | def test_StochasticTernary():
  function test_PowerOfTwo (line 212) | def test_PowerOfTwo():
  function test_ReluPowerOfTwo (line 226) | def test_ReluPowerOfTwo():
  function test_GetScale_PerChannelScale (line 240) | def test_GetScale_PerChannelScale():
  function _get_num_unique_elements (line 282) | def _get_num_unique_elements(input_tensor):
  function test_GetScale_ElementsPerScale_Scalar_ScaleAxis_EPS (line 286) | def test_GetScale_ElementsPerScale_Scalar_ScaleAxis_EPS():
  function test_GetScale_ElementsPerScale_List_ScaleAxis_EPS (line 363) | def test_GetScale_ElementsPerScale_List_ScaleAxis_EPS():
  function test_GetScale_MinPO2Exponent_MaxPO2Exponent (line 443) | def test_GetScale_MinPO2Exponent_MaxPO2Exponent():
  function test_GetUnrolledShape_GetRolledBackShape (line 486) | def test_GetUnrolledShape_GetRolledBackShape():

FILE: tests/quantizer_registry_test.py
  function test_lookup (line 44) | def test_lookup(quantizer_name):

FILE: tests/range_test.py
  function test_quantized_relu_range (line 47) | def test_quantized_relu_range(bits, integer, expected_values):
  function test_quantized_bits_range (line 70) | def test_quantized_bits_range(bits, integer, expected_values):

FILE: tests/registry_test.py
  function sample_function (line 25) | def sample_function(arg):
  class SampleClass (line 30) | class SampleClass(object):
    method __init__ (line 33) | def __init__(self, arg):
    method get_arg (line 36) | def get_arg(self):
  function test_register_function (line 40) | def test_register_function():
  function test_register_class (line 48) | def test_register_class():
  function test_register_with_name (line 56) | def test_register_with_name():
  function test_lookup_missing_item (line 65) | def test_lookup_missing_item():
  function test_lookup_missing_name (line 70) | def test_lookup_missing_name():

FILE: tests/safe_eval_test.py
  function test_get_params1 (line 31) | def test_get_params1():
  function test_get_params2 (line 38) | def test_get_params2():
  function test_get_params3 (line 47) | def test_get_params3():
  function test_safe_eval1 (line 63) | def test_safe_eval1():
  function i_func (line 68) | def i_func(s):
  function myadd2 (line 72) | def myadd2(a, b):
  function myadd (line 76) | def myadd(a=32, b=10):
  class myaddcls (line 79) | class myaddcls(object):
    method __call__ (line 80) | def __call__(self, a=32, b=10):
  function test_safe_eval2 (line 83) | def test_safe_eval2():
  function test_safe_eval3 (line 88) | def test_safe_eval3():
  function test_safe_eval4 (line 93) | def test_safe_eval4():
  function test_safe_eval5 (line 98) | def test_safe_eval5():
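The `safe_eval` tests above revolve around turning a quantizer string such as `"quantized_bits(4, 0, alpha=1)"` into a callable plus its arguments. A minimal sketch of that parsing step, using the standard-library `ast` module rather than QKeras's actual `safe_eval` utility, could be:

```python
import ast

def parse_quantizer_string(s):
    # Hedged sketch: extract the function name, positional literals, and
    # keyword literals from a call-expression string. A real safe_eval
    # would then look the name up in a whitelist before calling it.
    node = ast.parse(s, mode="eval").body
    if not isinstance(node, ast.Call):
        raise ValueError(f"not a call expression: {s!r}")
    name = node.func.id
    args = [ast.literal_eval(a) for a in node.args]
    kwargs = {k.arg: ast.literal_eval(k.value) for k in node.keywords}
    return name, args, kwargs
```

Using `ast.literal_eval` on each argument keeps the parse restricted to literals, which is the property that makes this "safe" relative to a bare `eval`.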

FILE: tests/utils_test.py
  function create_quantized_network (line 39) | def create_quantized_network():
  function create_quantized_po2_network (line 64) | def create_quantized_po2_network():
  function set_network_sparsity (line 75) | def set_network_sparsity(model, sparsity):
  function test_get_model_sparsity (line 90) | def test_get_model_sparsity():
  function test_get_po2_model_sparsity (line 104) | def test_get_po2_model_sparsity():
  function test_convert_to_folded_model (line 123) | def test_convert_to_folded_model():
  function test_find_bn_fusing_layer_pair (line 171) | def test_find_bn_fusing_layer_pair():
  function create_test_model_for_scale_freezing (line 230) | def create_test_model_for_scale_freezing(bias_quantizer):
  function test_clone_model_and_freeze_auto_po2_scale (line 291) | def test_clone_model_and_freeze_auto_po2_scale():
  function test_clone_model_and_freeze_auto_po2_scale_serialization (line 317) | def test_clone_model_and_freeze_auto_po2_scale_serialization():
  function test_clone_model_and_freeze_auto_po2_scale_error (line 330) | def test_clone_model_and_freeze_auto_po2_scale_error():
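The sparsity tests above (`set_network_sparsity`, `test_get_model_sparsity`) check the fraction of zero-valued weights in a model. The core measurement, sketched here in plain numpy rather than via QKeras's own utilities, is simply the ratio of exactly-zero entries to total entries across the weight tensors:

```python
import numpy as np

def weight_sparsity(weight_arrays):
    # Illustrative sketch: fraction of exactly-zero entries across a list
    # of weight arrays. Real models would pass layer.get_weights() output;
    # po2-quantized layers need care because their zero encoding differs.
    total = sum(w.size for w in weight_arrays)
    zeros = sum(int(np.count_nonzero(w == 0)) for w in weight_arrays)
    return zeros / total if total else 0.0
```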
Condensed preview — 140 files, each entry showing the file path, character count, and a short content snippet (full structured content: 1,495K chars).
[
  {
    "path": ".github/workflows/ci.yml",
    "chars": 792,
    "preview": "# This workflow will install Python dependencies, run tests and lint with a single version of Python\n# For more informat"
  },
  {
    "path": "CHANGELOG",
    "chars": 163,
    "preview": "v0.5, 2019/07 -- Initial release.\nv0.6, 2020/03 -- Support tensorflow 2.0, tf.keras and python3.\nv0.7, 2020/03 -- Enhanc"
  },
  {
    "path": "CONTRIBUTING.md",
    "chars": 1101,
    "preview": "# How to Contribute\n\nWe'd love to accept your patches and contributions to this project. There are\njust a few small guid"
  },
  {
    "path": "LICENSE",
    "chars": 11415,
    "preview": "Copyright 2019 The QKeras Authors.  All rights reserved.\n\n                                 Apache License\n              "
  },
  {
    "path": "MANIFEST.in",
    "chars": 43,
    "preview": "include *.txt\nrecursive-include docs *.txt\n"
  },
  {
    "path": "README.md",
    "chars": 11430,
    "preview": "# QKeras\n\n[github.com/google/qkeras](https://github.com/google/qkeras)\n\n## Introduction\n\nQKeras is a quantization extens"
  },
  {
    "path": "examples/example_act.py",
    "chars": 7210,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_b2t.py",
    "chars": 1605,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_cifar10_po2.py",
    "chars": 4128,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_keras_to_qkeras.py",
    "chars": 2734,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_mnist.py",
    "chars": 4802,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_mnist_ae.py",
    "chars": 3980,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_mnist_b2t.py",
    "chars": 5700,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_mnist_bn.py",
    "chars": 6340,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_mnist_po2.py",
    "chars": 4370,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_mnist_prune.py",
    "chars": 7149,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_qdense.py",
    "chars": 3846,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_qoctave.py",
    "chars": 5382,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "examples/example_ternary.py",
    "chars": 3467,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/__init__.py",
    "chars": 1122,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/compress.py",
    "chars": 1956,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/conv2d.py",
    "chars": 13437,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/dense.py",
    "chars": 9414,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/generate_rf_code.py",
    "chars": 28407,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/optimizer.py",
    "chars": 9825,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/random_forest/__init__.py",
    "chars": 817,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/random_forest/gen_random_tree.py",
    "chars": 1862,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/random_forest/parser.py",
    "chars": 3954,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/random_forest/random_forest.py",
    "chars": 6887,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/random_forest/random_tree.py",
    "chars": 5882,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/random_forest/utils.py",
    "chars": 3093,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/receptive.py",
    "chars": 2919,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/table/__init__.py",
    "chars": 737,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/table/parser.py",
    "chars": 3954,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/table/utils.py",
    "chars": 3166,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "experimental/lo/utils.py",
    "chars": 5176,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "notebook/AutoQKeras.ipynb",
    "chars": 54286,
    "preview": "{\n \"cells\": [\n   {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\":"
  },
  {
    "path": "notebook/CodebookQuantization.ipynb",
    "chars": 10188,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": "
  },
  {
    "path": "notebook/QKerasTutorial.ipynb",
    "chars": 33700,
    "preview": "{\n \"cells\": [\n   {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\":"
  },
  {
    "path": "notebook/QRNNTutorial.ipynb",
    "chars": 17723,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": "
  },
  {
    "path": "qkeras/__init__.py",
    "chars": 1978,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/autoqkeras/__init__.py",
    "chars": 1020,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/autoqkeras/autoqkeras_internal.py",
    "chars": 46514,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/examples/run/get_data.py",
    "chars": 2398,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/examples/run/get_model.py",
    "chars": 2563,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/examples/run/networks/__init__.py",
    "chars": 780,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/examples/run/networks/conv_block.py",
    "chars": 7179,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/examples/run/plot_history.py",
    "chars": 1662,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/forgiving_metrics/__init__.py",
    "chars": 976,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/forgiving_metrics/forgiving_bits.py",
    "chars": 8566,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/forgiving_metrics/forgiving_energy.py",
    "chars": 8002,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/forgiving_metrics/forgiving_factor.py",
    "chars": 1621,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/quantization_config.py",
    "chars": 1755,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/tests/test_forgiving_factor.py",
    "chars": 5012,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/autoqkeras/utils.py",
    "chars": 3103,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "qkeras/b2t.py",
    "chars": 4331,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/base_quantizer.py",
    "chars": 2874,
    "preview": "# Copyright 2025 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/bn_folding_utils.py",
    "chars": 7973,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/callbacks.py",
    "chars": 6915,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/codebook.py",
    "chars": 7717,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/estimate.py",
    "chars": 23503,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/experimental/quantizers/__init__.py",
    "chars": 918,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/experimental/quantizers/quantizers_po2.py",
    "chars": 53793,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qconv2d_batchnorm.py",
    "chars": 13460,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qconvolutional.py",
    "chars": 46456,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qdepthwise_conv2d_transpose.py",
    "chars": 11345,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qdepthwiseconv2d_batchnorm.py",
    "chars": 14111,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qlayers.py",
    "chars": 29523,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qmac.py",
    "chars": 6262,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qmodel.proto",
    "chars": 2107,
    "preview": "// Copyright 2019 Google LLC\n//\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use"
  },
  {
    "path": "qkeras/qnormalization.py",
    "chars": 13600,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qoctave.py",
    "chars": 21944,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qpooling.py",
    "chars": 8046,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qrecurrent.py",
    "chars": 53268,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qseparable_conv2d_transpose.py",
    "chars": 13853,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/DnC/divide_and_conquer.py",
    "chars": 33192,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/DnC/dnc_layer_cost_ace.py",
    "chars": 7286,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/__init__.py",
    "chars": 868,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/config_public.py",
    "chars": 1566,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/examples/example_generate_json.py",
    "chars": 5412,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/examples/example_get_energy.py",
    "chars": 5864,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/generate_layer_data_type_map.py",
    "chars": 37057,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/interface.py",
    "chars": 8273,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/qenergy/__init__.py",
    "chars": 834,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/qenergy/qenergy.py",
    "chars": 11732,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/qgraph.py",
    "chars": 13133,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/qtools_util.py",
    "chars": 13957,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/__init__.py",
    "chars": 1501,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/accumulator_factory.py",
    "chars": 2202,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/accumulator_impl.py",
    "chars": 4230,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/adder_factory.py",
    "chars": 3284,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/adder_impl.py",
    "chars": 3747,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/divider_factory.py",
    "chars": 4900,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/divider_impl.py",
    "chars": 5137,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/fused_bn_factory.py",
    "chars": 4866,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/merge_factory.py",
    "chars": 7235,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/multiplier_factory.py",
    "chars": 6618,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/multiplier_impl.py",
    "chars": 12880,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/qbn_factory.py",
    "chars": 4057,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/quantizer_factory.py",
    "chars": 6168,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/quantizer_impl.py",
    "chars": 14840,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/quantized_operators/subtractor_factory.py",
    "chars": 1908,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/run_qtools.py",
    "chars": 9255,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/qtools/settings.py",
    "chars": 3572,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/quantizer_imports.py",
    "chars": 1238,
    "preview": "# Copyright 2025 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/quantizer_registry.py",
    "chars": 1200,
    "preview": "# Copyright 2024 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/quantizers.py",
    "chars": 121969,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/registry.py",
    "chars": 1894,
    "preview": "# Copyright 2024 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/safe_eval.py",
    "chars": 4519,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "qkeras/utils.py",
    "chars": 58074,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "requirements.txt",
    "chars": 488,
    "preview": "tensorflow>=2.5.0rc0\nnumpy>=1.16.5\npyparser\npandas>=1.1.0\nmatplotlib>=3.3.0\nscipy>=1.4.1\nsetuptools>=41.0.0\nargparse>=1."
  },
  {
    "path": "setup.cfg",
    "chars": 566,
    "preview": "[metadata]\nname = qkeras\nversion = 0.9.0\nauthor = Google\nauthor_email = qkeras-team@google.com\ndescription = A quantizat"
  },
  {
    "path": "setup.py",
    "chars": 1685,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/automatic_conversion_test.py",
    "chars": 8234,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/autoqkeras_test.py",
    "chars": 4419,
    "preview": "# ==============================================================================\n# Copyright 2020 Google LLC\n#\n#\n# Licen"
  },
  {
    "path": "tests/bn_folding_test.py",
    "chars": 24795,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/callbacks_test.py",
    "chars": 6409,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/codebook_test.py",
    "chars": 2700,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/leakyrelu_test.py",
    "chars": 6245,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/min_max_test.py",
    "chars": 3498,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/print_qstats_test.py",
    "chars": 3205,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qactivation_test.py",
    "chars": 29329,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qadaptiveactivation_test.py",
    "chars": 7982,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qalpha_test.py",
    "chars": 6642,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qconvolutional_test.py",
    "chars": 13471,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qdepthwise_conv2d_transpose_test.py",
    "chars": 8506,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qlayers_test.py",
    "chars": 4704,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qmac_test.py",
    "chars": 3189,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qnoise_test.py",
    "chars": 6554,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qpooling_test.py",
    "chars": 10737,
    "preview": "# Copyright 2021 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qrecurrent_test.py",
    "chars": 13203,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qseparable_conv2d_transpose_test.py",
    "chars": 6610,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qtools_model_test.py",
    "chars": 35229,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/qtools_util_test.py",
    "chars": 3741,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/quantizer_impl_test.py",
    "chars": 24597,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/quantizer_registry_test.py",
    "chars": 1489,
    "preview": "# Copyright 2024 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/range_test.py",
    "chars": 3475,
    "preview": "# Copyright 2020 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/registry_test.py",
    "chars": 2300,
    "preview": "# Copyright 2024 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/safe_eval_test.py",
    "chars": 2486,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  },
  {
    "path": "tests/utils_test.py",
    "chars": 11404,
    "preview": "# Copyright 2019 Google LLC\n#\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this"
  }
]
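The manifest above is a JSON array in which each record pairs a repository path with its size in characters and a short preview of the file's first bytes. As a minimal sketch of how such a manifest can be consumed, the snippet below parses a two-entry excerpt (inlined here for illustration; in practice you would load the downloaded manifest from disk) and computes the total character count and the largest file:

```python
import json

# Two records copied from the manifest above, inlined as a JSON string.
# A real consumer would read the full downloaded manifest file instead.
manifest_json = """
[
  {"path": "qkeras/quantizers.py", "chars": 121969,
   "preview": "# Copyright 2019 Google LLC"},
  {"path": "qkeras/utils.py", "chars": 58074,
   "preview": "# Copyright 2019 Google LLC"}
]
"""

records = json.loads(manifest_json)

# Sum the per-file sizes and find the biggest file in the excerpt.
total_chars = sum(r["chars"] for r in records)
largest = max(records, key=lambda r: r["chars"])

print(total_chars)      # 180043
print(largest["path"])  # qkeras/quantizers.py
```

The same pattern scales to the full 140-entry manifest, which is useful for deciding which files fit in a model's context window before pasting them in.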

About this extraction

This page contains the full source code of the google/qkeras GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 140 files (1.4 MB, approximately 370.1k tokens) and includes a symbol index of 1181 functions, classes, methods, constants, and types. Use it with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
