Repository: XanaduAI/quantum-neural-networks
Branch: master
Commit: d28e68b2faa8
Files: 17
Total size: 70.2 KB

Directory structure:
quantum-neural-networks/

├── .gitignore
├── LICENSE
├── README.md
├── fraud_detection/
│   ├── README.md
│   ├── data_processor.py
│   ├── fraud_detection.py
│   ├── plot_confusion_matrix.py
│   ├── roc.py
│   └── testing.py
├── function_fitting/
│   ├── function_fitting.py
│   ├── sine_outputs.npy
│   ├── sine_test_data.npy
│   └── sine_train_data.npy
├── requirements.txt
├── tetrominos_learning/
│   ├── plot_images.py
│   └── tetrominos_learning.py
└── version_check.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
creditcard*
outputs*
roc.pdf
confusion.pdf


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
<img align="left" src="https://github.com/XanaduAI/quantum-neural-networks/blob/master/static/tetronimo.png" width=300px>

# Continuous-variable quantum neural networks

This repository contains the source code used to produce the results presented in [*"Continuous-variable quantum neural networks"*](https://doi.org/10.1103/PhysRevResearch.1.033063).

<br/>

## Requirements

To construct and optimize the variational quantum circuits, these scripts and notebooks use the TensorFlow backend of [Strawberry Fields](https://github.com/XanaduAI/strawberryfields). In addition, matplotlib is required for generating output plots.

**Due to subsequent interface upgrades, these scripts will only work with the following
configuration:**

- Strawberry Fields version 0.10.0
- TensorFlow version 1.3
- Python version 3.5 or 3.6

Your version of Python can be checked by running `python --version`. The correct versions of
Strawberry Fields and TensorFlow can be installed by running `pip install -r requirements.txt`
from the main directory of this repository.
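The scripts also import `version_check.py` before running. As a hedged sketch of that kind of guard (the helper names below are illustrative and the actual script may differ in detail), assuming it simply compares installed versions against the pinned ones:

```python
# Hypothetical sketch of a version guard; version_check.py may differ in detail.
REQUIRED = {"strawberryfields": "0.10.0", "tensorflow": "1.3"}

def version_tuple(version):
    """Convert a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def check_version(package, installed, required):
    """Raise if the installed major.minor version does not match the required one."""
    if version_tuple(installed)[:2] != version_tuple(required)[:2]:
        raise RuntimeError(
            "{} {} found, but version {} is required".format(package, installed, required)
        )
    return True
```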

## Contents

<!-- <p align="center">
	<img src="https://github.com/XanaduAI/quantum-neural-networks/blob/master/static/function_fitting.png">
</p> -->

* **Function fitting**: The folder `function_fitting` contains the Python script `function_fitting.py`, which automates the process of fitting classical functions using continuous-variable (CV) variational quantum circuits. Simply specify the function you would like to fit, along with other hyperparameters, and this script automatically constructs and optimizes the CV quantum neural network. In addition, training data is also provided.

* **Quantum autoencoder**: coming soon.

* **Quantum fraud detection**: The folder `fraud_detection` contains the Python script `fraud_detection.py`, which builds and trains a hybrid classical/quantum model for fraud detection. Additional scripts are provided for visualizing the results.

* **Tetrominos learning**: The folder `tetrominos_learning` contains the Python script `tetrominos_learning.py`, which trains a continuous-variable (CV) quantum neural network. The task of the network is to encode 7 different 4x4 images, representing the (L,O,T,I,S,J,Z) [tetrominos](https://en.wikipedia.org/wiki/Tetromino), in the photon number distribution of two light modes. Once the training phase is completed, the script `plot_images.py` can be executed to generate a `.png` figure representing the final results.
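The image readout works through the joint photon-number distribution of the two modes. A minimal NumPy sketch of that readout, with a random normalized ket standing in for the trained state (the cutoff value here is illustrative):

```python
import numpy as np

cutoff = 11  # illustrative Fock-basis truncation

# Random normalized two-mode ket standing in for the trained output state.
rng = np.random.default_rng(1)
ket = rng.normal(size=(cutoff, cutoff))
ket /= np.linalg.norm(ket)

# Pixel (n, m) of the 4x4 image is the probability of measuring
# n photons in mode 0 and m photons in mode 1.
image = np.abs(ket[:4, :4]) ** 2
```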

<img align='right' src="https://github.com/XanaduAI/quantum-neural-networks/blob/master/static/tetronimo_gif.gif">

## Using the scripts

To use the scripts, set the input data, output data, and hyperparameters by modifying the scripts directly, then enter the relevant subdirectory and run the script using Python 3:

```bash
python3 script_name.py
```

The outputs of the simulations will be saved in the subdirectory.

To access any saved data, the file can be loaded using NumPy:

```python
results = np.load('simulation_results.npz')
```
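The `.npz` format bundles several named arrays in one file. For example (the array names here are illustrative; each script chooses its own):

```python
import numpy as np

# Save two named arrays into a single .npz archive (the names are illustrative).
np.savez('simulation_results.npz', cost=np.array([0.9, 0.5, 0.1]), params=np.zeros((2, 3)))

# Load the archive and access individual arrays by name.
results = np.load('simulation_results.npz')
print(results.files)       # names of the stored arrays
cost_history = results['cost']
```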

## Authors

Nathan Killoran, Thomas R. Bromley, Juan Miguel Arrazola, Maria Schuld, Nicolás Quesada, and Seth Lloyd.

If you are doing any research using this source code and Strawberry Fields, please cite the following two papers:

> Nathan Killoran, Thomas R. Bromley, Juan Miguel Arrazola, Maria Schuld, Nicolás Quesada, and Seth Lloyd. Continuous-variable quantum neural networks. [Physical Review Research, 1(3), 033063](https://doi.org/10.1103/PhysRevResearch.1.033063) (2019).

> Nathan Killoran, Josh Izaac, Nicolás Quesada, Ville Bergholm, Matthew Amy, and Christian Weedbrook. Strawberry Fields: A Software Platform for Photonic Quantum Computing. [Quantum, 3, 129](https://quantum-journal.org/papers/q-2019-03-11-129/) (2019).

## License

This source code is free and open source, released under the Apache License, Version 2.0.


================================================
FILE: fraud_detection/README.md
================================================
<img align="left" src="https://github.com/XanaduAI/quantum-neural-networks/blob/master/static/fraud_detection.png" width=300px>

# Fraud detection

This folder provides the source code used in Experiment B in *"Continuous-variable quantum neural networks"* [arXiv:1806.06871](https://arxiv.org/abs/1806.06871).

## Getting the data

The raw data is sourced from the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset on Kaggle. The `creditcard.csv` file should be downloaded and placed in this folder. The user can then run:
```bash
python3 data_processor.py
```
This script splits the transactions into two cross-validation folds, producing undersampled files for training and larger combined files for testing.

## Training and testing the model

The model is a hybrid classical-quantum classifier: a classical feedforward network whose outputs set the gate parameters of the input layer of a two-mode CV quantum neural network. The model is trained to output a photon in one mode for a genuine credit card transaction, and a photon in the other mode for a fraudulent transaction.
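Concretely, the classifier reads out the probabilities of the two one-photon basis states of the output ket. A minimal NumPy sketch, with a random normalized ket standing in for the circuit's output state:

```python
import numpy as np

cutoff = 10  # Fock-basis truncation, as in fraud_detection.py

# Random normalized two-mode ket standing in for the circuit's output state.
rng = np.random.default_rng(0)
ket = rng.normal(size=(cutoff, cutoff)) + 1j * rng.normal(size=(cutoff, cutoff))
ket /= np.linalg.norm(ket)

# |1, 0> signals a genuine transaction; |0, 1> signals a fraudulent one.
p_genuine = np.abs(ket[1, 0]) ** 2
p_fraudulent = np.abs(ket[0, 1]) ** 2
prediction = 'genuine' if p_genuine > p_fraudulent else 'fraudulent'
```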

Training can be performed with:
```bash
python3 fraud_detection.py
```
| WARNING: this script can take a long time to run. On a typical PC, it may take hours to arrive at a well-trained model. |
| --- |

The model is periodically saved during training, and progress can be monitored by launching TensorBoard in the terminal:
```bash
tensorboard --logdir=outputs/tensorboard/simulation_label
```
where `simulation_label` is the name used to refer to a particular run of the script `fraud_detection.py` (this is specified within the file itself; the default is `1`).

Testing can be performed with:
```bash
python3 testing.py
```
| WARNING: this script can take a long time to run |
| --- |

Here, the user must edit `testing.py` to point to the simulation label and checkpoint of the model which is to be tested. These are specified under the variables `simulation_label` and `ckpt_val` in `testing.py`.

The output of testing is a confusion table, saved as a NumPy array in `outputs/confusion/simulation_label`. The confusion table is given for multiple threshold probabilities above which a transaction is considered genuine.
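As a hedged sketch of how such a table can be assembled (the exact array layout produced by `testing.py` may differ):

```python
import numpy as np

def confusion_at_thresholds(p_genuine, is_genuine, thresholds):
    """Confusion counts [TP, FP, FN, TN] at each threshold, where a
    transaction is classified genuine when p_genuine >= threshold."""
    rows = []
    for t in thresholds:
        pred = p_genuine >= t
        tp = np.sum(pred & is_genuine)
        fp = np.sum(pred & ~is_genuine)
        fn = np.sum(~pred & is_genuine)
        tn = np.sum(~pred & ~is_genuine)
        rows.append([tp, fp, fn, tn])
    return np.array(rows)

probs = np.array([0.9, 0.8, 0.4, 0.1])
labels = np.array([True, True, False, False])
table = confusion_at_thresholds(probs, labels, np.linspace(0, 1, 11))
```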

## Visualizing the results

The performance of the trained model can be investigated with:
```bash
python3 roc.py
```
which outputs the receiver operating characteristic (ROC) curve and confusion matrix for the optimal threshold probability.
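The ROC curve is the true-positive rate plotted against the false-positive rate as the threshold sweeps from 1 to 0. A minimal sketch of that computation (the plotting and optimal-threshold selection in `roc.py` may differ):

```python
import numpy as np

def roc_points(p_genuine, is_genuine, thresholds):
    """True-positive and false-positive rates at each threshold."""
    n_pos = np.sum(is_genuine)
    n_neg = np.sum(~is_genuine)
    tpr, fpr = [], []
    for t in thresholds:
        pred = p_genuine >= t
        tpr.append(np.sum(pred & is_genuine) / n_pos)
        fpr.append(np.sum(pred & ~is_genuine) / n_neg)
    return np.array(fpr), np.array(tpr)

probs = np.array([0.9, 0.8, 0.4, 0.1])
labels = np.array([True, True, False, False])
fpr, tpr = roc_points(probs, labels, np.linspace(1, 0, 101))

# Area under the curve by the trapezoidal rule (1.0 for a perfect classifier).
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
```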



================================================
FILE: fraud_detection/data_processor.py
================================================
# Copyright 2018 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""For processing data from https://www.kaggle.com/mlg-ulb/creditcardfraud"""
import csv
import numpy as np
import random

# creditcard.csv downloaded from https://www.kaggle.com/mlg-ulb/creditcardfraud
with open('creditcard.csv', 'r') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')

    data = list(csv_reader)

data = data[1:]
data_genuine = []
data_fraudulent = []

# Splitting genuine and fraudulent data
for row in data:
    if int(row[30]) == 0:
        data_genuine.append([float(x) for x in row])
    elif int(row[30]) == 1:
        data_fraudulent.append([float(x) for x in row])

fraudulent_data_points = len(data_fraudulent)

# We want the genuine data points to be 3x the fraudulent ones
undersampling_ratio = 3

genuine_data_points = fraudulent_data_points * undersampling_ratio

random.shuffle(data_genuine)
random.shuffle(data_fraudulent)

# Fraudulent and genuine transactions are split into two datasets for cross validation

data_fraudulent_1 = data_fraudulent[:int(fraudulent_data_points / 2)]
data_fraudulent_2 = data_fraudulent[int(fraudulent_data_points / 2):]

data_genuine_1 = data_genuine[:int(genuine_data_points / 2)]
data_genuine_2 = data_genuine[int(genuine_data_points / 2):genuine_data_points]
data_genuine_remaining = data_genuine[genuine_data_points:]

random.shuffle(data_fraudulent_1)
random.shuffle(data_fraudulent_2)
random.shuffle(data_genuine_1)
random.shuffle(data_genuine_2)

np.savetxt('creditcard_genuine_1.csv', data_genuine_1, delimiter=',')
np.savetxt('creditcard_genuine_2.csv', data_genuine_2, delimiter=',')
np.savetxt('creditcard_fraudulent_1.csv', data_fraudulent_1, delimiter=',')
np.savetxt('creditcard_fraudulent_2.csv', data_fraudulent_2, delimiter=',')
# Larger datasets are used for testing, including genuine transactions unseen in training
np.savetxt('creditcard_combined_1_big.csv', data_fraudulent_1 + data_genuine_1 + data_genuine_remaining, delimiter=',')
np.savetxt('creditcard_combined_2_big.csv', data_fraudulent_2 + data_genuine_2 + data_genuine_remaining, delimiter=',')


================================================
FILE: fraud_detection/fraud_detection.py
================================================
# Copyright 2018 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Fraud detection fitting script"""
import numpy as np
import os

import tensorflow as tf

import strawberryfields as sf
from strawberryfields.ops import Dgate, BSgate, Kgate, Sgate, Rgate

import sys
sys.path.append("..")
import version_check

# ===================================================================================
#                                   Hyperparameters
# ===================================================================================

# Two modes required: one for "genuine" transactions and one for "fraudulent"
mode_number = 2
# Number of photonic quantum layers
depth = 4

# Fock basis truncation
cutoff = 10
# Number of batches (training steps) in the optimization
reps = 30000

# Label for simulation
simulation_label = 1

# Number of data points in each batch
batch_size = 24

# Random initialization of gate parameters
sdev_photon = 0.1
sdev = 1

# Variable clipping values
disp_clip = 5
sq_clip = 5
kerr_clip = 1

# If loading from checkpoint, previous batch number reached
ckpt_val = 0

# Number of repetitions between each output to TensorBoard
tb_reps = 100
# Number of repetitions between each model save
savr_reps = 1000

model_string = str(simulation_label)

# Target location of output
folder_locator = './outputs/'

# Locations of TensorBoard and model save outputs
board_string = folder_locator + 'tensorboard/' + model_string + '/'
checkpoint_string = folder_locator + 'models/' + model_string + '/'

# ===================================================================================
#                                   Loading the training data
# ===================================================================================

# Data outputted from data_processor.py
data_genuine = np.loadtxt('creditcard_genuine_1.csv', delimiter=',')
data_fraudulent = np.loadtxt('creditcard_fraudulent_1.csv', delimiter=',')

# Combining genuine and fraudulent data
data_combined = np.append(data_genuine, data_fraudulent, axis=0)
data_points = len(data_combined)

# ===================================================================================
#                                   Setting up the classical NN input
# ===================================================================================

# Input neurons
input_neurons = 10
# Widths of hidden layers
nn_architecture = [10, 10]
# Output neurons of classical part
output_neurons = 14

# Defining classical network parameters
input_classical_layer = tf.placeholder(tf.float32, shape=[batch_size, input_neurons])

layer_matrix_1 = tf.Variable(tf.random_normal(shape=[input_neurons, nn_architecture[0]]))
offset_1 = tf.Variable(tf.random_normal(shape=[nn_architecture[0]]))

layer_matrix_2 = tf.Variable(tf.random_normal(shape=[nn_architecture[0], nn_architecture[1]]))
offset_2 = tf.Variable(tf.random_normal(shape=[nn_architecture[1]]))

layer_matrix_3 = tf.Variable(tf.random_normal(shape=[nn_architecture[1], output_neurons]))
offset_3 = tf.Variable(tf.random_normal(shape=[output_neurons]))

# Creating hidden layers and output
layer_1 = tf.nn.elu(tf.matmul(input_classical_layer, layer_matrix_1) + offset_1)
layer_2 = tf.nn.elu(tf.matmul(layer_1, layer_matrix_2) + offset_2)

output_layer = tf.nn.elu(tf.matmul(layer_2, layer_matrix_3) + offset_3)

# ===================================================================================
#                                   Defining QNN parameters
# ===================================================================================

# Number of beamsplitters in interferometer
bs_in_interferometer = int(1.0 * mode_number * (mode_number - 1) / 2)

with tf.name_scope('variables'):
    bs_variables = tf.Variable(tf.random_normal(shape=[depth, bs_in_interferometer, 2, 2]
                                                , stddev=sdev))
    phase_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number, 2], stddev=sdev))

    sq_magnitude_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number]
                                                          , stddev=sdev_photon))
    sq_phase_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number]
                                                      , stddev=sdev))
    disp_magnitude_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number]
                                                            , stddev=sdev_photon))
    disp_phase_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number]
                                                        , stddev=sdev))
    kerr_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number], stddev=sdev_photon))

parameters = [layer_matrix_1, offset_1, layer_matrix_2, offset_2, layer_matrix_3, offset_3, bs_variables,
              phase_variables, sq_magnitude_variables, sq_phase_variables, disp_magnitude_variables,
              disp_phase_variables, kerr_variables]


# ===================================================================================
#                                   Constructing quantum layers
# ===================================================================================


# Defining input QNN layer, whose parameters are set by the outputs of the classical network
def input_qnn_layer():
    with tf.name_scope('inputlayer'):
        Sgate(tf.clip_by_value(output_layer[:, 0], -sq_clip, sq_clip), output_layer[:, 1]) | q[0]
        Sgate(tf.clip_by_value(output_layer[:, 2], -sq_clip, sq_clip), output_layer[:, 3]) | q[1]

        BSgate(output_layer[:, 4], output_layer[:, 5]) | (q[0], q[1])

        Rgate(output_layer[:, 6]) | q[0]
        Rgate(output_layer[:, 7]) | q[1]

        Dgate(tf.clip_by_value(output_layer[:, 8], -disp_clip, disp_clip), output_layer[:, 9]) \
        | q[0]
        Dgate(tf.clip_by_value(output_layer[:, 10], -disp_clip, disp_clip), output_layer[:, 11]) \
        | q[1]

        Kgate(tf.clip_by_value(output_layer[:, 12], -kerr_clip, kerr_clip)) | q[0]
        Kgate(tf.clip_by_value(output_layer[:, 13], -kerr_clip, kerr_clip)) | q[1]


# Defining standard QNN layers
def qnn_layer(layer_number):
    with tf.name_scope('layer_{}'.format(layer_number)):
        BSgate(bs_variables[layer_number, 0, 0, 0], bs_variables[layer_number, 0, 0, 1]) \
        | (q[0], q[1])

        for i in range(mode_number):
            Rgate(phase_variables[layer_number, i, 0]) | q[i]

        for i in range(mode_number):
            Sgate(tf.clip_by_value(sq_magnitude_variables[layer_number, i], -sq_clip, sq_clip),
                  sq_phase_variables[layer_number, i]) | q[i]

        BSgate(bs_variables[layer_number, 0, 1, 0], bs_variables[layer_number, 0, 1, 1]) \
        | (q[0], q[1])

        for i in range(mode_number):
            Rgate(phase_variables[layer_number, i, 1]) | q[i]

        for i in range(mode_number):
            Dgate(tf.clip_by_value(disp_magnitude_variables[layer_number, i], -disp_clip,
                                   disp_clip), disp_phase_variables[layer_number, i]) | q[i]

        for i in range(mode_number):
            Kgate(tf.clip_by_value(kerr_variables[layer_number, i], -kerr_clip, kerr_clip)) | q[i]


# ===================================================================================
#                                   Defining QNN
# ===================================================================================

# construct the two-mode Strawberry Fields engine
eng, q = sf.Engine(mode_number)

# construct the circuit
with eng:
    input_qnn_layer()

    for i in range(depth):
        qnn_layer(i)

# run the engine (in batch mode)
state = eng.run("tf", cutoff_dim=cutoff, eval=False, batch_size=batch_size)
# extract the state
ket = state.ket()

# ===================================================================================
#                                   Setting up cost function
# ===================================================================================

# Classifications for whole batch: rows act as data points in the batch and columns
# are the one-hot classifications
classification = tf.placeholder(shape=[batch_size, 2], dtype=tf.int32)

func_to_minimise = 0

# Building up the function to minimize by looping through batch
for i in range(batch_size):
    # Probabilities corresponding to a single photon in either mode
    prob = tf.abs(ket[i, classification[i, 0], classification[i, 1]]) ** 2
    # These probabilities should be optimised to 1
    func_to_minimise += (1.0 / batch_size) * (prob - 1) ** 2

# Defining the cost function
cost_func = func_to_minimise
tf.summary.scalar('Cost', cost_func)

# ===================================================================================
#                                   Training
# ===================================================================================

# We choose the Adam optimizer
optimiser = tf.train.AdamOptimizer()
training = optimiser.minimize(cost_func)

# Saver/Loader for outputting model
saver = tf.train.Saver(parameters)

session = tf.Session()
session.run(tf.global_variables_initializer())

# Load previous model if non-zero ckpt_val is specified
if ckpt_val != 0:
    saver.restore(session, checkpoint_string + 'sess.ckpt-' + str(ckpt_val))

# TensorBoard writer
writer = tf.summary.FileWriter(board_string)
merge = tf.summary.merge_all()

counter = ckpt_val

# Tracks optimum value found (set high so first iteration encodes value)
opt_val = 1e20
# Batch number in which optimum value occurs
opt_position = 0
# Flag to detect if new optimum occurred in last batch
new_opt = False

while counter <= reps:

    # Shuffles data to create new epoch
    np.random.shuffle(data_combined)

    # Splits data into batches
    split_data = np.split(data_combined, data_points // batch_size)

    for batch in split_data:

        if counter > reps:
            break

        # Input data (provided as principal components)
        data_points_principal_components = batch[:, 1:input_neurons + 1]
        # Data classes
        classes = batch[:, -1]

        # Encoding classes into one-hot form
        one_hot_input = np.zeros((batch_size, 2))

        for i in range(batch_size):
            if int(classes[i]) == 0:
                # Encoded such that genuine transactions should be outputted as a photon in the first mode
                one_hot_input[i] = [1, 0]
            else:
                one_hot_input[i] = [0, 1]

        # Output to TensorBoard
        if counter % tb_reps == 0:
            [summary, training_run, func_to_minimise_run] = session.run([merge, training, func_to_minimise],
                                                                        feed_dict={
                                                                            input_classical_layer:
                                                                                data_points_principal_components,
                                                                            classification: one_hot_input})
            writer.add_summary(summary, counter)

        else:
            # Standard run of training
            [training_run, func_to_minimise_run] = session.run([training, func_to_minimise], feed_dict={
                input_classical_layer: data_points_principal_components, classification: one_hot_input})

        # Ensures cost function is well behaved
        if np.isnan(func_to_minimise_run):
            compute_grads = session.run(optimiser.compute_gradients(cost_func),
                                        feed_dict={input_classical_layer: data_points_principal_components,
                                                   classification: one_hot_input})
            if not os.path.exists(checkpoint_string):
                os.makedirs(checkpoint_string)
            # If cost function becomes NaN, output value of gradients for investigation
            np.save(checkpoint_string + 'NaN.npy', compute_grads)
            print('NaN encountered - exiting at step ' + str(counter))
            raise SystemExit

        # Test to see if new optimum found in current batch
        if func_to_minimise_run < opt_val:
            opt_val = func_to_minimise_run
            opt_position = counter
            new_opt = True

        # Save model every fixed number of batches, provided a new optimum value has occurred
        if (counter % savr_reps == 0) and (counter != 0) and new_opt and (not np.isnan(func_to_minimise_run)):
            if not os.path.exists(checkpoint_string):
                os.makedirs(checkpoint_string)
            saver.save(session, checkpoint_string + 'sess.ckpt', global_step=counter)
            # Saves position of optimum and corresponding value of cost function
            np.savetxt(checkpoint_string + 'optimum.txt', [opt_position, opt_val])
            # Reset flag so the next save again requires a new optimum
            new_opt = False

        counter += 1
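The per-sample loop above builds one-hot targets and accumulates the batch cost one element at a time. The same logic can be vectorised; a minimal NumPy sketch of the two steps (the function names here are illustrative, not part of the original script):

```python
import numpy as np

def one_hot_targets(classes):
    """Map class labels (0 = genuine, 1 = fraudulent) to one-hot rows."""
    classes = np.asarray(classes, dtype=int)
    one_hot = np.zeros((len(classes), 2))
    # Place a 1 at the column given by each sample's class label
    one_hot[np.arange(len(classes)), classes] = 1
    return one_hot

def batch_cost(probs):
    """Mean squared distance of the target-state probabilities from 1,
    matching the (1/batch_size) * sum((prob - 1)^2) accumulation above."""
    probs = np.asarray(probs, dtype=float)
    return np.mean((probs - 1.0) ** 2)

targets = one_hot_targets([0, 1, 1, 0])
```

This reproduces the loop's encoding (genuine transactions map to `[1, 0]`, fraudulent to `[0, 1]`) without per-element Python iteration.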


================================================
FILE: fraud_detection/plot_confusion_matrix.py
================================================
# Code adapted from scikit-learn: https://scikit-learn.org/stable/_downloads/plot_confusion_matrix.py
"""
New BSD License

Copyright (c) 2007-2019 The scikit-learn developers.
All rights reserved.


Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

  a. Redistributions of source code must retain the above copyright notice,
     this list of conditions and the following disclaimer.
  b. Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in the
     documentation and/or other materials provided with the distribution.
  c. Neither the name of the Scikit-learn Developers  nor the names of
     its contributors may be used to endorse or promote products
     derived from this software without specific prior written
     permission.


THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.

================
Confusion matrix
================

Utility for plotting a confusion matrix. The diagonal elements
represent the number of points for which the predicted label is
equal to the true label, while off-diagonal elements are those
that are mislabeled by the classifier. The higher the diagonal
values of the confusion matrix the better, indicating many
correct predictions.
"""
import numpy as np
import matplotlib.pyplot as plt

def plot_confusion_matrix(cm, classes, title=None, cmap=plt.cm.Blues):
    """Plots the confusion matrix ``cm``.

    Rows correspond to true labels and columns to predicted labels.
    """

    fig, ax = plt.subplots()
    im = ax.imshow(cm, interpolation='nearest', cmap=cmap)

    if title is not None:
        ax.set_title(title)

    # We want to show all ticks...
    ax.set(xticks=np.arange(cm.shape[1]),
           yticks=np.arange(cm.shape[0]),
           # ... and label them with the respective list entries
           xticklabels=classes, yticklabels=classes,
           ylabel='True label',
           xlabel='Predicted label')

    # Rotate the tick labels and set their alignment.
    plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
             rotation_mode="anchor")

    # Loop over data dimensions and create text annotations.
    fmt = '.2f'
    thresh = cm.max() / 2.
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            ax.text(j, i, format(cm[i, j], fmt),
                    ha="center", va="center",
                    color="white" if cm[i, j] > thresh else "black")
    fig.tight_layout()
    return ax
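For reference, the matrix passed to this helper can be built from label arrays alone. A small self-contained NumPy sketch (the `confusion_matrix` name here is illustrative; the repository builds its table differently, per threshold, in `testing.py`):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=2):
    """cm[i, j] = number of samples with true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

cm = confusion_matrix([0, 0, 1, 1], [0, 1, 1, 1])
```

The resulting 2x2 array is exactly the shape `plot_confusion_matrix` expects, with true labels along the rows.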


================================================
FILE: fraud_detection/roc.py
================================================
# Copyright 2018 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Script for creating the ROC curve and confusion-matrix plots"""
import numpy as np
import matplotlib.pyplot as plt
import plot_confusion_matrix

plt.switch_backend('agg')

# Label for simulation
simulation_label = 1

# Loading confusion table
confusion_table = np.load('./outputs/confusion/' + str(simulation_label) + '/confusion_table.npy')

# Defining array of thresholds from 0 to 1 to consider in the ROC curve
thresholds_points = 101
thresholds = np.linspace(0, 1, num=thresholds_points)

# false/true positive/negative rates
fp_rate = []
tp_rate = []
fn_rate = []
tn_rate = []

# Creating rates
for i in range(thresholds_points):
    fp_rate.append(confusion_table[i, 0, 1] / (confusion_table[i, 0, 1] + confusion_table[i, 0, 0]))
    tp_rate.append(confusion_table[i, 1, 1] / (confusion_table[i, 1, 1] + confusion_table[i, 1, 0]))

    fn_rate.append(confusion_table[i, 1, 0] / (confusion_table[i, 1, 1] + confusion_table[i, 1, 0]))
    tn_rate.append(confusion_table[i, 0, 0] / (confusion_table[i, 0, 0] + confusion_table[i, 0, 1]))

# Distance of each threshold from ideal point at (0, 1)
distance_from_ideal = (np.array(tn_rate) - 1)**2 + (np.array(fn_rate) - 0)**2

# Threshold closest to (0, 1)
closest_threshold = np.argmin(distance_from_ideal)

# Area under ROC curve
area_under_curve = np.trapz(np.sort(tn_rate), x=np.sort(fn_rate))

print("Area under ROC curve: " + str(area_under_curve))
print("Closest threshold to optimal ROC: " + str(thresholds[closest_threshold]))

# Plotting ROC curve
straight_line = np.linspace(0, 1, 1001)

plt.gcf().subplots_adjust(bottom=0.15)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.rc('font', serif='New Century Schoolbook')
plt.gcf().subplots_adjust(bottom=0.15)
plt.plot(fn_rate, tn_rate, color='#056eee', linewidth=2.2)
plt.plot(straight_line, straight_line, color='#070d0d', linewidth=1.5, dashes=[6, 2])
plt.plot(0.0, 1.0, 'ko')
plt.plot(fn_rate[closest_threshold], tn_rate[closest_threshold], 'k^')
plt.ylim(-0.05, 1.05)
plt.xlim(-0.05, 1.05)
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.xlabel('False negative rate', fontsize=15)
plt.ylabel('True negative rate', fontsize=15)
plt.tick_params(axis='both', which='major', labelsize=14, length=6, width=1)
plt.tick_params(axis='both', which='minor', labelsize=14, length=6, width=1)
plt.savefig('./roc.pdf')
plt.close()

# Selecting ideal confusion table and plotting
confusion_table_ideal = confusion_table[closest_threshold]

plt.figure()
plot_confusion_matrix.plot_confusion_matrix(confusion_table_ideal, classes=['Genuine', 'Fraudulent'], title='')

plt.savefig('./confusion.pdf')
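The area-under-curve step above is a trapezoidal integral over the rate arrays. A minimal NumPy sketch of that computation on illustrative data (the `roc_auc` name is hypothetical; it pairs the rates via `argsort` rather than sorting each array independently, which gives the same result here because both rates are monotonic in the threshold):

```python
import numpy as np

def roc_auc(x_rate, y_rate):
    """Trapezoidal area under a ROC curve given as x/y rate arrays."""
    x_rate = np.asarray(x_rate, dtype=float)
    y_rate = np.asarray(y_rate, dtype=float)
    # Sort by the x-axis rate, carrying the paired y values along
    order = np.argsort(x_rate)
    return np.trapz(y_rate[order], x=x_rate[order])

# A perfect classifier traces (0, 0) -> (0, 1) -> (1, 1): area 1.
auc = roc_auc([0.0, 0.0, 1.0], [0.0, 1.0, 1.0])
```

A random classifier (the dashed diagonal in the plot) would give an area of 0.5 under the same computation.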


================================================
FILE: fraud_detection/testing.py
================================================
# Copyright 2018 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Fraud detection testing script"""
import numpy as np
import os

import tensorflow as tf

import strawberryfields as sf
from strawberryfields.ops import Dgate, BSgate, Kgate, Sgate, Rgate

import sys
sys.path.append("..")
import version_check

# ===================================================================================
#                                   Hyperparameters
# ===================================================================================

# Two modes required: one for "genuine" transactions and one for "fraudulent"
mode_number = 2
# Number of photonic quantum layers
depth = 4

# Fock basis truncation
cutoff = 10

# Label for simulation
simulation_label = 1

# Random initialization of gate parameters
sdev_photon = 0.1
sdev = 1

# Variable clipping values
disp_clip = 5
sq_clip = 5
kerr_clip = 1

# If loading from checkpoint, previous batch number reached
ckpt_val = 30000

model_string = str(simulation_label)

# Target location of output
folder_locator = './outputs/'

# Locations of model saves and where confusion matrix will be saved
checkpoint_string = folder_locator + 'models/' + model_string + '/'
confusion_string = folder_locator + 'confusion/' + model_string + '/'

# ===================================================================================
#                                   Loading the testing data
# ===================================================================================

# Loading combined dataset with extra genuine datapoints unseen in training
data_combined = np.loadtxt('./creditcard_combined_2_big.csv', delimiter=',')

# Set to a size so that the data can be equally split up with no remainder
batch_size = 29

data_combined_points = len(data_combined)

# ===================================================================================
#                                   Setting up the classical NN input
# ===================================================================================

# Input neurons
input_neurons = 10
# Widths of hidden layers
nn_architecture = [10, 10]
# Output neurons of classical part
output_neurons = 14

# Defining classical network parameters
input_classical_layer = tf.placeholder(tf.float32, shape=[batch_size, input_neurons])

layer_matrix_1 = tf.Variable(tf.random_normal(shape=[input_neurons, nn_architecture[0]]))
offset_1 = tf.Variable(tf.random_normal(shape=[nn_architecture[0]]))

layer_matrix_2 = tf.Variable(tf.random_normal(shape=[nn_architecture[0], nn_architecture[1]]))
offset_2 = tf.Variable(tf.random_normal(shape=[nn_architecture[1]]))

layer_matrix_3 = tf.Variable(tf.random_normal(shape=[nn_architecture[1], output_neurons]))
offset_3 = tf.Variable(tf.random_normal(shape=[output_neurons]))

# Creating hidden layers and output
layer_1 = tf.nn.elu(tf.matmul(input_classical_layer, layer_matrix_1) + offset_1)
layer_2 = tf.nn.elu(tf.matmul(layer_1, layer_matrix_2) + offset_2)

output_layer = tf.nn.elu(tf.matmul(layer_2, layer_matrix_3) + offset_3)

# ===================================================================================
#                                   Defining QNN parameters
# ===================================================================================

# Number of beamsplitters in interferometer
bs_in_interferometer = int(1.0 * mode_number * (mode_number - 1) / 2)

with tf.name_scope('variables'):
    bs_variables = tf.Variable(tf.random_normal(shape=[depth, bs_in_interferometer, 2, 2]
                                                , stddev=sdev))
    phase_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number, 2], stddev=sdev))

    sq_magnitude_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number]
                                                          , stddev=sdev_photon))
    sq_phase_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number]
                                                      , stddev=sdev))
    disp_magnitude_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number]
                                                            , stddev=sdev_photon))
    disp_phase_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number]
                                                        , stddev=sdev))
    kerr_variables = tf.Variable(tf.random_normal(shape=[depth, mode_number], stddev=sdev_photon))

parameters = [layer_matrix_1, offset_1, layer_matrix_2, offset_2, layer_matrix_3, offset_3, bs_variables,
              phase_variables, sq_magnitude_variables, sq_phase_variables, disp_magnitude_variables,
              disp_phase_variables, kerr_variables]


# ===================================================================================
#                                   Constructing quantum layers
# ===================================================================================


# Defining input QNN layer, whose parameters are set by the outputs of the classical network
def input_qnn_layer():
    with tf.name_scope('inputlayer'):
        Sgate(tf.clip_by_value(output_layer[:, 0], -sq_clip, sq_clip), output_layer[:, 1]) | q[0]
        Sgate(tf.clip_by_value(output_layer[:, 2], -sq_clip, sq_clip), output_layer[:, 3]) | q[1]

        BSgate(output_layer[:, 4], output_layer[:, 5]) | (q[0], q[1])

        Rgate(output_layer[:, 6]) | q[0]
        Rgate(output_layer[:, 7]) | q[1]

        Dgate(tf.clip_by_value(output_layer[:, 8], -disp_clip, disp_clip), output_layer[:, 9]) \
        | q[0]
        Dgate(tf.clip_by_value(output_layer[:, 10], -disp_clip, disp_clip), output_layer[:, 11]) \
        | q[1]

        Kgate(tf.clip_by_value(output_layer[:, 12], -kerr_clip, kerr_clip)) | q[0]
        Kgate(tf.clip_by_value(output_layer[:, 13], -kerr_clip, kerr_clip)) | q[1]


# Defining standard QNN layers
def qnn_layer(layer_number):
    with tf.name_scope('layer_{}'.format(layer_number)):
        BSgate(bs_variables[layer_number, 0, 0, 0], bs_variables[layer_number, 0, 0, 1]) \
        | (q[0], q[1])

        for i in range(mode_number):
            Rgate(phase_variables[layer_number, i, 0]) | q[i]

        for i in range(mode_number):
            Sgate(tf.clip_by_value(sq_magnitude_variables[layer_number, i], -sq_clip, sq_clip),
                  sq_phase_variables[layer_number, i]) | q[i]

        BSgate(bs_variables[layer_number, 0, 1, 0], bs_variables[layer_number, 0, 1, 1]) \
        | (q[0], q[1])

        for i in range(mode_number):
            Rgate(phase_variables[layer_number, i, 1]) | q[i]

        for i in range(mode_number):
            Dgate(tf.clip_by_value(disp_magnitude_variables[layer_number, i], -disp_clip,
                                   disp_clip), disp_phase_variables[layer_number, i]) | q[i]

        for i in range(mode_number):
            Kgate(tf.clip_by_value(kerr_variables[layer_number, i], -kerr_clip, kerr_clip)) | q[i]


# ===================================================================================
#                                   Defining QNN
# ===================================================================================

# construct the two-mode Strawberry Fields engine
eng, q = sf.Engine(mode_number)

# construct the circuit
with eng:
    input_qnn_layer()

    for i in range(depth):
        qnn_layer(i)

# run the engine (in batch mode)
state = eng.run("tf", cutoff_dim=cutoff, eval=False, batch_size=batch_size)
# extract the state
ket = state.ket()

# ===================================================================================
#                                   Extracting probabilities
# ===================================================================================

# Classifications for whole batch: rows act as data points in the batch and columns
# are the one-hot classifications
classification = tf.placeholder(shape=[batch_size, 2], dtype=tf.int32)

prob = []

for i in range(batch_size):
    # Probabilities of a single photon exiting in the first or second mode
    prob.append([tf.abs(ket[i, 1, 0]) ** 2, tf.abs(ket[i, 0, 1]) ** 2])

# ===================================================================================
#                                   Testing performance
# ===================================================================================

# Defining array of thresholds from 0 to 1 to consider in the ROC curve
thresholds_points = 101
thresholds = np.linspace(0, 1, num=thresholds_points)

# Saver/Loader for outputting model
saver = tf.train.Saver(parameters)

session = tf.Session()
session.run(tf.global_variables_initializer())

saver.restore(session, checkpoint_string + 'sess.ckpt-' + str(ckpt_val))

# Split up data to process in batches
data_split = np.split(data_combined, data_combined_points // batch_size)

# Defining confusion table
confusion_table = np.zeros((thresholds_points, 2, 2))

for batch in data_split:
    # Input data (provided as principal components)
    data_points_principal_components = batch[:, 1:input_neurons + 1]
    # Data classes
    classes = batch[:, -1]

    # Probabilities outputted from circuit
    prob_run = session.run(prob, feed_dict={input_classical_layer: data_points_principal_components})

    for i in range(batch_size):
        # Calculate probabilities of photon coming out of either mode
        p = prob_run[i]
        # Normalize to these two events (i.e. ignore all other outputs)
        p = p / np.sum(p)

        # Predicted class is a list corresponding to threshold probabilities
        predicted_class = []

        for j in range(thresholds_points):
            # If probability of a photon exiting first mode is larger than threshold, attribute as genuine
            if p[0] > thresholds[j]:
                predicted_class.append(0)
            else:
                predicted_class.append(1)

        actual_class = classes[i]

        # Constructing confusion table
        for j in range(2):
            for k in range(2):
                for l in range(thresholds_points):
                    if actual_class == j and predicted_class[l] == k:
                        confusion_table[l, j, k] += 1

# Convert confusion-table counts to percentages of the full dataset
for i in range(thresholds_points):
    confusion_table[i] = confusion_table[i] / data_combined_points * 100

if not os.path.exists(confusion_string):
    os.makedirs(confusion_string)

# Save as numpy array
np.save(confusion_string + 'confusion_table.npy', confusion_table)
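The triple loop that fills the confusion table above can be collapsed with broadcasting: each sample updates all 101 threshold slices at once. A NumPy sketch of the per-sample accumulation (the `accumulate` name and inputs are illustrative):

```python
import numpy as np

thresholds = np.linspace(0, 1, num=101)

def accumulate(confusion_table, p_genuine, actual_class, thresholds):
    """Add one sample to the per-threshold confusion table.

    As in the loop above, the predicted class is 0 (genuine) when the
    probability of the photon exiting the first mode exceeds the threshold.
    """
    # Boolean -> int: 0 where p_genuine > threshold, 1 otherwise
    predicted = (p_genuine <= thresholds).astype(int)
    confusion_table[np.arange(len(thresholds)), actual_class, predicted] += 1
    return confusion_table

table = np.zeros((len(thresholds), 2, 2))
table = accumulate(table, p_genuine=0.7, actual_class=0, thresholds=thresholds)
```

Each call adds exactly one count per threshold slice, so after processing every sample the table sums to `thresholds_points * data_combined_points`.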


================================================
FILE: function_fitting/function_fitting.py
================================================
# Copyright 2018 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Function fitting script"""
import os
import time

import numpy as np

import matplotlib.pyplot as plt
from matplotlib import rcParams

import tensorflow as tf

import strawberryfields as sf
from strawberryfields.ops import *

import sys
sys.path.append("..")
import version_check

# ===================================================================================
#                                   Hyperparameters
# ===================================================================================


# Fock basis truncation
cutoff = 10
# domain [-xmax, xmax] to perform the function fitting over
xmax = 1
# Number of batches to use in the optimization
# Each batch corresponds to a different input-output relation
batch_size = 50
# Number of photonic quantum layers
depth = 6

# variable clipping values
disp_clip = 100
sq_clip = 50
kerr_clip = 50

# number of optimization steps
reps = 1000

# regularization
regularization = 0.0
reg_variance = 0.0


# ===================================================================================
#                                   Functions
# ===================================================================================
# This section contains various functions we may wish to fit using our quantum
# neural network.


def f1(x, eps=0.0):
    """The function f(x)=|x|+noise"""
    return np.abs(x) + eps * np.random.normal(size=x.shape)


def f2(x, eps=0.0):
    """The function f(x)=sin(pi*x)/(pi*x)+noise"""
    return np.sin(np.pi * x) / (np.pi * x) + eps * np.random.normal(size=x.shape)


def f3(x, eps=0.0):
    """The function f(x)=sin(pi*x)+noise"""
    return np.sin(np.pi * x) + eps * np.random.normal(size=x.shape)


def f4(x, eps=0.0):
    """The function f(x)=exp(x)+noise"""
    return np.exp(x) + eps * np.random.normal(size=x.shape)


def f5(x, eps=0.0):
    """The function f(x)=tanh(4x)+noise"""
    return np.tanh(4*x) + eps * np.random.normal(size=x.shape)


def f6(x, eps=0.0):
    """The function f(x)=x^3+noise"""
    return x**3 + eps * np.random.normal(size=x.shape)


# ===================================================================================
#                                   Training data
# ===================================================================================
# load the training data from the provided files

train_data = np.load('sine_train_data.npy')
test_data = np.load('sine_test_data.npy')
data_y = np.load('sine_outputs.npy')


# ===================================================================================
#                      Construct the quantum neural network
# ===================================================================================

# Random initialization of gate parameters
sdev = 0.05

with tf.name_scope('variables'):
    d_r = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    d_phi = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    r1 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    sq_r = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    sq_phi = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    r2 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    kappa1 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))


# construct the one-mode Strawberry Fields engine
eng, q = sf.Engine(1)


def layer(i):
    """This function generates the ith layer of the quantum neural network.

    Note: it must be executed within a Strawberry Fields engine context.

    Args:
        i (int): the layer number.
    """
    with tf.name_scope('layer_{}'.format(i)):
        # displacement gate
        Dgate(tf.clip_by_value(d_r[i], -disp_clip, disp_clip), d_phi[i]) | q[0]
        # rotation gate
        Rgate(r1[i]) | q[0]
        # squeeze gate
        Sgate(tf.clip_by_value(sq_r[i], -sq_clip, sq_clip), sq_phi[i]) | q[0]
        # rotation gate
        Rgate(r2[i]) | q[0]
        # Kerr gate
        Kgate(tf.clip_by_value(kappa1[i], -kerr_clip, kerr_clip)) | q[0]


# Use a TensorFlow placeholder to store the input data
input_data = tf.placeholder(tf.float32, shape=[batch_size])

# construct the circuit
with eng:
    # the input data is encoded as displacement in the phase space
    Dgate(input_data) | q[0]

    for k in range(depth):
        # apply layers to the required depth
        layer(k)

# run the engine
state = eng.run('tf', cutoff_dim=cutoff, eval=False, batch_size=batch_size)


# ===================================================================================
#                      Define the loss function
# ===================================================================================

# First, we calculate the x-quadrature expectation value and variance
ket = state.ket()
mean_x, var_x = state.quad_expectation(0)
errors_y = tf.sqrt(var_x)

# the loss function is defined as mean(|<x>[batch_num] - data[batch_num]|^2)
output_data = tf.placeholder(tf.float32, shape=[batch_size])
loss = tf.reduce_mean(tf.abs(mean_x - output_data) ** 2)
var = tf.reduce_mean(errors_y)

# when constructing the cost function, we ensure that the norm of the state
# remains close to 1, and that the variance in the error does not grow.
state_norm = tf.abs(tf.reduce_mean(state.trace()))
cost = loss + regularization * (tf.abs(state_norm - 1) ** 2) + reg_variance*var
tf.summary.scalar('cost', cost)


# ===================================================================================
#                      Perform the optimization
# ===================================================================================

# we choose the Adam optimizer
optimiser = tf.train.AdamOptimizer()
min_op = optimiser.minimize(cost)

session = tf.Session()
session.run(tf.global_variables_initializer())

print('Beginning optimization')

loss_vals = []
error_vals = []

# start time
start_time = time.time()

for i in range(reps+1):

    loss_, predictions, errors, mean_error, ket_norm, _ = session.run(
        [loss, mean_x, errors_y, var, state_norm, min_op],
        feed_dict={input_data: train_data, output_data: data_y})

    loss_vals.append(loss_)
    error_vals.append(mean_error)

    if i % 100 == 0:
        print('Step: {} Loss: {}'.format(i, loss_))

end_time = time.time()


# ===================================================================================
#                      Analyze the results
# ===================================================================================

test_predictions = session.run(mean_x, feed_dict={input_data: test_data})

np.save('sine_test_predictions', test_predictions)

print("Elapsed time is {} seconds".format(np.round(end_time - start_time)))

x = np.linspace(-xmax, xmax, 200)

# set plotting options
rcParams['font.family'] = 'serif'
rcParams['font.serif'] = ['Computer Modern Roman']

fig, ax = plt.subplots(1,1)

# plot the function to be fitted, in green
ax.plot(x, f3(x), color='#3f9b0b', zorder=1, linewidth=2)

# plot the training data, in red
ax.scatter(train_data, data_y, color='#fb2943', marker='o', zorder=2, s=75)

# plot the test predictions, in blue
ax.scatter(test_data, test_predictions, color='#0165fc', marker='x', zorder=3, s=75)

ax.set_xlabel('Input', fontsize=18)
ax.set_ylabel('Output', fontsize=18)
ax.tick_params(axis='both', which='minor', labelsize=16)

fig.savefig('result.pdf', format='pdf', bbox_inches='tight')
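The cost optimised above combines the fit error with penalties on the state norm and the output variance. The same arithmetic in plain NumPy, for reference (the `fitting_cost` name and the sample values are illustrative; both regularization weights default to 0.0 as in the hyperparameters above):

```python
import numpy as np

def fitting_cost(mean_x, targets, state_norm, variance,
                 regularization=0.0, reg_variance=0.0):
    """Mean squared error plus penalties that keep the state normalised
    and the output variance from growing."""
    loss = np.mean(np.abs(mean_x - targets) ** 2)
    return (loss
            + regularization * np.abs(state_norm - 1) ** 2
            + reg_variance * variance)

# Two predictions, one off by 0.5; perfectly normalised state
cost = fitting_cost(np.array([0.0, 1.0]), np.array([0.0, 0.5]),
                    state_norm=1.0, variance=0.2)
```

With both weights at zero, the cost reduces to the plain mean squared error, which is the regime the provided hyperparameters actually run in.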


================================================
FILE: requirements.txt
================================================
strawberryfields==0.10
tensorflow==1.3
matplotlib


================================================
FILE: tetrominos_learning/plot_images.py
================================================
# Copyright 2018 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" This script converts Tetris numpy images into a .png figure."""

import numpy as np
import matplotlib.pyplot as plt
import os

##################### set local directories ########
# Model name
model_string = 'tetris'

# Output folder
folder_locator = './outputs/'

# Locations of saved data and output figure
save_string = folder_locator + 'models/' + model_string + '/'


# Loading of images
images_out = np.load(save_string + 'images_out.npy')
images_out_big = np.load(save_string + 'images_out_big.npy')

num_labels = 7
plot_scale = 1

# Plotting of the final image.
fig_images, axs = plt.subplots(
    nrows=2, ncols=num_labels, figsize=(num_labels * plot_scale, 2 * plot_scale)
)

all_images = [images_out, images_out_big]
for i in range(2):
    for label in range(num_labels):
        ax = axs[i][label]
        ax.imshow(all_images[i][label], cmap='gray')
        ax.axis('off')
        ax.set_xticklabels([])
        ax.set_yticklabels([])
plt.tight_layout()
fig_images.savefig(save_string + 'fig_images.png')


================================================
FILE: tetrominos_learning/tetrominos_learning.py
================================================
# Copyright 2018 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" This script trains a quantum network for encoding Tetris images in the quantum state of two bosonic modes."""

import strawberryfields as sf
from strawberryfields.ops import Dgate, BSgate, Kgate, Sgate, Rgate
import tensorflow as tf
import numpy as np
import time
import os

import sys
sys.path.append("..")
import version_check

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
os.environ['OMP_NUM_THREADS'] = '1'
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

# =============================================
#   Settings and hyperparameters
# =============================================

# Model name
model_string = 'tetris'

# Output folder
folder_locator = './outputs/'

# Locations of TensorBoard and model saving outputs
board_string = folder_locator + 'tensorboard/' + model_string + '/'
save_string = folder_locator + 'models/' + model_string + '/'


# Record initial time
init_time = time.time()

# Set seed for random generator
tf.set_random_seed(1)

# Depth of the quantum network (suggested: 25)
depth = 25

# Fock basis truncation
cutoff = 11  # suggested value: 11

# Image size (im_dim X im_dim)
im_dim = 4

# Number of optimization steps (suggested: 50000)
reps = 20000

# Number of steps between data logging/saving.
partial_reps = 1000

# Number of images to encode (suggested: 7)
num_images = 7

# Clipping of training parameters
disp_clip = 5
sq_clip = 5
kerr_clip = 1

# Weight for quantum state normalization
norm_weight = 100.0

# ====================================================
#   Manual definition of target images
# ====================================================


# Target images: L,O,T,I,S,J,Z tetrominos.
L, O, T, I, S, J, Z = np.zeros((num_images, im_dim, im_dim))

L[0, 0] = L[1, 0] = L[2, 0] = L[2, 1] = 1 / np.sqrt(4)
O[0, 0] = O[1, 1] = O[0, 1] = O[1, 0] = 1 / np.sqrt(4)
T[0, 0] = T[0, 1] = T[0, 2] = T[1, 1] = 1 / np.sqrt(4)
I[0, 0] = I[1, 0] = I[2, 0] = I[3, 0] = 1 / np.sqrt(4)
S[1, 0] = S[1, 1] = S[0, 1] = S[0, 2] = 1 / np.sqrt(4)
J[0, 1] = J[1, 1] = J[2, 1] = J[2, 0] = 1 / np.sqrt(4)
Z[0, 0] = Z[0, 1] = Z[1, 1] = Z[1, 2] = 1 / np.sqrt(4)

train_images = [L, O, T, I, S, J, Z]

# ====================================================
#   Initialization of TensorFlow variables
# ====================================================

print('Initializing TensorFlow graph...')

# Initial standard deviation of parameters
sdev = 0.1

# Coherent state amplitude
alpha = 1.4

# Combinations of two-mode amplitudes corresponding to different final images
disps_alpha = tf.constant(
    [alpha, -alpha, alpha, -alpha, 1.0j * alpha, -1.0j * alpha, 1.0j * alpha]
)
disps_beta = tf.constant(
    [alpha, alpha, -alpha, -alpha, 1.0j * alpha, 1.0j * alpha, -1.0j * alpha]
)

# Trainable weights of the quantum network.
with tf.name_scope('variables'):
    r1 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    r2 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))

    theta1 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    phi1 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))

    theta2 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    phi2 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))

    sqr1 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    sqphi1 = tf.Variable(tf.random_normal(shape=[depth]))

    sqr2 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    sqphi2 = tf.Variable(tf.random_normal(shape=[depth]))

    dr1 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    dphi1 = tf.Variable(tf.random_normal(shape=[depth]))

    dr2 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    dphi2 = tf.Variable(tf.random_normal(shape=[depth]))

    kappa1 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))
    kappa2 = tf.Variable(tf.random_normal(shape=[depth], stddev=sdev))

# List of all the weights
parameters = [
    r1,
    r2,
    theta1,
    phi1,
    theta2,
    phi2,
    sqr1,
    sqphi1,
    sqr2,
    sqphi2,
    dr1,
    dphi1,
    dr2,
    dphi2,
    kappa1,
    kappa2,
]

# ====================================================
#   Definition of the quantum neural network
# ====================================================

# Single quantum variational layer


def layer(l):
    with tf.name_scope('layer_{}'.format(l)):
        BSgate(theta1[l], phi1[l]) | (q[0], q[1])
        Rgate(r1[l]) | q[0]
        Sgate(tf.clip_by_value(sqr1[l], -sq_clip, sq_clip), sqphi1[l]) | q[0]
        Sgate(tf.clip_by_value(sqr2[l], -sq_clip, sq_clip), sqphi2[l]) | q[1]
        BSgate(theta2[l], phi2[l]) | (q[0], q[1])
        Rgate(r2[l]) | q[0]
        Dgate(tf.clip_by_value(dr1[l], -disp_clip, disp_clip), dphi1[l]) | q[0]
        Dgate(tf.clip_by_value(dr2[l], -disp_clip, disp_clip), dphi2[l]) | q[1]
        Kgate(tf.clip_by_value(kappa1[l], -kerr_clip, kerr_clip)) | q[0]
        Kgate(tf.clip_by_value(kappa2[l], -kerr_clip, kerr_clip)) | q[1]


# StrawberryFields quantum simulator of 2 optical modes
engine, q = sf.Engine(num_subsystems=2)

# Definition of the CV quantum network
with engine:
    # State preparation
    Dgate(disps_alpha) | q[0]
    Dgate(disps_beta) | q[1]
    # Sequence of variational layers
    for i in range(depth):
        layer(i)

# Symbolic evaluation of the output state
state = engine.run('tf', cutoff_dim=cutoff, eval=False, batch_size=num_images)
ket = state.ket()
trace = tf.abs(state.trace())

# Projection on the subspace of up to im_dim-1 photons for each mode.
ket_reduced = ket[:, :im_dim, :im_dim]
norm = tf.sqrt(tf.abs(tf.reduce_sum(tf.conj(ket_reduced) * ket_reduced, axis=[1, 2])))
# Since norm has shape [num_images] while ket_reduced has shape [num_images,im_dim,im_dim]
# we need to add 2 extra dimensions to the norm tensor.
norm_extended = tf.reshape(norm, [num_images, 1, 1])
ket_processed = ket_reduced / tf.cast(norm_extended, dtype=tf.complex64)

# ====================================================
#   Definition of the loss function
# ====================================================

# Target images
data_states = tf.placeholder(tf.complex64, shape=[num_images, im_dim, im_dim])

# Overlaps with target images
overlaps = tf.abs(tf.reduce_sum(tf.conj(ket_processed) * data_states, axis=[1, 2])) ** 2

# Overlap cost function
overlap_cost = tf.reduce_mean((overlaps - 1) ** 2)

# State norm cost function
norm_cost = tf.reduce_sum((trace - 1) ** 2)

cost = overlap_cost + norm_weight * norm_cost

# ====================================================
#   TensorBoard logging of cost functions and images
# ====================================================

tf.summary.scalar('Cost', cost)
tf.summary.scalar('Norm cost', norm_cost)
tf.summary.scalar('Overlap cost', overlap_cost)

# Output images with and without subspace projection.
images_out = tf.abs(ket_processed) ** 2
images_out_big = tf.abs(ket) ** 2

tf.summary.image(
    'image_out', tf.expand_dims(images_out, axis=3), max_outputs=num_images
)
tf.summary.image(
    'image_out_big', tf.expand_dims(images_out_big, axis=3), max_outputs=num_images
)

# TensorBoard writer and summary
writer = tf.summary.FileWriter(board_string)
merge = tf.summary.merge_all()


# ====================================================
#   Training
# ====================================================

# Optimization algorithm (Adam optimizer)
optim = tf.train.AdamOptimizer()
training = optim.minimize(cost)

print('Graph building time: {:.3f}'.format(time.time() - init_time))

# TensorFlow session
with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    start_time = time.time()

    for i in range(reps):
        rep_time = time.time()
        # make an optimization step
        _training = session.run(training, feed_dict={data_states: train_images})

        if (i + 1) % partial_reps == 0:
            # evaluate tensors for saving and logging
            [summary, params_numpy, _images_out, _images_out_big] = session.run(
                [merge, tf.squeeze(parameters), images_out, images_out_big],
                feed_dict={data_states: train_images},
            )
            # save tensorboard data
            writer.add_summary(summary, i + 1)

            # save trained weights
            os.makedirs(save_string, exist_ok=True)
            np.save(save_string + 'trained_params.npy', params_numpy)

            # save output images as numpy arrays
            np.save(save_string + 'images_out.npy', _images_out)
            np.save(save_string + 'images_out_big.npy', _images_out_big)

            print(
                'Iteration: {:d} Single iteration time {:.3f}'.format(
                    i + 1, time.time() - rep_time
                )
            )

print('Script completed. Total time: {:.3f}'.format(time.time() - init_time))
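The target tetromino images defined in this script are unit-norm states by construction (four pixels of amplitude 1/2 each), which is what makes the overlap cost `(overlaps - 1)**2` reach zero exactly when the projected ket matches a target. A minimal numpy sketch, reproducing those definitions, that checks the normalization and the perfect-overlap limit:

```python
import numpy as np

im_dim, num_images = 4, 7

# Same target definitions as in tetrominos_learning.py
L, O, T, I, S, J, Z = np.zeros((num_images, im_dim, im_dim))
L[0, 0] = L[1, 0] = L[2, 0] = L[2, 1] = 1 / np.sqrt(4)
O[0, 0] = O[1, 1] = O[0, 1] = O[1, 0] = 1 / np.sqrt(4)
T[0, 0] = T[0, 1] = T[0, 2] = T[1, 1] = 1 / np.sqrt(4)
I[0, 0] = I[1, 0] = I[2, 0] = I[3, 0] = 1 / np.sqrt(4)
S[1, 0] = S[1, 1] = S[0, 1] = S[0, 2] = 1 / np.sqrt(4)
J[0, 1] = J[1, 1] = J[2, 1] = J[2, 0] = 1 / np.sqrt(4)
Z[0, 0] = Z[0, 1] = Z[1, 1] = Z[1, 2] = 1 / np.sqrt(4)

train_images = np.stack([L, O, T, I, S, J, Z])

# Each image is a unit-norm state: four entries of 1/2, sum of squares = 1
norms = np.sqrt((train_images ** 2).sum(axis=(1, 2)))
print(np.allclose(norms, 1.0))      # True

# In the ideal limit ket_processed == target, |<target|ket>|^2 = 1 for every
# image, so the overlap cost mean((overlaps - 1)**2) vanishes.
overlaps = np.abs((train_images * train_images).sum(axis=(1, 2))) ** 2
print(np.allclose(overlaps, 1.0))   # True
```

The `norm_extended = tf.reshape(norm, [num_images, 1, 1])` step in the script relies on the same numpy-style broadcasting: a `[7, 1, 1]` tensor divides a `[7, 4, 4]` tensor image-by-image.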


================================================
FILE: version_check.py
================================================
"""Script for checking the correct versions of Python, StrawberryFields and TensorFlow are being
used."""
import sys

import strawberryfields as sf
import tensorflow as tf

python_version = sys.version_info
sf_version = sf.__version__
tf_version = tf.__version__.split(".")

if python_version < (3, 5) or python_version >= (3, 7):
    raise SystemError("Your version of python is {}.{}. You must have Python 3.5 or 3.6 installed "
                      "to run this script.".format(python_version.major, python_version.minor))

if sf_version != "0.10.0":
    raise ImportError("An incompatible version of StrawberryFields is installed. You must have "
                      "StrawberryFields version 0.10 to run this script. To install the correct "
                      "version, run:\n >>> pip install strawberryfields==0.10")

if not(tf_version[0] == "1" and tf_version[1] == "3"):
    raise ImportError("An incompatible version of TensorFlow is installed. You must have "
                      "TensorFlow version 1.3 to run this script. To install the correct "
                      "version, run:\n >>> pip install tensorflow==1.3")