Repository: inoryy/tensorflow2-deep-reinforcement-learning
Branch: master
Commit: 964fa7d1da8f
Files: 5
Total size: 65.2 KB

Directory structure:
tensorflow2-deep-reinforcement-learning/

├── LICENSE
├── README.md
├── a2c.py
├── actor-critic-agent-with-tensorflow2.ipynb
└── requirements.txt

================================================
FILE CONTENTS
================================================

================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2021 Roman Ring

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# Deep Reinforcement Learning with TensorFlow 2.1

Source code accompanying the blog post
[Deep Reinforcement Learning with TensorFlow 2.1](http://inoryy.com/post/tensorflow2-deep-reinforcement-learning/).

In the blog post, I showcase the `TensorFlow 2.1` features through the lens of deep reinforcement learning
by implementing an advantage actor-critic (A2C) agent that solves the classic `CartPole-v0` environment.
While the goal is to showcase `TensorFlow 2.1`, I also give a brief overview of the DRL methods involved.
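
At the core of the agent sits the n-step bootstrapped return and advantage computation (`A2CAgent._returns_advantages` in `a2c.py`). Stripped of NumPy, the logic boils down to the following sketch (the standalone `returns_advantages` helper is illustrative, not part of the repository):

```python
# Plain-Python sketch of A2CAgent._returns_advantages: discounted n-step
# returns bootstrapped from the critic's value of the state after the batch,
# with advantages defined as returns minus the value baseline.
def returns_advantages(rewards, dones, values, next_value, gamma=0.99):
    returns = [0.0] * len(rewards) + [next_value]
    for t in reversed(range(len(rewards))):
        # An episode boundary (done == 1) zeroes out the bootstrapped future term.
        returns[t] = rewards[t] + gamma * returns[t + 1] * (1 - dones[t])
    returns = returns[:-1]
    advantages = [ret - val for ret, val in zip(returns, values)]
    return returns, advantages

# Three steps of reward 1.0, no episode end, critic baseline 0.5 everywhere:
rets, advs = returns_advantages([1.0, 1.0, 1.0], [0, 0, 0],
                                [0.5, 0.5, 0.5], next_value=0.5)
# rets[2] = 1.0 + 0.99 * 0.5 = 1.495; earlier steps compound the discount.
```

The actual implementation vectorizes this with NumPy and appends the bootstrap value via `np.append`.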

You can view the code either as a [notebook](actor-critic-agent-with-tensorflow2.ipynb),
a self-contained [script](a2c.py), or execute it online with
[Google Colab](https://colab.research.google.com/drive/1XoHmGiwo2eUN-gzSVLRvE10fIf_ycO1j).

To run it locally, install the dependencies with `pip install -r requirements.txt`, and then execute `python a2c.py`.  

To control various hyperparameters, specify them as [flags](https://github.com/inoryy/tensorflow2-deep-reinforcement-learning/blob/master/a2c.py#L12-L17), e.g. `python a2c.py --batch_size=256`.
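
One implementation detail from `a2c.py` worth unpacking: the exploration bonus in `_logits_loss` computes policy entropy as the cross-entropy of the action distribution with itself, using the identity H(p) = -Σᵢ pᵢ log pᵢ. A standalone illustration (the `entropy` helper below is mine, not from the repository):

```python
import math

# Entropy as self-cross-entropy: H(p) = -sum_i p_i * log(p_i), which is what
# kls.categorical_crossentropy(probs, probs) evaluates in _logits_loss.
def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

uniform = [0.5, 0.5]    # maximally uncertain policy over 2 actions
peaked = [0.99, 0.01]   # nearly deterministic policy
assert entropy(uniform) > entropy(peaked)  # the bonus favors exploration
```

Because the returned loss subtracts `entropy_c * entropy_loss`, minimizing it pushes entropy up, discouraging the policy from collapsing onto a single action too early.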


================================================
FILE: a2c.py
================================================
import gym
import logging
import argparse
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import tensorflow.keras.layers as kl
import tensorflow.keras.losses as kls
import tensorflow.keras.optimizers as ko


parser = argparse.ArgumentParser()
parser.add_argument('-b', '--batch_size', type=int, default=64)
parser.add_argument('-n', '--num_updates', type=int, default=250)
parser.add_argument('-lr', '--learning_rate', type=float, default=7e-3)
parser.add_argument('-r', '--render_test', action='store_true', default=False)
parser.add_argument('-p', '--plot_results', action='store_true', default=False)


class ProbabilityDistribution(tf.keras.Model):
  def call(self, logits, **kwargs):
    # Sample a random categorical action from the given logits.
    return tf.squeeze(tf.random.categorical(logits, 1), axis=-1)


class Model(tf.keras.Model):
  def __init__(self, num_actions):
    super().__init__('mlp_policy')
    # Note: no tf.get_variable(), just simple Keras API!
    self.hidden1 = kl.Dense(128, activation='relu')
    self.hidden2 = kl.Dense(128, activation='relu')
    self.value = kl.Dense(1, name='value')
    # Logits are unnormalized log probabilities.
    self.logits = kl.Dense(num_actions, name='policy_logits')
    self.dist = ProbabilityDistribution()

  def call(self, inputs, **kwargs):
    # Inputs is a numpy array, convert to a tensor.
    x = tf.convert_to_tensor(inputs)
    # Separate hidden layers from the same input tensor.
    hidden_logs = self.hidden1(x)
    hidden_vals = self.hidden2(x)
    return self.logits(hidden_logs), self.value(hidden_vals)

  def action_value(self, obs):
    # Executes `call()` under the hood.
    logits, value = self.predict_on_batch(obs)
    action = self.dist.predict_on_batch(logits)
    # Another way to sample actions:
    #   action = tf.random.categorical(logits, 1)
    # Will become clearer later why we don't use it.
    return np.squeeze(action, axis=-1), np.squeeze(value, axis=-1)


class A2CAgent:
  def __init__(self, model, lr=7e-3, gamma=0.99, value_c=0.5, entropy_c=1e-4):
    # `gamma` is the discount factor; coefficients are used for the loss terms.
    self.gamma = gamma
    self.value_c = value_c
    self.entropy_c = entropy_c

    self.model = model
    self.model.compile(
      optimizer=ko.RMSprop(learning_rate=lr),
      # Define separate losses for policy logits and value estimate.
      loss=[self._logits_loss, self._value_loss])

  def train(self, env, batch_sz=64, updates=250):
    # Storage helpers for a single batch of data.
    actions = np.empty((batch_sz,), dtype=np.int32)
    rewards, dones, values = np.empty((3, batch_sz))
    observations = np.empty((batch_sz,) + env.observation_space.shape)
    # Training loop: collect samples, send to optimizer, repeat updates times.
    ep_rewards = [0.0]
    next_obs = env.reset()
    for update in range(updates):
      for step in range(batch_sz):
        observations[step] = next_obs.copy()
        actions[step], values[step] = self.model.action_value(next_obs[None, :])
        next_obs, rewards[step], dones[step], _ = env.step(actions[step])

        ep_rewards[-1] += rewards[step]
        if dones[step]:
          ep_rewards.append(0.0)
          next_obs = env.reset()
          logging.info("Episode: %03d, Reward: %03d" % (len(ep_rewards) - 1, ep_rewards[-2]))

      _, next_value = self.model.action_value(next_obs[None, :])
      returns, advs = self._returns_advantages(rewards, dones, values, next_value)
      # A trick to input actions and advantages through same API.
      acts_and_advs = np.concatenate([actions[:, None], advs[:, None]], axis=-1)
      # Performs a full training step on the collected batch.
      # Note: no need to mess around with gradients, Keras API handles it.
      losses = self.model.train_on_batch(observations, [acts_and_advs, returns])
      logging.debug("[%d/%d] Losses: %s" % (update + 1, updates, losses))

    return ep_rewards

  def test(self, env, render=False):
    obs, done, ep_reward = env.reset(), False, 0
    while not done:
      action, _ = self.model.action_value(obs[None, :])
      obs, reward, done, _ = env.step(action)
      ep_reward += reward
      if render:
        env.render()
    return ep_reward

  def _returns_advantages(self, rewards, dones, values, next_value):
    # `next_value` is the bootstrap value estimate of the future state (critic).
    returns = np.append(np.zeros_like(rewards), next_value, axis=-1)
    # Returns are calculated as discounted sum of future rewards.
    for t in reversed(range(rewards.shape[0])):
      returns[t] = rewards[t] + self.gamma * returns[t + 1] * (1 - dones[t])
    returns = returns[:-1]
    # Advantages are equal to returns - baseline (value estimates in our case).
    advantages = returns - values
    return returns, advantages

  def _value_loss(self, returns, value):
    # Value loss is typically MSE between value estimates and returns.
    return self.value_c * kls.mean_squared_error(returns, value)

  def _logits_loss(self, actions_and_advantages, logits):
    # A trick to input actions and advantages through the same API.
    actions, advantages = tf.split(actions_and_advantages, 2, axis=-1)
    # Sparse categorical CE loss obj that supports sample_weight arg on `call()`.
    # `from_logits` means the inputs are raw logits; the loss applies softmax internally.
    weighted_sparse_ce = kls.SparseCategoricalCrossentropy(from_logits=True)
    # Policy loss is defined by policy gradients, weighted by advantages.
    # Note: we only calculate the loss on the actions we've actually taken.
    actions = tf.cast(actions, tf.int32)
    policy_loss = weighted_sparse_ce(actions, logits, sample_weight=advantages)
    # Entropy loss can be calculated as cross-entropy over itself.
    probs = tf.nn.softmax(logits)
    entropy_loss = kls.categorical_crossentropy(probs, probs)
    # We want to minimize policy and maximize entropy losses.
    # Here signs are flipped because the optimizer minimizes.
    return policy_loss - self.entropy_c * entropy_loss


if __name__ == '__main__':
  args = parser.parse_args()
  logging.getLogger().setLevel(logging.INFO)

  env = gym.make('CartPole-v0')
  model = Model(num_actions=env.action_space.n)
  agent = A2CAgent(model, args.learning_rate)

  rewards_history = agent.train(env, args.batch_size, args.num_updates)
  print("Finished training. Testing...")
  print("Total Episode Reward: %d out of 200" % agent.test(env, args.render_test))

  if args.plot_results:
    plt.style.use('seaborn')
    plt.plot(np.arange(0, len(rewards_history), 10), rewards_history[::10])
    plt.xlabel('Episode')
    plt.ylabel('Total Reward')
    plt.show()



================================================
FILE: actor-critic-agent-with-tensorflow2.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Advantage Actor-Critic with TensorFlow 2.1"
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "## Setup"
   ],
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   }
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "text": [
      "TensorFlow Ver:  2.1.0\n"
     ],
     "output_type": "stream"
    }
   ],
   "source": [
    "import gym\n",
    "import logging\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "import matplotlib.pyplot as plt\n",
    "import tensorflow.keras.layers as kl\n",
    "import tensorflow.keras.losses as kls\n",
    "import tensorflow.keras.optimizers as ko\n",
    "\n",
    "%matplotlib inline\n",
    "\n",
    "print(\"TensorFlow Ver: \", tf.__version__)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "text": [
      "Eager Execution: True\n",
      "1 + 2 + 3 + 4 + 5 = tf.Tensor(15, shape=(), dtype=int32)\n"
     ],
     "output_type": "stream"
    }
   ],
   "source": [
    "# Eager by default!\n",
    "print(\"Eager Execution:\", tf.executing_eagerly())\n",
    "print(\"1 + 2 + 3 + 4 + 5 =\", tf.reduce_sum([1, 2, 3, 4, 5]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Policy & Value Model Class"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [],
   "source": [
    "class ProbabilityDistribution(tf.keras.Model):\n",
    "  def call(self, logits, **kwargs):\n",
    "    # Sample a random categorical action from the given logits.\n",
    "    return tf.squeeze(tf.random.categorical(logits, 1), axis=-1)\n",
    "\n",
    "\n",
    "class Model(tf.keras.Model):\n",
    "  def __init__(self, num_actions):\n",
    "    super().__init__('mlp_policy')\n",
    "    # Note: no tf.get_variable(), just simple Keras API!\n",
    "    self.hidden1 = kl.Dense(128, activation='relu')\n",
    "    self.hidden2 = kl.Dense(128, activation='relu')\n",
    "    self.value = kl.Dense(1, name='value')\n",
    "    # Logits are unnormalized log probabilities.\n",
    "    self.logits = kl.Dense(num_actions, name='policy_logits')\n",
    "    self.dist = ProbabilityDistribution()\n",
    "\n",
    "  def call(self, inputs, **kwargs):\n",
    "    # Inputs is a numpy array, convert to a tensor.\n",
    "    x = tf.convert_to_tensor(inputs)\n",
    "    # Separate hidden layers from the same input tensor.\n",
    "    hidden_logs = self.hidden1(x)\n",
    "    hidden_vals = self.hidden2(x)\n",
    "    return self.logits(hidden_logs), self.value(hidden_vals)\n",
    "\n",
    "  def action_value(self, obs):\n",
    "    # Executes `call()` under the hood.\n",
    "    logits, value = self.predict_on_batch(obs)\n",
    "    action = self.dist.predict_on_batch(logits)\n",
    "    # Another way to sample actions:\n",
    "    #   action = tf.random.categorical(logits, 1)\n",
    "    # Will become clearer later why we don't use it.\n",
    "    return np.squeeze(action, axis=-1), np.squeeze(value, axis=-1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "outputs": [
    {
     "data": {
      "text/plain": "(array(1), array([1.0734191e-05], dtype=float32))"
     },
     "metadata": {},
     "output_type": "execute_result",
     "execution_count": 16
    }
   ],
   "source": [
    "# Verify everything works by sampling a single action.\n",
    "env = gym.make('CartPole-v0')\n",
    "model = Model(num_actions=env.action_space.n)\n",
    "model.action_value(env.reset()[None, :])"
   ],
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n",
     "is_executing": false
    }
   }
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Advantage Actor-Critic Agent Class"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [],
   "source": [
    "class A2CAgent:\n",
    "  def __init__(self, model, lr=7e-3, gamma=0.99, value_c=0.5, entropy_c=1e-4):\n",
    "    # `gamma` is the discount factor; coefficients are used for the loss terms.\n",
    "    self.gamma = gamma\n",
    "    self.value_c = value_c\n",
    "    self.entropy_c = entropy_c\n",
    "\n",
    "    self.model = model\n",
    "    self.model.compile(\n",
    "      optimizer=ko.RMSprop(lr=lr),\n",
    "      # Define separate losses for policy logits and value estimate.\n",
    "      loss=[self._logits_loss, self._value_loss])\n",
    "\n",
    "  def train(self, env, batch_sz=64, updates=250):\n",
    "    # Storage helpers for a single batch of data.\n",
    "    actions = np.empty((batch_sz,), dtype=np.int32)\n",
    "    rewards, dones, values = np.empty((3, batch_sz))\n",
    "    observations = np.empty((batch_sz,) + env.observation_space.shape)\n",
    "    # Training loop: collect samples, send to optimizer, repeat updates times.\n",
    "    ep_rewards = [0.0]\n",
    "    next_obs = env.reset()\n",
    "    for update in range(updates):\n",
    "      for step in range(batch_sz):\n",
    "        observations[step] = next_obs.copy()\n",
    "        actions[step], values[step] = self.model.action_value(next_obs[None, :])\n",
    "        next_obs, rewards[step], dones[step], _ = env.step(actions[step])\n",
    "\n",
    "        ep_rewards[-1] += rewards[step]\n",
    "        if dones[step]:\n",
    "          ep_rewards.append(0.0)\n",
    "          next_obs = env.reset()\n",
    "          logging.info(\"Episode: %03d, Reward: %03d\" % (len(ep_rewards) - 1, ep_rewards[-2]))\n",
    "\n",
    "      _, next_value = self.model.action_value(next_obs[None, :])\n",
    "      returns, advs = self._returns_advantages(rewards, dones, values, next_value)\n",
    "      # A trick to input actions and advantages through same API.\n",
    "      acts_and_advs = np.concatenate([actions[:, None], advs[:, None]], axis=-1)\n",
    "      # Performs a full training step on the collected batch.\n",
    "      # Note: no need to mess around with gradients, Keras API handles it.\n",
    "      losses = self.model.train_on_batch(observations, [acts_and_advs, returns])\n",
    "      logging.debug(\"[%d/%d] Losses: %s\" % (update + 1, updates, losses))\n",
    "\n",
    "    return ep_rewards\n",
    "\n",
    "  def test(self, env, render=False):\n",
    "    obs, done, ep_reward = env.reset(), False, 0\n",
    "    while not done:\n",
    "      action, _ = self.model.action_value(obs[None, :])\n",
    "      obs, reward, done, _ = env.step(action)\n",
    "      ep_reward += reward\n",
    "      if render:\n",
    "        env.render()\n",
    "    return ep_reward\n",
    "\n",
    "  def _returns_advantages(self, rewards, dones, values, next_value):\n",
    "    # `next_value` is the bootstrap value estimate of the future state (critic).\n",
    "    returns = np.append(np.zeros_like(rewards), next_value, axis=-1)\n",
    "    # Returns are calculated as discounted sum of future rewards.\n",
    "    for t in reversed(range(rewards.shape[0])):\n",
    "      returns[t] = rewards[t] + self.gamma * returns[t + 1] * (1 - dones[t])\n",
    "    returns = returns[:-1]\n",
    "    # Advantages are equal to returns - baseline (value estimates in our case).\n",
    "    advantages = returns - values\n",
    "    return returns, advantages\n",
    "\n",
    "  def _value_loss(self, returns, value):\n",
    "    # Value loss is typically MSE between value estimates and returns.\n",
    "    return self.value_c * kls.mean_squared_error(returns, value)\n",
    "\n",
    "  def _logits_loss(self, actions_and_advantages, logits):\n",
    "    # A trick to input actions and advantages through the same API.\n",
    "    actions, advantages = tf.split(actions_and_advantages, 2, axis=-1)\n",
    "    # Sparse categorical CE loss obj that supports sample_weight arg on `call()`.\n",
    "    # `from_logits` argument ensures transformation into normalized probabilities.\n",
    "    weighted_sparse_ce = kls.SparseCategoricalCrossentropy(from_logits=True)\n",
    "    # Policy loss is defined by policy gradients, weighted by advantages.\n",
    "    # Note: we only calculate the loss on the actions we've actually taken.\n",
    "    actions = tf.cast(actions, tf.int32)\n",
    "    policy_loss = weighted_sparse_ce(actions, logits, sample_weight=advantages)\n",
    "    # Entropy loss can be calculated as cross-entropy over itself.\n",
    "    probs = tf.nn.softmax(logits)\n",
    "    entropy_loss = kls.categorical_crossentropy(probs, probs)\n",
    "    # We want to minimize policy and maximize entropy losses.\n",
    "    # Here signs are flipped because the optimizer minimizes.\n",
    "    return policy_loss - self.entropy_c * entropy_loss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "outputs": [
    {
     "name": "stdout",
     "text": [
      "Total Episode Reward: 18 out of 200\n"
     ],
     "output_type": "stream"
    }
   ],
   "source": [
    "# Verify everything works with random weights.\n",
    "agent = A2CAgent(model)\n",
    "rewards_sum = agent.test(env)\n",
    "print(\"Total Episode Reward: %d out of 200\" % agent.test(env))"
   ],
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n",
     "is_executing": false
    }
   }
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training A2C Agent & Results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "text": [
      "INFO:root:Episode: 001, Reward: 013\n",
      "INFO:root:Episode: 002, Reward: 024\n",
      "INFO:root:Episode: 003, Reward: 015\n",
      "INFO:root:Episode: 004, Reward: 012\n",
      "INFO:root:Episode: 005, Reward: 012\n",
      "INFO:root:Episode: 006, Reward: 023\n",
      "INFO:root:Episode: 007, Reward: 023\n",
      "INFO:root:Episode: 008, Reward: 021\n",
      "INFO:root:Episode: 009, Reward: 021\n",
      "INFO:root:Episode: 010, Reward: 022\n",
      "INFO:root:Episode: 011, Reward: 023\n",
      "INFO:root:Episode: 012, Reward: 036\n",
      "INFO:root:Episode: 013, Reward: 024\n",
      "INFO:root:Episode: 014, Reward: 023\n",
      "INFO:root:Episode: 015, Reward: 009\n",
      "INFO:root:Episode: 016, Reward: 031\n",
      "INFO:root:Episode: 017, Reward: 020\n",
      "INFO:root:Episode: 018, Reward: 043\n",
      "INFO:root:Episode: 019, Reward: 028\n",
      "INFO:root:Episode: 020, Reward: 089\n",
      "INFO:root:Episode: 021, Reward: 021\n",
      "INFO:root:Episode: 022, Reward: 016\n",
      "INFO:root:Episode: 023, Reward: 047\n",
      "INFO:root:Episode: 024, Reward: 033\n",
      "INFO:root:Episode: 025, Reward: 026\n",
      "INFO:root:Episode: 026, Reward: 038\n",
      "INFO:root:Episode: 027, Reward: 052\n",
      "INFO:root:Episode: 028, Reward: 026\n",
      "INFO:root:Episode: 029, Reward: 021\n",
      "INFO:root:Episode: 030, Reward: 030\n",
      "INFO:root:Episode: 031, Reward: 035\n",
      "INFO:root:Episode: 032, Reward: 083\n",
      "INFO:root:Episode: 033, Reward: 043\n",
      "INFO:root:Episode: 034, Reward: 028\n",
      "INFO:root:Episode: 035, Reward: 031\n",
      "INFO:root:Episode: 036, Reward: 070\n",
      "INFO:root:Episode: 037, Reward: 080\n",
      "INFO:root:Episode: 038, Reward: 077\n",
      "INFO:root:Episode: 039, Reward: 027\n",
      "INFO:root:Episode: 040, Reward: 155\n",
      "INFO:root:Episode: 041, Reward: 043\n",
      "INFO:root:Episode: 042, Reward: 042\n",
      "INFO:root:Episode: 043, Reward: 048\n",
      "INFO:root:Episode: 044, Reward: 026\n",
      "INFO:root:Episode: 045, Reward: 036\n",
      "INFO:root:Episode: 046, Reward: 035\n",
      "INFO:root:Episode: 047, Reward: 041\n",
      "INFO:root:Episode: 048, Reward: 089\n",
      "INFO:root:Episode: 049, Reward: 137\n",
      "INFO:root:Episode: 050, Reward: 186\n",
      "INFO:root:Episode: 051, Reward: 115\n",
      "INFO:root:Episode: 052, Reward: 096\n",
      "INFO:root:Episode: 053, Reward: 089\n",
      "INFO:root:Episode: 054, Reward: 067\n",
      "INFO:root:Episode: 055, Reward: 092\n",
      "INFO:root:Episode: 056, Reward: 104\n",
      "INFO:root:Episode: 057, Reward: 177\n",
      "INFO:root:Episode: 058, Reward: 183\n",
      "INFO:root:Episode: 059, Reward: 131\n",
      "INFO:root:Episode: 060, Reward: 088\n",
      "INFO:root:Episode: 061, Reward: 078\n",
      "INFO:root:Episode: 062, Reward: 089\n",
      "INFO:root:Episode: 063, Reward: 096\n",
      "INFO:root:Episode: 064, Reward: 112\n",
      "INFO:root:Episode: 065, Reward: 073\n",
      "INFO:root:Episode: 066, Reward: 114\n",
      "INFO:root:Episode: 067, Reward: 092\n",
      "INFO:root:Episode: 068, Reward: 117\n",
      "INFO:root:Episode: 069, Reward: 150\n",
      "INFO:root:Episode: 070, Reward: 200\n",
      "INFO:root:Episode: 071, Reward: 200\n",
      "INFO:root:Episode: 072, Reward: 015\n",
      "INFO:root:Episode: 073, Reward: 112\n",
      "INFO:root:Episode: 074, Reward: 199\n",
      "INFO:root:Episode: 075, Reward: 200\n",
      "INFO:root:Episode: 076, Reward: 181\n",
      "INFO:root:Episode: 077, Reward: 127\n",
      "INFO:root:Episode: 078, Reward: 083\n",
      "INFO:root:Episode: 079, Reward: 148\n",
      "INFO:root:Episode: 080, Reward: 200\n",
      "INFO:root:Episode: 081, Reward: 200\n",
      "INFO:root:Episode: 082, Reward: 145\n",
      "INFO:root:Episode: 083, Reward: 184\n",
      "INFO:root:Episode: 084, Reward: 200\n",
      "INFO:root:Episode: 085, Reward: 200\n",
      "INFO:root:Episode: 086, Reward: 190\n",
      "INFO:root:Episode: 087, Reward: 133\n",
      "INFO:root:Episode: 088, Reward: 200\n",
      "INFO:root:Episode: 089, Reward: 200\n",
      "INFO:root:Episode: 090, Reward: 200\n",
      "INFO:root:Episode: 091, Reward: 110\n",
      "INFO:root:Episode: 092, Reward: 154\n",
      "INFO:root:Episode: 093, Reward: 200\n",
      "INFO:root:Episode: 094, Reward: 200\n",
      "INFO:root:Episode: 095, Reward: 200\n",
      "INFO:root:Episode: 096, Reward: 200\n",
      "INFO:root:Episode: 097, Reward: 200\n",
      "INFO:root:Episode: 098, Reward: 136\n",
      "INFO:root:Episode: 099, Reward: 200\n",
      "INFO:root:Episode: 100, Reward: 200\n",
      "INFO:root:Episode: 101, Reward: 200\n",
      "INFO:root:Episode: 102, Reward: 200\n",
      "INFO:root:Episode: 103, Reward: 200\n",
      "INFO:root:Episode: 104, Reward: 181\n",
      "INFO:root:Episode: 105, Reward: 200\n",
      "INFO:root:Episode: 106, Reward: 153\n",
      "INFO:root:Episode: 107, Reward: 200\n",
      "INFO:root:Episode: 108, Reward: 200\n",
      "INFO:root:Episode: 109, Reward: 134\n",
      "INFO:root:Episode: 110, Reward: 169\n",
      "INFO:root:Episode: 111, Reward: 083\n",
      "INFO:root:Episode: 112, Reward: 200\n",
      "INFO:root:Episode: 113, Reward: 200\n",
      "INFO:root:Episode: 114, Reward: 200\n",
      "INFO:root:Episode: 115, Reward: 200\n",
      "INFO:root:Episode: 116, Reward: 200\n",
      "INFO:root:Episode: 117, Reward: 200\n",
      "INFO:root:Episode: 118, Reward: 072\n",
      "INFO:root:Episode: 119, Reward: 183\n",
      "INFO:root:Episode: 120, Reward: 191\n",
      "INFO:root:Episode: 121, Reward: 200\n",
      "INFO:root:Episode: 122, Reward: 184\n",
      "INFO:root:Episode: 123, Reward: 123\n",
      "INFO:root:Episode: 124, Reward: 102\n",
      "INFO:root:Episode: 125, Reward: 162\n",
      "INFO:root:Episode: 126, Reward: 176\n",
      "INFO:root:Episode: 127, Reward: 162\n",
      "INFO:root:Episode: 128, Reward: 200\n",
      "INFO:root:Episode: 129, Reward: 200\n",
      "INFO:root:Episode: 130, Reward: 200\n",
      "INFO:root:Episode: 131, Reward: 200\n",
      "INFO:root:Episode: 132, Reward: 200\n",
      "INFO:root:Episode: 133, Reward: 200\n",
      "INFO:root:Episode: 134, Reward: 200\n",
      "INFO:root:Episode: 135, Reward: 200\n",
      "INFO:root:Episode: 136, Reward: 200\n"
     ],
     "output_type": "stream"
    },
    {
     "name": "stdout",
     "text": [
      "Finished training! Testing...\n",
      "Total Episode Reward: 200 out of 200\n"
     ],
     "output_type": "stream"
    },
    {
     "data": {
      "text/plain": "<Figure size 576x396 with 1 Axes>",
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfUAAAFYCAYAAABKymUhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOzdeXxU5dk//s+ZfZLMTGaSyWQjEMIqSUDAShCoCCjFb1WsVKXY6g+fX32gWp9a7ePzc+fb1u3xaa1o+9UqlVZqocXy7SLWBVdADUsSVJaELfssycxk9uX8/kjOJEAmM5k558yZ5Hq/Xn1VhsyZ+wxJrrnu+7qvm2FZlgUhhBBCsp4s0wMghBBCCD8oqBNCCCFjBAV1QgghZIygoE4IIYSMERTUCSGEkDGCgjohhBAyRigyPYBUWK1uXq9nNOagp8fL6zWlbLzdLzD+7pnud2yj+x3bhrtfs1mX1HMpUwegUMgzPQRRjbf7BcbfPdP9jm10v2NbOvdLQZ0QQggZIyioE0IIIWMEBXVCCCFkjKCgTgghhIwRFNQJIYSQMYKCOiGEEDJGUFAnhBBCxggK6oQQQsgYIWhHuSeffBL19fUIh8P4/ve/j5qaGtx3332IRCIwm8146qmnoFKpsGvXLvzud7+DTCbDjTfeiBtuuEHIYRFCCCFjkmBBfd++fTh+/Dhef/119PT0YPXq1airq8PatWvxjW98A08++SR27NiB6667Dps3b8aOHTugVCpx3XXXYfny5cjPzxdqaIQQQsiYJFhQv+SSS1BbWwsAMBgM8Pl82L9/Px599FEAwLJly7BlyxZUVlaipqYGOl1/X9v58+fjwIEDuOKKK4QaGiFEAIdO2FBemIvCfK2or3vklANVUUCTRYuJX53uQUuHK+XnL6gthSlHyeOIhHW6040jpxwpP79mahEmFIj7fdVm7cPhZnvKzy8pyMHFU808jig5ggV1uVyOnJwcAMD27duxZMkSfPTRR1CpVAAAs9kMq9UKm80Gk8kUe15hYSGsVuuI1zYac3jvBZxss/yxYrzdLzD+7lnM++1x+/GrPzegwqLDL+9ZCrmMEeV1j53pwX//8RA0KjkeWr8ANVMKRXnddLAsi82//BAeXyjla7x/uB0vP3Alj6MS1v/eWo+WNmfKz9/5QQte+v9WiPaBkWVZPPzKpzjb1ZfyNbRqBbb978qUfxZS/fkV/JS2t99+Gzt27MDLL7+Mq666KvY4y7Ln/P/Qxxlm5DeB79N6zGYd7ye/Sdl4u19g/N2z2Pd7otUJlu3PyP763jEsri0V/DVZlsWLOxsAAKFwFA+/uBd3Xl+D6skFgr92Ojz+EDy+EKrK9PjmwspRP/8f+07j2NleHG22wqTXCDBC/ll7vCjQq3HLVTNG/dzjrb34+97T+PM7R3H9kioBRnehIycdONvVh9lVBVg6tzyla5jzNXDYU/tQMNzPb7JBXtCg/uGHH+LXv/41XnrpJeh0Omi1Wvj9fmg0GnR1daGoqAgWiwV79uyJPae7uxtz5swRcliEEJ7ZXf7Yf7/x4UlcOtMClVLYk7UaWxz46kwvaiYX4PorpuJnWz7Fs39uwIbrajBnqnQzdocrAACosOhQWzX6DyCt1j4cO9uLlnZXVgT1SDSKPm8IUyfkp3S/0yvy8cHhduw52I5vLpwEpQgntv3r87MAgGsWVaKyRC/46/FJsFUot9uNJ598Er/5zW9iRW8LFy7E7t27AQBvvfUWFi9ejNmzZ6OxsREulwsejwcHDhzA/PnzhRoWIUQAXFCvKMpDjzsQ+6UolGiUxfY9J8AAuOHyKsyfacEPb6iFTMZg885GfP5Vt6Cvnw7uvTLp1Ck9v6q0P8g0t6c+nS2mPm8ILAB9ijUAaqUcV146EX2+EPZ90cXv4IbR5fCiodmOqjJ91gV0QMCg/o9//AM9PT24++67ccstt+CWW27B
HXfcgTfeeANr165Fb28vrrvuOmg0Gtxzzz1Yv349brvtNmzcuDFWNEcIyQ52Z3+gWrtiGnI1Cvxj3xn0pbFmnMgnTZ1os3qwsKYYE4ryAAAXTTLhR9+eA6VChhf+2oS9RzoFe/109AwE9YIUs+xJxXrIZAya21IvtBOTy9v/faDPVaV8jVWXVYJhgHc+b71gyZZv7xxoBQAsnzdB0NcRimDT7zfeeCNuvPHGCx5/5ZVXLnhs5cqVWLlypVBDIYQIjMs+y815+OZllfjjO8fxt09O4aZlU3l/rWAogp0ftkCpkGH14snn/N20Cfm456Y5+J/XD+Ol//sFwuEoFs8Wfn1/NOwD0++pTp2rVXJMKtHjVKcb4UgUCrm0y/5dniCA9IJ6kTEHc6eZUX/UiuOtTkybIMyWZ18gjI8aOpCfp8K86eJXrvNB2t8NhJCsYHf6oVUrkKNRYOnFZSg0aPBOfSusvT7eX+vt+lb0uANYPq982MBYVWrAvTdfjFytEq/88yu8O5B5SYXDPTD9rk9t+h0Apk80IhyJ4mx36tXZYnF5B4J6TupBHQBWzO/PnIVc2vm4sQP+YARL55ZL/sNSPNk5akKIZLAsC5vLj4KBIKVUyHD9ksmIRFns/KCF19fq84Xw972nkatR4Oq6iXG/bmKxDvetvRj6HCV+/9Yx7P70DK/jSIfDFQADID8v9aA+Y6IRAHAijW1iYuEjUweAqeUGVBTl4cAxa2y5h09RlsU79a1QyGX4+hxpze6MBgV1QkhavIEwAsHIOWvEX7vIgokWHfZ90YXTnfxtrfvbJ6fgC4TxvxZOQo5m5MKrcnMefvKducjPU+H1d0/gb5+c4m0c6XC4/DDkqdLKBGdM7O/t0dIu/XX1WFBPM1NnGAbL5peDZYF3D/I/+9LU4kBXjw+XXlSU9lgziYI6ISQtXNZUYBgM6jKGwZql/XuK//TeCV6Km2y9Prx7oBUFeg2uSHLvcElBLv7zO3NRoFfjLx+0YOcHLYIXWo0kGmXR4w6kXCTHKSnMRa5GgeZsyNS56ffc9DvgLbjIgjytEh8cakcgFEn7ekO9PTCtn60FchwK6oSQtAwX1IH+avTqShO+PN2DIydTbxHK+cuHLQhHWFz/9clQKpL/1VVkzMFPvjMXRfla/N9PTmH7e80ZC+xOTxCRKAtjmkGdYRhUlRlgc/rhHMiEpcrlSb/6naNUyHH5xaXw+MPYx+Puhg67B00nHZhWbsDE4uzefUVBnRCSFtsIW7RuuLwKDIDte5oRjaYeSE93urHvSBcqLHm49CLLqJ9faNDiJ9+Zi5KCHLz56Rm89q/jiGYgsHNFcgVpFMlxuP3q6bRfFYPLE4RKIYOap2ZESy8uh1zG4O16/ra3vVM/sI1tfnZn6QAFdUJImhwjBPUKiw511cU4292X1r7x7XtOAADWXD4FsgRtpOMx6tS4b+1clJtz8c6BVrz65lHRA3sPt51Nl34nuMllBgBAs8TX1V3eIPS5qoTtv5Nl1Kkxb7oZbVYPvjrTm/b1vP4wPm7shEmvxsXTpNuJMFkU1AkhaYk3/c5ZvXgyFHIZ3viwBaHw6NdBm07a8cWpHsyqNGFWpSnxE0ZgyFXhvrVzMdGiwweH2/Hbv32JSDSa1jVHI9ZNjodMfXKJHgyAFgl3lmNZFu6BoM4nLqN+m4ftbR819K/PXzG3HHJZ9ofE7L8DQkhG2V1+KORM3F/cBQYNls8vh90VwDv1baO6dpRlsf29ZjAA1lzOz2EeeVol7r15DiaX6rH3SCe2v9fMy3WT4Uiz8cxQWrUCpeZctHS4RP1gMhq+QBjhCMt7NXlVqR6TinU4dNyWVi+EaJTFOwdaoVTIsERiTYpSRUGdEJIWu9MPk14z4rT41XUTkatR4G+fnBpV+9h9RzpxtrsPC2ZZUGHhr4ApR6PEPTfOgVYtR2NL6mdmj5YjlqnzcxBLVakewVAUbVYPL9fjm9PDX+X7UAzDYPn8crBAWs2F
GprtsPb6UTerv6p+LKCgTghJWTAUgcsbSrhFK1ejxNV1k+ANhPGPvaeTunYoHMHOD1qgkDNYvWRy4ieMklatgDlfC7vTL1o1vMPth0Iugy7Fw03ON7lU2uvq3B51nQD7vi+ZYYE+V4UPDnfAHwyndI1/jZFtbENRUCeEpMzh7p9OTmbf9bJ5ZSjQq/F2fStszsRTpu/Ut8HuCmDZvHIUGrRpj3U4ZoMWwXA0FnyEZncFYNKpUy72O18VVywn0Qp4Nw+HucSjVMhw+ZxS+AJh7G0afRFmm7UPX57uwYyKfJQPHAo0FlBQJ4SkLFGR3FBKhRyrl0xGOBLFzg9Ojvi1Hn8If997CjlqBa6um8TDSIdXmN8/bqsAbUfPFxr48MBHkRynpCAHWrVcspk6N/1uECCoA8DSi8tS3t42lraxDUVBnRCSMvsojxFdMKv/qNR9Rzpxpit++9i/7z0Njz+MqxdOFHStk5sBsAlw8Mz5evr4K5LjyBgGk0v06HJ4BT3qNlVCTr8DgCFPja/NLEKH3Ysjp5JvcNTnC+GTpk4UGjSYMyX7t7ENRUGdEJIy2ygydWCgfezlVWAB7NgzfNW53enH25+3wqRXY/m85NrBpsosYqbucPK3nW0obl1dilvb3F5+DnMZyeD2tuQL5j5saEcwHMUVc8shk/GzFCIVFNQJISkbbDyTfKCaVWnCRZOMaDrpGDa72vlhC8KRKFYvngylgp8uZPFwmboQR8Seb/DIVf4ydWDourr0puCFnn4HgMoSPapK9WhotqPL4U349ZFoFO/Wt0KllGHx7BLBxpUpFNQJISmzO/1gMLpAxTAM1lw+BQCw/b0T53R1O9Plxt6mTpSb81A3q5jv4V6gcGCGQYzpdzuP3eSGmsy1i5Vkph6CjGGQo1EI+jpctv5OEtvbDh23w+4K4LLqEuQmOOkvG1FQJ4SkzJ7iMaITi3VYcJEFZ7r68OkXXbHHd7zfDBbAmqVVokyLqpRyGHJVsWUEIfWkMKuRjDytEhZTDlo6XBnpZz8SlycIXa6St2r/eOZNNyM/T4WPGjrgC4y8vY3rQneFwEs7mUJBnRCSktgxokmup59v9ZLJUMgZ/OWDFoTCUXxxyoGmFgdmTjSiOs12sKNhztfC4QoI3pWN2/7H9/Q7AEwp1cMXiKDDJq0mNE5vUJSzyRVyGZZeXAZ/MIKPGzvift2ZLjeOnu3FrElGlBXmCj6uTKCgTghJSW9fAJEom/LZ4OZ8La6YWw6b0493D7TG2rWuWVrF2+EfySjM1yDKsrEWrkKxu/zQquXQqvmfipbi4S6BUASBYETQIrmhvn5xGRRyGd6pb407YzFWt7ENRUGdEJKS0W5nG87/WjgJWrUCO/Y043SXG5deZMGkYj1fQ0yKWNvaHK6AIFk6MHgMq5Sa0Li5FrEiZOrc61x6URG6enxoGqb1r9sbxL4vulCUr0VNVYEoY8oECuqEkJSMpvFMPHlaJVYtqEAkykIuY3C9AO1gEzEbhN/W5guE4QuEeS+S45SZc6FWytEioUzd6RWm7/tIuHavw21v++BwO0LhKJbNKxd8jT+TKKgTQlLCR6YOACvmT8DMiUZcv2QyzPnCtIMdSeHAaybTujZVqWz9Gw25TIbKEh3abR54/an1Qeeb2yNci9h4JhbrMK3cgKaTDnTYB+sLwpEo3j3QBrVKjkW1Y28b21AU1AkhKeEjUwf6K9DvvflifGPBRD6GNWrm2LY24TJ1bjubUaDpd6C/CQ0L4GSHNLJ1l1fc6XdOrBlN/WC2fuCYFT3uABbVlAhS0yAlFNQJISnhAlW6mXqmGfX9B6xYhczU3cJm6gBQVTawri6R/eouj/Dd5IZz8bRCmPRqfNLYCa+/f7aAC/DLxug2tqEoqBNCUmJ3+ZGjVmR95iOXyWDSqwXN1B0CNZ4ZarBdrEQydZEL5ThymQxXzC1HIBTBRw0dONXpwolWJ2qrClBsyhF1LJlAQZ0QMmosy8Lu9Kc9
9S4V5nwtnJ4gAqGIINfn1tT57vs+lCFXhUKDBs1tTtHOhx+JS4S+7/EsmV0KpUKGdw604l+fcWemj/0sHQAE/Yh97NgxbNiwAbfeeivWrVuHu+66Cz09PQCA3t5ezJkzB5s2bcKiRYtQWVkZe96WLVsglwvb85kQkjqPP4xAKJL1U++cWLtYp1+QpiRcUDcKmKkDwJQyA/Z90YWuHl/Gs9LBE9rEb8Wap1WibpYFHxzugLXXj2JTDi4SsaFRJgkW1L1eLzZt2oS6urrYY88++2zsv++//36sWbMGLMuiqKgIW7duFWoohBCe8VUkJxWxCvhen0BBPQB9rgpKhbCTo5NL9dj3RRea25yZD+reEHI1ilG3EObL8nkT8MHh/u5yy+eP7W1sQwn2bqtUKrz44osoKiq64O9aWlrgdrtRW1sLr9eLSESYKS9CiDD42s4mFdwRrEL0gI+yLBzugKBFchzuxDYprKu7PMGMTL1zyovyUFtVAEOuCgurhT8cSCoEy9QVCgUUiuEv/+qrr2LdunUA+jN6u92Ou+66C93d3Vi1ahW++93vCjUsQggPxlqmbhbwCFa3N4RwJCpokRxnQlEelApZxjvLRaJR9PlCGe+vvnF1DcKRKDSq7C7mHA3R7zQYDKK+vh6PPPIIAECr1eKHP/whrr32WoRCIaxbtw5z585FdXV13GsYjTlQ8HzOstms4/V6Ujfe7hcYf/cs5P16Q/2Hn0yZaJLM+5rOOBQDR3C6/WHe76fX319HVGbR8XrteNeaUp6Po6cd0Om10GRoZwJXQ2A25fB2z1L5PhNLqvcr+r/4Z599htra2tif8/LysGbNGgD9U/Z1dXU4evToiEG9p8fL65jMZh2sVjev15Sy8Xa/wPi7Z6Hvt7Wzf3pXFo1K4n1N935ZloVKKUNbl5v3+2k+3R/UtUoZb9ce6X4rinLx5SkHPm9qx/QKIy+vN1pnuvrHppbzc8/085t8kBe9gqGxsREzZsyI/fno0aP4yU9+ApZlEQ6HceDAAUydOlXsYRFCRsHm8kMhl2WkslkIDMOg0KAVpP/74HY2cZYqqgb2q5/I4BS8KwN930k/wTL1pqYmPPHEE2hra4NCocDu3bvxq1/9ClarFRUVFbGvmz59OvLz87FmzRrIZDIsXbr0nEyeECI9DpcfBQOd2MaKQoMG7TYPPP4QcjX8BSOum5yQe9SHkkKxXKa6yREBg3p1dfWw29QefPDBCx67//77hRoGIYRngVAEbm8IE4ryMj0UXpljR7D6kVvMX1C3i9BNbiijTg2jTo3mdhdYlhX1bHqOizvMReRucoQ6yhFCRskxxrazcQoHtrXxXQHf4/JDLmNgyBMvwFWV6uHyBAXZopeMTHaTG+8oqBNCRmWsbWfjFHKZOs+B0OEOwKgTd6mCm4LP1OEusW5yFNRFR0GdEDIqY63xDMcsQKYejkTR6w7ApBNnPZ3DFcu1tGVmXZ3L1A00/S46CuqEkFEZq0Gdy9T5PIK1ty8AFoBJ5FmNicV5kMuYjGbqaqUcahWd4SE2CuqEkFEZq9PvORoFcjUKXo9gFePI1eEoFXJUWHQ409WHoEAnz43E5QmOme2O2YaCOiFkVOxOPxj0V1mPNYX5WticfkR5Orp0sKhQ/PeqqlSPSJTFma4+UV+XZVm4vSEYaD09IyioE0JGxe7yI1+nztjpW0IyGzQIR6Jw9gV5uR63VGHMwFLF5DI9APGb0Hj8YUSiLFW+Z8jY+6kkhAgmEo2ixx0cc+vpnNgRrDytqzvc3PS7+Jn6FK5YTuR1dbeXO0edgnomUFAnhCSt1x1ElGXH3Ho6xzxwX3ytq/cMrKln4v0qMGigz1WhWeTOctRNLrMoqBNCkjZWK985XKbOVwW83eWHWilHTgZOS2MYBlWlevS4A7G1fTE4uaBOhXIZQUGdEJK0sVr5zinkOVN3uPww6dUZadUKZKYPvNs70CKWMvWMoKBOCEmaLYPV3GKIBXUeMvVA
MAKPPyza6WzDqSrtL5YTc786l6lT9XtmUFAnhCRtrPZ95ygVcuTnqWDlIVOPnc6Wwa1/k4r1kDGMqOvqsRaxVCiXERTUCSFJG+vT70D/urrD7Uc4Ek3rOlzjmUx+AFKr5CgvysWpDnfa95MsNx3mklEU1AkhSbO7/MjVKKBRiV/4JRazQQuWRdrFZYN71DO7VFFVakA4EsXZbnGa0Lg8QchlDHI1Y/d7RMooqBNCksKyLOxO/5jO0oEhB7ukeVqbVJYqJnPr6iI1oXF5+1vEZqo4cLyjoE4ISUqfL4RgOJrxICW02BGsaZ7WFuv7nuH3a0rsGFZx1tVdnhBNvWcQBXVCSFLG+h51Dpepp3uuuhQK5QCgyKhFrkYhSqYeCEYQCEWgpyK5jKGgTghJyngokgOGHMGaZqZudwWQp1VCpczs8aMMw6CqzACb0x/bbiYUFxXJZRwFdUJIUmJBfYxn6kadGnIZk1amzrIsegYaz0gBt1+9ReBsnVrEZh4FdUJIUmKNZ8Z4pi6TMSjQa9JaU/f4w5KqP5gs0rp6LKjT9HvGUFAnhCRFCvuuxVKYr4HLG0IgGEnp+dyshkknjfdqcokeDIQ/sW1w+p36vmcKBXVCSFLsTj9UChl04+CgjlgFfIrtYmNFcgZpTL9r1QqUmnPR0uFCJCpcExqafs88CuqEkKTYXX6Y9Jpxsf843b3qse1sEsnUgf519WAoijarR7DXcHGHudD0e8ZQUCeEJBQIRtDnC4359XROunvVucYzUimUA4DJpcKvq1OmnnkU1AkhCY3109nOZ+bOVU/xYBeHW3r1B1UidJYbPMxl7C/RSBUFdUJIQlJpeSqWwvz0jmC1u/xgGMCQJ52MtaQwF1q1XNCz1V3eIPK0SshlFFoyRdB3/tixY1i+fDl+//vfAwA2bdqE66+/HrfccgtuueUW7NmzBwCwa9cufOtb38KaNWuwY8cOIYdECEnBeGk8w9FplVAr5Sln6j0u/8B+d+kENxnDoKwwD909PsFObHN5gjT1nmGCHaPj9XqxadMm1NXVnfPYT3/6U8ycOfOcxzZv3owdO3ZAqVTiuuuuw/Lly5Gfny/U0AghozReWsRyGIZBYb4GNqcPLMuOqjgwGmXR4w7GDlKREotJixNtTticfhSbcni9djgShccfxoSiPF6vS0ZHsI+RKpUKL774IoqKimKPeTwXVl0ePnwYNTU10Ol00Gg0mD9/Pg4cOCDUsAghKRgv3eSGMhu08Acj8PjDo3peb18AUZaVVJEcx2LsD+SdDi/v13Zzle+UqWeUYJm6QqGAQnHu5T0eD5577jm4XC5YLBY88MADsNlsMJlMsa8pLCyE1WoValiEkBRwa8T5GT6cREyFA0sN1l4f8rTJF35J5XS24XDZebcAQZ26yUmDqKfY33TTTZgyZQoqKyvxwgsv4Fe/+hVmz559ztckM9VlNOZAoeD3kASzWcfr9aRuvN0vMP7umc/77ekLosCgRUmxgbdr8o3vf99J5flAfSuC7Oiu/VVrfyFaRYlB0O+5VK49I9S/lu70hXkf2xl7/weF4qI8Qe6bfn6TI2pQX7FixTn//cgjj+DKK6+MFcwBQHd3N+bMmTPidXp6+P2UaTbrYLW6eb2mlI23+wXG3z3zeb/hSBR2pw9TygySfQ+F+PfVyvuTi5YzPZg+ivXxU229AAC1DIK9X6nerxIsAOBUu5P3sZ0daEGrAP/3TT+/yQd5UUsz77jjDrS3twMA9u/fj6lTp2L27NlobGyEy+WCx+PBgQMHMH/+fDGHRQgZQa87AJYdP5XvnEJur/oou8rZY41npPd+qZVymPRqQdbUY33fafo9owTL1JuamvDEE0+gra0NCoUCu3fvxs0334w777wTOTk50Gq1+PnPfw6NRoN77rkH69evB8Mw2LhxI3S68TXNQoiUjbfKdw63pj7arnJS7CY3lMWYgy9P9yAQikDN41nvbk9/oZyODnPJKMGCenV1NbZu
3XrB46tWrbrgsZUrV2LlypVCDYUQkobxGtS1agXytMpRZ+oOVwBKhWxUxXVispj6g3p3j4/X7WfOgUI5A2XqGSWdzgiEEEkab41nhjLna2B3+hBl2aSf43D7YdKpJXvwTbGxf1mhi+cpeG76XUdb2jKKgjohZETjNVMH+g92CUdY9A70ck8kGIrA7Q1Jcj2dYzEJs1fd7QlCrZLzOqVPRo+COiFkROOx8QxnsAd8clPwPW5uj7o019OBwb3qfGfqTm+Qpt4lgII6IWRENlcAeVol1Krxl4GZDdxpbckVy2XDwTcFBg3kMgadPG4NjrIs3J4QFclJAAV1QkhcLMvC4fJLOkgJabSZul3C3eQ4CrkMhQYNuhypnUA3HK8/jCjL0nY2CaCgTgiJy+0NIRSOjssiOWAwU092W5vDPbCdTeLtdC2mHPT5QujzhXi5XqzynYrkMo6COiEkrvFcJAf0Z9wMkm9A45Bw45mhYuvqPE3BuweCuo4y9YyjoE4IiWuwSE7amadQlAoZ8nVq2JzJrqlLv1AOGKyA56tYLtZNjjL1jKOgTgiJK5apj9PpdwAwGzTocQUQjkQTfq3DHUCuRgGNStRjNUaN26veydO6Ojf9TkE98yioE0LiGs+NZziF+VqwGPyAEw/LsrC7/DDqpP9e8Z2pu2N936n6PdMoqBNC4hrva+rA0B7wIwd1XyCMQDCSFUsV+To1VAoZb2vqLsrUJYOCOiEkLrvTD5VSun3MxWCOndY28lR1Nmxn48gYBkXGHHQ5fGBH0QI3HtfAYS4U1DOPgjohJC77wB51qfYxFwMX1BNl6lI/ne18xSYtAqEIevuCaV/L5Q1CIWeQo5Z2LcF4QEGdEDIsXyAMjz88rqfegcHp90Rd5bJlOxuHz3V1lycIXY5qXH/4kwoK6oSQYTmo8h1A//qzQs4k3Nbm4Pq+S7zxDIfbq85Hu1iXN0jd5CSCgjohZFhUJNdPxjAo0GtgTXL6PVveLy5T705zW5s/GEYwFKX1dImgoE4IGdZ4Pp3tfIX5WvT5QvAHw3G/xu4KgEF/Zp8NLLG96ull6rHKd9rOJgkU1Akhw7LR9HuMOYltbQ6XH/o8FRTy7Pi1mqdVIlejSHtbm8tLle9Skh3fff+iiuoAACAASURBVIQQ0XEtTylT78/Ugfjb2qIsix53IKveK4ZhYDHloLvHh0g0cbe8eGiPurRQUCeEDMvu9EPGMMjX0S/rRA1oXJ4gIlE2a4rkOBZjDiJRNrbUkorB6Xf6PpECCuqEkGH1tzxVQy6jXxOJGtA4sqjxzFDFpvR7wNNhLtJCP62EkAuEI1H0ugNZ0fJUDIka0GTbHnWOhYcjWGn6XVooqBNCLtDjDoAFFclx+k9ek8fdqz64nS27PgRZjOk3oKHqd2mhoE4IuQCdznYuhmFQaNDC6vQP2ys9m/q+D2UZmH5PK6h7Q2AA5FFQlwQK6oSQC1DjmQuZ8zUIBCPo84Uu+DuHe2D6PcsK5TQqBfLzVOmtqXuCyNUqqfZCIuhfgRByAcrUL1RoGCiWG2Zd3eHyQyFnoMvCdeViUw4cLj9C4UhKz3d7gzBk4X2PVRTUCSEXsFGmfoHC/IFtbcOsqztcARh1asiy8EATiykHLIDuntFn6+FIFB5/mIrkJCTuOXlXXnnliCfu7N69W5ABEUIyL1uruYVkjmXq5wa/UDgKpyeIGRX5mRhW2rhiuU6HD2XmvFE9lyuS09F6umTEDeq//vWvAQA7duyAyWTCggULEI1G8cknn8DnS+4T3bFjx7BhwwbceuutWLduHTo6OnD//fcjHA5DoVDgqaeegtlsxqJFi1BZWRl73pYtWyCXy9O8NUJIquxOP3Q5SqiV9HPIGczUz51+7+nrL5Iz6rLzA1CsWC6FbW1uahErOXGD+uTJkwEAp06dwn333Rd7vLa2FnfccUfCC3u9XmzatAl1dXWxx37xi1/g29/+NlatWoU//OEPeOWVV3DvvfeiqKgI
W7duTec+CCE8ibIs7K4Ays25mR6KpAx2lTs3qemJzWpkV5EcJ3YEawoV8M6BTJ3W1KUj4Zp6a2sr9u7di2AwiFAohM8++wxtbW0JL6xSqfDiiy+iqKgo9tjDDz+Mq666CgBgNBrR29sLr9eLSCS1Ag1CCP/cniDCkSitp59Ho1JAl6OE9bxMPdt3CpjztWCY1La1DU6/U1CXiriZOufhhx/G448/juPHj4NlWVRVVeG//uu/El9YoYBCce7lc3L6PxFGIhG89tpr2LhxI7xeL+x2O+666y50d3dj1apV+O53vzvitY3GHCgU/E4Lms06Xq8ndePtfoHxd8+p3q9jYEq1vFifVe+ZGGMtKcxFS5sTpoI8yGX9NUeBSAcAoHKCUdT3i8/XKjblwur0j/qaUaYTADCh1CD4vWfT9yIfUr3fhEE9NzcX27dvT+niw4lEIrjvvvuwYMEC1NXVoa+vDz/84Q9x7bXXIhQKYd26dZg7dy6qq6vjXqMnzaMCz2c262C1unm9ppSNt/sFxt89p3O/zWd6AAA5SlnWvGdi/fvm56oQjrA4cdIWKyI82+EEAMjZqGjvF9/3W2BQo6nFg9Nne5CjSRgWYtq7B8YQjgh67/Tzm3yQTzj9/rOf/Sy1UcVx//33Y+LEifjBD34AAMjLy8OaNWugUqmQm5uLuro6HD16lNfXJIQkj/aoxxc72GXIurrDPdBNLksL5QCg2JhaD3i3l6rfpSbhR7KysjJ873vfw5w5c6BUDv7DcUF5NHbt2gWlUom77ror9tjRo0fx8ssv4/HHH0ckEsGBAwewcuXKUV+bEMKPWFDP0jViIcWK5Zx+TB94zOHyQ6uWjyrDlZrYwS4OLypL9Ek/j45dlZ6E34UWiwUWiwUAEA6HAWDE/eucpqYmPPHEE2hra4NCocDu3btht9uhVqtxyy23AACqqqrwyCOPID8/H2vWrIFMJsPSpUtRW1ubzj0RQtIQK/yiTP0ChcNk6nZXIKuzdCD1CninJwSNSg4VbX2UjIRB/e67777gsaeffjrhhaurq5Pepnb//fcn9XWEEOHZXX6olXLkZnHmKRTzwAcdrlWsLxCGLxCGsSz57FaKBveqj66rnNsbpD3qEpPwp3bv3r34xS9+gd7eXgBAMBiEVqvFj3/8Y8EHRwgRn93pR4FBk9SM3Hhj0mvAMIOtYh1Zvp2NY9JroJDLRrWtLcqycHtDMBu1Ao6MjFbCQrlnnnkG9957L/R6PX75y1/im9/8Jh544AExxkYIEZkvEIY3EM76ICUUhVwGk04d6yo3WCSXnY1nODKGgcWoRVePd9ijZYfT5wshyrK0ni4xCYN6Xl4e5s+fD5VKhRkzZuBHP/oRXn75ZTHGRggR2WCRXHYHKSEVGrTodQcQCkfHVI98iykHvkAELu+FR8sOx80VydH0u6QkDOqhUAj19fXQ6XTYtWsXDh48iNbWVjHGRggRmY2K5BIqzNeARX/tgd01kKmPiaA+sK6e5BT8YOU7bWeTkoRr6o8++ii6u7tx77334tFHH4XD4cDtt98uxtgIISIbK2vEQuJOa7P1+rK+7/tQxcbBCvhpExKfOOeiw1wkKWFQ//jjj7Fo0SJMnjwZr776qhhjIoRkCDWeSYw7rc3q9Me2/2X7mjowZK96kg1oaI+6NCUM6n19fXjsscfQ1dWF+fPnY9GiRVi4cCF0uvHVh5eQ8SDbDycRA9dVztbrg8MdgD5HCSXPZ1FkQnGsAU1y29pcXlpTl6KEa+obNmzAli1b8Ne//hWrVq3Czp07zzlOlRAydtidfshlDPLzsj/zFEqhYbABjcMVGBPr6UB/q1etWj76NXUK6pKSMFM/ePAgPvvsM3z++efweDyYOXMmnnnmGTHGRkjW6vOF8PS2g1gypxRXzC3P9HCSZnf5YdSpIZPRHvV4DHkqKOQynOxwIxyJjpmgzjAMLMYctFo9iEbZhN8DNP0uTQmD+rp167Bw4ULcdtttqKur
o4YUhCShpd2JM919+P1bx6CQy7Bkdmmmh5RQOBKFsy+YVJHUeCZjGBQaNLGWqmNhPZ1TbMrBqU43HC5/rCVuPC5vEAo5A606+5cexpKEQf2TTz7BZ599hnfffRfPPfcc9Ho9LrnkEqxfv16M8RGSlbg2ogDwuze/glatwCUzijI4osTabR6wGFwzJvEV5g8J6mMkUwcGi+U6e7yJg7onBH2uihI9iUm4pm4wGLB8+XKsW7cOq1evBsuy2LJliwhDIyR7cW1Ev7NiGjQqOf7PriNoaLZneFQja2zpH9/MScYMj0T6uG1twNjYzsYZ3Ks+crEcy7JweYM09S5BCYP63XffjWXLlmHTpk1wuVz4j//4D3z44YdijI2QrGUbyNTnTzfjhzfMhkzGYPPORhw905PhkcXX2OIAA6C60pTpoUget60NGFs7BSzGwSNYR+IPRhAKR6lIToISTr+vWbMGTzzxBDweD0wm+mEnJBlWpw8qhQz6XBUMeWpsXF2DX/25Ab/c0YD71l6MScXSOtXL6w/hRKsTlaV66Cj7SujcTH3sBfXOBHvVY9vZ6HtFchJm6nK5HKtWrcLatWsBAI8//jjef/99wQdGSDaz9Z570lltVQH+32tmIRCK4JnXD6PN5snwCM/1xakeRFkWNZMLMj2UrMBl6nIZA8MYylZzNAroc1UJM3Wu8l2XSy1ipSZhUN+8eTO2bdsGs9kMAPi3f/s3bN68WfCBEZKtvP4QvIHwBQVnl8wowq0rZ6DPF8J///EgrL2jO7taSNx6f20VBfVkcHvV8/PG3va/YqMWNqcf4Ug07te4PP0tYg2UqUtOwqCuVCpRVDRYtVtQUACViv4hCYmHq3wvHKbV6uLZpbjpiino7Qvi6T8eRM/A0Z2ZxLIsGlvs0OUoMbGYOkUmI1ejQEVR3pjc/mcx5YBlMeKHTuomJ11JBfX6+noA/S1j//SnP0GppCkXQuLhKt8LDcNvCbryaxW45rJJsPb68czrh9DnS+6oS6Gc6eqD0xNEdWUBZLQ9KSkMw+CR/+dr+LdvXpTpofCOaxfbOcIU/OD0OwV1qUkY1B966CE8//zzOHjwIC6//HK8/fbbeOyxx8QYGyFZicvUzfnxC6iuXVSJ5fPL0Wbz4JnXD8EXCIs1vAtwW9lqqqgQlgBFxsQ94LlMnabfpSdh9XtZWRl++9vfijEWQsYE60CmPlITF4ZhcNOyqfAHIviosQPP7mjAf3x7NlRK8btzNbTYwTBAdSWtpxOgeGCvejKZOk2/S0/cTN3r9eIXv/gFNm7ciJdeegksywIArFYrNmzYINoACck2ttia+sgduWQMg1u/MQPzp5tx9Gwvnn+jacTiJCF4/CE0tzkxuVSPPC0tqxGgyKgFg5H3qrs8QTAM6HtGguIG9YceegiBQADXXnstGhoasHnzZrzxxhtYs2YNLr30UjHHSEhWsTl9yNUokKNJOBEGmYzBv31zFqorTWhotuOlv32BaJQVYZT9jpx0gGWBWtrKRgYoFXIUGDQjnqvu8oag0yrHXOX/WBD3t05bWxuefvppAMDSpUtRV1eHefPm4bXXXkNpqfQPpyAkE1iWhc3pR2lBbtLPUSpk2Hh9DZ55/RA+/bIbGpUC31s5XZSe2o2xrWyFgr8WyR4WUw6OnHTAHwxDo7owTLg8wTHVHncsiZupKxSD/5BKpRIzZszAb37zGwrohIzA6QkiFI6e00Y0GWqlHD+8YTYqLHn44HA7tr/XHFvyEkp0YCubPleFCZY8QV+LZJfiEYrlQuEofIEwdZOTqLhB/fwsQSZLWChPyLjHraebE6ynDydHo8CPbpyDkoIcvPnpGbx/qJ3v4Z3jdKcbLm8INZNNtJWNnKOIO9hlmCl4N1f5TkVykhR3+r21tRXPPfdc3D//4Ac/EHZkhGQhrvJ9tJk6R5+jwj03zsF//Z99ePPTM1gyp1SwgBvbykbr6eQ8I+1Vd3J71ClTl6S46fc111yDcDgc
+9/5fyaEXMjWO3LjmWSY9BpcMqMI3T0+HD0t3Klujc12yBgGs+hUNnIe7lz14Srg3bFuclT5LkVxM/W777477YsfO3YMGzZswK233op169aho6MD9913HyKRCMxmM5566imoVCrs2rULv/vd7yCTyXDjjTfihhtuSPu1CckEqzNx45lkfH1OGT5u6sT7h9sxcxL/QdftDaKl3YUp5QbkauiXMzlXoV4DuYxBV8+Fa+pO2qMuaYItlHu9XmzatAl1dXWxx5599lmsXbsWr732GsrKyrBjxw54vV5s3rwZW7ZswdatW/HSSy+ht7dXqGERIqjBTD29oF5VpkdZYS7qj1pj3bv4dOSkAyzoABcyPJmMQZFRi06794KCzVjjGZp+lyTBgrpKpcKLL754zmEw+/fvx7JlywAAy5Ytw969e3H48GHU1NRAp9NBo9Fg/vz5OHDggFDDIkRQ1l4/DHkqKBXpdYZjGAZL5pQiEmXxSWMnT6MbROvpJBGLMQfeQPiCswnc3v4/U6YuTYIFdYVCAY3m3GzF5/PFTngzm82wWq2w2WwwmQanFwsLC2G1WoUaFiGCCUeicLj9I7aHHY26WcVQyGV4/3A7r9vb+reyOWDIU2FCEW1lI8MrNg2/rY3L1Kn6XZrirqlfccUVwza/YFkWDMPgnXfeGfWLDb0e90vq/F9W3PVHYjTmQJFmJnQ+s3l8HTk53u4XEP6eO+0esCxQbtHx8lpmAIvmlGJPfSu63EHUjLJBTLwxHDvTgz5fCCu+VoGiIn3a45SK8fY9LfT9TploBD49A08oes5r+UP9rYwrK0yinlVA/77JiRvUt2zZEvdJXm/89oEj0Wq18Pv90Gg06OrqQlFRESwWC/bs2RP7mu7ubsyZM2fE6/SM0L4wFWazDlarm9drStl4u19AnHs+esoBANCpFby91oIZRdhT34pde06geBQdvEa63/c/PwMAmFqqHzPfB+Pte1qM+81V9k/knjjjwOxKY+xxW68XWrUCzl5+fw+PhP59kw/ycaffKyoqYv+LRCKw2Wyw2Wxob2/Hj370o5QGunDhQuzevRsA8NZbb2Hx4sWYPXs2Ghsb4XK54PF4cODAAcyfPz+l6xOSSbaByvdU96gPZ2q5ASUFOfj8aDdv5643tjgglzG4SICqejJ2xNvW5vIEoc+hHRNSlfDEiccffxzvvvsuHA4HysvL0dbWhu9973sJL9zU1IQnnngCbW1tUCgU2L17N55++mn853/+J15//XWUlpbiuuuug1KpxD333IP169eDYRhs3LgROt34mmYhY4N1oPI9lW5y8TAMg6/PLsUf3z2BTxo7cOXXKtK6nssbxKkOF6ZNyE/qwBkyfhlyVVCr5OgcsqYejbJw+0KxgE+kJ+FP9cGDB/HWW2/hlltuwdatW9HQ0ID33nsv4YWrq6uxdevWCx5/5ZVXLnhs5cqVWLlyZZJDJkSahMjUAWBhTQl2vN+M9w+3Y8UlE9I66OVIC21lI8lhGAaWgW1tUZaFjGHQ5wuBZanyXcoSVr9z1eqhUAgsy6K2thb19fWCD4yQbGPr9UEuY2DS8RvU87RKzJtehA67F8dbnWldq4G2spFRKDblIBiOotcdAEB71LNBwkx94sSJeO211zB37lzcfvvtKCsrg9OZ3i8WQsYiq9MPk14tyBnTX59div1fdOH9Q+2YNiE/pWtEoyyaWuww6tQoMyd/NCwZvyzGwR7wJr0m1giJMnXpShjUH3vsMfT29sJgMGDXrl2w2+3493//dzHGRkjWCIQicHmCmDnRmPiLUzC9Ih8WoxaffdWNm5dPRZ529IVKLR0uePxhzJteJMpZ7ST7xfaq9/hw0aQhmToFdclKOP3+wAMPwGQyQS6XY/Xq1bj99tvx8MMPizE2QrIG1x423Z7v8TAMg6/PKUM4EsXeI6l1mGts7p96p/V0kqzzK+BdXDc5qn6XrLiZ+q5du7B9+3YcPXoUra2tsccDgQB1fCPk
PNxBLumczpbIwppi/Pn9ZnxwqB3L55WPOttuaLFDLmMEm00gY49l4Fx17ghWytSlL25Qv+aaazBv3jz8+Mc/Pme6nWEYTJs2TZTBEZItBjN14YK6PkeFudPM+OyrbjS3uzClzJD0c52eIE53ujFzohFaNW1lI8nJ1SiRp1UOZupUKCd5I06/l5WVYdu2bZg0aRK8Xi98Ph8qKyvP6dVOCBFuO9v5vj6nFADw/qG2UT2viareSYqKTTmw9voRjkSpUC4LJFxT3759O9auXYudO3dix44duPnmm/HXv/5VjLERkjWEaDwznBkTjSjK1+KzL7vh9SffYS52Khutp5NRspi0iLIs7E4/XJ4glAoZNCrxer6T0Uk4D/fnP/8Z//znP2MnrvX19WH9+vW49tprBR8cIdnC5vRDpZRBJ3ABkWzgSNYde5qx90gXls0rT/icSDSKphYHCvRqlBZQJzAyOlwFfKfDC5e3v0Us7Z6QroSZ+vlHqObl5UGppMpHQjgsy8Lm9MFs0Iryy+6ymhLIZQzeP9SW1JGsLe0ueANh1FQV0i9jMmpD96q7PCGaepe4hJm6xWLBz372M1x22WUAgA8//BAWi0XwgRGSLTz+MHyBCArLhV1P5xhyVZgztRD1R61o6XChqnTkgrkGbisbraeTFHDb2k51uhGORKlITuISZuqPPfYY8vPzsW3bNmzbtg2FhYXYtGmTGGMjJCvYnP3r6YUCVr6fjyuY++BQe8KvbWyxQyGnrWwkNUXG/u/r4629AAAdZeqSNuI+9WuuuQa5ubnYsGGDmGMiJKvYevsr380GcTJ1ALhokgmFBg32f9mFm5ZNjbtNrbcvgDNdfZg1yQg1FTeRFKiVcpj0ajhc/f3fDRTUJS1upr5jxw4xx0FI1rJmIFOXMQyWzC5FMBTFvi+64n5dI21lIzzg1tUBQEfT75KWcPqdEDIyay/XTU68TB0AFtWWQMYweP9g/II5rjUsbWUj6Sgecn66PpcKpaUs7vT7wYMHcfnll1/wOMuyYBgGe/bsEXBYhGQPMbrJDSc/T405Uwtx4JgVpzrdqCzRn/P34UgUR045UGjQnPNLmZDRsgz5/jFQpi5pcYP6RRddhGeeeUbMsRCSlaxOP/K0yoy0X10yuxQHjlnx/qH2C4J6c5sTvkAEdbOKaSsbSYvFOPiBlQrlpC3ubyGVSoWysjIxx0JI1unvtOVDuTkvI69fXWlCgV6N/V924cYrppzzwaKB1tMJT86dfqegLmVx19Rra2vFHAchWcnZF0Q4wopaJDeUTMZg8exSBIIRfPrluQVzjc0OKOQyzKCtbCRNBQYN5DIGDAPkaWlNXcriBvV7771XzHEQkpUGe76LWyQ31OLaUjAM8P6QPeu2Xh9arX2YUZEPtZK2spH0KOQylJvzUGTMgYyWciSNzmAkJA2ZaDxzPqNOjdlVhTh0wobTnW5MLNah/qtuAFT1Tvhz57dqEIkmbktMMou2tBGShkw0nhnOEu5I1sP92Xr9V/1T8bUU1AlPTHqN6Ds8yOhRUCckDZloPDOcmskmGHVq7DvSCY8/hEPHrCgyas9pGkIIGfsoqBOSBluvHwyAAn1mM3W5TIbFtSXwByN47V/H4AuE6QAXQsYhCuqEpMHm9CFfp4ZSkfkfJa5gbu+R/ql3Wk8nZPzJ/G8iQrJUOBKFwxUQvT1sPAUGTWxPukopx/QJ+RkeESFEbBTUCUmR3eUHC6DQIJ3iIe5I1tophVDRVjZCxh1Rt7Rt374du3btiv25qakJ119/PQ4ePIjc3FwAwPr164ftOU+I1MQq3/OlkakDwOyqQqxZWoUl8yoyPRRCSAaIGtTXrFmDNWvWAAA+/fRT/POf/4TX68VPf/pTzJw5U8yhEJI2rvJdStt8ZDIG37h0IsxmHaxWd6aHQwgRWcam3zdv3owNGzbA4/FkagiEpMWWoSNXCSEknox0lGtoaEBJSQnMZjM8Hg+ee+45uFwuWCwWPPDA
A8jPpwIfIn02CWbqhJDxjWFZVvS+fw899BCuvvpqXHrppfjXv/6FKVOmoLKyEi+88AJsNhsefPDBEZ8fDkegUFAREMmse375PlranNjx+Dchl1E/bEJI5mUkU9+/fz8eeOABAMCKFStij69YsQKPPPJIwuf39Hh5Hc94W38cb/cLCHPPHTYPTHoNHPY+Xq/Lh/H2b0z3O7bR/fY/lgzR19S7urqQm5sLlar/TN477rgD7e39/ar379+PqVOnij0kQkbNHwzD7Q1lvOc7IYQMJXqmbrVaYTKZYn9et24d7rzzTuTk5ECr1eLnP/+52EMiZNRszoEiOVpPJ4RIiOhBvbq6Gi+99FLsz4sWLcKiRYvEHgYhaeHOUafKd0KIlFBHOUJSMNh4hjJ1Qoh0UFAnkheORPHugVZ4/aFMDyUmduSqhFrEEkIIBXUiefuOdOH3bx3DGx+ezPRQYqTYIpYQQiioE8k7fMIGANh7pBOhcDTDo+lnc/qgVsmRp1VmeiiEEBJDQZ1IWjgSRdMpBwDA4w/j0ECAzySWZWF1+mE2aMAw1HSGECIdFNSJpB0924tAMILqyv5tkB82tGd4RECfL4RAMELr6YQQyclIRzlCktVwwg4AuOrSCvgCYRxpccDh8sOkz9xa9uAedVpPJ4RIC2XqRLJYlsXhEzaoVXJMn5CPRbUlYAF83NSZ0XFxe9TNlKkTQiSGgjqRrE6HF929PlRXmqCQy/C1mRaoFDJ81NCOqPjnEMVQpk4IkSoK6kSyGpr7p95rqwoAAFq1AvNnFMHa68fxs70ZG5eNMnVCiERRUCeSxW1lq60qjD22uLYEAPBhQ0dGxgQMaRFLmTohRGIoqBNJ8vrDON7qRGWJDoZcVezxaRPyUZSvxedfdcMXCGdkbFanH3laJTQqqjMlhEgLBXUiSUdOORCJspg9JEsHAIZhsKi2BMFwFJ9+2SX6uKJRFnannzrJEUIkiYI6kSRu6n32lMIL/m5hdTEYBvgoA1PwvX0BRKIsHeRCCJEkCupEcqJRFo0tdhjyVKiw5F3w9ya9BtWVBWhud6HN5hF1bINHrlJQJ4RIDwV1IjknO1xwe0OYXVUQtw0rVzD3scjZOm1nI4RIGQV1IjmHmwem3qsunHrnzJ5SiDytEp80dSAcEe+QF2o8QwiRMgrqRHIaTtihkDOYOckY92uUChkWzLLA5Q2hcWA/uxgoUyeESBkFdSIpDpcfZ7r7MKPCmHDL2KIa8fes23p9YAAUZLD3PCGExENBnUhKQ8u5XeRGUmHRYWKxDg3Ndjj7AkIPDUD/HnWjXg2FnH50CCHSQ7+ZiKRwp7LVDrOVbTiLa0sQZVl8ckT4Q15C4Sh63QGqfCeESBYFdSIZwVAEX5xyoLQwF0VJ7gO/9CILFHIZPmroACvwIS92lx8sALOBpt4JIdJEQZ1IxldnehEMR5OaeufkapSYN92MDrsXzW0uAUc3eJBLITWeIYRIFAV1IhmDW9mSD+oAsCh2yEs772MayspVvlOmTgiRKArqRBJYlkXDCRty1ApMKTeM6rkzJxpRoFfj06+64Q8Kd8hL7MhVytQJIRJFQZ1IQpvNA7srgOrJJshlo/u2lDEMLqspQSAYwedfWQUa4WCmTkGdECJVFNSJJIx0gEsyFtWUgAHwkYBT8LZeHxRyGQx5qsRfTAghGUBBnUhCQ7MdDAPUTB7dejqnMF+LmZOMONbqRKfDy/Po+tmcfhQYNJDF6UdPCCGZNnLLLp41NTVhw4YNmDhxIgBg2rRpuP3223HfffchEonAbDbjqaeegkpFmdB40ucL4USbE1VlBuRplSlfZ1FtCb441YOPGzvwra9X8ThCwBcIo88XwqRiHa/XJYQQPomaqXu9Xlx11VXYunUrtm7digcffBDPPvss1q5di9deew1lZWXYsWOHmEMiEtDYYgfLjr7q/Xxzp5qRo1bg48YORKL8HvIy2POd1tMJIdIlalD3eC48+3r//v1YtmwZ
AGDZsmXYu3evmEMiEtAwcCDLSKeyJUOllOPSWRb09gVx5KSDj6HFxCrfaTsbIUTCRJ1+93q9qK+vdyGsCgAAFtJJREFUx+233w6fz4c777wTPp8vNt1uNpthtSauXjYac6BQyHkdm9k8vqZVpXK/kUgUTScdMBu1mHNRcdzz05N1zZIpeO9AGz49asWyBZXn/F069+z9ohsAMLnCKJn3LpFsGSdf6H7HNrrf5Iga1GfMmIGNGzdi2bJlOHnyJG677TaEw4P7ipNt89nTw28hlNmsg9Xq5vWaUial+z16pgceXwhfm1kEm60v7evp1TKUm/Owv6kTzaft0OdwHxjTu+dTbb0AABUDybx3I5HSv7EY6H7HNrrf5IO8qNPvVVVVsan2yspKFBYWwuVywe/vX6/s6upCUVGRmEMiGTY49Z7eejqHYRgsri1BJMpi35EuXq4JUOMZQkh2EDWo79ixA6+++ioAwGq1wm634/rrr8fu3bsBAG+99RYWL14s5pBIhh1utkOlkGFGhZG3ay6YZYFcxuCjhnbeDnmxOf3QqOTI1Yg6uUUIIaMi6m+oFStW4Mc//jF2796NYDCIRx55BDNnzsRPfvITvP766ygtLcV1110n5pBIBll7fWi3eTC7qgAqJX81ErocFS6eWojPj1pxqtONyhJ9WtdjWRZWpw8WY07aa/6EECIkUYO6wWDAiy++eMHjr7zyipjDIBIRm3pPsYvcSBbVluLzo1Z81NCRdlB3e0MIhqJ0kAshRPKooxzJGK417GiOWk1WdaUJRp0a+77oQjAUSetaVietpxNCsgMFdZIR/mAYX53pwYSiPJj0/GfAMhmDhdXF8AXCOHAsvUNebL105CohJDtQUCcZ8eWpHoQjLGZP4T9L5wyes96R1nVsA5k6dZMjhEgdBXWSEYebB05lS7OL3EgsxhxMm5CPL0/3oCuNQ16sA5k6dZMjhEgdBXUiOpZlcbjZjjytMu0itkQWD2TrT7z6GewD/dtHyzqwR73QQJk6IUTaKKgT0Z3p6oOzL4jaqgLIZMJuEVswy4KF1cU4frYXj275DE0t9lFfw+b0QZ+jhFrFb2tiQgjhGwV1IrrY1LsAW9nOJ5fJsP7qmdh4w2z4g2H8z58OY9dHJxFNsilNNMrC4QrQejohJCtQUCeiO3zCDrmMwaxJJlFej2EYrKybhPvXzYNJr8EbH53EL7c3oM8XSvhch9uPSJSlyndCSFagoE5E5fQEcbLDhanlBuSI3HK1skSPh2+7BNWTTWhssePRVz7DqU7XiM/htrPRHnVCSDagoE5E1TjQRa5WwKr3keRplbh7zWxcu6gSDpcfP9taj/cPtcXtEU+NZwgh2YSCOhHV4Hq6cPvTE5ExDK5dVIm7vz0baqUcv3vzKF7+x5cIDNN5jhrPEEKyCR05RZL2xSkHnt/ZhHJzLmqnFKJ2cgHKzLlJH3ISjkRx5KQDRUYtik05Ao82sZrJBXj4tkvw/M4mfNzYiTNdfdiwuhoW4+DYqPEMISSbUFAnSQmGIvjdm1/BFwjjeKsTx1qd2LGnGSa9GrWTC1BTVYCZE43QqOJ/Sx072wt/MIJFtQWSOe2s0KDF/evmYds7x7HnYBse2/I5br96Ji6eZgYAWJ1+MAxg0qkzPFJCCEmMgjpJyj/2nYa1148rL5mAq+smoumkAw3NdjS12LHnUDv2HGqHQs5g+oR81FYVoraqAJbzsvHDJ4Q7lS0dSoUM371qOqpK9di6+yh+9ZdGrFowEauXVMLW64NJp4FCTitVhBDpo6BOEurq8eIf+84gP0+FaxdVQqtWoG5WMepmFSMSjeJkuxuHm21obLbjyKkeHDnVg23vHIfFqEVNVQFqqwowfYIRDc02qFVyTJ+Qn+lbGtZlNSWosOiweWcj/rHvNFranejtC2JGhTTHSwgh56OgTkbEsiz+8NYxhCNR3Lx8GrTqc79l5DIZppQbMKXcgG99vQo97gAaW+xoaLbjyCkH
3v68FW9/3gqVUoZgKIp508ySznonFOXhoe/Nx2///iUOHu8v6qP2sISQbEFBnYyo/qgVTScdmDXJiPnTzQm/3qhTY8nsUiyZXYpwJIrjZ3txuNmOxhY7Ouxe1FUXizDq9ORolPjB9TV4c/8Z/OWDFkwpN2R6SIQQkhQK6iQufzCMbe8ch0LO4DtXTh91cZtCLsPMSSbMnGTCTcumIhiKQKXMjv7pDMPgGwsmYtm88qwZMyGESHcelGTcro9PoccdwDcuncjLFrRsDI7ZOGZCyPhFQZ0Mq9Xah399dhaFBg2urpuY6eEQQghJAgV1cgGWZfH73UcRibJYu2IaZauEEJIlKKhnoc+/6sZT2w6iud0pyPU/aerEsVYnLp5aiDkS21NOCCEkPgrqWSQcieL1d4/j+Tea8OXpHjz9x0M4draX19fw+EPY/t4JqJQy3Lx8Kq/XJoQQIiwK6lmity+Ap7cdxO5Pz6LYlIObrpiCcDiKZ/50CF+ccvD2On/5oAUubwjfXDiJ9mcTQkiWoS1tWeDomR688NcjcHmCmD/djNtWzYRWrUCRMQfPv9GIX2xvwA+ur077ONOTHS7sOdCGkoIcXPW1Cp5GTwghRCyUqUsYy7J4c/8ZPLXtEDy+EG5aNhX/fl11rKvbnKmFuOuGWsgY4Fd/bsSBY9aUXysaZfH7t46CBbDuyumS7vpGCCFkePSbW6K8/jCe39mEP713ArpcJe69+WJcecmECxrAVFcW4D++PRsKuQzP72zCp192pfR67x9ux8kONxbMsmDmRCMft0AIIURkok+/P/nkk6ivr0c4HMb3v/997N+/HwcPHkRubi4AYP369bj88svFHpaktHb3YfPORnT1+DCjIh/fv2YWDHnxj/6cXmHEPTfOwf9sP4Tf7DqCUDiKy2pKkn49lyeIP+9phlYtx41Lp/BxC4QQQjJA1KC+b98+HD9+HK+//jp6enqwevVq1NXV4ac//Slmzpwp5lAka29TJ3735lcIhqP4xoIKXL9kMuSyxBMqU8oN+PFNF+OZ1w/ht3//EqFIFJfPKUvqNbe/dwLeQBhrl08d8cMDIYQQaRM1qF9yySWora0FABgMBvh8PrhcLjGHIFmhcBR/fOc43jvYBq1ajh9cU4O50xIfoDJUZYke9958Mf779UN49c2jCIWjWDF/wojPOXa2Fx83daLCkoelc5P7EEAIIUSaRA3qcrkcOTn9PcS3b9+OJUuWwOFw4LnnnoPL5YLFYsEDDzyA/PzxdX613enH82804mSHG+XmPGxcXQ1Lir3WKyw63Ld2Lp7edhDb3j6OcDiKbywYvs1rOBLF1reOggFwy1XTk5oRIIQQIl0My7Ks2C/69ttv4ze/+Q1efvll7Nu3D1OmTEFlZSVeeOEF2Gw2PPjggyM+PxyOQKEYG61LD3zVjaf/UA+3N4gr5k/Av3+rFhpV+p+12q3/f3v3HxR1mccB/L2woPzSUBcKNFJU1A4T0cLAJEDJc9ROVJCWRm8cr4zSUhGTARonYZFKpSk6ox+zlIg/ziitPHNsuGtlxhDSGivKsZDk1yILCwuyPPeHuRe5luDiyrPv13/7Zdn9vNtm334fdr9PKza//l80NJuQGDsBCbPHX/Mhu38dr8JbH36N2LAAJC+ZctPPSURE9nXLS720tBQ7duzAm2++ec0ZeVVVFTIzM1FYWPiHj1Ff32LTmVQqrz49phACFxqM6DJ39+l5K6saUfKfc3B2ViBx9njMus+v19ub/pH6S+3YtvsUGppNmDcjAIseGgOFQgGVygvf/lCPzbvK4KJ0wtZVYfB0c7HZ896O+voaD1TMKzfmlZu1vCqV1w397i1dfm9paUFOTg7eeecdS6E/8cQTSE9Ph5+fH8rKyjBu3MC4NGmXuRv//PAbnDxbd1OPM3zIYKz+218w+q4hNprs/1R3uCH1sanYtvsUDunOo/NyNxKir3y6vehYFToum5EYM076QicichS3tNQPHz6M
pqYmrF271nIsLi4OTz/9NNzd3eHm5oasrKxbOVKfXO7qxusHz6CiqgGj7xqCcSOH9ulx3AYpER06sl9LddiQwdj42FTkFlXg3yd/vvKp+GmjcPJsHcb6D0X45Bv/6hsREd3e7PI39Ztlz+X3zstmvHrgNM6c02PSPd54Om4yBg2ArUkNbZ14qagCP9e1QunsBHN3NzKWT8fdvje2pDPQcflObswrN+a98eV3fty5F0ydXdi+txJnzukxOXA41iweGIUOAEPcXbFhWQhG3+WFLnM3YkJHOUyhExE5Cm7ocoPaTF3Yvq8SVdXNCB2vwj8W3jvgro/u6eaC9QkhOF/fhsA7Pe09DhER2djAaiU7aW2/jJf2nEJVdTMemOSLJx4deIV+ldsgJWaG+MNFOTDnJyKi6+OZ+p8wtHXi5aIK/FTXivDgO7Fi7kQ4Odnua2dERES2wlL/A82tHdhWVIGaBiMiQ/yhnjMeTjb8HjkREZEtsdSvQ28wYVtRBWr1bZg9bRQSosfa9MIwREREtsZSt6LhUjtyfr0S21/DAhA3awwLnYiIbnss9d+p1bdhW9Ep6A0deDRiNOaH38NCJyKiAYGl/hsXGozILTqF5tZOLIkMvO7uZkRERLcjlvqvfqptwUt7KtDSdhnLYsb96T7kREREtxuWOoDvf27Ctt2n0GbqwuOPBCFyir+9RyIiIuo1hy/16vpWZL9XjvaOLvx93kSEB3ODEyIiGphY6vWt6DILrJp/Lx6Y5GvvcYiIiPrM4Us9bNKdmBsRiCa90d6jEBER3RReABwYsNdxJyIi+i22GRERkSRY6kRERJJgqRMREUmCpU5ERCQJljoREZEkWOpERESSYKkTERFJgqVOREQkCZY6ERGRJFjqREREkmCpExERSUIhhBD2HoKIiIhuHs/UiYiIJMFSJyIikgRLnYiISBIsdSIiIkmw1ImIiCTBUiciIpKEw5f61q1bER8fj4SEBHz11Vf2Hqdf5OTkID4+HnFxcThy5Ah++eUXJCUlITExEWvWrEFnZ6e9R7Q5k8mE6OhoHDhwwCHylpSUYMGCBVi0aBE+//xzqTMbjUYkJycjKSkJCQkJKC0tlTLvd999h5iYGBQWFgLAdTOWlJQgLi4OS5Yswb59++w58k2xlnf58uVQq9VYvnw56uvrAciTF7g281WlpaUICgqy3O5VZuHAysrKxKpVq4QQQnz//fdi8eLFdp7I9nQ6nVi5cqUQQgi9Xi9mzZolUlNTxeHDh4UQQmg0GvHee+/Zc8R+8fLLL4tFixaJ/fv3S59Xr9eLOXPmiJaWFlFbWyvS0tKkzqzVakVubq4QQoiLFy+K2NhY6fIajUahVqtFWlqa0Gq1QghhNaPRaBRz5swRBoNBtLe3i9jYWNHU1GTP0fvEWt6UlBRx6NAhIYQQhYWFQqPRSJNXCOuZhRDCZDIJtVotwsPDLffrTWaHPlPX6XSIiYkBAIwdOxYGgwGtra12nsq2pk+fjh07dgAAhg4divb2dpSVlSE6OhoAEB0dDZ1OZ88Rbe6HH35AVVUVIiMjAUD6vDqdDjNmzICnpyd8fHywZcsWqTN7e3vj0qVLAACDwQBvb2/p8rq6umLXrl3w8fGxHLOWsbKyEsHBwfDy8sLgwYMxbdo0lJeX22vsPrOWNyMjA7GxsQD+/5rLkhewnhkA8vPzkZiYCFdXVwDodWaHLvWGhgZ4e3tbbg8fPtyyxCMLZ2dnuLu7AwD27t2Lhx56CO3t7Zb/YVQqlXSZNRoNUlNTLbdlz1tdXQ0hBNauXYvExETodDqpM8+bNw81NTWYPXs21Go1Nm7cKF1epVKJwYMH9zhmLWNDQwOGDRtmuc+IESMGZHZred3d3eHs7Ayz2Yz3338f8+fPlyYvYD3zuXPncPbsWcydO9dyrLeZlbYfdeAQv7tCrhACCoXCTtP0r6NHj2Lfvn146623LP/6Ba79bzDQHTx4EFOmTMGoUaMsx377msqW96ra
2lq8+uqrqKmpweOPPy515g8++AB+fn4oKCjA2bNnsXnzZqnzXmUto+zvYWazGSkpKQgLC8OMGTNQUlLS4+ey5c3KykJaWlqPY719jR36TN3X1xcNDQ2W23V1dRgxYoQdJ+ofpaWlyM/Px65du+Dl5QU3NzeYTCYAV8rg98s/A9nx48fx2WefYenSpdi7dy9ee+01qfMCV1aYQkJCoFQqcffdd8PDw0PqzOXl5YiIiAAATJgwAbW1tVLnvcpaRmvvYSqVyl4j2tymTZsQEBCA5ORkANbfs2XJW1tbix9//BHr16/H0qVLUVdXB7Va3evMDl3q4eHh+PTTTwEA33zzDXx8fODp6WnnqWyrpaUFOTk5eOONN3DHHXcAAB588EFL7iNHjmDmzJn2HNGmtm/fjv3796O4uBhLlizB6tWrpc4LABEREThx4gS6u7uh1+vR1tYmdeaAgABUVlYCAC5cuAAPDw+p815lLeN9992H06dPw2AwwGg0ory8HNOmTbPzpLZRUlICFxcXPPPMM5ZjMuf19fXF0aNHUVxcjOLiYvj4+KCwsLDXmR1+l7bc3FycPHkSCoUCGRkZmDBhgr1Hsqk9e/YgLy8Po0ePthzLzs5GWloaOjo64Ofnh6ysLLi4uNhxyv6Rl5cHf39/REREYOPGjVLnLSoqwqFDh9De3o4nn3wSwcHB0mY2Go14/vnn0djYiK6uLqxZswaBgYFS5T1z5gw0Gg0uXLgApVIJX19f5ObmIjU19ZqMn3zyCQoKCqBQKKBWq7FgwQJ7j99r1vI2NjZi0KBBlhOtwMBAZGZmSpEXsJ45Ly/PcvIVFRWFY8eOAUCvMjt8qRMREcnCoZffiYiIZMJSJyIikgRLnYiISBIsdSIiIkmw1ImIiCTh0FeUI3IE1dXVeOSRRxASEtLj+KxZs7By5Uqrv/Pss88iNTUVvr6+fX7e8+fPY8WKFZav5RBR/2OpEzmAYcOGQavV3vD9X3nllX6choj6C0udyIFNmjQJq1evRllZGYxGI7KzszF+/HhERUXh7bffRkdHB9LT0+Hi4gKTyYSnnnoKkZGRqKysRHZ2NpRKJRQKBdLT0zF27FiUl5cjIyMD/v7+GDNmjOV5mpubkZGRgaamJnR2diIxMRHz58+3Y3IiOfFv6kQOzGw2Y9y4cdBqtVi2bBl27tzZ4+fFxcWIioqCVqtFfn6+ZcvTlJQUbNq0CVqtFitWrMALL7wAAMjJycH69euRn5/f4/rU27dvx8yZM/Huu++ioKAAO3fuhF6vv3VBiRwEz9SJHIBer0dSUlKPYxs2bAAAy+YoU6dORUFBQY/7xMbGIjU1FTU1NXj44YexcOFCGAwGNDY2YvLkyQCA+++/H8899xwA4Ntvv0VoaCgAICwszLLkX1ZWhtOnT+PgwYMArmw7WV1d3WNLSSK6eSx1IgfwR39T/+2Von+/peP06dPx0UcfQafT4cCBAygpKUFmZuZ1fx8AnJyuLACazWbLMVdXV2RkZCA4OPhmYhDRn+DyO5GDO3HiBADgyy+/RFBQUI+fabVaXLx4EVFRUXjxxRdRWVkJLy8vqFQqy05pOp0OU6ZMAXBl042KigoAwBdffGF5nNDQUHz88ccAAJPJhMzMTHR1dfV7NiJHwzN1Igdgbfl95MiRAK5sO7x79240NzdDo9H0uM+YMWOwbt06eHh4oLu7G+vWrQMAaDQaZGdnw9nZGU5OTpaz9w0bNmDLli3w8/PDxIkTLY+TnJyMtLQ0LFu2DJ2dnYiPj4dSybcfIlvjLm1EDiwoKAhff/01C5ZIElx+JyIikgTP1ImIiCTBM3UiIiJJsNSJiIgkwVInIiKSBEudiIhIEix1IiIiSbDUiYiIJPE/SRJE1lLBkuEAAAAASUVORK5CYII=\n"
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# set to logging.WARNING to disable logs or logging.DEBUG to see losses as well\n",
    "logging.getLogger().setLevel(logging.INFO)\n",
    "\n",
    "model = Model(num_actions=env.action_space.n)\n",
    "agent = A2CAgent(model)\n",
    "\n",
    "rewards_history = agent.train(env)\n",
    "print(\"Finished training! Testing...\")\n",
    "print(\"Total Episode Reward: %d out of 200\" % agent.test(env))\n",
    "\n",
    "plt.style.use('seaborn')\n",
    "plt.plot(np.arange(0, len(rewards_history), 5), rewards_history[::5])\n",
    "plt.xlabel('Episode')\n",
    "plt.ylabel('Total Reward')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Static Computational Graph"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "text": [
      "Eager Execution: False\n",
      "Finished training! Testing...\n",
      "Total Episode Reward: 200 out of 200\n"
     ],
     "output_type": "stream"
    },
    {
     "name": "stderr",
     "text": [
      "INFO:root:Episode: 001, Reward: 035\n",
      "INFO:root:Episode: 002, Reward: 020\n",
      "INFO:root:Episode: 003, Reward: 044\n",
      "INFO:root:Episode: 004, Reward: 013\n",
      "INFO:root:Episode: 005, Reward: 019\n",
      "INFO:root:Episode: 006, Reward: 166\n",
      "INFO:root:Episode: 007, Reward: 078\n",
      "INFO:root:Episode: 008, Reward: 039\n",
      "INFO:root:Episode: 009, Reward: 052\n",
      "INFO:root:Episode: 010, Reward: 029\n",
      "INFO:root:Episode: 011, Reward: 025\n",
      "INFO:root:Episode: 012, Reward: 013\n",
      "INFO:root:Episode: 013, Reward: 036\n",
      "INFO:root:Episode: 014, Reward: 031\n",
      "INFO:root:Episode: 015, Reward: 012\n",
      "INFO:root:Episode: 016, Reward: 019\n",
      "INFO:root:Episode: 017, Reward: 061\n",
      "INFO:root:Episode: 018, Reward: 069\n",
      "INFO:root:Episode: 019, Reward: 138\n",
      "INFO:root:Episode: 020, Reward: 035\n",
      "INFO:root:Episode: 021, Reward: 050\n",
      "INFO:root:Episode: 022, Reward: 140\n",
      "INFO:root:Episode: 023, Reward: 077\n",
      "INFO:root:Episode: 024, Reward: 073\n",
      "INFO:root:Episode: 025, Reward: 119\n",
      "INFO:root:Episode: 026, Reward: 113\n",
      "INFO:root:Episode: 027, Reward: 062\n",
      "INFO:root:Episode: 028, Reward: 052\n",
      "INFO:root:Episode: 029, Reward: 047\n",
      "INFO:root:Episode: 030, Reward: 044\n",
      "INFO:root:Episode: 031, Reward: 074\n",
      "INFO:root:Episode: 032, Reward: 105\n",
      "INFO:root:Episode: 033, Reward: 108\n",
      "INFO:root:Episode: 034, Reward: 096\n",
      "INFO:root:Episode: 035, Reward: 114\n",
      "INFO:root:Episode: 036, Reward: 053\n",
      "INFO:root:Episode: 037, Reward: 141\n",
      "INFO:root:Episode: 038, Reward: 067\n",
      "INFO:root:Episode: 039, Reward: 068\n",
      "INFO:root:Episode: 040, Reward: 106\n",
      "INFO:root:Episode: 041, Reward: 200\n",
      "INFO:root:Episode: 042, Reward: 180\n",
      "INFO:root:Episode: 043, Reward: 200\n",
      "INFO:root:Episode: 044, Reward: 200\n",
      "INFO:root:Episode: 045, Reward: 200\n",
      "INFO:root:Episode: 046, Reward: 154\n",
      "INFO:root:Episode: 047, Reward: 115\n",
      "INFO:root:Episode: 048, Reward: 177\n",
      "INFO:root:Episode: 049, Reward: 200\n",
      "INFO:root:Episode: 050, Reward: 147\n",
      "INFO:root:Episode: 051, Reward: 165\n",
      "INFO:root:Episode: 052, Reward: 200\n",
      "INFO:root:Episode: 053, Reward: 200\n",
      "INFO:root:Episode: 054, Reward: 130\n",
      "INFO:root:Episode: 055, Reward: 165\n",
      "INFO:root:Episode: 056, Reward: 099\n",
      "INFO:root:Episode: 057, Reward: 114\n",
      "INFO:root:Episode: 058, Reward: 038\n",
      "INFO:root:Episode: 059, Reward: 036\n",
      "INFO:root:Episode: 060, Reward: 134\n",
      "INFO:root:Episode: 061, Reward: 138\n",
      "INFO:root:Episode: 062, Reward: 200\n",
      "INFO:root:Episode: 063, Reward: 152\n",
      "INFO:root:Episode: 064, Reward: 086\n",
      "INFO:root:Episode: 065, Reward: 103\n",
      "INFO:root:Episode: 066, Reward: 153\n",
      "INFO:root:Episode: 067, Reward: 200\n",
      "INFO:root:Episode: 068, Reward: 162\n",
      "INFO:root:Episode: 069, Reward: 176\n",
      "INFO:root:Episode: 070, Reward: 125\n",
      "INFO:root:Episode: 071, Reward: 114\n",
      "INFO:root:Episode: 072, Reward: 103\n",
      "INFO:root:Episode: 073, Reward: 127\n",
      "INFO:root:Episode: 074, Reward: 104\n",
      "INFO:root:Episode: 075, Reward: 090\n",
      "INFO:root:Episode: 076, Reward: 056\n",
      "INFO:root:Episode: 077, Reward: 044\n",
      "INFO:root:Episode: 078, Reward: 085\n",
      "INFO:root:Episode: 079, Reward: 127\n",
      "INFO:root:Episode: 080, Reward: 085\n",
      "INFO:root:Episode: 081, Reward: 111\n",
      "INFO:root:Episode: 082, Reward: 099\n",
      "INFO:root:Episode: 083, Reward: 200\n",
      "INFO:root:Episode: 084, Reward: 157\n",
      "INFO:root:Episode: 085, Reward: 135\n",
      "INFO:root:Episode: 086, Reward: 106\n",
      "INFO:root:Episode: 087, Reward: 200\n",
      "INFO:root:Episode: 088, Reward: 168\n",
      "INFO:root:Episode: 089, Reward: 086\n",
      "INFO:root:Episode: 090, Reward: 072\n",
      "INFO:root:Episode: 091, Reward: 161\n",
      "INFO:root:Episode: 092, Reward: 156\n",
      "INFO:root:Episode: 093, Reward: 158\n",
      "INFO:root:Episode: 094, Reward: 200\n",
      "INFO:root:Episode: 095, Reward: 200\n",
      "INFO:root:Episode: 096, Reward: 200\n",
      "INFO:root:Episode: 097, Reward: 200\n",
      "INFO:root:Episode: 098, Reward: 200\n",
      "INFO:root:Episode: 099, Reward: 200\n",
      "INFO:root:Episode: 100, Reward: 200\n",
      "INFO:root:Episode: 101, Reward: 200\n",
      "INFO:root:Episode: 102, Reward: 200\n",
      "INFO:root:Episode: 103, Reward: 200\n",
      "INFO:root:Episode: 104, Reward: 200\n",
      "INFO:root:Episode: 105, Reward: 200\n",
      "INFO:root:Episode: 106, Reward: 200\n",
      "INFO:root:Episode: 107, Reward: 200\n",
      "INFO:root:Episode: 108, Reward: 150\n",
      "INFO:root:Episode: 109, Reward: 122\n",
      "INFO:root:Episode: 110, Reward: 146\n",
      "INFO:root:Episode: 111, Reward: 065\n",
      "INFO:root:Episode: 112, Reward: 065\n",
      "INFO:root:Episode: 113, Reward: 131\n",
      "INFO:root:Episode: 114, Reward: 071\n",
      "INFO:root:Episode: 115, Reward: 057\n",
      "INFO:root:Episode: 116, Reward: 195\n",
      "INFO:root:Episode: 117, Reward: 130\n",
      "INFO:root:Episode: 118, Reward: 149\n",
      "INFO:root:Episode: 119, Reward: 186\n",
      "INFO:root:Episode: 120, Reward: 177\n",
      "INFO:root:Episode: 121, Reward: 151\n",
      "INFO:root:Episode: 122, Reward: 110\n",
      "INFO:root:Episode: 123, Reward: 172\n",
      "INFO:root:Episode: 124, Reward: 144\n",
      "INFO:root:Episode: 125, Reward: 200\n",
      "INFO:root:Episode: 126, Reward: 200\n",
      "INFO:root:Episode: 127, Reward: 147\n",
      "INFO:root:Episode: 128, Reward: 156\n",
      "INFO:root:Episode: 129, Reward: 186\n"
     ],
     "output_type": "stream"
    }
   ],
   "source": [
    "with tf.Graph().as_default():\n",
    "    print(\"Eager Execution:\", tf.executing_eagerly()) # False\n",
    "\n",
    "    model = Model(num_actions=env.action_space.n)\n",
    "    agent = A2CAgent(model)\n",
    "\n",
    "    rewards_history = agent.train(env)\n",
    "    print(\"Finished training! Testing...\")\n",
    "    print(\"Total Episode Reward: %d out of 200\" % agent.test(env))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Benchmarks\n",
    "\n",
    "Note: wall time doesn't show the whole picture; it's better to compare CPU time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [],
   "source": [
    "# Generate 100k observations to run benchmarks on.\n",
    "env = gym.make('CartPole-v0')\n",
    "obs = np.repeat(env.reset()[None, :], 100000, axis=0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Eager Benchmark"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "text": [
      "Eager Execution:   True\n",
      "Eager Keras Model: True\n",
      "CPU times: user 24 ms, sys: 12.7 ms, total: 36.7 ms\n",
      "Wall time: 35.1 ms\n"
     ],
     "output_type": "stream"
    }
   ],
   "source": [
    "%%time\n",
    "\n",
    "model = Model(env.action_space.n)\n",
    "model.run_eagerly = True\n",
    "\n",
    "print(\"Eager Execution:  \", tf.executing_eagerly())\n",
    "print(\"Eager Keras Model:\", model.run_eagerly)\n",
    "\n",
    "_ = model.predict_on_batch(obs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Static Benchmark"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "text": [
      "Eager Execution:   False\n",
      "Eager Keras Model: False\n",
      "CPU times: user 81.6 ms, sys: 21 ms, total: 103 ms\n",
      "Wall time: 99.6 ms\n"
     ],
     "output_type": "stream"
    }
   ],
   "source": [
    "%%time\n",
    "\n",
    "with tf.Graph().as_default():\n",
    "    model = Model(env.action_space.n)\n",
    "\n",
    "    print(\"Eager Execution:  \", tf.executing_eagerly())\n",
    "    print(\"Eager Keras Model:\", model.run_eagerly)\n",
    "\n",
    "    _ = model.predict_on_batch(obs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Default Benchmark"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "text": [
      "Eager Execution:   True\n",
      "Eager Keras Model: False\n",
      "CPU times: user 54.2 ms, sys: 4.58 ms, total: 58.7 ms\n",
      "Wall time: 56 ms\n"
     ],
     "output_type": "stream"
    }
   ],
   "source": [
    "%%time\n",
    "\n",
    "model = Model(env.action_space.n)\n",
    "\n",
    "print(\"Eager Execution:  \", tf.executing_eagerly())\n",
    "print(\"Eager Keras Model:\", model.run_eagerly)\n",
    "\n",
    "_ = model.predict_on_batch(obs)"
   ]
  }
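,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough cross-check (a sketch assuming the `model` and `obs` defined in the cells above), CPU time can also be measured directly with the standard-library `time.process_time`, which counts processor time and excludes time spent waiting:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "start = time.process_time()  # CPU time only; ignores sleeps and I/O waits\n",
    "_ = model.predict_on_batch(obs)\n",
    "print(\"CPU seconds: %.3f\" % (time.process_time() - start))"
   ]
  }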
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  },
  "pycharm": {
   "stem_cell": {
    "cell_type": "raw",
    "source": [],
    "metadata": {
     "collapsed": false
    }
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}

================================================
FILE: requirements.txt
================================================
gym
matplotlib
numpy>=1.16
tensorflow>=2.0.0,<=2.1.0