Repository: llSourcell/A-Guide-to-DeepMinds-StarCraft-AI-Environment
Branch: master
Commit: cd8bd7ac637d
Files: 14
Total size: 69.9 KB
Directory structure:
gitextract_etjspkov/
├── A Guide to DeepMind's StarCraft AI Environment.ipynb
├── LICENSE
├── README.md
├── deepq_mineral_shards.py
├── defeat_zerglings/
│ ├── common.py
│ ├── demo_agent.py
│ ├── dqfd.py
│ ├── run_demo_agent.py
│ └── train.py
├── enjoy_mineral_shards.py
├── maps/
│ └── chris_maps.py
├── mineral_shards.pkl
├── tests/
│ └── scripted_test.py
└── train_mineral_shards.py
================================================
FILE CONTENTS
================================================
================================================
FILE: A Guide to DeepMind's StarCraft AI Environment.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# A Guide to DeepMind's StarCraft AI Environment\n",
"\n",
"## Demo -- We're going to setup and install the necessary tools to run a pretrained Deep Q Network model on the CollectMineralShards mini-game of DeepMind's StarCraft II Environment.\n",
"\n",
"\n",
"\n",
"## History\n",
"\n",
"Deepmind already beat Atari Games with the Deep Q Learner\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"Then they beat the \"unbeatable\" game of \"Go\" with AlphaGo\n",
"\n",
"\n",
"\n",
"\n",
"And now they've set their sights on Starcraft. For an AI to play StarCraft well, it'll need\n",
"\n",
"- An effective use of memory\n",
"- an ability to plan over a long time\n",
"- The capacity to adapt plans based on new information. \n",
"- To execute something as simple as “expand your base to some location”, one must coordinate mouse clicks, camera, and available resources. This makes actions and planning hierarchical, which is a challenging aspect of Reinforcement Learning.\n",
"\n",
"Blizzard's StarCraft II API is an interface that provides full external control of StarCraft II.\n",
"\n",
"This API exposes functionality for developing software for:\n",
"\n",
"- Scripted bots.\n",
"- Machine-learning based bots.\n",
"- Replay analysis.\n",
"- Tool assisted human play.\n",
"\n",
"DeepMind's PySC2 - StarCraft II Learning Environment exposes it as a Python RL Environment. \n",
"\n",
"- A Machine Learning API developed by Blizzard that gives researchers and developers hooks into the game. This includes the release of tools for Linux for the first time.\n",
"- A dataset of anonymised game replays, which will increase from 65k to more than half a million in the coming weeks. \n",
"- An open source version of DeepMind’s toolset, PySC2, to allow researchers to easily use Blizzard’s feature-layer API with their agents.\n",
"- A series of simple RL mini-games to allow researchers to test the performance of agents on specific tasks.\n",
"- A joint paper that outlines the environment, and reports initial baseline results on the mini-games, supervised learning from replays, and the full 1v1 ladder game against the built-in AI.\n",
"\n",
"Starcraft II is a real-time strategy game developed by Blizzard entertainment, otherwise known as the makers of World of Warcraft. It's the sequel to Starcraft, a game from 1998 that many regard as one of the greatest PC games ever released. Even now, over a decade on, it's still played regularly by people all over the world; in Korea, it's so popular that there are professional leagues dedicated solely to playing the game.\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation Steps\n",
"\n",
"\n",
"Steps\n",
"\n",
"1) Install pysc2\n",
"\n",
"2) Clone pysc2-examples repository\n",
"\n",
"3) Download mini-games StarCraft II Maps\n",
"\n",
"4) Install Tensorflow, baselines libraries\n",
"\n",
"5) Open the project with IntelliJ \n",
"\n",
"6) Run the training script\n",
"\n",
"7) Run the pre-trained model\n",
"\n",
"\n",
"\n",
"## Step 1 - Install pysc2\n",
"\n",
"`pip3 install pysc2`\n",
"\n",
"\n",
"## Step 2 - Git Clone psyc2 examples\n",
"\n",
"`git clone https://github.com/llSourcell/A-Guide-to-DeepMinds-StarCraft-AI-Environment`\n",
"\n",
"## Step 3 - Download mini-games StarCraft II Maps\n",
"\n",
"https://github.com/deepmind/pysc2/releases/download/v1.0/mini_games.zip\n",
"\n",
"save these maps to StarCraft II/Maps \n",
"\n",
"## Step 4 - Install Tensorflow + OpenAI Baselines\n",
"\n",
"`pip3 install tensorflow`\n",
"`pip3 install baselines`\n",
"\n",
"## Step 5 - Open the Project with Intellij\n",
"\n",
"### Start training\n",
"\n",
"`python3 train_mineral_shards.py`\n",
"\n",
"### Open project , Python 3 SDK \n",
"\n",
"## Step 6 Run training script\n",
"\n",
"Right click the train_mineral_shards.py and select [Run 'train_mineral_shards'] menu.\n",
"\n",
"This is the brief explanation of console logs.\n",
"\n",
"- steps : The number of commands that we sent to marines.\n",
"- episodes : The number of games that we played.\n",
"- mean 100 episode reward : mean rewards of last 100 episodes.\n",
"- mean 100 episode min… : mean minerals of last 100 episodes.\n",
"- % time spent exploring : The percentage of Exploring (Exploration & Exploit)\n",
"\n",
"\n",
"## Step 7 Run pre-trained model\n",
"\n",
"- Right click the enjoy_mineral_shards.py and select [Run 'enjoy_mineral_shards'] menu.\n",
"\n",
"Then we can see the pre-trained agent of CollectMineralShards map.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: README.md
================================================
# A Guide to DeepMind's StarCraft AI Environment
This is the code for "A Guide to DeepMind's StarCraft AI Environment" by Siraj Raval on Youtube
## Overview
This is the code for [this](https://youtu.be/URWXG5jRB-A) video on YouTube by Siraj Raval. This code will help you train or run a pretrained AI model in DeepMind's StarCraft II environment.
## Dependencies
- pysc2 (Deepmind) [https://github.com/deepmind/pysc2]
- baselines (OpenAI) [https://github.com/openai/baselines]
- s2client-proto (Blizzard) [https://github.com/Blizzard/s2client-proto]
- TensorFlow 1.3 (Google) [https://github.com/tensorflow/tensorflow]
## Usage
## 1. Get PySC2
### PyPI
The easiest way to get PySC2 is to use pip:
```shell
$ pip install pysc2
```
You also have to install the `baselines` library and TensorFlow (the dependency list above targets TensorFlow 1.3):
```shell
$ pip install baselines
$ pip install tensorflow
```
## 2. Install StarCraft II
### Mac / Win
You have to install StarCraft II. A purchased copy works, and even the free Starter Edition is enough.
http://us.battle.net/sc2/en/legacy-of-the-void/
### Linux Packages
Follow Blizzard's [documentation](https://github.com/Blizzard/s2client-proto#downloads) to
get the Linux version. By default, PySC2 expects the game to live in
`~/StarCraftII/`.
* [3.16.1](http://blzdistsc2-a.akamaihd.net/Linux/SC2.3.16.1.zip)
## 3. Download Maps
Download the [ladder maps](https://github.com/Blizzard/s2client-proto#downloads)
and the [mini games](https://github.com/deepmind/pysc2/releases/download/v1.0/mini_games.zip)
and extract them to your `StarCraftII/Maps/` directory.
## 4. Train it!
```shell
$ python train_mineral_shards.py
```
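
Under the hood, `train_mineral_shards.py` wires a PySC2 environment into `deepq_mineral_shards.learn`. A rough sketch of that flow (the map name wiring follows the repo; the `step_mul` value and hyperparameters here are illustrative, modeled on `defeat_zerglings/train.py` rather than the exact script):

```python
from baselines import deepq
from pysc2.env import sc2_env

import deepq_mineral_shards

with sc2_env.SC2Env(
    "CollectMineralShards",
    step_mul=8,
    visualize=True) as env:
  model = deepq.models.cnn_to_mlp(
      convs=[(32, 8, 4), (64, 4, 2), (64, 3, 1)],
      hiddens=[256],
      dueling=True)
  act = deepq_mineral_shards.learn(env, q_func=model, num_actions=4)
  act.save("mineral_shards.pkl")
```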
## 5. Enjoy it!
```shell
$ python enjoy_mineral_shards.py
```
## Credits
The credits for this code go to [chris-chris](https://github.com/chris-chris/pysc2-examples). I've merely created a wrapper to get people started.
================================================
FILE: deepq_mineral_shards.py
================================================
import numpy as np
import os
import dill
import tempfile
import tensorflow as tf
import zipfile
import baselines.common.tf_util as U
from baselines import logger
from baselines.common.schedules import LinearSchedule
from baselines import deepq
from baselines.deepq.replay_buffer import ReplayBuffer, PrioritizedReplayBuffer
from pysc2.lib import actions as sc2_actions
from pysc2.env import environment
from pysc2.lib import features
from pysc2.lib import actions
import gflags as flags
_PLAYER_RELATIVE = features.SCREEN_FEATURES.player_relative.index
_PLAYER_FRIENDLY = 1
_PLAYER_NEUTRAL = 3 # beacon/minerals
_PLAYER_HOSTILE = 4
_NO_OP = actions.FUNCTIONS.no_op.id
_MOVE_SCREEN = actions.FUNCTIONS.Move_screen.id
_ATTACK_SCREEN = actions.FUNCTIONS.Attack_screen.id
_SELECT_ARMY = actions.FUNCTIONS.select_army.id
_NOT_QUEUED = [0]
_SELECT_ALL = [0]
FLAGS = flags.FLAGS
class ActWrapper(object):
def __init__(self, act):
self._act = act
#self._act_params = act_params
@staticmethod
def load(path, act_params, num_cpu=16):
with open(path, "rb") as f:
model_data = dill.load(f)
act = deepq.build_act(**act_params)
sess = U.make_session(num_cpu=num_cpu)
sess.__enter__()
with tempfile.TemporaryDirectory() as td:
arc_path = os.path.join(td, "packed.zip")
with open(arc_path, "wb") as f:
f.write(model_data)
zipfile.ZipFile(arc_path, 'r', zipfile.ZIP_DEFLATED).extractall(td)
U.load_state(os.path.join(td, "model"))
return ActWrapper(act)
def __call__(self, *args, **kwargs):
return self._act(*args, **kwargs)
def save(self, path):
"""Save model to a pickle located at `path`"""
with tempfile.TemporaryDirectory() as td:
U.save_state(os.path.join(td, "model"))
arc_name = os.path.join(td, "packed.zip")
with zipfile.ZipFile(arc_name, 'w') as zipf:
for root, dirs, files in os.walk(td):
for fname in files:
file_path = os.path.join(root, fname)
if file_path != arc_name:
zipf.write(file_path, os.path.relpath(file_path, td))
with open(arc_name, "rb") as f:
model_data = f.read()
with open(path, "wb") as f:
dill.dump((model_data), f)
def load(path, act_params, num_cpu=16):
"""Load act function that was returned by learn function.
Parameters
----------
path: str
path to the act function pickle
num_cpu: int
number of cpus to use for executing the policy
Returns
-------
act: ActWrapper
function that takes a batch of observations
and returns actions.
"""
return ActWrapper.load(path, num_cpu=num_cpu, act_params=act_params)
def learn(env,
q_func,
num_actions=4,
lr=5e-4,
max_timesteps=100000,
buffer_size=50000,
exploration_fraction=0.1,
exploration_final_eps=0.02,
train_freq=1,
batch_size=32,
print_freq=1,
checkpoint_freq=10000,
learning_starts=1000,
gamma=1.0,
target_network_update_freq=500,
prioritized_replay=False,
prioritized_replay_alpha=0.6,
prioritized_replay_beta0=0.4,
prioritized_replay_beta_iters=None,
prioritized_replay_eps=1e-6,
num_cpu=16,
param_noise=False,
param_noise_threshold=0.05,
callback=None):
"""Train a deepq model.
Parameters
-------
env: pysc2.env.SC2Env
environment to train on
q_func: (tf.Variable, int, str, bool) -> tf.Variable
the model that takes the following inputs:
observation_in: object
the output of observation placeholder
num_actions: int
number of actions
scope: str
reuse: bool
should be passed to outer variable scope
and returns a tensor of shape (batch_size, num_actions) with values of every action.
lr: float
learning rate for adam optimizer
max_timesteps: int
number of env steps to optimize for
buffer_size: int
size of the replay buffer
exploration_fraction: float
fraction of entire training period over which the exploration rate is annealed
exploration_final_eps: float
final value of random action probability
train_freq: int
update the model every `train_freq` steps.
batch_size: int
size of a batch sampled from the replay buffer for training
print_freq: int
how often to print out training progress
set to None to disable printing
checkpoint_freq: int
how often to save the model. This is so that the best version is restored
at the end of the training. If you do not wish to restore the best version at
the end of the training set this variable to None.
learning_starts: int
how many steps of the model to collect transitions for before learning starts
gamma: float
discount factor
target_network_update_freq: int
update the target network every `target_network_update_freq` steps.
prioritized_replay: bool
if True prioritized replay buffer will be used.
prioritized_replay_alpha: float
alpha parameter for prioritized replay buffer
prioritized_replay_beta0: float
initial value of beta for prioritized replay buffer
prioritized_replay_beta_iters: int
number of iterations over which beta will be annealed from initial value
to 1.0. If set to None, it defaults to max_timesteps.
prioritized_replay_eps: float
epsilon to add to the TD errors when updating priorities.
num_cpu: int
number of cpus to use for training
callback: (locals, globals) -> None
function called at every step with the state of the algorithm.
If callback returns true training stops.
Returns
-------
act: ActWrapper
Wrapper over act function. Adds ability to save it and load it.
See header of baselines/deepq/categorical.py for details on the act function.
"""
# Create all the functions necessary to train the model
sess = U.make_session(num_cpu=num_cpu)
sess.__enter__()
def make_obs_ph(name):
return U.BatchInput((64, 64), name=name)
act, train, update_target, debug = deepq.build_train(
make_obs_ph=make_obs_ph,
q_func=q_func,
num_actions=num_actions,
optimizer=tf.train.AdamOptimizer(learning_rate=lr),
gamma=gamma,
grad_norm_clipping=10
)
act_params = {
'make_obs_ph': make_obs_ph,
'q_func': q_func,
'num_actions': num_actions,
}
# Create the replay buffer
if prioritized_replay:
replay_buffer = PrioritizedReplayBuffer(buffer_size, alpha=prioritized_replay_alpha)
if prioritized_replay_beta_iters is None:
prioritized_replay_beta_iters = max_timesteps
beta_schedule = LinearSchedule(prioritized_replay_beta_iters,
initial_p=prioritized_replay_beta0,
final_p=1.0)
else:
replay_buffer = ReplayBuffer(buffer_size)
beta_schedule = None
# Create the schedule for exploration starting from 1.
exploration = LinearSchedule(schedule_timesteps=int(exploration_fraction * max_timesteps),
initial_p=1.0,
final_p=exploration_final_eps)
# Initialize the parameters and copy them to the target network.
U.initialize()
update_target()
episode_rewards = [0.0]
#episode_minerals = [0.0]
saved_mean_reward = None
path_memory = np.zeros((64,64))
obs = env.reset()
# Select all marines first
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_ARMY, [_SELECT_ALL])])
player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]
screen = player_relative + path_memory
player_y, player_x = (player_relative == _PLAYER_FRIENDLY).nonzero()
player = [int(player_x.mean()), int(player_y.mean())]
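# Re-centre the view on the marines' mean position so the network always sees
# an ego-centric 64x64 screen; cells shifted in from off-screen become -2.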
if(player[0]>32):
screen = shift(LEFT, player[0]-32, screen)
elif(player[0]<32):
screen = shift(RIGHT, 32 - player[0], screen)
if(player[1]>32):
screen = shift(UP, player[1]-32, screen)
elif(player[1]<32):
screen = shift(DOWN, 32 - player[1], screen)
reset = True
with tempfile.TemporaryDirectory() as td:
model_saved = False
model_file = os.path.join(td, "model")
for t in range(max_timesteps):
if callback is not None:
if callback(locals(), globals()):
break
# Take action and update exploration to the newest value
kwargs = {}
if not param_noise:
update_eps = exploration.value(t)
update_param_noise_threshold = 0.
else:
update_eps = 0.
if param_noise_threshold >= 0.:
update_param_noise_threshold = param_noise_threshold
else:
# Compute the threshold such that the KL divergence between perturbed and non-perturbed
# policy is comparable to eps-greedy exploration with eps = exploration.value(t).
# See Appendix C.1 in Parameter Space Noise for Exploration, Plappert et al., 2017
# for detailed explanation.
update_param_noise_threshold = -np.log(1. - exploration.value(t) + exploration.value(t) / float(num_actions))
kwargs['reset'] = reset
kwargs['update_param_noise_threshold'] = update_param_noise_threshold
kwargs['update_param_noise_scale'] = True
action = act(np.array(screen)[None], update_eps=update_eps, **kwargs)[0]
reset = False
coord = [player[0], player[1]]
rew = 0
path_memory_ = np.array(path_memory, copy=True)
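# Map the discrete action (0=UP, 1=DOWN, 2=LEFT, 3=RIGHT) to a move target
# 16 pixels away, clamped to the 64x64 screen, and mark the traversed cells
# in path_memory_ with -1 so already-visited ground looks different.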
if(action == 0): #UP
if(player[1] >= 16):
coord = [player[0], player[1] - 16]
path_memory_[player[1] - 16 : player[1], player[0]] = -1
elif(player[1] > 0):
coord = [player[0], 0]
path_memory_[0 : player[1], player[0]] = -1
#else:
# rew -= 1
elif(action == 1): #DOWN
if(player[1] <= 47):
coord = [player[0], player[1] + 16]
path_memory_[player[1] : player[1] + 16, player[0]] = -1
elif(player[1] > 47):
coord = [player[0], 63]
path_memory_[player[1] : 63, player[0]] = -1
#else:
# rew -= 1
elif(action == 2): #LEFT
if(player[0] >= 16):
coord = [player[0] - 16, player[1]]
path_memory_[player[1], player[0] - 16 : player[0]] = -1
elif(player[0] < 16):
coord = [0, player[1]]
path_memory_[player[1], 0 : player[0]] = -1
#else:
# rew -= 1
elif(action == 3): #RIGHT
if(player[0] <= 47):
coord = [player[0] + 16, player[1]]
path_memory_[player[1], player[0] : player[0] + 16] = -1
elif(player[0] > 47):
coord = [63, player[1]]
path_memory_[player[1], player[0] : 63] = -1
#else:
# rew -= 1
#else:
#Cannot move, give minus reward
# rew -= 1
#if(path_memory[coord[1],coord[0]] != 0):
# rew -= 0.5
path_memory = np.array(path_memory_)
#print("action : %s Coord : %s" % (action, coord))
if _MOVE_SCREEN not in obs[0].observation["available_actions"]:
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_ARMY, [_SELECT_ALL])])
new_action = [sc2_actions.FunctionCall(_MOVE_SCREEN, [_NOT_QUEUED, coord])]
# else:
# new_action = [sc2_actions.FunctionCall(_NO_OP, [])]
obs = env.step(actions=new_action)
player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]
new_screen = player_relative + path_memory
player_y, player_x = (player_relative == _PLAYER_FRIENDLY).nonzero()
player = [int(player_x.mean()), int(player_y.mean())]
if(player[0]>32):
new_screen = shift(LEFT, player[0]-32, new_screen)
elif(player[0]<32):
new_screen = shift(RIGHT, 32 - player[0], new_screen)
if(player[1]>32):
new_screen = shift(UP, player[1]-32, new_screen)
elif(player[1]<32):
new_screen = shift(DOWN, 32 - player[1], new_screen)
rew = obs[0].reward
done = obs[0].step_type == environment.StepType.LAST
# Store transition in the replay buffer.
replay_buffer.add(screen, action, rew, new_screen, float(done))
screen = new_screen
episode_rewards[-1] += rew
#episode_minerals[-1] += obs[0].reward
if done:
obs = env.reset()
player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]
screen = player_relative + path_memory
player_y, player_x = (player_relative == _PLAYER_FRIENDLY).nonzero()
player = [int(player_x.mean()), int(player_y.mean())]
if(player[0]>32):
screen = shift(LEFT, player[0]-32, screen)
elif(player[0]<32):
screen = shift(RIGHT, 32 - player[0], screen)
if(player[1]>32):
screen = shift(UP, player[1]-32, screen)
elif(player[1]<32):
screen = shift(DOWN, 32 - player[1], screen)
# Select all marines first
env.step(actions=[sc2_actions.FunctionCall(_SELECT_ARMY, [_SELECT_ALL])])
episode_rewards.append(0.0)
#episode_minerals.append(0.0)
path_memory = np.zeros((64,64))
reset = True
if t > learning_starts and t % train_freq == 0:
# Minimize the error in Bellman's equation on a batch sampled from replay buffer.
if prioritized_replay:
experience = replay_buffer.sample(batch_size, beta=beta_schedule.value(t))
(obses_t, actions, rewards, obses_tp1, dones, weights, batch_idxes) = experience
else:
obses_t, actions, rewards, obses_tp1, dones = replay_buffer.sample(batch_size)
weights, batch_idxes = np.ones_like(rewards), None
td_errors = train(obses_t, actions, rewards, obses_tp1, dones, weights)
if prioritized_replay:
new_priorities = np.abs(td_errors) + prioritized_replay_eps
replay_buffer.update_priorities(batch_idxes, new_priorities)
if t > learning_starts and t % target_network_update_freq == 0:
# Update target network periodically.
update_target()
mean_100ep_reward = round(np.mean(episode_rewards[-101:-1]), 1)
#mean_100ep_mineral = round(np.mean(episode_minerals[-101:-1]), 1)
num_episodes = len(episode_rewards)
if done and print_freq is not None and len(episode_rewards) % print_freq == 0:
logger.record_tabular("steps", t)
logger.record_tabular("episodes", num_episodes)
logger.record_tabular("mean 100 episode reward", mean_100ep_reward)
#logger.record_tabular("mean 100 episode mineral", mean_100ep_mineral)
logger.record_tabular("% time spent exploring", int(100 * exploration.value(t)))
logger.dump_tabular()
if (checkpoint_freq is not None and t > learning_starts and
num_episodes > 100 and t % checkpoint_freq == 0):
if saved_mean_reward is None or mean_100ep_reward > saved_mean_reward:
if print_freq is not None:
logger.log("Saving model due to mean reward increase: {} -> {}".format(
saved_mean_reward, mean_100ep_reward))
U.save_state(model_file)
model_saved = True
saved_mean_reward = mean_100ep_reward
if model_saved:
if print_freq is not None:
logger.log("Restored model with mean reward: {}".format(saved_mean_reward))
U.load_state(model_file)
return ActWrapper(act)
def intToCoordinate(num, size=64):
if size!=64:
num = num * size * size // 4096
y = num // size
x = num - size * y
return [x, y]
UP, DOWN, LEFT, RIGHT = 'up', 'down', 'left', 'right'
def shift(direction, number, matrix):
''' shift a copy of the given 2D matrix by the given number of rows or
columns in the specified (UP, DOWN, LEFT, RIGHT) direction, mark the
cells that wrapped around with -2, and return it (np.roll returns a
copy, so the input matrix is not modified in place)
'''
if direction == UP:
matrix = np.roll(matrix, -number, axis=0)
matrix[-number:, :] = -2
return matrix
elif direction == DOWN:
matrix = np.roll(matrix, number, axis=0)
matrix[:number, :] = -2
return matrix
elif direction == LEFT:
matrix = np.roll(matrix, -number, axis=1)
matrix[:, -number:] = -2
return matrix
elif direction == RIGHT:
matrix = np.roll(matrix, number, axis=1)
matrix[:, :number] = -2
return matrix
else:
return matrix
================================================
FILE: defeat_zerglings/common.py
================================================
import numpy as np
from pysc2.lib import actions as sc2_actions
from pysc2.lib import features
from pysc2.lib import actions
_PLAYER_RELATIVE = features.SCREEN_FEATURES.player_relative.index
_UNIT_TYPE = features.SCREEN_FEATURES.unit_type.index
_SELECTED = features.SCREEN_FEATURES.selected.index
_PLAYER_FRIENDLY = 1
_PLAYER_NEUTRAL = 3 # beacon/minerals
_PLAYER_HOSTILE = 4
_NO_OP = actions.FUNCTIONS.no_op.id
_SELECT_UNIT_ID = 1
_CONTROL_GROUP_SET = 1
_CONTROL_GROUP_RECALL = 0
_SELECT_CONTROL_GROUP = actions.FUNCTIONS.select_control_group.id
_MOVE_SCREEN = actions.FUNCTIONS.Move_screen.id
_ATTACK_SCREEN = actions.FUNCTIONS.Attack_screen.id
_SELECT_ARMY = actions.FUNCTIONS.select_army.id
_SELECT_UNIT = actions.FUNCTIONS.select_unit.id
_SELECT_POINT = actions.FUNCTIONS.select_point.id
_NOT_QUEUED = [0]
_SELECT_ALL = [0]
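# init: split the friendly marines into individual control groups (0-9) by
# sampling every 4th friendly screen pixel, so that single marines can later
# be recalled and commanded one at a time.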
def init(env, player_relative, obs):
#print("init")
army_count = env._obs.observation.player_common.army_count
if(army_count==0):
return obs
try:
obs = env.step(actions=[sc2_actions.FunctionCall(_NO_OP, [])])
obs = env.step(actions=[sc2_actions.FunctionCall(_NO_OP, [])])
obs = env.step(actions=[sc2_actions.FunctionCall(_NO_OP, [])])
obs = env.step(actions=[sc2_actions.FunctionCall(_NO_OP, [])])
obs = env.step(actions=[sc2_actions.FunctionCall(_NO_OP, [])])
player_y, player_x = (player_relative == _PLAYER_FRIENDLY).nonzero()
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_ARMY, [_SELECT_ALL])])
except Exception as e:
print(e)
for i in range(len(player_x)):
if i % 4 != 0:
continue
xy = [player_x[i], player_y[i]]
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_POINT, [[0], xy])])
group_id = 0
group_list = []
unit_xy_list = []
for i in range(len(player_x)):
if i % 4 != 0:
continue
if group_id > 9:
break
xy = [player_x[i], player_y[i]]
unit_xy_list.append(xy)
if(len(unit_xy_list) >= 1):
for idx, xy in enumerate(unit_xy_list):
if(idx==0):
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_POINT, [[0], xy])])
else:
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_POINT, [[1], xy])])
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_CONTROL_GROUP, [[_CONTROL_GROUP_SET], [group_id]])])
unit_xy_list = []
group_list.append(group_id)
group_id += 1
if(len(unit_xy_list) >= 1):
for idx, xy in enumerate(unit_xy_list):
if(idx==0):
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_POINT, [[0], xy])])
else:
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_POINT, [[1], xy])])
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_CONTROL_GROUP, [[_CONTROL_GROUP_SET], [group_id]])])
group_list.append(group_id)
group_id += 1
return obs
def update_group_list(obs):
control_groups = obs[0].observation["control_groups"]
group_count = 0
group_list = []
for id, group in enumerate(control_groups):
if(group[0]!=0):
group_count += 1
group_list.append(id)
return group_list
def check_group_list(env, obs):
error = False
control_groups = obs[0].observation["control_groups"]
army_count = 0
for id, group in enumerate(control_groups):
if(group[0]==48):
army_count += group[1]
if(group[1] != 1):
#print("group error group_id : %s count : %s" % (id, group[1]))
error = True
return error
if(army_count != env._obs.observation.player_common.army_count):
error = True
# print("army_count %s != %s env._obs.observation.player_common.army_count "
# % (army_count, env._obs.observation.player_common.army_count))
return error
UP, DOWN, LEFT, RIGHT = 'up', 'down', 'left', 'right'
def shift(direction, number, matrix):
''' shift a copy of the given 2D matrix by the given number of rows or
columns in the specified (UP, DOWN, LEFT, RIGHT) direction, mark the
cells that wrapped around with -2, and return it (np.roll returns a
copy, so the input matrix is not modified in place)
'''
if direction == UP:
matrix = np.roll(matrix, -number, axis=0)
matrix[-number:, :] = -2
return matrix
elif direction == DOWN:
matrix = np.roll(matrix, number, axis=0)
matrix[:number, :] = -2
return matrix
elif direction == LEFT:
matrix = np.roll(matrix, -number, axis=1)
matrix[:, -number:] = -2
return matrix
elif direction == RIGHT:
matrix = np.roll(matrix, number, axis=1)
matrix[:, :number] = -2
return matrix
else:
return matrix
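# select_marine: pick one marine to control -- prefer the marine closest to an
# enemy (within 5 px), then a marine crowding a teammate (within 3 px),
# otherwise recall a random control group. Returns (obs, centred screen, player).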
def select_marine(env, obs):
player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]
screen = player_relative
group_list = update_group_list(obs)
if(check_group_list(env, obs)):
obs = init(env, player_relative, obs)
group_list = update_group_list(obs)
# if(len(group_list) == 0):
# obs = init(env, player_relative, obs)
# group_list = update_group_list(obs)
player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]
friendly_y, friendly_x = (player_relative == _PLAYER_FRIENDLY).nonzero()
enemy_y, enemy_x = (player_relative == _PLAYER_HOSTILE).nonzero()
player = []
danger_closest, danger_min_dist = None, None
for e in zip(enemy_x, enemy_y):
for p in zip(friendly_x, friendly_y):
dist = np.linalg.norm(np.array(p) - np.array(e))
if not danger_min_dist or dist < danger_min_dist:
danger_closest, danger_min_dist = p, dist
marine_closest, marine_min_dist = None, None
for e in zip(friendly_x, friendly_y):
for p in zip(friendly_x, friendly_y):
dist = np.linalg.norm(np.array(p) - np.array(e))
if not marine_min_dist or dist < marine_min_dist:
if dist >= 2:
marine_closest, marine_min_dist = p, dist
if(danger_min_dist != None and danger_min_dist <= 5):
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_POINT, [[0], danger_closest])])
selected = obs[0].observation["screen"][_SELECTED]
player_y, player_x = (selected == _PLAYER_FRIENDLY).nonzero()
if(len(player_y)>0):
player = [int(player_x.mean()), int(player_y.mean())]
elif(marine_closest != None and marine_min_dist <= 3):
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_POINT, [[0], marine_closest])])
selected = obs[0].observation["screen"][_SELECTED]
player_y, player_x = (selected == _PLAYER_FRIENDLY).nonzero()
if(len(player_y)>0):
player = [int(player_x.mean()), int(player_y.mean())]
else:
# If there is no marine in danger, select random
while(len(group_list)>0):
# units = env._obs.observation.raw_data.units
# marine_list = [] # for unit in units:
# if(unit.alliance == 1):
# marine_list.append(unit)
group_id = np.random.choice(group_list)
#xy = [int(unit.pos.y - 10), int(unit.pos.x+8)]
#print("check xy : %s - %s" % (xy, player_relative[xy[0],xy[1]]))
obs = env.step(actions=[sc2_actions.FunctionCall(_SELECT_CONTROL_GROUP, [[_CONTROL_GROUP_RECALL], [group_id]])])
selected = obs[0].observation["screen"][_SELECTED]
player_y, player_x = (selected == _PLAYER_FRIENDLY).nonzero()
if(len(player_y)>0):
player = [int(player_x.mean()), int(player_y.mean())]
break
else:
group_list.remove(group_id)
if(len(player) == 2):
if(player[0]>32):
screen = shift(LEFT, player[0]-32, screen)
elif(player[0]<32):
screen = shift(RIGHT, 32 - player[0], screen)
if(player[1]>32):
screen = shift(UP, player[1]-32, screen)
elif(player[1]<32):
screen = shift(DOWN, 32 - player[1], screen)
return obs, screen, player
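# marine_action: translate a discrete action id into a pysc2 FunctionCall for
# the selected marine -- 0: spread away from a too-close friendly marine
# (otherwise attack), 1: attack the closest enemy, 2: flee the closest enemy,
# 4-7: step 3 px up/down/left/right.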
def marine_action(env, obs, player, action):
player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]
enemy_y, enemy_x = (player_relative == _PLAYER_HOSTILE).nonzero()
closest, min_dist = None, None
if(len(player) == 2):
for p in zip(enemy_x, enemy_y):
dist = np.linalg.norm(np.array(player) - np.array(p))
if not min_dist or dist < min_dist:
closest, min_dist = p, dist
player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]
friendly_y, friendly_x = (player_relative == _PLAYER_FRIENDLY).nonzero()
closest_friend, min_dist_friend = None, None
if(len(player) == 2):
for p in zip(friendly_x, friendly_y):
dist = np.linalg.norm(np.array(player) - np.array(p))
if not min_dist_friend or dist < min_dist_friend:
closest_friend, min_dist_friend = p, dist
if(closest == None):
new_action = [sc2_actions.FunctionCall(_NO_OP, [])]
elif(action == 0 and closest_friend != None and min_dist_friend < 3):
# Friendly marine is too close => Sparse!
mean_friend = [int(friendly_x.mean()), int(friendly_y.mean())]
diff = np.array(player) - np.array(closest_friend)
norm = np.linalg.norm(diff)
if(norm != 0):
diff = diff / norm
coord = np.array(player) + diff * 4
if(coord[0]<0):
coord[0] = 0
elif(coord[0]>63):
coord[0] = 63
if(coord[1]<0):
coord[1] = 0
elif(coord[1]>63):
coord[1] = 63
new_action = [sc2_actions.FunctionCall(_MOVE_SCREEN, [_NOT_QUEUED, coord])]
elif(action <= 1): #Attack
# nearest enemy
coord = closest
new_action = [sc2_actions.FunctionCall(_ATTACK_SCREEN, [_NOT_QUEUED, coord])]
#print("action : %s Attack Coord : %s" % (action, coord))
elif(action == 2): # Opposite direction from enemy
# nearest enemy opposite
diff = np.array(player) - np.array(closest)
norm = np.linalg.norm(diff)
if(norm != 0):
diff = diff / norm
coord = np.array(player) + diff * 7
if(coord[0]<0):
coord[0] = 0
elif(coord[0]>63):
coord[0] = 63
if(coord[1]<0):
coord[1] = 0
elif(coord[1]>63):
coord[1] = 63
new_action = [sc2_actions.FunctionCall(_MOVE_SCREEN, [_NOT_QUEUED, coord])]
elif(action == 4): #UP
coord = [player[0], player[1] - 3]
new_action = [sc2_actions.FunctionCall(_MOVE_SCREEN, [_NOT_QUEUED, coord])]
elif(action == 5): #DOWN
coord = [player[0], player[1] + 3]
new_action = [sc2_actions.FunctionCall(_MOVE_SCREEN, [_NOT_QUEUED, coord])]
elif(action == 6): #LEFT
coord = [player[0] - 3, player[1]]
new_action = [sc2_actions.FunctionCall(_MOVE_SCREEN, [_NOT_QUEUED, coord])]
elif(action == 7): #RIGHT
coord = [player[0] + 3, player[1]]
new_action = [sc2_actions.FunctionCall(_MOVE_SCREEN, [_NOT_QUEUED, coord])]
#print("action : %s Back Coord : %s" % (action, coord))
return obs, new_action
================================================
FILE: defeat_zerglings/demo_agent.py
================================================
"""A random agent for starcraft."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy
from pysc2.agents import base_agent
from pysc2.lib import actions
from pysc2.lib import actions as sc2_actions
from pysc2.lib import features
from defeat_zerglings import common
import numpy as np
_PLAYER_RELATIVE = features.SCREEN_FEATURES.player_relative.index
_UNIT_TYPE = features.SCREEN_FEATURES.unit_type.index
_SELECTED = features.SCREEN_FEATURES.selected.index
_PLAYER_FRIENDLY = 1
_PLAYER_NEUTRAL = 3 # beacon/minerals
_PLAYER_HOSTILE = 4
_NO_OP = actions.FUNCTIONS.no_op.id
_SELECT_UNIT_ID = 1
_CONTROL_GROUP_SET = 1
_CONTROL_GROUP_RECALL = 0
_SELECT_CONTROL_GROUP = actions.FUNCTIONS.select_control_group.id
_MOVE_SCREEN = actions.FUNCTIONS.Move_screen.id
_ATTACK_SCREEN = actions.FUNCTIONS.Attack_screen.id
_SELECT_ARMY = actions.FUNCTIONS.select_army.id
_SELECT_UNIT = actions.FUNCTIONS.select_unit.id
_SELECT_POINT = actions.FUNCTIONS.select_point.id
_NOT_QUEUED = [0]
_SELECT_ALL = [0]
class MarineAgent(base_agent.BaseAgent):
"""A random agent for starcraft."""
demo_replay = []
def __init__(self, env):
self.env = env
def step(self, obs):
super(MarineAgent, self).step(obs)
#1. Select marine!
obs, screen, player = common.select_marine(self.env, [obs])
player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]
enemy_y, enemy_x = (player_relative == _PLAYER_HOSTILE).nonzero()
#2. Run away from nearby enemy
closest, min_dist = None, None
if(len(player) == 2):
for p in zip(enemy_x, enemy_y):
dist = np.linalg.norm(np.array(player) - np.array(p))
if not min_dist or dist < min_dist:
closest, min_dist = p, dist
#3. Sparse!
friendly_y, friendly_x = (player_relative == _PLAYER_FRIENDLY).nonzero()
closest_friend, min_dist_friend = None, None
if(len(player) == 2):
for p in zip(friendly_x, friendly_y):
dist = np.linalg.norm(np.array(player) - np.array(p))
if not min_dist_friend or dist < min_dist_friend:
closest_friend, min_dist_friend = p, dist
if(min_dist != None and min_dist <= 7):
obs, new_action = common.marine_action(self.env, obs, player, 2)
elif(min_dist_friend != None and min_dist_friend <= 3):
sparse_or_attack = np.random.randint(0,2)
obs, new_action = common.marine_action(self.env, obs, player, sparse_or_attack)
else:
obs, new_action = common.marine_action(self.env, obs, player, 1)
return new_action[0]
================================================
FILE: defeat_zerglings/dqfd.py
================================================
import numpy as np
import os
import dill
import tempfile
import tensorflow as tf
import zipfile
import baselines.common.tf_util as U
from baselines import logger
from baselines.common.schedules import LinearSchedule
from baselines import deepq
from baselines.deepq.replay_buffer import ReplayBuffer, PrioritizedReplayBuffer
from pysc2.lib import actions as sc2_actions
from pysc2.env import environment
from pysc2.lib import features
from pysc2.lib import actions
from defeat_zerglings import common
import gflags as flags
_PLAYER_RELATIVE = features.SCREEN_FEATURES.player_relative.index
_UNIT_TYPE = features.SCREEN_FEATURES.unit_type.index
_SELECTED = features.SCREEN_FEATURES.selected.index
_PLAYER_FRIENDLY = 1
_PLAYER_NEUTRAL = 3 # beacon/minerals
_PLAYER_HOSTILE = 4
_NO_OP = actions.FUNCTIONS.no_op.id
_SELECT_UNIT_ID = 1
_CONTROL_GROUP_SET = 1
_CONTROL_GROUP_RECALL = 0
_SELECT_CONTROL_GROUP = actions.FUNCTIONS.select_control_group.id
_MOVE_SCREEN = actions.FUNCTIONS.Move_screen.id
_ATTACK_SCREEN = actions.FUNCTIONS.Attack_screen.id
_SELECT_ARMY = actions.FUNCTIONS.select_army.id
_SELECT_UNIT = actions.FUNCTIONS.select_unit.id
_SELECT_POINT = actions.FUNCTIONS.select_point.id
_NOT_QUEUED = [0]
_SELECT_ALL = [0]
UP, DOWN, LEFT, RIGHT = 'up', 'down', 'left', 'right'
FLAGS = flags.FLAGS
class ActWrapper(object):
def __init__(self, act):
self._act = act
#self._act_params = act_params
@staticmethod
def load(path, act_params, num_cpu=16):
with open(path, "rb") as f:
model_data = dill.load(f)
act = deepq.build_act(**act_params)
sess = U.make_session(num_cpu=num_cpu)
sess.__enter__()
with tempfile.TemporaryDirectory() as td:
arc_path = os.path.join(td, "packed.zip")
with open(arc_path, "wb") as f:
f.write(model_data)
zipfile.ZipFile(arc_path, 'r', zipfile.ZIP_DEFLATED).extractall(td)
U.load_state(os.path.join(td, "model"))
return ActWrapper(act)
def __call__(self, *args, **kwargs):
return self._act(*args, **kwargs)
def save(self, path):
"""Save model to a pickle located at `path`"""
with tempfile.TemporaryDirectory() as td:
U.save_state(os.path.join(td, "model"))
arc_name = os.path.join(td, "packed.zip")
with zipfile.ZipFile(arc_name, 'w') as zipf:
for root, dirs, files in os.walk(td):
for fname in files:
file_path = os.path.join(root, fname)
if file_path != arc_name:
zipf.write(file_path, os.path.relpath(file_path, td))
with open(arc_name, "rb") as f:
model_data = f.read()
with open(path, "wb") as f:
dill.dump((model_data), f)
def load(path, act_params, num_cpu=16):
"""Load act function that was returned by learn function.
Parameters
----------
path: str
path to the act function pickle
num_cpu: int
number of cpus to use for executing the policy
Returns
-------
act: ActWrapper
function that takes a batch of observations
and returns actions.
"""
return ActWrapper.load(path, num_cpu=num_cpu, act_params=act_params)
def learn(env,
q_func,
num_actions=3,
lr=5e-4,
max_timesteps=100000,
buffer_size=50000,
exploration_fraction=0.1,
exploration_final_eps=0.02,
train_freq=1,
batch_size=32,
print_freq=1,
checkpoint_freq=10000,
learning_starts=1000,
gamma=1.0,
target_network_update_freq=500,
prioritized_replay=False,
prioritized_replay_alpha=0.6,
prioritized_replay_beta0=0.4,
prioritized_replay_beta_iters=None,
prioritized_replay_eps=1e-6,
num_cpu=16,
param_noise=False,
param_noise_threshold=0.05,
callback=None,
demo_replay=[]
):
"""Train a deepq model.
Parameters
-------
env: pysc2.env.SC2Env
environment to train on
q_func: (tf.Variable, int, str, bool) -> tf.Variable
the model that takes the following inputs:
observation_in: object
the output of observation placeholder
num_actions: int
number of actions
scope: str
reuse: bool
should be passed to outer variable scope
and returns a tensor of shape (batch_size, num_actions) with values of every action.
lr: float
learning rate for adam optimizer
max_timesteps: int
number of env steps to optimize for
buffer_size: int
size of the replay buffer
exploration_fraction: float
fraction of entire training period over which the exploration rate is annealed
exploration_final_eps: float
final value of random action probability
train_freq: int
update the model every `train_freq` steps.
batch_size: int
size of a batch sampled from the replay buffer for training
print_freq: int
how often to print out training progress
set to None to disable printing
checkpoint_freq: int
how often to save the model. This is so that the best version is restored
at the end of the training. If you do not wish to restore the best version at
the end of the training set this variable to None.
learning_starts: int
how many steps of the model to collect transitions for before learning starts
gamma: float
discount factor
target_network_update_freq: int
update the target network every `target_network_update_freq` steps.
prioritized_replay: bool
if True prioritized replay buffer will be used.
prioritized_replay_alpha: float
alpha parameter for prioritized replay buffer
prioritized_replay_beta0: float
initial value of beta for prioritized replay buffer
prioritized_replay_beta_iters: int
number of iterations over which beta will be annealed from initial value
to 1.0. If set to None, it defaults to max_timesteps.
prioritized_replay_eps: float
epsilon to add to the TD errors when updating priorities.
num_cpu: int
number of cpus to use for training
callback: (locals, globals) -> None
function called at every step with the state of the algorithm.
If callback returns true training stops.
Returns
-------
act: ActWrapper
Wrapper over act function. Adds ability to save it and load it.
See header of baselines/deepq/categorical.py for details on the act function.
"""
# Create all the functions necessary to train the model
sess = U.make_session(num_cpu=num_cpu)
sess.__enter__()
def make_obs_ph(name):
return U.BatchInput((64, 64), name=name)
act, train, update_target, debug = deepq.build_train(
make_obs_ph=make_obs_ph,
q_func=q_func,
num_actions=num_actions,
optimizer=tf.train.AdamOptimizer(learning_rate=lr),
gamma=gamma,
grad_norm_clipping=10
)
act_params = {
'make_obs_ph': make_obs_ph,
'q_func': q_func,
'num_actions': num_actions,
}
# Create the replay buffer
if prioritized_replay:
replay_buffer = PrioritizedReplayBuffer(buffer_size, alpha=prioritized_replay_alpha)
if prioritized_replay_beta_iters is None:
prioritized_replay_beta_iters = max_timesteps
beta_schedule = LinearSchedule(prioritized_replay_beta_iters,
initial_p=prioritized_replay_beta0,
final_p=1.0)
else:
replay_buffer = ReplayBuffer(buffer_size)
beta_schedule = None
# Create the schedule for exploration starting from 1.
exploration = LinearSchedule(schedule_timesteps=int(exploration_fraction * max_timesteps),
initial_p=1.0,
final_p=exploration_final_eps)
# Initialize the parameters and copy them to the target network.
U.initialize()
update_target()
episode_rewards = [0.0]
saved_mean_reward = None
obs = env.reset()
# Select all marines first
player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]
screen = player_relative
obs = common.init(env, player_relative, obs)
group_id = 0
reset = True
with tempfile.TemporaryDirectory() as td:
model_saved = False
model_file = os.path.join(td, "model")
for t in range(max_timesteps):
if callback is not None:
if callback(locals(), globals()):
break
# Take action and update exploration to the newest value
kwargs = {}
if not param_noise:
update_eps = exploration.value(t)
update_param_noise_threshold = 0.
else:
update_eps = 0.
if param_noise_threshold >= 0.:
update_param_noise_threshold = param_noise_threshold
else:
# Compute the threshold such that the KL divergence between perturbed and non-perturbed
# policy is comparable to eps-greedy exploration with eps = exploration.value(t).
# See Appendix C.1 in Parameter Space Noise for Exploration, Plappert et al., 2017
# for detailed explanation.
update_param_noise_threshold = -np.log(1. - exploration.value(t) + exploration.value(t) / float(num_actions))
kwargs['reset'] = reset
kwargs['update_param_noise_threshold'] = update_param_noise_threshold
kwargs['update_param_noise_scale'] = True
# custom process for DefeatZerglingsAndBanelings
obs, screen, player = common.select_marine(env, obs)
action = act(np.array(screen)[None], update_eps=update_eps, **kwargs)[0]
reset = False
rew = 0
new_action = None
obs, new_action = common.marine_action(env, obs, player, action)
army_count = env._obs.observation.player_common.army_count
try:
if army_count > 0 and _ATTACK_SCREEN in obs[0].observation["available_actions"]:
obs = env.step(actions=new_action)
else:
new_action = [sc2_actions.FunctionCall(_NO_OP, [])]
obs = env.step(actions=new_action)
except Exception as e:
# Ignore transient errors from env.step and keep going with the last obs.
pass
player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]
new_screen = player_relative
rew += obs[0].reward
done = obs[0].step_type == environment.StepType.LAST
selected = obs[0].observation["screen"][_SELECTED]
player_y, player_x = (selected == _PLAYER_FRIENDLY).nonzero()
if(len(player_y)>0):
player = [int(player_x.mean()), int(player_y.mean())]
if(len(player) == 2):
if(player[0]>32):
new_screen = common.shift(LEFT, player[0]-32, new_screen)
elif(player[0]<32):
new_screen = common.shift(RIGHT, 32 - player[0], new_screen)
if(player[1]>32):
new_screen = common.shift(UP, player[1]-32, new_screen)
elif(player[1]<32):
new_screen = common.shift(DOWN, 32 - player[1], new_screen)
# Store transition in the replay buffer.
replay_buffer.add(screen, action, rew, new_screen, float(done))
screen = new_screen
episode_rewards[-1] += rew
if done:
print("Episode Reward : %s" % episode_rewards[-1])
obs = env.reset()
player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]
screen = player_relative
obs = common.init(env, player_relative, obs)
# Select all marines first
#env.step(actions=[sc2_actions.FunctionCall(_SELECT_UNIT, [_SELECT_ALL])])
episode_rewards.append(0.0)
reset = True
if t > learning_starts and t % train_freq == 0:
# Minimize the error in Bellman's equation on a batch sampled from replay buffer.
if prioritized_replay:
experience = replay_buffer.sample(batch_size, beta=beta_schedule.value(t))
(obses_t, actions, rewards, obses_tp1, dones, weights, batch_idxes) = experience
else:
obses_t, actions, rewards, obses_tp1, dones = replay_buffer.sample(batch_size)
weights, batch_idxes = np.ones_like(rewards), None
td_errors = train(obses_t, actions, rewards, obses_tp1, dones, weights)
if prioritized_replay:
new_priorities = np.abs(td_errors) + prioritized_replay_eps
replay_buffer.update_priorities(batch_idxes, new_priorities)
if t > learning_starts and t % target_network_update_freq == 0:
# Update target network periodically.
update_target()
mean_100ep_reward = round(np.mean(episode_rewards[-101:-1]), 1)
num_episodes = len(episode_rewards)
if done and print_freq is not None and len(episode_rewards) % print_freq == 0:
logger.record_tabular("steps", t)
logger.record_tabular("episodes", num_episodes)
logger.record_tabular("mean 100 episode reward", mean_100ep_reward)
logger.record_tabular("% time spent exploring", int(100 * exploration.value(t)))
logger.dump_tabular()
if (checkpoint_freq is not None and t > learning_starts and
num_episodes > 100 and t % checkpoint_freq == 0):
if saved_mean_reward is None or mean_100ep_reward > saved_mean_reward:
if print_freq is not None:
logger.log("Saving model due to mean reward increase: {} -> {}".format(
saved_mean_reward, mean_100ep_reward))
U.save_state(model_file)
model_saved = True
saved_mean_reward = mean_100ep_reward
if model_saved:
if print_freq is not None:
logger.log("Restored model with mean reward: {}".format(saved_mean_reward))
U.load_state(model_file)
return ActWrapper(act)
================================================
FILE: defeat_zerglings/run_demo_agent.py
================================================
import sys

import gflags as flags
from baselines import deepq
from pysc2.env import sc2_env
from pysc2.lib import actions
from pysc2.env import run_loop

from defeat_zerglings import demo_agent
from maps import chris_maps  # imported for its side effect: registers the custom maps

_MOVE_SCREEN = actions.FUNCTIONS.Move_screen.id
_SELECT_ARMY = actions.FUNCTIONS.select_army.id
_SELECT_ALL = [0]
_NOT_QUEUED = [0]

step_mul = 1
steps = 20000

FLAGS = flags.FLAGS


def main():
  FLAGS(sys.argv)

  with sc2_env.SC2Env(
      "DefeatZerglingsAndBanelings",
      step_mul=step_mul,
      visualize=True,
      game_steps_per_episode=steps * step_mul) as env:

    demo_replay = []  # unused here
    agent = demo_agent.MarineAgent(env=env)
    agent.env = env
    run_loop.run_loop([agent], env, steps)


if __name__ == '__main__':
  main()
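
# Illustrative note (not from the original repo): pysc2's
# run_loop.run_loop(agents, env, max_frames) alternates env.step() and
# agent.step(): each observation is handed to the agent, and the
# FunctionCall the agent returns is applied to the environment, episode
# after episode, until max_frames game frames have elapsed.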
================================================
FILE: defeat_zerglings/train.py
================================================
import sys

import gflags as flags
from baselines import deepq
from pysc2.env import sc2_env
from pysc2.lib import actions

from defeat_zerglings import dqfd

_MOVE_SCREEN = actions.FUNCTIONS.Move_screen.id
_SELECT_ARMY = actions.FUNCTIONS.select_army.id
_SELECT_ALL = [0]
_NOT_QUEUED = [0]

step_mul = 1
steps = 2000

FLAGS = flags.FLAGS


def main():
  FLAGS(sys.argv)

  with sc2_env.SC2Env(
      "DefeatZerglingsAndBanelings",
      step_mul=step_mul,
      visualize=True,
      game_steps_per_episode=steps * step_mul) as env:

    model = deepq.models.cnn_to_mlp(
        convs=[(32, 8, 4), (64, 4, 2), (64, 3, 1)],
        hiddens=[256],
        dueling=True
    )

    demo_replay = []
    act = dqfd.learn(
        env,
        q_func=model,
        num_actions=3,
        lr=1e-4,
        max_timesteps=10000000,
        buffer_size=100000,
        exploration_fraction=0.5,
        exploration_final_eps=0.01,
        train_freq=2,
        learning_starts=100000,
        target_network_update_freq=1000,
        gamma=0.99,
        prioritized_replay=True,
        demo_replay=demo_replay
    )
    act.save("defeat_zerglings.pkl")


if __name__ == '__main__':
  main()
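
# Illustrative sketch (not from the original repo): with
# exploration_fraction=0.5 and max_timesteps=10000000, baselines'
# LinearSchedule anneals epsilon from 1.0 down to exploration_final_eps
# over the first 5,000,000 steps and then holds it there. The hypothetical
# helper below reproduces that schedule:
def _epsilon(t, max_timesteps=10000000, fraction=0.5, final_eps=0.01):
  frac = min(float(t) / (fraction * max_timesteps), 1.0)
  return 1.0 + frac * (final_eps - 1.0)  # 1.0 at t=0, final_eps afterwards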
================================================
FILE: enjoy_mineral_shards.py
================================================
import sys

import baselines.common.tf_util as U
import gflags as flags
import numpy as np
from baselines import deepq
from pysc2.env import environment
from pysc2.env import sc2_env
from pysc2.lib import actions
from pysc2.lib import actions as sc2_actions
from pysc2.lib import features

import deepq_mineral_shards

_PLAYER_RELATIVE = features.SCREEN_FEATURES.player_relative.index
_PLAYER_FRIENDLY = 1
_PLAYER_NEUTRAL = 3  # beacon/minerals
_PLAYER_HOSTILE = 4

_NO_OP = actions.FUNCTIONS.no_op.id
_MOVE_SCREEN = actions.FUNCTIONS.Move_screen.id
_ATTACK_SCREEN = actions.FUNCTIONS.Attack_screen.id
_SELECT_ARMY = actions.FUNCTIONS.select_army.id
_NOT_QUEUED = [0]
_SELECT_ALL = [0]

step_mul = 16
steps = 400

FLAGS = flags.FLAGS


def main():
  FLAGS(sys.argv)

  with sc2_env.SC2Env(
      "CollectMineralShards",
      step_mul=step_mul,
      visualize=True,
      game_steps_per_episode=steps * step_mul) as env:

    model = deepq.models.cnn_to_mlp(
        convs=[(32, 8, 4), (64, 4, 2), (64, 3, 1)],
        hiddens=[256],
        dueling=True
    )

    def make_obs_ph(name):
      return U.BatchInput((64, 64), name=name)

    act_params = {
        'make_obs_ph': make_obs_ph,
        'q_func': model,
        'num_actions': 4,
    }

    act = deepq_mineral_shards.load("mineral_shards.pkl", act_params=act_params)

    while True:
      obs = env.reset()
      episode_rew = 0
      done = False

      # Select all marines before stepping through the episode.
      step_result = env.step(actions=[sc2_actions.FunctionCall(_SELECT_ARMY, [_SELECT_ALL])])

      while not done:
        player_relative = step_result[0].observation["screen"][_PLAYER_RELATIVE]
        obs = player_relative

        # Recenter the screen on the marines' mean position (32, 32).
        player_y, player_x = (player_relative == _PLAYER_FRIENDLY).nonzero()
        player = [int(player_x.mean()), int(player_y.mean())]

        if player[0] > 32:
          obs = shift(LEFT, player[0] - 32, obs)
        elif player[0] < 32:
          obs = shift(RIGHT, 32 - player[0], obs)

        if player[1] > 32:
          obs = shift(UP, player[1] - 32, obs)
        elif player[1] < 32:
          obs = shift(DOWN, 32 - player[1], obs)

        # Map the discrete action (0-3) to a Move_screen target 16 px away.
        action = act(obs[None])[0]
        coord = [player[0], player[1]]

        if action == 0:  # UP
          if player[1] >= 16:
            coord = [player[0], player[1] - 16]
          elif player[1] > 0:
            coord = [player[0], 0]
        elif action == 1:  # DOWN
          if player[1] <= 47:
            coord = [player[0], player[1] + 16]
          elif player[1] > 47:
            coord = [player[0], 63]
        elif action == 2:  # LEFT
          if player[0] >= 16:
            coord = [player[0] - 16, player[1]]
          elif player[0] < 16:
            coord = [0, player[1]]
        elif action == 3:  # RIGHT
          if player[0] <= 47:
            coord = [player[0] + 16, player[1]]
          elif player[0] > 47:
            coord = [63, player[1]]

        new_action = [sc2_actions.FunctionCall(_MOVE_SCREEN, [_NOT_QUEUED, coord])]
        step_result = env.step(actions=new_action)

        rew = step_result[0].reward
        done = step_result[0].step_type == environment.StepType.LAST
        episode_rew += rew

      print("Episode reward", episode_rew)


UP, DOWN, LEFT, RIGHT = 'up', 'down', 'left', 'right'


def shift(direction, number, matrix):
  """Shift a 2D matrix the given number of rows or columns in the
  specified (UP, DOWN, LEFT, RIGHT) direction, mask the cells that
  wrap around with -2, and return the shifted copy.
  """
  if number <= 0:
    return matrix
  if direction == UP:
    matrix = np.roll(matrix, -number, axis=0)
    matrix[-number:, :] = -2  # bottom rows wrapped around from the top
    return matrix
  elif direction == DOWN:
    matrix = np.roll(matrix, number, axis=0)
    matrix[:number, :] = -2   # top rows wrapped around from the bottom
    return matrix
  elif direction == LEFT:
    matrix = np.roll(matrix, -number, axis=1)
    matrix[:, -number:] = -2  # rightmost columns wrapped around from the left
    return matrix
  elif direction == RIGHT:
    matrix = np.roll(matrix, number, axis=1)
    matrix[:, :number] = -2   # leftmost columns wrapped around from the right
    return matrix
  else:
    return matrix
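
# Illustrative sketch (not from the original repo): the recentering above
# guarantees the network always observes the marines at screen position
# (32, 32), with off-screen cells marked -2. A tiny demo of shift() on a
# 4x4 toy matrix:
def _shift_demo():
  m = np.arange(16).reshape(4, 4)
  # Every column slides one step left; the rightmost column is masked as -2.
  print(shift(LEFT, 1, m))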
================================================
FILE: maps/chris_maps.py
================================================
"""Define the mini game map configs. These are maps made by Deepmind."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from pysc2.maps import lib
class ChrisMaps(lib.Map):
directory = "chris_maps"
download = "https://github.com/chris-chris/pysc2-examples#get-the-maps"
players = 1
score_index = 0
game_steps_per_episode = 0
step_mul = 8
chris_maps = [
"DefeatZealots", # 120s
]
for name in chris_maps:
globals()[name] = type(name, (ChrisMaps,), dict(filename=name))
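
# Illustrative note (not from the original repo): pysc2 discovers maps by
# walking the subclasses of lib.Map, so the type() call above registers each
# name. The loop is shorthand for writing one subclass per map by hand, e.g.:
#
#   class DefeatZealots(ChrisMaps):
#     filename = "DefeatZealots"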
================================================
FILE: tests/scripted_test.py
================================================
#!/usr/bin/python

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from pysc2.agents import random_agent
from pysc2.env import run_loop
from pysc2.env import sc2_env
from pysc2.tests import utils
from pysc2.lib import actions as sc2_actions
from pysc2.lib import features
from pysc2.lib import basetest

import gflags as flags
import sys

_NO_OP = sc2_actions.FUNCTIONS.no_op.id
_PLAYER_RELATIVE = features.SCREEN_FEATURES.player_relative.index

FLAGS = flags.FLAGS


class TestScripted(utils.TestCase):
  steps = 2000
  step_mul = 1

  def test_defeat_zerglings(self):
    FLAGS(sys.argv)

    with sc2_env.SC2Env(
        "DefeatZerglingsAndBanelings",
        step_mul=self.step_mul,
        visualize=True,
        game_steps_per_episode=self.steps * self.step_mul) as env:

      obs = env.step(actions=[sc2_actions.FunctionCall(_NO_OP, [])])
      player_relative = obs[0].observation["screen"][_PLAYER_RELATIVE]

      # Break Point!!
      print(player_relative)

      agent = random_agent.RandomAgent()
      run_loop.run_loop([agent], env, self.steps)

      self.assertEqual(agent.steps, self.steps)


if __name__ == "__main__":
  basetest.main()
================================================
FILE: train_mineral_shards.py
================================================
import sys

import gflags as flags
from baselines import deepq
from pysc2.env import sc2_env
from pysc2.lib import actions

import deepq_mineral_shards

_MOVE_SCREEN = actions.FUNCTIONS.Move_screen.id
_SELECT_ARMY = actions.FUNCTIONS.select_army.id
_SELECT_ALL = [0]
_NOT_QUEUED = [0]

step_mul = 8

FLAGS = flags.FLAGS


def main():
  FLAGS(sys.argv)

  with sc2_env.SC2Env(
      "CollectMineralShards",
      step_mul=step_mul,
      visualize=True) as env:

    model = deepq.models.cnn_to_mlp(
        convs=[(32, 8, 4), (64, 4, 2), (64, 3, 1)],
        hiddens=[256],
        dueling=True
    )

    act = deepq_mineral_shards.learn(
        env,
        q_func=model,
        num_actions=4,
        lr=1e-5,
        max_timesteps=2000000,
        buffer_size=100000,
        exploration_fraction=0.5,
        exploration_final_eps=0.01,
        train_freq=4,
        learning_starts=100000,
        target_network_update_freq=1000,
        gamma=0.99,
        prioritized_replay=True
    )
    act.save("mineral_shards.pkl")


if __name__ == '__main__':
  main()
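
# Illustrative note (not from the original repo): in baselines' cnn_to_mlp,
# convs is a list of (num_filters, kernel_size, stride) triples, so the model
# above is the classic DQN trunk: 32 8x8 filters with stride 4, 64 4x4 with
# stride 2, 64 3x3 with stride 1, then a 256-unit hidden layer with dueling
# value/advantage heads. num_actions=4 matches the UP/DOWN/LEFT/RIGHT moves
# decoded in enjoy_mineral_shards.py.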
SYMBOL INDEX (33 symbols across 10 files)
FILE: deepq_mineral_shards.py
class ActWrapper (line 35) | class ActWrapper(object):
method __init__ (line 36) | def __init__(self, act):
method load (line 41) | def load(path, act_params, num_cpu=16):
method __call__ (line 57) | def __call__(self, *args, **kwargs):
method save (line 60) | def save(self, path):
function load (line 77) | def load(path, act_params, num_cpu=16):
function learn (line 96) | def learn(env,
function intToCoordinate (line 453) | def intToCoordinate(num, size=64):
function shift (line 462) | def shift(direction, number, matrix):
FILE: defeat_zerglings/common.py
function init (line 30) | def init(env, player_relative, obs):
function update_group_list (line 95) | def update_group_list(obs):
function check_group_list (line 105) | def check_group_list(env, obs):
function shift (line 127) | def shift(direction, number, matrix):
function select_marine (line 150) | def select_marine(env, obs):
function marine_action (line 241) | def marine_action(env, obs, player, action):
FILE: defeat_zerglings/demo_agent.py
class MarineAgent (line 43) | class MarineAgent(base_agent.BaseAgent):
method __init__ (line 47) | def __init__(self, env):
method step (line 50) | def step(self, obs):
FILE: defeat_zerglings/dqfd.py
class ActWrapper (line 51) | class ActWrapper(object):
method __init__ (line 52) | def __init__(self, act):
method load (line 57) | def load(path, act_params, num_cpu=16):
method __call__ (line 73) | def __call__(self, *args, **kwargs):
method save (line 76) | def save(self, path):
function load (line 93) | def load(path, act_params, num_cpu=16):
function learn (line 112) | def learn(env,
FILE: defeat_zerglings/run_demo_agent.py
function main (line 22) | def main():
FILE: defeat_zerglings/train.py
function main (line 20) | def main():
FILE: enjoy_mineral_shards.py
function main (line 31) | def main():
function shift (line 127) | def shift(direction, number, matrix):
FILE: maps/chris_maps.py
class ChrisMaps (line 9) | class ChrisMaps(lib.Map):
FILE: tests/scripted_test.py
class TestScripted (line 23) | class TestScripted(utils.TestCase):
method test_defeat_zerglings (line 27) | def test_defeat_zerglings(self):
FILE: train_mineral_shards.py
function main (line 19) | def main():