Repository: cyanrain7/TRPO-in-MARL
Branch: master
Commit: e412f13da689
Files: 56
Total size: 372.2 KB
Directory structure:
gitextract_0j5bd_hz/
├── .gitignore
├── LICENSE
├── README.md
├── algorithms/
│ ├── __init__.py
│ ├── actor_critic.py
│ ├── happo_policy.py
│ ├── happo_trainer.py
│ ├── hatrpo_policy.py
│ ├── hatrpo_trainer.py
│ └── utils/
│ ├── act.py
│ ├── cnn.py
│ ├── distributions.py
│ ├── mlp.py
│ ├── rnn.py
│ └── util.py
├── configs/
│ └── config.py
├── envs/
│ ├── __init__.py
│ ├── env_wrappers.py
│ ├── ma_mujoco/
│ │ ├── __init__.py
│ │ └── multiagent_mujoco/
│ │ ├── __init__.py
│ │ ├── assets/
│ │ │ ├── .gitignore
│ │ │ ├── __init__.py
│ │ │ ├── coupled_half_cheetah.xml
│ │ │ ├── manyagent_ant.xml
│ │ │ ├── manyagent_ant.xml.template
│ │ │ ├── manyagent_ant__stage1.xml
│ │ │ ├── manyagent_swimmer.xml.template
│ │ │ ├── manyagent_swimmer__bckp2.xml
│ │ │ └── manyagent_swimmer_bckp.xml
│ │ ├── coupled_half_cheetah.py
│ │ ├── manyagent_ant.py
│ │ ├── manyagent_swimmer.py
│ │ ├── mujoco_multi.py
│ │ ├── multiagentenv.py
│ │ └── obsk.py
│ └── starcraft2/
│ ├── StarCraft2_Env.py
│ ├── multiagentenv.py
│ └── smac_maps.py
├── install_sc2.sh
├── requirements.txt
├── runners/
│ ├── __init__.py
│ └── separated/
│ ├── __init__.py
│ ├── base_runner.py
│ ├── mujoco_runner.py
│ └── smac_runner.py
├── scripts/
│ ├── __init__.py
│ ├── train/
│ │ ├── __init__.py
│ │ ├── train_mujoco.py
│ │ └── train_smac.py
│ ├── train_mujoco.sh
│ └── train_smac.sh
└── utils/
├── __init__.py
├── multi_discrete.py
├── popart.py
├── separated_buffer.py
└── util.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
*.*~
__pycache__/
*.pkl
data/
**/*.egg-info
.python-version
.idea/
.vscode/
.DS_Store
_build/
results/
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2020 Tianshou contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# Trust Region Policy Optimisation in Multi-Agent Reinforcement Learning
Described in the paper "[Trust Region Policy Optimisation in Multi-Agent Reinforcement Learning](https://arxiv.org/pdf/2109.11251.pdf)", this repository develops the *Heterogeneous-Agent Trust Region Policy Optimisation (HATRPO)* and *Heterogeneous-Agent Proximal Policy Optimisation (HAPPO)* algorithms on the SMAC and Multi-agent MuJoCo benchmarks. *HATRPO* and *HAPPO* are the first trust region methods for multi-agent reinforcement learning **with a theoretically-justified monotonic improvement guarantee**. Performance-wise, they set a new state of the art against rivals such as [IPPO](https://arxiv.org/abs/2011.09533), [MAPPO](https://arxiv.org/abs/2103.01955) and [MADDPG](https://arxiv.org/abs/1706.02275). HAPPO and HATRPO have been integrated into the HARL framework; please check the latest changes [here](https://github.com/PKU-MARL/HARL).
## Installation
### Create environment
``` Bash
conda create -n env_name python=3.9
conda activate env_name
pip install -r requirements.txt
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
```
### Multi-agent MuJoCo
Follow the instructions in https://github.com/openai/mujoco-py and https://github.com/schroederdewitt/multiagent_mujoco to set up a MuJoCo environment. At the end, remember to set the following environment variables:
``` Bash
LD_LIBRARY_PATH=${HOME}/.mujoco/mujoco200/bin;
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so
```
### StarCraft II & SMAC
Run the script
``` Bash
bash install_sc2.sh
```
Or you can install them manually to any path you like by following the instructions at https://github.com/oxwhirl/smac.
## How to run
When your environment is ready, you can run the provided shell scripts. For example:
``` Bash
cd scripts
./train_mujoco.sh # run with HAPPO/HATRPO on Multi-agent MuJoCo
./train_smac.sh # run with HAPPO/HATRPO on StarCraft II
```
If you would like to change the experiment configuration, you can modify the shell scripts or look at the config files for more details. You can switch the algorithm by changing **algo=happo** to **algo=hatrpo**, as shown below.
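For instance, a minimal sketch of switching the algorithm inside a launch script (the surrounding contents of your copy of `scripts/train_mujoco.sh` may differ):
``` Bash
# scripts/train_mujoco.sh
algo=hatrpo   # was: algo=happo
```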
## Some experiment results
### SMAC
### Multi-agent MuJoCo compared with MAPPO
## Additional Experiment Setting
### For SMAC
#### 2022/4/24 update: important correction for SMAC
##### Fix the **gamma** parameter; the correct configuration of **gamma** is as follows:
##### **gamma** for **3s5z** and **2c_vs_64zg** is 0.95
##### **gamma** for **corridor** is 0.99
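A hedged sketch of applying this fix, assuming the launch script forwards a `--gamma` flag defined in `configs/config.py` (the `smac_args` variable below is hypothetical; adapt it to your copy of `scripts/train_smac.sh`):
``` Bash
# scripts/train_smac.sh: pass the map-specific discount factor to the training script
smac_args="--gamma 0.95"    # for 3s5z and 2c_vs_64zg
# smac_args="--gamma 0.99"  # for corridor
```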
================================================
FILE: algorithms/__init__.py
================================================
================================================
FILE: algorithms/actor_critic.py
================================================
import torch
import torch.nn as nn
from algorithms.utils.util import init, check
from algorithms.utils.cnn import CNNBase
from algorithms.utils.mlp import MLPBase
from algorithms.utils.rnn import RNNLayer
from algorithms.utils.act import ACTLayer
from utils.util import get_shape_from_obs_space
class Actor(nn.Module):
"""
Actor network class for HAPPO. Outputs actions given observations.
:param args: (argparse.Namespace) arguments containing relevant model information.
:param obs_space: (gym.Space) observation space.
:param action_space: (gym.Space) action space.
:param device: (torch.device) specifies the device to run on (cpu/gpu).
"""
def __init__(self, args, obs_space, action_space, device=torch.device("cpu")):
super(Actor, self).__init__()
self.hidden_size = args.hidden_size
self.args=args
self._gain = args.gain
self._use_orthogonal = args.use_orthogonal
self._use_policy_active_masks = args.use_policy_active_masks
self._use_naive_recurrent_policy = args.use_naive_recurrent_policy
self._use_recurrent_policy = args.use_recurrent_policy
self._recurrent_N = args.recurrent_N
self.tpdv = dict(dtype=torch.float32, device=device)
obs_shape = get_shape_from_obs_space(obs_space)
base = CNNBase if len(obs_shape) == 3 else MLPBase
self.base = base(args, obs_shape)
if self._use_naive_recurrent_policy or self._use_recurrent_policy:
self.rnn = RNNLayer(self.hidden_size, self.hidden_size, self._recurrent_N, self._use_orthogonal)
self.act = ACTLayer(action_space, self.hidden_size, self._use_orthogonal, self._gain, args)
self.to(device)
def forward(self, obs, rnn_states, masks, available_actions=None, deterministic=False):
"""
Compute actions from the given inputs.
:param obs: (np.ndarray / torch.Tensor) observation inputs into network.
:param rnn_states: (np.ndarray / torch.Tensor) if RNN network, hidden states for RNN.
:param masks: (np.ndarray / torch.Tensor) mask tensor denoting if hidden states should be reinitialized to zeros.
:param available_actions: (np.ndarray / torch.Tensor) denotes which actions are available to agent
(if None, all actions available)
:param deterministic: (bool) whether to sample from action distribution or return the mode.
:return actions: (torch.Tensor) actions to take.
:return action_log_probs: (torch.Tensor) log probabilities of taken actions.
:return rnn_states: (torch.Tensor) updated RNN hidden states.
"""
obs = check(obs).to(**self.tpdv)
rnn_states = check(rnn_states).to(**self.tpdv)
masks = check(masks).to(**self.tpdv)
if available_actions is not None:
available_actions = check(available_actions).to(**self.tpdv)
actor_features = self.base(obs)
if self._use_naive_recurrent_policy or self._use_recurrent_policy:
actor_features, rnn_states = self.rnn(actor_features, rnn_states, masks)
actions, action_log_probs = self.act(actor_features, available_actions, deterministic)
return actions, action_log_probs, rnn_states
def evaluate_actions(self, obs, rnn_states, action, masks, available_actions=None, active_masks=None):
"""
Compute log probability and entropy of given actions.
:param obs: (torch.Tensor) observation inputs into network.
:param action: (torch.Tensor) actions whose entropy and log probability to evaluate.
:param rnn_states: (torch.Tensor) if RNN network, hidden states for RNN.
:param masks: (torch.Tensor) mask tensor denoting if hidden states should be reinitialized to zeros.
:param available_actions: (torch.Tensor) denotes which actions are available to agent
(if None, all actions available)
:param active_masks: (torch.Tensor) denotes whether an agent is active or dead.
:return action_log_probs: (torch.Tensor) log probabilities of the input actions.
:return dist_entropy: (torch.Tensor) action distribution entropy for the given inputs.
"""
obs = check(obs).to(**self.tpdv)
rnn_states = check(rnn_states).to(**self.tpdv)
action = check(action).to(**self.tpdv)
masks = check(masks).to(**self.tpdv)
if available_actions is not None:
available_actions = check(available_actions).to(**self.tpdv)
if active_masks is not None:
active_masks = check(active_masks).to(**self.tpdv)
actor_features = self.base(obs)
if self._use_naive_recurrent_policy or self._use_recurrent_policy:
actor_features, rnn_states = self.rnn(actor_features, rnn_states, masks)
if self.args.algorithm_name=="hatrpo":
action_log_probs, dist_entropy ,action_mu, action_std, all_probs= self.act.evaluate_actions_trpo(actor_features,
action, available_actions,
active_masks=
active_masks if self._use_policy_active_masks
else None)
return action_log_probs, dist_entropy, action_mu, action_std, all_probs
else:
action_log_probs, dist_entropy = self.act.evaluate_actions(actor_features,
action, available_actions,
active_masks=
active_masks if self._use_policy_active_masks
else None)
return action_log_probs, dist_entropy
class Critic(nn.Module):
"""
Critic network class for HAPPO. Outputs value function predictions given centralized input (HAPPO) or local observations (IPPO).
:param args: (argparse.Namespace) arguments containing relevant model information.
:param cent_obs_space: (gym.Space) (centralized) observation space.
:param device: (torch.device) specifies the device to run on (cpu/gpu).
"""
def __init__(self, args, cent_obs_space, device=torch.device("cpu")):
super(Critic, self).__init__()
self.hidden_size = args.hidden_size
self._use_orthogonal = args.use_orthogonal
self._use_naive_recurrent_policy = args.use_naive_recurrent_policy
self._use_recurrent_policy = args.use_recurrent_policy
self._recurrent_N = args.recurrent_N
self.tpdv = dict(dtype=torch.float32, device=device)
init_method = [nn.init.xavier_uniform_, nn.init.orthogonal_][self._use_orthogonal]
cent_obs_shape = get_shape_from_obs_space(cent_obs_space)
base = CNNBase if len(cent_obs_shape) == 3 else MLPBase
self.base = base(args, cent_obs_shape)
if self._use_naive_recurrent_policy or self._use_recurrent_policy:
self.rnn = RNNLayer(self.hidden_size, self.hidden_size, self._recurrent_N, self._use_orthogonal)
def init_(m):
return init(m, init_method, lambda x: nn.init.constant_(x, 0))
self.v_out = init_(nn.Linear(self.hidden_size, 1))
self.to(device)
def forward(self, cent_obs, rnn_states, masks):
"""
Compute value function predictions from the given inputs.
:param cent_obs: (np.ndarray / torch.Tensor) observation inputs into network.
:param rnn_states: (np.ndarray / torch.Tensor) if RNN network, hidden states for RNN.
:param masks: (np.ndarray / torch.Tensor) mask tensor denoting if RNN states should be reinitialized to zeros.
:return values: (torch.Tensor) value function predictions.
:return rnn_states: (torch.Tensor) updated RNN hidden states.
"""
cent_obs = check(cent_obs).to(**self.tpdv)
rnn_states = check(rnn_states).to(**self.tpdv)
masks = check(masks).to(**self.tpdv)
critic_features = self.base(cent_obs)
if self._use_naive_recurrent_policy or self._use_recurrent_policy:
critic_features, rnn_states = self.rnn(critic_features, rnn_states, masks)
values = self.v_out(critic_features)
return values, rnn_states
================================================
FILE: algorithms/happo_policy.py
================================================
import torch
from algorithms.actor_critic import Actor, Critic
from utils.util import update_linear_schedule
class HAPPO_Policy:
"""
HAPPO Policy class. Wraps actor and critic networks to compute actions and value function predictions.
:param args: (argparse.Namespace) arguments containing relevant model and policy information.
:param obs_space: (gym.Space) observation space.
:param cent_obs_space: (gym.Space) value function input space (centralized input for HAPPO, decentralized for IPPO).
:param action_space: (gym.Space) action space.
:param device: (torch.device) specifies the device to run on (cpu/gpu).
"""
def __init__(self, args, obs_space, cent_obs_space, act_space, device=torch.device("cpu")):
self.args=args
self.device = device
self.lr = args.lr
self.critic_lr = args.critic_lr
self.opti_eps = args.opti_eps
self.weight_decay = args.weight_decay
self.obs_space = obs_space
self.share_obs_space = cent_obs_space
self.act_space = act_space
self.actor = Actor(args, self.obs_space, self.act_space, self.device)
######################################Please Note#########################################
##### We create one critic for each agent, but they are trained with same data #####
##### and using same update setting. Therefore they have the same parameter, #####
##### you can regard them as the same critic. #####
##########################################################################################
self.critic = Critic(args, self.share_obs_space, self.device)
self.actor_optimizer = torch.optim.Adam(self.actor.parameters(),
lr=self.lr, eps=self.opti_eps,
weight_decay=self.weight_decay)
self.critic_optimizer = torch.optim.Adam(self.critic.parameters(),
lr=self.critic_lr,
eps=self.opti_eps,
weight_decay=self.weight_decay)
def lr_decay(self, episode, episodes):
"""
Decay the actor and critic learning rates.
:param episode: (int) current training episode.
:param episodes: (int) total number of training episodes.
"""
update_linear_schedule(self.actor_optimizer, episode, episodes, self.lr)
update_linear_schedule(self.critic_optimizer, episode, episodes, self.critic_lr)
def get_actions(self, cent_obs, obs, rnn_states_actor, rnn_states_critic, masks, available_actions=None,
deterministic=False):
"""
Compute actions and value function predictions for the given inputs.
:param cent_obs (np.ndarray): centralized input to the critic.
:param obs (np.ndarray): local agent inputs to the actor.
:param rnn_states_actor: (np.ndarray) if actor is RNN, RNN states for actor.
:param rnn_states_critic: (np.ndarray) if critic is RNN, RNN states for critic.
:param masks: (np.ndarray) denotes points at which RNN states should be reset.
:param available_actions: (np.ndarray) denotes which actions are available to agent
(if None, all actions available)
:param deterministic: (bool) whether the action should be mode of distribution or should be sampled.
:return values: (torch.Tensor) value function predictions.
:return actions: (torch.Tensor) actions to take.
:return action_log_probs: (torch.Tensor) log probabilities of chosen actions.
:return rnn_states_actor: (torch.Tensor) updated actor network RNN states.
:return rnn_states_critic: (torch.Tensor) updated critic network RNN states.
"""
actions, action_log_probs, rnn_states_actor = self.actor(obs,
rnn_states_actor,
masks,
available_actions,
deterministic)
values, rnn_states_critic = self.critic(cent_obs, rnn_states_critic, masks)
return values, actions, action_log_probs, rnn_states_actor, rnn_states_critic
def get_values(self, cent_obs, rnn_states_critic, masks):
"""
Get value function predictions.
:param cent_obs (np.ndarray): centralized input to the critic.
:param rnn_states_critic: (np.ndarray) if critic is RNN, RNN states for critic.
:param masks: (np.ndarray) denotes points at which RNN states should be reset.
:return values: (torch.Tensor) value function predictions.
"""
values, _ = self.critic(cent_obs, rnn_states_critic, masks)
return values
def evaluate_actions(self, cent_obs, obs, rnn_states_actor, rnn_states_critic, action, masks,
available_actions=None, active_masks=None):
"""
Get action logprobs / entropy and value function predictions for actor update.
:param cent_obs (np.ndarray): centralized input to the critic.
:param obs (np.ndarray): local agent inputs to the actor.
:param rnn_states_actor: (np.ndarray) if actor is RNN, RNN states for actor.
:param rnn_states_critic: (np.ndarray) if critic is RNN, RNN states for critic.
:param action: (np.ndarray) actions whose log probabilites and entropy to compute.
:param masks: (np.ndarray) denotes points at which RNN states should be reset.
:param available_actions: (np.ndarray) denotes which actions are available to agent
(if None, all actions available)
:param active_masks: (torch.Tensor) denotes whether an agent is active or dead.
:return values: (torch.Tensor) value function predictions.
:return action_log_probs: (torch.Tensor) log probabilities of the input actions.
:return dist_entropy: (torch.Tensor) action distribution entropy for the given inputs.
"""
action_log_probs, dist_entropy = self.actor.evaluate_actions(obs,
rnn_states_actor,
action,
masks,
available_actions,
active_masks)
values, _ = self.critic(cent_obs, rnn_states_critic, masks)
return values, action_log_probs, dist_entropy
def act(self, obs, rnn_states_actor, masks, available_actions=None, deterministic=False):
"""
Compute actions using the given inputs.
:param obs (np.ndarray): local agent inputs to the actor.
:param rnn_states_actor: (np.ndarray) if actor is RNN, RNN states for actor.
:param masks: (np.ndarray) denotes points at which RNN states should be reset.
:param available_actions: (np.ndarray) denotes which actions are available to agent
(if None, all actions available)
:param deterministic: (bool) whether the action should be mode of distribution or should be sampled.
"""
actions, _, rnn_states_actor = self.actor(obs, rnn_states_actor, masks, available_actions, deterministic)
return actions, rnn_states_actor
================================================
FILE: algorithms/happo_trainer.py
================================================
import numpy as np
import torch
import torch.nn as nn
from utils.util import get_gard_norm, huber_loss, mse_loss
from utils.popart import PopArt
from algorithms.utils.util import check
class HAPPO():
"""
Trainer class for HAPPO to update policies.
:param args: (argparse.Namespace) arguments containing relevant model, policy, and env information.
:param policy: (HAPPO_Policy) policy to update.
:param device: (torch.device) specifies the device to run on (cpu/gpu).
"""
def __init__(self,
args,
policy,
device=torch.device("cpu")):
self.device = device
self.tpdv = dict(dtype=torch.float32, device=device)
self.policy = policy
self.clip_param = args.clip_param
self.ppo_epoch = args.ppo_epoch
self.num_mini_batch = args.num_mini_batch
self.data_chunk_length = args.data_chunk_length
self.value_loss_coef = args.value_loss_coef
self.entropy_coef = args.entropy_coef
self.max_grad_norm = args.max_grad_norm
self.huber_delta = args.huber_delta
self._use_recurrent_policy = args.use_recurrent_policy
self._use_naive_recurrent = args.use_naive_recurrent_policy
self._use_max_grad_norm = args.use_max_grad_norm
self._use_clipped_value_loss = args.use_clipped_value_loss
self._use_huber_loss = args.use_huber_loss
self._use_popart = args.use_popart
self._use_value_active_masks = args.use_value_active_masks
self._use_policy_active_masks = args.use_policy_active_masks
if self._use_popart:
self.value_normalizer = PopArt(1, device=self.device)
else:
self.value_normalizer = None
def cal_value_loss(self, values, value_preds_batch, return_batch, active_masks_batch):
"""
Calculate value function loss.
:param values: (torch.Tensor) value function predictions.
:param value_preds_batch: (torch.Tensor) "old" value predictions from data batch (used for value clip loss)
:param return_batch: (torch.Tensor) reward to go returns.
:param active_masks_batch: (torch.Tensor) denotes if agent is active or dead at a given timestep.
:return value_loss: (torch.Tensor) value function loss.
"""
if self._use_popart:
value_pred_clipped = value_preds_batch + (values - value_preds_batch).clamp(-self.clip_param,
self.clip_param)
error_clipped = self.value_normalizer(return_batch) - value_pred_clipped
error_original = self.value_normalizer(return_batch) - values
else:
value_pred_clipped = value_preds_batch + (values - value_preds_batch).clamp(-self.clip_param,
self.clip_param)
error_clipped = return_batch - value_pred_clipped
error_original = return_batch - values
if self._use_huber_loss:
value_loss_clipped = huber_loss(error_clipped, self.huber_delta)
value_loss_original = huber_loss(error_original, self.huber_delta)
else:
value_loss_clipped = mse_loss(error_clipped)
value_loss_original = mse_loss(error_original)
if self._use_clipped_value_loss:
value_loss = torch.max(value_loss_original, value_loss_clipped)
else:
value_loss = value_loss_original
if self._use_value_active_masks:
value_loss = (value_loss * active_masks_batch).sum() / active_masks_batch.sum()
else:
value_loss = value_loss.mean()
return value_loss
def ppo_update(self, sample, update_actor=True):
"""
Update actor and critic networks.
:param sample: (Tuple) contains data batch with which to update networks.
:update_actor: (bool) whether to update actor network.
:return value_loss: (torch.Tensor) value function loss.
:return critic_grad_norm: (torch.Tensor) gradient norm from critic update.
:return policy_loss: (torch.Tensor) actor (policy) loss value.
:return dist_entropy: (torch.Tensor) action entropies.
:return actor_grad_norm: (torch.Tensor) gradient norm from actor update.
:return imp_weights: (torch.Tensor) importance sampling weights.
"""
share_obs_batch, obs_batch, rnn_states_batch, rnn_states_critic_batch, actions_batch, \
value_preds_batch, return_batch, masks_batch, active_masks_batch, old_action_log_probs_batch, \
adv_targ, available_actions_batch, factor_batch = sample
old_action_log_probs_batch = check(old_action_log_probs_batch).to(**self.tpdv)
adv_targ = check(adv_targ).to(**self.tpdv)
value_preds_batch = check(value_preds_batch).to(**self.tpdv)
return_batch = check(return_batch).to(**self.tpdv)
active_masks_batch = check(active_masks_batch).to(**self.tpdv)
factor_batch = check(factor_batch).to(**self.tpdv)
# Reshape to do in a single forward pass for all steps
values, action_log_probs, dist_entropy = self.policy.evaluate_actions(share_obs_batch,
obs_batch,
rnn_states_batch,
rnn_states_critic_batch,
actions_batch,
masks_batch,
available_actions_batch,
active_masks_batch)
# actor update
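# imp_weights is the per-sample ratio pi_new(a|s) / pi_old(a|s); the product over the last
# dimension combines per-dimension log-probs into a joint ratio. factor_batch holds the
# accumulated ratios of the agents already updated in this sequential HAPPO iteration and
# weights the clipped surrogate below (see the HAPPO paper's sequential update scheme).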
imp_weights = torch.prod(torch.exp(action_log_probs - old_action_log_probs_batch),dim=-1,keepdim=True)
surr1 = imp_weights * adv_targ
surr2 = torch.clamp(imp_weights, 1.0 - self.clip_param, 1.0 + self.clip_param) * adv_targ
if self._use_policy_active_masks:
policy_action_loss = (-torch.sum(factor_batch * torch.min(surr1, surr2),
dim=-1,
keepdim=True) * active_masks_batch).sum() / active_masks_batch.sum()
else:
policy_action_loss = -torch.sum(factor_batch * torch.min(surr1, surr2), dim=-1, keepdim=True).mean()
policy_loss = policy_action_loss
self.policy.actor_optimizer.zero_grad()
if update_actor:
(policy_loss - dist_entropy * self.entropy_coef).backward()
if self._use_max_grad_norm:
actor_grad_norm = nn.utils.clip_grad_norm_(self.policy.actor.parameters(), self.max_grad_norm)
else:
actor_grad_norm = get_gard_norm(self.policy.actor.parameters())
self.policy.actor_optimizer.step()
value_loss = self.cal_value_loss(values, value_preds_batch, return_batch, active_masks_batch)
self.policy.critic_optimizer.zero_grad()
(value_loss * self.value_loss_coef).backward()
if self._use_max_grad_norm:
critic_grad_norm = nn.utils.clip_grad_norm_(self.policy.critic.parameters(), self.max_grad_norm)
else:
critic_grad_norm = get_gard_norm(self.policy.critic.parameters())
self.policy.critic_optimizer.step()
return value_loss, critic_grad_norm, policy_loss, dist_entropy, actor_grad_norm, imp_weights
def train(self, buffer, update_actor=True):
"""
Perform a training update using minibatch GD.
:param buffer: (SharedReplayBuffer) buffer containing training data.
:param update_actor: (bool) whether to update actor network.
:return train_info: (dict) contains information regarding training update (e.g. loss, grad norms, etc).
"""
if self._use_popart:
advantages = buffer.returns[:-1] - self.value_normalizer.denormalize(buffer.value_preds[:-1])
else:
advantages = buffer.returns[:-1] - buffer.value_preds[:-1]
advantages_copy = advantages.copy()
advantages_copy[buffer.active_masks[:-1] == 0.0] = np.nan
mean_advantages = np.nanmean(advantages_copy)
std_advantages = np.nanstd(advantages_copy)
advantages = (advantages - mean_advantages) / (std_advantages + 1e-5)
train_info = {}
train_info['value_loss'] = 0
train_info['policy_loss'] = 0
train_info['dist_entropy'] = 0
train_info['actor_grad_norm'] = 0
train_info['critic_grad_norm'] = 0
train_info['ratio'] = 0
for _ in range(self.ppo_epoch):
if self._use_recurrent_policy:
data_generator = buffer.recurrent_generator(advantages, self.num_mini_batch, self.data_chunk_length)
elif self._use_naive_recurrent:
data_generator = buffer.naive_recurrent_generator(advantages, self.num_mini_batch)
else:
data_generator = buffer.feed_forward_generator(advantages, self.num_mini_batch)
for sample in data_generator:
value_loss, critic_grad_norm, policy_loss, dist_entropy, actor_grad_norm, imp_weights = self.ppo_update(sample, update_actor=update_actor)
train_info['value_loss'] += value_loss.item()
train_info['policy_loss'] += policy_loss.item()
train_info['dist_entropy'] += dist_entropy.item()
train_info['actor_grad_norm'] += actor_grad_norm
train_info['critic_grad_norm'] += critic_grad_norm
train_info['ratio'] += imp_weights.mean()
num_updates = self.ppo_epoch * self.num_mini_batch
for k in train_info.keys():
train_info[k] /= num_updates
return train_info
def prep_training(self):
self.policy.actor.train()
self.policy.critic.train()
def prep_rollout(self):
self.policy.actor.eval()
self.policy.critic.eval()
================================================
FILE: algorithms/hatrpo_policy.py
================================================
import torch
from algorithms.actor_critic import Actor, Critic
from utils.util import update_linear_schedule
class HATRPO_Policy:
"""
HATRPO Policy class. Wraps actor and critic networks to compute actions and value function predictions.
:param args: (argparse.Namespace) arguments containing relevant model and policy information.
:param obs_space: (gym.Space) observation space.
:param cent_obs_space: (gym.Space) value function input space (centralized input).
:param action_space: (gym.Space) action space.
:param device: (torch.device) specifies the device to run on (cpu/gpu).
"""
def __init__(self, args, obs_space, cent_obs_space, act_space, device=torch.device("cpu")):
self.args=args
self.device = device
self.lr = args.lr
self.critic_lr = args.critic_lr
self.opti_eps = args.opti_eps
self.weight_decay = args.weight_decay
self.obs_space = obs_space
self.share_obs_space = cent_obs_space
self.act_space = act_space
self.actor = Actor(args, self.obs_space, self.act_space, self.device)
######################################Please Note#########################################
##### We create one critic for each agent, but they are trained with same data #####
##### and using same update setting. Therefore they have the same parameter, #####
##### you can regard them as the same critic. #####
##########################################################################################
self.critic = Critic(args, self.share_obs_space, self.device)
self.actor_optimizer = torch.optim.Adam(self.actor.parameters(),
lr=self.lr, eps=self.opti_eps,
weight_decay=self.weight_decay)
self.critic_optimizer = torch.optim.Adam(self.critic.parameters(),
lr=self.critic_lr,
eps=self.opti_eps,
weight_decay=self.weight_decay)
def lr_decay(self, episode, episodes):
"""
Decay the actor and critic learning rates.
:param episode: (int) current training episode.
:param episodes: (int) total number of training episodes.
"""
update_linear_schedule(self.actor_optimizer, episode, episodes, self.lr)
update_linear_schedule(self.critic_optimizer, episode, episodes, self.critic_lr)
def get_actions(self, cent_obs, obs, rnn_states_actor, rnn_states_critic, masks, available_actions=None,
deterministic=False):
"""
Compute actions and value function predictions for the given inputs.
:param cent_obs (np.ndarray): centralized input to the critic.
:param obs (np.ndarray): local agent inputs to the actor.
:param rnn_states_actor: (np.ndarray) if actor is RNN, RNN states for actor.
:param rnn_states_critic: (np.ndarray) if critic is RNN, RNN states for critic.
:param masks: (np.ndarray) denotes points at which RNN states should be reset.
:param available_actions: (np.ndarray) denotes which actions are available to agent
(if None, all actions available)
:param deterministic: (bool) whether the action should be mode of distribution or should be sampled.
:return values: (torch.Tensor) value function predictions.
:return actions: (torch.Tensor) actions to take.
:return action_log_probs: (torch.Tensor) log probabilities of chosen actions.
:return rnn_states_actor: (torch.Tensor) updated actor network RNN states.
:return rnn_states_critic: (torch.Tensor) updated critic network RNN states.
"""
actions, action_log_probs, rnn_states_actor = self.actor(obs,
rnn_states_actor,
masks,
available_actions,
deterministic)
values, rnn_states_critic = self.critic(cent_obs, rnn_states_critic, masks)
return values, actions, action_log_probs, rnn_states_actor, rnn_states_critic
def get_values(self, cent_obs, rnn_states_critic, masks):
"""
Get value function predictions.
:param cent_obs (np.ndarray): centralized input to the critic.
:param rnn_states_critic: (np.ndarray) if critic is RNN, RNN states for critic.
:param masks: (np.ndarray) denotes points at which RNN states should be reset.
:return values: (torch.Tensor) value function predictions.
"""
values, _ = self.critic(cent_obs, rnn_states_critic, masks)
return values
def evaluate_actions(self, cent_obs, obs, rnn_states_actor, rnn_states_critic, action, masks,
available_actions=None, active_masks=None):
"""
Get action logprobs / entropy and value function predictions for actor update.
:param cent_obs (np.ndarray): centralized input to the critic.
:param obs (np.ndarray): local agent inputs to the actor.
:param rnn_states_actor: (np.ndarray) if actor is RNN, RNN states for actor.
:param rnn_states_critic: (np.ndarray) if critic is RNN, RNN states for critic.
:param action: (np.ndarray) actions whose log probabilites and entropy to compute.
:param masks: (np.ndarray) denotes points at which RNN states should be reset.
:param available_actions: (np.ndarray) denotes which actions are available to agent
(if None, all actions available)
:param active_masks: (torch.Tensor) denotes whether an agent is active or dead.
:return values: (torch.Tensor) value function predictions.
:return action_log_probs: (torch.Tensor) log probabilities of the input actions.
:return dist_entropy: (torch.Tensor) action distribution entropy for the given inputs.
:return action_mu: (torch.Tensor) mean of the action distribution (continuous actions).
:return action_std: (torch.Tensor) standard deviation of the action distribution (continuous actions).
:return all_probs: (torch.Tensor) normalized log-probabilities over all actions (discrete actions), otherwise None.
"""
action_log_probs, dist_entropy , action_mu, action_std, all_probs= self.actor.evaluate_actions(obs,
rnn_states_actor,
action,
masks,
available_actions,
active_masks)
values, _ = self.critic(cent_obs, rnn_states_critic, masks)
return values, action_log_probs, dist_entropy, action_mu, action_std, all_probs
def act(self, obs, rnn_states_actor, masks, available_actions=None, deterministic=False):
"""
Compute actions using the given inputs.
:param obs (np.ndarray): local agent inputs to the actor.
:param rnn_states_actor: (np.ndarray) if actor is RNN, RNN states for actor.
:param masks: (np.ndarray) denotes points at which RNN states should be reset.
:param available_actions: (np.ndarray) denotes which actions are available to agent
(if None, all actions available)
:param deterministic: (bool) whether the action should be mode of distribution or should be sampled.
"""
actions, _, rnn_states_actor = self.actor(obs, rnn_states_actor, masks, available_actions, deterministic)
return actions, rnn_states_actor
================================================
FILE: algorithms/hatrpo_trainer.py
================================================
import numpy as np
import torch
import torch.nn as nn
from utils.util import get_gard_norm, huber_loss, mse_loss
from utils.popart import PopArt
from algorithms.utils.util import check
from algorithms.actor_critic import Actor
class HATRPO():
"""
Trainer class for HATRPO to update policies.
:param args: (argparse.Namespace) arguments containing relevant model, policy, and env information.
:param policy: (HATRPO_Policy) policy to update.
:param device: (torch.device) specifies the device to run on (cpu/gpu).
"""
def __init__(self,
args,
policy,
device=torch.device("cpu")):
self.device = device
self.tpdv = dict(dtype=torch.float32, device=device)
self.policy = policy
self.clip_param = args.clip_param
self.num_mini_batch = args.num_mini_batch
self.data_chunk_length = args.data_chunk_length
self.value_loss_coef = args.value_loss_coef
self.entropy_coef = args.entropy_coef
self.max_grad_norm = args.max_grad_norm
self.huber_delta = args.huber_delta
self.kl_threshold = args.kl_threshold
self.ls_step = args.ls_step
self.accept_ratio = args.accept_ratio
self._use_recurrent_policy = args.use_recurrent_policy
self._use_naive_recurrent = args.use_naive_recurrent_policy
self._use_max_grad_norm = args.use_max_grad_norm
self._use_clipped_value_loss = args.use_clipped_value_loss
self._use_huber_loss = args.use_huber_loss
self._use_popart = args.use_popart
self._use_value_active_masks = args.use_value_active_masks
self._use_policy_active_masks = args.use_policy_active_masks
if self._use_popart:
self.value_normalizer = PopArt(1, device=self.device)
else:
self.value_normalizer = None
def cal_value_loss(self, values, value_preds_batch, return_batch, active_masks_batch):
"""
Calculate value function loss.
:param values: (torch.Tensor) value function predictions.
:param value_preds_batch: (torch.Tensor) "old" value predictions from data batch (used for value clip loss)
:param return_batch: (torch.Tensor) reward to go returns.
:param active_masks_batch: (torch.Tensor) denotes if agent is active or dead at a given timestep.
:return value_loss: (torch.Tensor) value function loss.
"""
if self._use_popart:
value_pred_clipped = value_preds_batch + (values - value_preds_batch).clamp(-self.clip_param,
self.clip_param)
error_clipped = self.value_normalizer(return_batch) - value_pred_clipped
error_original = self.value_normalizer(return_batch) - values
else:
value_pred_clipped = value_preds_batch + (values - value_preds_batch).clamp(-self.clip_param,
self.clip_param)
error_clipped = return_batch - value_pred_clipped
error_original = return_batch - values
if self._use_huber_loss:
value_loss_clipped = huber_loss(error_clipped, self.huber_delta)
value_loss_original = huber_loss(error_original, self.huber_delta)
else:
value_loss_clipped = mse_loss(error_clipped)
value_loss_original = mse_loss(error_original)
if self._use_clipped_value_loss:
value_loss = torch.max(value_loss_original, value_loss_clipped)
else:
value_loss = value_loss_original
if self._use_value_active_masks:
value_loss = (value_loss * active_masks_batch).sum() / active_masks_batch.sum()
else:
value_loss = value_loss.mean()
return value_loss
def flat_grad(self, grads):
grad_flatten = []
for grad in grads:
if grad is None:
continue
grad_flatten.append(grad.view(-1))
grad_flatten = torch.cat(grad_flatten)
return grad_flatten
def flat_hessian(self, hessians):
hessians_flatten = []
for hessian in hessians:
if hessian is None:
continue
hessians_flatten.append(hessian.contiguous().view(-1))
hessians_flatten = torch.cat(hessians_flatten).data
return hessians_flatten
def flat_params(self, model):
params = []
for param in model.parameters():
params.append(param.data.view(-1))
params_flatten = torch.cat(params)
return params_flatten
def update_model(self, model, new_params):
index = 0
for params in model.parameters():
params_length = len(params.view(-1))
new_param = new_params[index: index + params_length]
new_param = new_param.view(params.size())
params.data.copy_(new_param)
index += params_length
def kl_approx(self, q, p):
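# q and p are log-probabilities of the old and new policies; with r = p/q this computes the
# low-variance estimator KL(q || p) ~= r - 1 - log r, used for discrete action distributions.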
r = torch.exp(p - q)
kl = r - 1 - p + q
return kl
def kl_divergence(self, obs, rnn_states, action, masks, available_actions, active_masks, new_actor, old_actor):
_, _, mu, std, probs = new_actor.evaluate_actions(obs, rnn_states, action, masks, available_actions, active_masks)
_, _, mu_old, std_old, probs_old = old_actor.evaluate_actions(obs, rnn_states, action, masks, available_actions, active_masks)
# discrete policies: mu has no grad_fn (Categorical.mean is not differentiable),
# so approximate the KL from the log-probabilities instead of the Gaussian closed form
if mu.grad_fn is None:
probs_old=probs_old.detach()
kl= self.kl_approx(probs_old,probs)
else:
logstd = torch.log(std)
mu_old = mu_old.detach()
std_old = std_old.detach()
logstd_old = torch.log(std_old)
# kl divergence between old policy and new policy : D( pi_old || pi_new )
# pi_old -> mu0, logstd0, std0 / pi_new -> mu, logstd, std
# be careful of calculating KL-divergence. It is not symmetric metric
kl = logstd - logstd_old + (std_old.pow(2) + (mu_old - mu).pow(2)) / (2.0 * std.pow(2)) - 0.5
if len(kl.shape)>1:
kl=kl.sum(1, keepdim=True)
return kl
# from openai baseline code
# https://github.com/openai/baselines/blob/master/baselines/common/cg.py
def conjugate_gradient(self, actor, obs, rnn_states, action, masks, available_actions, active_masks, b, nsteps, residual_tol=1e-10):
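# Iteratively solve H x = b for the search direction x using only Fisher-vector products,
# so the Fisher information matrix H never has to be formed explicitly.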
x = torch.zeros(b.size()).to(device=self.device)
r = b.clone()
p = b.clone()
rdotr = torch.dot(r, r)
for i in range(nsteps):
_Avp = self.fisher_vector_product(actor, obs, rnn_states, action, masks, available_actions, active_masks, p)
alpha = rdotr / torch.dot(p, _Avp)
x += alpha * p
r -= alpha * _Avp
new_rdotr = torch.dot(r, r)
betta = new_rdotr / rdotr
p = r + betta * p
rdotr = new_rdotr
if rdotr < residual_tol:
break
return x
def fisher_vector_product(self, actor, obs, rnn_states, action, masks, available_actions, active_masks, p):
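# Fisher (Hessian-of-KL) vector product: differentiate the mean KL twice against the fixed
# vector p, yielding (H + 0.1 I) p; the 0.1 * p term adds damping for numerical stability.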
p = p.detach()  # make sure the vector itself is not differentiated through
kl = self.kl_divergence(obs, rnn_states, action, masks, available_actions, active_masks, new_actor=actor, old_actor=actor)
kl = kl.mean()
kl_grad = torch.autograd.grad(kl, actor.parameters(), create_graph=True, allow_unused=True)
kl_grad = self.flat_grad(kl_grad) # check kl_grad == 0
kl_grad_p = (kl_grad * p).sum()
kl_hessian_p = torch.autograd.grad(kl_grad_p, actor.parameters(), allow_unused=True)
kl_hessian_p = self.flat_hessian(kl_hessian_p)
return kl_hessian_p + 0.1 * p
def trpo_update(self, sample, update_actor=True):
"""
Update actor and critic networks.
:param sample: (Tuple) contains data batch with which to update networks.
:update_actor: (bool) whether to update actor network.
:return value_loss: (torch.Tensor) value function loss.
:return critic_grad_norm: (torch.Tensor) gradient norm from critic update.
:return kl: (torch.Tensor) mean KL divergence between the old and updated policy.
:return loss_improve: (np.ndarray) actual improvement of the surrogate objective.
:return expected_improve: (np.ndarray) first-order expected improvement of the surrogate objective.
:return dist_entropy: (torch.Tensor) action entropies.
:return ratio: (torch.Tensor) importance sampling weights.
"""
share_obs_batch, obs_batch, rnn_states_batch, rnn_states_critic_batch, actions_batch, \
value_preds_batch, return_batch, masks_batch, active_masks_batch, old_action_log_probs_batch, \
adv_targ, available_actions_batch, factor_batch = sample
old_action_log_probs_batch = check(old_action_log_probs_batch).to(**self.tpdv)
adv_targ = check(adv_targ).to(**self.tpdv)
value_preds_batch = check(value_preds_batch).to(**self.tpdv)
return_batch = check(return_batch).to(**self.tpdv)
active_masks_batch = check(active_masks_batch).to(**self.tpdv)
factor_batch = check(factor_batch).to(**self.tpdv)
values, action_log_probs, dist_entropy, action_mu, action_std, _ = self.policy.evaluate_actions(share_obs_batch,
obs_batch,
rnn_states_batch,
rnn_states_critic_batch,
actions_batch,
masks_batch,
available_actions_batch,
active_masks_batch)
# critic update
value_loss = self.cal_value_loss(values, value_preds_batch, return_batch, active_masks_batch)
self.policy.critic_optimizer.zero_grad()
(value_loss * self.value_loss_coef).backward()
if self._use_max_grad_norm:
critic_grad_norm = nn.utils.clip_grad_norm_(self.policy.critic.parameters(), self.max_grad_norm)
else:
critic_grad_norm = get_gard_norm(self.policy.critic.parameters())
self.policy.critic_optimizer.step()
# actor update
ratio = torch.prod(torch.exp(action_log_probs - old_action_log_probs_batch),dim=-1,keepdim=True)
if self._use_policy_active_masks:
loss = (torch.sum(ratio * factor_batch * adv_targ, dim=-1, keepdim=True) *
active_masks_batch).sum() / active_masks_batch.sum()
else:
loss = torch.sum(ratio * factor_batch * adv_targ, dim=-1, keepdim=True).mean()
loss_grad = torch.autograd.grad(loss, self.policy.actor.parameters(), allow_unused=True)
loss_grad = self.flat_grad(loss_grad)
step_dir = self.conjugate_gradient(self.policy.actor,
obs_batch,
rnn_states_batch,
actions_batch,
masks_batch,
available_actions_batch,
active_masks_batch,
loss_grad.data,
nsteps=10)
loss = loss.data.cpu().numpy()
params = self.flat_params(self.policy.actor)
fvp = self.fisher_vector_product(self.policy.actor,
obs_batch,
rnn_states_batch,
actions_batch,
masks_batch,
available_actions_batch,
active_masks_batch,
step_dir)
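# shs = (1/2) s^T H s for the search direction s; scaling by step_size = sqrt(kl_threshold / shs)
# makes the quadratic approximation of the KL constraint equal kl_threshold for the full step.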
shs = 0.5 * (step_dir * fvp).sum(0, keepdim=True)
step_size = 1 / torch.sqrt(shs / self.kl_threshold)[0]
full_step = step_size * step_dir
old_actor = Actor(self.policy.args,
self.policy.obs_space,
self.policy.act_space,
self.device)
self.update_model(old_actor, params)
expected_improve = (loss_grad * full_step).sum(0, keepdim=True)
expected_improve = expected_improve.data.cpu().numpy()
# Backtracking line search
flag = False
fraction = 1
for i in range(self.ls_step):
new_params = params + fraction * full_step
self.update_model(self.policy.actor, new_params)
values, action_log_probs, dist_entropy, action_mu, action_std, _ = self.policy.evaluate_actions(share_obs_batch,
obs_batch,
rnn_states_batch,
rnn_states_critic_batch,
actions_batch,
masks_batch,
available_actions_batch,
active_masks_batch)
ratio = torch.exp(action_log_probs - old_action_log_probs_batch)
if self._use_policy_active_masks:
new_loss = (torch.sum(ratio * factor_batch * adv_targ, dim=-1, keepdim=True) *
active_masks_batch).sum() / active_masks_batch.sum()
else:
new_loss = torch.sum(ratio * factor_batch * adv_targ, dim=-1, keepdim=True).mean()
new_loss = new_loss.data.cpu().numpy()
loss_improve = new_loss - loss
kl = self.kl_divergence(obs_batch,
rnn_states_batch,
actions_batch,
masks_batch,
available_actions_batch,
active_masks_batch,
new_actor=self.policy.actor,
old_actor=old_actor)
kl = kl.mean()
if kl < self.kl_threshold and (loss_improve / expected_improve) > self.accept_ratio and loss_improve.item()>0:
flag = True
break
expected_improve *= 0.5
fraction *= 0.5
if not flag:
params = self.flat_params(old_actor)
self.update_model(self.policy.actor, params)
print('policy update does not improve the surrogate')
return value_loss, critic_grad_norm, kl, loss_improve, expected_improve, dist_entropy, ratio
def train(self, buffer, update_actor=True):
"""
Perform a training update using minibatch GD.
:param buffer: (SharedReplayBuffer) buffer containing training data.
:param update_actor: (bool) whether to update actor network.
:return train_info: (dict) contains information regarding training update (e.g. loss, grad norms, etc).
"""
if self._use_popart:
advantages = buffer.returns[:-1] - self.value_normalizer.denormalize(buffer.value_preds[:-1])
else:
advantages = buffer.returns[:-1] - buffer.value_preds[:-1]
advantages_copy = advantages.copy()
advantages_copy[buffer.active_masks[:-1] == 0.0] = np.nan
mean_advantages = np.nanmean(advantages_copy)
std_advantages = np.nanstd(advantages_copy)
advantages = (advantages - mean_advantages) / (std_advantages + 1e-5)
train_info = {}
train_info['value_loss'] = 0
train_info['kl'] = 0
train_info['dist_entropy'] = 0
train_info['loss_improve'] = 0
train_info['expected_improve'] = 0
train_info['critic_grad_norm'] = 0
train_info['ratio'] = 0
if self._use_recurrent_policy:
data_generator = buffer.recurrent_generator(advantages, self.num_mini_batch, self.data_chunk_length)
elif self._use_naive_recurrent:
data_generator = buffer.naive_recurrent_generator(advantages, self.num_mini_batch)
else:
data_generator = buffer.feed_forward_generator(advantages, self.num_mini_batch)
for sample in data_generator:
value_loss, critic_grad_norm, kl, loss_improve, expected_improve, dist_entropy, imp_weights \
= self.trpo_update(sample, update_actor)
train_info['value_loss'] += value_loss.item()
train_info['kl'] += kl
train_info['loss_improve'] += loss_improve.item()
train_info['expected_improve'] += expected_improve
train_info['dist_entropy'] += dist_entropy.item()
train_info['critic_grad_norm'] += critic_grad_norm
train_info['ratio'] += imp_weights.mean()
num_updates = self.num_mini_batch
for k in train_info.keys():
train_info[k] /= num_updates
return train_info
def prep_training(self):
self.policy.actor.train()
self.policy.critic.train()
def prep_rollout(self):
self.policy.actor.eval()
self.policy.critic.eval()
================================================
FILE: algorithms/utils/act.py
================================================
from .distributions import Bernoulli, Categorical, DiagGaussian
import torch
import torch.nn as nn
class ACTLayer(nn.Module):
"""
MLP Module to compute actions.
:param action_space: (gym.Space) action space.
:param inputs_dim: (int) dimension of network input.
:param use_orthogonal: (bool) whether to use orthogonal initialization.
:param gain: (float) gain of the output layer of the network.
"""
def __init__(self, action_space, inputs_dim, use_orthogonal, gain, args=None):
super(ACTLayer, self).__init__()
self.mixed_action = False
self.multi_discrete = False
self.action_type = action_space.__class__.__name__
if action_space.__class__.__name__ == "Discrete":
action_dim = action_space.n
self.action_out = Categorical(inputs_dim, action_dim, use_orthogonal, gain)
elif action_space.__class__.__name__ == "Box":
action_dim = action_space.shape[0]
self.action_out = DiagGaussian(inputs_dim, action_dim, use_orthogonal, gain, args)
elif action_space.__class__.__name__ == "MultiBinary":
action_dim = action_space.shape[0]
self.action_out = Bernoulli(inputs_dim, action_dim, use_orthogonal, gain)
elif action_space.__class__.__name__ == "MultiDiscrete":
self.multi_discrete = True
action_dims = action_space.high - action_space.low + 1
self.action_outs = []
for action_dim in action_dims:
self.action_outs.append(Categorical(inputs_dim, action_dim, use_orthogonal, gain))
self.action_outs = nn.ModuleList(self.action_outs)
else: # discrete + continuous
self.mixed_action = True
continuous_dim = action_space[0].shape[0]
discrete_dim = action_space[1].n
self.action_outs = nn.ModuleList([DiagGaussian(inputs_dim, continuous_dim, use_orthogonal, gain, args),
Categorical(inputs_dim, discrete_dim, use_orthogonal, gain)])
def forward(self, x, available_actions=None, deterministic=False):
"""
Compute actions and action logprobs from given input.
:param x: (torch.Tensor) input to network.
:param available_actions: (torch.Tensor) denotes which actions are available to agent
(if None, all actions available)
:param deterministic: (bool) whether to sample from action distribution or return the mode.
:return actions: (torch.Tensor) actions to take.
:return action_log_probs: (torch.Tensor) log probabilities of taken actions.
"""
if self.mixed_action :
actions = []
action_log_probs = []
for action_out in self.action_outs:
action_logit = action_out(x)
action = action_logit.mode() if deterministic else action_logit.sample()
action_log_prob = action_logit.log_probs(action)
actions.append(action.float())
action_log_probs.append(action_log_prob)
actions = torch.cat(actions, -1)
action_log_probs = torch.sum(torch.cat(action_log_probs, -1), -1, keepdim=True)
elif self.multi_discrete:
actions = []
action_log_probs = []
for action_out in self.action_outs:
action_logit = action_out(x)
action = action_logit.mode() if deterministic else action_logit.sample()
action_log_prob = action_logit.log_probs(action)
actions.append(action)
action_log_probs.append(action_log_prob)
actions = torch.cat(actions, -1)
action_log_probs = torch.cat(action_log_probs, -1)
else:
action_logits = self.action_out(x, available_actions)
actions = action_logits.mode() if deterministic else action_logits.sample()
action_log_probs = action_logits.log_probs(actions)
return actions, action_log_probs
def get_probs(self, x, available_actions=None):
"""
Compute action probabilities from inputs.
:param x: (torch.Tensor) input to network.
:param available_actions: (torch.Tensor) denotes which actions are available to agent
(if None, all actions available)
:return action_probs: (torch.Tensor)
"""
if self.mixed_action or self.multi_discrete:
action_probs = []
for action_out in self.action_outs:
action_logit = action_out(x)
action_prob = action_logit.probs
action_probs.append(action_prob)
action_probs = torch.cat(action_probs, -1)
else:
action_logits = self.action_out(x, available_actions)
action_probs = action_logits.probs
return action_probs
def evaluate_actions(self, x, action, available_actions=None, active_masks=None):
"""
Compute log probability and entropy of given actions.
:param x: (torch.Tensor) input to network.
:param action: (torch.Tensor) actions whose entropy and log probability to evaluate.
:param available_actions: (torch.Tensor) denotes which actions are available to agent
(if None, all actions available)
:param active_masks: (torch.Tensor) denotes whether an agent is active or dead.
:return action_log_probs: (torch.Tensor) log probabilities of the input actions.
:return dist_entropy: (torch.Tensor) action distribution entropy for the given inputs.
"""
if self.mixed_action:
a, b = action.split((2, 1), -1)
b = b.long()
action = [a, b]
action_log_probs = []
dist_entropy = []
for action_out, act in zip(self.action_outs, action):
action_logit = action_out(x)
action_log_probs.append(action_logit.log_probs(act))
if active_masks is not None:
if len(action_logit.entropy().shape) == len(active_masks.shape):
dist_entropy.append((action_logit.entropy() * active_masks).sum()/active_masks.sum())
else:
dist_entropy.append((action_logit.entropy() * active_masks.squeeze(-1)).sum()/active_masks.sum())
else:
dist_entropy.append(action_logit.entropy().mean())
action_log_probs = torch.sum(torch.cat(action_log_probs, -1), -1, keepdim=True)
dist_entropy = dist_entropy[0] / 2.0 + dist_entropy[1] / 0.98
elif self.multi_discrete:
action = torch.transpose(action, 0, 1)
action_log_probs = []
dist_entropy = []
for action_out, act in zip(self.action_outs, action):
action_logit = action_out(x)
action_log_probs.append(action_logit.log_probs(act))
if active_masks is not None:
dist_entropy.append((action_logit.entropy()*active_masks.squeeze(-1)).sum()/active_masks.sum())
else:
dist_entropy.append(action_logit.entropy().mean())
action_log_probs = torch.cat(action_log_probs, -1)
dist_entropy = torch.tensor(dist_entropy).mean()
else:
action_logits = self.action_out(x, available_actions)
action_log_probs = action_logits.log_probs(action)
if active_masks is not None:
if self.action_type=="Discrete":
dist_entropy = (action_logits.entropy()*active_masks.squeeze(-1)).sum()/active_masks.sum()
else:
dist_entropy = (action_logits.entropy()*active_masks).sum()/active_masks.sum()
else:
dist_entropy = action_logits.entropy().mean()
return action_log_probs, dist_entropy
def evaluate_actions_trpo(self, x, action, available_actions=None, active_masks=None):
"""
Compute log probability and entropy of given actions.
:param x: (torch.Tensor) input to network.
:param action: (torch.Tensor) actions whose entropy and log probability to evaluate.
:param available_actions: (torch.Tensor) denotes which actions are available to agent
(if None, all actions available)
:param active_masks: (torch.Tensor) denotes whether an agent is active or dead.
:return action_log_probs: (torch.Tensor) log probabilities of the input actions.
:return dist_entropy: (torch.Tensor) action distribution entropy for the given inputs.
"""
if self.multi_discrete:
action = torch.transpose(action, 0, 1)
action_log_probs = []
dist_entropy = []
mu_collector = []
std_collector = []
probs_collector = []
for action_out, act in zip(self.action_outs, action):
action_logit = action_out(x)
mu = action_logit.mean
std = action_logit.stddev
action_log_probs.append(action_logit.log_probs(act))
mu_collector.append(mu)
std_collector.append(std)
probs_collector.append(action_logit.logits)
if active_masks is not None:
dist_entropy.append((action_logit.entropy()*active_masks.squeeze(-1)).sum()/active_masks.sum())
else:
dist_entropy.append(action_logit.entropy().mean())
action_mu = torch.cat(mu_collector,-1)
action_std = torch.cat(std_collector,-1)
all_probs = torch.cat(probs_collector,-1)
action_log_probs = torch.cat(action_log_probs, -1)
dist_entropy = torch.tensor(dist_entropy).mean()
else:
action_logits = self.action_out(x, available_actions)
action_mu = action_logits.mean
action_std = action_logits.stddev
action_log_probs = action_logits.log_probs(action)
if self.action_type=="Discrete":
all_probs = action_logits.logits
else:
all_probs = None
if active_masks is not None:
if self.action_type=="Discrete":
dist_entropy = (action_logits.entropy()*active_masks.squeeze(-1)).sum()/active_masks.sum()
else:
dist_entropy = (action_logits.entropy()*active_masks).sum()/active_masks.sum()
else:
dist_entropy = action_logits.entropy().mean()
return action_log_probs, dist_entropy, action_mu, action_std, all_probs
================================================
FILE: algorithms/utils/cnn.py
================================================
import torch.nn as nn
from .util import init
"""CNN Modules and utils."""
class Flatten(nn.Module):
def forward(self, x):
return x.view(x.size(0), -1)
class CNNLayer(nn.Module):
def __init__(self, obs_shape, hidden_size, use_orthogonal, use_ReLU, kernel_size=3, stride=1):
super(CNNLayer, self).__init__()
active_func = [nn.Tanh(), nn.ReLU()][use_ReLU]
init_method = [nn.init.xavier_uniform_, nn.init.orthogonal_][use_orthogonal]
gain = nn.init.calculate_gain(['tanh', 'relu'][use_ReLU])
def init_(m):
return init(m, init_method, lambda x: nn.init.constant_(x, 0), gain=gain)
input_channel = obs_shape[0]
input_width = obs_shape[1]
input_height = obs_shape[2]
self.cnn = nn.Sequential(
init_(nn.Conv2d(in_channels=input_channel,
out_channels=hidden_size // 2,
kernel_size=kernel_size,
stride=stride)
),
active_func,
Flatten(),
init_(nn.Linear(hidden_size // 2 * (input_width - kernel_size + stride) * (input_height - kernel_size + stride),
hidden_size)
),
active_func,
init_(nn.Linear(hidden_size, hidden_size)), active_func)
def forward(self, x):
x = x / 255.0
x = self.cnn(x)
return x
class CNNBase(nn.Module):
def __init__(self, args, obs_shape):
super(CNNBase, self).__init__()
self._use_orthogonal = args.use_orthogonal
self._use_ReLU = args.use_ReLU
self.hidden_size = args.hidden_size
self.cnn = CNNLayer(obs_shape, self.hidden_size, self._use_orthogonal, self._use_ReLU)
def forward(self, x):
x = self.cnn(x)
return x
================================================
FILE: algorithms/utils/distributions.py
================================================
import torch
import torch.nn as nn
from .util import init
"""
Modify standard PyTorch distributions to make them compatible with this codebase.
"""
#
# Standardize distribution interfaces
#
# Categorical
class FixedCategorical(torch.distributions.Categorical):
def sample(self):
return super().sample().unsqueeze(-1)
def log_probs(self, actions):
return (
super()
.log_prob(actions.squeeze(-1))
.view(actions.size(0), -1)
.sum(-1)
.unsqueeze(-1)
)
def mode(self):
return self.probs.argmax(dim=-1, keepdim=True)
# Normal
class FixedNormal(torch.distributions.Normal):
def log_probs(self, actions):
return super().log_prob(actions)
# return super().log_prob(actions).sum(-1, keepdim=True)
def entrop(self):  # sic: not `entropy`, so callers fall back to the base-class per-dimension entropy()
return super().entropy().sum(-1)
def mode(self):
return self.mean
# Bernoulli
class FixedBernoulli(torch.distributions.Bernoulli):
def log_probs(self, actions):
return super().log_prob(actions).view(actions.size(0), -1).sum(-1).unsqueeze(-1)
def entropy(self):
return super().entropy().sum(-1)
def mode(self):
return torch.gt(self.probs, 0.5).float()
class Categorical(nn.Module):
def __init__(self, num_inputs, num_outputs, use_orthogonal=True, gain=0.01):
super(Categorical, self).__init__()
init_method = [nn.init.xavier_uniform_, nn.init.orthogonal_][use_orthogonal]
def init_(m):
return init(m, init_method, lambda x: nn.init.constant_(x, 0), gain)
self.linear = init_(nn.Linear(num_inputs, num_outputs))
def forward(self, x, available_actions=None):
x = self.linear(x)
if available_actions is not None:
x[available_actions == 0] = -1e10
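# unavailable actions get a very large negative logit, so softmax assigns them ~zero probability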
return FixedCategorical(logits=x)
# class DiagGaussian(nn.Module):
# def __init__(self, num_inputs, num_outputs, use_orthogonal=True, gain=0.01):
# super(DiagGaussian, self).__init__()
#
# init_method = [nn.init.xavier_uniform_, nn.init.orthogonal_][use_orthogonal]
# def init_(m):
# return init(m, init_method, lambda x: nn.init.constant_(x, 0), gain)
#
# self.fc_mean = init_(nn.Linear(num_inputs, num_outputs))
# self.logstd = AddBias(torch.zeros(num_outputs))
#
# def forward(self, x, available_actions=None):
# action_mean = self.fc_mean(x)
#
# # An ugly hack for my KFAC implementation.
# zeros = torch.zeros(action_mean.size())
# if x.is_cuda:
# zeros = zeros.cuda()
#
# action_logstd = self.logstd(zeros)
# return FixedNormal(action_mean, action_logstd.exp())
class DiagGaussian(nn.Module):
def __init__(self, num_inputs, num_outputs, use_orthogonal=True, gain=0.01, args=None):
super(DiagGaussian, self).__init__()
init_method = [nn.init.xavier_uniform_, nn.init.orthogonal_][use_orthogonal]
def init_(m):
return init(m, init_method, lambda x: nn.init.constant_(x, 0), gain)
if args is not None:
self.std_x_coef = args.std_x_coef
self.std_y_coef = args.std_y_coef
else:
self.std_x_coef = 1.
self.std_y_coef = 0.5
self.fc_mean = init_(nn.Linear(num_inputs, num_outputs))
log_std = torch.ones(num_outputs) * self.std_x_coef
self.log_std = torch.nn.Parameter(log_std)
def forward(self, x, available_actions=None):
action_mean = self.fc_mean(x)
action_std = torch.sigmoid(self.log_std / self.std_x_coef) * self.std_y_coef
return FixedNormal(action_mean, action_std)
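# The state-independent std comes from a learnable log_std vector squashed by a sigmoid, so it lies in
# (0, std_y_coef); with the defaults std_x_coef=1, std_y_coef=0.5 the initial std is sigmoid(1)*0.5 ≈ 0.37.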
class Bernoulli(nn.Module):
def __init__(self, num_inputs, num_outputs, use_orthogonal=True, gain=0.01):
super(Bernoulli, self).__init__()
init_method = [nn.init.xavier_uniform_, nn.init.orthogonal_][use_orthogonal]
def init_(m):
return init(m, init_method, lambda x: nn.init.constant_(x, 0), gain)
self.linear = init_(nn.Linear(num_inputs, num_outputs))
def forward(self, x):
x = self.linear(x)
return FixedBernoulli(logits=x)
class AddBias(nn.Module):
def __init__(self, bias):
super(AddBias, self).__init__()
self._bias = nn.Parameter(bias.unsqueeze(1))
def forward(self, x):
if x.dim() == 2:
bias = self._bias.t().view(1, -1)
else:
bias = self._bias.t().view(1, -1, 1, 1)
return x + bias
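# Minimal usage sketch of the distribution heads (hypothetical shapes, for illustration only):
#   head = Categorical(num_inputs=64, num_outputs=5)
#   dist = head(torch.randn(8, 64))       # FixedCategorical over 5 actions
#   a = dist.sample()                      # (8, 1) action indices
#   logp = dist.log_probs(a)               # (8, 1) log-probabilities
#   g_head = DiagGaussian(num_inputs=64, num_outputs=3)
#   g_dist = g_head(torch.randn(8, 64))    # FixedNormal with learned, state-independent std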
================================================
FILE: algorithms/utils/mlp.py
================================================
import torch.nn as nn
from .util import init, get_clones
"""MLP modules."""
class MLPLayer(nn.Module):
def __init__(self, input_dim, hidden_size, layer_N, use_orthogonal, use_ReLU):
super(MLPLayer, self).__init__()
self._layer_N = layer_N
active_func = [nn.Tanh(), nn.ReLU()][use_ReLU]
init_method = [nn.init.xavier_uniform_, nn.init.orthogonal_][use_orthogonal]
gain = nn.init.calculate_gain(['tanh', 'relu'][use_ReLU])
def init_(m):
return init(m, init_method, lambda x: nn.init.constant_(x, 0), gain=gain)
self.fc1 = nn.Sequential(
init_(nn.Linear(input_dim, hidden_size)), active_func, nn.LayerNorm(hidden_size))
# self.fc_h = nn.Sequential(init_(
# nn.Linear(hidden_size, hidden_size)), active_func, nn.LayerNorm(hidden_size))
# self.fc2 = get_clones(self.fc_h, self._layer_N)
self.fc2 = nn.ModuleList([nn.Sequential(init_(
nn.Linear(hidden_size, hidden_size)), active_func, nn.LayerNorm(hidden_size)) for i in range(self._layer_N)])
def forward(self, x):
x = self.fc1(x)
for i in range(self._layer_N):
x = self.fc2[i](x)
return x
class MLPBase(nn.Module):
def __init__(self, args, obs_shape, cat_self=True, attn_internal=False):
super(MLPBase, self).__init__()
self._use_feature_normalization = args.use_feature_normalization
self._use_orthogonal = args.use_orthogonal
self._use_ReLU = args.use_ReLU
self._stacked_frames = args.stacked_frames
self._layer_N = args.layer_N
self.hidden_size = args.hidden_size
obs_dim = obs_shape[0]
if self._use_feature_normalization:
self.feature_norm = nn.LayerNorm(obs_dim)
self.mlp = MLPLayer(obs_dim, self.hidden_size,
self._layer_N, self._use_orthogonal, self._use_ReLU)
def forward(self, x):
if self._use_feature_normalization:
x = self.feature_norm(x)
x = self.mlp(x)
return x
================================================
FILE: algorithms/utils/rnn.py
================================================
import torch
import torch.nn as nn
"""RNN modules."""
class RNNLayer(nn.Module):
def __init__(self, inputs_dim, outputs_dim, recurrent_N, use_orthogonal):
super(RNNLayer, self).__init__()
self._recurrent_N = recurrent_N
self._use_orthogonal = use_orthogonal
self.rnn = nn.GRU(inputs_dim, outputs_dim, num_layers=self._recurrent_N)
for name, param in self.rnn.named_parameters():
if 'bias' in name:
nn.init.constant_(param, 0)
elif 'weight' in name:
if self._use_orthogonal:
nn.init.orthogonal_(param)
else:
nn.init.xavier_uniform_(param)
self.norm = nn.LayerNorm(outputs_dim)
def forward(self, x, hxs, masks):
if x.size(0) == hxs.size(0):
x, hxs = self.rnn(x.unsqueeze(0),
(hxs * masks.repeat(1, self._recurrent_N).unsqueeze(-1)).transpose(0, 1).contiguous())
x = x.squeeze(0)
hxs = hxs.transpose(0, 1)
else:
# x is a (T, N, -1) tensor that has been flatten to (T * N, -1)
N = hxs.size(0)
T = int(x.size(0) / N)
# unflatten
x = x.view(T, N, x.size(1))
# Same deal with masks
masks = masks.view(T, N)
# Let's figure out which steps in the sequence have a zero for any agent
# We will always assume t=0 has a zero in it as that makes the logic cleaner
has_zeros = ((masks[1:] == 0.0)
.any(dim=-1)
.nonzero()
.squeeze()
.cpu())
# +1 to correct the masks[1:]
if has_zeros.dim() == 0:
# Deal with scalar
has_zeros = [has_zeros.item() + 1]
else:
has_zeros = (has_zeros + 1).numpy().tolist()
# add t=0 and t=T to the list
has_zeros = [0] + has_zeros + [T]
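# e.g. with T=6 and masks[4] == 0 (an episode ended before step 4), has_zeros == [0, 4, 6]:
# the GRU runs on x[0:4] and x[4:6], and the hidden state entering the second chunk is zeroed by masks[4]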
hxs = hxs.transpose(0, 1)
outputs = []
for i in range(len(has_zeros) - 1):
# We can now process steps that don't have any zeros in masks together!
# This is much faster
start_idx = has_zeros[i]
end_idx = has_zeros[i + 1]
temp = (hxs * masks[start_idx].view(1, -1, 1).repeat(self._recurrent_N, 1, 1)).contiguous()
rnn_scores, hxs = self.rnn(x[start_idx:end_idx], temp)
outputs.append(rnn_scores)
# assert len(outputs) == T
# x is a (T, N, -1) tensor
x = torch.cat(outputs, dim=0)
# flatten
x = x.reshape(T * N, -1)
hxs = hxs.transpose(0, 1)
x = self.norm(x)
return x, hxs
================================================
FILE: algorithms/utils/util.py
================================================
import copy
import numpy as np
import torch
import torch.nn as nn
def init(module, weight_init, bias_init, gain=1):
weight_init(module.weight.data, gain=gain)
bias_init(module.bias.data)
return module
def get_clones(module, N):
return nn.ModuleList([copy.deepcopy(module) for i in range(N)])
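# check: convert numpy arrays to torch tensors; torch tensors pass through unchanged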
def check(input):
output = torch.from_numpy(input) if type(input) == np.ndarray else input
return output
================================================
FILE: configs/config.py
================================================
import argparse
def get_config():
"""
The configuration parser for common hyperparameters of all environments.
Please refer to the training scripts under `scripts/train/` for the private hyperparameters
used only in a specific environment.
Prepare parameters:
--algorithm_name
specify the algorithm, one of `["happo", "hatrpo"]`
--experiment_name
an identifier to distinguish different experiments.
--seed
set seed for numpy and torch
--seed_specify
by default True, use the seed given by --seed for numpy/torch; if set, a random seed is used instead.
--running_id
the running index of experiment (default=1)
--cuda
by default True, will use GPU to train; or else will use CPU;
--cuda_deterministic
by default, enforce deterministic CUDA behaviour so the random seed takes full effect; if set, bypass this.
--n_training_threads
number of training threads working in parallel. by default 1
--n_rollout_threads
number of parallel envs for training rollout. by default 32
--n_eval_rollout_threads
number of parallel envs for evaluating rollout. by default 1
--n_render_rollout_threads
number of parallel envs for rendering; can only be set to 1 for some environments.
--num_env_steps
number of env steps to train (default: 10e6)
Env parameters:
--env_name
specify the name of environment
--use_obs_instead_of_state
[only for some env] by default False, will use global state; or else will use concatenated local obs.
Replay Buffer parameters:
--episode_length
the max length of episode in the buffer.
Network parameters:
--share_policy
by default True, all agents will share the same network; if set, each agent trains its own policy.
--use_centralized_V
by default True, use centralized training mode; or else use decentralized training mode.
--stacked_frames
Number of input frames which should be stacked together.
--hidden_size
Dimension of hidden layers for actor/critic networks
--layer_N
Number of layers for actor/critic networks
--use_ReLU
by default True, will use ReLU. or else will use Tanh.
--use_popart
by default True, use running mean and std to normalize rewards.
--use_feature_normalization
by default True, apply layernorm to normalize inputs.
--use_orthogonal
by default True, use Orthogonal initialization for weights and 0 initialization for biases; or else, will use xavier uniform initialization.
--gain
by default 0.01, the gain of the last action layer.
--use_naive_recurrent_policy
by default False, use the whole trajectory to calculate hidden states.
--use_recurrent_policy
by default False, do not use a recurrent policy. If set, use a recurrent policy.
--recurrent_N
The number of recurrent layers (default 1).
--data_chunk_length
Time length of chunks used to train a recurrent_policy, default 10.
Optimizer parameters:
--lr
learning rate parameter, (default: 5e-4, fixed).
--critic_lr
learning rate of critic (default: 5e-4, fixed)
--opti_eps
RMSprop optimizer epsilon (default: 1e-5)
--weight_decay
coefficient of weight decay (default: 0)
TRPO parameters:
--kl_threshold
the threshold of kl-divergence (default: 0.01)
--ls_step
the number of line search steps (default: 10)
--accept_ratio
acceptance ratio of loss improvement (default: 0.5)
PPO parameters:
--ppo_epoch
number of ppo epochs (default: 15)
--use_clipped_value_loss
by default, clip loss value. If set, do not clip loss value.
--clip_param
ppo clip parameter (default: 0.2)
--num_mini_batch
number of batches for ppo (default: 1)
--entropy_coef
entropy term coefficient (default: 0.01)
--use_max_grad_norm
by default, use max norm of gradients. If set, do not use.
--max_grad_norm
max norm of gradients (default: 0.5)
--use_gae
by default, use generalized advantage estimation. If set, do not use gae.
--gamma
discount factor for rewards (default: 0.99)
--gae_lambda
gae lambda parameter (default: 0.95)
--use_proper_time_limits
by default False, compute returns without taking time limits into account. If set, compute returns taking time limits into account.
--use_huber_loss
by default, use huber loss. If set, do not use huber loss.
--use_value_active_masks
by default True, mask out useless data (e.g. inactive agents) in the value loss.
--huber_delta
coefficient of huber loss.
Run parameters:
--use_linear_lr_decay
by default, do not apply linear decay to learning rate. If set, use a linear schedule on the learning rate
--save_interval
time interval between two consecutive model saves.
--log_interval
time interval between two consecutive log prints.
--model_dir
by default None. set the path to pretrained model.
Eval parameters:
--use_eval
by default, do not start evaluation. If set, start evaluation alongside training.
--eval_interval
time interval between two consecutive evaluations.
--eval_episodes
number of episodes of a single evaluation.
Render parameters:
--save_gifs
by default, do not save render video. If set, save video.
--use_render
by default, do not render the env during training. If set, start rendering. Note: some environments have an internal render process that is not controlled by this hyperparameter.
--render_episodes
the number of episodes to render a given env
--ifi
the play interval of each rendered image in saved video.
Pretrained parameters:
"""
parser = argparse.ArgumentParser(description='onpolicy_algorithm', formatter_class=argparse.RawDescriptionHelpFormatter)
# prepare parameters
parser.add_argument("--algorithm_name", type=str,
default=' ', choices=["happo","hatrpo"])
parser.add_argument("--experiment_name", type=str,
default="check", help="an identifier to distinguish different experiment.")
parser.add_argument("--seed", type=int,
default=1, help="Random seed for numpy/torch")
parser.add_argument("--seed_specify", action="store_false",
default=True, help="Random or specify seed for numpy/torch")
parser.add_argument("--running_id", type=int,
default=1, help="the running index of experiment")
parser.add_argument("--cuda", action='store_false',
default=True, help="by default True, will use GPU to train; or else will use CPU;")
parser.add_argument("--cuda_deterministic", action='store_false',
default=True, help="by default, make sure random seed effective. if set, bypass such function.")
parser.add_argument("--n_training_threads", type=int,
default=1, help="Number of torch threads for training")
parser.add_argument("--n_rollout_threads", type=int,
default=32, help="Number of parallel envs for training rollouts")
parser.add_argument("--n_eval_rollout_threads", type=int,
default=1, help="Number of parallel envs for evaluating rollouts")
parser.add_argument("--n_render_rollout_threads", type=int,
default=1, help="Number of parallel envs for rendering rollouts")
parser.add_argument("--num_env_steps", type=int,
default=10e6, help='Number of environment steps to train (default: 10e6)')
parser.add_argument("--user_name", type=str,
default='marl',help="[for wandb usage], to specify user's name for simply collecting training data.")
# env parameters
parser.add_argument("--env_name", type=str,
default='StarCraft2', help="specify the name of environment")
parser.add_argument("--use_obs_instead_of_state", action='store_true',
default=False, help="Whether to use global state or concatenated obs")
# replay buffer parameters
parser.add_argument("--episode_length", type=int,
default=200, help="Max length for any episode")
# network parameters
parser.add_argument("--share_policy", action='store_false',
default=True, help='Whether agent share the same policy')
parser.add_argument("--use_centralized_V", action='store_false',
default=True, help="Whether to use centralized V function")
parser.add_argument("--stacked_frames", type=int,
default=1, help="Dimension of hidden layers for actor/critic networks")
parser.add_argument("--use_stacked_frames", action='store_true',
default=False, help="Whether to use stacked_frames")
parser.add_argument("--hidden_size", type=int,
default=64, help="Dimension of hidden layers for actor/critic networks")
parser.add_argument("--layer_N", type=int,
default=1, help="Number of layers for actor/critic networks")
parser.add_argument("--use_ReLU", action='store_false',
default=True, help="Whether to use ReLU")
parser.add_argument("--use_popart", action='store_false',
default=True, help="by default True, use running mean and std to normalize rewards.")
parser.add_argument("--use_valuenorm", action='store_false',
default=True, help="by default True, use running mean and std to normalize rewards.")
parser.add_argument("--use_feature_normalization", action='store_false',
default=True, help="Whether to apply layernorm to the inputs")
parser.add_argument("--use_orthogonal", action='store_false',
default=True, help="Whether to use Orthogonal initialization for weights and 0 initialization for biases")
parser.add_argument("--gain", type=float,
default=0.01, help="The gain # of last action layer")
# recurrent parameters
parser.add_argument("--use_naive_recurrent_policy", action='store_true',
default=False, help='Whether to use a naive recurrent policy')
parser.add_argument("--use_recurrent_policy", action='store_true',
default=False, help='use a recurrent policy')
parser.add_argument("--recurrent_N", type=int,
default=1, help="The number of recurrent layers.")
parser.add_argument("--data_chunk_length", type=int,
default=10, help="Time length of chunks used to train a recurrent_policy")
# optimizer parameters
parser.add_argument("--lr", type=float,
default=5e-4, help='learning rate (default: 5e-4)')
parser.add_argument("--critic_lr", type=float,
default=5e-4, help='critic learning rate (default: 5e-4)')
parser.add_argument("--opti_eps", type=float,
default=1e-5, help='RMSprop optimizer epsilon (default: 1e-5)')
parser.add_argument("--weight_decay", type=float, default=0)
parser.add_argument("--std_x_coef", type=float, default=1)
parser.add_argument("--std_y_coef", type=float, default=0.5)
# trpo parameters
parser.add_argument("--kl_threshold", type=float,
default=0.01, help='the threshold of kl-divergence (default: 0.01)')
parser.add_argument("--ls_step", type=int,
default=10, help='number of line search steps (default: 10)')
parser.add_argument("--accept_ratio", type=float,
default=0.5, help='acceptance ratio of loss improvement (default: 0.5)')
# ppo parameters
parser.add_argument("--ppo_epoch", type=int,
default=15, help='number of ppo epochs (default: 15)')
parser.add_argument("--use_clipped_value_loss", action='store_false',
default=True, help="by default, clip loss value. If set, do not clip loss value.")
parser.add_argument("--clip_param", type=float,
default=0.2, help='ppo clip parameter (default: 0.2)')
parser.add_argument("--num_mini_batch", type=int,
default=1, help='number of batches for ppo (default: 1)')
parser.add_argument("--entropy_coef", type=float,
default=0.01, help='entropy term coefficient (default: 0.01)')
parser.add_argument("--value_loss_coef", type=float,
default=1, help='value loss coefficient (default: 1)')
parser.add_argument("--use_max_grad_norm", action='store_false',
default=True, help="by default, use max norm of gradients. If set, do not use.")
parser.add_argument("--max_grad_norm", type=float,
default=10.0, help='max norm of gradients (default: 10.0)')
parser.add_argument("--use_gae", action='store_false',
default=True, help='use generalized advantage estimation')
parser.add_argument("--gamma", type=float, default=0.99,
help='discount factor for rewards (default: 0.99)')
parser.add_argument("--gae_lambda", type=float, default=0.95,
help='gae lambda parameter (default: 0.95)')
parser.add_argument("--use_proper_time_limits", action='store_true',
default=False, help='compute returns taking into account time limits')
parser.add_argument("--use_huber_loss", action='store_false',
default=True, help="by default, use huber loss. If set, do not use huber loss.")
parser.add_argument("--use_value_active_masks", action='store_false',
default=True, help="by default True, whether to mask useless data in value loss.")
parser.add_argument("--use_policy_active_masks", action='store_false',
default=True, help="by default True, whether to mask useless data in policy loss.")
parser.add_argument("--huber_delta", type=float,
default=10.0, help=" coefficience of huber loss.")
# run parameters
parser.add_argument("--use_linear_lr_decay", action='store_true',
default=False, help='use a linear schedule on the learning rate')
parser.add_argument("--save_interval", type=int,
default=1, help="time duration between contiunous twice models saving.")
parser.add_argument("--log_interval", type=int,
default=5, help="time duration between contiunous twice log printing.")
parser.add_argument("--model_dir", type=str,
default=None, help="by default None. set the path to pretrained model.")
# eval parameters
parser.add_argument("--use_eval", action='store_true',
default=False, help="by default, do not start evaluation. If set`, start evaluation alongside with training.")
parser.add_argument("--eval_interval", type=int,
default=25, help="time duration between contiunous twice evaluation progress.")
parser.add_argument("--eval_episodes", type=int,
default=32, help="number of episodes of a single evaluation.")
# render parameters
parser.add_argument("--save_gifs", action='store_true',
default=False, help="by default, do not save render video. If set, save video.")
parser.add_argument("--use_render", action='store_true',
default=False, help="by default, do not render the env during training. If set, start render. Note: something, the environment has internal render process which is not controlled by this hyperparam.")
parser.add_argument("--render_episodes", type=int,
default=5, help="the number of episodes to render a given env")
parser.add_argument("--ifi", type=float,
default=0.1, help="the play interval of each rendered image in saved video.")
return parser
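# Minimal usage sketch (hypothetical flag values; environment-specific arguments are added by the
# training scripts under scripts/train/):
#   parser = get_config()
#   all_args = parser.parse_known_args(
#       ["--algorithm_name", "happo", "--env_name", "StarCraft2", "--seed", "1"])[0]
#   print(all_args.algorithm_name, all_args.hidden_size, all_args.lr)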
================================================
FILE: envs/__init__.py
================================================
import socket
from absl import flags
FLAGS = flags.FLAGS
FLAGS(['train_sc.py'])
================================================
FILE: envs/env_wrappers.py
================================================
"""
Modified from OpenAI Baselines code to work with multi-agent envs
"""
import numpy as np
import torch
from multiprocessing import Process, Pipe
from abc import ABC, abstractmethod
from utils.util import tile_images
class CloudpickleWrapper(object):
"""
Uses cloudpickle to serialize contents (otherwise multiprocessing tries to use pickle)
"""
def __init__(self, x):
self.x = x
def __getstate__(self):
import cloudpickle
return cloudpickle.dumps(self.x)
def __setstate__(self, ob):
import pickle
self.x = pickle.loads(ob)
class ShareVecEnv(ABC):
"""
An abstract asynchronous, vectorized environment.
Used to batch data from multiple copies of an environment, so that
each observation becomes a batch of observations, and the expected action is a batch of actions to
be applied per-environment.
"""
closed = False
viewer = None
metadata = {
'render.modes': ['human', 'rgb_array']
}
def __init__(self, num_envs, observation_space, share_observation_space, action_space):
self.num_envs = num_envs
self.observation_space = observation_space
self.share_observation_space = share_observation_space
self.action_space = action_space
@abstractmethod
def reset(self):
"""
Reset all the environments and return an array of
observations, or a dict of observation arrays.
If step_async is still doing work, that work will
be cancelled and step_wait() should not be called
until step_async() is invoked again.
"""
pass
@abstractmethod
def step_async(self, actions):
"""
Tell all the environments to start taking a step
with the given actions.
Call step_wait() to get the results of the step.
You should not call this if a step_async run is
already pending.
"""
pass
@abstractmethod
def step_wait(self):
"""
Wait for the step taken with step_async().
Returns (obs, rews, dones, infos):
- obs: an array of observations, or a dict of
arrays of observations.
- rews: an array of rewards
- dones: an array of "episode done" booleans
- infos: a sequence of info objects
"""
pass
def close_extras(self):
"""
Clean up the extra resources, beyond what's in this base class.
Only runs when not self.closed.
"""
pass
def close(self):
if self.closed:
return
if self.viewer is not None:
self.viewer.close()
self.close_extras()
self.closed = True
def step(self, actions):
"""
Step the environments synchronously.
This is available for backwards compatibility.
"""
self.step_async(actions)
return self.step_wait()
def render(self, mode='human'):
imgs = self.get_images()
bigimg = tile_images(imgs)
if mode == 'human':
self.get_viewer().imshow(bigimg)
return self.get_viewer().isopen
elif mode == 'rgb_array':
return bigimg
else:
raise NotImplementedError
def get_images(self):
"""
Return RGB images from each environment
"""
raise NotImplementedError
@property
def unwrapped(self):
if isinstance(self, VecEnvWrapper):
return self.venv.unwrapped
else:
return self
def get_viewer(self):
if self.viewer is None:
from gym.envs.classic_control import rendering
self.viewer = rendering.SimpleImageViewer()
return self.viewer
def worker(remote, parent_remote, env_fn_wrapper):
parent_remote.close()
env = env_fn_wrapper.x()
while True:
cmd, data = remote.recv()
if cmd == 'step':
ob, reward, done, info = env.step(data)
if 'bool' in done.__class__.__name__:
if done:
ob = env.reset()
else:
if np.all(done):
ob = env.reset()
remote.send((ob, reward, done, info))
elif cmd == 'reset':
ob = env.reset()
remote.send((ob))
elif cmd == 'render':
if data == "rgb_array":
fr = env.render(mode=data)
remote.send(fr)
elif data == "human":
env.render(mode=data)
elif cmd == 'reset_task':
ob = env.reset_task()
remote.send(ob)
elif cmd == 'close':
env.close()
remote.close()
break
elif cmd == 'get_spaces':
remote.send((env.observation_space, env.share_observation_space, env.action_space))
else:
raise NotImplementedError
class GuardSubprocVecEnv(ShareVecEnv):
def __init__(self, env_fns, spaces=None):
"""
envs: list of gym environments to run in subprocesses
"""
self.waiting = False
self.closed = False
nenvs = len(env_fns)
self.remotes, self.work_remotes = zip(*[Pipe() for _ in range(nenvs)])
self.ps = [Process(target=worker, args=(work_remote, remote, CloudpickleWrapper(env_fn)))
for (work_remote, remote, env_fn) in zip(self.work_remotes, self.remotes, env_fns)]
for p in self.ps:
p.daemon = False # could cause zombie process
p.start()
for remote in self.work_remotes:
remote.close()
self.remotes[0].send(('get_spaces', None))
observation_space, share_observation_space, action_space = self.remotes[0].recv()
ShareVecEnv.__init__(self, len(env_fns), observation_space,
share_observation_space, action_space)
def step_async(self, actions):
for remote, action in zip(self.remotes, actions):
remote.send(('step', action))
self.waiting = True
def step_wait(self):
results = [remote.recv() for remote in self.remotes]
self.waiting = False
obs, rews, dones, infos = zip(*results)
return np.stack(obs), np.stack(rews), np.stack(dones), infos
def reset(self):
for remote in self.remotes:
remote.send(('reset', None))
obs = [remote.recv() for remote in self.remotes]
return np.stack(obs)
def reset_task(self):
for remote in self.remotes:
remote.send(('reset_task', None))
return np.stack([remote.recv() for remote in self.remotes])
def close(self):
if self.closed:
return
if self.waiting:
for remote in self.remotes:
remote.recv()
for remote in self.remotes:
remote.send(('close', None))
for p in self.ps:
p.join()
self.closed = True
class SubprocVecEnv(ShareVecEnv):
def __init__(self, env_fns, spaces=None):
"""
envs: list of gym environments to run in subprocesses
"""
self.waiting = False
self.closed = False
nenvs = len(env_fns)
self.remotes, self.work_remotes = zip(*[Pipe() for _ in range(nenvs)])
self.ps = [Process(target=worker, args=(work_remote, remote, CloudpickleWrapper(env_fn)))
for (work_remote, remote, env_fn) in zip(self.work_remotes, self.remotes, env_fns)]
for p in self.ps:
p.daemon = True # if the main process crashes, we should not cause things to hang
p.start()
for remote in self.work_remotes:
remote.close()
self.remotes[0].send(('get_spaces', None))
observation_space, share_observation_space, action_space = self.remotes[0].recv()
ShareVecEnv.__init__(self, len(env_fns), observation_space,
share_observation_space, action_space)
def step_async(self, actions):
for remote, action in zip(self.remotes, actions):
remote.send(('step', action))
self.waiting = True
def step_wait(self):
results = [remote.recv() for remote in self.remotes]
self.waiting = False
obs, rews, dones, infos = zip(*results)
return np.stack(obs), np.stack(rews), np.stack(dones), infos
def reset(self):
for remote in self.remotes:
remote.send(('reset', None))
obs = [remote.recv() for remote in self.remotes]
return np.stack(obs)
def reset_task(self):
for remote in self.remotes:
remote.send(('reset_task', None))
return np.stack([remote.recv() for remote in self.remotes])
def close(self):
if self.closed:
return
if self.waiting:
for remote in self.remotes:
remote.recv()
for remote in self.remotes:
remote.send(('close', None))
for p in self.ps:
p.join()
self.closed = True
def render(self, mode="rgb_array"):
for remote in self.remotes:
remote.send(('render', mode))
if mode == "rgb_array":
frame = [remote.recv() for remote in self.remotes]
return np.stack(frame)
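# Minimal usage sketch (assumes a user-defined factory `make_env(rank)` that returns a thunk building
# one environment exposing observation_space, share_observation_space and action_space):
#   envs = SubprocVecEnv([make_env(i) for i in range(n_rollout_threads)])
#   obs = envs.reset()                              # stacked observations, shape (n_envs, ...)
#   obs, rews, dones, infos = envs.step(actions)    # one action entry per env
#   envs.close()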
def shareworker(remote, parent_remote, env_fn_wrapper):
parent_remote.close()
env = env_fn_wrapper.x()
while True:
cmd, data = remote.recv()
if cmd == 'step':
ob, s_ob, reward, done, info, available_actions = env.step(data)
if 'bool' in done.__class__.__name__:
if done:
ob, s_ob, available_actions = env.reset()
else:
if np.all(done):
ob, s_ob, available_actions = env.reset()
remote.send((ob, s_ob, reward, done, info, available_actions))
elif cmd == 'reset':
ob, s_ob, available_actions = env.reset()
remote.send((ob, s_ob, available_actions))
elif cmd == 'reset_task':
ob = env.reset_task()
remote.send(ob)
elif cmd == 'render':
if data == "rgb_array":
fr = env.render(mode=data)
remote.send(fr)
elif data == "human":
env.render(mode=data)
elif cmd == 'close':
env.close()
remote.close()
break
elif cmd == 'get_spaces':
remote.send(
(env.observation_space, env.share_observation_space, env.action_space))
elif cmd == 'render_vulnerability':
fr = env.render_vulnerability(data)
remote.send((fr))
elif cmd == 'get_num_agents':
remote.send((env.n_agents))
else:
raise NotImplementedError
class ShareSubprocVecEnv(ShareVecEnv):
def __init__(self, env_fns, spaces=None):
"""
envs: list of gym environments to run in subprocesses
"""
self.waiting = False
self.closed = False
nenvs = len(env_fns)
self.remotes, self.work_remotes = zip(*[Pipe() for _ in range(nenvs)])
self.ps = [Process(target=shareworker, args=(work_remote, remote, CloudpickleWrapper(env_fn)))
for (work_remote, remote, env_fn) in zip(self.work_remotes, self.remotes, env_fns)]
for p in self.ps:
p.daemon = True # if the main process crashes, we should not cause things to hang
p.start()
for remote in self.work_remotes:
remote.close()
self.remotes[0].send(('get_num_agents', None))
self.n_agents = self.remotes[0].recv()
self.remotes[0].send(('get_spaces', None))
observation_space, share_observation_space, action_space = self.remotes[0].recv(
)
ShareVecEnv.__init__(self, len(env_fns), observation_space,
share_observation_space, action_space)
def step_async(self, actions):
for remote, action in zip(self.remotes, actions):
remote.send(('step', action))
self.waiting = True
def step_wait(self):
results = [remote.recv() for remote in self.remotes]
self.waiting = False
obs, share_obs, rews, dones, infos, available_actions = zip(*results)
return np.stack(obs), np.stack(share_obs), np.stack(rews), np.stack(dones), infos, np.stack(available_actions)
def reset(self):
for remote in self.remotes:
remote.send(('reset', None))
results = [remote.recv() for remote in self.remotes]
obs, share_obs, available_actions = zip(*results)
return np.stack(obs), np.stack(share_obs), np.stack(available_actions)
def reset_task(self):
for remote in self.remotes:
remote.send(('reset_task', None))
return np.stack([remote.recv() for remote in self.remotes])
def close(self):
if self.closed:
return
if self.waiting:
for remote in self.remotes:
remote.recv()
for remote in self.remotes:
remote.send(('close', None))
for p in self.ps:
p.join()
self.closed = True
def choosesimpleworker(remote, parent_remote, env_fn_wrapper):
parent_remote.close()
env = env_fn_wrapper.x()
while True:
cmd, data = remote.recv()
if cmd == 'step':
ob, reward, done, info = env.step(data)
remote.send((ob, reward, done, info))
elif cmd == 'reset':
ob = env.reset(data)
remote.send((ob))
elif cmd == 'reset_task':
ob = env.reset_task()
remote.send(ob)
elif cmd == 'close':
env.close()
remote.close()
break
elif cmd == 'render':
if data == "rgb_array":
fr = env.render(mode=data)
remote.send(fr)
elif data == "human":
env.render(mode=data)
elif cmd == 'get_spaces':
remote.send(
(env.observation_space, env.share_observation_space, env.action_space))
else:
raise NotImplementedError
class ChooseSimpleSubprocVecEnv(ShareVecEnv):
def __init__(self, env_fns, spaces=None):
"""
envs: list of gym environments to run in subprocesses
"""
self.waiting = False
self.closed = False
nenvs = len(env_fns)
self.remotes, self.work_remotes = zip(*[Pipe() for _ in range(nenvs)])
self.ps = [Process(target=choosesimpleworker, args=(work_remote, remote, CloudpickleWrapper(env_fn)))
for (work_remote, remote, env_fn) in zip(self.work_remotes, self.remotes, env_fns)]
for p in self.ps:
p.daemon = True # if the main process crashes, we should not cause things to hang
p.start()
for remote in self.work_remotes:
remote.close()
self.remotes[0].send(('get_spaces', None))
observation_space, share_observation_space, action_space = self.remotes[0].recv()
ShareVecEnv.__init__(self, len(env_fns), observation_space,
share_observation_space, action_space)
def step_async(self, actions):
for remote, action in zip(self.remotes, actions):
remote.send(('step', action))
self.waiting = True
def step_wait(self):
results = [remote.recv() for remote in self.remotes]
self.waiting = False
obs, rews, dones, infos = zip(*results)
return np.stack(obs), np.stack(rews), np.stack(dones), infos
def reset(self, reset_choose):
for remote, choose in zip(self.remotes, reset_choose):
remote.send(('reset', choose))
obs = [remote.recv() for remote in self.remotes]
return np.stack(obs)
def render(self, mode="rgb_array"):
for remote in self.remotes:
remote.send(('render', mode))
if mode == "rgb_array":
frame = [remote.recv() for remote in self.remotes]
return np.stack(frame)
def reset_task(self):
for remote in self.remotes:
remote.send(('reset_task', None))
return np.stack([remote.recv() for remote in self.remotes])
def close(self):
if self.closed:
return
if self.waiting:
for remote in self.remotes:
remote.recv()
for remote in self.remotes:
remote.send(('close', None))
for p in self.ps:
p.join()
self.closed = True
def chooseworker(remote, parent_remote, env_fn_wrapper):
parent_remote.close()
env = env_fn_wrapper.x()
while True:
cmd, data = remote.recv()
if cmd == 'step':
ob, s_ob, reward, done, info, available_actions = env.step(data)
remote.send((ob, s_ob, reward, done, info, available_actions))
elif cmd == 'reset':
ob, s_ob, available_actions = env.reset(data)
remote.send((ob, s_ob, available_actions))
elif cmd == 'reset_task':
ob = env.reset_task()
remote.send(ob)
elif cmd == 'close':
env.close()
remote.close()
break
elif cmd == 'render':
remote.send(env.render(mode='rgb_array'))
elif cmd == 'get_spaces':
remote.send(
(env.observation_space, env.share_observation_space, env.action_space))
else:
raise NotImplementedError
class ChooseSubprocVecEnv(ShareVecEnv):
def __init__(self, env_fns, spaces=None):
"""
envs: list of gym environments to run in subprocesses
"""
self.waiting = False
self.closed = False
nenvs = len(env_fns)
self.remotes, self.work_remotes = zip(*[Pipe() for _ in range(nenvs)])
self.ps = [Process(target=chooseworker, args=(work_remote, remote, CloudpickleWrapper(env_fn)))
for (work_remote, remote, env_fn) in zip(self.work_remotes, self.remotes, env_fns)]
for p in self.ps:
p.daemon = True # if the main process crashes, we should not cause things to hang
p.start()
for remote in self.work_remotes:
remote.close()
self.remotes[0].send(('get_spaces', None))
observation_space, share_observation_space, action_space = self.remotes[0].recv(
)
ShareVecEnv.__init__(self, len(env_fns), observation_space,
share_observation_space, action_space)
def step_async(self, actions):
for remote, action in zip(self.remotes, actions):
remote.send(('step', action))
self.waiting = True
def step_wait(self):
results = [remote.recv() for remote in self.remotes]
self.waiting = False
obs, share_obs, rews, dones, infos, available_actions = zip(*results)
return np.stack(obs), np.stack(share_obs), np.stack(rews), np.stack(dones), infos, np.stack(available_actions)
def reset(self, reset_choose):
for remote, choose in zip(self.remotes, reset_choose):
remote.send(('reset', choose))
results = [remote.recv() for remote in self.remotes]
obs, share_obs, available_actions = zip(*results)
return np.stack(obs), np.stack(share_obs), np.stack(available_actions)
def reset_task(self):
for remote in self.remotes:
remote.send(('reset_task', None))
return np.stack([remote.recv() for remote in self.remotes])
def close(self):
if self.closed:
return
if self.waiting:
for remote in self.remotes:
remote.recv()
for remote in self.remotes:
remote.send(('close', None))
for p in self.ps:
p.join()
self.closed = True
def chooseguardworker(remote, parent_remote, env_fn_wrapper):
parent_remote.close()
env = env_fn_wrapper.x()
while True:
cmd, data = remote.recv()
if cmd == 'step':
ob, reward, done, info = env.step(data)
remote.send((ob, reward, done, info))
elif cmd == 'reset':
ob = env.reset(data)
remote.send((ob))
elif cmd == 'reset_task':
ob = env.reset_task()
remote.send(ob)
elif cmd == 'close':
env.close()
remote.close()
break
elif cmd == 'get_spaces':
remote.send(
(env.observation_space, env.share_observation_space, env.action_space))
else:
raise NotImplementedError
class ChooseGuardSubprocVecEnv(ShareVecEnv):
def __init__(self, env_fns, spaces=None):
"""
envs: list of gym environments to run in subprocesses
"""
self.waiting = False
self.closed = False
nenvs = len(env_fns)
self.remotes, self.work_remotes = zip(*[Pipe() for _ in range(nenvs)])
self.ps = [Process(target=chooseguardworker, args=(work_remote, remote, CloudpickleWrapper(env_fn)))
for (work_remote, remote, env_fn) in zip(self.work_remotes, self.remotes, env_fns)]
for p in self.ps:
p.daemon = False # could cause zombie process
p.start()
for remote in self.work_remotes:
remote.close()
self.remotes[0].send(('get_spaces', None))
observation_space, share_observation_space, action_space = self.remotes[0].recv(
)
ShareVecEnv.__init__(self, len(env_fns), observation_space,
share_observation_space, action_space)
def step_async(self, actions):
for remote, action in zip(self.remotes, actions):
remote.send(('step', action))
self.waiting = True
def step_wait(self):
results = [remote.recv() for remote in self.remotes]
self.waiting = False
obs, rews, dones, infos = zip(*results)
return np.stack(obs), np.stack(rews), np.stack(dones), infos
def reset(self, reset_choose):
for remote, choose in zip(self.remotes, reset_choose):
remote.send(('reset', choose))
obs = [remote.recv() for remote in self.remotes]
return np.stack(obs)
def reset_task(self):
for remote in self.remotes:
remote.send(('reset_task', None))
return np.stack([remote.recv() for remote in self.remotes])
def close(self):
if self.closed:
return
if self.waiting:
for remote in self.remotes:
remote.recv()
for remote in self.remotes:
remote.send(('close', None))
for p in self.ps:
p.join()
self.closed = True
# single env
class DummyVecEnv(ShareVecEnv):
def __init__(self, env_fns):
self.envs = [fn() for fn in env_fns]
env = self.envs[0]
ShareVecEnv.__init__(self, len(
env_fns), env.observation_space, env.share_observation_space, env.action_space)
self.actions = None
def step_async(self, actions):
self.actions = actions
def step_wait(self):
results = [env.step(a) for (a, env) in zip(self.actions, self.envs)]
obs, rews, dones, infos = map(np.array, zip(*results))
for (i, done) in enumerate(dones):
if 'bool' in done.__class__.__name__:
if done:
obs[i] = self.envs[i].reset()
else:
if np.all(done):
obs[i] = self.envs[i].reset()
self.actions = None
return obs, rews, dones, infos
def reset(self):
obs = [env.reset() for env in self.envs]
return np.array(obs)
def close(self):
for env in self.envs:
env.close()
def render(self, mode="human"):
if mode == "rgb_array":
return np.array([env.render(mode=mode) for env in self.envs])
elif mode == "human":
for env in self.envs:
env.render(mode=mode)
else:
raise NotImplementedError
class ShareDummyVecEnv(ShareVecEnv):
def __init__(self, env_fns):
self.envs = [fn() for fn in env_fns]
env = self.envs[0]
ShareVecEnv.__init__(self, len(
env_fns), env.observation_space, env.share_observation_space, env.action_space)
self.actions = None
def step_async(self, actions):
self.actions = actions
def step_wait(self):
results = [env.step(a) for (a, env) in zip(self.actions, self.envs)]
obs, share_obs, rews, dones, infos, available_actions = map(
np.array, zip(*results))
for (i, done) in enumerate(dones):
if 'bool' in done.__class__.__name__:
if done:
obs[i], share_obs[i], available_actions[i] = self.envs[i].reset()
else:
if np.all(done):
obs[i], share_obs[i], available_actions[i] = self.envs[i].reset()
self.actions = None
return obs, share_obs, rews, dones, infos, available_actions
def reset(self):
results = [env.reset() for env in self.envs]
obs, share_obs, available_actions = map(np.array, zip(*results))
return obs, share_obs, available_actions
def close(self):
for env in self.envs:
env.close()
def render(self, mode="human"):
if mode == "rgb_array":
return np.array([env.render(mode=mode) for env in self.envs])
elif mode == "human":
for env in self.envs:
env.render(mode=mode)
else:
raise NotImplementedError
class ChooseDummyVecEnv(ShareVecEnv):
def __init__(self, env_fns):
self.envs = [fn() for fn in env_fns]
env = self.envs[0]
ShareVecEnv.__init__(self, len(
env_fns), env.observation_space, env.share_observation_space, env.action_space)
self.actions = None
def step_async(self, actions):
self.actions = actions
def step_wait(self):
results = [env.step(a) for (a, env) in zip(self.actions, self.envs)]
obs, share_obs, rews, dones, infos, available_actions = map(
np.array, zip(*results))
self.actions = None
return obs, share_obs, rews, dones, infos, available_actions
def reset(self, reset_choose):
results = [env.reset(choose)
for (env, choose) in zip(self.envs, reset_choose)]
obs, share_obs, available_actions = map(np.array, zip(*results))
return obs, share_obs, available_actions
def close(self):
for env in self.envs:
env.close()
def render(self, mode="human"):
if mode == "rgb_array":
return np.array([env.render(mode=mode) for env in self.envs])
elif mode == "human":
for env in self.envs:
env.render(mode=mode)
else:
raise NotImplementedError
class ChooseSimpleDummyVecEnv(ShareVecEnv):
def __init__(self, env_fns):
self.envs = [fn() for fn in env_fns]
env = self.envs[0]
ShareVecEnv.__init__(self, len(
env_fns), env.observation_space, env.share_observation_space, env.action_space)
self.actions = None
def step_async(self, actions):
self.actions = actions
def step_wait(self):
results = [env.step(a) for (a, env) in zip(self.actions, self.envs)]
obs, rews, dones, infos = map(np.array, zip(*results))
self.actions = None
return obs, rews, dones, infos
def reset(self, reset_choose):
obs = [env.reset(choose)
for (env, choose) in zip(self.envs, reset_choose)]
return np.array(obs)
def close(self):
for env in self.envs:
env.close()
def render(self, mode="human"):
if mode == "rgb_array":
return np.array([env.render(mode=mode) for env in self.envs])
elif mode == "human":
for env in self.envs:
env.render(mode=mode)
else:
raise NotImplementedError
================================================
FILE: envs/ma_mujoco/__init__.py
================================================
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/__init__.py
================================================
from .mujoco_multi import MujocoMulti
from .coupled_half_cheetah import CoupledHalfCheetah
from .manyagent_swimmer import ManyAgentSwimmerEnv
from .manyagent_ant import ManyAgentAntEnv
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/assets/.gitignore
================================================
*.auto.xml
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/assets/__init__.py
================================================
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/assets/coupled_half_cheetah.xml
================================================
-
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/assets/manyagent_ant.xml
================================================
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/assets/manyagent_ant.xml.template
================================================
{{ body }}
{{ actuators }}
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/assets/manyagent_ant__stage1.xml
================================================
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/assets/manyagent_swimmer.xml.template
================================================
{{ body }}
{{ actuators }}
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/assets/manyagent_swimmer__bckp2.xml
================================================
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/assets/manyagent_swimmer_bckp.xml
================================================
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/coupled_half_cheetah.py
================================================
import numpy as np
from gym import utils
from gym.envs.mujoco import mujoco_env
import os
class CoupledHalfCheetah(mujoco_env.MujocoEnv, utils.EzPickle):
def __init__(self, **kwargs):
mujoco_env.MujocoEnv.__init__(self, os.path.join(os.path.dirname(os.path.abspath(__file__)), 'assets', 'coupled_half_cheetah.xml'), 5)
utils.EzPickle.__init__(self)
def step(self, action):
xposbefore1 = self.sim.data.qpos[0]
xposbefore2 = self.sim.data.qpos[len(self.sim.data.qpos) // 2]
self.do_simulation(action, self.frame_skip)
xposafter1 = self.sim.data.qpos[0]
xposafter2 = self.sim.data.qpos[len(self.sim.data.qpos)//2]
ob = self._get_obs()
reward_ctrl1 = - 0.1 * np.square(action[0:len(action)//2]).sum()
reward_ctrl2 = - 0.1 * np.square(action[len(action)//2:]).sum()
reward_run1 = (xposafter1 - xposbefore1)/self.dt
reward_run2 = (xposafter2 - xposbefore2) / self.dt
reward = (reward_ctrl1 + reward_ctrl2)/2.0 + (reward_run1 + reward_run2)/2.0
done = False
return ob, reward, done, dict(reward_run1=reward_run1, reward_ctrl1=reward_ctrl1,
reward_run2=reward_run2, reward_ctrl2=reward_ctrl2)
def _get_obs(self):
return np.concatenate([
self.sim.data.qpos.flat[1:],
self.sim.data.qvel.flat,
])
def reset_model(self):
qpos = self.init_qpos + self.np_random.uniform(low=-.1, high=.1, size=self.model.nq)
qvel = self.init_qvel + self.np_random.randn(self.model.nv) * .1
self.set_state(qpos, qvel)
return self._get_obs()
def viewer_setup(self):
self.viewer.cam.distance = self.model.stat.extent * 0.5
def get_env_info(self):
return {"episode_limit": self.episode_limit}
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/manyagent_ant.py
================================================
import numpy as np
from gym import utils
from gym.envs.mujoco import mujoco_env
from jinja2 import Template
import os
class ManyAgentAntEnv(mujoco_env.MujocoEnv, utils.EzPickle):
def __init__(self, **kwargs):
agent_conf = kwargs.get("agent_conf")
n_agents = int(agent_conf.split("x")[0])
n_segs_per_agents = int(agent_conf.split("x")[1])
n_segs = n_agents * n_segs_per_agents
# Check whether asset file exists already, otherwise create it
asset_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'assets',
'manyagent_ant_{}_agents_each_{}_segments.auto.xml'.format(n_agents,
n_segs_per_agents))
#if not os.path.exists(asset_path):
print("Auto-Generating Manyagent Ant asset with {} segments at {}.".format(n_segs, asset_path))
self._generate_asset(n_segs=n_segs, asset_path=asset_path)
#asset_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'assets',
# 'manyagent_swimmer.xml')
mujoco_env.MujocoEnv.__init__(self, asset_path, 4)
utils.EzPickle.__init__(self)
def _generate_asset(self, n_segs, asset_path):
template_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'assets',
'manyagent_ant.xml.template')
with open(template_path, "r") as f:
t = Template(f.read())
body_str_template = """
"""
body_close_str_template ="\n"
actuator_str_template = """\t
\n"""
body_str = ""
for i in range(1,n_segs):
body_str += body_str_template.format(*([i]*16))
body_str += body_close_str_template*(n_segs-1)
actuator_str = ""
for i in range(n_segs):
actuator_str += actuator_str_template.format(*([i]*8))
rt = t.render(body=body_str, actuators=actuator_str)
with open(asset_path, "w") as f:
f.write(rt)
pass
def step(self, a):
xposbefore = self.get_body_com("torso_0")[0]
self.do_simulation(a, self.frame_skip)
xposafter = self.get_body_com("torso_0")[0]
forward_reward = (xposafter - xposbefore)/self.dt
ctrl_cost = .5 * np.square(a).sum()
contact_cost = 0.5 * 1e-3 * np.sum(
np.square(np.clip(self.sim.data.cfrc_ext, -1, 1)))
survive_reward = 1.0
reward = forward_reward - ctrl_cost - contact_cost + survive_reward
state = self.state_vector()
notdone = np.isfinite(state).all() \
and state[2] >= 0.2 and state[2] <= 1.0
done = not notdone
ob = self._get_obs()
return ob, reward, done, dict(
reward_forward=forward_reward,
reward_ctrl=-ctrl_cost,
reward_contact=-contact_cost,
reward_survive=survive_reward)
def _get_obs(self):
return np.concatenate([
self.sim.data.qpos.flat[2:],
self.sim.data.qvel.flat,
np.clip(self.sim.data.cfrc_ext, -1, 1).flat,
])
def reset_model(self):
qpos = self.init_qpos + self.np_random.uniform(size=self.model.nq, low=-.1, high=.1)
qvel = self.init_qvel + self.np_random.randn(self.model.nv) * .1
self.set_state(qpos, qvel)
return self._get_obs()
def viewer_setup(self):
self.viewer.cam.distance = self.model.stat.extent * 0.5
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/manyagent_swimmer.py
================================================
import numpy as np
from gym import utils
from gym.envs.mujoco import mujoco_env
import os
from jinja2 import Template
class ManyAgentSwimmerEnv(mujoco_env.MujocoEnv, utils.EzPickle):
def __init__(self, **kwargs):
agent_conf = kwargs.get("agent_conf")
n_agents = int(agent_conf.split("x")[0])
n_segs_per_agents = int(agent_conf.split("x")[1])
n_segs = n_agents * n_segs_per_agents
# Check whether asset file exists already, otherwise create it
asset_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'assets',
'manyagent_swimmer_{}_agents_each_{}_segments.auto.xml'.format(n_agents,
n_segs_per_agents))
# if not os.path.exists(asset_path):
print("Auto-Generating Manyagent Swimmer asset with {} segments at {}.".format(n_segs, asset_path))
self._generate_asset(n_segs=n_segs, asset_path=asset_path)
#asset_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'assets',
# 'manyagent_swimmer.xml')
mujoco_env.MujocoEnv.__init__(self, asset_path, 4)
utils.EzPickle.__init__(self)
def _generate_asset(self, n_segs, asset_path):
template_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'assets',
'manyagent_swimmer.xml.template')
with open(template_path, "r") as f:
t = Template(f.read())
body_str_template = """
"""
body_end_str_template = """
"""
body_close_str_template ="\n"
actuator_str_template = """\t \n"""
body_str = ""
for i in range(1,n_segs-1):
body_str += body_str_template.format(i, (-1)**(i+1), i)
body_str += body_end_str_template.format(n_segs-1)
body_str += body_close_str_template*(n_segs-2)
actuator_str = ""
for i in range(n_segs):
actuator_str += actuator_str_template.format(i)
rt = t.render(body=body_str, actuators=actuator_str)
with open(asset_path, "w") as f:
f.write(rt)
pass
def step(self, a):
ctrl_cost_coeff = 0.0001
xposbefore = self.sim.data.qpos[0]
self.do_simulation(a, self.frame_skip)
xposafter = self.sim.data.qpos[0]
reward_fwd = (xposafter - xposbefore) / self.dt
reward_ctrl = - ctrl_cost_coeff * np.square(a).sum()
reward = reward_fwd + reward_ctrl
ob = self._get_obs()
return ob, reward, False, dict(reward_fwd=reward_fwd, reward_ctrl=reward_ctrl)
def _get_obs(self):
qpos = self.sim.data.qpos
qvel = self.sim.data.qvel
return np.concatenate([qpos.flat[2:], qvel.flat])
def reset_model(self):
self.set_state(
self.init_qpos + self.np_random.uniform(low=-.1, high=.1, size=self.model.nq),
self.init_qvel + self.np_random.uniform(low=-.1, high=.1, size=self.model.nv)
)
return self._get_obs()
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/mujoco_multi.py
================================================
from functools import partial
import gym
from gym.spaces import Box
from gym.wrappers import TimeLimit
import numpy as np
from .multiagentenv import MultiAgentEnv
from .manyagent_swimmer import ManyAgentSwimmerEnv
from .obsk import get_joints_at_kdist, get_parts_and_edges, build_obs
def env_fn(env, **kwargs) -> MultiAgentEnv: # TODO: this may be a more complex function
# env_args = kwargs.get("env_args", {})
return env(**kwargs)
env_REGISTRY = {}
env_REGISTRY["manyagent_swimmer"] = partial(env_fn, env=ManyAgentSwimmerEnv)
# using code from https://github.com/ikostrikov/pytorch-ddpg-naf
class NormalizedActions(gym.ActionWrapper):
def _action(self, action):
action = (action + 1) / 2
action *= (self.action_space.high - self.action_space.low)
action += self.action_space.low
return action
def action(self, action_):
return self._action(action_)
def _reverse_action(self, action):
action -= self.action_space.low
action /= (self.action_space.high - self.action_space.low)
action = action * 2 - 1
return action
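# _action rescales a policy output in [-1, 1] to the env's [low, high] range;
# e.g. with low = -2 and high = 2, an input of 0.5 maps to ((0.5 + 1) / 2) * 4 + (-2) = 1.0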
class MujocoMulti(MultiAgentEnv):
def __init__(self, batch_size=None, **kwargs):
super().__init__(batch_size, **kwargs)
self.scenario = kwargs["env_args"]["scenario"] # e.g. Ant-v2
self.agent_conf = kwargs["env_args"]["agent_conf"] # e.g. '2x3'
self.agent_partitions, self.mujoco_edges, self.mujoco_globals = get_parts_and_edges(self.scenario,
self.agent_conf)
self.n_agents = len(self.agent_partitions)
self.n_actions = max([len(l) for l in self.agent_partitions])
self.obs_add_global_pos = kwargs["env_args"].get("obs_add_global_pos", False)
self.agent_obsk = kwargs["env_args"].get("agent_obsk",
None) # if None, fully observable else k>=0 implies observe nearest k agents or joints
self.agent_obsk_agents = kwargs["env_args"].get("agent_obsk_agents",
False) # observe full k nearest agents (True) or just single joints (False)
if self.agent_obsk is not None:
self.k_categories_label = kwargs["env_args"].get("k_categories")
if self.k_categories_label is None:
if self.scenario in ["Ant-v2", "manyagent_ant"]:
self.k_categories_label = "qpos,qvel,cfrc_ext|qpos"
elif self.scenario in ["Humanoid-v2", "HumanoidStandup-v2"]:
self.k_categories_label = "qpos,qvel,cfrc_ext,cvel,cinert,qfrc_actuator|qpos"
elif self.scenario in ["Reacher-v2"]:
self.k_categories_label = "qpos,qvel,fingertip_dist|qpos"
elif self.scenario in ["coupled_half_cheetah"]:
self.k_categories_label = "qpos,qvel,ten_J,ten_length,ten_velocity|"
else:
self.k_categories_label = "qpos,qvel|qpos"
k_split = self.k_categories_label.split("|")
self.k_categories = [k_split[k if k < len(k_split) else -1].split(",") for k in range(self.agent_obsk + 1)]
self.global_categories_label = kwargs["env_args"].get("global_categories")
self.global_categories = self.global_categories_label.split(
",") if self.global_categories_label is not None else []
if self.agent_obsk is not None:
self.k_dicts = [get_joints_at_kdist(agent_id,
self.agent_partitions,
self.mujoco_edges,
k=self.agent_obsk,
kagents=False, ) for agent_id in range(self.n_agents)]
# load scenario from script
self.episode_limit = self.args.episode_limit
self.env_version = kwargs["env_args"].get("env_version", 2)
if self.env_version == 2:
try:
self.wrapped_env = NormalizedActions(gym.make(self.scenario))
except gym.error.Error:
self.wrapped_env = NormalizedActions(
TimeLimit(partial(env_REGISTRY[self.scenario], **kwargs["env_args"])(),
max_episode_steps=self.episode_limit))
else:
assert False, "not implemented!"
self.timelimit_env = self.wrapped_env.env
self.timelimit_env._max_episode_steps = self.episode_limit
self.env = self.timelimit_env.env
self.timelimit_env.reset()
self.obs_size = self.get_obs_size()
self.share_obs_size = self.get_state_size()
# COMPATIBILITY
self.n = self.n_agents
# self.observation_space = [Box(low=np.array([-10]*self.n_agents), high=np.array([10]*self.n_agents)) for _ in range(self.n_agents)]
self.observation_space = [Box(low=-10, high=10, shape=(self.obs_size,)) for _ in range(self.n_agents)]
self.share_observation_space = [Box(low=-10, high=10, shape=(self.share_obs_size,)) for _ in
range(self.n_agents)]
acdims = [len(ap) for ap in self.agent_partitions]
self.action_space = tuple([Box(self.env.action_space.low[sum(acdims[:a]):sum(acdims[:a + 1])],
self.env.action_space.high[sum(acdims[:a]):sum(acdims[:a + 1])]) for a in
range(self.n_agents)])
pass
def step(self, actions):
# need to remove dummy actions that arise due to unequal action vector sizes across agents
flat_actions = np.concatenate([actions[i][:self.action_space[i].low.shape[0]] for i in range(self.n_agents)])
obs_n, reward_n, done_n, info_n = self.wrapped_env.step(flat_actions)
self.steps += 1
info = {}
info.update(info_n)
# if done_n:
# if self.steps < self.episode_limit:
# info["episode_limit"] = False # the next state will be masked out
# else:
# info["episode_limit"] = True # the next state will not be masked out
if done_n:
if self.steps < self.episode_limit:
info["bad_transition"] = False # the next state will be masked out
else:
info["bad_transition"] = True # the next state will not be masked out
# return reward_n, done_n, info
rewards = [[reward_n]] * self.n_agents
dones = [done_n] * self.n_agents
infos = [info for _ in range(self.n_agents)]
return self.get_obs(), self.get_state(), rewards, dones, infos, self.get_avail_actions()
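    # Usage sketch (illustrative): a runner consumes the tuple returned above as
    #   obs, share_obs, rewards, dones, infos, avail = env.step(actions)
    # where every element has length n_agents and the shared team reward is
    # replicated per agent, i.e. rewards[i] == [reward_n].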
def get_obs(self):
""" Returns all agent observat3ions in a list """
state = self.env._get_obs()
obs_n = []
for a in range(self.n_agents):
agent_id_feats = np.zeros(self.n_agents, dtype=np.float32)
agent_id_feats[a] = 1.0
# obs_n.append(self.get_obs_agent(a))
# obs_n.append(np.concatenate([state, self.get_obs_agent(a), agent_id_feats]))
# obs_n.append(np.concatenate([self.get_obs_agent(a), agent_id_feats]))
obs_i = np.concatenate([state, agent_id_feats])
obs_i = (obs_i - np.mean(obs_i)) / np.std(obs_i)
obs_n.append(obs_i)
return obs_n
def get_obs_agent(self, agent_id):
if self.agent_obsk is None:
return self.env._get_obs()
else:
# return build_obs(self.env,
# self.k_dicts[agent_id],
# self.k_categories,
# self.mujoco_globals,
# self.global_categories,
# vec_len=getattr(self, "obs_size", None))
return build_obs(self.env,
self.k_dicts[agent_id],
self.k_categories,
self.mujoco_globals,
self.global_categories)
def get_obs_size(self):
""" Returns the shape of the observation """
if self.agent_obsk is None:
return self.get_obs_agent(0).size
else:
return len(self.get_obs()[0])
# return max([len(self.get_obs_agent(agent_id)) for agent_id in range(self.n_agents)])
def get_state(self, team=None):
# TODO: May want global states for different teams (so cannot see what the other team is communicating e.g.)
state = self.env._get_obs()
share_obs = []
for a in range(self.n_agents):
agent_id_feats = np.zeros(self.n_agents, dtype=np.float32)
agent_id_feats[a] = 1.0
# share_obs.append(np.concatenate([state, self.get_obs_agent(a), agent_id_feats]))
state_i = np.concatenate([state, agent_id_feats])
state_i = (state_i - np.mean(state_i)) / np.std(state_i)
share_obs.append(state_i)
return share_obs
def get_state_size(self):
""" Returns the shape of the state"""
return len(self.get_state()[0])
def get_avail_actions(self): # all actions are always available
return np.ones(shape=(self.n_agents, self.n_actions,))
def get_avail_agent_actions(self, agent_id):
""" Returns the available actions for agent_id """
return np.ones(shape=(self.n_actions,))
def get_total_actions(self):
""" Returns the total number of actions an agent could ever take """
        return self.n_actions  # CAREFUL! - for continuous dims, this is the action-space dimension rather than the number of discrete actions
# return self.env.action_space.shape[0]
def get_stats(self):
return {}
# TODO: Temp hack
def get_agg_stats(self, stats):
return {}
def reset(self, **kwargs):
""" Returns initial observations and states"""
self.steps = 0
self.timelimit_env.reset()
return self.get_obs(), self.get_state(), self.get_avail_actions()
def render(self, **kwargs):
self.env.render(**kwargs)
def close(self):
pass
def seed(self, args):
pass
def get_env_info(self):
env_info = {"state_shape": self.get_state_size(),
"obs_shape": self.get_obs_size(),
"n_actions": self.get_total_actions(),
"n_agents": self.n_agents,
"episode_limit": self.episode_limit,
"action_spaces": self.action_space,
"actions_dtype": np.float32,
"normalise_actions": False
}
return env_info
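# Minimal construction sketch (illustrative; only the env_args fields read in
# __init__ are shown, and the concrete values are assumptions):
#   env_args = {"scenario": "HalfCheetah-v2", "agent_conf": "2x3",
#               "agent_obsk": 0, "episode_limit": 1000}
#   env = MujocoMulti(env_args=env_args)
#   obs_n, state_n, avail_n = env.reset()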
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/multiagentenv.py
================================================
from collections import namedtuple
import numpy as np
def convert(dictionary):
return namedtuple('GenericDict', dictionary.keys())(**dictionary)
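# Example (illustrative): convert({"seed": 1, "episode_limit": 100}) yields a
# namedtuple whose fields are read as args.seed / args.episode_limit, which is
# how the env_args dict is consumed by the subclasses in this package.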
class MultiAgentEnv(object):
def __init__(self, batch_size=None, **kwargs):
# Unpack arguments from sacred
args = kwargs["env_args"]
if isinstance(args, dict):
args = convert(args)
self.args = args
if getattr(args, "seed", None) is not None:
self.seed = args.seed
self.rs = np.random.RandomState(self.seed) # initialise numpy random state
def step(self, actions):
""" Returns reward, terminated, info """
raise NotImplementedError
def get_obs(self):
""" Returns all agent observations in a list """
raise NotImplementedError
def get_obs_agent(self, agent_id):
""" Returns observation for agent_id """
raise NotImplementedError
def get_obs_size(self):
""" Returns the shape of the observation """
raise NotImplementedError
def get_state(self):
raise NotImplementedError
def get_state_size(self):
""" Returns the shape of the state"""
raise NotImplementedError
def get_avail_actions(self):
raise NotImplementedError
def get_avail_agent_actions(self, agent_id):
""" Returns the available actions for agent_id """
raise NotImplementedError
def get_total_actions(self):
""" Returns the total number of actions an agent could ever take """
# TODO: This is only suitable for a discrete 1 dimensional action space for each agent
raise NotImplementedError
def get_stats(self):
raise NotImplementedError
# TODO: Temp hack
def get_agg_stats(self, stats):
return {}
def reset(self):
""" Returns initial observations and states"""
raise NotImplementedError
def render(self):
raise NotImplementedError
def close(self):
raise NotImplementedError
def seed(self, seed):
raise NotImplementedError
def get_env_info(self):
env_info = {"state_shape": self.get_state_size(),
"obs_shape": self.get_obs_size(),
"n_actions": self.get_total_actions(),
"n_agents": self.n_agents,
"episode_limit": self.episode_limit}
return env_info
================================================
FILE: envs/ma_mujoco/multiagent_mujoco/obsk.py
================================================
import itertools
import numpy as np
from copy import deepcopy
class Node():
def __init__(self, label, qpos_ids, qvel_ids, act_ids, body_fn=None, bodies=None, extra_obs=None, tendons=None):
self.label = label
self.qpos_ids = qpos_ids
self.qvel_ids = qvel_ids
self.act_ids = act_ids
self.bodies = bodies
self.extra_obs = {} if extra_obs is None else extra_obs
self.body_fn = body_fn
self.tendons = tendons
pass
def __str__(self):
return self.label
def __repr__(self):
return self.label
class HyperEdge():
def __init__(self, *edges):
self.edges = set(edges)
def __contains__(self, item):
return item in self.edges
def __str__(self):
return "HyperEdge({})".format(self.edges)
def __repr__(self):
return "HyperEdge({})".format(self.edges)
def get_joints_at_kdist(agent_id, agent_partitions, hyperedges, k=0, kagents=False,):
""" Identify all joints at distance <= k from agent agent_id
:param agent_id: id of agent to be considered
:param agent_partitions: list of joint tuples in order of agentids
    :param hyperedges: list of HyperEdge objects connecting the joints
:param k: kth degree
:param kagents: True (observe all joints of an agent if a single one is) or False (individual joint granularity)
:return:
dict with k as key, and list of joints at that distance
"""
assert not kagents, "kagents not implemented!"
agent_joints = agent_partitions[agent_id]
def _adjacent(lst, kagents=False):
# return all sets adjacent to any element in lst
ret = set([])
for l in lst:
ret = ret.union(set(itertools.chain(*[e.edges.difference({l}) for e in hyperedges if l in e])))
return ret
seen = set([])
new = set([])
k_dict = {}
for _k in range(k+1):
if not _k:
new = set(agent_joints)
else:
            # print(hyperedges)  # leftover debug output, kept commented out
new = _adjacent(new) - seen
seen = seen.union(new)
k_dict[_k] = sorted(list(new), key=lambda x:x.label)
return k_dict
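# Worked example (illustrative, using the HalfCheetah-v2 "2x3" graph defined in
# get_parts_and_edges below): for agent 0 with joints (bfoot, bshin, bthigh)
# and k=1, the returned dict is
#   {0: [bfoot, bshin, bthigh], 1: [fthigh]}
# since fthigh is the only joint one hyperedge away that the agent does not own.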
def build_obs(env, k_dict, k_categories, global_dict, global_categories, vec_len=None):
"""Given a k_dict from get_joints_at_kdist, extract observation vector.
    :param env: wrapped mujoco environment whose sim data is read
    :param k_dict: dict from get_joints_at_kdist mapping distance k to observable joints
    :param k_categories: list of observation categories to extract at each distance k
    :param global_dict: dict of global joints/bodies observable by every agent
    :param global_categories: observation categories for the global entries
:param vec_len: if None no padding, else zero-pad to vec_len
:return:
observation vector
"""
# TODO: This needs to be fixed, it was designed for half-cheetah only!
#if add_global_pos:
# obs_qpos_lst.append(global_qpos)
# obs_qvel_lst.append(global_qvel)
body_set_dict = {}
obs_lst = []
# Add parts attributes
for k in sorted(list(k_dict.keys())):
cats = k_categories[k]
for _t in k_dict[k]:
for c in cats:
if c in _t.extra_obs:
items = _t.extra_obs[c](env).tolist()
obs_lst.extend(items if isinstance(items, list) else [items])
else:
if c in ["qvel","qpos"]: # this is a "joint position/velocity" item
items = getattr(env.sim.data, c)[getattr(_t, "{}_ids".format(c))]
obs_lst.extend(items if isinstance(items, list) else [items])
elif c in ["qfrc_actuator"]: # this is a "vel position" item
items = getattr(env.sim.data, c)[getattr(_t, "{}_ids".format("qvel"))]
obs_lst.extend(items if isinstance(items, list) else [items])
elif c in ["cvel", "cinert", "cfrc_ext"]: # this is a "body position" item
if _t.bodies is not None:
for b in _t.bodies:
if c not in body_set_dict:
body_set_dict[c] = set()
if b not in body_set_dict[c]:
items = getattr(env.sim.data, c)[b].tolist()
items = getattr(_t, "body_fn", lambda _id,x:x)(b, items)
obs_lst.extend(items if isinstance(items, list) else [items])
body_set_dict[c].add(b)
# Add global attributes
body_set_dict = {}
for c in global_categories:
if c in ["qvel", "qpos"]: # this is a "joint position" item
for j in global_dict.get("joints", []):
items = getattr(env.sim.data, c)[getattr(j, "{}_ids".format(c))]
obs_lst.extend(items if isinstance(items, list) else [items])
else:
for b in global_dict.get("bodies", []):
if c not in body_set_dict:
body_set_dict[c] = set()
if b not in body_set_dict[c]:
obs_lst.extend(getattr(env.sim.data, c)[b].tolist())
body_set_dict[c].add(b)
if vec_len is not None:
pad = np.array((vec_len - len(obs_lst))*[0])
if len(pad):
return np.concatenate([np.array(obs_lst), pad])
return np.array(obs_lst)
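# Note on k_categories (illustrative): with the default label "qpos,qvel|qpos"
# split on "|", k_categories becomes [["qpos", "qvel"], ["qpos"]], so an agent
# observes positions and velocities of its own joints (k=0) but only positions
# of joints one hop away (k=1); distances beyond the last entry reuse it.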
def build_actions(agent_partitions, k_dict):
# Composes agent actions output from networks
# into coherent joint action vector to be sent to the env.
pass
def get_parts_and_edges(label, partitioning):
if label in ["half_cheetah", "HalfCheetah-v2"]:
# define Mujoco graph
bthigh = Node("bthigh", -6, -6, 0)
bshin = Node("bshin", -5, -5, 1)
bfoot = Node("bfoot", -4, -4, 2)
fthigh = Node("fthigh", -3, -3, 3)
fshin = Node("fshin", -2, -2, 4)
ffoot = Node("ffoot", -1, -1, 5)
edges = [HyperEdge(bfoot, bshin),
HyperEdge(bshin, bthigh),
HyperEdge(bthigh, fthigh),
HyperEdge(fthigh, fshin),
HyperEdge(fshin, ffoot)]
root_x = Node("root_x", 0, 0, -1,
extra_obs={"qpos": lambda env: np.array([])})
root_z = Node("root_z", 1, 1, -1)
root_y = Node("root_y", 2, 2, -1)
globals = {"joints":[root_x, root_y, root_z]}
if partitioning == "2x3":
parts = [(bfoot, bshin, bthigh),
(ffoot, fshin, fthigh)]
elif partitioning == "6x1":
parts = [(bfoot,), (bshin,), (bthigh,), (ffoot,), (fshin,), (fthigh,)]
elif partitioning == "3x2":
parts = [(bfoot, bshin,), (bthigh, ffoot,), (fshin, fthigh,)]
else:
raise Exception("UNKNOWN partitioning config: {}".format(partitioning))
return parts, edges, globals
elif label in ["Ant-v2"]:
# define Mujoco graph
torso = 1
front_left_leg = 2
aux_1 = 3
ankle_1 = 4
front_right_leg = 5
aux_2 = 6
ankle_2 = 7
back_leg = 8
aux_3 = 9
ankle_3 = 10
right_back_leg = 11
aux_4 = 12
ankle_4 = 13
hip1 = Node("hip1", -8, -8, 2, bodies=[torso, front_left_leg], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist()) #
ankle1 = Node("ankle1", -7, -7, 3, bodies=[front_left_leg, aux_1, ankle_1], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())#,
hip2 = Node("hip2", -6, -6, 4, bodies=[torso, front_right_leg], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())#,
ankle2 = Node("ankle2", -5, -5, 5, bodies=[front_right_leg, aux_2, ankle_2], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())#,
hip3 = Node("hip3", -4, -4, 6, bodies=[torso, back_leg], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())#,
ankle3 = Node("ankle3", -3, -3, 7, bodies=[back_leg, aux_3, ankle_3], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())#,
hip4 = Node("hip4", -2, -2, 0, bodies=[torso, right_back_leg], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())#,
ankle4 = Node("ankle4", -1, -1, 1, bodies=[right_back_leg, aux_4, ankle_4], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())#,
edges = [HyperEdge(ankle4, hip4),
HyperEdge(ankle1, hip1),
HyperEdge(ankle2, hip2),
HyperEdge(ankle3, hip3),
HyperEdge(hip4, hip1, hip2, hip3),
]
free_joint = Node("free", 0, 0, -1, extra_obs={"qpos": lambda env: env.sim.data.qpos[:7],
"qvel": lambda env: env.sim.data.qvel[:6],
"cfrc_ext": lambda env: np.clip(env.sim.data.cfrc_ext[0:1], -1, 1)})
globals = {"joints": [free_joint]}
if partitioning == "2x4": # neighbouring legs together
parts = [(hip1, ankle1, hip2, ankle2),
(hip3, ankle3, hip4, ankle4)]
elif partitioning == "2x4d": # diagonal legs together
parts = [(hip1, ankle1, hip3, ankle3),
(hip2, ankle2, hip4, ankle4)]
elif partitioning == "4x2":
parts = [(hip1, ankle1),
(hip2, ankle2),
(hip3, ankle3),
(hip4, ankle4)]
elif partitioning == "8x1":
parts = [(hip1,), (ankle1,),
(hip2,), (ankle2,),
(hip3,), (ankle3,),
(hip4,), (ankle4,)]
else:
raise Exception("UNKNOWN partitioning config: {}".format(partitioning))
return parts, edges, globals
elif label in ["Hopper-v2"]:
# define Mujoco-Graph
thigh_joint = Node("thigh_joint", -3, -3, 0,
extra_obs={"qvel": lambda env: np.clip(np.array([env.sim.data.qvel[-3]]), -10, 10)})
leg_joint = Node("leg_joint", -2, -2, 1,
extra_obs={"qvel": lambda env: np.clip(np.array([env.sim.data.qvel[-2]]), -10, 10)})
foot_joint = Node("foot_joint", -1, -1, 2,
extra_obs={"qvel": lambda env: np.clip(np.array([env.sim.data.qvel[-1]]), -10, 10)})
edges = [HyperEdge(foot_joint, leg_joint),
HyperEdge(leg_joint, thigh_joint)]
root_x = Node("root_x", 0, 0, -1, extra_obs={"qpos": lambda env: np.array([]),
"qvel": lambda env: np.clip(np.array([env.sim.data.qvel[1]]), -10, 10)})
root_z = Node("root_z", 1, 1, -1, extra_obs={"qvel": lambda env: np.clip(np.array([env.sim.data.qvel[1]]), -10, 10)})
root_y = Node("root_y", 2, 2, -1, extra_obs={"qvel": lambda env: np.clip(np.array([env.sim.data.qvel[2]]), -10, 10)})
globals = {"joints":[root_x, root_y, root_z]}
if partitioning == "3x1":
parts = [(thigh_joint,),
(leg_joint,),
(foot_joint,)]
else:
raise Exception("UNKNOWN partitioning config: {}".format(partitioning))
return parts, edges, globals
elif label in ["Humanoid-v2", "HumanoidStandup-v2"]:
# define Mujoco-Graph
abdomen_y = Node("abdomen_y", -16, -16, 0) # act ordering bug in env -- double check!
abdomen_z = Node("abdomen_z", -17, -17, 1)
abdomen_x = Node("abdomen_x", -15, -15, 2)
right_hip_x = Node("right_hip_x", -14, -14, 3)
right_hip_z = Node("right_hip_z", -13, -13, 4)
right_hip_y = Node("right_hip_y", -12, -12, 5)
right_knee = Node("right_knee", -11, -11, 6)
left_hip_x = Node("left_hip_x", -10, -10, 7)
left_hip_z = Node("left_hip_z", -9, -9, 8)
left_hip_y = Node("left_hip_y", -8, -8, 9)
left_knee = Node("left_knee", -7, -7, 10)
right_shoulder1 = Node("right_shoulder1", -6, -6, 11)
right_shoulder2 = Node("right_shoulder2", -5, -5, 12)
right_elbow = Node("right_elbow", -4, -4, 13)
left_shoulder1 = Node("left_shoulder1", -3, -3, 14)
left_shoulder2 = Node("left_shoulder2", -2, -2, 15)
left_elbow = Node("left_elbow", -1, -1, 16)
edges = [HyperEdge(abdomen_x, abdomen_y, abdomen_z),
HyperEdge(right_hip_x, right_hip_y, right_hip_z),
HyperEdge(left_hip_x, left_hip_y, left_hip_z),
HyperEdge(left_elbow, left_shoulder1, left_shoulder2),
HyperEdge(right_elbow, right_shoulder1, right_shoulder2),
HyperEdge(left_knee, left_hip_x, left_hip_y, left_hip_z),
HyperEdge(right_knee, right_hip_x, right_hip_y, right_hip_z),
HyperEdge(left_shoulder1, left_shoulder2, abdomen_x, abdomen_y, abdomen_z),
HyperEdge(right_shoulder1, right_shoulder2, abdomen_x, abdomen_y, abdomen_z),
HyperEdge(abdomen_x, abdomen_y, abdomen_z, left_hip_x, left_hip_y, left_hip_z),
HyperEdge(abdomen_x, abdomen_y, abdomen_z, right_hip_x, right_hip_y, right_hip_z),
]
globals = {}
if partitioning == "9|8": # 17 in total, so one action is a dummy (to be handled by pymarl)
# isolate upper and lower body
parts = [(left_shoulder1, left_shoulder2, abdomen_x, abdomen_y, abdomen_z,
right_shoulder1, right_shoulder2,
right_elbow, left_elbow),
(left_hip_x, left_hip_y, left_hip_z,
right_hip_x, right_hip_y, right_hip_z,
right_knee, left_knee)]
# TODO: There could be tons of decompositions here
elif partitioning == "17x1": # 17 in total, so one action is a dummy (to be handled by pymarl)
# isolate upper and lower body
parts = [(left_shoulder1,), (left_shoulder2,), (abdomen_x,), (abdomen_y,), (abdomen_z,),
(right_shoulder1,), (right_shoulder2,), (right_elbow,), (left_elbow,),
(left_hip_x,), (left_hip_y,), (left_hip_z,), (right_hip_x,), (right_hip_y,), (right_hip_z,),
(right_knee,), (left_knee,)]
else:
raise Exception("UNKNOWN partitioning config: {}".format(partitioning))
return parts, edges, globals
elif label in ["Reacher-v2"]:
# define Mujoco-Graph
body0 = 1
body1 = 2
fingertip = 3
joint0 = Node("joint0", -4, -4, 0,
bodies=[body0, body1],
extra_obs={"qpos":(lambda env:np.array([np.sin(env.sim.data.qpos[-4]),
np.cos(env.sim.data.qpos[-4])]))})
joint1 = Node("joint1", -3, -3, 1,
bodies=[body1, fingertip],
extra_obs={"fingertip_dist":(lambda env:env.get_body_com("fingertip") - env.get_body_com("target")),
"qpos":(lambda env:np.array([np.sin(env.sim.data.qpos[-3]),
np.cos(env.sim.data.qpos[-3])]))})
edges = [HyperEdge(joint0, joint1)]
worldbody = 0
target = 4
target_x = Node("target_x", -2, -2, -1, extra_obs={"qvel":(lambda env:np.array([]))})
target_y = Node("target_y", -1, -1, -1, extra_obs={"qvel":(lambda env:np.array([]))})
globals = {"bodies":[worldbody, target],
"joints":[target_x, target_y]}
if partitioning == "2x1":
# isolate upper and lower arms
parts = [(joint0,), (joint1,)]
# TODO: There could be tons of decompositions here
else:
raise Exception("UNKNOWN partitioning config: {}".format(partitioning))
return parts, edges, globals
elif label in ["Swimmer-v2"]:
# define Mujoco-Graph
joint0 = Node("rot2", -2, -2, 0) # TODO: double-check ids
joint1 = Node("rot3", -1, -1, 1)
edges = [HyperEdge(joint0, joint1)]
globals = {}
if partitioning == "2x1":
# isolate upper and lower body
parts = [(joint0,), (joint1,)]
# TODO: There could be tons of decompositions here
else:
raise Exception("UNKNOWN partitioning config: {}".format(partitioning))
return parts, edges, globals
elif label in ["Walker2d-v2"]:
# define Mujoco-Graph
thigh_joint = Node("thigh_joint", -6, -6, 0)
leg_joint = Node("leg_joint", -5, -5, 1)
foot_joint = Node("foot_joint", -4, -4, 2)
thigh_left_joint = Node("thigh_left_joint", -3, -3, 3)
leg_left_joint = Node("leg_left_joint", -2, -2, 4)
foot_left_joint = Node("foot_left_joint", -1, -1, 5)
edges = [HyperEdge(foot_joint, leg_joint),
HyperEdge(leg_joint, thigh_joint),
HyperEdge(foot_left_joint, leg_left_joint),
HyperEdge(leg_left_joint, thigh_left_joint),
HyperEdge(thigh_joint, thigh_left_joint)
]
globals = {}
if partitioning == "2x3":
# isolate upper and lower body
parts = [(foot_joint, leg_joint, thigh_joint),
(foot_left_joint, leg_left_joint, thigh_left_joint,)]
# TODO: There could be tons of decompositions here
elif partitioning == "6x1":
# isolate upper and lower body
parts = [(foot_joint,), (leg_joint,), (thigh_joint,),
(foot_left_joint,), (leg_left_joint,), (thigh_left_joint,)]
elif partitioning == "3x2":
# isolate upper and lower body
parts = [(foot_joint, leg_joint,), (thigh_joint, foot_left_joint,),
(leg_left_joint, thigh_left_joint,)]
else:
raise Exception("UNKNOWN partitioning config: {}".format(partitioning))
return parts, edges, globals
elif label in ["coupled_half_cheetah"]:
# define Mujoco graph
tendon = 0
bthigh = Node("bthigh", -6, -6, 0,
tendons=[tendon],
extra_obs = {"ten_J": lambda env: env.sim.data.ten_J[tendon],
"ten_length": lambda env: env.sim.data.ten_length,
"ten_velocity": lambda env: env.sim.data.ten_velocity})
bshin = Node("bshin", -5, -5, 1)
bfoot = Node("bfoot", -4, -4, 2)
fthigh = Node("fthigh", -3, -3, 3)
fshin = Node("fshin", -2, -2, 4)
ffoot = Node("ffoot", -1, -1, 5)
bthigh2 = Node("bthigh2", -6, -6, 0,
tendons=[tendon],
extra_obs={"ten_J": lambda env: env.sim.data.ten_J[tendon],
"ten_length": lambda env: env.sim.data.ten_length,
"ten_velocity": lambda env: env.sim.data.ten_velocity})
bshin2 = Node("bshin2", -5, -5, 1)
bfoot2 = Node("bfoot2", -4, -4, 2)
fthigh2 = Node("fthigh2", -3, -3, 3)
fshin2 = Node("fshin2", -2, -2, 4)
ffoot2 = Node("ffoot2", -1, -1, 5)
edges = [HyperEdge(bfoot, bshin),
HyperEdge(bshin, bthigh),
HyperEdge(bthigh, fthigh),
HyperEdge(fthigh, fshin),
HyperEdge(fshin, ffoot),
HyperEdge(bfoot2, bshin2),
HyperEdge(bshin2, bthigh2),
HyperEdge(bthigh2, fthigh2),
HyperEdge(fthigh2, fshin2),
HyperEdge(fshin2, ffoot2)
]
globals = {}
root_x = Node("root_x", 0, 0, -1,
extra_obs={"qpos": lambda env: np.array([])})
root_z = Node("root_z", 1, 1, -1)
root_y = Node("root_y", 2, 2, -1)
globals = {"joints":[root_x, root_y, root_z]}
if partitioning == "1p1":
parts = [(bfoot, bshin, bthigh, ffoot, fshin, fthigh),
(bfoot2, bshin2, bthigh2, ffoot2, fshin2, fthigh2)
]
else:
raise Exception("UNKNOWN partitioning config: {}".format(partitioning))
return parts, edges, globals
elif label in ["manyagent_swimmer"]:
# Generate asset file
try:
n_agents = int(partitioning.split("x")[0])
n_segs_per_agents = int(partitioning.split("x")[1])
n_segs = n_agents * n_segs_per_agents
except Exception as e:
raise Exception("UNKNOWN partitioning config: {}".format(partitioning))
# Note: Default Swimmer corresponds to n_segs = 3
# define Mujoco-Graph
joints = [Node("rot{:d}".format(i), -n_segs + i, -n_segs + i, i) for i in range(0, n_segs)]
edges = [HyperEdge(joints[i], joints[i+1]) for i in range(n_segs-1)]
globals = {}
parts = [tuple(joints[i * n_segs_per_agents:(i + 1) * n_segs_per_agents]) for i in range(n_agents)]
return parts, edges, globals
elif label in ["manyagent_ant"]: # TODO: FIX!
# Generate asset file
try:
n_agents = int(partitioning.split("x")[0])
n_segs_per_agents = int(partitioning.split("x")[1])
n_segs = n_agents * n_segs_per_agents
except Exception as e:
raise Exception("UNKNOWN partitioning config: {}".format(partitioning))
# # define Mujoco graph
# torso = 1
# front_left_leg = 2
# aux_1 = 3
# ankle_1 = 4
# right_back_leg = 11
# aux_4 = 12
# ankle_4 = 13
#
# off = -4*(n_segs-1)
# hip1 = Node("hip1", -4-off, -4-off, 2, bodies=[torso, front_left_leg], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist()) #
# ankle1 = Node("ankle1", -3-off, -3-off, 3, bodies=[front_left_leg, aux_1, ankle_1], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())#,
# hip4 = Node("hip4", -2-off, -2-off, 0, bodies=[torso, right_back_leg], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())#,
# ankle4 = Node("ankle4", -1-off, -1-off, 1, bodies=[right_back_leg, aux_4, ankle_4], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())#,
#
# edges = [HyperEdge(ankle4, hip4),
# HyperEdge(ankle1, hip1),
# HyperEdge(hip4, hip1),
# ]
edges = []
joints = []
for si in range(n_segs):
torso = 1 + si*7
front_right_leg = 2 + si*7
aux1 = 3 + si*7
ankle1 = 4 + si*7
back_leg = 5 + si*7
aux2 = 6 + si*7
ankle2 = 7 + si*7
off = -4 * (n_segs - 1 - si)
hip1n = Node("hip1_{:d}".format(si), -4-off, -4-off, 2+4*si, bodies=[torso, front_right_leg], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())
ankle1n = Node("ankle1_{:d}".format(si), -3-off, -3-off, 3+4*si, bodies=[front_right_leg, aux1, ankle1], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())
hip2n = Node("hip2_{:d}".format(si), -2-off, -2-off, 0+4*si, bodies=[torso, back_leg], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())
ankle2n = Node("ankle2_{:d}".format(si), -1-off, -1-off, 1+4*si, bodies=[back_leg, aux2, ankle2], body_fn=lambda _id, x:np.clip(x, -1, 1).tolist())
edges += [HyperEdge(ankle1n, hip1n),
HyperEdge(ankle2n, hip2n),
HyperEdge(hip1n, hip2n)]
if si:
edges += [HyperEdge(hip1m, hip2m, hip1n, hip2n)]
hip1m = deepcopy(hip1n)
hip2m = deepcopy(hip2n)
joints.append([hip1n,
ankle1n,
hip2n,
ankle2n])
free_joint = Node("free", 0, 0, -1, extra_obs={"qpos": lambda env: env.sim.data.qpos[:7],
"qvel": lambda env: env.sim.data.qvel[:6],
"cfrc_ext": lambda env: np.clip(env.sim.data.cfrc_ext[0:1], -1, 1)})
globals = {"joints": [free_joint]}
parts = [[x for sublist in joints[i * n_segs_per_agents:(i + 1) * n_segs_per_agents] for x in sublist] for i in range(n_agents)]
return parts, edges, globals
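# Example call (illustrative): get_parts_and_edges("HalfCheetah-v2", "2x3")
# returns two three-joint partitions (back leg and front leg), the five
# HyperEdges along the kinematic chain, and a globals dict exposing the root joints.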
================================================
FILE: envs/starcraft2/StarCraft2_Env.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from .multiagentenv import MultiAgentEnv
from .smac_maps import get_map_params
import atexit
from operator import attrgetter
from copy import deepcopy
import numpy as np
import enum
import math
from absl import logging
from pysc2 import maps
from pysc2 import run_configs
from pysc2.lib import protocol
from s2clientprotocol import common_pb2 as sc_common
from s2clientprotocol import sc2api_pb2 as sc_pb
from s2clientprotocol import raw_pb2 as r_pb
from s2clientprotocol import debug_pb2 as d_pb
import random
from gym.spaces import Discrete
races = {
"R": sc_common.Random,
"P": sc_common.Protoss,
"T": sc_common.Terran,
"Z": sc_common.Zerg,
}
difficulties = {
"1": sc_pb.VeryEasy,
"2": sc_pb.Easy,
"3": sc_pb.Medium,
"4": sc_pb.MediumHard,
"5": sc_pb.Hard,
"6": sc_pb.Harder,
"7": sc_pb.VeryHard,
"8": sc_pb.CheatVision,
"9": sc_pb.CheatMoney,
"A": sc_pb.CheatInsane,
}
actions = {
"move": 16, # target: PointOrUnit
"attack": 23, # target: PointOrUnit
"stop": 4, # target: None
"heal": 386, # Unit
}
class Direction(enum.IntEnum):
NORTH = 0
SOUTH = 1
EAST = 2
WEST = 3
class StarCraft2Env(MultiAgentEnv):
"""The StarCraft II environment for decentralised multi-agent
micromanagement scenarios.
"""
def __init__(
self,
args,
step_mul=8,
move_amount=2,
difficulty="7",
game_version=None,
seed=None,
continuing_episode=False,
obs_all_health=True,
obs_own_health=True,
obs_last_action=True,
obs_pathing_grid=False,
obs_terrain_height=False,
obs_instead_of_state=False,
obs_timestep_number=False,
obs_agent_id=True,
state_pathing_grid=False,
state_terrain_height=False,
state_last_action=True,
state_timestep_number=False,
state_agent_id=True,
reward_sparse=False,
reward_only_positive=True,
reward_death_value=10,
reward_win=200,
reward_defeat=0,
reward_negative_scale=0.5,
reward_scale=True,
reward_scale_rate=20,
replay_dir="",
replay_prefix="",
window_size_x=1920,
window_size_y=1200,
heuristic_ai=False,
heuristic_rest=False,
debug=False,
):
"""
        Create a StarCraft2Env environment.
Parameters
----------
map_name : str, optional
The name of the SC2 map to play (default is "8m"). The full list
can be found by running bin/map_list.
step_mul : int, optional
How many game steps per agent step (default is 8). None
indicates to use the default map step_mul.
move_amount : float, optional
How far away units are ordered to move per step (default is 2).
difficulty : str, optional
The difficulty of built-in computer AI bot (default is "7").
game_version : str, optional
StarCraft II game version (default is None). None indicates the
latest version.
seed : int, optional
            Random seed used during game initialisation, which allows runs to be reproduced.
continuing_episode : bool, optional
Whether to consider episodes continuing or finished after time
limit is reached (default is False).
obs_all_health : bool, optional
Agents receive the health of all units (in the sight range) as part
of observations (default is True).
obs_own_health : bool, optional
Agents receive their own health as a part of observations (default
            is True). This flag is ignored when obs_all_health == True.
obs_last_action : bool, optional
Agents receive the last actions of all units (in the sight range)
            as part of observations (default is True).
obs_pathing_grid : bool, optional
Whether observations include pathing values surrounding the agent
(default is False).
obs_terrain_height : bool, optional
Whether observations include terrain height values surrounding the
agent (default is False).
obs_instead_of_state : bool, optional
Use combination of all agents' observations as the global state
(default is False).
obs_timestep_number : bool, optional
Whether observations include the current timestep of the episode
(default is False).
state_last_action : bool, optional
Include the last actions of all agents as part of the global state
(default is True).
state_timestep_number : bool, optional
            Whether the state includes the current timestep of the episode
(default is False).
reward_sparse : bool, optional
            Receive 1/-1 reward for winning/losing an episode (default is
            False). The rest of the reward parameters are ignored if True.
reward_only_positive : bool, optional
Reward is always positive (default is True).
reward_death_value : float, optional
The amount of reward received for killing an enemy unit (default
is 10). This is also the negative penalty for having an allied unit
killed if reward_only_positive == False.
reward_win : float, optional
The reward for winning in an episode (default is 200).
reward_defeat : float, optional
            The reward for losing in an episode (default is 0). This value
should be nonpositive.
reward_negative_scale : float, optional
Scaling factor for negative rewards (default is 0.5). This
parameter is ignored when reward_only_positive == True.
reward_scale : bool, optional
Whether or not to scale the reward (default is True).
reward_scale_rate : float, optional
Reward scale rate (default is 20). When reward_scale == True, the
reward received by the agents is divided by (max_reward /
reward_scale_rate), where max_reward is the maximum possible
reward per episode without considering the shield regeneration
of Protoss units.
replay_dir : str, optional
The directory to save replays (default is None). If None, the
replay will be saved in Replays directory where StarCraft II is
installed.
replay_prefix : str, optional
The prefix of the replay to be saved (default is None). If None,
the name of the map will be used.
window_size_x : int, optional
            The width of the StarCraft II window (default is 1920).
window_size_y: int, optional
            The height of the StarCraft II window (default is 1200).
heuristic_ai: bool, optional
Whether or not to use a non-learning heuristic AI (default False).
heuristic_rest: bool, optional
At any moment, restrict the actions of the heuristic AI to be
chosen from actions available to RL agents (default is False).
Ignored if heuristic_ai == False.
debug: bool, optional
Log messages about observations, state, actions and rewards for
debugging purposes (default is False).
"""
# Map arguments
self.map_name = args.map_name
self.add_local_obs = args.add_local_obs
self.add_move_state = args.add_move_state
self.add_visible_state = args.add_visible_state
self.add_distance_state = args.add_distance_state
self.add_xy_state = args.add_xy_state
self.add_enemy_action_state = args.add_enemy_action_state
self.add_agent_id = args.add_agent_id
self.use_state_agent = args.use_state_agent
self.use_mustalive = args.use_mustalive
self.add_center_xy = args.add_center_xy
self.use_stacked_frames = args.use_stacked_frames
self.stacked_frames = args.stacked_frames
map_params = get_map_params(self.map_name)
self.n_agents = map_params["n_agents"]
self.n_enemies = map_params["n_enemies"]
self.episode_limit = map_params["limit"]
self._move_amount = move_amount
self._step_mul = step_mul
self.difficulty = difficulty
# Observations and state
self.obs_own_health = obs_own_health
self.obs_all_health = obs_all_health
self.obs_instead_of_state = args.use_obs_instead_of_state
self.obs_last_action = obs_last_action
self.obs_pathing_grid = obs_pathing_grid
self.obs_terrain_height = obs_terrain_height
self.obs_timestep_number = obs_timestep_number
self.obs_agent_id = obs_agent_id
self.state_pathing_grid = state_pathing_grid
self.state_terrain_height = state_terrain_height
self.state_last_action = state_last_action
self.state_timestep_number = state_timestep_number
self.state_agent_id = state_agent_id
if self.obs_all_health:
self.obs_own_health = True
self.n_obs_pathing = 8
self.n_obs_height = 9
# Rewards args
self.reward_sparse = reward_sparse
self.reward_only_positive = reward_only_positive
self.reward_negative_scale = reward_negative_scale
self.reward_death_value = reward_death_value
self.reward_win = reward_win
self.reward_defeat = reward_defeat
self.reward_scale = reward_scale
self.reward_scale_rate = reward_scale_rate
# Other
self.game_version = game_version
self.continuing_episode = continuing_episode
self._seed = seed
self.heuristic_ai = heuristic_ai
self.heuristic_rest = heuristic_rest
self.debug = debug
self.window_size = (window_size_x, window_size_y)
self.replay_dir = replay_dir
self.replay_prefix = replay_prefix
# Actions
self.n_actions_no_attack = 6
self.n_actions_move = 4
self.n_actions = self.n_actions_no_attack + self.n_enemies
# Map info
self._agent_race = map_params["a_race"]
self._bot_race = map_params["b_race"]
self.shield_bits_ally = 1 if self._agent_race == "P" else 0
self.shield_bits_enemy = 1 if self._bot_race == "P" else 0
self.unit_type_bits = map_params["unit_type_bits"]
self.map_type = map_params["map_type"]
self.max_reward = (
self.n_enemies * self.reward_death_value + self.reward_win
)
self.agents = {}
self.enemies = {}
self._episode_count = 0
self._episode_steps = 0
self._total_steps = 0
self._obs = None
self.battles_won = 0
self.battles_game = 0
self.timeouts = 0
self.force_restarts = 0
self.last_stats = None
self.death_tracker_ally = np.zeros(self.n_agents, dtype=np.float32)
self.death_tracker_enemy = np.zeros(self.n_enemies, dtype=np.float32)
self.previous_ally_units = None
self.previous_enemy_units = None
self.last_action = np.zeros((self.n_agents, self.n_actions), dtype=np.float32)
self._min_unit_type = 0
self.marine_id = self.marauder_id = self.medivac_id = 0
self.hydralisk_id = self.zergling_id = self.baneling_id = 0
self.stalker_id = self.colossus_id = self.zealot_id = 0
self.max_distance_x = 0
self.max_distance_y = 0
self.map_x = 0
self.map_y = 0
self.terrain_height = None
self.pathing_grid = None
self._run_config = None
self._sc2_proc = None
self._controller = None
# Try to avoid leaking SC2 processes on shutdown
atexit.register(lambda: self.close())
self.action_space = []
self.observation_space = []
self.share_observation_space = []
for i in range(self.n_agents):
self.action_space.append(Discrete(self.n_actions))
self.observation_space.append(self.get_obs_size())
self.share_observation_space.append(self.get_state_size())
if self.use_stacked_frames:
self.stacked_local_obs = np.zeros((self.n_agents, self.stacked_frames, int(self.get_obs_size()[0]/self.stacked_frames)), dtype=np.float32)
self.stacked_global_state = np.zeros((self.n_agents, self.stacked_frames, int(self.get_state_size()[0]/self.stacked_frames)), dtype=np.float32)
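        # Illustrative arithmetic for the reward scaling set up above (assumed
        # map values): with n_enemies = 8, reward_death_value = 10 and
        # reward_win = 200, max_reward = 8 * 10 + 200 = 280, so with
        # reward_scale_rate = 20 every step reward is divided by 280 / 20 = 14.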
def _launch(self):
"""Launch the StarCraft II game."""
self._run_config = run_configs.get(version=self.game_version)
_map = maps.get(self.map_name)
self._seed += 1
# Setting up the interface
interface_options = sc_pb.InterfaceOptions(raw=True, score=False)
self._sc2_proc = self._run_config.start(window_size=self.window_size, want_rgb=False)
self._controller = self._sc2_proc.controller
# Request to create the game
create = sc_pb.RequestCreateGame(
local_map=sc_pb.LocalMap(
map_path=_map.path,
map_data=self._run_config.map_data(_map.path)),
realtime=False,
random_seed=self._seed)
create.player_setup.add(type=sc_pb.Participant)
create.player_setup.add(type=sc_pb.Computer, race=races[self._bot_race],
difficulty=difficulties[self.difficulty])
self._controller.create_game(create)
join = sc_pb.RequestJoinGame(race=races[self._agent_race],
options=interface_options)
self._controller.join_game(join)
game_info = self._controller.game_info()
map_info = game_info.start_raw
map_play_area_min = map_info.playable_area.p0
map_play_area_max = map_info.playable_area.p1
self.max_distance_x = map_play_area_max.x - map_play_area_min.x
self.max_distance_y = map_play_area_max.y - map_play_area_min.y
self.map_x = map_info.map_size.x
self.map_y = map_info.map_size.y
if map_info.pathing_grid.bits_per_pixel == 1:
vals = np.array(list(map_info.pathing_grid.data)).reshape(
self.map_x, int(self.map_y / 8))
self.pathing_grid = np.transpose(np.array([
[(b >> i) & 1 for b in row for i in range(7, -1, -1)]
for row in vals], dtype=np.bool))
else:
self.pathing_grid = np.invert(np.flip(np.transpose(np.array(
list(map_info.pathing_grid.data), dtype=np.bool).reshape(
self.map_x, self.map_y)), axis=1))
self.terrain_height = np.flip(
np.transpose(np.array(list(map_info.terrain_height.data))
.reshape(self.map_x, self.map_y)), 1) / 255
def reset(self):
"""Reset the environment. Required after each full episode.
Returns initial observations and states.
"""
self._episode_steps = 0
if self._episode_count == 0:
# Launch StarCraft II
self._launch()
else:
self._restart()
# Information kept for counting the reward
self.death_tracker_ally = np.zeros(self.n_agents, dtype=np.float32)
self.death_tracker_enemy = np.zeros(self.n_enemies, dtype=np.float32)
self.previous_ally_units = None
self.previous_enemy_units = None
self.win_counted = False
self.defeat_counted = False
self.last_action = np.zeros((self.n_agents, self.n_actions), dtype=np.float32)
if self.heuristic_ai:
self.heuristic_targets = [None] * self.n_agents
try:
self._obs = self._controller.observe()
self.init_units()
except (protocol.ProtocolError, protocol.ConnectionError):
self.full_restart()
available_actions = []
for i in range(self.n_agents):
available_actions.append(self.get_avail_agent_actions(i))
if self.debug:
logging.debug("Started Episode {}"
.format(self._episode_count).center(60, "*"))
if self.use_state_agent:
global_state = [self.get_state_agent(agent_id) for agent_id in range(self.n_agents)]
else:
global_state = [self.get_state(agent_id) for agent_id in range(self.n_agents)]
local_obs = self.get_obs()
if self.use_stacked_frames:
self.stacked_local_obs = np.roll(self.stacked_local_obs, 1, axis=1)
self.stacked_global_state = np.roll(self.stacked_global_state, 1, axis=1)
self.stacked_local_obs[:, -1, :] = np.array(local_obs).copy()
self.stacked_global_state[:, -1, :] = np.array(global_state).copy()
local_obs = self.stacked_local_obs.reshape(self.n_agents, -1)
global_state = self.stacked_global_state.reshape(self.n_agents, -1)
return local_obs, global_state, available_actions
def _restart(self):
"""Restart the environment by killing all units on the map.
There is a trigger in the SC2Map file, which restarts the
episode when there are no units left.
"""
try:
self._kill_all_units()
self._controller.step(2)
except (protocol.ProtocolError, protocol.ConnectionError):
self.full_restart()
def full_restart(self):
"""Full restart. Closes the SC2 process and launches a new one. """
self._sc2_proc.close()
self._launch()
self.force_restarts += 1
def step(self, actions):
"""A single environment step. Returns reward, terminated, info."""
terminated = False
bad_transition = False
infos = [{} for i in range(self.n_agents)]
dones = np.zeros((self.n_agents), dtype=bool)
actions_int = [int(a) for a in actions]
self.last_action = np.eye(self.n_actions)[np.array(actions_int)]
# Collect individual actions
sc_actions = []
if self.debug:
logging.debug("Actions".center(60, "-"))
for a_id, action in enumerate(actions_int):
if not self.heuristic_ai:
sc_action = self.get_agent_action(a_id, action)
else:
sc_action, action_num = self.get_agent_action_heuristic(
a_id, action)
actions[a_id] = action_num
if sc_action:
sc_actions.append(sc_action)
# Send action request
req_actions = sc_pb.RequestAction(actions=sc_actions)
try:
self._controller.actions(req_actions)
# Make step in SC2, i.e. apply actions
self._controller.step(self._step_mul)
# Observe here so that we know if the episode is over.
self._obs = self._controller.observe()
except (protocol.ProtocolError, protocol.ConnectionError):
self.full_restart()
terminated = True
available_actions = []
for i in range(self.n_agents):
available_actions.append(self.get_avail_agent_actions(i))
infos[i] = {
"battles_won": self.battles_won,
"battles_game": self.battles_game,
"battles_draw": self.timeouts,
"restarts": self.force_restarts,
"bad_transition": bad_transition,
"won": self.win_counted
}
if terminated:
dones[i] = True
else:
if self.death_tracker_ally[i]:
dones[i] = True
else:
dones[i] = False
if self.use_state_agent:
global_state = [self.get_state_agent(agent_id) for agent_id in range(self.n_agents)]
else:
global_state = [self.get_state(agent_id) for agent_id in range(self.n_agents)]
local_obs = self.get_obs()
if self.use_stacked_frames:
self.stacked_local_obs = np.roll(self.stacked_local_obs, 1, axis=1)
self.stacked_global_state = np.roll(self.stacked_global_state, 1, axis=1)
self.stacked_local_obs[:, -1, :] = np.array(local_obs).copy()
self.stacked_global_state[:, -1, :] = np.array(global_state).copy()
local_obs = self.stacked_local_obs.reshape(self.n_agents, -1)
global_state = self.stacked_global_state.reshape(self.n_agents, -1)
return local_obs, global_state, [[0]]*self.n_agents, dones, infos, available_actions
self._total_steps += 1
self._episode_steps += 1
# Update units
game_end_code = self.update_units()
reward = self.reward_battle()
available_actions = []
for i in range(self.n_agents):
available_actions.append(self.get_avail_agent_actions(i))
if game_end_code is not None:
# Battle is over
terminated = True
self.battles_game += 1
if game_end_code == 1 and not self.win_counted:
self.battles_won += 1
self.win_counted = True
if not self.reward_sparse:
reward += self.reward_win
else:
reward = 1
elif game_end_code == -1 and not self.defeat_counted:
self.defeat_counted = True
if not self.reward_sparse:
reward += self.reward_defeat
else:
reward = -1
elif self._episode_steps >= self.episode_limit:
# Episode limit reached
terminated = True
            bad_transition = True
            if self.continuing_episode:
                # record that the time limit, not a terminal state, ended the episode
                for agent_info in infos:
                    agent_info["episode_limit"] = True
self.battles_game += 1
self.timeouts += 1
for i in range(self.n_agents):
            infos[i].update({
                "battles_won": self.battles_won,
                "battles_game": self.battles_game,
                "battles_draw": self.timeouts,
                "restarts": self.force_restarts,
                "bad_transition": bad_transition,
                "won": self.win_counted
            })
if terminated:
dones[i] = True
else:
if self.death_tracker_ally[i]:
dones[i] = True
else:
dones[i] = False
if self.debug:
logging.debug("Reward = {}".format(reward).center(60, '-'))
if terminated:
self._episode_count += 1
if self.reward_scale:
reward /= self.max_reward / self.reward_scale_rate
rewards = [[reward]]*self.n_agents
if self.use_state_agent:
global_state = [self.get_state_agent(agent_id) for agent_id in range(self.n_agents)]
else:
global_state = [self.get_state(agent_id) for agent_id in range(self.n_agents)]
local_obs = self.get_obs()
if self.use_stacked_frames:
self.stacked_local_obs = np.roll(self.stacked_local_obs, 1, axis=1)
self.stacked_global_state = np.roll(self.stacked_global_state, 1, axis=1)
self.stacked_local_obs[:, -1, :] = np.array(local_obs).copy()
self.stacked_global_state[:, -1, :] = np.array(global_state).copy()
local_obs = self.stacked_local_obs.reshape(self.n_agents, -1)
global_state = self.stacked_global_state.reshape(self.n_agents, -1)
return local_obs, global_state, rewards, dones, infos, available_actions
def get_agent_action(self, a_id, action):
"""Construct the action for agent a_id."""
avail_actions = self.get_avail_agent_actions(a_id)
assert avail_actions[action] == 1, \
"Agent {} cannot perform action {}".format(a_id, action)
unit = self.get_unit_by_id(a_id)
tag = unit.tag
x = unit.pos.x
y = unit.pos.y
if action == 0:
# no-op (valid only when dead)
assert unit.health == 0, "No-op only available for dead agents."
if self.debug:
logging.debug("Agent {}: Dead".format(a_id))
return None
elif action == 1:
# stop
cmd = r_pb.ActionRawUnitCommand(
ability_id=actions["stop"],
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {}: Stop".format(a_id))
elif action == 2:
# move north
cmd = r_pb.ActionRawUnitCommand(
ability_id=actions["move"],
target_world_space_pos=sc_common.Point2D(
x=x, y=y + self._move_amount),
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {}: Move North".format(a_id))
elif action == 3:
# move south
cmd = r_pb.ActionRawUnitCommand(
ability_id=actions["move"],
target_world_space_pos=sc_common.Point2D(
x=x, y=y - self._move_amount),
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {}: Move South".format(a_id))
elif action == 4:
# move east
cmd = r_pb.ActionRawUnitCommand(
ability_id=actions["move"],
target_world_space_pos=sc_common.Point2D(
x=x + self._move_amount, y=y),
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {}: Move East".format(a_id))
elif action == 5:
# move west
cmd = r_pb.ActionRawUnitCommand(
ability_id=actions["move"],
target_world_space_pos=sc_common.Point2D(
x=x - self._move_amount, y=y),
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {}: Move West".format(a_id))
else:
# attack/heal units that are in range
target_id = action - self.n_actions_no_attack
if self.map_type == "MMM" and unit.unit_type == self.medivac_id:
target_unit = self.agents[target_id]
action_name = "heal"
else:
target_unit = self.enemies[target_id]
action_name = "attack"
action_id = actions[action_name]
target_tag = target_unit.tag
cmd = r_pb.ActionRawUnitCommand(
ability_id=action_id,
target_unit_tag=target_tag,
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {} {}s unit # {}".format(
a_id, action_name, target_id))
sc_action = sc_pb.Action(action_raw=r_pb.ActionRaw(unit_command=cmd))
return sc_action
def get_agent_action_heuristic(self, a_id, action):
unit = self.get_unit_by_id(a_id)
tag = unit.tag
target = self.heuristic_targets[a_id]
if unit.unit_type == self.medivac_id:
if (target is None or self.agents[target].health == 0 or
self.agents[target].health == self.agents[target].health_max):
min_dist = math.hypot(self.max_distance_x, self.max_distance_y)
min_id = -1
for al_id, al_unit in self.agents.items():
if al_unit.unit_type == self.medivac_id:
continue
if (al_unit.health != 0 and
al_unit.health != al_unit.health_max):
dist = self.distance(unit.pos.x, unit.pos.y,
al_unit.pos.x, al_unit.pos.y)
if dist < min_dist:
min_dist = dist
min_id = al_id
self.heuristic_targets[a_id] = min_id
if min_id == -1:
self.heuristic_targets[a_id] = None
return None, 0
action_id = actions['heal']
target_tag = self.agents[self.heuristic_targets[a_id]].tag
else:
if target is None or self.enemies[target].health == 0:
min_dist = math.hypot(self.max_distance_x, self.max_distance_y)
min_id = -1
for e_id, e_unit in self.enemies.items():
if (unit.unit_type == self.marauder_id and
e_unit.unit_type == self.medivac_id):
continue
if e_unit.health > 0:
dist = self.distance(unit.pos.x, unit.pos.y,
e_unit.pos.x, e_unit.pos.y)
if dist < min_dist:
min_dist = dist
min_id = e_id
self.heuristic_targets[a_id] = min_id
if min_id == -1:
self.heuristic_targets[a_id] = None
return None, 0
action_id = actions['attack']
target_tag = self.enemies[self.heuristic_targets[a_id]].tag
action_num = self.heuristic_targets[a_id] + self.n_actions_no_attack
# Check if the action is available
if (self.heuristic_rest and
self.get_avail_agent_actions(a_id)[action_num] == 0):
# Move towards the target rather than attacking/healing
if unit.unit_type == self.medivac_id:
target_unit = self.agents[self.heuristic_targets[a_id]]
else:
target_unit = self.enemies[self.heuristic_targets[a_id]]
delta_x = target_unit.pos.x - unit.pos.x
delta_y = target_unit.pos.y - unit.pos.y
if abs(delta_x) > abs(delta_y): # east or west
if delta_x > 0: # east
target_pos = sc_common.Point2D(
x=unit.pos.x + self._move_amount, y=unit.pos.y)
action_num = 4
else: # west
target_pos = sc_common.Point2D(
x=unit.pos.x - self._move_amount, y=unit.pos.y)
action_num = 5
else: # north or south
if delta_y > 0: # north
target_pos = sc_common.Point2D(
x=unit.pos.x, y=unit.pos.y + self._move_amount)
action_num = 2
else: # south
target_pos = sc_common.Point2D(
x=unit.pos.x, y=unit.pos.y - self._move_amount)
action_num = 3
cmd = r_pb.ActionRawUnitCommand(
ability_id=actions['move'],
target_world_space_pos=target_pos,
unit_tags=[tag],
queue_command=False)
else:
# Attack/heal the target
cmd = r_pb.ActionRawUnitCommand(
ability_id=action_id,
target_unit_tag=target_tag,
unit_tags=[tag],
queue_command=False)
sc_action = sc_pb.Action(action_raw=r_pb.ActionRaw(unit_command=cmd))
return sc_action, action_num
def reward_battle(self):
"""Reward function when self.reward_spare==False.
Returns accumulative hit/shield point damage dealt to the enemy
+ reward_death_value per enemy unit killed, and, in case
self.reward_only_positive == False, - (damage dealt to ally units
+ reward_death_value per ally unit killed) * self.reward_negative_scale
"""
if self.reward_sparse:
return 0
reward = 0
delta_deaths = 0
delta_ally = 0
delta_enemy = 0
neg_scale = self.reward_negative_scale
# update deaths
for al_id, al_unit in self.agents.items():
if not self.death_tracker_ally[al_id]:
# did not die so far
prev_health = (
self.previous_ally_units[al_id].health
+ self.previous_ally_units[al_id].shield
)
if al_unit.health == 0:
# just died
self.death_tracker_ally[al_id] = 1
if not self.reward_only_positive:
delta_deaths -= self.reward_death_value * neg_scale
delta_ally += prev_health * neg_scale
else:
# still alive
delta_ally += neg_scale * (
prev_health - al_unit.health - al_unit.shield
)
for e_id, e_unit in self.enemies.items():
if not self.death_tracker_enemy[e_id]:
prev_health = (
self.previous_enemy_units[e_id].health
+ self.previous_enemy_units[e_id].shield
)
if e_unit.health == 0:
self.death_tracker_enemy[e_id] = 1
delta_deaths += self.reward_death_value
delta_enemy += prev_health
else:
delta_enemy += prev_health - e_unit.health - e_unit.shield
if self.reward_only_positive:
            reward = abs(delta_enemy + delta_deaths)  # abs() guards against negative totals caused by shield regeneration
else:
reward = delta_enemy + delta_deaths - delta_ally
return reward
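    # Worked example (illustrative numbers): with reward_death_value=10 and
    # reward_only_positive=True, a step that kills one enemy holding 30 remaining
    # health+shield while dealing no other damage gives
    #   delta_enemy = 30, delta_deaths = 10  ->  reward = abs(30 + 10) = 40.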
def get_total_actions(self):
"""Returns the total number of actions an agent could ever take."""
return self.n_actions
@staticmethod
def distance(x1, y1, x2, y2):
"""Distance between two points."""
return math.hypot(x2 - x1, y2 - y1)
def unit_shoot_range(self, agent_id):
"""Returns the shooting range for an agent."""
return 6
def unit_sight_range(self, agent_id):
"""Returns the sight range for an agent."""
return 9
def unit_max_cooldown(self, unit):
"""Returns the maximal cooldown for a unit."""
switcher = {
self.marine_id: 15,
self.marauder_id: 25,
self.medivac_id: 200, # max energy
self.stalker_id: 35,
self.zealot_id: 22,
self.colossus_id: 24,
self.hydralisk_id: 10,
self.zergling_id: 11,
self.baneling_id: 1
}
return switcher.get(unit.unit_type, 15)
def save_replay(self):
"""Save a replay."""
prefix = self.replay_prefix or self.map_name
replay_dir = self.replay_dir or ""
replay_path = self._run_config.save_replay(
self._controller.save_replay(), replay_dir=replay_dir, prefix=prefix)
logging.info("Replay saved at: %s" % replay_path)
def unit_max_shield(self, unit):
"""Returns maximal shield for a given unit."""
if unit.unit_type == 74 or unit.unit_type == self.stalker_id:
return 80 # Protoss's Stalker
if unit.unit_type == 73 or unit.unit_type == self.zealot_id:
            return 50  # Protoss's Zealot
if unit.unit_type == 4 or unit.unit_type == self.colossus_id:
return 150 # Protoss's Colossus
def can_move(self, unit, direction):
"""Whether a unit can move in a given direction."""
m = self._move_amount / 2
if direction == Direction.NORTH:
x, y = int(unit.pos.x), int(unit.pos.y + m)
elif direction == Direction.SOUTH:
x, y = int(unit.pos.x), int(unit.pos.y - m)
elif direction == Direction.EAST:
x, y = int(unit.pos.x + m), int(unit.pos.y)
else:
x, y = int(unit.pos.x - m), int(unit.pos.y)
if self.check_bounds(x, y) and self.pathing_grid[x, y]:
return True
return False
def get_surrounding_points(self, unit, include_self=False):
"""Returns the surrounding points of the unit in 8 directions."""
x = int(unit.pos.x)
y = int(unit.pos.y)
ma = self._move_amount
points = [
(x, y + 2 * ma),
(x, y - 2 * ma),
(x + 2 * ma, y),
(x - 2 * ma, y),
(x + ma, y + ma),
(x - ma, y - ma),
(x + ma, y - ma),
(x - ma, y + ma),
]
if include_self:
points.append((x, y))
return points
def check_bounds(self, x, y):
"""Whether a point is within the map bounds."""
return (0 <= x < self.map_x and 0 <= y < self.map_y)
def get_surrounding_pathing(self, unit):
"""Returns pathing values of the grid surrounding the given unit."""
points = self.get_surrounding_points(unit, include_self=False)
vals = [
self.pathing_grid[x, y] if self.check_bounds(x, y) else 1
for x, y in points
]
return vals
def get_surrounding_height(self, unit):
"""Returns height values of the grid surrounding the given unit."""
points = self.get_surrounding_points(unit, include_self=True)
vals = [
self.terrain_height[x, y] if self.check_bounds(x, y) else 1
for x, y in points
]
return vals
def get_obs_agent(self, agent_id):
"""Returns observation for agent_id. The observation is composed of:
- agent movement features (where it can move to, height information and pathing grid)
- enemy features (available_to_attack, health, relative_x, relative_y, shield, unit_type)
- ally features (visible, distance, relative_x, relative_y, shield, unit_type)
- agent unit features (health, shield, unit_type)
All of this information is flattened and concatenated into a list,
in the aforementioned order. To know the sizes of each of the
features inside the final list of features, take a look at the
functions ``get_obs_move_feats_size()``,
``get_obs_enemy_feats_size()``, ``get_obs_ally_feats_size()`` and
``get_obs_own_feats_size()``.
The size of the observation vector may vary, depending on the
environment configuration and type of units present in the map.
For instance, non-Protoss units will not have shields, movement
features may or may not include terrain height and pathing grid,
unit_type is not included if there is only one type of unit in the
map etc.).
NOTE: Agents should have access only to their local observations
during decentralised execution.
"""
unit = self.get_unit_by_id(agent_id)
move_feats_dim = self.get_obs_move_feats_size()
enemy_feats_dim = self.get_obs_enemy_feats_size()
ally_feats_dim = self.get_obs_ally_feats_size()
own_feats_dim = self.get_obs_own_feats_size()
move_feats = np.zeros(move_feats_dim, dtype=np.float32)
enemy_feats = np.zeros(enemy_feats_dim, dtype=np.float32)
ally_feats = np.zeros(ally_feats_dim, dtype=np.float32)
own_feats = np.zeros(own_feats_dim, dtype=np.float32)
agent_id_feats = np.zeros(self.n_agents, dtype=np.float32)
if unit.health > 0: # otherwise dead, return all zeros
x = unit.pos.x
y = unit.pos.y
sight_range = self.unit_sight_range(agent_id)
# Movement features
avail_actions = self.get_avail_agent_actions(agent_id)
for m in range(self.n_actions_move):
move_feats[m] = avail_actions[m + 2]
ind = self.n_actions_move
if self.obs_pathing_grid:
move_feats[ind: ind + self.n_obs_pathing] = self.get_surrounding_pathing(unit)
ind += self.n_obs_pathing
if self.obs_terrain_height:
move_feats[ind:] = self.get_surrounding_height(unit)
# Enemy features
for e_id, e_unit in self.enemies.items():
e_x = e_unit.pos.x
e_y = e_unit.pos.y
dist = self.distance(x, y, e_x, e_y)
if (dist < sight_range and e_unit.health > 0): # visible and alive
# Sight range > shoot range
enemy_feats[e_id, 0] = avail_actions[self.n_actions_no_attack + e_id] # available
enemy_feats[e_id, 1] = dist / sight_range # distance
enemy_feats[e_id, 2] = (e_x - x) / sight_range # relative X
enemy_feats[e_id, 3] = (e_y - y) / sight_range # relative Y
ind = 4
if self.obs_all_health:
enemy_feats[e_id, ind] = (e_unit.health / e_unit.health_max) # health
ind += 1
if self.shield_bits_enemy > 0:
max_shield = self.unit_max_shield(e_unit)
enemy_feats[e_id, ind] = (e_unit.shield / max_shield) # shield
ind += 1
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(e_unit, False)
enemy_feats[e_id, ind + type_id] = 1 # unit type
# Ally features
al_ids = [al_id for al_id in range(self.n_agents) if al_id != agent_id]
for i, al_id in enumerate(al_ids):
al_unit = self.get_unit_by_id(al_id)
al_x = al_unit.pos.x
al_y = al_unit.pos.y
dist = self.distance(x, y, al_x, al_y)
if (dist < sight_range and al_unit.health > 0): # visible and alive
ally_feats[i, 0] = 1 # visible
ally_feats[i, 1] = dist / sight_range # distance
ally_feats[i, 2] = (al_x - x) / sight_range # relative X
ally_feats[i, 3] = (al_y - y) / sight_range # relative Y
ind = 4
if self.obs_all_health:
ally_feats[i, ind] = (al_unit.health / al_unit.health_max) # health
ind += 1
if self.shield_bits_ally > 0:
max_shield = self.unit_max_shield(al_unit)
ally_feats[i, ind] = (al_unit.shield / max_shield) # shield
ind += 1
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(al_unit, True)
ally_feats[i, ind + type_id] = 1
ind += self.unit_type_bits
if self.obs_last_action:
ally_feats[i, ind:] = self.last_action[al_id]
# Own features
ind = 0
own_feats[0] = 1 # visible
own_feats[1] = 0 # distance
own_feats[2] = 0 # X
own_feats[3] = 0 # Y
ind = 4
if self.obs_own_health:
own_feats[ind] = unit.health / unit.health_max
ind += 1
if self.shield_bits_ally > 0:
max_shield = self.unit_max_shield(unit)
own_feats[ind] = unit.shield / max_shield
ind += 1
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(unit, True)
own_feats[ind + type_id] = 1
ind += self.unit_type_bits
if self.obs_last_action:
own_feats[ind:] = self.last_action[agent_id]
agent_obs = np.concatenate((ally_feats.flatten(),
enemy_feats.flatten(),
move_feats.flatten(),
own_feats.flatten()))
# Agent id features
if self.obs_agent_id:
agent_id_feats[agent_id] = 1.
agent_obs = np.concatenate((ally_feats.flatten(),
enemy_feats.flatten(),
move_feats.flatten(),
own_feats.flatten(),
agent_id_feats.flatten()))
if self.obs_timestep_number:
agent_obs = np.append(agent_obs, self._episode_steps / self.episode_limit)
if self.debug:
logging.debug("Obs Agent: {}".format(agent_id).center(60, "-"))
logging.debug("Avail. actions {}".format(
self.get_avail_agent_actions(agent_id)))
logging.debug("Move feats {}".format(move_feats))
logging.debug("Enemy feats {}".format(enemy_feats))
logging.debug("Ally feats {}".format(ally_feats))
logging.debug("Own feats {}".format(own_feats))
return agent_obs
def get_obs(self):
"""Returns all agent observations in a list.
NOTE: Agents should have access only to their local observations
during decentralised execution.
"""
agents_obs = [self.get_obs_agent(i) for i in range(self.n_agents)]
return agents_obs
def get_state(self, agent_id=-1):
"""Returns the global state.
NOTE: This function should not be used during decentralised execution.
"""
if self.obs_instead_of_state:
obs_concat = np.concatenate(self.get_obs(), axis=0).astype(np.float32)
return obs_concat
nf_al = 2 + self.shield_bits_ally + self.unit_type_bits
nf_en = 1 + self.shield_bits_enemy + self.unit_type_bits
if self.add_center_xy:
nf_al += 2
nf_en += 2
if self.add_distance_state:
nf_al += 1
nf_en += 1
if self.add_xy_state:
nf_al += 2
nf_en += 2
if self.add_visible_state:
nf_al += 1
nf_en += 1
if self.state_last_action:
nf_al += self.n_actions
nf_en += self.n_actions
if self.add_enemy_action_state:
nf_en += 1
nf_mv = self.get_state_move_feats_size()
ally_state = np.zeros((self.n_agents, nf_al), dtype=np.float32)
enemy_state = np.zeros((self.n_enemies, nf_en), dtype=np.float32)
move_state = np.zeros((1, nf_mv), dtype=np.float32)
agent_id_feats = np.zeros((self.n_agents, 1), dtype=np.float32)
center_x = self.map_x / 2
center_y = self.map_y / 2
unit = self.get_unit_by_id(agent_id)  # get the unit of the given agent
x = unit.pos.x
y = unit.pos.y
sight_range = self.unit_sight_range(agent_id)
avail_actions = self.get_avail_agent_actions(agent_id)
if (self.use_mustalive and unit.health > 0) or (not self.use_mustalive): # or else all zeros
# Movement features
for m in range(self.n_actions_move):
move_state[0, m] = avail_actions[m + 2]
ind = self.n_actions_move
if self.state_pathing_grid:
move_state[0, ind: ind + self.n_obs_pathing] = self.get_surrounding_pathing(unit)
ind += self.n_obs_pathing
if self.state_terrain_height:
move_state[0, ind:] = self.get_surrounding_height(unit)
for al_id, al_unit in self.agents.items():
if al_unit.health > 0:
al_x = al_unit.pos.x
al_y = al_unit.pos.y
max_cd = self.unit_max_cooldown(al_unit)
dist = self.distance(x, y, al_x, al_y)
ally_state[al_id, 0] = (al_unit.health / al_unit.health_max) # health
if (self.map_type == "MMM" and al_unit.unit_type == self.medivac_id):
ally_state[al_id, 1] = al_unit.energy / max_cd # energy
else:
ally_state[al_id, 1] = (al_unit.weapon_cooldown / max_cd) # cooldown
ind = 2
if self.add_center_xy:
ally_state[al_id, ind] = (al_x - center_x) / self.max_distance_x # center X
ally_state[al_id, ind+1] = (al_y - center_y) / self.max_distance_y # center Y
ind += 2
if self.shield_bits_ally > 0:
max_shield = self.unit_max_shield(al_unit)
ally_state[al_id, ind] = (al_unit.shield / max_shield) # shield
ind += 1
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(al_unit, True)
ally_state[al_id, ind + type_id] = 1
if unit.health > 0:
ind += self.unit_type_bits
if self.add_distance_state:
ally_state[al_id, ind] = dist / sight_range # distance
ind += 1
if self.add_xy_state:
ally_state[al_id, ind] = (al_x - x) / sight_range # relative X
ally_state[al_id, ind + 1] = (al_y - y) / sight_range # relative Y
ind += 2
if self.add_visible_state:
if dist < sight_range:
ally_state[al_id, ind] = 1 # visible
ind += 1
if self.state_last_action:
ally_state[al_id, ind:] = self.last_action[al_id]
for e_id, e_unit in self.enemies.items():
if e_unit.health > 0:
e_x = e_unit.pos.x
e_y = e_unit.pos.y
dist = self.distance(x, y, e_x, e_y)
enemy_state[e_id, 0] = (e_unit.health / e_unit.health_max) # health
ind = 1
if self.add_center_xy:
enemy_state[e_id, ind] = (e_x - center_x) / self.max_distance_x # center X
enemy_state[e_id, ind+1] = (e_y - center_y) / self.max_distance_y # center Y
ind += 2
if self.shield_bits_enemy > 0:
max_shield = self.unit_max_shield(e_unit)
enemy_state[e_id, ind] = (e_unit.shield / max_shield) # shield
ind += 1
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(e_unit, False)
enemy_state[e_id, ind + type_id] = 1
if unit.health > 0:
ind += self.unit_type_bits
if self.add_distance_state:
enemy_state[e_id, ind] = dist / sight_range # distance
ind += 1
if self.add_xy_state:
enemy_state[e_id, ind] = (e_x - x) / sight_range # relative X
enemy_state[e_id, ind + 1] = (e_y - y) / sight_range # relative Y
ind += 2
if self.add_visible_state:
if dist < sight_range:
enemy_state[e_id, ind] = 1 # visible
ind += 1
if self.add_enemy_action_state:
enemy_state[e_id, ind] = avail_actions[self.n_actions_no_attack + e_id] # available
state = np.append(ally_state.flatten(), enemy_state.flatten())
if self.add_move_state:
state = np.append(state, move_state.flatten())
if self.add_local_obs:
state = np.append(state, self.get_obs_agent(agent_id).flatten())
if self.state_timestep_number:
state = np.append(state, self._episode_steps / self.episode_limit)
if self.add_agent_id:
agent_id_feats[agent_id] = 1.0
state = np.append(state, agent_id_feats.flatten())
state = state.astype(dtype=np.float32)
if self.debug:
logging.debug("STATE".center(60, "-"))
logging.debug("Ally state {}".format(ally_state))
logging.debug("Enemy state {}".format(enemy_state))
logging.debug("Move state {}".format(move_state))
if self.state_last_action:
logging.debug("Last actions {}".format(self.last_action))
return state
def get_state_agent(self, agent_id):
"""Returns observation for agent_id. The observation is composed of:
- agent movement features (where it can move to, height information and pathing grid)
- enemy features (available_to_attack, health, relative_x, relative_y, shield, unit_type)
- ally features (visible, distance, relative_x, relative_y, shield, unit_type)
- agent unit features (health, shield, unit_type)
All of this information is flattened and concatenated into a list,
in the aforementioned order. To know the sizes of each of the
features inside the final list of features, take a look at the
functions ``get_obs_move_feats_size()``,
``get_obs_enemy_feats_size()``, ``get_obs_ally_feats_size()`` and
``get_obs_own_feats_size()``.
The size of the observation vector may vary, depending on the
environment configuration and type of units present in the map.
For instance, non-Protoss units will not have shields, movement
features may or may not include terrain height and pathing grid, and
unit_type is not included if there is only one type of unit in the map.
NOTE: This function should not be used during decentralised execution.
"""
if self.obs_instead_of_state:
obs_concat = np.concatenate(self.get_obs(), axis=0).astype(np.float32)
return obs_concat
unit = self.get_unit_by_id(agent_id)
move_feats_dim = self.get_obs_move_feats_size()
enemy_feats_dim = self.get_state_enemy_feats_size()
ally_feats_dim = self.get_state_ally_feats_size()
own_feats_dim = self.get_state_own_feats_size()
move_feats = np.zeros(move_feats_dim, dtype=np.float32)
enemy_feats = np.zeros(enemy_feats_dim, dtype=np.float32)
ally_feats = np.zeros(ally_feats_dim, dtype=np.float32)
own_feats = np.zeros(own_feats_dim, dtype=np.float32)
agent_id_feats = np.zeros(self.n_agents, dtype=np.float32)
center_x = self.map_x / 2
center_y = self.map_y / 2
if (self.use_mustalive and unit.health > 0) or (not self.use_mustalive): # otherwise dead, return all zeros
x = unit.pos.x
y = unit.pos.y
sight_range = self.unit_sight_range(agent_id)
# Movement features
avail_actions = self.get_avail_agent_actions(agent_id)
for m in range(self.n_actions_move):
move_feats[m] = avail_actions[m + 2]
ind = self.n_actions_move
if self.state_pathing_grid:
move_feats[ind: ind + self.n_obs_pathing] = self.get_surrounding_pathing(unit)
ind += self.n_obs_pathing
if self.state_terrain_height:
move_feats[ind:] = self.get_surrounding_height(unit)
# Enemy features
for e_id, e_unit in self.enemies.items():
e_x = e_unit.pos.x
e_y = e_unit.pos.y
dist = self.distance(x, y, e_x, e_y)
if e_unit.health > 0: # visible and alive
# Sight range > shoot range
if unit.health > 0:
enemy_feats[e_id, 0] = avail_actions[self.n_actions_no_attack + e_id] # available
enemy_feats[e_id, 1] = dist / sight_range # distance
enemy_feats[e_id, 2] = (e_x - x) / sight_range # relative X
enemy_feats[e_id, 3] = (e_y - y) / sight_range # relative Y
if dist < sight_range:
enemy_feats[e_id, 4] = 1 # visible
ind = 5
if self.obs_all_health:
enemy_feats[e_id, ind] = (e_unit.health / e_unit.health_max) # health
ind += 1
if self.shield_bits_enemy > 0:
max_shield = self.unit_max_shield(e_unit)
enemy_feats[e_id, ind] = (e_unit.shield / max_shield) # shield
ind += 1
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(e_unit, False)
enemy_feats[e_id, ind + type_id] = 1 # unit type
ind += self.unit_type_bits
if self.add_center_xy:
enemy_feats[e_id, ind] = (e_x - center_x) / self.max_distance_x # center X
enemy_feats[e_id, ind+1] = (e_y - center_y) / self.max_distance_y # center Y
# Ally features
al_ids = [al_id for al_id in range(self.n_agents) if al_id != agent_id]
for i, al_id in enumerate(al_ids):
al_unit = self.get_unit_by_id(al_id)
al_x = al_unit.pos.x
al_y = al_unit.pos.y
dist = self.distance(x, y, al_x, al_y)
max_cd = self.unit_max_cooldown(al_unit)
if al_unit.health > 0: # visible and alive
if unit.health > 0:
if dist < sight_range:
ally_feats[i, 0] = 1 # visible
ally_feats[i, 1] = dist / sight_range # distance
ally_feats[i, 2] = (al_x - x) / sight_range # relative X
ally_feats[i, 3] = (al_y - y) / sight_range # relative Y
if (self.map_type == "MMM" and al_unit.unit_type == self.medivac_id):
ally_feats[i, 4] = al_unit.energy / max_cd # energy
else:
ally_feats[i, 4] = (al_unit.weapon_cooldown / max_cd) # cooldown
ind = 5
if self.obs_all_health:
ally_feats[i, ind] = (al_unit.health / al_unit.health_max) # health
ind += 1
if self.shield_bits_ally > 0:
max_shield = self.unit_max_shield(al_unit)
ally_feats[i, ind] = (al_unit.shield / max_shield) # shield
ind += 1
if self.add_center_xy:
ally_feats[i, ind] = (al_x - center_x) / self.max_distance_x # center X
ally_feats[i, ind+1] = (al_y - center_y) / self.max_distance_y # center Y
ind += 2
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(al_unit, True)
ally_feats[i, ind + type_id] = 1
ind += self.unit_type_bits
if self.state_last_action:
ally_feats[i, ind:] = self.last_action[al_id]
# Own features
ind = 0
own_feats[0] = 1 # visible
own_feats[1] = 0 # distance
own_feats[2] = 0 # X
own_feats[3] = 0 # Y
ind = 4
if self.obs_own_health:
own_feats[ind] = unit.health / unit.health_max
ind += 1
if self.shield_bits_ally > 0:
max_shield = self.unit_max_shield(unit)
own_feats[ind] = unit.shield / max_shield
ind += 1
if self.add_center_xy:
own_feats[ind] = (x - center_x) / self.max_distance_x # center X
own_feats[ind+1] = (y - center_y) / self.max_distance_y # center Y
ind += 2
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(unit, True)
own_feats[ind + type_id] = 1
ind += self.unit_type_bits
if self.state_last_action:
own_feats[ind:] = self.last_action[agent_id]
state = np.concatenate((ally_feats.flatten(),
enemy_feats.flatten(),
move_feats.flatten(),
own_feats.flatten()))
# Agent id features
if self.state_agent_id:
agent_id_feats[agent_id] = 1.
state = np.append(state, agent_id_feats.flatten())
if self.state_timestep_number:
state = np.append(state, self._episode_steps / self.episode_limit)
if self.debug:
logging.debug("Obs Agent: {}".format(agent_id).center(60, "-"))
logging.debug("Avail. actions {}".format(
self.get_avail_agent_actions(agent_id)))
logging.debug("Move feats {}".format(move_feats))
logging.debug("Enemy feats {}".format(enemy_feats))
logging.debug("Ally feats {}".format(ally_feats))
logging.debug("Own feats {}".format(own_feats))
return state
def get_obs_enemy_feats_size(self):
""" Returns the dimensions of the matrix containing enemy features.
Size is n_enemies x n_features.
"""
nf_en = 4 + self.unit_type_bits
if self.obs_all_health:
nf_en += 1 + self.shield_bits_enemy
return self.n_enemies, nf_en
def get_state_enemy_feats_size(self):
""" Returns the dimensions of the matrix containing enemy features.
Size is n_enemies x n_features.
"""
nf_en = 5 + self.unit_type_bits
if self.obs_all_health:
nf_en += 1 + self.shield_bits_enemy
if self.add_center_xy:
nf_en += 2
return self.n_enemies, nf_en
def get_obs_ally_feats_size(self):
"""Returns the dimensions of the matrix containing ally features.
Size is n_allies x n_features.
"""
nf_al = 4 + self.unit_type_bits
if self.obs_all_health:
nf_al += 1 + self.shield_bits_ally
if self.obs_last_action:
nf_al += self.n_actions
return self.n_agents - 1, nf_al
def get_state_ally_feats_size(self):
"""Returns the dimensions of the matrix containing ally features.
Size is n_allies x n_features.
"""
nf_al = 5 + self.unit_type_bits
if self.obs_all_health:
nf_al += 1 + self.shield_bits_ally
if self.obs_last_action:
nf_al += self.n_actions
if self.add_center_xy:
nf_al += 2
return self.n_agents - 1, nf_al
def get_obs_own_feats_size(self):
"""Returns the size of the vector containing the agents' own features.
"""
own_feats = 4 + self.unit_type_bits
if self.obs_own_health:
own_feats += 1 + self.shield_bits_ally
if self.obs_last_action:
own_feats += self.n_actions
return own_feats
def get_state_own_feats_size(self):
"""Returns the size of the vector containing the agents' own features.
"""
own_feats = 4 + self.unit_type_bits
if self.obs_own_health:
own_feats += 1 + self.shield_bits_ally
if self.obs_last_action:
own_feats += self.n_actions
if self.add_center_xy:
own_feats += 2
return own_feats
def get_obs_move_feats_size(self):
"""Returns the size of the vector containing the agents's movement-related features."""
move_feats = self.n_actions_move
if self.obs_pathing_grid:
move_feats += self.n_obs_pathing
if self.obs_terrain_height:
move_feats += self.n_obs_height
return move_feats
def get_state_move_feats_size(self):
"""Returns the size of the vector containing the agents's movement-related features."""
move_feats = self.n_actions_move
if self.state_pathing_grid:
move_feats += self.n_obs_pathing
if self.state_terrain_height:
move_feats += self.n_obs_height
return move_feats
def get_obs_size(self):
"""Returns the size of the observation."""
own_feats = self.get_obs_own_feats_size()
move_feats = self.get_obs_move_feats_size()
n_enemies, n_enemy_feats = self.get_obs_enemy_feats_size()
n_allies, n_ally_feats = self.get_obs_ally_feats_size()
enemy_feats = n_enemies * n_enemy_feats
ally_feats = n_allies * n_ally_feats
all_feats = move_feats + enemy_feats + ally_feats + own_feats
agent_id_feats = 0
timestep_feats = 0
if self.obs_agent_id:
agent_id_feats = self.n_agents
all_feats += agent_id_feats
if self.obs_timestep_number:
timestep_feats = 1
all_feats += timestep_feats
return [all_feats * self.stacked_frames if self.use_stacked_frames else all_feats, [n_allies, n_ally_feats], [n_enemies, n_enemy_feats], [1, move_feats], [1, own_feats+agent_id_feats+timestep_feats]]
def get_state_size(self):
"""Returns the size of the global state."""
if self.obs_instead_of_state:
return [self.get_obs_size()[0] * self.n_agents, [self.n_agents, self.get_obs_size()[0]]]
if self.use_state_agent:
own_feats = self.get_state_own_feats_size()
move_feats = self.get_obs_move_feats_size()
n_enemies, n_enemy_feats = self.get_state_enemy_feats_size()
n_allies, n_ally_feats = self.get_state_ally_feats_size()
enemy_feats = n_enemies * n_enemy_feats
ally_feats = n_allies * n_ally_feats
all_feats = move_feats + enemy_feats + ally_feats + own_feats
agent_id_feats = 0
timestep_feats = 0
if self.state_agent_id:
agent_id_feats = self.n_agents
all_feats += agent_id_feats
if self.state_timestep_number:
timestep_feats = 1
all_feats += timestep_feats
return [all_feats * self.stacked_frames if self.use_stacked_frames else all_feats, [n_allies, n_ally_feats], [n_enemies, n_enemy_feats], [1, move_feats], [1, own_feats+agent_id_feats+timestep_feats]]
nf_al = 2 + self.shield_bits_ally + self.unit_type_bits
nf_en = 1 + self.shield_bits_enemy + self.unit_type_bits
nf_mv = self.get_state_move_feats_size()
if self.add_center_xy:
nf_al += 2
nf_en += 2
if self.state_last_action:
nf_al += self.n_actions
nf_en += self.n_actions
if self.add_visible_state:
nf_al += 1
nf_en += 1
if self.add_distance_state:
nf_al += 1
nf_en += 1
if self.add_xy_state:
nf_al += 2
nf_en += 2
if self.add_enemy_action_state:
nf_en += 1
enemy_state = self.n_enemies * nf_en
ally_state = self.n_agents * nf_al
size = enemy_state + ally_state
move_state = 0
obs_agent_size = 0
timestep_state = 0
agent_id_feats = 0
if self.add_move_state:
move_state = nf_mv
size += move_state
if self.add_local_obs:
obs_agent_size = self.get_obs_size()[0]
size += obs_agent_size
if self.state_timestep_number:
timestep_state = 1
size += timestep_state
if self.add_agent_id:
agent_id_feats = self.n_agents
size += agent_id_feats
return [size * self.stacked_frames if self.use_stacked_frames else size, [self.n_agents, nf_al], [self.n_enemies, nf_en], [1, move_state + obs_agent_size + timestep_state + agent_id_feats]]
def get_visibility_matrix(self):
"""Returns a boolean numpy array of dimensions
(n_agents, n_agents + n_enemies) indicating which units
are visible to each agent.
"""
arr = np.zeros((self.n_agents, self.n_agents + self.n_enemies), dtype=bool)
for agent_id in range(self.n_agents):
current_agent = self.get_unit_by_id(agent_id)
if current_agent.health > 0: # if the agent is not dead
x = current_agent.pos.x
y = current_agent.pos.y
sight_range = self.unit_sight_range(agent_id)
# Enemies
for e_id, e_unit in self.enemies.items():
e_x = e_unit.pos.x
e_y = e_unit.pos.y
dist = self.distance(x, y, e_x, e_y)
if (dist < sight_range and e_unit.health > 0):
# visible and alive
arr[agent_id, self.n_agents + e_id] = 1
# The matrix for allies is filled symmetrically
al_ids = [
al_id for al_id in range(self.n_agents)
if al_id > agent_id
]
for i, al_id in enumerate(al_ids):
al_unit = self.get_unit_by_id(al_id)
al_x = al_unit.pos.x
al_y = al_unit.pos.y
dist = self.distance(x, y, al_x, al_y)
if (dist < sight_range and al_unit.health > 0):
# visible and alive
arr[agent_id, al_id] = arr[al_id, agent_id] = 1
return arr
def get_unit_type_id(self, unit, ally):
"""Returns the ID of unit type in the given scenario."""
if ally: # use new SC2 unit types
type_id = unit.unit_type - self._min_unit_type
else: # use default SC2 unit types
if self.map_type == "stalkers_and_zealots":
# id(Stalker) = 74, id(Zealot) = 73
type_id = unit.unit_type - 73
elif self.map_type == "colossi_stalkers_zealots":
# id(Stalker) = 74, id(Zealot) = 73, id(Colossus) = 4
if unit.unit_type == 4:
type_id = 0
elif unit.unit_type == 74:
type_id = 1
else:
type_id = 2
elif self.map_type == "bane":
if unit.unit_type == 9:
type_id = 0
else:
type_id = 1
elif self.map_type == "MMM":
if unit.unit_type == 51:
type_id = 0
elif unit.unit_type == 48:
type_id = 1
else:
type_id = 2
return type_id
def get_avail_agent_actions(self, agent_id):
"""Returns the available actions for agent_id."""
unit = self.get_unit_by_id(agent_id)
if unit.health > 0:
# cannot choose no-op when alive
avail_actions = [0] * self.n_actions
# stop should be allowed
avail_actions[1] = 1
# see if we can move
if self.can_move(unit, Direction.NORTH):
avail_actions[2] = 1
if self.can_move(unit, Direction.SOUTH):
avail_actions[3] = 1
if self.can_move(unit, Direction.EAST):
avail_actions[4] = 1
if self.can_move(unit, Direction.WEST):
avail_actions[5] = 1
# Can only attack units that are alive and within shooting range
shoot_range = self.unit_shoot_range(agent_id)
target_items = self.enemies.items()
if self.map_type == "MMM" and unit.unit_type == self.medivac_id:
# Medivacs cannot heal themselves or other flying units
target_items = [
(t_id, t_unit)
for (t_id, t_unit) in self.agents.items()
if t_unit.unit_type != self.medivac_id
]
for t_id, t_unit in target_items:
if t_unit.health > 0:
dist = self.distance(
unit.pos.x, unit.pos.y, t_unit.pos.x, t_unit.pos.y
)
if dist <= shoot_range:
avail_actions[t_id + self.n_actions_no_attack] = 1
return avail_actions
else:
# only no-op allowed
return [1] + [0] * (self.n_actions - 1)
def get_avail_actions(self):
"""Returns the available actions of all agents in a list."""
avail_actions = []
for agent_id in range(self.n_agents):
avail_agent = self.get_avail_agent_actions(agent_id)
avail_actions.append(avail_agent)
return avail_actions
def close(self):
"""Close StarCraft II."""
if self._sc2_proc:
self._sc2_proc.close()
def seed(self, seed):
"""Returns the random seed used by the environment."""
self._seed = seed
def render(self):
"""Not implemented."""
pass
def _kill_all_units(self):
"""Kill all units on the map."""
units_alive = [
unit.tag for unit in self.agents.values() if unit.health > 0
] + [unit.tag for unit in self.enemies.values() if unit.health > 0]
debug_command = [
d_pb.DebugCommand(kill_unit=d_pb.DebugKillUnit(tag=units_alive))
]
self._controller.debug(debug_command)
def init_units(self):
"""Initialise the units."""
while True:
# Sometimes not all units have yet been created by SC2
self.agents = {}
self.enemies = {}
ally_units = [
unit
for unit in self._obs.observation.raw_data.units
if unit.owner == 1
]
ally_units_sorted = sorted(
ally_units,
key=attrgetter("unit_type", "pos.x", "pos.y"),
reverse=False,
)
for i in range(len(ally_units_sorted)):
self.agents[i] = ally_units_sorted[i]
if self.debug:
logging.debug(
"Unit {} is {}, x = {}, y = {}".format(
len(self.agents),
self.agents[i].unit_type,
self.agents[i].pos.x,
self.agents[i].pos.y,
)
)
for unit in self._obs.observation.raw_data.units:
if unit.owner == 2:
self.enemies[len(self.enemies)] = unit
if self._episode_count == 0:
self.max_reward += unit.health_max + unit.shield_max
if self._episode_count == 0:
min_unit_type = min(
unit.unit_type for unit in self.agents.values()
)
self._init_ally_unit_types(min_unit_type)
all_agents_created = (len(self.agents) == self.n_agents)
all_enemies_created = (len(self.enemies) == self.n_enemies)
if all_agents_created and all_enemies_created: # all good
return
try:
self._controller.step(1)
self._obs = self._controller.observe()
except (protocol.ProtocolError, protocol.ConnectionError):
self.full_restart()
self.reset()
def update_units(self):
"""Update units after an environment step.
This function assumes that self._obs is up-to-date.
"""
n_ally_alive = 0
n_enemy_alive = 0
# Store previous state
self.previous_ally_units = deepcopy(self.agents)
self.previous_enemy_units = deepcopy(self.enemies)
for al_id, al_unit in self.agents.items():
updated = False
for unit in self._obs.observation.raw_data.units:
if al_unit.tag == unit.tag:
self.agents[al_id] = unit
updated = True
n_ally_alive += 1
break
if not updated: # dead
al_unit.health = 0
for e_id, e_unit in self.enemies.items():
updated = False
for unit in self._obs.observation.raw_data.units:
if e_unit.tag == unit.tag:
self.enemies[e_id] = unit
updated = True
n_enemy_alive += 1
break
if not updated: # dead
e_unit.health = 0
if (n_ally_alive == 0 and n_enemy_alive > 0
or self.only_medivac_left(ally=True)):
return -1 # lost
if (n_ally_alive > 0 and n_enemy_alive == 0
or self.only_medivac_left(ally=False)):
return 1 # won
if n_ally_alive == 0 and n_enemy_alive == 0:
return 0
return None
def _init_ally_unit_types(self, min_unit_type):
"""Initialise ally unit types. Should be called once from the
init_units function.
"""
self._min_unit_type = min_unit_type
if self.map_type == "marines":
self.marine_id = min_unit_type
elif self.map_type == "stalkers_and_zealots":
self.stalker_id = min_unit_type
self.zealot_id = min_unit_type + 1
elif self.map_type == "colossi_stalkers_zealots":
self.colossus_id = min_unit_type
self.stalker_id = min_unit_type + 1
self.zealot_id = min_unit_type + 2
elif self.map_type == "MMM":
self.marauder_id = min_unit_type
self.marine_id = min_unit_type + 1
self.medivac_id = min_unit_type + 2
elif self.map_type == "zealots":
self.zealot_id = min_unit_type
elif self.map_type == "hydralisks":
self.hydralisk_id = min_unit_type
elif self.map_type == "stalkers":
self.stalker_id = min_unit_type
elif self.map_type == "colossus":
self.colossus_id = min_unit_type
elif self.map_type == "bane":
self.baneling_id = min_unit_type
self.zergling_id = min_unit_type + 1
def only_medivac_left(self, ally):
"""Check if only Medivac units are left."""
if self.map_type != "MMM":
return False
if ally:
units_alive = [
a
for a in self.agents.values()
if (a.health > 0 and a.unit_type != self.medivac_id)
]
if len(units_alive) == 0:
return True
return False
else:
units_alive = [
a
for a in self.enemies.values()
if (a.health > 0 and a.unit_type != self.medivac_id)
]
if len(units_alive) == 1 and units_alive[0].unit_type == 54:
return True
return False
def get_unit_by_id(self, a_id):
"""Get unit by ID."""
return self.agents[a_id]
def get_stats(self):
stats = {
"battles_won": self.battles_won,
"battles_game": self.battles_game,
"battles_draw": self.timeouts,
"win_rate": self.battles_won / self.battles_game,
"timeouts": self.timeouts,
"restarts": self.force_restarts,
}
return stats
================================================
FILE: envs/starcraft2/multiagentenv.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
class MultiAgentEnv(object):
def step(self, actions):
"""Returns reward, terminated, info."""
raise NotImplementedError
def get_obs(self):
"""Returns all agent observations in a list."""
raise NotImplementedError
def get_obs_agent(self, agent_id):
"""Returns observation for agent_id."""
raise NotImplementedError
def get_obs_size(self):
"""Returns the size of the observation."""
raise NotImplementedError
def get_state(self):
"""Returns the global state."""
raise NotImplementedError
def get_state_size(self):
"""Returns the size of the global state."""
raise NotImplementedError
def get_avail_actions(self):
"""Returns the available actions of all agents in a list."""
raise NotImplementedError
def get_avail_agent_actions(self, agent_id):
"""Returns the available actions for agent_id."""
raise NotImplementedError
def get_total_actions(self):
"""Returns the total number of actions an agent could ever take."""
raise NotImplementedError
def reset(self):
"""Returns initial observations and states."""
raise NotImplementedError
def render(self):
raise NotImplementedError
def close(self):
raise NotImplementedError
def seed(self):
raise NotImplementedError
def save_replay(self):
"""Save a replay."""
raise NotImplementedError
def get_env_info(self):
env_info = {"state_shape": self.get_state_size(),
"obs_shape": self.get_obs_size(),
"obs_alone_shape": self.get_obs_alone_size(),
"n_actions": self.get_total_actions(),
"n_agents": self.n_agents,
"episode_limit": self.episode_limit}
return env_info
================================================
FILE: envs/starcraft2/smac_maps.py
================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from pysc2.maps import lib
class SMACMap(lib.Map):
directory = "SMAC_Maps"
download = "https://github.com/oxwhirl/smac#smac-maps"
players = 2
step_mul = 8
game_steps_per_episode = 0
map_param_registry = {
"3m": {
"n_agents": 3,
"n_enemies": 3,
"limit": 60,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"8m": {
"n_agents": 8,
"n_enemies": 8,
"limit": 120,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"25m": {
"n_agents": 25,
"n_enemies": 25,
"limit": 150,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"5m_vs_6m": {
"n_agents": 5,
"n_enemies": 6,
"limit": 70,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"8m_vs_9m": {
"n_agents": 8,
"n_enemies": 9,
"limit": 120,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"10m_vs_11m": {
"n_agents": 10,
"n_enemies": 11,
"limit": 150,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"27m_vs_30m": {
"n_agents": 27,
"n_enemies": 30,
"limit": 180,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"MMM": {
"n_agents": 10,
"n_enemies": 10,
"limit": 150,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 3,
"map_type": "MMM",
},
"MMM2": {
"n_agents": 10,
"n_enemies": 12,
"limit": 180,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 3,
"map_type": "MMM",
},
"2s3z": {
"n_agents": 5,
"n_enemies": 5,
"limit": 120,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"3s5z": {
"n_agents": 8,
"n_enemies": 8,
"limit": 150,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"3s5z_vs_3s6z": {
"n_agents": 8,
"n_enemies": 9,
"limit": 170,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"3s_vs_3z": {
"n_agents": 3,
"n_enemies": 3,
"limit": 150,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 0,
"map_type": "stalkers",
},
"3s_vs_4z": {
"n_agents": 3,
"n_enemies": 4,
"limit": 200,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 0,
"map_type": "stalkers",
},
"3s_vs_5z": {
"n_agents": 3,
"n_enemies": 5,
"limit": 250,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 0,
"map_type": "stalkers",
},
"1c3s5z": {
"n_agents": 9,
"n_enemies": 9,
"limit": 180,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 3,
"map_type": "colossi_stalkers_zealots",
},
"2m_vs_1z": {
"n_agents": 2,
"n_enemies": 1,
"limit": 150,
"a_race": "T",
"b_race": "P",
"unit_type_bits": 0,
"map_type": "marines",
},
"corridor": {
"n_agents": 6,
"n_enemies": 24,
"limit": 400,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 0,
"map_type": "zealots",
},
"6h_vs_8z": {
"n_agents": 6,
"n_enemies": 8,
"limit": 150,
"a_race": "Z",
"b_race": "P",
"unit_type_bits": 0,
"map_type": "hydralisks",
},
"2s_vs_1sc": {
"n_agents": 2,
"n_enemies": 1,
"limit": 300,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 0,
"map_type": "stalkers",
},
"so_many_baneling": {
"n_agents": 7,
"n_enemies": 32,
"limit": 100,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 0,
"map_type": "zealots",
},
"bane_vs_bane": {
"n_agents": 24,
"n_enemies": 24,
"limit": 200,
"a_race": "Z",
"b_race": "Z",
"unit_type_bits": 2,
"map_type": "bane",
},
"2c_vs_64zg": {
"n_agents": 2,
"n_enemies": 64,
"limit": 400,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 0,
"map_type": "colossus",
},
# The following are ad hoc environments
"1c2z_vs_1c1s1z": {
"n_agents": 3,
"n_enemies": 3,
"limit": 180,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 3,
"map_type": "colossi_stalkers_zealots",
},
"1c2s_vs_1c1s1z": {
"n_agents": 3,
"n_enemies": 3,
"limit": 180,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 3,
"map_type": "colossi_stalkers_zealots",
},
"2c1z_vs_1c1s1z": {
"n_agents": 3,
"n_enemies": 3,
"limit": 180,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 3,
"map_type": "colossi_stalkers_zealots",
},
"2c1s_vs_1c1s1z": {
"n_agents": 3,
"n_enemies": 3,
"limit": 180,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 3,
"map_type": "colossi_stalkers_zealots",
},
"1c1s1z_vs_1c1s1z": {
"n_agents": 3,
"n_enemies": 3,
"limit": 180,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 3,
"map_type": "colossi_stalkers_zealots",
},
"3s5z_vs_4s4z": {
"n_agents": 8,
"n_enemies": 8,
"limit": 150,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"4s4z_vs_4s4z": {
"n_agents": 8,
"n_enemies": 8,
"limit": 150,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"5s3z_vs_4s4z": {
"n_agents": 8,
"n_enemies": 8,
"limit": 150,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"6s2z_vs_4s4z": {
"n_agents": 8,
"n_enemies": 8,
"limit": 150,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"2s6z_vs_4s4z": {
"n_agents": 8,
"n_enemies": 8,
"limit": 150,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"6m_vs_6m_tz": {
"n_agents": 6,
"n_enemies": 6,
"limit": 70,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"5m_vs_6m_tz": {
"n_agents": 5,
"n_enemies": 6,
"limit": 70,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"3s6z_vs_3s6z": {
"n_agents": 9,
"n_enemies": 9,
"limit": 170,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"7h_vs_8z": {
"n_agents": 7,
"n_enemies": 8,
"limit": 150,
"a_race": "Z",
"b_race": "P",
"unit_type_bits": 0,
"map_type": "hydralisks",
},
"2s2z_vs_zg": {
"n_agents": 4,
"n_enemies": 20,
"limit": 200,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots_vs_zergling",
},
"1s3z_vs_zg": {
"n_agents": 4,
"n_enemies": 20,
"limit": 200,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots_vs_zergling",
},
"3s1z_vs_zg": {
"n_agents": 4,
"n_enemies": 20,
"limit": 200,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots_vs_zergling",
},
"2s2z_vs_zg_easy": {
"n_agents": 4,
"n_enemies": 18,
"limit": 200,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots_vs_zergling",
},
"1s3z_vs_zg_easy": {
"n_agents": 4,
"n_enemies": 18,
"limit": 200,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots_vs_zergling",
},
"3s1z_vs_zg_easy": {
"n_agents": 4,
"n_enemies": 18,
"limit": 200,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots_vs_zergling",
},
"28m_vs_30m": {
"n_agents": 28,
"n_enemies": 30,
"limit": 180,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"29m_vs_30m": {
"n_agents": 29,
"n_enemies": 30,
"limit": 180,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"30m_vs_30m": {
"n_agents": 30,
"n_enemies": 30,
"limit": 180,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"MMM2_test": {
"n_agents": 10,
"n_enemies": 12,
"limit": 180,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 3,
"map_type": "MMM",
},
}
def get_smac_map_registry():
return map_param_registry
for name in map_param_registry.keys():
globals()[name] = type(name, (SMACMap,), dict(filename=name))
def get_map_params(map_name):
map_param_registry = get_smac_map_registry()
return map_param_registry[map_name]
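# Usage sketch (illustrative): the loop above registers every key of
# map_param_registry as a SMACMap subclass visible to pysc2's map library, and
# get_map_params("3m") returns the corresponding parameter dict, e.g.
# {"n_agents": 3, "n_enemies": 3, "limit": 60, "a_race": "T", "b_race": "T",
#  "unit_type_bits": 0, "map_type": "marines"}.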
================================================
FILE: install_sc2.sh
================================================
#!/bin/bash
# Install SC2 and add the custom maps
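# Usage sketch (assumes a pymarl checkout lives under $EXP_DIR, as the cd below expects):
#   EXP_DIR=/path/to/workspace bash install_sc2.sh
# If EXP_DIR is unset, the home directory is used.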
if [ -z "$EXP_DIR" ]
then
EXP_DIR=~
fi
echo "EXP_DIR: $EXP_DIR"
cd $EXP_DIR/pymarl
mkdir 3rdparty
cd 3rdparty
export SC2PATH=`pwd`'/StarCraftII'
echo 'SC2PATH is set to '$SC2PATH
if [ ! -d $SC2PATH ]; then
echo 'StarCraftII is not installed. Installing now ...';
wget http://blzdistsc2-a.akamaihd.net/Linux/SC2.4.10.zip
unzip -P iagreetotheeula SC2.4.10.zip
rm -rf SC2.4.10.zip
else
echo 'StarCraftII is already installed.'
fi
echo 'Adding SMAC maps.'
MAP_DIR="$SC2PATH/Maps/"
echo 'MAP_DIR is set to '$MAP_DIR
if [ ! -d $MAP_DIR ]; then
mkdir -p $MAP_DIR
fi
cd ..
wget https://github.com/oxwhirl/smac/releases/download/v0.1-beta1/SMAC_Maps.zip
unzip SMAC_Maps.zip
mv SMAC_Maps $MAP_DIR
rm -rf SMAC_Maps.zip
echo 'StarCraft II and SMAC are installed.'
================================================
FILE: requirements.txt
================================================
absl-py==0.9.0
aiohttp==3.6.2
aioredis==1.3.1
astor==0.8.0
astunparse==1.6.3
async-timeout==3.0.1
atari-py==0.2.6
atomicwrites==1.2.1
attrs==18.2.0
beautifulsoup4==4.9.1
blessings==1.7
cachetools==4.1.1
certifi==2020.4.5.2
cffi==1.14.1
chardet==3.0.4
click==7.1.2
cloudpickle==1.3.0
colorama==0.4.3
colorful==0.5.4
configparser==5.0.1
contextvars==2.4
cycler==0.10.0
Cython==0.29.21
deepdiff==4.3.2
dill==0.3.2
docker-pycreds==0.4.0
docopt==0.6.2
fasteners==0.15
filelock==3.0.12
funcsigs==1.0.2
future==0.16.0
gast==0.2.2
gin==0.1.6
gin-config==0.3.0
gitdb==4.0.5
GitPython==3.1.9
glfw==1.12.0
google==3.0.0
google-api-core==1.22.1
google-auth==1.21.0
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
googleapis-common-protos==1.52.0
gpustat==0.6.0
gql==0.2.0
graphql-core==1.1
grpcio==1.31.0
gym==0.17.2
h5py==2.10.0
hiredis==1.1.0
idna==2.7
idna-ssl==1.1.0
imageio==2.4.1
immutables==0.14
importlib-metadata==1.7.0
joblib==0.16.0
jsonnet==0.16.0
jsonpickle==0.9.6
jsonschema==3.2.0
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.2
kiwisolver==1.0.1
lockfile==0.12.2
Markdown==3.1.1
matplotlib==3.0.0
mkl-fft==1.2.0
mkl-random==1.2.0
mkl-service==2.3.0
mock==2.0.0
monotonic==1.5
more-itertools==4.3.0
mpi4py==3.0.3
mpyq==0.2.5
msgpack==1.0.0
mujoco-py==2.0.2.8
multidict==4.7.6
munch==2.3.2
numpy
nvidia-ml-py3==7.352.0
oauthlib==3.1.0
opencensus==0.7.10
opencensus-context==0.1.1
opencv-python==4.2.0.34
opt-einsum==3.1.0
ordered-set==4.0.2
packaging==20.4
pandas==1.1.1
pathlib2==2.3.2
pathtools==0.1.2
pbr==4.3.0
Pillow==5.3.0
pluggy==0.7.1
portpicker==1.2.0
probscale==0.2.3
progressbar2==3.53.1
prometheus-client==0.8.0
promise==2.3
protobuf==3.12.4
psutil==5.7.2
py==1.6.0
py-spy==0.3.3
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pygame==1.9.4
pyglet==1.5.0
PyOpenGL==3.1.5
PyOpenGL-accelerate==3.1.5
pyparsing==2.2.2
pyrsistent==0.16.0
PySC2==3.0.0
pytest==3.8.2
python-dateutil==2.7.3
python-utils==2.4.0
pytz==2020.1
PyYAML==3.13
pyzmq==19.0.2
redis==3.4.1
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
s2clientprotocol==4.10.1.75800.0
s2protocol==4.11.4.78285.0
sacred==0.7.2
scipy==1.4.1
seaborn==0.10.1
sentry-sdk==0.18.0
setproctitle==1.1.10
shortuuid==1.0.1
six==1.15.0
sk-video==1.1.10
smmap==3.0.4
snakeviz==1.0.0
soupsieve==2.0.1
subprocess32==3.5.4
tabulate==0.8.7
tensorboard==2.0.2
tensorboard-logger==0.1.0
tensorboard-plugin-wit==1.7.0
tensorboardX==2.0
tensorflow==2.0.0
tensorflow-estimator==2.0.0
termcolor==1.1.0
torch
torchvision
tornado
tqdm==4.48.2
typing-extensions==3.7.4.3
urllib3==1.23
watchdog==0.10.3
websocket-client==0.53.0
Werkzeug==0.16.1
whichcraft==0.5.2
wrapt==1.12.1
xmltodict==0.12.0
yarl==1.5.1
zipp==3.1.0
zmq==0.0.0
================================================
FILE: runners/__init__.py
================================================
from runners import separated
__all__=[
"separated"
]
================================================
FILE: runners/separated/__init__.py
================================================
from runners.separated import base_runner,smac_runner
__all__=[
"base_runner",
"smac_runner"
]
================================================
FILE: runners/separated/base_runner.py
================================================
import time
import os
import numpy as np
from itertools import chain
import torch
from tensorboardX import SummaryWriter
from utils.separated_buffer import SeparatedReplayBuffer
from utils.util import update_linear_schedule
def _t2n(x):
return x.detach().cpu().numpy()
class Runner(object):
def __init__(self, config):
self.all_args = config['all_args']
self.envs = config['envs']
self.eval_envs = config['eval_envs']
self.device = config['device']
self.num_agents = config['num_agents']
# parameters
self.env_name = self.all_args.env_name
self.algorithm_name = self.all_args.algorithm_name
self.experiment_name = self.all_args.experiment_name
self.use_centralized_V = self.all_args.use_centralized_V
self.use_obs_instead_of_state = self.all_args.use_obs_instead_of_state
self.num_env_steps = self.all_args.num_env_steps
self.episode_length = self.all_args.episode_length
self.n_rollout_threads = self.all_args.n_rollout_threads
self.n_eval_rollout_threads = self.all_args.n_eval_rollout_threads
self.use_linear_lr_decay = self.all_args.use_linear_lr_decay
self.hidden_size = self.all_args.hidden_size
self.use_render = self.all_args.use_render
self.recurrent_N = self.all_args.recurrent_N
self.use_single_network = self.all_args.use_single_network
# interval
self.save_interval = self.all_args.save_interval
self.use_eval = self.all_args.use_eval
self.eval_interval = self.all_args.eval_interval
self.log_interval = self.all_args.log_interval
# dir
self.model_dir = self.all_args.model_dir
if self.use_render:
import imageio
self.run_dir = config["run_dir"]
self.gif_dir = str(self.run_dir / 'gifs')
if not os.path.exists(self.gif_dir):
os.makedirs(self.gif_dir)
else:
self.run_dir = config["run_dir"]
self.log_dir = str(self.run_dir / 'logs')
if not os.path.exists(self.log_dir):
os.makedirs(self.log_dir)
self.writter = SummaryWriter(self.log_dir)
self.save_dir = str(self.run_dir / 'models')
if not os.path.exists(self.save_dir):
os.makedirs(self.save_dir)
if self.all_args.algorithm_name == "happo":
from algorithms.happo_trainer import HAPPO as TrainAlgo
from algorithms.happo_policy import HAPPO_Policy as Policy
elif self.all_args.algorithm_name == "hatrpo":
from algorithms.hatrpo_trainer import HATRPO as TrainAlgo
from algorithms.hatrpo_policy import HATRPO_Policy as Policy
else:
raise NotImplementedError
print("share_observation_space: ", self.envs.share_observation_space)
print("observation_space: ", self.envs.observation_space)
print("action_space: ", self.envs.action_space)
self.policy = []
for agent_id in range(self.num_agents):
share_observation_space = self.envs.share_observation_space[agent_id] if self.use_centralized_V else self.envs.observation_space[agent_id]
# policy network
po = Policy(self.all_args,
self.envs.observation_space[agent_id],
share_observation_space,
self.envs.action_space[agent_id],
device = self.device)
self.policy.append(po)
if self.model_dir is not None:
self.restore()
self.trainer = []
self.buffer = []
for agent_id in range(self.num_agents):
# algorithm
tr = TrainAlgo(self.all_args, self.policy[agent_id], device = self.device)
# buffer
share_observation_space = self.envs.share_observation_space[agent_id] if self.use_centralized_V else self.envs.observation_space[agent_id]
bu = SeparatedReplayBuffer(self.all_args,
self.envs.observation_space[agent_id],
share_observation_space,
self.envs.action_space[agent_id])
self.buffer.append(bu)
self.trainer.append(tr)
def run(self):
raise NotImplementedError
def warmup(self):
raise NotImplementedError
def collect(self, step):
raise NotImplementedError
def insert(self, data):
raise NotImplementedError
@torch.no_grad()
def compute(self):
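# Evaluate the critic on the last stored share_obs of each agent's buffer and use it
# as the bootstrap value when computing returns (the buffer optionally de-normalises
# predicted values through the trainer's value_normalizer, e.g. PopArt).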
for agent_id in range(self.num_agents):
self.trainer[agent_id].prep_rollout()
next_value = self.trainer[agent_id].policy.get_values(self.buffer[agent_id].share_obs[-1],
self.buffer[agent_id].rnn_states_critic[-1],
self.buffer[agent_id].masks[-1])
next_value = _t2n(next_value)
self.buffer[agent_id].compute_returns(next_value, self.trainer[agent_id].value_normalizer)
def train(self):
train_infos = []
# random update order
action_dim=self.buffer[0].actions.shape[-1]
factor = np.ones((self.episode_length, self.n_rollout_threads, 1), dtype=np.float32)
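# Sequential, agent-by-agent update in a random permutation, following the
# HAPPO/HATRPO scheme: `factor` keeps, for every (step, thread), the running product
# of probability ratios prod_k pi_k_new(a_k|o_k) / pi_k_old(a_k|o_k) over the agents
# already updated in this iteration (see the update of `factor` further below). Each
# agent's buffer receives the current factor via update_factor(), so that later
# agents account for the policy changes already made by earlier ones.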
for agent_id in torch.randperm(self.num_agents):
self.trainer[agent_id].prep_training()
self.buffer[agent_id].update_factor(factor)
available_actions = None if self.buffer[agent_id].available_actions is None \
else self.buffer[agent_id].available_actions[:-1].reshape(-1, *self.buffer[agent_id].available_actions.shape[2:])
if self.all_args.algorithm_name == "hatrpo":
old_actions_logprob, _, _, _, _ =self.trainer[agent_id].policy.actor.evaluate_actions(self.buffer[agent_id].obs[:-1].reshape(-1, *self.buffer[agent_id].obs.shape[2:]),
self.buffer[agent_id].rnn_states[0:1].reshape(-1, *self.buffer[agent_id].rnn_states.shape[2:]),
self.buffer[agent_id].actions.reshape(-1, *self.buffer[agent_id].actions.shape[2:]),
self.buffer[agent_id].masks[:-1].reshape(-1, *self.buffer[agent_id].masks.shape[2:]),
available_actions,
self.buffer[agent_id].active_masks[:-1].reshape(-1, *self.buffer[agent_id].active_masks.shape[2:]))
else:
old_actions_logprob, _ =self.trainer[agent_id].policy.actor.evaluate_actions(self.buffer[agent_id].obs[:-1].reshape(-1, *self.buffer[agent_id].obs.shape[2:]),
self.buffer[agent_id].rnn_states[0:1].reshape(-1, *self.buffer[agent_id].rnn_states.shape[2:]),
self.buffer[agent_id].actions.reshape(-1, *self.buffer[agent_id].actions.shape[2:]),
self.buffer[agent_id].masks[:-1].reshape(-1, *self.buffer[agent_id].masks.shape[2:]),
available_actions,
self.buffer[agent_id].active_masks[:-1].reshape(-1, *self.buffer[agent_id].active_masks.shape[2:]))
train_info = self.trainer[agent_id].train(self.buffer[agent_id])
if self.all_args.algorithm_name == "hatrpo":
new_actions_logprob, _, _, _, _ =self.trainer[agent_id].policy.actor.evaluate_actions(self.buffer[agent_id].obs[:-1].reshape(-1, *self.buffer[agent_id].obs.shape[2:]),
self.buffer[agent_id].rnn_states[0:1].reshape(-1, *self.buffer[agent_id].rnn_states.shape[2:]),
self.buffer[agent_id].actions.reshape(-1, *self.buffer[agent_id].actions.shape[2:]),
self.buffer[agent_id].masks[:-1].reshape(-1, *self.buffer[agent_id].masks.shape[2:]),
available_actions,
self.buffer[agent_id].active_masks[:-1].reshape(-1, *self.buffer[agent_id].active_masks.shape[2:]))
else:
new_actions_logprob, _ =self.trainer[agent_id].policy.actor.evaluate_actions(self.buffer[agent_id].obs[:-1].reshape(-1, *self.buffer[agent_id].obs.shape[2:]),
self.buffer[agent_id].rnn_states[0:1].reshape(-1, *self.buffer[agent_id].rnn_states.shape[2:]),
self.buffer[agent_id].actions.reshape(-1, *self.buffer[agent_id].actions.shape[2:]),
self.buffer[agent_id].masks[:-1].reshape(-1, *self.buffer[agent_id].masks.shape[2:]),
available_actions,
self.buffer[agent_id].active_masks[:-1].reshape(-1, *self.buffer[agent_id].active_masks.shape[2:]))
factor = factor*_t2n(torch.prod(torch.exp(new_actions_logprob-old_actions_logprob),dim=-1).reshape(self.episode_length,self.n_rollout_threads,1))
train_infos.append(train_info)
self.buffer[agent_id].after_update()
return train_infos
def save(self):
for agent_id in range(self.num_agents):
if self.use_single_network:
policy_model = self.trainer[agent_id].policy.model
torch.save(policy_model.state_dict(), str(self.save_dir) + "/model_agent" + str(agent_id) + ".pt")
else:
policy_actor = self.trainer[agent_id].policy.actor
torch.save(policy_actor.state_dict(), str(self.save_dir) + "/actor_agent" + str(agent_id) + ".pt")
policy_critic = self.trainer[agent_id].policy.critic
torch.save(policy_critic.state_dict(), str(self.save_dir) + "/critic_agent" + str(agent_id) + ".pt")
def restore(self):
for agent_id in range(self.num_agents):
if self.use_single_network:
policy_model_state_dict = torch.load(str(self.model_dir) + '/model_agent' + str(agent_id) + '.pt')
self.policy[agent_id].model.load_state_dict(policy_model_state_dict)
else:
policy_actor_state_dict = torch.load(str(self.model_dir) + '/actor_agent' + str(agent_id) + '.pt')
self.policy[agent_id].actor.load_state_dict(policy_actor_state_dict)
policy_critic_state_dict = torch.load(str(self.model_dir) + '/critic_agent' + str(agent_id) + '.pt')
self.policy[agent_id].critic.load_state_dict(policy_critic_state_dict)
def log_train(self, train_infos, total_num_steps):
for agent_id in range(self.num_agents):
for k, v in train_infos[agent_id].items():
agent_k = "agent%i/" % agent_id + k
self.writter.add_scalars(agent_k, {agent_k: v}, total_num_steps)
def log_env(self, env_infos, total_num_steps):
for k, v in env_infos.items():
if len(v) > 0:
self.writter.add_scalars(k, {k: np.mean(v)}, total_num_steps)
================================================
FILE: runners/separated/mujoco_runner.py
================================================
import time
import numpy as np
from functools import reduce
import torch
from runners.separated.base_runner import Runner
def _t2n(x):
return x.detach().cpu().numpy()
class MujocoRunner(Runner):
"""Runner class to perform training, evaluation. and data collection for SMAC. See parent class for details."""
def __init__(self, config):
super(MujocoRunner, self).__init__(config)
def run(self):
self.warmup()
start = time.time()
episodes = int(self.num_env_steps) // self.episode_length // self.n_rollout_threads
train_episode_rewards = [0 for _ in range(self.n_rollout_threads)]
for episode in range(episodes):
if self.use_linear_lr_decay:
self.trainer.policy.lr_decay(episode, episodes)
done_episodes_rewards = []
for step in range(self.episode_length):
# Sample actions
values, actions, action_log_probs, rnn_states, rnn_states_critic = self.collect(step)
# Observe reward and next obs
obs, share_obs, rewards, dones, infos, _ = self.envs.step(actions)
dones_env = np.all(dones, axis=1)
reward_env = np.mean(rewards, axis=1).flatten()
train_episode_rewards += reward_env
for t in range(self.n_rollout_threads):
if dones_env[t]:
done_episodes_rewards.append(train_episode_rewards[t])
train_episode_rewards[t] = 0
data = obs, share_obs, rewards, dones, infos, \
values, actions, action_log_probs, \
rnn_states, rnn_states_critic
# insert data into buffer
self.insert(data)
# compute return and update network
self.compute()
train_infos = self.train()
# post process
total_num_steps = (episode + 1) * self.episode_length * self.n_rollout_threads
# save model
if (episode % self.save_interval == 0 or episode == episodes - 1):
self.save()
# log information
if episode % self.log_interval == 0:
end = time.time()
print("\n Scenario {} Algo {} Exp {} updates {}/{} episodes, total num timesteps {}/{}, FPS {}.\n"
.format(self.all_args.scenario,
self.algorithm_name,
self.experiment_name,
episode,
episodes,
total_num_steps,
self.num_env_steps,
int(total_num_steps / (end - start))))
self.log_train(train_infos, total_num_steps)
if len(done_episodes_rewards) > 0:
aver_episode_rewards = np.mean(done_episodes_rewards)
print("some episodes done, average rewards: ", aver_episode_rewards)
self.writter.add_scalars("train_episode_rewards", {"aver_rewards": aver_episode_rewards},
total_num_steps)
# eval
if episode % self.eval_interval == 0 and self.use_eval:
self.eval(total_num_steps)
def warmup(self):
# reset env
obs, share_obs, _ = self.envs.reset()
# replay buffer
if not self.use_centralized_V:
share_obs = obs
for agent_id in range(self.num_agents):
self.buffer[agent_id].share_obs[0] = share_obs[:, agent_id].copy()
self.buffer[agent_id].obs[0] = obs[:, agent_id].copy()
@torch.no_grad()
def collect(self, step):
value_collector = []
action_collector = []
action_log_prob_collector = []
rnn_state_collector = []
rnn_state_critic_collector = []
for agent_id in range(self.num_agents):
self.trainer[agent_id].prep_rollout()
value, action, action_log_prob, rnn_state, rnn_state_critic \
= self.trainer[agent_id].policy.get_actions(self.buffer[agent_id].share_obs[step],
self.buffer[agent_id].obs[step],
self.buffer[agent_id].rnn_states[step],
self.buffer[agent_id].rnn_states_critic[step],
self.buffer[agent_id].masks[step])
value_collector.append(_t2n(value))
action_collector.append(_t2n(action))
action_log_prob_collector.append(_t2n(action_log_prob))
rnn_state_collector.append(_t2n(rnn_state))
rnn_state_critic_collector.append(_t2n(rnn_state_critic))
# [self.envs, agents, dim]
values = np.array(value_collector).transpose(1, 0, 2)
actions = np.array(action_collector).transpose(1, 0, 2)
action_log_probs = np.array(action_log_prob_collector).transpose(1, 0, 2)
rnn_states = np.array(rnn_state_collector).transpose(1, 0, 2, 3)
rnn_states_critic = np.array(rnn_state_critic_collector).transpose(1, 0, 2, 3)
return values, actions, action_log_probs, rnn_states, rnn_states_critic
def insert(self, data):
obs, share_obs, rewards, dones, infos, \
values, actions, action_log_probs, rnn_states, rnn_states_critic = data
dones_env = np.all(dones, axis=1)
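# masks are zeroed where the whole environment episode ended (used when stepping the
# recurrent networks and when computing returns); active_masks additionally zero out
# individual agents whose done flag is set, and are reset to ones whenever the full
# environment is done.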
rnn_states[dones_env == True] = np.zeros(
((dones_env == True).sum(), self.num_agents, self.recurrent_N, self.hidden_size), dtype=np.float32)
rnn_states_critic[dones_env == True] = np.zeros(
((dones_env == True).sum(), self.num_agents, *self.buffer[0].rnn_states_critic.shape[2:]), dtype=np.float32)
masks = np.ones((self.n_rollout_threads, self.num_agents, 1), dtype=np.float32)
masks[dones_env == True] = np.zeros(((dones_env == True).sum(), self.num_agents, 1), dtype=np.float32)
active_masks = np.ones((self.n_rollout_threads, self.num_agents, 1), dtype=np.float32)
active_masks[dones == True] = np.zeros(((dones == True).sum(), 1), dtype=np.float32)
active_masks[dones_env == True] = np.ones(((dones_env == True).sum(), self.num_agents, 1), dtype=np.float32)
if not self.use_centralized_V:
share_obs = obs
for agent_id in range(self.num_agents):
self.buffer[agent_id].insert(share_obs[:, agent_id], obs[:, agent_id], rnn_states[:, agent_id],
rnn_states_critic[:, agent_id], actions[:, agent_id],
action_log_probs[:, agent_id],
values[:, agent_id], rewards[:, agent_id], masks[:, agent_id], None,
active_masks[:, agent_id], None)
def log_train(self, train_infos, total_num_steps):
print("average_step_rewards is {}.".format(np.mean(self.buffer[0].rewards)))
for agent_id in range(self.num_agents):
train_infos[agent_id]["average_step_rewards"] = np.mean(self.buffer[agent_id].rewards)
for k, v in train_infos[agent_id].items():
agent_k = "agent%i/" % agent_id + k
self.writter.add_scalars(agent_k, {agent_k: v}, total_num_steps)
@torch.no_grad()
def eval(self, total_num_steps):
eval_episode = 0
eval_episode_rewards = []
one_episode_rewards = []
for eval_i in range(self.n_eval_rollout_threads):
one_episode_rewards.append([])
eval_episode_rewards.append([])
eval_obs, eval_share_obs, _ = self.eval_envs.reset()
eval_rnn_states = np.zeros((self.n_eval_rollout_threads, self.num_agents, self.recurrent_N, self.hidden_size),
dtype=np.float32)
eval_masks = np.ones((self.n_eval_rollout_threads, self.num_agents, 1), dtype=np.float32)
while True:
eval_actions_collector = []
eval_rnn_states_collector = []
for agent_id in range(self.num_agents):
self.trainer[agent_id].prep_rollout()
eval_actions, temp_rnn_state = \
self.trainer[agent_id].policy.act(eval_obs[:, agent_id],
eval_rnn_states[:, agent_id],
eval_masks[:, agent_id],
deterministic=True)
eval_rnn_states[:, agent_id] = _t2n(temp_rnn_state)
eval_actions_collector.append(_t2n(eval_actions))
eval_actions = np.array(eval_actions_collector).transpose(1, 0, 2)
# Observe reward and next obs
eval_obs, eval_share_obs, eval_rewards, eval_dones, eval_infos, _ = self.eval_envs.step(
eval_actions)
for eval_i in range(self.n_eval_rollout_threads):
one_episode_rewards[eval_i].append(eval_rewards[eval_i])
eval_dones_env = np.all(eval_dones, axis=1)
eval_rnn_states[eval_dones_env == True] = np.zeros(
((eval_dones_env == True).sum(), self.num_agents, self.recurrent_N, self.hidden_size), dtype=np.float32)
eval_masks = np.ones((self.all_args.n_eval_rollout_threads, self.num_agents, 1), dtype=np.float32)
eval_masks[eval_dones_env == True] = np.zeros(((eval_dones_env == True).sum(), self.num_agents, 1),
dtype=np.float32)
for eval_i in range(self.n_eval_rollout_threads):
if eval_dones_env[eval_i]:
eval_episode += 1
eval_episode_rewards[eval_i].append(np.sum(one_episode_rewards[eval_i], axis=0))
one_episode_rewards[eval_i] = []
if eval_episode >= self.all_args.eval_episodes:
eval_episode_rewards = np.concatenate(eval_episode_rewards)
eval_env_infos = {'eval_average_episode_rewards': eval_episode_rewards,
'eval_max_episode_rewards': [np.max(eval_episode_rewards)]}
self.log_env(eval_env_infos, total_num_steps)
print("eval_average_episode_rewards is {}.".format(np.mean(eval_episode_rewards)))
break
================================================
FILE: runners/separated/smac_runner.py
================================================
import time
import numpy as np
from functools import reduce
import torch
from runners.separated.base_runner import Runner
def _t2n(x):
return x.detach().cpu().numpy()
class SMACRunner(Runner):
"""Runner class to perform training, evaluation. and data collection for SMAC. See parent class for details."""
def __init__(self, config):
super(SMACRunner, self).__init__(config)
def run(self):
self.warmup()
start = time.time()
episodes = int(self.num_env_steps) // self.episode_length // self.n_rollout_threads
last_battles_game = np.zeros(self.n_rollout_threads, dtype=np.float32)
last_battles_won = np.zeros(self.n_rollout_threads, dtype=np.float32)
for episode in range(episodes):
if self.use_linear_lr_decay:
for agent_id in range(self.num_agents):
self.trainer[agent_id].policy.lr_decay(episode, episodes)
for step in range(self.episode_length):
# Sample actions
values, actions, action_log_probs, rnn_states, rnn_states_critic = self.collect(step)
# Observe reward and next obs
obs, share_obs, rewards, dones, infos, available_actions = self.envs.step(actions)
data = obs, share_obs, rewards, dones, infos, available_actions, \
values, actions, action_log_probs, \
rnn_states, rnn_states_critic
# insert data into buffer
self.insert(data)
# compute return and update network
self.compute()
train_infos = self.train()
# post process
total_num_steps = (episode + 1) * self.episode_length * self.n_rollout_threads
# save model
if (episode % self.save_interval == 0 or episode == episodes - 1):
self.save()
# log information
if episode % self.log_interval == 0:
end = time.time()
print("\n Map {} Algo {} Exp {} updates {}/{} episodes, total num timesteps {}/{}, FPS {}.\n"
.format(self.all_args.map_name,
self.algorithm_name,
self.experiment_name,
episode,
episodes,
total_num_steps,
self.num_env_steps,
int(total_num_steps / (end - start))))
if self.env_name == "StarCraft2":
battles_won = []
battles_game = []
incre_battles_won = []
incre_battles_game = []
for i, info in enumerate(infos):
if 'battles_won' in info[0].keys():
battles_won.append(info[0]['battles_won'])
incre_battles_won.append(info[0]['battles_won']-last_battles_won[i])
if 'battles_game' in info[0].keys():
battles_game.append(info[0]['battles_game'])
incre_battles_game.append(info[0]['battles_game']-last_battles_game[i])
incre_win_rate = np.sum(incre_battles_won)/np.sum(incre_battles_game) if np.sum(incre_battles_game)>0 else 0.0
print("incre win rate is {}.".format(incre_win_rate))
self.writter.add_scalars("incre_win_rate", {"incre_win_rate": incre_win_rate}, total_num_steps)
last_battles_game = battles_game
last_battles_won = battles_won
# log per-agent dead ratio derived from active_masks
for agent_id in range(self.num_agents):
train_infos[agent_id]['dead_ratio'] = 1 - self.buffer[agent_id].active_masks.sum() /(self.num_agents* reduce(lambda x, y: x*y, list(self.buffer[agent_id].active_masks.shape)))
self.log_train(train_infos, total_num_steps)
# eval
if episode % self.eval_interval == 0 and self.use_eval:
self.eval(total_num_steps)
def warmup(self):
# reset env
obs, share_obs, available_actions = self.envs.reset()
# replay buffer
if not self.use_centralized_V:
share_obs = obs
for agent_id in range(self.num_agents):
self.buffer[agent_id].share_obs[0] = share_obs[:,agent_id].copy()
self.buffer[agent_id].obs[0] = obs[:,agent_id].copy()
self.buffer[agent_id].available_actions[0] = available_actions[:,agent_id].copy()
@torch.no_grad()
def collect(self, step):
value_collector=[]
action_collector=[]
action_log_prob_collector=[]
rnn_state_collector=[]
rnn_state_critic_collector=[]
for agent_id in range(self.num_agents):
self.trainer[agent_id].prep_rollout()
value, action, action_log_prob, rnn_state, rnn_state_critic \
= self.trainer[agent_id].policy.get_actions(self.buffer[agent_id].share_obs[step],
self.buffer[agent_id].obs[step],
self.buffer[agent_id].rnn_states[step],
self.buffer[agent_id].rnn_states_critic[step],
self.buffer[agent_id].masks[step],
self.buffer[agent_id].available_actions[step])
value_collector.append(_t2n(value))
action_collector.append(_t2n(action))
action_log_prob_collector.append(_t2n(action_log_prob))
rnn_state_collector.append(_t2n(rnn_state))
rnn_state_critic_collector.append(_t2n(rnn_state_critic))
# [self.envs, agents, dim]
values = np.array(value_collector).transpose(1, 0, 2)
actions = np.array(action_collector).transpose(1, 0, 2)
action_log_probs = np.array(action_log_prob_collector).transpose(1, 0, 2)
rnn_states = np.array(rnn_state_collector).transpose(1, 0, 2, 3)
rnn_states_critic = np.array(rnn_state_critic_collector).transpose(1, 0, 2, 3)
return values, actions, action_log_probs, rnn_states, rnn_states_critic
def insert(self, data):
obs, share_obs, rewards, dones, infos, available_actions, \
values, actions, action_log_probs, rnn_states, rnn_states_critic = data
dones_env = np.all(dones, axis=1)
rnn_states[dones_env == True] = np.zeros(((dones_env == True).sum(), self.num_agents, self.recurrent_N, self.hidden_size), dtype=np.float32)
rnn_states_critic[dones_env == True] = np.zeros(((dones_env == True).sum(), self.num_agents, *self.buffer[0].rnn_states_critic.shape[2:]), dtype=np.float32)
masks = np.ones((self.n_rollout_threads, self.num_agents, 1), dtype=np.float32)
masks[dones_env == True] = np.zeros(((dones_env == True).sum(), self.num_agents, 1), dtype=np.float32)
active_masks = np.ones((self.n_rollout_threads, self.num_agents, 1), dtype=np.float32)
active_masks[dones == True] = np.zeros(((dones == True).sum(), 1), dtype=np.float32)
active_masks[dones_env == True] = np.ones(((dones_env == True).sum(), self.num_agents, 1), dtype=np.float32)
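# bad_masks is 0 where the episode ended because of the environment's time limit ('bad_transition');
# compute_returns uses it, when use_proper_time_limits is set, to bootstrap from the value prediction
# instead of treating the cut-off as a real termination.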
bad_masks = np.array([[[0.0] if info[agent_id]['bad_transition'] else [1.0] for agent_id in range(self.num_agents)] for info in infos])
if not self.use_centralized_V:
share_obs = obs
for agent_id in range(self.num_agents):
self.buffer[agent_id].insert(share_obs[:,agent_id], obs[:,agent_id], rnn_states[:,agent_id],
rnn_states_critic[:,agent_id],actions[:,agent_id], action_log_probs[:,agent_id],
values[:,agent_id], rewards[:,agent_id], masks[:,agent_id], bad_masks[:,agent_id],
active_masks[:,agent_id], available_actions[:,agent_id])
def log_train(self, train_infos, total_num_steps):
for agent_id in range(self.num_agents):
train_infos[agent_id]["average_step_rewards"] = np.mean(self.buffer[agent_id].rewards)
for k, v in train_infos[agent_id].items():
agent_k = "agent%i/" % agent_id + k
self.writter.add_scalars(agent_k, {agent_k: v}, total_num_steps)
@torch.no_grad()
def eval(self, total_num_steps):
eval_battles_won = 0
eval_episode = 0
eval_episode_rewards = []
one_episode_rewards = []
for eval_i in range(self.n_eval_rollout_threads):
one_episode_rewards.append([])
eval_episode_rewards.append([])
eval_obs, eval_share_obs, eval_available_actions = self.eval_envs.reset()
eval_rnn_states = np.zeros((self.n_eval_rollout_threads, self.num_agents, self.recurrent_N, self.hidden_size), dtype=np.float32)
eval_masks = np.ones((self.n_eval_rollout_threads, self.num_agents, 1), dtype=np.float32)
while True:
eval_actions_collector=[]
eval_rnn_states_collector=[]
for agent_id in range(self.num_agents):
self.trainer[agent_id].prep_rollout()
eval_actions, temp_rnn_state = \
self.trainer[agent_id].policy.act(eval_obs[:,agent_id],
eval_rnn_states[:,agent_id],
eval_masks[:,agent_id],
eval_available_actions[:,agent_id],
deterministic=True)
eval_rnn_states[:,agent_id]=_t2n(temp_rnn_state)
eval_actions_collector.append(_t2n(eval_actions))
eval_actions = np.array(eval_actions_collector).transpose(1,0,2)
# Observe reward and next obs
eval_obs, eval_share_obs, eval_rewards, eval_dones, eval_infos, eval_available_actions = self.eval_envs.step(eval_actions)
for eval_i in range(self.n_eval_rollout_threads):
one_episode_rewards[eval_i].append(eval_rewards[eval_i])
eval_dones_env = np.all(eval_dones, axis=1)
eval_rnn_states[eval_dones_env == True] = np.zeros(((eval_dones_env == True).sum(), self.num_agents, self.recurrent_N, self.hidden_size), dtype=np.float32)
eval_masks = np.ones((self.all_args.n_eval_rollout_threads, self.num_agents, 1), dtype=np.float32)
eval_masks[eval_dones_env == True] = np.zeros(((eval_dones_env == True).sum(), self.num_agents, 1), dtype=np.float32)
for eval_i in range(self.n_eval_rollout_threads):
if eval_dones_env[eval_i]:
eval_episode += 1
eval_episode_rewards[eval_i].append(np.sum(one_episode_rewards[eval_i], axis=0))
one_episode_rewards[eval_i] = []
if eval_infos[eval_i][0]['won']:
eval_battles_won += 1
if eval_episode >= self.all_args.eval_episodes:
eval_episode_rewards = np.concatenate(eval_episode_rewards)
eval_env_infos = {'eval_average_episode_rewards': eval_episode_rewards}
self.log_env(eval_env_infos, total_num_steps)
eval_win_rate = eval_battles_won/eval_episode
print("eval win rate is {}.".format(eval_win_rate))
self.writter.add_scalars("eval_win_rate", {"eval_win_rate": eval_win_rate}, total_num_steps)
break
================================================
FILE: scripts/__init__.py
================================================
================================================
FILE: scripts/train/__init__.py
================================================
================================================
FILE: scripts/train/train_mujoco.py
================================================
#!/usr/bin/env python
import sys
import os
sys.path.append("../")
import socket
import setproctitle
import numpy as np
from pathlib import Path
import torch
from configs.config import get_config
from envs.ma_mujoco.multiagent_mujoco.mujoco_multi import MujocoMulti
from envs.env_wrappers import ShareSubprocVecEnv, ShareDummyVecEnv
from runners.separated.mujoco_runner import MujocoRunner as Runner
"""Train script for Mujoco."""
def make_train_env(all_args):
def get_env_fn(rank):
def init_env():
if all_args.env_name == "mujoco":
env_args = {"scenario": all_args.scenario,
"agent_conf": all_args.agent_conf,
"agent_obsk": all_args.agent_obsk,
"episode_limit": 1000}
env = MujocoMulti(env_args=env_args)
else:
print("Can not support the " + all_args.env_name + "environment.")
raise NotImplementedError
env.seed(all_args.seed + rank * 1000)
return env
return init_env
if all_args.n_rollout_threads == 1:
return ShareDummyVecEnv([get_env_fn(0)])
else:
return ShareSubprocVecEnv([get_env_fn(i) for i in range(all_args.n_rollout_threads)])
def make_eval_env(all_args):
def get_env_fn(rank):
def init_env():
if all_args.env_name == "mujoco":
env_args = {"scenario": all_args.scenario,
"agent_conf": all_args.agent_conf,
"agent_obsk": all_args.agent_obsk,
"episode_limit": 1000}
env = MujocoMulti(env_args=env_args)
else:
print("Can not support the " + all_args.env_name + "environment.")
raise NotImplementedError
env.seed(all_args.seed * 50000 + rank * 10000)
return env
return init_env
if all_args.n_eval_rollout_threads == 1:
return ShareDummyVecEnv([get_env_fn(0)])
else:
return ShareSubprocVecEnv([get_env_fn(i) for i in range(all_args.n_eval_rollout_threads)])
def parse_args(args, parser):
parser.add_argument('--scenario', type=str, default='Hopper-v2', help="Which mujoco task to run on")
parser.add_argument('--agent_conf', type=str, default='3x1')
parser.add_argument('--agent_obsk', type=int, default=0)
parser.add_argument("--add_move_state", action='store_true', default=False)
parser.add_argument("--add_local_obs", action='store_true', default=False)
parser.add_argument("--add_distance_state", action='store_true', default=False)
parser.add_argument("--add_enemy_action_state", action='store_true', default=False)
parser.add_argument("--add_agent_id", action='store_true', default=False)
parser.add_argument("--add_visible_state", action='store_true', default=False)
parser.add_argument("--add_xy_state", action='store_true', default=False)
# agent-specific state should be designed carefully
parser.add_argument("--use_state_agent", action='store_true', default=False)
parser.add_argument("--use_mustalive", action='store_false', default=True)
parser.add_argument("--add_center_xy", action='store_true', default=False)
parser.add_argument("--use_single_network", action='store_true', default=False)
all_args = parser.parse_known_args(args)[0]
return all_args
def main(args):
parser = get_config()
all_args = parse_args(args, parser)
print("all config: ", all_args)
if all_args.seed_specify:
all_args.seed=all_args.runing_id
else:
all_args.seed=np.random.randint(1000,10000)
print("seed is :",all_args.seed)
# cuda
if all_args.cuda and torch.cuda.is_available():
print("choose to use gpu...")
device = torch.device("cuda:0")
torch.set_num_threads(all_args.n_training_threads)
if all_args.cuda_deterministic:
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
else:
print("choose to use cpu...")
device = torch.device("cpu")
torch.set_num_threads(all_args.n_training_threads)
run_dir = Path(os.path.split(os.path.dirname(os.path.abspath(__file__)))[
0] + "/results") / all_args.env_name / all_args.scenario / all_args.algorithm_name / all_args.experiment_name / str(all_args.seed)
if not run_dir.exists():
os.makedirs(str(run_dir))
if not run_dir.exists():
curr_run = 'run1'
else:
exst_run_nums = [int(str(folder.name).split('run')[1]) for folder in run_dir.iterdir() if
str(folder.name).startswith('run')]
if len(exst_run_nums) == 0:
curr_run = 'run1'
else:
curr_run = 'run%i' % (max(exst_run_nums) + 1)
run_dir = run_dir / curr_run
if not run_dir.exists():
os.makedirs(str(run_dir))
setproctitle.setproctitle(
str(all_args.algorithm_name) + "-" + str(all_args.env_name) + "-" + str(all_args.experiment_name) + "@" + str(
all_args.user_name))
# seed
torch.manual_seed(all_args.seed)
torch.cuda.manual_seed_all(all_args.seed)
np.random.seed(all_args.seed)
# env
envs = make_train_env(all_args)
eval_envs = make_eval_env(all_args) if all_args.use_eval else None
num_agents = envs.n_agents
config = {
"all_args": all_args,
"envs": envs,
"eval_envs": eval_envs,
"num_agents": num_agents,
"device": device,
"run_dir": run_dir
}
# run experiments
runner = Runner(config)
runner.run()
# post process
envs.close()
if all_args.use_eval and eval_envs is not envs:
eval_envs.close()
runner.writter.export_scalars_to_json(str(runner.log_dir + '/summary.json'))
runner.writter.close()
if __name__ == "__main__":
main(sys.argv[1:])
================================================
FILE: scripts/train/train_smac.py
================================================
#!/usr/bin/env python
import sys
import os
sys.path.append("../")
import socket
import setproctitle
import numpy as np
from pathlib import Path
import torch
from configs.config import get_config
from envs.starcraft2.StarCraft2_Env import StarCraft2Env
from envs.starcraft2.smac_maps import get_map_params
from envs.env_wrappers import ShareSubprocVecEnv, ShareDummyVecEnv
from runners.separated.smac_runner import SMACRunner as Runner
"""Train script for SMAC."""
def make_train_env(all_args):
def get_env_fn(rank):
def init_env():
if all_args.env_name == "StarCraft2":
env = StarCraft2Env(all_args)
else:
print("Can not support the " + all_args.env_name + "environment.")
raise NotImplementedError
env.seed(all_args.seed + rank * 1000)
return env
return init_env
if all_args.n_rollout_threads == 1:
return ShareDummyVecEnv([get_env_fn(0)])
else:
return ShareSubprocVecEnv([get_env_fn(i) for i in range(all_args.n_rollout_threads)])
def make_eval_env(all_args):
def get_env_fn(rank):
def init_env():
if all_args.env_name == "StarCraft2":
env = StarCraft2Env(all_args)
else:
print("Can not support the " + all_args.env_name + "environment.")
raise NotImplementedError
env.seed(all_args.seed * 50000 + rank * 10000)
return env
return init_env
if all_args.n_eval_rollout_threads == 1:
return ShareDummyVecEnv([get_env_fn(0)])
else:
return ShareSubprocVecEnv([get_env_fn(i) for i in range(all_args.n_eval_rollout_threads)])
def parse_args(args, parser):
parser.add_argument('--map_name', type=str, default='3m',help="Which smac map to run on")
parser.add_argument("--add_move_state", action='store_true', default=False)
parser.add_argument("--add_local_obs", action='store_true', default=False)
parser.add_argument("--add_distance_state", action='store_true', default=False)
parser.add_argument("--add_enemy_action_state", action='store_true', default=False)
parser.add_argument("--add_agent_id", action='store_true', default=False)
parser.add_argument("--add_visible_state", action='store_true', default=False)
parser.add_argument("--add_xy_state", action='store_true', default=False)
parser.add_argument("--use_state_agent", action='store_true', default=False)
parser.add_argument("--use_mustalive", action='store_false', default=True)
parser.add_argument("--add_center_xy", action='store_true', default=False)
parser.add_argument("--use_single_network", action='store_true', default=False)
all_args = parser.parse_known_args(args)[0]
return all_args
def main(args):
parser = get_config()
all_args = parse_args(args, parser)
print("all config: ", all_args)
if all_args.seed_specify:
all_args.seed=all_args.runing_id
else:
all_args.seed=np.random.randint(1000,10000)
print("seed is :",all_args.seed)
# cuda
if all_args.cuda and torch.cuda.is_available():
print("choose to use gpu...")
device = torch.device("cuda:0")
torch.set_num_threads(all_args.n_training_threads)
if all_args.cuda_deterministic:
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
else:
print("choose to use cpu...")
device = torch.device("cpu")
torch.set_num_threads(all_args.n_training_threads)
run_dir = Path(os.path.split(os.path.dirname(os.path.abspath(__file__)))[
0] + "/results") / all_args.env_name / all_args.map_name / all_args.algorithm_name / all_args.experiment_name / str(all_args.seed)
if not run_dir.exists():
os.makedirs(str(run_dir))
if not run_dir.exists():
curr_run = 'run1'
else:
exst_run_nums = [int(str(folder.name).split('run')[1]) for folder in run_dir.iterdir() if
str(folder.name).startswith('run')]
if len(exst_run_nums) == 0:
curr_run = 'run1'
else:
curr_run = 'run%i' % (max(exst_run_nums) + 1)
run_dir = run_dir / curr_run
if not run_dir.exists():
os.makedirs(str(run_dir))
setproctitle.setproctitle(
str(all_args.algorithm_name) + "-" + str(all_args.env_name) + "-" + str(all_args.experiment_name) + "@" + str(
all_args.user_name))
# seed
torch.manual_seed(all_args.seed)
torch.cuda.manual_seed_all(all_args.seed)
np.random.seed(all_args.seed)
# env
envs = make_train_env(all_args)
eval_envs = make_eval_env(all_args) if all_args.use_eval else None
num_agents = get_map_params(all_args.map_name)["n_agents"]
config = {
"all_args": all_args,
"envs": envs,
"eval_envs": eval_envs,
"num_agents": num_agents,
"device": device,
"run_dir": run_dir
}
# run experiments
runner = Runner(config)
runner.run()
# post process
envs.close()
if all_args.use_eval and eval_envs is not envs:
eval_envs.close()
runner.writter.export_scalars_to_json(str(runner.log_dir + '/summary.json'))
runner.writter.close()
if __name__ == "__main__":
main(sys.argv[1:])
================================================
FILE: scripts/train_mujoco.sh
================================================
#!/bin/sh
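# Note: run this script from the repo's scripts/ directory, since it invokes
# python train/train_mujoco.py with a path relative to that location.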
env="mujoco"
scenario="Ant-v2"
agent_conf="2x4"
agent_obsk=2
algo="happo"
exp="mlp"
running_max=20
kl_threshold=1e-4
echo "env is ${env}, scenario is ${scenario}, algo is ${algo}, exp is ${exp}, max seed is ${seed_max}"
for number in `seq ${running_max}`;
do
echo "the ${number}-th running:"
CUDA_VISIBLE_DEVICES=1 python train/train_mujoco.py --env_name ${env} --algorithm_name ${algo} --experiment_name ${exp} --scenario ${scenario} --agent_conf ${agent_conf} --agent_obsk ${agent_obsk} --lr 5e-6 --critic_lr 5e-3 --std_x_coef 1 --std_y_coef 5e-1 --running_id ${number} --n_training_threads 8 --n_rollout_threads 4 --num_mini_batch 40 --episode_length 1000 --num_env_steps 10000000 --ppo_epoch 5 --kl_threshold ${kl_threshold} --use_value_active_masks --use_eval --add_center_xy --use_state_agent --share_policy
done
================================================
FILE: scripts/train_smac.sh
================================================
#!/bin/sh
env="StarCraft2"
map="3s5z"
algo="happo"
exp="mlp"
running_max=20
kl_threshold=0.06
echo "env is ${env}, map is ${map}, algo is ${algo}, exp is ${exp}, max seed is ${seed_max}"
for number in `seq ${running_max}`;
do
echo "the ${number}-th running:"
CUDA_VISIBLE_DEVICES=1 python train/train_smac.py --env_name ${env} --algorithm_name ${algo} --experiment_name ${exp} --map_name ${map} --running_id ${number} --gamma 0.95 --n_training_threads 32 --n_rollout_threads 20 --num_mini_batch 1 --episode_length 160 --num_env_steps 20000000 --ppo_epoch 5 --stacked_frames 1 --kl_threshold ${kl_threshold} --use_value_active_masks --use_eval --add_center_xy --use_state_agent --share_policy
done
================================================
FILE: utils/__init__.py
================================================
================================================
FILE: utils/multi_discrete.py
================================================
import gym
import numpy as np
# An old version of OpenAI Gym's multi_discrete.py. (Was getting affected by Gym updates)
# (https://github.com/openai/gym/blob/1fb81d4e3fb780ccf77fec731287ba07da35eb84/gym/spaces/multi_discrete.py)
class MultiDiscrete(gym.Space):
"""
- The multi-discrete action space consists of a series of discrete action spaces with different parameters
- It can be adapted to both a Discrete action space or a continuous (Box) action space
- It is useful to represent game controllers or keyboards where each key can be represented as a discrete action space
- It is parametrized by passing an array of arrays containing [min, max] for each discrete action space where the discrete action space can take any integers from `min` to `max` (both inclusive)
Note: A value of 0 always needs to represent the NOOP action.
e.g. Nintendo Game Controller
- Can be conceptualized as 3 discrete action spaces:
1) Arrow Keys: Discrete 5 - NOOP[0], UP[1], RIGHT[2], DOWN[3], LEFT[4] - params: min: 0, max: 4
2) Button A: Discrete 2 - NOOP[0], Pressed[1] - params: min: 0, max: 1
3) Button B: Discrete 2 - NOOP[0], Pressed[1] - params: min: 0, max: 1
- Can be initialized as
MultiDiscrete([ [0,4], [0,1], [0,1] ])
"""
def __init__(self, array_of_param_array):
self.low = np.array([x[0] for x in array_of_param_array])
self.high = np.array([x[1] for x in array_of_param_array])
self.num_discrete_space = self.low.shape[0]
self.n = np.sum(self.high) + 2
def sample(self):
""" Returns a array with one sample from each discrete action space """
# For each row: round(random .* (max - min) + min, 0)
random_array = np.random.rand(self.num_discrete_space)
return [int(x) for x in np.floor(np.multiply((self.high - self.low + 1.), random_array) + self.low)]
def contains(self, x):
return len(x) == self.num_discrete_space and (np.array(x) >= self.low).all() and (np.array(x) <= self.high).all()
@property
def shape(self):
return self.num_discrete_space
def __repr__(self):
return "MultiDiscrete" + str(self.num_discrete_space)
def __eq__(self, other):
return np.array_equal(self.low, other.low) and np.array_equal(self.high, other.high)
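# Minimal usage sketch (illustration only, not part of the original file):
# the Nintendo-controller example from the docstring above.
if __name__ == "__main__":
    controller = MultiDiscrete([[0, 4], [0, 1], [0, 1]])
    action = controller.sample()       # e.g. [3, 0, 1]
    assert controller.contains(action)
    print(controller, action)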
================================================
FILE: utils/popart.py
================================================
import numpy as np
import torch
import torch.nn as nn
class PopArt(nn.Module):
""" Normalize a vector of observations - across the first norm_axes dimensions"""
def __init__(self, input_shape, norm_axes=1, beta=0.99999, per_element_update=False, epsilon=1e-5, device=torch.device("cpu")):
super(PopArt, self).__init__()
self.input_shape = input_shape
self.norm_axes = norm_axes
self.epsilon = epsilon
self.beta = beta
self.per_element_update = per_element_update
self.tpdv = dict(dtype=torch.float32, device=device)
self.running_mean = nn.Parameter(torch.zeros(input_shape), requires_grad=False).to(**self.tpdv)
self.running_mean_sq = nn.Parameter(torch.zeros(input_shape), requires_grad=False).to(**self.tpdv)
self.debiasing_term = nn.Parameter(torch.tensor(0.0), requires_grad=False).to(**self.tpdv)
def reset_parameters(self):
self.running_mean.zero_()
self.running_mean_sq.zero_()
self.debiasing_term.zero_()
def running_mean_var(self):
debiased_mean = self.running_mean / self.debiasing_term.clamp(min=self.epsilon)
debiased_mean_sq = self.running_mean_sq / self.debiasing_term.clamp(min=self.epsilon)
debiased_var = (debiased_mean_sq - debiased_mean ** 2).clamp(min=1e-2)
return debiased_mean, debiased_var
def forward(self, input_vector, train=True):
# Make sure input is float32
if type(input_vector) == np.ndarray:
input_vector = torch.from_numpy(input_vector)
input_vector = input_vector.to(**self.tpdv)
if train:
# Detach input before adding it to running means to avoid backpropping through it on
# subsequent batches.
detached_input = input_vector.detach()
batch_mean = detached_input.mean(dim=tuple(range(self.norm_axes)))
batch_sq_mean = (detached_input ** 2).mean(dim=tuple(range(self.norm_axes)))
if self.per_element_update:
batch_size = np.prod(detached_input.size()[:self.norm_axes])
weight = self.beta ** batch_size
else:
weight = self.beta
self.running_mean.mul_(weight).add_(batch_mean * (1.0 - weight))
self.running_mean_sq.mul_(weight).add_(batch_sq_mean * (1.0 - weight))
self.debiasing_term.mul_(weight).add_(1.0 * (1.0 - weight))
mean, var = self.running_mean_var()
out = (input_vector - mean[(None,) * self.norm_axes]) / torch.sqrt(var)[(None,) * self.norm_axes]
return out
def denormalize(self, input_vector):
""" Transform normalized data back into original distribution """
if type(input_vector) == np.ndarray:
input_vector = torch.from_numpy(input_vector)
input_vector = input_vector.to(**self.tpdv)
mean, var = self.running_mean_var()
out = input_vector * torch.sqrt(var)[(None,) * self.norm_axes] + mean[(None,) * self.norm_axes]
out = out.cpu().numpy()
return out
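# Minimal usage sketch (illustration only, not part of the original file):
# normalise a batch of value targets, then map them back to the original scale.
if __name__ == "__main__":
    popart = PopArt(input_shape=1)
    targets = (np.random.randn(32, 1) * 10 + 5).astype(np.float32)
    normalised = popart(targets)                # forward() also updates the running stats
    recovered = popart.denormalize(normalised)
    print(np.abs(recovered - targets).max())    # ~0: denormalize inverts forward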
================================================
FILE: utils/separated_buffer.py
================================================
import torch
import numpy as np
from collections import defaultdict
from utils.util import check, get_shape_from_obs_space, get_shape_from_act_space
def _flatten(T, N, x):
return x.reshape(T * N, *x.shape[2:])
def _cast(x):
return x.transpose(1,0,2).reshape(-1, *x.shape[2:])
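# _flatten collapses a [T, N, ...] rollout array to [T*N, ...]; _cast first swaps the
# thread and time axes so each thread's trajectory stays contiguous after flattening
# (this layout is what the recurrent generator below slices into data chunks).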
class SeparatedReplayBuffer(object):
def __init__(self, args, obs_space, share_obs_space, act_space):
self.episode_length = args.episode_length
self.n_rollout_threads = args.n_rollout_threads
self.rnn_hidden_size = args.hidden_size
self.recurrent_N = args.recurrent_N
self.gamma = args.gamma
self.gae_lambda = args.gae_lambda
self._use_gae = args.use_gae
self._use_popart = args.use_popart
self._use_valuenorm = args.use_valuenorm
self._use_proper_time_limits = args.use_proper_time_limits
obs_shape = get_shape_from_obs_space(obs_space)
share_obs_shape = get_shape_from_obs_space(share_obs_space)
if type(obs_shape[-1]) == list:
obs_shape = obs_shape[:1]
if type(share_obs_shape[-1]) == list:
share_obs_shape = share_obs_shape[:1]
self.share_obs = np.zeros((self.episode_length + 1, self.n_rollout_threads, *share_obs_shape), dtype=np.float32)
self.obs = np.zeros((self.episode_length + 1, self.n_rollout_threads, *obs_shape), dtype=np.float32)
self.rnn_states = np.zeros((self.episode_length + 1, self.n_rollout_threads, self.recurrent_N, self.rnn_hidden_size), dtype=np.float32)
self.rnn_states_critic = np.zeros_like(self.rnn_states)
self.value_preds = np.zeros((self.episode_length + 1, self.n_rollout_threads, 1), dtype=np.float32)
self.returns = np.zeros((self.episode_length + 1, self.n_rollout_threads, 1), dtype=np.float32)
if act_space.__class__.__name__ == 'Discrete':
self.available_actions = np.ones((self.episode_length + 1, self.n_rollout_threads, act_space.n), dtype=np.float32)
else:
self.available_actions = None
act_shape = get_shape_from_act_space(act_space)
self.actions = np.zeros((self.episode_length, self.n_rollout_threads, act_shape), dtype=np.float32)
self.action_log_probs = np.zeros((self.episode_length, self.n_rollout_threads, act_shape), dtype=np.float32)
self.rewards = np.zeros((self.episode_length, self.n_rollout_threads, 1), dtype=np.float32)
self.masks = np.ones((self.episode_length + 1, self.n_rollout_threads, 1), dtype=np.float32)
self.bad_masks = np.ones_like(self.masks)
self.active_masks = np.ones_like(self.masks)
self.factor = None
self.step = 0
def update_factor(self, factor):
self.factor = factor.copy()
def insert(self, share_obs, obs, rnn_states, rnn_states_critic, actions, action_log_probs,
value_preds, rewards, masks, bad_masks=None, active_masks=None, available_actions=None):
self.share_obs[self.step + 1] = share_obs.copy()
self.obs[self.step + 1] = obs.copy()
self.rnn_states[self.step + 1] = rnn_states.copy()
self.rnn_states_critic[self.step + 1] = rnn_states_critic.copy()
self.actions[self.step] = actions.copy()
self.action_log_probs[self.step] = action_log_probs.copy()
self.value_preds[self.step] = value_preds.copy()
self.rewards[self.step] = rewards.copy()
self.masks[self.step + 1] = masks.copy()
if bad_masks is not None:
self.bad_masks[self.step + 1] = bad_masks.copy()
if active_masks is not None:
self.active_masks[self.step + 1] = active_masks.copy()
if available_actions is not None:
self.available_actions[self.step + 1] = available_actions.copy()
self.step = (self.step + 1) % self.episode_length
def chooseinsert(self, share_obs, obs, rnn_states, rnn_states_critic, actions, action_log_probs,
value_preds, rewards, masks, bad_masks=None, active_masks=None, available_actions=None):
self.share_obs[self.step] = share_obs.copy()
self.obs[self.step] = obs.copy()
self.rnn_states[self.step + 1] = rnn_states.copy()
self.rnn_states_critic[self.step + 1] = rnn_states_critic.copy()
self.actions[self.step] = actions.copy()
self.action_log_probs[self.step] = action_log_probs.copy()
self.value_preds[self.step] = value_preds.copy()
self.rewards[self.step] = rewards.copy()
self.masks[self.step + 1] = masks.copy()
if bad_masks is not None:
self.bad_masks[self.step + 1] = bad_masks.copy()
if active_masks is not None:
self.active_masks[self.step] = active_masks.copy()
if available_actions is not None:
self.available_actions[self.step] = available_actions.copy()
self.step = (self.step + 1) % self.episode_length
def after_update(self):
self.share_obs[0] = self.share_obs[-1].copy()
self.obs[0] = self.obs[-1].copy()
self.rnn_states[0] = self.rnn_states[-1].copy()
self.rnn_states_critic[0] = self.rnn_states_critic[-1].copy()
self.masks[0] = self.masks[-1].copy()
self.bad_masks[0] = self.bad_masks[-1].copy()
self.active_masks[0] = self.active_masks[-1].copy()
if self.available_actions is not None:
self.available_actions[0] = self.available_actions[-1].copy()
def chooseafter_update(self):
self.rnn_states[0] = self.rnn_states[-1].copy()
self.rnn_states_critic[0] = self.rnn_states_critic[-1].copy()
self.masks[0] = self.masks[-1].copy()
self.bad_masks[0] = self.bad_masks[-1].copy()
def compute_returns(self, next_value, value_normalizer=None):
"""
Compute returns either as discounted sums of rewards or with GAE. When use_proper_time_limits is enabled, bad_masks distinguish time-limit truncations from true terminations so that truncated episodes bootstrap from the value prediction.
"""
if self._use_proper_time_limits:
if self._use_gae:
self.value_preds[-1] = next_value
gae = 0
for step in reversed(range(self.rewards.shape[0])):
if self._use_popart or self._use_valuenorm:
delta = self.rewards[step] + self.gamma * value_normalizer.denormalize(self.value_preds[
step + 1]) * self.masks[step + 1] - value_normalizer.denormalize(self.value_preds[step])
gae = delta + self.gamma * self.gae_lambda * self.masks[step + 1] * gae
gae = gae * self.bad_masks[step + 1]
self.returns[step] = gae + value_normalizer.denormalize(self.value_preds[step])
else:
delta = self.rewards[step] + self.gamma * self.value_preds[step + 1] * self.masks[step + 1] - self.value_preds[step]
gae = delta + self.gamma * self.gae_lambda * self.masks[step + 1] * gae
gae = gae * self.bad_masks[step + 1]
self.returns[step] = gae + self.value_preds[step]
else:
self.returns[-1] = next_value
for step in reversed(range(self.rewards.shape[0])):
if self._use_popart:
self.returns[step] = (self.returns[step + 1] * self.gamma * self.masks[step + 1] + self.rewards[step]) * self.bad_masks[step + 1] \
+ (1 - self.bad_masks[step + 1]) * value_normalizer.denormalize(self.value_preds[step])
else:
self.returns[step] = (self.returns[step + 1] * self.gamma * self.masks[step + 1] + self.rewards[step]) * self.bad_masks[step + 1] \
+ (1 - self.bad_masks[step + 1]) * self.value_preds[step]
else:
if self._use_gae:
self.value_preds[-1] = next_value
gae = 0
for step in reversed(range(self.rewards.shape[0])):
if self._use_popart or self._use_valuenorm:
delta = self.rewards[step] + self.gamma * value_normalizer.denormalize(self.value_preds[step + 1]) * self.masks[step + 1] - value_normalizer.denormalize(self.value_preds[step])
gae = delta + self.gamma * self.gae_lambda * self.masks[step + 1] * gae
self.returns[step] = gae + value_normalizer.denormalize(self.value_preds[step])
else:
delta = self.rewards[step] + self.gamma * self.value_preds[step + 1] * self.masks[step + 1] - self.value_preds[step]
gae = delta + self.gamma * self.gae_lambda * self.masks[step + 1] * gae
self.returns[step] = gae + self.value_preds[step]
else:
self.returns[-1] = next_value
for step in reversed(range(self.rewards.shape[0])):
self.returns[step] = self.returns[step + 1] * self.gamma * self.masks[step + 1] + self.rewards[step]
def feed_forward_generator(self, advantages, num_mini_batch=None, mini_batch_size=None):
episode_length, n_rollout_threads = self.rewards.shape[0:2]
batch_size = n_rollout_threads * episode_length
if mini_batch_size is None:
assert batch_size >= num_mini_batch, (
"PPO requires the number of processes ({}) "
"* number of steps ({}) = {} "
"to be greater than or equal to the number of PPO mini batches ({})."
"".format(n_rollout_threads, episode_length, n_rollout_threads * episode_length,
num_mini_batch))
mini_batch_size = batch_size // num_mini_batch
rand = torch.randperm(batch_size).numpy()
sampler = [rand[i*mini_batch_size:(i+1)*mini_batch_size] for i in range(num_mini_batch)]
share_obs = self.share_obs[:-1].reshape(-1, *self.share_obs.shape[2:])
obs = self.obs[:-1].reshape(-1, *self.obs.shape[2:])
rnn_states = self.rnn_states[:-1].reshape(-1, *self.rnn_states.shape[2:])
rnn_states_critic = self.rnn_states_critic[:-1].reshape(-1, *self.rnn_states_critic.shape[2:])
actions = self.actions.reshape(-1, self.actions.shape[-1])
if self.available_actions is not None:
available_actions = self.available_actions[:-1].reshape(-1, self.available_actions.shape[-1])
value_preds = self.value_preds[:-1].reshape(-1, 1)
returns = self.returns[:-1].reshape(-1, 1)
masks = self.masks[:-1].reshape(-1, 1)
active_masks = self.active_masks[:-1].reshape(-1, 1)
action_log_probs = self.action_log_probs.reshape(-1, self.action_log_probs.shape[-1])
if self.factor is not None:
# factor = self.factor.reshape(-1,1)
factor = self.factor.reshape(-1, self.factor.shape[-1])
advantages = advantages.reshape(-1, 1)
for indices in sampler:
# obs size [T+1 N Dim]-->[T N Dim]-->[T*N,Dim]-->[index,Dim]
share_obs_batch = share_obs[indices]
obs_batch = obs[indices]
rnn_states_batch = rnn_states[indices]
rnn_states_critic_batch = rnn_states_critic[indices]
actions_batch = actions[indices]
if self.available_actions is not None:
available_actions_batch = available_actions[indices]
else:
available_actions_batch = None
value_preds_batch = value_preds[indices]
return_batch = returns[indices]
masks_batch = masks[indices]
active_masks_batch = active_masks[indices]
old_action_log_probs_batch = action_log_probs[indices]
if advantages is None:
adv_targ = None
else:
adv_targ = advantages[indices]
if self.factor is None:
yield share_obs_batch, obs_batch, rnn_states_batch, rnn_states_critic_batch, actions_batch, value_preds_batch, return_batch, masks_batch, active_masks_batch, old_action_log_probs_batch, adv_targ, available_actions_batch
else:
factor_batch = factor[indices]
yield share_obs_batch, obs_batch, rnn_states_batch, rnn_states_critic_batch, actions_batch, value_preds_batch, return_batch, masks_batch, active_masks_batch, old_action_log_probs_batch, adv_targ, available_actions_batch, factor_batch
def naive_recurrent_generator(self, advantages, num_mini_batch):
n_rollout_threads = self.rewards.shape[1]
assert n_rollout_threads >= num_mini_batch, (
"PPO requires the number of processes ({}) "
"to be greater than or equal to the number of "
"PPO mini batches ({}).".format(n_rollout_threads, num_mini_batch))
num_envs_per_batch = n_rollout_threads // num_mini_batch
perm = torch.randperm(n_rollout_threads).numpy()
for start_ind in range(0, n_rollout_threads, num_envs_per_batch):
share_obs_batch = []
obs_batch = []
rnn_states_batch = []
rnn_states_critic_batch = []
actions_batch = []
available_actions_batch = []
value_preds_batch = []
return_batch = []
masks_batch = []
active_masks_batch = []
old_action_log_probs_batch = []
adv_targ = []
factor_batch = []
for offset in range(num_envs_per_batch):
ind = perm[start_ind + offset]
share_obs_batch.append(self.share_obs[:-1, ind])
obs_batch.append(self.obs[:-1, ind])
rnn_states_batch.append(self.rnn_states[0:1, ind])
rnn_states_critic_batch.append(self.rnn_states_critic[0:1, ind])
actions_batch.append(self.actions[:, ind])
if self.available_actions is not None:
available_actions_batch.append(self.available_actions[:-1, ind])
value_preds_batch.append(self.value_preds[:-1, ind])
return_batch.append(self.returns[:-1, ind])
masks_batch.append(self.masks[:-1, ind])
active_masks_batch.append(self.active_masks[:-1, ind])
old_action_log_probs_batch.append(self.action_log_probs[:, ind])
adv_targ.append(advantages[:, ind])
if self.factor is not None:
factor_batch.append(self.factor[:,ind])
# [N[T, dim]]
T, N = self.episode_length, num_envs_per_batch
# These are all from_numpys of size (T, N, -1)
share_obs_batch = np.stack(share_obs_batch, 1)
obs_batch = np.stack(obs_batch, 1)
actions_batch = np.stack(actions_batch, 1)
if self.available_actions is not None:
available_actions_batch = np.stack(available_actions_batch, 1)
if self.factor is not None:
factor_batch=np.stack(factor_batch,1)
value_preds_batch = np.stack(value_preds_batch, 1)
return_batch = np.stack(return_batch, 1)
masks_batch = np.stack(masks_batch, 1)
active_masks_batch = np.stack(active_masks_batch, 1)
old_action_log_probs_batch = np.stack(old_action_log_probs_batch, 1)
adv_targ = np.stack(adv_targ, 1)
# States is just a (N, -1) from_numpy [N[1,dim]]
rnn_states_batch = np.stack(rnn_states_batch, 1).reshape(N, *self.rnn_states.shape[2:])
rnn_states_critic_batch = np.stack(rnn_states_critic_batch, 1).reshape(N, *self.rnn_states_critic.shape[2:])
# Flatten the (T, N, ...) from_numpys to (T * N, ...)
share_obs_batch = _flatten(T, N, share_obs_batch)
obs_batch = _flatten(T, N, obs_batch)
actions_batch = _flatten(T, N, actions_batch)
if self.available_actions is not None:
available_actions_batch = _flatten(T, N, available_actions_batch)
else:
available_actions_batch = None
if self.factor is not None:
factor_batch=_flatten(T,N,factor_batch)
value_preds_batch = _flatten(T, N, value_preds_batch)
return_batch = _flatten(T, N, return_batch)
masks_batch = _flatten(T, N, masks_batch)
active_masks_batch = _flatten(T, N, active_masks_batch)
old_action_log_probs_batch = _flatten(T, N, old_action_log_probs_batch)
adv_targ = _flatten(T, N, adv_targ)
if self.factor is not None:
yield share_obs_batch, obs_batch, rnn_states_batch, rnn_states_critic_batch, actions_batch, value_preds_batch, return_batch, masks_batch, active_masks_batch, old_action_log_probs_batch, adv_targ, available_actions_batch, factor_batch
else:
yield share_obs_batch, obs_batch, rnn_states_batch, rnn_states_critic_batch, actions_batch, value_preds_batch, return_batch, masks_batch, active_masks_batch, old_action_log_probs_batch, adv_targ, available_actions_batch
def recurrent_generator(self, advantages, num_mini_batch, data_chunk_length):
episode_length, n_rollout_threads = self.rewards.shape[0:2]
batch_size = n_rollout_threads * episode_length
data_chunks = batch_size // data_chunk_length # [C=r*T/L]
mini_batch_size = data_chunks // num_mini_batch
assert episode_length * n_rollout_threads >= data_chunk_length, (
"PPO requires the number of processes ({}) * episode length ({}) "
"to be greater than or equal to the number of "
"data chunk length ({}).".format(n_rollout_threads, episode_length, data_chunk_length))
assert data_chunks >= 2, ("need larger batch size")
rand = torch.randperm(data_chunks).numpy()
sampler = [rand[i*mini_batch_size:(i+1)*mini_batch_size] for i in range(num_mini_batch)]
if len(self.share_obs.shape) > 3:
share_obs = self.share_obs[:-1].transpose(1, 0, 2, 3, 4).reshape(-1, *self.share_obs.shape[2:])
obs = self.obs[:-1].transpose(1, 0, 2, 3, 4).reshape(-1, *self.obs.shape[2:])
else:
share_obs = _cast(self.share_obs[:-1])
obs = _cast(self.obs[:-1])
actions = _cast(self.actions)
action_log_probs = _cast(self.action_log_probs)
advantages = _cast(advantages)
value_preds = _cast(self.value_preds[:-1])
returns = _cast(self.returns[:-1])
masks = _cast(self.masks[:-1])
active_masks = _cast(self.active_masks[:-1])
if self.factor is not None:
factor = _cast(self.factor)
# rnn_states = _cast(self.rnn_states[:-1])
# rnn_states_critic = _cast(self.rnn_states_critic[:-1])
rnn_states = self.rnn_states[:-1].transpose(1, 0, 2, 3).reshape(-1, *self.rnn_states.shape[2:])
rnn_states_critic = self.rnn_states_critic[:-1].transpose(1, 0, 2, 3).reshape(-1, *self.rnn_states_critic.shape[2:])
if self.available_actions is not None:
available_actions = _cast(self.available_actions[:-1])
for indices in sampler:
share_obs_batch = []
obs_batch = []
rnn_states_batch = []
rnn_states_critic_batch = []
actions_batch = []
available_actions_batch = []
value_preds_batch = []
return_batch = []
masks_batch = []
active_masks_batch = []
old_action_log_probs_batch = []
adv_targ = []
factor_batch = []
for index in indices:
ind = index * data_chunk_length
# size [T+1 N M Dim]-->[T N Dim]-->[N T Dim]-->[T*N,Dim]-->[L,Dim]
share_obs_batch.append(share_obs[ind:ind+data_chunk_length])
obs_batch.append(obs[ind:ind+data_chunk_length])
actions_batch.append(actions[ind:ind+data_chunk_length])
if self.available_actions is not None:
available_actions_batch.append(available_actions[ind:ind+data_chunk_length])
value_preds_batch.append(value_preds[ind:ind+data_chunk_length])
return_batch.append(returns[ind:ind+data_chunk_length])
masks_batch.append(masks[ind:ind+data_chunk_length])
active_masks_batch.append(active_masks[ind:ind+data_chunk_length])
old_action_log_probs_batch.append(action_log_probs[ind:ind+data_chunk_length])
adv_targ.append(advantages[ind:ind+data_chunk_length])
# size [T+1 N Dim]-->[T N Dim]-->[T*N,Dim]-->[1,Dim]
rnn_states_batch.append(rnn_states[ind])
rnn_states_critic_batch.append(rnn_states_critic[ind])
if self.factor is not None:
factor_batch.append(factor[ind:ind+data_chunk_length])
L, N = data_chunk_length, mini_batch_size
# These are all from_numpys of size (N, L, Dim)
share_obs_batch = np.stack(share_obs_batch)
obs_batch = np.stack(obs_batch)
actions_batch = np.stack(actions_batch)
if self.available_actions is not None:
available_actions_batch = np.stack(available_actions_batch)
if self.factor is not None:
factor_batch = np.stack(factor_batch)
value_preds_batch = np.stack(value_preds_batch)
return_batch = np.stack(return_batch)
masks_batch = np.stack(masks_batch)
active_masks_batch = np.stack(active_masks_batch)
old_action_log_probs_batch = np.stack(old_action_log_probs_batch)
adv_targ = np.stack(adv_targ)
# States is just a (N, -1) from_numpy
rnn_states_batch = np.stack(rnn_states_batch).reshape(N, *self.rnn_states.shape[2:])
rnn_states_critic_batch = np.stack(rnn_states_critic_batch).reshape(N, *self.rnn_states_critic.shape[2:])
# Flatten the (L, N, ...) from_numpys to (L * N, ...)
share_obs_batch = _flatten(L, N, share_obs_batch)
obs_batch = _flatten(L, N, obs_batch)
actions_batch = _flatten(L, N, actions_batch)
if self.available_actions is not None:
available_actions_batch = _flatten(L, N, available_actions_batch)
else:
available_actions_batch = None
if self.factor is not None:
factor_batch = _flatten(L, N, factor_batch)
value_preds_batch = _flatten(L, N, value_preds_batch)
return_batch = _flatten(L, N, return_batch)
masks_batch = _flatten(L, N, masks_batch)
active_masks_batch = _flatten(L, N, active_masks_batch)
old_action_log_probs_batch = _flatten(L, N, old_action_log_probs_batch)
adv_targ = _flatten(L, N, adv_targ)
if self.factor is not None:
yield share_obs_batch, obs_batch, rnn_states_batch, rnn_states_critic_batch, actions_batch, value_preds_batch, return_batch, masks_batch, active_masks_batch, old_action_log_probs_batch, adv_targ, available_actions_batch, factor_batch
else:
yield share_obs_batch, obs_batch, rnn_states_batch, rnn_states_critic_batch, actions_batch, value_preds_batch, return_batch, masks_batch, active_masks_batch, old_action_log_probs_batch, adv_targ, available_actions_batch
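# Illustrative sketch (not part of the original file): the plain-numpy GAE recursion
# that compute_returns above applies in its use_gae branch, without PopArt/value
# normalisation or bad_masks. Shapes follow the buffer layout: rewards [T, N, 1],
# value_preds and masks [T + 1, N, 1]. The helper name is hypothetical.
def _gae_returns_sketch(rewards, value_preds, masks, gamma, gae_lambda):
    returns = np.zeros_like(rewards)
    gae = 0
    for step in reversed(range(rewards.shape[0])):
        delta = rewards[step] + gamma * value_preds[step + 1] * masks[step + 1] - value_preds[step]
        gae = delta + gamma * gae_lambda * masks[step + 1] * gae
        returns[step] = gae + value_preds[step]
    return returns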
================================================
FILE: utils/util.py
================================================
import numpy as np
import math
import torch
def check(input):
# convert numpy arrays to torch tensors; leave anything else unchanged
if type(input) == np.ndarray:
return torch.from_numpy(input)
return input
def get_gard_norm(it):
sum_grad = 0
for x in it:
if x.grad is None:
continue
sum_grad += x.grad.norm() ** 2
return math.sqrt(sum_grad)
def update_linear_schedule(optimizer, epoch, total_num_epochs, initial_lr):
"""Decreases the learning rate linearly"""
lr = initial_lr - (initial_lr * (epoch / float(total_num_epochs)))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def huber_loss(e, d):
a = (abs(e) <= d).float()
b = (abs(e) > d).float()
return a*e**2/2 + b*d*(abs(e)-d/2)
def mse_loss(e):
return e**2/2
def get_shape_from_obs_space(obs_space):
if obs_space.__class__.__name__ == 'Box':
obs_shape = obs_space.shape
elif obs_space.__class__.__name__ == 'list':
obs_shape = obs_space
else:
raise NotImplementedError
return obs_shape
def get_shape_from_act_space(act_space):
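# e.g. Discrete(5) -> 1 (actions are stored as a single index),
# the repo's MultiDiscrete -> its number of sub-spaces, Box with shape (6,) -> 6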
if act_space.__class__.__name__ == 'Discrete':
act_shape = 1
elif act_space.__class__.__name__ == "MultiDiscrete":
act_shape = act_space.shape
elif act_space.__class__.__name__ == "Box":
act_shape = act_space.shape[0]
elif act_space.__class__.__name__ == "MultiBinary":
act_shape = act_space.shape[0]
else: # agar
act_shape = act_space[0].shape[0] + 1
return act_shape
def tile_images(img_nhwc):
"""
Tile N images into one big PxQ image
(P,Q) are chosen to be as close as possible, and if N
is square, then P=Q.
input: img_nhwc, list or array of images, ndim=4 once turned into array
n = batch index, h = height, w = width, c = channel
returns:
bigim_HWc, ndarray with ndim=3
"""
img_nhwc = np.asarray(img_nhwc)
N, h, w, c = img_nhwc.shape
H = int(np.ceil(np.sqrt(N)))
W = int(np.ceil(float(N)/H))
img_nhwc = np.array(list(img_nhwc) + [img_nhwc[0]*0 for _ in range(N, H*W)])
img_HWhwc = img_nhwc.reshape(H, W, h, w, c)
img_HhWwc = img_HWhwc.transpose(0, 2, 1, 3, 4)
img_Hh_Ww_c = img_HhWwc.reshape(H*h, W*w, c)
return img_Hh_Ww_c
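# Minimal usage sketch (illustration only, not part of the original file):
# tile five 32x32 RGB frames into one grid image, padding with a blank tile.
if __name__ == "__main__":
    frames = np.random.randint(0, 255, size=(5, 32, 32, 3), dtype=np.uint8)
    grid = tile_images(frames)
    print(grid.shape)  # (96, 64, 3): a 3x2 grid of 32x32 tiles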