main 390773137be0 cached
111 files
603.4 KB
157.9k tokens
584 symbols
1 requests
Download .txt
Showing preview only (639K chars total). Download the full file or copy to clipboard to get everything.
Repository: Shuijing725/CrowdNav_Prediction_AttnGraph
Branch: main
Commit: 390773137be0
Files: 111
Total size: 603.4 KB

Directory structure:
gitextract_a_pq4yd8/

├── .gitignore
├── CHANGELOG.md
├── LICENSE
├── README.md
├── arguments.py
├── collect_data.py
├── crowd_nav/
│   ├── __init__.py
│   ├── configs/
│   │   ├── __init__.py
│   │   └── config.py
│   └── policy/
│       ├── orca.py
│       ├── policy.py
│       ├── policy_factory.py
│       ├── social_force.py
│       └── srnn.py
├── crowd_sim/
│   ├── README.md
│   ├── __init__.py
│   └── envs/
│       ├── __init__.py
│       ├── crowd_sim.py
│       ├── crowd_sim_pred.py
│       ├── crowd_sim_pred_real_gst.py
│       ├── crowd_sim_var_num.py
│       ├── crowd_sim_var_num_collect.py
│       ├── ros_turtlebot2i_env.py
│       └── utils/
│           ├── __init__.py
│           ├── action.py
│           ├── agent.py
│           ├── human.py
│           ├── info.py
│           ├── recorder.py
│           ├── robot.py
│           └── state.py
├── gst_updated/
│   ├── .gitignore
│   ├── LICENSE
│   ├── README-old.md
│   ├── README.md
│   ├── __init__.py
│   ├── requirements.txt
│   ├── run/
│   │   ├── create_batch_datasets_eth_ucy.sh
│   │   ├── create_batch_datasets_self_eth_ucy.sh
│   │   ├── create_batch_datasets_synth.sh
│   │   ├── download_datasets_models.sh
│   │   ├── download_wrapper_data_model.sh
│   │   └── make_dirs.sh
│   ├── scripts/
│   │   ├── experiments/
│   │   │   ├── draft.py
│   │   │   ├── eval.py
│   │   │   ├── eval_trajnet.py
│   │   │   ├── load_trajnet_test_data.py
│   │   │   ├── pathhack.py
│   │   │   ├── test.py
│   │   │   ├── test_gst.py
│   │   │   └── train.py
│   │   └── wrapper/
│   │       ├── crowd_nav_interface_multi_env_parallel.py
│   │       ├── crowd_nav_interface_multi_env_visualization_test_single_batch.py
│   │       ├── crowd_nav_interface_parallel.py
│   │       └── pathhack.py
│   ├── src/
│   │   ├── gumbel_social_transformer/
│   │   │   ├── edge_selector_ghost.py
│   │   │   ├── edge_selector_no_ghost.py
│   │   │   ├── gumbel_social_transformer.py
│   │   │   ├── mha.py
│   │   │   ├── node_encoder_layer_ghost.py
│   │   │   ├── node_encoder_layer_no_ghost.py
│   │   │   ├── pathhack.py
│   │   │   ├── st_model.py
│   │   │   ├── temperature_scheduler.py
│   │   │   ├── temporal_convolution_net.py
│   │   │   └── utils.py
│   │   ├── mgnn/
│   │   │   ├── batch_trajectories.py
│   │   │   ├── trajectories.py
│   │   │   ├── trajectories_sdd.py
│   │   │   ├── trajectories_trajnet.py
│   │   │   ├── trajectories_trajnet_testset.py
│   │   │   └── utils.py
│   │   └── pec_net/
│   │       ├── config/
│   │       │   └── optimal.yaml
│   │       ├── sdd_trajectories.py
│   │       └── social_utils.py
│   └── tuning/
│       ├── 211130-train_shuijing.sh
│       ├── 211203-eval_shuijing.sh
│       ├── 211203-train_shuijing.sh
│       └── 211209-test_shuijing.sh
├── plot.py
├── requirements.txt
├── rl/
│   ├── .gitignore
│   ├── __init__.py
│   ├── evaluation.py
│   ├── networks/
│   │   ├── __init__.py
│   │   ├── distributions.py
│   │   ├── dummy_vec_env.py
│   │   ├── envs.py
│   │   ├── model.py
│   │   ├── network_utils.py
│   │   ├── selfAttn_srnn_temp_node.py
│   │   ├── shmem_vec_env.py
│   │   ├── srnn_model.py
│   │   └── storage.py
│   ├── ppo/
│   │   ├── __init__.py
│   │   └── ppo.py
│   └── vec_env/
│       ├── __init__.py
│       ├── dummy_vec_env.py
│       ├── envs.py
│       ├── logger.py
│       ├── running_mean_std.py
│       ├── shmem_vec_env.py
│       ├── tile_images.py
│       ├── util.py
│       ├── vec_env.py
│       ├── vec_frame_stack.py
│       ├── vec_normalize.py
│       ├── vec_pretext_normalize.py
│       └── vec_remove_dict_obs.py
├── test.py
└── train.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
gst_datasets/
gst_updated/scripts/data
trained_models/
.idea/
venv/
*.pyc 
*__pycache__*

train1.py
test1.py
test2.py
check_env.py

================================================
FILE: CHANGELOG.md
================================================
# Changelog

All notable changes to this project will be documented in this file.

## 2023-03-28
### Changed
- Changed instructions for Pytorch installation (moved Pytorch out of requirements.txt, and added link of official website in readme.md)
- Fixed bug for 'visible_masks' in generate_ob function in crowd_sim_var_num.py and crowd_sim_pred_real_gst.py, so that the shape of observation will not throw errors when config.sim.human_num_range > 0

## 2023-01
Initial commit


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2023 Shuijing Liu

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# CrowdNav++
This repository contains the codes for our paper titled "Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph" in ICRA 2023. 
For more details, please refer to the [project website](https://sites.google.com/view/intention-aware-crowdnav/home) and [arXiv preprint](https://arxiv.org/abs/2203.01821).
For experiment demonstrations, please refer to the [youtube video](https://www.youtube.com/watch?v=nxpxhF019VA).

**[News]**
- Please check out our most recent follow-up work below:
  - [HEIGHT: Heterogeneous Interaction Graph Transformer for Robot Navigation in Crowded and Constrained Environments](https://github.com/Shuijing725/CrowdNav_HEIGHT)
- Please check out our open-sourced sim2real tutorial [here](https://github.com/Shuijing725/CrowdNav_Sim2Real_Turtlebot)
- Please check out my curated paper list for robot social navigation [here](https://github.com/Shuijing725/awesome-robot-social-navigation) (It is under active development) 

## Abstract
We study the problem of safe and intention-aware robot navigation in dense and interactive crowds. 
Most previous reinforcement learning (RL) based methods fail to consider different types of interactions among all agents or ignore the intentions of people, which results in performance degradation. 
In this paper, we propose a novel recurrent graph neural network with attention mechanisms to capture heterogeneous interactions among agents through space and time. 
To encourage longsighted robot behaviors, we infer the intentions of dynamic agents by predicting their future trajectories for several timesteps. 
The predictions are incorporated into a model-free RL framework to prevent the robot from intruding into the intended paths of other agents. 
We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios. We successfully transfer the policy learned in simulation to a real-world TurtleBot 2i.

<p align="center">
<img src="/figures/open.png" width="450" />
</p>

## Setup
1. In a conda environment or virtual environment with Python 3.x, install the required python package
```
pip install -r requirements.txt
```

2. Install Pytorch 1.12.1 following the instructions [here](https://pytorch.org/get-started/previous-versions/#v1121)

3. Install [OpenAI Baselines](https://github.com/openai/baselines#installation) 
```
git clone https://github.com/openai/baselines.git
cd baselines
pip install -e .
```

4. Install [Python-RVO2](https://github.com/sybrenstuvel/Python-RVO2) library


## Overview
This repository is organized in five parts: 
- `crowd_nav/` folder contains configurations and policies used in the simulator.
- `crowd_sim/` folder contains the simulation environment. 
- `gst_updated/` folder contains the code for running inference of a human trajectory predictor, named Gumbel Social Transformer (GST) [2].
- `rl/` contains the code for the RL policy networks, wrappers for the prediction network, and ppo algorithm. 
- `trained_models/` contains some pretrained models provided by us. 

Note that this repository does not include codes for training a trajectory prediction network. Please refer to from [this repo](https://github.com/tedhuang96/gst) instead.

## Run the code
### Training
- Modify the configurations.
  1. Environment configurations: Modify `crowd_nav/configs/config.py`. Especially,
     - Choice of human trajectory predictor: 
       - Set `sim.predict_method = 'inferred'` if a learning-based GST predictor is used [2]. Please also change `pred.model_dir` to be the directory of a trained GST model. We provide two pretrained models [here](https://github.com/Shuijing725/CrowdNav_Prediction_AttnGraph/tree/main/gst_updated/results/).
       - Set `sim.predict_method = 'const_vel'` if constant velocity model is used.
       - Set `sim.predict_method = 'truth'` if ground truth predictor is used.
       - Set `sim.predict_method = 'none'` if you do not want to use future trajectories to change the observation and reward.
     - Randomization of human behaviors: If you want to randomize the ORCA humans, 
       - set `env.randomize_attributes` to True to randomize the preferred velocity and radius of humans;
       - set `humans.random_goal_changing` to True to let humans randomly change goals before they arrive at their original goals.

  2. PPO and network configurations: modify `arguments.py`
     - `env_name` (must be consistent with `sim.predict_method` in `crowd_nav/configs/config.py`): 
        - If you use the GST predictor, set to `CrowdSimPredRealGST-v0`.
        - If you use the ground truth predictor or constant velocity predictor, set to `CrowdSimPred-v0`.
        - If you don't want to use prediction, set to `CrowdSimVarNum-v0`. 
     - `use_self_attn`: human-human attention network will be included if set to True, else there will be no human-human attention.
     - `use_hr_attn`: robot-human attention network will be included if set to True, else there will be no robot-human attention.
- After you change the configurations, run
  ```
  python train.py 
  ```
- The checkpoints and configuration files will be saved to the folder specified by `output_dir` in `arguments.py`.

### Testing
Please modify the test arguments in line 20-33 of `test.py` (**Don't set the argument values in terminal!**), and run   
```
python test.py 
```
Note that the `config.py` and `arguments.py` in the testing folder will be loaded, instead of those in the root directory.  
The testing results are logged in `trained_models/your_output_dir/test/` folder, and are also printed on terminal.  
If you set `visualize=True` in `test.py`, you will be able to see visualizations like this:  
<img src="/figures/visual.gif" width="420" />

#### Test pre-trained models provided by us
| Method                                 | `--model_dir` in test.py               | `--test_model` in test.py |
|----------------------------------------|----------------------------------------|---------------------------|
| Ours without randomized humans         | `trained_models/GST_predictor_no_rand` | `41200.pt`                |
| ORCA without randomized humans         | `trained_models/ORCA_no_rand`          | `00000.pt`                |
| Social force without randomized humans | `trained_models/SF_no_rand`            | `00000.pt`                |
| Ours with randomized humans            | `trained_models/GST_predictor_rand`    | `41665.pt`                |

#### Plot predicted future human positions
To visualize the episodes with predicted human trajectories, as well as saving visualizations to disk, please refer to [save_slides branch](https://github.com/Shuijing725/CrowdNav_Prediction_AttnGraph/tree/save_slides).  
Note that the above visualization and file saving will slow down testing significantly!   
- Set `save_slides=True` in `test.py` and all rendered frames will be saved in a subfolder inside the `trained_models/your_output_dir/social_eval/`.   

### Plot the training curves
```
python plot.py
```
Here are example learning curves of our proposed network model with GST predictor.

<img src="/figures/rewards.png" width="370" /> <img src="/figures/losses.png" width="370" />

## Sim2Real
We are happy to announce that our sim2real tutorial and code are released [here](https://github.com/Shuijing725/CrowdNav_Sim2Real_Turtlebot)!  
**Note:** This repo only serves as a reference point for the sim2real transfer of crowd navigation. Since there are lots of uncertainties in real-world experiments that may affect performance, we cannot guarantee that it is reproducible on all cases. 

## Disclaimer
1. We only tested our code in Ubuntu with Python 3.6 and Python 3.8. The code may work on other OS or other versions of Python, but we do not have any guarantee.  

2. The performance of our code can vary depending on the choice of hyperparameters and random seeds (see [this reddit post](https://www.reddit.com/r/MachineLearning/comments/rkewa3/d_what_are_your_machine_learning_superstitions/)). 
Unfortunately, we do not have time or resources for a thorough hyperparameter search. Thus, if your results are slightly worse than what is claimed in the paper, it is normal. 
To achieve the best performance, we recommend some manual hyperparameter tuning.

## Citation
If you find the code or the paper useful for your research, please cite the following papers:
```
@inproceedings{liu2022intention,
  title={Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph},
  author={Liu, Shuijing and Chang, Peixin and Huang, Zhe and Chakraborty, Neeloy and Hong, Kaiwen and Liang, Weihang and Livingston McPherson, D. and Geng, Junyi and Driggs-Campbell, Katherine},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2023},
  pages={12015-12021}
}

@inproceedings{liu2020decentralized,
  title={Decentralized Structural-RNN for Robot Crowd Navigation with Deep Reinforcement Learning},
  author={Liu, Shuijing and Chang, Peixin and Liang, Weihang and Chakraborty, Neeloy and Driggs-Campbell, Katherine},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021},
  pages={3517-3524}
}
```

## Credits
Other contributors:  
[Peixin Chang](https://github.com/PeixinC)  
[Zhe Huang](https://github.com/tedhuang96)   
[Neeloy Chakraborty](https://github.com/TheNeeloy)  

Part of the code is based on the following repositories:  

[1] S. Liu, P. Chang, W. Liang, N. Chakraborty, and K. Driggs-Campbell, "Decentralized Structural-RNN for Robot Crowd Navigation with Deep Reinforcement Learning," in IEEE International Conference on Robotics and Automation (ICRA), 2019, pp. 3517-3524. (Github: https://github.com/Shuijing725/CrowdNav_DSRNN)  

[2] Z. Huang, R. Li, K. Shin, and K. Driggs-Campbell. "Learning Sparse Interaction Graphs of Partially Detected Pedestrians for Trajectory Prediction," in IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 1198–1205, 2022. (Github: https://github.com/tedhuang96/gst)  

## Contact
If you have any questions or find any bugs, please feel free to open an issue or pull request.


================================================
FILE: arguments.py
================================================
import argparse

import torch


def get_args():
    parser = argparse.ArgumentParser(description='RL')

    # the saving directory for train.py
    parser.add_argument(
        '--output_dir', type=str, default='trained_models/my_model')

    # resume training from an existing checkpoint or not
    parser.add_argument(
        '--resume', default=False, action='store_true')
    # if resume = True, load from the following checkpoint
    parser.add_argument(
        '--load-path', default='trained_models/GST_predictor_non_rand/checkpoints/41200.pt',
        help='path of weights for resume training')
    parser.add_argument(
        '--overwrite',
        default=True,
        action='store_true',
        help = "whether to overwrite the output directory in training")
    parser.add_argument(
        '--num_threads',
        type=int,
        default=1,
        help="number of threads used for intraop parallelism on CPU")
    # only implement in testing
    parser.add_argument(
        '--phase', type=str, default='test')

    parser.add_argument(
        '--cuda-deterministic',
        action='store_true',
        default=False,
        help="sets flags for determinism when using CUDA (potentially slow!)")

    # only works for gpu only (although you can make it work on cpu after some minor fixes)
    parser.add_argument(
        '--no-cuda',
        action='store_true',
        default=False,
        help='disables CUDA training')
    parser.add_argument(
        '--seed', type=int, default=425, help='random seed (default: 1)')

    parser.add_argument(
        '--num-processes',
        type=int,
        default=16,
        help='how many training processes to use (default: 16)')

    parser.add_argument(
        '--num-mini-batch',
        type=int,
        default=2,
        help='number of batches for ppo (default: 32)')
    parser.add_argument(
        '--num-steps',
        type=int,
        default=30,
        help='number of forward steps in A2C (default: 5)')
    parser.add_argument(
        '--recurrent-policy',
        action='store_true',
        default=True,
        help='use a recurrent policy')

    parser.add_argument(
        '--ppo-epoch',
        type=int,
        default=5,
        help='number of ppo epochs (default: 4)')
    parser.add_argument(
        '--clip-param',
        type=float,
        default=0.2,
        help='ppo clip parameter (default: 0.2)')
    parser.add_argument(
        '--value-loss-coef',
        type=float,
        default=0.5,
        help='value loss coefficient (default: 0.5)')
    parser.add_argument(
        '--entropy-coef',
        type=float,
        default=0.0,
        help='entropy term coefficient (default: 0.01)')
    parser.add_argument(
        '--lr', type=float, default=4e-5, help='learning rate (default: 7e-4)')
    parser.add_argument(
        '--eps',
        type=float,
        default=1e-5,
        help='RMSprop optimizer epsilon (default: 1e-5)')
    parser.add_argument(
        '--alpha',
        type=float,
        default=0.99,
        help='RMSprop optimizer apha (default: 0.99)')
    parser.add_argument(
        '--gamma',
        type=float,
        default=0.99,
        help='discount factor for rewards (default: 0.99)')
    parser.add_argument(
        '--max-grad-norm',
        type=float,
        default=0.5,
        help='max norm of gradients (default: 0.5)')
    # 10e6 for holonomic, 20e6 for unicycle
    parser.add_argument(
        '--num-env-steps',
        type=int,
        default=20e6,
        help='number of environment steps to train (default: 10e6)')
    # True for unicycle, False for holonomic
    parser.add_argument(
        '--use-linear-lr-decay',
        action='store_true',
        default=False,
        help='use a linear schedule on the learning rate')
    parser.add_argument(
        '--algo', default='ppo', help='algorithm to use: a2c | ppo | acktr')
    parser.add_argument(
        '--save-interval',
        type=int,
        default=200,
        help='save interval, one save per n updates (default: 100)')
    parser.add_argument(
        '--use-gae',
        action='store_true',
        default=True,
        help='use generalized advantage estimation')
    parser.add_argument(
        '--gae-lambda',
        type=float,
        default=0.95,
        help='gae lambda parameter (default: 0.95)')
    parser.add_argument(
        '--log-interval',
        type=int,
        default=20,
        help='log interval, one log per n updates (default: 10)')
    parser.add_argument(
        '--use-proper-time-limits',
        action='store_true',
        default=False,
        help='compute returns taking into account time limits')

    # for srnn only
    # RNN size
    parser.add_argument('--human_node_rnn_size', type=int, default=128,
                        help='Size of Human Node RNN hidden state')
    parser.add_argument('--human_human_edge_rnn_size', type=int, default=256,
                        help='Size of Human Human Edge RNN hidden state')
    parser.add_argument(
        '--aux-loss',
        action='store_true',
        default=False,
        help='auxiliary loss on human nodes outputs')


    # Input and output size
    parser.add_argument('--human_node_input_size', type=int, default=3,
                        help='Dimension of the node features')
    parser.add_argument('--human_human_edge_input_size', type=int, default=2,
                        help='Dimension of the edge features')
    parser.add_argument('--human_node_output_size', type=int, default=256,
                        help='Dimension of the node output')

    # Embedding size
    parser.add_argument('--human_node_embedding_size', type=int, default=64,
                        help='Embedding size of node features')
    parser.add_argument('--human_human_edge_embedding_size', type=int, default=64,
                        help='Embedding size of edge features')

    # Attention vector dimension
    parser.add_argument('--attention_size', type=int, default=64,
                        help='Attention size')

    # Sequence length
    parser.add_argument('--seq_length', type=int, default=30,
                        help='Sequence length')

    # use self attn in human states or not
    parser.add_argument('--use_self_attn', type=bool, default=True,
                        help='Attention size')

    # use attn between humans and robots or not (todo: implment this in network models)
    parser.add_argument('--use_hr_attn', type=bool, default=True,
                        help='Attention size')

    # No prediction: for orca, sf, old_srnn, selfAttn_srnn_noPred ablation: 'CrowdSimVarNum-v0',
    # for constant velocity Pred, ground truth Pred: 'CrowdSimPred-v0'
    # gst pred: 'CrowdSimPredRealGST-v0'
    parser.add_argument(
        '--env-name',
        default='CrowdSimPredRealGST-v0',
        help='name of the environment')


    # sort all humans and squeeze them to the front or not
    parser.add_argument('--sort_humans', type=bool, default=True)


    args = parser.parse_args()

    args.cuda = not args.no_cuda and torch.cuda.is_available()

    assert args.algo in ['a2c', 'ppo', 'acktr']
    if args.recurrent_policy:
        assert args.algo in ['a2c', 'ppo'], \
            'Recurrent policy is not implemented for ACKTR'

    return args

================================================
FILE: collect_data.py
================================================
import matplotlib.pyplot as plt
import torch
import copy
import os
import numpy as np
from tqdm import trange

from crowd_nav.configs.config import Config
from rl.networks.envs import make_vec_envs
from crowd_sim.envs import *

# train_data: True if collect training data, False if collect testing data
def collectData(device, train_data, config):
    # set robot policy to orca
    config.robot.policy = 'orca'

    env_name = 'CrowdSimVarNumCollect-v0'
    # for render
    env_num = 1 if config.data.render else config.data.num_processes

    if config.data.render:
        fig, ax = plt.subplots(figsize=(7, 7))
        ax.set_xlim(-10, 10)
        ax.set_ylim(-10, 10)
        ax.set_xlabel('x(m)', fontsize=16)
        ax.set_ylabel('y(m)', fontsize=16)
        plt.ion()
        plt.show()
    else:
        ax = None

    # create parallel envs
    seed = np.random.randint(0, np.iinfo(np.uint32).max)
    envs = make_vec_envs(env_name, seed, env_num,
                         config.reward.gamma, None, device, allow_early_resets=True, config=config,
                         ax=ax, wrap_pytorch=False)


    # collect data for pretext training
    data = [] # list for all data collected
    for i in range(env_num):
        data.append([])

    obs = envs.reset()

    # 1 epoch -> 1 file
    # todo: add pretext arguments to config.py
    # make one prediction every "pred_interval" simulation steps
    pred_interval = config.data.pred_timestep // config.env.time_step
    tot_steps = int(config.data.tot_steps * pred_interval)
    for step in trange(tot_steps):

        if config.data.render:
            envs.render()
        if step % pred_interval == 0:
            # append a single data one by one
            for i in range(env_num):
                non_empty_obs = obs['pred_info'][i][np.logical_not(np.isinf(obs['pred_info'][i, :, -1]))]
                non_empty_obs = non_empty_obs.reshape((-1, 4)).tolist()
                if non_empty_obs:
                    data[i].extend(copy.deepcopy(non_empty_obs))

        action = np.zeros((env_num, 2))
        # action is is dummy action!
        obs, rew, done, info = envs.step(action)

    # save observations as pickle files
    filePath = os.path.join(config.data.data_save_dir, 'train') if train_data \
        else os.path.join(config.data.data_save_dir, 'test')
    if not os.path.isdir(filePath):
        os.makedirs(filePath)

    for i in range(env_num):
        with open(os.path.join(filePath, str(i)+'.txt'), 'w') as f:
            for item in data[i]:
                item = str(item[0]) + '\t' + str(item[1]) + '\t' + str(item[2]) + '\t' + str(item[3])
                f.write("%s\n" % item)


    envs.close()

if __name__ == '__main__':
    config = Config()
    device = torch.device("cuda")
    collectData(device, config.data.collect_train_data, config)


================================================
FILE: crowd_nav/__init__.py
================================================


================================================
FILE: crowd_nav/configs/__init__.py
================================================


================================================
FILE: crowd_nav/configs/config.py
================================================
import numpy as np
from arguments import get_args

class BaseConfig(object):
    def __init__(self):
        pass


class Config(object):
    # for now, import all args from arguments.py
    args = get_args()

    training = BaseConfig()
    training.device = "cuda:0" if args.cuda else "cpu"

    # general configs for OpenAI gym env
    env = BaseConfig()
    env.time_limit = 50
    env.time_step = 0.25
    env.val_size = 100
    env.test_size = 500
    # if randomize human behaviors, set to True, else set to False
    env.randomize_attributes = True
    env.num_processes = args.num_processes
    # record robot states and actions an episode for system identification in sim2real
    env.record = False
    env.load_act = False

    # config for reward function
    reward = BaseConfig()
    reward.success_reward = 10
    reward.collision_penalty = -20
    # discomfort distance
    reward.discomfort_dist = 0.25
    reward.discomfort_penalty_factor = 10
    reward.gamma = 0.99

    # config for simulation
    sim = BaseConfig()
    sim.circle_radius = 6 * np.sqrt(2)
    sim.arena_size = 6
    sim.human_num = 20
    # actual human num in each timestep, in [human_num-human_num_range, human_num+human_num_range]
    sim.human_num_range = 0
    sim.predict_steps = 5
    # 'const_vel': constant velocity model,
    # 'truth': ground truth future traj (with info in robot's fov)
    # 'inferred': inferred future traj from GST network
    # 'none': no prediction
    sim.predict_method = 'inferred'
    # render the simulation during training or not
    sim.render = False

    # for save_traj only
    render_traj = False
    save_slides = False
    save_path = None

    # whether wrap the vec env with VecPretextNormalize class
    # = True only if we are using a network for human trajectory prediction (sim.predict_method = 'inferred')
    if sim.predict_method == 'inferred':
        env.use_wrapper = True
    else:
        env.use_wrapper = False

    # human config
    humans = BaseConfig()
    humans.visible = True
    # orca or social_force for now
    humans.policy = "orca"
    humans.radius = 0.3
    humans.v_pref = 1
    humans.sensor = "coordinates"
    # FOV = this values * PI
    humans.FOV = 2.

    # a human may change its goal before it reaches its old goal
    # if randomize human behaviors, set to True, else set to False
    humans.random_goal_changing = True
    humans.goal_change_chance = 0.5

    # a human may change its goal after it reaches its old goal
    humans.end_goal_changing = True
    humans.end_goal_change_chance = 1.0

    # a human may change its radius and/or v_pref after it reaches its current goal
    humans.random_radii = False
    humans.random_v_pref = False

    # one human may have a random chance to be blind to other agents at every time step
    humans.random_unobservability = False
    humans.unobservable_chance = 0.3

    humans.random_policy_changing = False

    # robot config
    robot = BaseConfig()
    # whether robot is visible to humans (whether humans respond to the robot's motion)
    robot.visible = False
    # For baseline: srnn; our method: selfAttn_merge_srnn
    robot.policy = 'selfAttn_merge_srnn'
    robot.radius = 0.3
    robot.v_pref = 1
    robot.sensor = "coordinates"
    # FOV = this values * PI
    robot.FOV = 2
    # radius of perception range
    robot.sensor_range = 5

    # action space of the robot
    action_space = BaseConfig()
    # holonomic or unicycle
    action_space.kinematics = "holonomic"

    # config for ORCA
    orca = BaseConfig()
    orca.neighbor_dist = 10
    orca.safety_space = 0.15
    orca.time_horizon = 5
    orca.time_horizon_obst = 5

    # config for social force
    sf = BaseConfig()
    sf.A = 2.
    sf.B = 1
    sf.KI = 1

    # config for data collection for training the GST predictor
    data = BaseConfig()
    data.tot_steps = 40000
    data.render = False
    data.collect_train_data = False
    data.num_processes = 5
    data.data_save_dir = 'gst_updated/datasets/orca_20humans_no_rand'
    # number of seconds between each position in traj pred model
    data.pred_timestep = 0.25

    # config for the GST predictor
    pred = BaseConfig()
    # see 'gst_updated/results/README.md' for how to set this variable
    # If randomized humans: gst_updated/results/100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000_rand/sj
    # else: gst_updated/results/100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000/sj
    pred.model_dir = 'gst_updated/results/100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000_rand/sj'

    # LIDAR config
    lidar = BaseConfig()
    # angular resolution (offset angle between neighboring rays) in degrees
    lidar.angular_res = 5
    # range in meters
    lidar.range = 10

    # config for sim2real
    sim2real = BaseConfig()
    # use dummy robot and human states or not
    sim2real.use_dummy_detect = True
    sim2real.record = False
    sim2real.load_act = False
    sim2real.ROSStepInterval = 0.03
    sim2real.fixed_time_interval = 0.1
    sim2real.use_fixed_time_interval = True

    if sim.predict_method == 'inferred' and env.use_wrapper == False:
        raise ValueError("If using inferred prediction, you must wrap the envs!")
    if sim.predict_method != 'inferred' and env.use_wrapper:
        raise ValueError("If not using inferred prediction, you must NOT wrap the envs!")


================================================
FILE: crowd_nav/policy/orca.py
================================================
import numpy as np
import rvo2
from crowd_nav.policy.policy import Policy
from crowd_sim.envs.utils.action import ActionXY


class ORCA(Policy):
    def __init__(self, config):
        """
        timeStep        The time step of the simulation.
                        Must be positive.
        neighborDist    The default maximum distance (center point
                        to center point) to other agents a new agent
                        takes into account in the navigation. The
                        larger this number, the longer the running
                        time of the simulation. If the number is too
                        low, the simulation will not be safe. Must be
                        non-negative.
        maxNeighbors    The default maximum number of other agents a
                        new agent takes into account in the
                        navigation. The larger this number, the
                        longer the running time of the simulation.
                        If the number is too low, the simulation
                        will not be safe.
        timeHorizon     The default minimal amount of time for which
                        a new agent's velocities that are computed
                        by the simulation are safe with respect to
                        other agents. The larger this number, the
                        sooner an agent will respond to the presence
                        of other agents, but the less freedom the
                        agent has in choosing its velocities.
                        Must be positive.
        timeHorizonObst The default minimal amount of time for which
                        a new agent's velocities that are computed
                        by the simulation are safe with respect to
                        obstacles. The larger this number, the
                        sooner an agent will respond to the presence
                        of obstacles, but the less freedom the agent
                        has in choosing its velocities.
                        Must be positive.
        radius          The default radius of a new agent.
                        Must be non-negative.
        maxSpeed        The default maximum speed of a new agent.
                        Must be non-negative.
        velocity        The default initial two-dimensional linear
                        velocity of a new agent (optional).

        ORCA first uses neighborDist and maxNeighbors to find neighbors that need to be taken into account.
        Here set them to be large enough so that all agents will be considered as neighbors.
        Time_horizon should be set that at least it's safe for one time step

        In this work, obstacles are not considered. So the value of time_horizon_obst doesn't matter.

        """
        super().__init__(config)
        self.name = 'ORCA'
        self.max_neighbors = None
        self.radius = None
        self.max_speed = 1 # the ego agent assumes that all other agents have this max speed
        self.sim = None
        self.safety_space = self.config.orca.safety_space


    def predict(self, state):
        """
        Create a rvo2 simulation at each time step and run one step
        Python-RVO2 API: https://github.com/sybrenstuvel/Python-RVO2/blob/master/src/rvo2.pyx
        How simulation is done in RVO2: https://github.com/sybrenstuvel/Python-RVO2/blob/master/src/Agent.cpp

        Agent doesn't stop moving after it reaches the goal, because once it stops moving, the reciprocal rule is broken

        :param state:
        :return:
        """
        self_state = state.self_state
        # max number of humans = current number of humans
        self.max_neighbors = len(state.human_states)
        self.radius = state.self_state.radius
        params = self.config.orca.neighbor_dist, self.max_neighbors, self.config.orca.time_horizon, self.config.orca.time_horizon_obst
        if self.sim is not None and self.sim.getNumAgents() != len(state.human_states) + 1:
            del self.sim
            self.sim = None
        if self.sim is None:
            self.sim = rvo2.PyRVOSimulator(self.time_step, *params, self.radius, self.max_speed)
            self.sim.addAgent((self_state.px, self_state.py), *params, self_state.radius + 0.01 + self.safety_space,
                              self_state.v_pref, (self_state.vx, self_state.vy))
            for human_state in state.human_states:
                self.sim.addAgent((human_state.px, human_state.py), *params, human_state.radius + 0.01 + self.config.orca.safety_space,
                                  self.max_speed, (human_state.vx, human_state.vy))
        else:
            self.sim.setAgentPosition(0, (self_state.px, self_state.py))
            self.sim.setAgentVelocity(0, (self_state.vx, self_state.vy))
            for i, human_state in enumerate(state.human_states):
                self.sim.setAgentPosition(i + 1, (human_state.px, human_state.py))
                self.sim.setAgentVelocity(i + 1, (human_state.vx, human_state.vy))

        # Set the preferred velocity to be a vector of unit magnitude (speed) in the direction of the goal.
        velocity = np.array((self_state.gx - self_state.px, self_state.gy - self_state.py))
        speed = np.linalg.norm(velocity)
        pref_vel = velocity / speed if speed > 1 else velocity

        # Perturb a little to avoid deadlocks due to perfect symmetry.
        # perturb_angle = np.random.random() * 2 * np.pi
        # perturb_dist = np.random.random() * 0.01
        # perturb_vel = np.array((np.cos(perturb_angle), np.sin(perturb_angle))) * perturb_dist
        # pref_vel += perturb_vel

        self.sim.setAgentPrefVelocity(0, tuple(pref_vel))
        for i, human_state in enumerate(state.human_states):
            # unknown goal position of other humans
            self.sim.setAgentPrefVelocity(i + 1, (0, 0))

        self.sim.doStep()
        action = ActionXY(*self.sim.getAgentVelocity(0))
        self.last_state = state

        return action


================================================
FILE: crowd_nav/policy/policy.py
================================================
import abc
import numpy as np


class Policy(object):
    def __init__(self, config):
        """
        Base class for all policies, has an abstract method predict().
        """
        self.trainable = False
        self.phase = None
        self.model = None
        self.device = None
        self.last_state = None
        self.time_step = None
        # if agent is assumed to know the dynamics of real world
        self.env = None
        self.config = config


    @abc.abstractmethod
    def predict(self, state):
        """
        Policy takes state as input and output an action

        """
        return

    @staticmethod
    def reach_destination(state):
        self_state = state.self_state
        if np.linalg.norm((self_state.py - self_state.gy, self_state.px - self_state.gx)) < self_state.radius:
            return True
        else:
            return False



================================================
FILE: crowd_nav/policy/policy_factory.py
================================================
policy_factory = dict()
def none_policy():
    return None

from crowd_nav.policy.orca import ORCA
from crowd_nav.policy.social_force import SOCIAL_FORCE
from crowd_nav.policy.srnn import SRNN, selfAttn_merge_SRNN

policy_factory['orca'] = ORCA
policy_factory['none'] = none_policy
policy_factory['social_force'] = SOCIAL_FORCE
policy_factory['srnn'] = SRNN
policy_factory['selfAttn_merge_srnn'] = selfAttn_merge_SRNN

================================================
FILE: crowd_nav/policy/social_force.py
================================================
import numpy as np
from crowd_nav.policy.policy import Policy
from crowd_sim.envs.utils.action import ActionXY

class SOCIAL_FORCE(Policy):
    def __init__(self, config):
        super().__init__(config)
        self.name = 'social_force'


    def predict(self, state):
        """
        Produce action for agent with circular specification of social force model.
        """
        # Pull force to goal
        delta_x = state.self_state.gx - state.self_state.px
        delta_y = state.self_state.gy - state.self_state.py
        dist_to_goal = np.sqrt(delta_x**2 + delta_y**2)
        desired_vx = (delta_x / dist_to_goal) * state.self_state.v_pref
        desired_vy = (delta_y / dist_to_goal) * state.self_state.v_pref
        KI = self.config.sf.KI # Inverse of relocation time K_i
        curr_delta_vx = KI * (desired_vx - state.self_state.vx)
        curr_delta_vy = KI * (desired_vy - state.self_state.vy)
        
        # Push force(s) from other agents
        A = self.config.sf.A # Other observations' interaction strength: 1.5
        B = self.config.sf.B # Other observations' interaction range: 1.0
        interaction_vx = 0
        interaction_vy = 0
        for other_human_state in state.human_states:
            delta_x = state.self_state.px - other_human_state.px
            delta_y = state.self_state.py - other_human_state.py
            dist_to_human = np.sqrt(delta_x**2 + delta_y**2)
            interaction_vx += A * np.exp((state.self_state.radius + other_human_state.radius - dist_to_human) / B) * (delta_x / dist_to_human)
            interaction_vy += A * np.exp((state.self_state.radius + other_human_state.radius - dist_to_human) / B) * (delta_y / dist_to_human)

        # Sum of push & pull forces
        total_delta_vx = (curr_delta_vx + interaction_vx) * self.config.env.time_step
        total_delta_vy = (curr_delta_vy + interaction_vy) * self.config.env.time_step

        # clip the speed so that sqrt(vx^2 + vy^2) <= v_pref
        new_vx = state.self_state.vx + total_delta_vx
        new_vy = state.self_state.vy + total_delta_vy
        act_norm = np.linalg.norm([new_vx, new_vy])

        if act_norm > state.self_state.v_pref:
            return ActionXY(new_vx / act_norm * state.self_state.v_pref, new_vy / act_norm * state.self_state.v_pref)
        else:
            return ActionXY(new_vx, new_vy)


================================================
FILE: crowd_nav/policy/srnn.py
================================================

from crowd_nav.policy.policy import Policy
import numpy as np
from crowd_sim.envs.utils.action import ActionRot, ActionXY


class SRNN(Policy):
	def __init__(self, config):
		super().__init__(config)
		self.time_step = self.config.env.time_step # Todo: is this needed?
		self.name = 'srnn'
		self.trainable = True
		self.multiagent_training = True


	# clip the self.raw_action and return the clipped action
	def clip_action(self, raw_action, v_pref):
		"""
        Input state is the joint state of robot concatenated by the observable state of other agents

        To predict the best action, agent samples actions and propagates one step to see how good the next state is
        thus the reward function is needed

        """
		# quantize the action
		holonomic = True if self.config.action_space.kinematics == 'holonomic' else False
		# clip the action
		if holonomic:
			act_norm = np.linalg.norm(raw_action)
			if act_norm > v_pref:
				raw_action[0] = raw_action[0] / act_norm * v_pref
				raw_action[1] = raw_action[1] / act_norm * v_pref
			return ActionXY(raw_action[0], raw_action[1])
		else:
			# for sim2real
			# old value: -0.1, 0.1
			raw_action[0] = np.clip(raw_action[0], -0.1, 0.087) # action[0] is change of v
			# raw[0, 1] = np.clip(raw[0, 1], -0.25, 0.25) # action[1] is change of w
			# raw[0, 0] = np.clip(raw[0, 0], -state.self_state.v_pref, state.self_state.v_pref) # action[0] is v
			# old value: -0.1, 0.1
			# 0.073
			raw_action[1] = np.clip(raw_action[1], -0.06, 0.06) # action[1] is change of theta

			return ActionRot(raw_action[0], raw_action[1])


class selfAttn_merge_SRNN(SRNN):
	def __init__(self, config):
		super().__init__(config)
		self.name = 'selfAttn_merge_srnn'

================================================
FILE: crowd_sim/README.md
================================================
# Simulation Framework
## Environment
The environment contains n+1 agents. N of them are humans controlled by certain a fixed
policy. The other is robot and it's controlled by a trainable policy.
The environment is built on top of OpenAI gym library, and has implemented two abstract methods.
* reset(): the environment will reset positions for all the agents and return observation 
for robot. Observation for one agent is the observable states of all other agents.
* step(action): taking action of the robot as input, the environment computes observation
for each agent and call agent.act(observation) to get actions of agents. Then environment detects
whether there is a collision between agents. If not, the states of agents will be updated. Then 
observation, reward, done, info will be returned.


## Agent
Agent is a base class, and has two derived class of human and robot. Agent class holds
all the physical properties of an agent, including position, velocity, orientation, policy and etc.
* visibility: humans and robot can be set to be visible or invisible
* sensor: can be either visual input or coordinate input
* kinematics: can be either holonomic (move in any direction) or unicycle (has rotation constraints)
* act(observation): transform observation to state and pass it to policy


## Policy
Policy takes state as input and output an action. Current available policies:
* ORCA: model other agents as velocity obstacles to find optimal collision-free velocities under reciprocal assumption
* Social force: models the interactions in crowds using attractive and  repulsive  forces
* DS-RNN: our method


## State
There are multiple definition of states in different cases. The state of an agent representing all
the knowledge of environments is defined as JointState, and it's different from the state of the whole environment.
* ObservableState: position, velocity, radius of one agent
* FullState: position, velocity, radius, goal position, preferred velocity, rotation of one agent
* JoinState: concatenation of one agent's full state and all other agents' observable states 


## Action
There are two types of actions depending on what kinematics constraint the agent has.
* ActionXY: (vx, vy) if kinematics == 'holonomic'
* ActionRot: (velocity, rotation angle) if kinematics == 'unicycle'

================================================
FILE: crowd_sim/__init__.py
================================================
from gym.envs.registration import register

register(
    id='CrowdSim-v0',
    entry_point='crowd_sim.envs:CrowdSim',
)

register(
    id='CrowdSimPred-v0',
    entry_point='crowd_sim.envs:CrowdSimPred',
)

register(
    id='CrowdSimVarNum-v0',
    entry_point='crowd_sim.envs:CrowdSimVarNum',
)

register(
    id='CrowdSimVarNumCollect-v0',
    entry_point='crowd_sim.envs:CrowdSimVarNumCollect',
)

register(
    id='CrowdSimPredRealGST-v0',
    entry_point='crowd_sim.envs:CrowdSimPredRealGST',
)

register(
    id='rosTurtlebot2iEnv-v0',
    entry_point='crowd_sim.envs.ros_turtlebot2i_env:rosTurtlebot2iEnv',
)

================================================
FILE: crowd_sim/envs/__init__.py
================================================
from .crowd_sim import CrowdSim
from .crowd_sim_pred import CrowdSimPred
from .crowd_sim_var_num import CrowdSimVarNum
from .crowd_sim_var_num_collect import CrowdSimVarNumCollect
from .crowd_sim_pred_real_gst import CrowdSimPredRealGST

================================================
FILE: crowd_sim/envs/crowd_sim.py
================================================
import logging
import gym
import numpy as np
import rvo2
import random
import copy

from numpy.linalg import norm
from crowd_sim.envs.utils.human import Human
from crowd_sim.envs.utils.robot import Robot
from crowd_sim.envs.utils.info import *
from crowd_nav.policy.orca import ORCA
from crowd_sim.envs.utils.state import *
from crowd_sim.envs.utils.action import ActionRot, ActionXY
from crowd_sim.envs.utils.recorder import Recoder



class CrowdSim(gym.Env):
    """
    A base environment
    treat it as an abstract class, all other environments inherit from this one
    """
    def __init__(self):
        """
        Movement simulation for n+1 agents
        Agent can either be human or robot.
        humans are controlled by a unknown and fixed policy.
        robot is controlled by a known and learnable policy.
        """
        self.time_limit = None
        self.time_step = None
        self.robot = None # a Robot instance representing the robot
        self.humans = None # a list of Human instances, representing all humans in the environment
        self.global_time = None
        self.step_counter=0

        # reward function
        self.success_reward = None
        self.collision_penalty = None
        self.discomfort_dist = None
        self.discomfort_penalty_factor = None
        # simulation configuration
        self.config = None
        self.case_capacity = None
        self.case_size = None
        self.case_counter = None
        self.randomize_attributes = None

        self.circle_radius = None
        self.human_num = None


        self.action_space=None
        self.observation_space=None

        # limit FOV
        self.robot_fov = None
        self.human_fov = None

        self.dummy_human = None
        self.dummy_robot = None

        #seed
        self.thisSeed=None # the seed will be set when the env is created

        #nenv
        self.nenv=None # the number of env will be set when the env is created.
        # Because the human crossing cases are controlled by random seed, we will calculate unique random seed for each
        # parallel env.

        self.phase=None # set the phase to be train, val or test
        self.test_case=None # the test case ID, which will be used to calculate a seed to generate a human crossing case

        # for render
        self.render_axis=None

        self.humans = []

        self.potential = None

        self.desiredVelocity = [0.0, 0.0]

        self.last_left = 0.
        self.last_right = 0.


    def configure(self, config):
        """ read the config to the environment variables """

        self.config = config

        self.time_limit = config.env.time_limit
        self.time_step = config.env.time_step
        self.randomize_attributes = config.env.randomize_attributes

        self.success_reward = config.reward.success_reward
        self.collision_penalty = config.reward.collision_penalty
        self.discomfort_dist = config.reward.discomfort_dist
        self.discomfort_penalty_factor = config.reward.discomfort_penalty_factor


        self.case_capacity = {'train': np.iinfo(np.uint32).max - 2000, 'val': 1000, 'test': 1000}
        self.case_size = {'train': np.iinfo(np.uint32).max - 2000, 'val': self.config.env.val_size,
                          'test': self.config.env.test_size}
        self.circle_radius = config.sim.circle_radius
        self.human_num = config.sim.human_num

        self.arena_size = config.sim.arena_size

        self.case_counter = {'train': 0, 'test': 0, 'val': 0}

        logging.info('human number: {}'.format(self.human_num))
        if self.randomize_attributes:
            logging.info("Randomize human's radius and preferred speed")
        else:
            logging.info("Not randomize human's radius and preferred speed")

        logging.info('Circle width: {}'.format(self.circle_radius))


        self.robot_fov = np.pi * config.robot.FOV
        self.human_fov = np.pi * config.humans.FOV
        logging.info('robot FOV %f', self.robot_fov)
        logging.info('humans FOV %f', self.human_fov)


        # set dummy human and dummy robot
        # dummy humans, used if any human is not in view of other agents
        self.dummy_human = Human(self.config, 'humans')
        # if a human is not in view, set its state to (px = 100, py = 100, vx = 0, vy = 0, theta = 0, radius = 0)
        self.dummy_human.set(7, 7, 7, 7, 0, 0, 0) # (7, 7, 7, 7, 0, 0, 0)
        self.dummy_human.time_step = config.env.time_step

        self.dummy_robot = Robot(self.config, 'robot')
        self.dummy_robot.set(7, 7, 7, 7, 0, 0, 0)
        self.dummy_robot.time_step = config.env.time_step
        self.dummy_robot.kinematics = 'holonomic'
        self.dummy_robot.policy = ORCA(config)

        self.r = self.config.humans.radius


        # configure randomized goal changing of humans midway through episode
        self.random_goal_changing = config.humans.random_goal_changing
        if self.random_goal_changing:
            self.goal_change_chance = config.humans.goal_change_chance

        # configure randomized goal changing of humans after reaching their respective goals
        self.end_goal_changing = config.humans.end_goal_changing
        if self.end_goal_changing:
            self.end_goal_change_chance = config.humans.end_goal_change_chance

        self.last_human_states = np.zeros((self.human_num, 5))

        self.predict_steps = config.sim.predict_steps
        self.human_num_range = config.sim.human_num_range
        assert self.human_num > self.human_num_range

        self.max_human_num = self.human_num + self.human_num_range
        self.min_human_num = self.human_num - self.human_num_range

        # for sim2real dynamics check
        self.record=config.sim2real.record
        self.load_act=config.sim2real.load_act
        self.ROSStepInterval=config.sim2real.ROSStepInterval
        self.fixed_time_interval=config.sim2real.fixed_time_interval
        self.use_fixed_time_interval = config.sim2real.use_fixed_time_interval
        if self.record:
            self.episodeRecoder=Recoder()
            self.load_act=config.sim2real.load_act
            if self.load_act:
                self.episodeRecoder.loadActions()
        # use dummy robot and human states or use detected states from sensors
        self.use_dummy_detect = config.sim2real.use_dummy_detect



        # prediction period / control (or simulation) period
        self.pred_interval = int(config.data.pred_timestep // config.env.time_step)
        self.buffer_len = self.predict_steps * self.pred_interval

        # set robot for this envs
        rob_RL = Robot(config, 'robot')
        self.set_robot(rob_RL)


    def set_robot(self, robot):
        raise NotImplementedError


    def generate_random_human_position(self, human_num):
        """
        Calls generate_circle_crossing_human function to generate a certain number of random humans
        :param human_num: the total number of humans to be generated
        :return: None
        """
        # initial min separation distance to avoid danger penalty at beginning
        for i in range(human_num):
            self.humans.append(self.generate_circle_crossing_human())


    def generate_circle_crossing_human(self):
        """Generate a human: generate start position on a circle, goal position is at the opposite side"""
        human = Human(self.config, 'humans')
        if self.randomize_attributes:
            human.sample_random_attributes()

        while True:
            angle = np.random.random() * np.pi * 2
            # add some noise to simulate all the possible cases robot could meet with human
            v_pref = 1.0 if human.v_pref == 0 else human.v_pref
            px_noise = (np.random.random() - 0.5) * v_pref
            py_noise = (np.random.random() - 0.5) * v_pref
            px = self.circle_radius * np.cos(angle) + px_noise
            py = self.circle_radius * np.sin(angle) + py_noise
            collide = False

            for i, agent in enumerate([self.robot] + self.humans):
                # keep human at least 3 meters away from robot
                if self.robot.kinematics == 'unicycle' and i == 0:
                    min_dist = self.circle_radius / 2 # Todo: if circle_radius <= 4, it will get stuck here
                else:
                    min_dist = human.radius + agent.radius + self.discomfort_dist
                if norm((px - agent.px, py - agent.py)) < min_dist or \
                        norm((px - agent.gx, py - agent.gy)) < min_dist:
                    collide = True
                    break
            if not collide:
                break

        # px = np.random.uniform(-6, 6)
        # py = np.random.uniform(-3, 3.5)
        # human.set(px, py, px, py, 0, 0, 0)
        human.set(px, py, -px, -py, 0, 0, 0)
        return human



    # update the robot belief of human states
    # if a human is visible, its state is updated to its current ground truth state
    # else we assume it keeps going in a straight line with last observed velocity
    def update_last_human_states(self, human_visibility, reset):
        """
        update the self.last_human_states array
        human_visibility: list of booleans returned by get_human_in_fov (e.x. [T, F, F, T, F])
        reset: True if this function is called by reset, False if called by step
        :return:
        """
        # keep the order of 5 humans at each timestep
        for i in range(self.human_num):
            if human_visibility[i]:
                humanS = np.array(self.humans[i].get_observable_state_list())
                self.last_human_states[i, :] = humanS

            else:
                if reset:
                    humanS = np.array([15., 15., 0., 0., 0.3])
                    self.last_human_states[i, :] = humanS

                else:
                    # Plan A: linear approximation of human's next position
                    px, py, vx, vy, r = self.last_human_states[i, :]
                    px = px + vx * self.time_step
                    py = py + vy * self.time_step
                    self.last_human_states[i, :] = np.array([px, py, vx, vy, r])

                    # Plan B: assume the human doesn't move, use last observation
                    # self.last_human_states[i, :] = np.array([px, py, 0., 0., r])

                    # Plan C: put invisible humans in a dummy position
                    # humanS = np.array([15., 15., 0., 0., 0.3])
                    # self.last_human_states[i, :] = humanS

    # return the ground truth locations of all humans
    def get_true_human_states(self):
        true_human_states = np.zeros((self.human_num, 2))
        for i in range(self.human_num):
            humanS = np.array(self.humans[i].get_observable_state_list())
            true_human_states[i, :] = humanS[:2]
        return true_human_states


    # set robot initial state and generate all humans for reset function
    # for crowd nav: human_num == self.human_num
    # for leader follower: human_num = self.human_num - 1
    def generate_robot_humans(self, phase, human_num=None):
        if human_num is None:
            human_num = self.human_num

        if self.robot.kinematics == 'unicycle':
            angle = np.random.uniform(0, np.pi * 2)
            px = self.circle_radius * np.cos(angle)
            py = self.circle_radius * np.sin(angle)
            while True:
                gx, gy = np.random.uniform(-self.circle_radius, self.circle_radius, 2)
                if np.linalg.norm([px - gx, py - gy]) >= 6:  # 1 was 6
                    break
            self.robot.set(px, py, gx, gy, 0, 0, np.random.uniform(0, 2*np.pi)) # randomize init orientation

        # randomize starting position and goal position
        else:
            while True:
                px, py, gx, gy = np.random.uniform(-self.circle_radius, self.circle_radius, 4)
                if np.linalg.norm([px - gx, py - gy]) >= 6:
                    break
            self.robot.set(px, py, gx, gy, 0, 0, np.pi/2)


        # generate humans
        self.generate_random_human_position(human_num=human_num)



    def smooth_action(self, action):
        """ mimic the dynamics of Turtlebot2i for sim2real """
        # if action.r is delta theta
        w = action.r / self.time_step
        # if action.r is w
        # w = action.r
        beta = 0.1
        left = (2 * action.v - 0.23 * w) / (2 * 0.035)
        right = (2 * action.v + 0.23 * w) / (2 * 0.035)

        left = np.clip(left, -17.5, 17.5)
        right = np.clip(right, -17.5, 17.5)

        # print('Before: left:', left, 'right:', right)
        if self.phase == 'test':
            left = (1. - beta) * self.last_left + beta * left
            right = (1. - beta) * self.last_right + beta * right

        self.last_left = copy.deepcopy(left)
        self.last_right = copy.deepcopy(right)

        # subtract a noisy amount of delay from wheel speeds to simulate the delay in tb2
        # do this in the last step because this happens after we send action commands to tb2
        if left > 0:
            adjust_left = left - np.random.normal(loc=1.8, scale=0.15)
            left = max(0., adjust_left)
        else:
            adjust_left = left + np.random.normal(loc=1.8, scale=0.15)
            left = min(0., adjust_left)

        if right > 0:
            adjust_right = right - np.random.normal(loc=1.8, scale=0.15)
            right = max(0., adjust_right)
        else:
            adjust_right = right + np.random.normal(loc=1.8, scale=0.15)
            right = min(0., adjust_right)

        if self.record:
            self.episodeRecoder.wheelVelList.append([left, right])
        # print('After: left:', left, 'right:', right)

        v = 0.035 / 2 * (left + right)
        r = 0.035 / 0.23 * (right - left) * self.time_step
        return ActionRot(v, r)


    def reset(self, phase='train', test_case=None):
        """
        Reset the environment
        :return:
        """

        if self.phase is not None:
            phase = self.phase
        if self.test_case is not None:
            test_case=self.test_case

        if self.robot is None:
            raise AttributeError('robot has to be set!')
        assert phase in ['train', 'val', 'test']
        if test_case is not None:
            self.case_counter[phase] = test_case # test case is passed in to calculate specific seed to generate case
        self.global_time = 0
        self.step_counter=0


        self.humans = []
        # train, val, and test phase should start with different seed.
        # case capacity: the maximum number for train(max possible int -2000), val(1000), and test(1000)
        # val start from seed=0, test start from seed=case_capacity['val']=1000
        # train start from self.case_capacity['val'] + self.case_capacity['test']=2000
        counter_offset = {'train': self.case_capacity['val'] + self.case_capacity['test'],
                          'val': 0, 'test': self.case_capacity['val']}

        np.random.seed(counter_offset[phase] + self.case_counter[phase] + self.thisSeed)

        self.generate_robot_humans(phase)


        for agent in [self.robot] + self.humans:
            agent.time_step = self.time_step
            agent.policy.time_step = self.time_step


        # case size is used to make sure that the case_counter is always between 0 and case_size[phase]
        self.case_counter[phase] = (self.case_counter[phase] + int(1*self.nenv)) % self.case_size[phase]


        # get current observation
        ob = self.generate_ob(reset=True)

        # initialize potential
        self.potential = -abs(np.linalg.norm(np.array([self.robot.px, self.robot.py]) - np.array([self.robot.gx, self.robot.gy])))


        return ob


    # Update the humans' end goals in the environment
    # Produces valid end goals for each human
    def update_human_goals_randomly(self):
        # Update humans' goals randomly
        for human in self.humans:
            if human.isObstacle or human.v_pref == 0:
                continue
            if np.random.random() <= self.goal_change_chance:
                humans_copy = []
                for h in self.humans:
                    if h != human:
                        humans_copy.append(h)


                # Produce valid goal for human in case of circle setting
                while True:
                    angle = np.random.random() * np.pi * 2
                    # add some noise to simulate all the possible cases robot could meet with human
                    v_pref = 1.0 if human.v_pref == 0 else human.v_pref
                    gx_noise = (np.random.random() - 0.5) * v_pref
                    gy_noise = (np.random.random() - 0.5) * v_pref
                    gx = self.circle_radius * np.cos(angle) + gx_noise
                    gy = self.circle_radius * np.sin(angle) + gy_noise
                    collide = False

                    for agent in [self.robot] + humans_copy:
                        min_dist = human.radius + agent.radius + self.discomfort_dist
                        if norm((gx - agent.px, gy - agent.py)) < min_dist or \
                                norm((gx - agent.gx, gy - agent.gy)) < min_dist:
                            collide = True
                            break
                    if not collide:
                        break

                # Give human new goal
                human.gx = gx
                human.gy = gy
        return

    # Update the specified human's end goals in the environment randomly
    def update_human_goal(self, human):

        # Update human's goals randomly
        if np.random.random() <= self.end_goal_change_chance:
            humans_copy = []
            for h in self.humans:
                if h != human:
                    humans_copy.append(h)


            while True:
                angle = np.random.random() * np.pi * 2
                # add some noise to simulate all the possible cases robot could meet with human
                v_pref = 1.0 if human.v_pref == 0 else human.v_pref
                gx_noise = (np.random.random() - 0.5) * v_pref
                gy_noise = (np.random.random() - 0.5) * v_pref
                gx = self.circle_radius * np.cos(angle) + gx_noise
                gy = self.circle_radius * np.sin(angle) + gy_noise
                collide = False

                for agent in [self.robot] + humans_copy:
                    min_dist = human.radius + agent.radius + self.discomfort_dist
                    if norm((gx - agent.px, gy - agent.py)) < min_dist or \
                            norm((gx - agent.gx, gy - agent.gy)) < min_dist:
                        collide = True
                        break
                if not collide:
                    break

            # Give human new goal
            human.gx = gx
            human.gy = gy
        return

    # calculate the angle between the direction vector of state1 and the vector from state1 to state2
    def calc_offset_angle(self, state1, state2):
        if self.robot.kinematics == 'holonomic':
            real_theta = np.arctan2(state1.vy, state1.vx)
        else:
            real_theta = state1.theta
        # angle of center line of FOV of agent1
        v_fov = [np.cos(real_theta), np.sin(real_theta)]

        # angle between agent1 and agent2
        v_12 = [state2.px - state1.px, state2.py - state1.py]
        # angle between center of FOV and agent 2

        v_fov = v_fov / np.linalg.norm(v_fov)
        v_12 = v_12 / np.linalg.norm(v_12)

        offset = np.arccos(np.clip(np.dot(v_fov, v_12), a_min=-1, a_max=1))
        return offset

    # Caculate whether agent2 is in agent1's FOV
    # Not the same as whether agent1 is in agent2's FOV!!!!
    # arguments:
    # state1, state2: can be agent instance OR state instance
    # robot1: is True if state1 is robot, else is False
    # return value:
    # return True if state2 is visible to state1, else return False
    def detect_visible(self, state1, state2, robot1 = False, custom_fov=None, custom_sensor_range=None):
        if self.robot.kinematics == 'holonomic':
            real_theta = np.arctan2(state1.vy, state1.vx)
        else:
            real_theta = state1.theta
        # angle of center line of FOV of agent1
        v_fov = [np.cos(real_theta), np.sin(real_theta)]

        # angle between agent1 and agent2
        v_12 = [state2.px - state1.px, state2.py - state1.py]
        # angle between center of FOV and agent 2

        v_fov = v_fov / np.linalg.norm(v_fov)
        v_12 = v_12 / np.linalg.norm(v_12)

        offset = np.arccos(np.clip(np.dot(v_fov, v_12), a_min=-1, a_max=1))
        if custom_fov:
            fov = custom_fov
        else:
            if robot1:
                fov = self.robot_fov
            else:
                fov = self.human_fov

        if np.abs(offset) <= fov / 2:
            inFov = True
        else:
            inFov = False

        # detect whether state2 is in state1's sensor_range
        dist = np.linalg.norm([state1.px - state2.px, state1.py - state2.py]) - state1.radius - state2.radius
        if custom_sensor_range:
            inSensorRange = dist <= custom_sensor_range
        else:
            if robot1:
                inSensorRange = dist <= self.robot.sensor_range
            else:
                inSensorRange = True

        return (inFov and inSensorRange)


    # for robot:
    # return only visible humans to robot and number of visible humans
    # and a list of True/False to indicate whether each human is visible
    def get_num_human_in_fov(self):
        human_ids = []
        humans_in_view = []
        num_humans_in_view = 0

        for i in range(self.human_num):
            visible = self.detect_visible(self.robot, self.humans[i], robot1=True)
            if visible:
                humans_in_view.append(self.humans[i])
                num_humans_in_view = num_humans_in_view + 1
                human_ids.append(True)
            else:
                human_ids.append(False)

        return humans_in_view, num_humans_in_view, human_ids



    def last_human_states_obj(self):
        '''
        convert self.last_human_states to a list of observable state objects for old algorithms to use
        '''
        humans = []
        for i in range(self.human_num):
            h = ObservableState(*self.last_human_states[i])
            humans.append(h)
        return humans


    # calculate the reward at current timestep R(s, a)
    def calc_reward(self, action):
        # collision detection
        dmin = float('inf')

        danger_dists = []
        collision = False

        for i, human in enumerate(self.humans):
            dx = human.px - self.robot.px
            dy = human.py - self.robot.py
            closest_dist = (dx ** 2 + dy ** 2) ** (1 / 2) - human.radius - self.robot.radius

            if closest_dist < self.discomfort_dist:
                danger_dists.append(closest_dist)
            if closest_dist < 0:
                collision = True
                # logging.debug("Collision: distance between robot and p{} is {:.2E}".format(i, closest_dist))
                break
            elif closest_dist < dmin:
                dmin = closest_dist


        # check if reaching the goal
        reaching_goal = norm(np.array(self.robot.get_position()) - np.array(self.robot.get_goal_position())) < self.robot.radius

        if self.global_time >= self.time_limit - 1:
            reward = 0
            done = True
            episode_info = Timeout()
        elif collision:
            reward = self.collision_penalty
            done = True
            episode_info = Collision()
        elif reaching_goal:
            reward = self.success_reward
            done = True
            episode_info = ReachGoal()

        elif dmin < self.discomfort_dist:
            # only penalize agent for getting too close if it's visible
            # adjust the reward based on FPS
            # print(dmin)
            reward = (dmin - self.discomfort_dist) * self.discomfort_penalty_factor * self.time_step
            done = False
            episode_info = Danger(dmin)

        else:
            # potential reward
            potential_cur = np.linalg.norm(
                np.array([self.robot.px, self.robot.py]) - np.array(self.robot.get_goal_position()))
            reward = 2 * (-abs(potential_cur) - self.potential)
            self.potential = -abs(potential_cur)

            done = False
            episode_info = Nothing()


        # if the robot is near collision/arrival, it should be able to turn a large angle
        if self.robot.kinematics == 'unicycle':
            # add a rotational penalty
            # if action.r is w, factor = -0.02 if w in [-1.5, 1.5], factor = -0.045 if w in [-1, 1];
            # if action.r is delta theta, factor = -2 if r in [-0.15, 0.15], factor = -4.5 if r in [-0.1, 0.1]
            r_spin = -5 * action.r**2

            # add a penalty for going backwards
            if action.v < 0:
                r_back = -2 * abs(action.v)
            else:
                r_back = 0.

            reward = reward + r_spin + r_back

        return reward, done, episode_info

    # compute the observation
    def generate_ob(self, reset):
        visible_human_states, num_visible_humans, human_visibility = self.get_num_human_in_fov()
        self.update_last_human_states(human_visibility, reset=reset)
        if self.robot.policy.name in ['lstm_ppo', 'srnn']:
            ob = [num_visible_humans]
            # append robot's state
            robotS = np.array(self.robot.get_full_state_list())
            ob.extend(list(robotS))

            ob.extend(list(np.ravel(self.last_human_states)))
            ob = np.array(ob)

        else: # for orca and sf
            ob = self.last_human_states_obj()

        return ob

    def get_human_actions(self):
        # step all humans
        human_actions = []  # a list of all humans' actions

        for i, human in enumerate(self.humans):
            # observation for humans is always coordinates
            ob = []
            for other_human in self.humans:
                if other_human != human:
                    # Else detectable humans are always observable to each other
                    if self.detect_visible(human, other_human):
                        ob.append(other_human.get_observable_state())
                    else:
                        ob.append(self.dummy_human.get_observable_state())

            if self.robot.visible:
                if self.detect_visible(self.humans[i], self.robot):
                    ob += [self.robot.get_observable_state()]
                else:
                    ob += [self.dummy_robot.get_observable_state()]

            human_actions.append(human.act(ob))

        return human_actions

    def step(self, action, update=True):
        """
        Compute actions for all agents, detect collision, update environment and return (ob, reward, done, info)
        """

        # clip the action to obey robot's constraint
        action = self.robot.policy.clip_action(action, self.robot.v_pref)

        human_actions = self.get_human_actions()


        # compute reward and episode info
        reward, done, episode_info = self.calc_reward(action)


        # apply action and update all agents
        self.robot.step(action)
        for i, human_action in enumerate(human_actions):
            self.humans[i].step(human_action)
        self.global_time += self.time_step # max episode length=time_limit/time_step
        self.step_counter=self.step_counter+1

        ##### compute_ob goes here!!!!!
        ob = self.generate_ob(reset=False)


        if self.robot.policy.name in ['srnn']:
            info={'info':episode_info}
        else: # for orca and sf
            info=episode_info

        # Update all humans' goals randomly midway through episode
        if self.random_goal_changing:
            if self.global_time % 5 == 0:
                self.update_human_goals_randomly()


        # Update a specific human's goal once its reached its original goal
        if self.end_goal_changing:
            for human in self.humans:
                if not human.isObstacle and human.v_pref != 0 and norm((human.gx - human.px, human.gy - human.py)) < human.radius:
                    self.update_human_goal(human)

        return ob, reward, done, info

    def render(self, mode='human'):
        """ Render the current status of the environment using matplotlib """
        import matplotlib.pyplot as plt
        import matplotlib.lines as mlines
        from matplotlib import patches

        plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'

        robot_color = 'yellow'
        goal_color = 'red'
        arrow_color = 'red'
        arrow_style = patches.ArrowStyle("->", head_length=4, head_width=2)

        def calcFOVLineEndPoint(ang, point, extendFactor):
            # choose the extendFactor big enough
            # so that the endPoints of the FOVLine is out of xlim and ylim of the figure
            FOVLineRot = np.array([[np.cos(ang), -np.sin(ang), 0],
                                   [np.sin(ang), np.cos(ang), 0],
                                   [0, 0, 1]])
            point.extend([1])
            # apply rotation matrix
            newPoint = np.matmul(FOVLineRot, np.reshape(point, [3, 1]))
            # increase the distance between the line start point and the end point
            newPoint = [extendFactor * newPoint[0, 0], extendFactor * newPoint[1, 0], 1]
            return newPoint



        ax=self.render_axis
        artists=[]

        # add goal
        goal=mlines.Line2D([self.robot.gx], [self.robot.gy], color=goal_color, marker='*', linestyle='None', markersize=15, label='Goal')
        ax.add_artist(goal)
        artists.append(goal)

        # add robot
        robotX,robotY=self.robot.get_position()

        robot=plt.Circle((robotX,robotY), self.robot.radius, fill=True, color=robot_color)
        ax.add_artist(robot)
        artists.append(robot)

        plt.legend([robot, goal], ['Robot', 'Goal'], fontsize=16)


        # compute orientation in each step and add arrow to show the direction
        radius = self.robot.radius
        arrowStartEnd=[]

        robot_theta = self.robot.theta if self.robot.kinematics == 'unicycle' else np.arctan2(self.robot.vy, self.robot.vx)

        arrowStartEnd.append(((robotX, robotY), (robotX + radius * np.cos(robot_theta), robotY + radius * np.sin(robot_theta))))

        for i, human in enumerate(self.humans):
            theta = np.arctan2(human.vy, human.vx)
            arrowStartEnd.append(((human.px, human.py), (human.px + radius * np.cos(theta), human.py + radius * np.sin(theta))))

        arrows = [patches.FancyArrowPatch(*arrow, color=arrow_color, arrowstyle=arrow_style)
                  for arrow in arrowStartEnd]
        for arrow in arrows:
            ax.add_artist(arrow)
            artists.append(arrow)


        # draw FOV for the robot
        # add robot FOV

        if self.robot.FOV < 2 * np.pi:
            FOVAng = self.robot_fov / 2
            FOVLine1 = mlines.Line2D([0, 0], [0, 0], linestyle='--')
            FOVLine2 = mlines.Line2D([0, 0], [0, 0], linestyle='--')


            startPointX = robotX
            startPointY = robotY
            endPointX = robotX + radius * np.cos(robot_theta)
            endPointY = robotY + radius * np.sin(robot_theta)

            # transform the vector back to world frame origin, apply rotation matrix, and get end point of FOVLine
            # the start point of the FOVLine is the center of the robot
            FOVEndPoint1 = calcFOVLineEndPoint(FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)
            FOVLine1.set_xdata(np.array([startPointX, startPointX + FOVEndPoint1[0]]))
            FOVLine1.set_ydata(np.array([startPointY, startPointY + FOVEndPoint1[1]]))
            FOVEndPoint2 = calcFOVLineEndPoint(-FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)
            FOVLine2.set_xdata(np.array([startPointX, startPointX + FOVEndPoint2[0]]))
            FOVLine2.set_ydata(np.array([startPointY, startPointY + FOVEndPoint2[1]]))

            ax.add_artist(FOVLine1)
            ax.add_artist(FOVLine2)
            artists.append(FOVLine1)
            artists.append(FOVLine2)

        # add an arc of robot's sensor range
        sensor_range = plt.Circle(self.robot.get_position(), self.robot.sensor_range, fill=False, linestyle='--')
        ax.add_artist(sensor_range)
        artists.append(sensor_range)
        # add humans and change the color of them based on visibility
        human_circles = [plt.Circle(human.get_position(), human.radius, fill=False) for human in self.humans]


        for i in range(len(self.humans)):
            ax.add_artist(human_circles[i])
            artists.append(human_circles[i])

            # green: visible; red: invisible
            if self.detect_visible(self.robot, self.humans[i], robot1=True):
                human_circles[i].set_color(c='g')
            else:
                human_circles[i].set_color(c='r')

            # label numbers on each human
            # plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, str(self.humans[i].id), color='black', fontsize=12)
            plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, i, color='black', fontsize=12)



        plt.pause(0.1)
        for item in artists:
            item.remove() # there should be a better way to do this. For example,
            # initially use add_artist and draw_artist later on
        for t in ax.texts:
            t.set_visible(False)



================================================
FILE: crowd_sim/envs/crowd_sim_pred.py
================================================
import gym
import numpy as np
from numpy.linalg import norm
import copy

from crowd_sim.envs.utils.action import ActionRot, ActionXY
from crowd_sim.envs.crowd_sim_var_num import CrowdSimVarNum



class CrowdSimPred(CrowdSimVarNum):
    """
    The environment for our model with non-neural network trajectory predictors, including const vel predictor and ground truth predictor
    The number of humans at each timestep can change within a range
    """
    def __init__(self):
        """
        Movement simulation for n+1 agents
        Agent can either be human or robot.
        humans are controlled by a unknown and fixed policy.
        robot is controlled by a known and learnable policy.
        """
        super(CrowdSimPred, self).__init__()
        self.pred_method = None
        self.cur_human_states=None

    def configure(self, config):
        """ read the config to the environment variables """
        super().configure(config)
        # 'const_vel', 'truth', or 'inferred'
        self.pred_method = config.sim.predict_method

    def reset(self, phase='train', test_case=None):
        ob = super().reset(phase=phase, test_case=test_case)
        return ob

    # set observation space and action space
    def set_robot(self, robot):
        self.robot = robot

        # we set the max and min of action/observation space as inf
        # clip the action and observation as you need

        d = {}
        # robot node: num_visible_humans, px, py, r, gx, gy, v_pref, theta
        d['robot_node'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 7,), dtype=np.float32)
        # only consider all temporal edges (human_num+1) and spatial edges pointing to robot (human_num)
        d['temporal_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 2,), dtype=np.float32)

        d['spatial_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf,
                            shape=(self.config.sim.human_num + self.config.sim.human_num_range, int(2*(self.predict_steps+1))),
                            dtype=np.float32)
        # number of humans detected at each timestep
        d['detected_human_num'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1,), dtype=np.float32)
        self.observation_space = gym.spaces.Dict(d)

        high = np.inf * np.ones([2, ])
        self.action_space = gym.spaces.Box(-high, high, dtype=np.float32)


    # reset = True: reset calls this function; reset = False: step calls this function
    def generate_ob(self, reset, sort=True):
        """Generate observation for reset and step functions"""
        ob = {}

        # nodes
        visible_humans, num_visibles, self.human_visibility = self.get_num_human_in_fov()

        ob['robot_node'] = self.robot.get_full_state_list_noV()

        self.prev_human_pos = copy.deepcopy(self.last_human_states)
        self.update_last_human_states(self.human_visibility, reset=reset)

        # edges
        ob['temporal_edges'] = np.array([self.robot.vx, self.robot.vy])

        # initialize storage space for max_human_num humans
        ob['spatial_edges'] = np.ones((self.config.sim.human_num + self.config.sim.human_num_range, int(2*(self.predict_steps+1)))) * np.inf

        # calculate the current and future positions of present humans (first self.human_num humans in ob['spatial_edges'])
        predicted_states = self.calc_human_future_traj(method=self.pred_method) # [pred_steps + 1, human_num, 4]
        # [human_num, pred_steps + 1, 2]
        predicted_pos = np.transpose(predicted_states[:, :, :2], (1, 0, 2)) - np.array([self.robot.px, self.robot.py])

        # [human_num, (pred_steps + 1)*2]
        ob['spatial_edges'][:self.human_num][self.human_visibility] = predicted_pos.reshape((self.human_num, -1))[self.human_visibility]
        # sort all humans by distance (invisible humans will be in the end automatically)
        if self.config.args.sort_humans:
            ob['spatial_edges'] = np.array(sorted(ob['spatial_edges'], key=lambda x: np.linalg.norm(x[:2])))
        ob['spatial_edges'][np.isinf(ob['spatial_edges'])] = 15

        ob['detected_human_num'] = num_visibles
        # if no human is detected, assume there is one dummy human at (15, 15) to make the pack_padded_sequence work
        if ob['detected_human_num'] == 0:
            ob['detected_human_num'] = 1

        return ob


    def step(self, action, update=True):
        """
        step function
        Compute actions for all agents, detect collision, update environment and return (ob, reward, done, info)
        """
        if self.robot.policy.name == 'ORCA':
            # assemble observation for orca: px, py, vx, vy, r
            # include all observable humans from t to t+t_pred
            _, _, human_visibility = self.get_num_human_in_fov()
            # [self.predict_steps + 1, self.human_num, 4]
            human_states = copy.deepcopy(self.calc_human_future_traj(method='truth'))
            # append the radius, convert it to [human_num*(self.predict_steps+1), 5] by treating each predicted pos as a new human
            human_states = np.concatenate((human_states.reshape((-1, 4)),
                                           np.tile(self.last_human_states[:, -1], self.predict_steps+1).reshape((-1, 1))),
                                          axis=1)
            # get orca action
            action = self.robot.act(human_states.tolist())
        else:
            action = self.robot.policy.clip_action(action, self.robot.v_pref)

        if self.robot.kinematics == 'unicycle':
            self.desiredVelocity[0] = np.clip(self.desiredVelocity[0] + action.v, -self.robot.v_pref, self.robot.v_pref)
            action = ActionRot(self.desiredVelocity[0], action.r)

            # if action.r is delta theta
            action = ActionRot(self.desiredVelocity[0], action.r)
            if self.record:
                self.episodeRecoder.unsmoothed_actions.append(list(action))

            action = self.smooth_action(action)



        human_actions = self.get_human_actions()

        # need to update self.human_future_traj in testing to calculate number of intrusions
        if self.phase == 'test':
            # use ground truth future positions of humans
            self.calc_human_future_traj(method='truth')

        # compute reward and episode info
        reward, done, episode_info = self.calc_reward(action, danger_zone='future')


        if self.record:

            self.episodeRecoder.actionList.append(list(action))
            self.episodeRecoder.positionList.append([self.robot.px, self.robot.py])
            self.episodeRecoder.orientationList.append(self.robot.theta)

            if done:
                self.episodeRecoder.robot_goal.append([self.robot.gx, self.robot.gy])
                self.episodeRecoder.saveEpisode(self.case_counter['test'])

        # apply action and update all agents
        self.robot.step(action)
        for i, human_action in enumerate(human_actions):
            self.humans[i].step(human_action)
            self.cur_human_states[i] = np.array([self.humans[i].px, self.humans[i].py, self.humans[i].radius])

        self.global_time += self.time_step # max episode length=time_limit/time_step
        self.step_counter = self.step_counter+1

        info={'info':episode_info}

        # Add or remove at most self.human_num_range humans
        # if self.human_num_range == 0 -> human_num is fixed at all times
        if self.human_num_range > 0 and self.global_time % 5 == 0:
            # remove humans
            if np.random.rand() < 0.5:
                # if no human is visible, anyone can be removed
                if len(self.observed_human_ids) == 0:
                    max_remove_num = self.human_num - 1
                else:
                    max_remove_num = (self.human_num - 1) - max(self.observed_human_ids)
                remove_num = np.random.randint(low=0, high=min(self.human_num_range, max_remove_num) + 1)
                for _ in range(remove_num):
                    self.humans.pop()
                self.human_num = self.human_num - remove_num
                self.last_human_states = self.last_human_states[:self.human_num]
            # add humans
            else:
                add_num = np.random.randint(low=0, high=self.human_num_range + 1)
                if add_num > 0:
                    # set human ids
                    true_add_num = 0
                    for i in range(self.human_num, self.human_num + add_num):
                        if i == self.config.sim.human_num + self.human_num_range:
                            break
                        self.generate_random_human_position(human_num=1)
                        self.humans[i].id = i
                        true_add_num = true_add_num + 1
                    self.human_num = self.human_num + true_add_num
                    if true_add_num > 0:
                        self.last_human_states = np.concatenate((self.last_human_states, np.array([[15, 15, 0, 0, 0.3]]*true_add_num)), axis=0)


        # compute the observation
        ob = self.generate_ob(reset=False)


        # Update all humans' goals randomly midway through episode
        if self.random_goal_changing:
            if self.global_time % 5 == 0:
                self.update_human_goals_randomly()

        # Update a specific human's goal once its reached its original goal
        if self.end_goal_changing and not self.record:
            for i, human in enumerate(self.humans):
                if norm((human.gx - human.px, human.gy - human.py)) < human.radius:
                    self.humans[i] = self.generate_circle_crossing_human()
                    self.humans[i].id = i

        return ob, reward, done, info


    def calc_reward(self, action, danger_zone='future'):
        """Calculate the reward, which includes the functionality reward and social reward"""
        # compute the functionality reward, same as the reward function with no prediction
        reward, done, episode_info = super().calc_reward(action, danger_zone=danger_zone)
        # compute the social reward
        # if the robot collides with future states, give it a collision penalty
        relative_pos = self.human_future_traj[1:, :, :2] - np.array([self.robot.px, self.robot.py])
        # assumes constant radius of each personal zone circle
        collision_idx = np.linalg.norm(relative_pos, axis=-1) < self.robot.radius + self.config.humans.radius # [predict_steps, human_num]

        coefficients = 2. ** np.arange(2, self.predict_steps + 2).reshape((self.predict_steps, 1))  # 4, 8, 16, 32

        collision_penalties = self.collision_penalty / coefficients

        reward_future = collision_idx * collision_penalties # [predict_steps, human_num]
        reward_future = np.min(reward_future)

        return reward + reward_future, done, episode_info


    def render(self, mode='human'):
        """ Render the current status of the environment using matplotlib """
        import matplotlib.pyplot as plt
        import matplotlib.lines as mlines
        from matplotlib import patches

        plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'

        robot_color = 'gold'
        goal_color = 'red'
        arrow_color = 'red'
        arrow_style = patches.ArrowStyle("->", head_length=4, head_width=2)

        def calcFOVLineEndPoint(ang, point, extendFactor):
            # choose the extendFactor big enough
            # so that the endPoints of the FOVLine is out of xlim and ylim of the figure
            FOVLineRot = np.array([[np.cos(ang), -np.sin(ang), 0],
                                   [np.sin(ang), np.cos(ang), 0],
                                   [0, 0, 1]])
            point.extend([1])
            # apply rotation matrix
            newPoint = np.matmul(FOVLineRot, np.reshape(point, [3, 1]))
            # increase the distance between the line start point and the end point
            newPoint = [extendFactor * newPoint[0, 0], extendFactor * newPoint[1, 0], 1]
            return newPoint



        ax=self.render_axis
        artists=[]

        # add goal
        goal=mlines.Line2D([self.robot.gx], [self.robot.gy], color=goal_color, marker='*', linestyle='None', markersize=15, label='Goal')
        ax.add_artist(goal)
        artists.append(goal)

        # add robot
        robotX,robotY=self.robot.get_position()

        robot=plt.Circle((robotX,robotY), self.robot.radius, fill=True, color=robot_color)
        ax.add_artist(robot)
        artists.append(robot)

        # plt.legend([robot, goal], ['Robot', 'Goal'], fontsize=16)


        # compute orientation in each step and add arrow to show the direction
        radius = self.robot.radius
        arrowStartEnd=[]

        robot_theta = self.robot.theta if self.robot.kinematics == 'unicycle' else np.arctan2(self.robot.vy, self.robot.vx)

        arrowStartEnd.append(((robotX, robotY), (robotX + radius * np.cos(robot_theta), robotY + radius * np.sin(robot_theta))))

        for i, human in enumerate(self.humans):
            theta = np.arctan2(human.vy, human.vx)
            arrowStartEnd.append(((human.px, human.py), (human.px + radius * np.cos(theta), human.py + radius * np.sin(theta))))

        arrows = [patches.FancyArrowPatch(*arrow, color=arrow_color, arrowstyle=arrow_style)
                  for arrow in arrowStartEnd]
        for arrow in arrows:
            ax.add_artist(arrow)
            artists.append(arrow)


        # draw FOV for the robot
        # add robot FOV
        if self.robot.FOV < 2 * np.pi:
            FOVAng = self.robot_fov / 2
            FOVLine1 = mlines.Line2D([0, 0], [0, 0], linestyle='--')
            FOVLine2 = mlines.Line2D([0, 0], [0, 0], linestyle='--')


            startPointX = robotX
            startPointY = robotY
            endPointX = robotX + radius * np.cos(robot_theta)
            endPointY = robotY + radius * np.sin(robot_theta)

            # transform the vector back to world frame origin, apply rotation matrix, and get end point of FOVLine
            # the start point of the FOVLine is the center of the robot
            FOVEndPoint1 = calcFOVLineEndPoint(FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)
            FOVLine1.set_xdata(np.array([startPointX, startPointX + FOVEndPoint1[0]]))
            FOVLine1.set_ydata(np.array([startPointY, startPointY + FOVEndPoint1[1]]))
            FOVEndPoint2 = calcFOVLineEndPoint(-FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)
            FOVLine2.set_xdata(np.array([startPointX, startPointX + FOVEndPoint2[0]]))
            FOVLine2.set_ydata(np.array([startPointY, startPointY + FOVEndPoint2[1]]))

            ax.add_artist(FOVLine1)
            ax.add_artist(FOVLine2)
            artists.append(FOVLine1)
            artists.append(FOVLine2)

        # add an arc of robot's sensor range
        sensor_range = plt.Circle(self.robot.get_position(), self.robot.sensor_range + self.robot.radius+self.config.humans.radius, fill=False, linestyle='--')
        ax.add_artist(sensor_range)
        artists.append(sensor_range)

        # add humans and change the color of them based on visibility
        human_circles = [plt.Circle(human.get_position(), human.radius, fill=False, linewidth=1.5) for human in self.humans]
        # human_circles = [plt.Circle(human.get_position(), 2, fill=False) for human in self.humans]

        # hardcoded for now
        actual_arena_size = self.arena_size + 0.5
        # plot current human states
        for i in range(len(self.humans)):
            ax.add_artist(human_circles[i])
            artists.append(human_circles[i])

            # green: visible; red: invisible
            if self.detect_visible(self.robot, self.humans[i], robot1=True):
                human_circles[i].set_color(c='b')
            else:
                human_circles[i].set_color(c='r')


            if -actual_arena_size <= self.humans[i].px <= actual_arena_size and -actual_arena_size <= self.humans[
                i].py <= actual_arena_size:
                # label numbers on each human
                # plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, str(self.humans[i].id), color='black', fontsize=12)
                plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, i, color='black', fontsize=12)


        # save plot for slide show
        if self.config.save_slides:
            import os
            folder_path = os.path.join(self.config.save_path, str(self.rand_seed)+'pred')
            if not os.path.isdir(folder_path):
                os.makedirs(folder_path, exist_ok = True)
            plt.savefig(os.path.join(folder_path, str(self.step_counter)+'.png'), dpi=300)

        plt.pause(0.1)
        for item in artists:
            item.remove() # there should be a better way to do this. For example,
            # initially use add_artist and draw_artist later on
        for t in ax.texts:
            t.set_visible(False)


================================================
FILE: crowd_sim/envs/crowd_sim_pred_real_gst.py
================================================
import gym
import numpy as np

from crowd_sim.envs.crowd_sim_pred import CrowdSimPred


class CrowdSimPredRealGST(CrowdSimPred):
    '''
    Same as CrowdSimPred, except that
    The future human traj in 'spatial_edges' are dummy placeholders
    and will be replaced by the outputs of a real GST pred model in the wrapper function in vec_pretext_normalize.py
    '''
    def __init__(self):
        """
        Movement simulation for n+1 agents
        Agent can either be human or robot.
        humans are controlled by a unknown and fixed policy.
        robot is controlled by a known and learnable policy.
        """
        super(CrowdSimPredRealGST, self).__init__()
        self.pred_method = None

        # to receive data from gst pred model
        self.gst_out_traj = None


    def set_robot(self, robot):
        """set observation space and action space"""
        self.robot = robot

        # we set the max and min of action/observation space as inf
        # clip the action and observation as you need

        d = {}
        # robot node: num_visible_humans, px, py, r, gx, gy, v_pref, theta
        d['robot_node'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 7,), dtype=np.float32)
        # only consider all temporal edges (human_num+1) and spatial edges pointing to robot (human_num)
        d['temporal_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 2,), dtype=np.float32)
        '''
        format of spatial_edges: [max_human_num, [state_t, state_(t+1), ..., state(t+self.pred_steps)]]
        '''

        # predictions only include mu_x, mu_y (or px, py)
        self.spatial_edge_dim = int(2*(self.predict_steps+1))

        d['spatial_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf,
                            shape=(self.config.sim.human_num + self.config.sim.human_num_range, self.spatial_edge_dim),
                            dtype=np.float32)

        # masks for gst pred model
        # whether each human is visible to robot (ordered by human ID, should not be sorted)
        d['visible_masks'] = gym.spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(self.config.sim.human_num + self.config.sim.human_num_range,),
                                            dtype=np.bool)

        # number of humans detected at each timestep
        d['detected_human_num'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1,), dtype=np.float32)

        self.observation_space = gym.spaces.Dict(d)

        high = np.inf * np.ones([2, ])
        self.action_space = gym.spaces.Box(-high, high, dtype=np.float32)

    def talk2Env(self, data):
        """
        Call this function when you want extra information to send to/recv from the env
        :param data: data that is sent from gst_predictor network to the env, it has 2 parts:
        output predicted traj and output masks
        :return: True means received
        """
        self.gst_out_traj=data
        return True


    # reset = True: reset calls this function; reset = False: step calls this function
    def generate_ob(self, reset, sort=False):
        """Generate observation for reset and step functions"""
        # since gst pred model needs ID tracking, don't sort all humans
        # inherit from crowd_sim_lstm, not crowd_sim_pred to avoid computation of true pred!
        # sort=False because we will sort in wrapper in vec_pretext_normalize.py later
        parent_ob = super(CrowdSimPred, self).generate_ob(reset=reset, sort=False)

        # add additional keys, removed unused keys
        ob = {}

        ob['visible_masks'] = parent_ob['visible_masks']
        ob['robot_node'] = parent_ob['robot_node']
        ob['temporal_edges'] = parent_ob['temporal_edges']

        ob['spatial_edges'] = np.tile(parent_ob['spatial_edges'], self.predict_steps+1)

        ob['detected_human_num'] = parent_ob['detected_human_num']

        return ob


    def calc_reward(self, action, danger_zone='future'):
        # inherit from crowd_sim_lstm, not crowd_sim_pred to prevent social reward calculation
        # since we don't know the GST predictions yet
        reward, done, episode_info = super(CrowdSimPred, self).calc_reward(action, danger_zone=danger_zone)
        return reward, done, episode_info


    def render(self, mode='human'):
        """
        render function
        use talk2env to plot the predicted future traj of humans
        """
        import matplotlib.pyplot as plt
        import matplotlib.lines as mlines
        from matplotlib import patches

        plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'

        robot_color = 'gold'
        goal_color = 'red'
        arrow_color = 'red'
        arrow_style = patches.ArrowStyle("->", head_length=4, head_width=2)

        def calcFOVLineEndPoint(ang, point, extendFactor):
            # choose the extendFactor big enough
            # so that the endPoints of the FOVLine is out of xlim and ylim of the figure
            FOVLineRot = np.array([[np.cos(ang), -np.sin(ang), 0],
                                   [np.sin(ang), np.cos(ang), 0],
                                   [0, 0, 1]])
            point.extend([1])
            # apply rotation matrix
            newPoint = np.matmul(FOVLineRot, np.reshape(point, [3, 1]))
            # increase the distance between the line start point and the end point
            newPoint = [extendFactor * newPoint[0, 0], extendFactor * newPoint[1, 0], 1]
            return newPoint



        ax=self.render_axis
        artists=[]

        # add goal
        goal=mlines.Line2D([self.robot.gx], [self.robot.gy], color=goal_color, marker='*', linestyle='None', markersize=15, label='Goal')
        ax.add_artist(goal)
        artists.append(goal)

        # add robot
        robotX,robotY=self.robot.get_position()

        robot=plt.Circle((robotX,robotY), self.robot.radius, fill=True, color=robot_color)
        ax.add_artist(robot)
        artists.append(robot)


        # compute orientation in each step and add arrow to show the direction
        radius = self.robot.radius
        arrowStartEnd=[]

        robot_theta = self.robot.theta if self.robot.kinematics == 'unicycle' else np.arctan2(self.robot.vy, self.robot.vx)

        arrowStartEnd.append(((robotX, robotY), (robotX + radius * np.cos(robot_theta), robotY + radius * np.sin(robot_theta))))

        for i, human in enumerate(self.humans):
            theta = np.arctan2(human.vy, human.vx)
            arrowStartEnd.append(((human.px, human.py), (human.px + radius * np.cos(theta), human.py + radius * np.sin(theta))))

        arrows = [patches.FancyArrowPatch(*arrow, color=arrow_color, arrowstyle=arrow_style)
                  for arrow in arrowStartEnd]
        for arrow in arrows:
            ax.add_artist(arrow)
            artists.append(arrow)


        # draw FOV for the robot
        # add robot FOV
        if self.robot.FOV < 2 * np.pi:
            FOVAng = self.robot_fov / 2
            FOVLine1 = mlines.Line2D([0, 0], [0, 0], linestyle='--')
            FOVLine2 = mlines.Line2D([0, 0], [0, 0], linestyle='--')


            startPointX = robotX
            startPointY = robotY
            endPointX = robotX + radius * np.cos(robot_theta)
            endPointY = robotY + radius * np.sin(robot_theta)

            # transform the vector back to world frame origin, apply rotation matrix, and get end point of FOVLine
            # the start point of the FOVLine is the center of the robot
            FOVEndPoint1 = calcFOVLineEndPoint(FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)
            FOVLine1.set_xdata(np.array([startPointX, startPointX + FOVEndPoint1[0]]))
            FOVLine1.set_ydata(np.array([startPointY, startPointY + FOVEndPoint1[1]]))
            FOVEndPoint2 = calcFOVLineEndPoint(-FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)
            FOVLine2.set_xdata(np.array([startPointX, startPointX + FOVEndPoint2[0]]))
            FOVLine2.set_ydata(np.array([startPointY, startPointY + FOVEndPoint2[1]]))

            ax.add_artist(FOVLine1)
            ax.add_artist(FOVLine2)
            artists.append(FOVLine1)
            artists.append(FOVLine2)

        # add an arc of robot's sensor range
        sensor_range = plt.Circle(self.robot.get_position(), self.robot.sensor_range + self.robot.radius+self.config.humans.radius, fill=False, linestyle='--')
        ax.add_artist(sensor_range)
        artists.append(sensor_range)

        # add humans and change the color of them based on visibility
        human_circles = [plt.Circle(human.get_position(), human.radius, fill=False, linewidth=1.5) for human in self.humans]

        # hardcoded for now
        actual_arena_size = self.arena_size + 0.5

        # plot the current human states
        for i in range(len(self.humans)):
            ax.add_artist(human_circles[i])
            artists.append(human_circles[i])

            # green: visible; red: invisible
            if self.human_visibility[i]:
                human_circles[i].set_color(c='b')
            else:
                human_circles[i].set_color(c='r')

            if -actual_arena_size <= self.humans[i].px <= actual_arena_size and -actual_arena_size <= self.humans[
                i].py <= actual_arena_size:
                # label numbers on each human
                # plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, str(self.humans[i].id), color='black', fontsize=12)
                plt.text(self.humans[i].px , self.humans[i].py , i, color='black', fontsize=12)

        # plot predicted human positions
        for i in range(len(self.humans)):
            # add future predicted positions of each human
            if self.gst_out_traj is not None:
                for j in range(self.predict_steps):
                    circle = plt.Circle(self.gst_out_traj[i, (2 * j):(2 * j + 2)] + np.array([robotX, robotY]),
                                        self.config.humans.radius, fill=False, color='tab:orange', linewidth=1.5)
                    # circle = plt.Circle(np.array([robotX, robotY]),
                    #                     self.humans[i].radius, fill=False)
                    ax.add_artist(circle)
                    artists.append(circle)

        plt.pause(0.1)
        for item in artists:
            item.remove() # there should be a better way to do this. For example,
            # initially use add_artist and draw_artist later on
        for t in ax.texts:
            t.set_visible(False)


================================================
FILE: crowd_sim/envs/crowd_sim_var_num.py
================================================
import gym
import numpy as np
from numpy.linalg import norm
import copy

from crowd_sim.envs.utils.action import ActionRot, ActionXY
from crowd_sim.envs import *
from crowd_sim.envs.utils.info import *
from crowd_sim.envs.utils.human import Human
from crowd_sim.envs.utils.state import JointState


class CrowdSimVarNum(CrowdSim):
    """
    The environment for our model with no trajectory prediction, or the baseline models with no prediction
    The number of humans at each timestep can change within a range
    """
    def __init__(self):
        """
        Movement simulation for n+1 agents
        Agent can either be human or robot.
        humans are controlled by a unknown and fixed policy.
        robot is controlled by a known and learnable policy.
        """
        super().__init__()
        self.id_counter = None
        self.observed_human_ids = None
        self.pred_method = None


    def configure(self, config):
        """ read the config to the environment variables """
        super(CrowdSimVarNum, self).configure(config)
        self.action_type=config.action_space.kinematics

    # set observation space and action space
    def set_robot(self, robot):
        self.robot = robot

        # we set the max and min of action/observation space as inf
        # clip the action and observation as you need

        d={}
        # robot node: px, py, r, gx, gy, v_pref, theta
        d['robot_node'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1,7,), dtype = np.float32)
        # only consider all temporal edges (human_num+1) and spatial edges pointing to robot (human_num)
        d['temporal_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 2,), dtype=np.float32)
        d['spatial_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(self.max_human_num, 2), dtype=np.float32)
        # number of humans detected at each timestep
        d['detected_human_num'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, ), dtype=np.float32)
        # whether each human is visible to robot (ordered by human ID, should not be sorted)
        d['visible_masks'] = gym.spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(self.max_human_num,),
                                            dtype=np.bool)
        self.observation_space=gym.spaces.Dict(d)

        high = np.inf * np.ones([2, ])
        self.action_space = gym.spaces.Box(-high, high, dtype=np.float32)


    # set robot initial state and generate all humans for reset function
    # for crowd nav: human_num == self.human_num
    # for leader follower: human_num = self.human_num - 1
    def generate_robot_humans(self, phase, human_num=None):
        if self.record:
            px, py = 0, 0
            gx, gy = 0, -1.5
            self.robot.set(px, py, gx, gy, 0, 0, np.pi / 2)
            # generate a dummy human
            for i in range(self.max_human_num):
                human = Human(self.config, 'humans')
                human.set(15, 15, 15, 15, 0, 0, 0)
                human.isObstacle = True
                self.humans.append(human)

        else:
            # for sim2real
            if self.robot.kinematics == 'unicycle':
                # generate robot
                angle = np.random.uniform(0, np.pi * 2)
                px = self.arena_size * np.cos(angle)
                py = self.arena_size * np.sin(angle)
                while True:
                    gx, gy = np.random.uniform(-self.arena_size, self.arena_size, 2)
                    if np.linalg.norm([px - gx, py - gy]) >= 4:  # 1 was 6
                        break
                self.robot.set(px, py, gx, gy, 0, 0, np.random.uniform(0, 2 * np.pi))  # randomize init orientation
                # 1 to 4 humans
                self.human_num = np.random.randint(1, self.config.sim.human_num + self.human_num_range + 1)
                # print('human_num:', self.human_num)
                # self.human_num = 4


            # for sim exp
            else:
                # generate robot
                while True:
                    px, py, gx, gy = np.random.uniform(-self.arena_size, self.arena_size, 4)
                    if np.linalg.norm([px - gx, py - gy]) >= 8: # 6
                        break
                self.robot.set(px, py, gx, gy, 0, 0, np.pi / 2)
                # generate humans
                self.human_num = np.random.randint(low=self.config.sim.human_num - self.human_num_range,
                                                   high=self.config.sim.human_num + self.human_num_range + 1)


            self.generate_random_human_position(human_num=self.human_num)
            self.last_human_states = np.zeros((self.human_num, 5))
            # set human ids
            for i in range(self.human_num):
                self.humans[i].id = i



    # generate a human that starts on a circle, and its goal is on the opposite side of the circle
    def generate_circle_crossing_human(self):
        human = Human(self.config, 'humans')
        if self.randomize_attributes:
            human.sample_random_attributes()

        while True:
            angle = np.random.random() * np.pi * 2
            # add some noise to simulate all the possible cases robot could meet with human
            noise_range = 2
            px_noise = np.random.uniform(0, 1) * noise_range
            py_noise = np.random.uniform(0, 1) * noise_range
            px = self.circle_radius * np.cos(angle) + px_noise
            py = self.circle_radius * np.sin(angle) + py_noise
            collide = False

            for i, agent in enumerate([self.robot] + self.humans):
                # keep human at least 3 meters away from robot
                if self.robot.kinematics == 'unicycle' and i == 0:
                    min_dist = self.circle_radius / 2  # Todo: if circle_radius <= 4, it will get stuck here
                else:
                    min_dist = human.radius + agent.radius + self.discomfort_dist
                if norm((px - agent.px, py - agent.py)) < min_dist or \
                        norm((px - agent.gx, py - agent.gy)) < min_dist:
                    collide = True
                    break
            if not collide:
                break

        human.set(px, py, -px, -py, 0, 0, 0)

        return human

    # calculate the ground truth future trajectory of humans
    # if robot is visible: assume linear motion for robot
    # ret val: [self.predict_steps + 1, self.human_num, 4]
    # method: 'truth' or 'const_vel' or 'inferred'
    def calc_human_future_traj(self, method):
        # if the robot is invisible, it won't affect human motions
        # else it will
        human_num = self.human_num + 1 if self.robot.visible else self.human_num
        # buffer to store predicted future traj of all humans [px, py, vx, vy]
        # [time, human id, features]
        if method == 'truth':
            self.human_future_traj = np.zeros((self.buffer_len + 1, human_num, 4))
        elif method == 'const_vel':
            self.human_future_traj = np.zeros((self.predict_steps + 1, human_num, 4))
        else:
            raise NotImplementedError

        # initialize the 0-th position with current states
        for i in range(self.human_num):
            # use true states for now, to count for invisible humans' influence on visible humans
            # take px, py, vx, vy, remove radius
            self.human_future_traj[0, i] = np.array(self.humans[i].get_observable_state_list()[:-1])

        # if we are using constant velocity model, we need to use displacement to approximate velocity (pos_t - pos_t-1)
        # we shouldn't use true velocity for fair comparison with GST inferred pred
        if method == 'const_vel':
            self.human_future_traj[0, :, 2:4] = self.prev_human_pos[:, 2:4]

        # add robot to the end of the array
        if self.robot.visible:
            self.human_future_traj[0, -1] = np.array(self.robot.get_observable_state_list()[:-1])

        if method == 'truth':
            for i in range(1, self.buffer_len + 1):
                for j in range(self.human_num):
                    # prepare joint state for all humans
                    full_state = np.concatenate(
                        (self.human_future_traj[i - 1, j], self.humans[j].get_full_state_list()[4:]))
                    observable_states = []
                    for k in range(self.human_num):
                        if j == k:
                            continue
                        observable_states.append(
                            np.concatenate((self.human_future_traj[i - 1, k], [self.humans[k].radius])))

                    # use joint states to get actions from the states in the last step (i-1)
                    action = self.humans[j].act_joint_state(JointState(full_state, observable_states))

                    # step all humans with action
                    self.human_future_traj[i, j] = self.humans[j].one_step_lookahead(
                        self.human_future_traj[i - 1, j, :2], action)

                if self.robot.visible:
                    action = ActionXY(*self.human_future_traj[i - 1, -1, 2:])
                    # update px, py, vx, vy
                    self.human_future_traj[i, -1] = self.robot.one_step_lookahead(self.human_future_traj[i - 1, -1, :2],
                                                                                  action)
            # only take predictions every self.pred_interval steps
            self.human_future_traj = self.human_future_traj[::self.pred_interval]
        # for const vel model
        elif method == 'const_vel':
            # [self.pred_steps+1, human_num, 4]
            self.human_future_traj = np.tile(self.human_future_traj[0].reshape(1, human_num, 4), (self.predict_steps+1, 1, 1))
            # [self.pred_steps+1, human_num, 2]
            pred_timestep = np.tile(np.arange(0, self.predict_steps+1, dtype=float).reshape((self.predict_steps+1, 1, 1)) * self.time_step * self.pred_interval,
                                    [1, human_num, 2])
            pred_disp = pred_timestep * self.human_future_traj[:, :, 2:]
            self.human_future_traj[:, :, :2] = self.human_future_traj[:, :, :2] + pred_disp
        else:
            raise NotImplementedError

        # remove the robot if it is visible
        if self.robot.visible:
            self.human_future_traj = self.human_future_traj[:, :-1]


        # remove invisible humans
        self.human_future_traj[:, np.logical_not(self.human_visibility), :2] = 15
        self.human_future_traj[:, np.logical_not(self.human_visibility), 2:] = 0

        return self.human_future_traj


    # reset = True: reset calls this function; reset = False: step calls this function
    # sorted: sort all humans by distance to robot or not
    def generate_ob(self, reset, sort=False):
        """Generate observation for reset and step functions"""
        ob = {}

        # nodes
        visible_humans, num_visibles, self.human_visibility = self.get_num_human_in_fov()

        ob['robot_node'] = self.robot.get_full_state_list_noV()

        prev_human_pos = copy.deepcopy(self.last_human_states)
        self.update_last_human_states(self.human_visibility, reset=reset)

        # edges
        ob['temporal_edges'] = np.array([self.robot.vx, self.robot.vy])

        # ([relative px, relative py, disp_x, disp_y], human id)
        all_spatial_edges = np.ones((self.max_human_num, 2)) * np.inf

        for i in range(self.human_num):
            if self.human_visibility[i]:
                # vector pointing from human i to robot
                relative_pos = np.array(
                    [self.last_human_states[i, 0] - self.robot.px, self.last_human_states[i, 1] - self.robot.py])
                all_spatial_edges[self.humans[i].id, :2] = relative_pos

        ob['visible_masks'] = np.zeros(self.max_human_num, dtype=np.bool)
        # sort all humans by distance (invisible humans will be in the end automatically)
        if sort:
            ob['spatial_edges'] = np.array(sorted(all_spatial_edges, key=lambda x: np.linalg.norm(x)))
            # after sorting, the visible humans must be in the front
            if num_visibles > 0:
                ob['visible_masks'][:num_visibles] = True
        else:
            ob['spatial_edges'] = all_spatial_edges
            ob['visible_masks'][:self.human_num] = self.human_visibility
        ob['spatial_edges'][np.isinf(ob['spatial_edges'])] = 15
        ob['detected_human_num'] = num_visibles
        # if no human is detected, assume there is one dummy human at (15, 15) to make the pack_padded_sequence work
        if ob['detected_human_num'] == 0:
            ob['detected_human_num'] = 1

        # update self.observed_human_ids
        self.observed_human_ids = np.where(self.human_visibility)[0]

        self.ob = ob

        return ob



    # Update the specified human's end goals in the environment randomly
    def update_human_pos_goal(self, human):
        while True:
            angle = np.random.random() * np.pi * 2
            # add some noise to simulate all the possible cases robot could meet with human
            v_pref = 1.0 if human.v_pref == 0 else human.v_pref
            gx_noise = (np.random.random() - 0.5) * v_pref
            gy_noise = (np.random.random() - 0.5) * v_pref
            gx = self.circle_radius * np.cos(angle) + gx_noise
            gy = self.circle_radius * np.sin(angle) + gy_noise
            collide = False

            if not collide:
                break

        # Give human new goal
        human.gx = gx
        human.gy = gy


    def reset(self, phase='train', test_case=None):
        """
        Reset the environment
        :return:
        """

        if self.phase is not None:
            phase = self.phase
        if self.test_case is not None:
            test_case=self.test_case

        if self.robot is None:
            raise AttributeError('robot has to be set!')
        assert phase in ['train', 'val', 'test']
        if test_case is not None:
            self.case_counter[phase] = test_case # test case is passed in to calculate specific seed to generate case
        self.global_time = 0
        self.step_counter = 0
        self.id_counter = 0


        self.humans = []
        # self.human_num = self.config.sim.human_num
        # initialize a list to store observed humans' IDs
        self.observed_human_ids = []

        # train, val, and test phase should start with different seed.
        # case capacity: the maximum number for train(max possible int -2000), val(1000), and test(1000)
        # val start from seed=0, test start from seed=case_capacity['val']=1000
        # train start from self.case_capacity['val'] + self.case_capacity['test']=2000
        counter_offset = {'train': self.case_capacity['val'] + self.case_capacity['test'],
                          'val': 0, 'test': self.case_capacity['val']}

        # here we use a counter to calculate seed. The seed=counter_offset + case_counter
        self.rand_seed = counter_offset[phase] + self.case_counter[phase] + self.thisSeed
        np.random.seed(self.rand_seed)

        self.generate_robot_humans(phase)

        # record px, py, r of each human, used for crowd_sim_pc env
        self.cur_human_states = np.zeros((self.max_human_num, 3))
        for i in range(self.human_num):
            self.cur_human_states[i] = np.array([self.humans[i].px, self.humans[i].py, self.humans[i].radius])

        # case size is used to make sure that the case_counter is always between 0 and case_size[phase]
        self.case_counter[phase] = (self.case_counter[phase] + int(1*self.nenv)) % self.case_size[phase]

        # initialize potential and angular potential
        rob_goal_vec = np.array([self.robot.gx, self.robot.gy]) - np.array([self.robot.px, self.robot.py])
        self.potential = -abs(np.linalg.norm(rob_goal_vec))
        self.angle = np.arctan2(rob_goal_vec[1], rob_goal_vec[0]) - self.robot.theta
        if self.angle > np.pi:
            # self.abs_angle = np.pi * 2 - self.abs_angle
            self.angle = self.angle - 2 * np.pi
        elif self.angle < -np.pi:
            self.angle = self.angle + 2 * np.pi

        # get robot observation
        ob = self.generate_ob(reset=True, sort=self.config.args.sort_humans)

        return ob


    def step(self, action, update=True):
        """
        Step the environment forward for one timestep
        Compute actions for all agents, detect collision, update environment and return (ob, reward, done, info)
        """
        if self.robot.policy.name in ['ORCA', 'social_force']:
            # assemble observation for orca: px, py, vx, vy, r
            human_states = copy.deepcopy(self.last_human_states)
            # get orca action
            action = self.robot.act(human_states.tolist())
        else:
            action = self.robot.policy.clip_action(action, self.robot.v_pref)

        if self.robot.kinematics == 'unicycle':
            self.desiredVelocity[0] = np.clip(self.desiredVelocity[0] + action.v, -self.robot.v_pref, self.robot.v_pref)
            action = ActionRot(self.desiredVelocity[0], action.r)

        human_actions = self.get_human_actions()

        # need to update self.human_future_traj in testing to calculate number of intrusions
        if self.phase == 'test':
            # use ground truth future positions of humans
            self.calc_human_future_traj(method='truth')

        # compute reward and episode info
        reward, done, episode_info = self.calc_reward(action, danger_zone='future')


        # apply action and update all agents
        self.robot.step(action)
        for i, human_action in enumerate(human_actions):
            self.humans[i].step(human_action)

        self.global_time += self.time_step # max episode length=time_limit/time_step
        self.step_counter =self.step_counter+1

        info={'info':episode_info}

        # Add or remove at most self.human_num_range humans
        # if self.human_num_range == 0 -> human_num is fixed at all times
        if self.human_num_range > 0 and self.global_time % 5 == 0:
            # remove humans
            if np.random.rand() < 0.5:
                # print('before:', self.human_num,', self.min_human_num:', self.min_human_num)
                # if no human is visible, anyone can be removed
                if len(self.observed_human_ids) == 0:
                    max_remove_num = self.human_num - self.min_human_num
                    # print('max_remove_num, invisible', max_remove_num)
                else:
                    max_remove_num = min(self.human_num - self.min_human_num, (self.human_num - 1) - max(self.observed_human_ids))
                    # print('max_remove_num, visible', max_remove_num)
                remove_num = np.random.randint(low=0, high=max_remove_num + 1)
                for _ in range(remove_num):
                    self.humans.pop()
                self.human_num = self.human_num - remove_num
                # print('after:', self.human_num)
                self.last_human_states = self.last_human_states[:self.human_num]
            # add humans
            else:
                add_num = np.random.randint(low=0, high=self.human_num_range + 1)
                if add_num > 0:
                    # set human ids
                    true_add_num = 0
                    for i in range(self.human_num, self.human_num + add_num):
                        if i == self.config.sim.human_num + self.human_num_range:
                            break
                        self.generate_random_human_position(human_num=1)
                        self.humans[i].id = i
                        true_add_num = true_add_num + 1
                    self.human_num = self.human_num + true_add_num
                    if true_add_num > 0:
                        self.last_human_states = np.concatenate((self.last_human_states, np.array([[15, 15, 0, 0, 0.3]]*true_add_num)), axis=0)

        assert self.min_human_num <= self.human_num <= self.max_human_num

        # compute the observation
        ob = self.generate_ob(reset=False, sort=self.config.args.sort_humans)


        # Update all humans' goals randomly midway through episode
        if self.random_goal_changing:
            if self.global_time % 5 == 0:
                self.update_human_goals_randomly()

        # Update a specific human's goal once its reached its original goal
        if self.end_goal_changing:
            for i, human in enumerate(self.humans):
                if norm((human.gx - human.px, human.gy - human.py)) < human.radius:
                    if self.robot.kinematics == 'holonomic':
                        self.humans[i] = self.generate_circle_crossing_human()
                        self.humans[i].id = i
                    else:
                        self.update_human_goal(human)

        return ob, reward, done, info

    # find R(s, a)
    # danger_zone: how to define the personal_zone (if the robot intrudes into this zone, the info will be Danger)
    # circle (traditional) or future (based on true future traj of humans)
    def calc_reward(self, action, danger_zone='circle'):
        # collision detection
        dmin = float('inf')

        danger_dists = []
        collision = False

        # collision check with humans
        for i, human in enumerate(self.humans):
            dx = human.px - self.robot.px
            dy = human.py - self.robot.py
            closest_dist = (dx ** 2 + dy ** 2) ** (1 / 2) - human.radius - self.robot.radius

            if closest_dist < self.discomfort_dist:
                danger_dists.append(closest_dist)
            if closest_dist < 0:
                collision = True
                break
            elif closest_dist < dmin:
                dmin = closest_dist


        # check if reaching the goal
        if self.robot.kinematics == 'unicycle':
            goal_radius = 0.6
        else:
            goal_radius = self.robot.radius
        reaching_goal = norm(
            np.array(self.robot.get_position()) - np.array(self.robot.get_goal_position())) < goal_radius

        # use danger_zone to determine the condition for Danger
        if danger_zone == 'circle' or self.phase == 'train':
            danger_cond = dmin < self.discomfort_dist
            min_danger_dist = 0
        else:
            # if the robot collides with future states, give it a collision penalty
            relative_pos = self.human_future_traj[1:, :, :2] - np.array([self.robot.px, self.robot.py])
            relative_dist = np.linalg.norm(relative_pos, axis=-1)

            collision_idx = relative_dist < self.robot.radius + self.config.humans.radius  # [predict_steps, human_num]

            danger_cond = np.any(collision_idx)
            # if robot is dangerously close to any human, calculate the min distance between robot and its closest human
            if danger_cond:
                min_danger_dist = np.amin(relative_dist[collision_idx])
            else:
                min_danger_dist = 0

        if self.global_time >= self.time_limit - 1:
            reward = 0
            done = True
            episode_info = Timeout()
        elif collision:
            reward = self.collision_penalty
            done = True
            episode_info = Collision()
        elif reaching_goal:
            reward = self.success_reward
            done = True
            episode_info = ReachGoal()

        elif danger_cond:
            # only penalize agent for getting too close if it's visible
            # adjust the reward based on FPS
            # print(dmin)
            reward = (dmin - self.discomfort_dist) * self.discomfort_penalty_factor * self.time_step
            done = False
            episode_info = Danger(min_danger_dist)

        else:
            # potential reward
            if self.robot.kinematics == 'holonomic':
                pot_factor = 2
            else:
                pot_factor = 3
            potential_cur = np.linalg.norm(
                np.array([self.robot.px, self.robot.py]) - np.array(self.robot.get_goal_position()))
            reward = pot_factor * (-abs(potential_cur) - self.potential)
            self.potential = -abs(potential_cur)

            done = False
            episode_info = Nothing()

        # if the robot is near collision/arrival, it should be able to turn a large angle
        if self.robot.kinematics == 'unicycle':
            # add a rotational penalty
            r_spin = -4.5 * action.r ** 2

            # add a penalty for going backwards
            if action.v < 0:
                r_back = -2 * abs(action.v)
            else:
                r_back = 0.

            reward = reward + r_spin + r_back

        return reward, done, episode_info


    def render(self, mode='human'):
        """ Render the current status of the environment using matplotlib """
        import matplotlib.pyplot as plt
        import matplotlib.lines as mlines
        from matplotlib import patches

        plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'

        robot_color = 'gold'
        goal_color = 'red'
        arrow_color = 'red'
        arrow_style = patches.ArrowStyle("->", head_length=4, head_width=2)

        def calcFOVLineEndPoint(ang, point, extendFactor):
            # choose the extendFactor big enough
            # so that the endPoints of the FOVLine is out of xlim and ylim of the figure
            FOVLineRot = np.array([[np.cos(ang), -np.sin(ang), 0],
                                   [np.sin(ang), np.cos(ang), 0],
                                   [0, 0, 1]])
            point.extend([1])
            # apply rotation matrix
            newPoint = np.matmul(FOVLineRot, np.reshape(point, [3, 1]))
            # increase the distance between the line start point and the end point
            newPoint = [extendFactor * newPoint[0, 0], extendFactor * newPoint[1, 0], 1]
            return newPoint



        ax=self.render_axis
        artists=[]

        # add goal
        goal=mlines.Line2D([self.robot.gx], [self.robot.gy], color=goal_color, marker='*', linestyle='None', markersize=15, label='Goal')
        ax.add_artist(goal)
        artists.append(goal)

        # add robot
        robotX,robotY=self.robot.get_position()

        robot=plt.Circle((robotX,robotY), self.robot.radius, fill=True, color=robot_color)
        ax.add_artist(robot)
        artists.append(robot)

        # plt.legend([robot, goal], ['Robot', 'Goal'], fontsize=16)


        # compute orientation in each step and add arrow to show the direction
        radius = self.robot.radius
        arrowStartEnd=[]

        robot_theta = self.robot.theta if self.robot.kinematics == 'unicycle' else np.arctan2(self.robot.vy, self.robot.vx)

        arrowStartEnd.append(((robotX, robotY), (robotX + radius * np.cos(robot_theta), robotY + radius * np.sin(robot_theta))))

        for i, human in enumerate(self.humans):
            theta = np.arctan2(human.vy, human.vx)
            arrowStartEnd.append(((human.px, human.py), (human.px + radius * np.cos(theta), human.py + radius * np.sin(theta))))

        arrows = [patches.FancyArrowPatch(*arrow, color=arrow_color, arrowstyle=arrow_style)
                  for arrow in arrowStartEnd]
        for arrow in arrows:
            ax.add_artist(arrow)
            artists.append(arrow)


        # draw FOV for the robot
        # add robot FOV
        if self.robot.FOV < 2 * np.pi:
            FOVAng = self.robot_fov / 2
            FOVLine1 = mlines.Line2D([0, 0], [0, 0], linestyle='--')
            FOVLine2 = mlines.Line2D([0, 0], [0, 0], linestyle='--')


            startPointX = robotX
            startPointY = robotY
            endPointX = robotX + radius * np.cos(robot_theta)
            endPointY = robotY + radius * np.sin(robot_theta)

            # transform the vector back to world frame origin, apply rotation matrix, and get end point of FOVLine
            # the start point of the FOVLine is the center of the robot
            FOVEndPoint1 = calcFOVLineEndPoint(FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)
            FOVLine1.set_xdata(np.array([startPointX, startPointX + FOVEndPoint1[0]]))
            FOVLine1.set_ydata(np.array([startPointY, startPointY + FOVEndPoint1[1]]))
            FOVEndPoint2 = calcFOVLineEndPoint(-FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)
            FOVLine2.set_xdata(np.array([startPointX, startPointX + FOVEndPoint2[0]]))
            FOVLine2.set_ydata(np.array([startPointY, startPointY + FOVEndPoint2[1]]))

            ax.add_artist(FOVLine1)
            ax.add_artist(FOVLine2)
            artists.append(FOVLine1)
            artists.append(FOVLine2)

        # add an arc of robot's sensor range
        sensor_range = plt.Circle(self.robot.get_position(), self.robot.sensor_range + self.robot.radius+self.config.humans.radius, fill=False, linestyle='--')
        ax.add_artist(sensor_range)
        artists.append(sensor_range)

        # add humans and change the color of them based on visibility
        human_circles = [plt.Circle(human.get_position(), human.radius, fill=False, linewidth=1.5) for human in self.humans]

        # hardcoded for now
        actual_arena_size = self.arena_size + 0.5
        for i in range(len(self.humans)):
            ax.add_artist(human_circles[i])
            artists.append(human_circles[i])

            # green: visible; red: invisible
            # if self.detect_visible(self.robot, self.humans[i], robot1=True):
            if self.human_visibility[i]:
                human_circles[i].set_color(c='g')
            else:
                human_circles[i].set_color(c='r')
            if self.humans[i].id in self.observed_human_ids:
                human_circles[i].set_color(c='b')

            plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, str(self.humans[i].id), color='black', fontsize=12)





        plt.pause(0.01)
        for item in artists:
            item.remove() # there should be a better way to do this. For example,
            # initially use add_artist and draw_artist later on
        for t in ax.texts:
            t.set_visible(False)



================================================
FILE: crowd_sim/envs/crowd_sim_var_num_collect.py
================================================
import gym
import numpy as np
from numpy.linalg import norm
import copy

from crowd_sim.envs import *
from crowd_sim.envs.utils.info import *



class CrowdSimVarNumCollect(CrowdSimVarNum):
    """
    An environment for collecting a dataset of simulated humans to train GST predictor (used in collect_data.py)
    The observation contains all detected humans
    A key in ob indicates how many humans are detected
    """
    def __init__(self):
        """
        Movement simulation for n+1 agents
        Agent can either be human or robot.
        humans are controlled by a unknown and fixed policy.
        robot is controlled by a known and learnable policy.
        """
        super().__init__()


    def set_robot(self, robot):
        self.robot = robot

        # set observation space and action space
        # we set the max and min of action/observation space as inf
        # clip the action and observation as you need

        d={}
        # frame id, human prediction id, px, py
        d['pred_info'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(self.config.sim.human_num + self.config.sim.human_num_range, 4), dtype=np.float32)
        self.observation_space=gym.spaces.Dict(d)

        high = np.inf * np.ones([2, ])
        self.action_space = gym.spaces.Box(-high, high, dtype=np.float32)


    def reset(self, phase='train', test_case=None):
        """
        Set px, py, gx, gy, vx, vy, theta for robot and humans
        :return:
        """

        if self.phase is not None:
            phase = self.phase
        if self.test_case is not None:
            test_case=self.test_case

        if self.robot is None:
            raise AttributeError('robot has to be set!')
        assert phase in ['train', 'val', 'test']
        if test_case is not None:
            self.case_counter[phase] = test_case # test case is passed in to calculate specific seed to generate case
        self.global_time = 0
        self.id_counter = 0


        self.humans = []
        # self.human_num = self.config.sim.human_num
        # initialize a list to store observed humans' IDs
        self.observed_human_ids = []

        # train, val, and test phase should start with different seed.
        # case capacity: the maximum number for train(max possible int -2000), val(1000), and test(1000)
        # val start from seed=0, test start from seed=case_capacity['val']=1000
        # train start from self.case_capacity['val'] + self.case_capacity['test']=2000
        counter_offset = {'train': self.case_capacity['val'] + self.case_capacity['test'],
                          'val': 0, 'test': self.case_capacity['val']}

        # here we use a counter to calculate seed. The seed=counter_offset + case_counter
        np.random.seed(counter_offset[phase] + self.case_counter[phase] + self.thisSeed)
        self.rand_seed = counter_offset[phase] + self.case_counter[phase] + self.thisSeed
        # print(counter_offset[phase] + self.case_counter[phase] + self.thisSeed)
        # np.random.seed(1038)

        self.generate_robot_humans(phase)
        self.last_human_observability = np.zeros(self.human_num, dtype=bool)
        self.human_pred_id = np.arange(0, self.human_num)
        self.max_human_id = self.human_num

        # case size is used to make sure that the case_counter is always between 0 and case_size[phase]
        self.case_counter[phase] = (self.case_counter[phase] + int(1*self.nenv)) % self.case_size[phase]

        # get robot observation
        ob = self.generate_ob(reset=True)

        # initialize potential
        self.potential = -abs(np.linalg.norm(np.array([self.robot.px, self.robot.py]) - np.array([self.robot.gx, self.robot.gy])))

        return ob

    # reset = True: reset calls this function; reset = False: step calls this function
    def generate_ob(self, reset, sort=False):
        # we should keep human ID tracking for traj pred!
        #assert sort == False
        ob = {}

        # nodes
        visible_humans, num_visibles, human_visibility = self.get_num_human_in_fov()

        humans_out = np.logical_and(self.last_human_observability, np.logical_not(human_visibility))
        num_humans_out = np.sum(humans_out)

        self.human_pred_id[humans_out] = np.arange(self.max_human_id, self.max_human_id+num_humans_out)
        self.max_human_id = self.max_human_id + num_humans_out

        self.update_last_human_states(human_visibility, reset=reset)

        # ([relative px, relative py, disp_x, disp_y], human id)
        all_spatial_edges = np.ones((self.config.sim.human_num + self.config.sim.human_num_range, 2)) * np.inf

        for i in range(self.human_num):
            if human_visibility[i]:
                all_spatial_edges[self.humans[i].id, :2] = self.last_human_states[i, :2]

        frame_array = np.repeat(self.global_time/self.config.data.pred_timestep, self.human_num)
        ob['pred_info'] = np.concatenate((frame_array.reshape((self.human_num, 1)),
                          self.human_pred_id.reshape((self.human_num, 1)), all_spatial_edges), axis=1)

        # update self.observed_human_ids
        self.observed_human_ids = np.where(human_visibility)[0]
        self.ob = ob

        self.last_human_observability = copy.deepcopy(human_visibility)

        return ob

    # find R(s, a)
    # danger_zone: how to define the personal_zone (if the robot intrudes into this zone, the info will be Danger)
    # circle (traditional) or future (based on true future traj of humans)
    def calc_reward(self, action, danger_zone='circle'):
        # collision detection
        dmin = float('inf')

        danger_dists = []
        collision = False

        for i, human in enumerate(self.humans):
            dx = human.px - self.robot.px
            dy = human.py - self.robot.py
            closest_dist = (dx ** 2 + dy ** 2) ** (1 / 2) - human.radius - self.robot.radius

            if closest_dist < self.discomfort_dist:
                danger_dists.append(closest_dist)
            if closest_dist < 0:
                collision = True
                # logging.debug("Collision: distance between robot and p{} is {:.2E}".format(i, closest_dist))
                break
            elif closest_dist < dmin:
                dmin = closest_dist

        # check if reaching the goal
        reaching_goal = norm(
            np.array(self.robot.get_position()) - np.array(self.robot.get_goal_position())) < self.robot.radius


        if self.global_time >= 40000:
            done = True
            episode_info = Timeout()
        elif collision:
            done = False
            episode_info = Collision()
        elif reaching_goal:
            done = False
            episode_info = ReachGoal()
            # randomize human attributes

            # median of all humans
            if np.random.uniform(0, 1) < 0.5:
                human_pos = np.zeros((self.human_num, 2))
                for i, human in enumerate(self.humans):
                    human_pos[i] = np.array([human.px, human.py])
                self.robot.gx, self.robot.gy = np.median(human_pos, axis=0)
            # random goal
            else:
                self.robot.gx, self.robot.gy = np.random.uniform(-self.arena_size, self.arena_size, size=2)

        else:
            done = False
            episode_info = Nothing()



        return 0, done, episode_info

================================================
FILE: crowd_sim/envs/ros_turtlebot2i_env.py
================================================
import gym
import numpy as np
from numpy.linalg import norm
import pandas as pd
import os

# prevent import error if other code is run in conda env
try:
	# import ROS related packages
	import rospy
	import tf2_ros
	from geometry_msgs.msg import Twist, TransformStamped, PoseArray
	import tf
	from sensor_msgs.msg import JointState
	from threading import Lock
	from message_filters import ApproximateTimeSynchronizer, TimeSynchronizer, Subscriber

except:
	pass

import copy
import sys

from crowd_sim.envs.crowd_sim_pred_real_gst import CrowdSimPredRealGST


class rosTurtlebot2iEnv(CrowdSimPredRealGST):
	'''
	Environment for testing a simulated policy on a real Turtlebot2i
	To use it, change the env_name in arguments.py in the tested model folder to 'rosTurtlebot2iEnv-v0'
	'''
	metadata = {'render.modes': ['human']}

	def __init__(self):
		super(CrowdSimPredRealGST, self).__init__()

		# subscriber callback function will change these two variables
		self.robotMsg=None # robot state message
		self.humanMsg=None # human state message
		self.jointMsg=None # joint state message

		self.currentTime=0.0
		self.lastTime=0.0 # store time for calculating time interval

		self.human_visibility=None
		self.current_human_states = None  # (px,py)
		self.detectedHumanNum=0

		# goal positions will be set manually in self.reset()
		self.goal_x = 0.0
		self.goal_y = 0.0

		self.last_left = 0.
		self.last_right = 0.
		self.last_w = 0.0
		self.jointVel=None

		# to calculate vx, vy
		self.last_v = 0.0
		self.desiredVelocity=[0.0,0.0]

		self.mutex = Lock()



	def configure(self, config):
		super().configure(config)
		# kalman filter for human tracking

		if config.sim.predict_method == 'none':
			self.add_pred = False
		else:
			self.add_pred = True

		# define ob space and action space
		self.set_ob_act_space()
		# ROS
		rospy.init_node('ros_turtlebot2i_env_node', anonymous=True)

		self.actionPublisher = rospy.Publisher('/cmd_vel_mux/input/navi', Twist, queue_size=1)
		self.tfBuffer = tf2_ros.Buffer()
		self.transformListener = tf2_ros.TransformListener(self.tfBuffer)
		# ROS subscriber
		jointStateSub = Subscriber("/joint_states", JointState)
		humanStatesSub = Subscriber('/dr_spaam_detections', PoseArray)  # human px, py, visible
		if self.use_dummy_detect:
			subList = [jointStateSub]
		else:
			subList = [jointStateSub, humanStatesSub]

		# synchronize the robot base joint states and humnan detections with at most 1 seconds of difference
		self.ats = ApproximateTimeSynchronizer(subList, queue_size=1, slop=1)

		# if ignore sensor inputs and use fake human detections
		if self.use_dummy_detect:
			self.ats.registerCallback(self.state_cb_dummy)
		else:
			self.ats.registerCallback(self.state_cb)

		rospy.on_shutdown(self.shutdown)


	def set_robot(self, robot):
		self.robot = robot

	def set_ob_act_space(self):
		# set observation space and action space
		# we set the max and min of action/observation space as inf
		# clip the action and observation as you need

		d = {}
		# robot node: num_visible_humans, px, py, r, gx, gy, v_pref, theta
		d['robot_node'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 7,), dtype=np.float32)
		# only consider all temporal edges (human_num+1) and spatial edges pointing to robot (human_num)
		d['temporal_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 2,), dtype=np.float32)
		# add prediction
		if self.add_pred:
			self.spatial_edge_dim = int(2 * (self.predict_steps + 1))
			d['spatial_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf,
												shape=(self.config.sim.human_num + self.config.sim.human_num_range,
													   self.spatial_edge_dim),
												dtype=np.float32)
		# no prediction
		else:
			d['spatial_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf,
												shape=(self.config.sim.human_num + self.config.sim.human_num_range, 2),
												dtype=np.float32)
		# number of humans detected at each timestep

		# masks for gst pred model
		# whether each human is visible to robot
		d['visible_masks'] = gym.spaces.Box(low=-np.inf, high=np.inf,
											shape=(self.config.sim.human_num + self.config.sim.human_num_range,),
											dtype=np.bool)

		# number of humans detected at each timestep
		d['detected_human_num'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1,), dtype=np.float32)

		self.observation_space = gym.spaces.Dict(d)

		high = np.inf * np.ones([2, ])
		self.action_space = gym.spaces.Box(-high, high, dtype=np.float32)


	# (used if self.use_dummy_detect is False)
	# callback function to store the realtime messages from the robot to this env
	def state_cb(self, jointStateMsg, humanArrayMsg):
		self.humanMsg=humanArrayMsg.poses
		self.jointMsg=jointStateMsg

	# (used if self.use_dummy_detect is True)
	# callback function to store the realtime messages from the robot to this env
	# no need to real human message
	def state_cb_dummy(self, jointStateMsg):
		self.jointMsg = jointStateMsg

	def readMsg(self):
		"""
		read messages passed through ROS & prepare for generating obervations
		this function should be called right before generate_ob() is called
		"""
		self.mutex.acquire()
		# get time
		# print(self.jointMsg.header.stamp.secs, self.jointMsg.header.stamp.nsecs)
		self.currentTime = self.jointMsg.header.stamp.secs + self.jointMsg.header.stamp.nsecs / 1e9

		# get robot pose from T265 SLAM camera
		try:
			self.robotMsg = self.tfBuffer.lookup_transform('t265_odom_frame', 't265_pose_frame', rospy.Time.now(), rospy.Duration(1.0))
		except:
			print("problem in getting transform")

		# get robot base velocity from the base
		try:
			self.jointVel=self.jointMsg.velocity
		except:
			print("problem in getting joint velocity")

		# print(self.robotMsg, "ROBOT mSG")
		# store the robot pose and robot base velocity in self variables
		self.robot.px = -self.robotMsg.transform.translation.y
		self.robot.py = self.robotMsg.transform.translation.x
		print('robot pos:', self.robot.px, self.robot.py)

		quaternion = (
			self.robotMsg.transform.rotation.x,
			self.robotMsg.transform.rotation.y,
			self.robotMsg.transform.rotation.z,
			self.robotMsg.transform.rotation.w
		)

		if self.use_dummy_detect:
			self.detectedHumanNum = 1
			self.human_visibility = np.zeros((self.max_human_num,), dtype=np.bool)
		else:
			# human states

			self.detectedHumanNum=min(len(self.humanMsg), self.max_human_num)
			self.current_human_states_raw = np.ones((self.detectedHumanNum, 2)) * 15
			self.human_visibility=np.zeros((self.max_human_num, ), dtype=np.bool)


			for i in range(self.detectedHumanNum):
				# use hard coded map to filter out obstacles
				global_x = self.robot.px + self.humanMsg[i].position.x
				global_y = self.robot.py + self.humanMsg[i].position.y
				# if -2.5 < global_x < 2.5 and -2.5 < global_y < 2.5:
				if True:
					self.current_human_states_raw[i,0]=self.humanMsg[i].position.x
					self.current_human_states_raw[i,1] = self.humanMsg[i].position.y

		self.mutex.release()

		# robot orientation
		self.robot.theta = tf.transformations.euler_from_quaternion(quaternion)[2] + np.pi / 2

		if self.robot.theta < 0:
			self.robot.theta = self.robot.theta + 2 * np.pi

		# add 180 degrees because of the transform from lidar frame to t265 camera frame
		hMatrix = np.array([[np.cos(self.robot.theta+np.pi), -np.sin(self.robot.theta+np.pi), 0, 0],
							  [np.sin(self.robot.theta+np.pi), np.cos(self.robot.theta+np.pi), 0, 0],
							 [0,0,1,0], [0,0,0,1]])

		# if we detected at least one person
		self.current_human_states = np.ones((self.max_human_num, 2)) * 15

		if not self.use_dummy_detect:
			for j in range(self.detectedHumanNum):
				xy=np.matmul(hMatrix,np.array([[self.current_human_states_raw[j,0],
												self.current_human_states_raw[j,1],
												0,
												1]]).T)

				self.current_human_states[j]=xy[:2,0]

		else:
			self.current_human_states[0] = np.array([0, 1]) - np.array([self.robot.px, self.robot.py])



		self.robot.vx = self.last_v * np.cos(self.robot.theta)
		self.robot.vy = self.last_v * np.sin(self.robot.theta)




	def reset(self):
		"""
		Reset function
		"""

		# stop the turtlebot
		self.smoothStop()
		self.step_counter=0
		self.currentTime=0.0
		self.lastTime=0.0
		self.global_time = 0.

		self.human_visibility = np.zeros((self.max_human_num,), dtype=np.bool)
		self.detectedHumanNum=0
		self.current_human_states = np.ones((self.max_human_num, 2)) * 15
		self.desiredVelocity = [0.0, 0.0]
		self.last_left  = 0.
		self.last_right = 0.
		self.last_w = 0.0

		self.last_v = 0.0

		while True:
			a = input("Press y for the next episode \t")
			if a == "y":
				self.robot.gx=float(input("Input goal location in x-axis\t"))
				self.robot.gy=float(input("Input goal location in y-axis\t"))
				break
			else:
				sys.exit()


		if self.record:
			self.episodeRecoder.robot_goal.append([self.robot.gx, self.robot.gy])


		self.readMsg()
		ob=self.generate_ob() # generate initial obs


		return ob

	# input: v, w
	# output: v, w
	def smooth(self, v, w):
		beta = 0.1 #TODO: you use 0.2 in the simulator
		left = (2. * v - 0.23 * w) / (2. * 0.035)
		right = (2. * v + 0.23 * w) / (2. * 0.035)
		# print('199:', left, right)
		left = np.clip(left, -17.5, 17.5)
		right = np.clip(right, -17.5, 17.5)
		# print('202:', left, right)
		left = (1.-beta) * self.last_left + beta * left
		right = (1.-beta) * self.last_right + beta * right
		# print('205:', left, right)
		
		self.last_left = copy.deepcopy(left)
		self.last_right = copy.deepcopy(right)

		v_smooth = 0.035 / 2 * (left + right)
		w_smooth = 0.035 / 0.23 * (right - left)

		return v_smooth, w_smooth

	def generate_ob(self):
		ob = {}

		ob['robot_node'] = np.array([[self.robot.px, self.robot.py, self.robot.radius,
								self.robot.gx, self.robot.gy, self.robot.v_pref, self.robot.theta]])
		ob['temporal_edges']=np.array([[self.robot.vx, self.robot.vy]])
		# print(self.current_human_states.shape)
		spatial_edges=self.current_human_states
		if self.add_pred:
			# predicted steps will be filled in the vec_pretext_normalize wrapper
			spatial_edges=np.concatenate([spatial_edges, np.zeros((self.max_human_num, 2 * self.predict_steps))], axis=1)

		else:
			# sort humans by distance to robot
			spatial_edges = np.array(sorted(spatial_edges, key=lambda x: np.linalg.norm(x)))
			print(spatial_edges)
		ob['spatial_edges'] = spatial_edges
		ob['visible_masks'] = self.human_visibility

		ob['detected_human_num'] = self.detectedHumanNum
		if ob['detected_human_num'] == 0:
			ob['detected_human_num'] = 1
		print(ob['detected_human_num'])
			
		return ob
		

	def step(self, action, update=True):
		""" Step function """
		print("Step", self.step_counter)
		# process action
		realAction = Twist()

		if self.load_act: # load action from file for robot dynamics checking
			v_unsmooth= self.episodeRecoder.v_list[self.step_counter]
			# in the simulator we use and recrod delta theta. We convert it to omega by dividing it by the time interval
			w_unsmooth = self.episodeRecoder.delta_theta_list[self.step_counter] / self.delta_t
			# v_smooth, w_smooth = self.desiredVelocity[0], self.desiredVelocity[1]
			v_smooth, w_smooth = self.smooth(v_unsmooth, w_unsmooth)
		else:
			action = self.robot.policy.clip_action(action, None)

			self.desiredVelocity[0] = np.clip(self.desiredVelocity[0] + action.v, -self.robot.v_pref, self.robot.v_pref)
			self.desiredVelocity[1] = action.r / self.fixed_time_interval # TODO: dynamic time step is not supported now


			v_smooth, w_smooth = self.smooth(self.desiredVelocity[0], self.desiredVelocity[1])

		
		self.last_v = v_smooth

		realAction.linear.x = v_smooth
		realAction.angular.z = w_smooth

		self.actionPublisher.publish(realAction)


		rospy.sleep(self.ROSStepInterval)  # act as frame skip

		# get the latest states

		self.readMsg()


		# update time
		if self.step_counter==0: # if it is the first step of the episode
			self.delta_t = np.inf
		else:
			# time interval between two steps
			if self.use_fixed_time_interval:
				self.delta_t=self.fixed_time_interval
			else: self.delta_t = self.currentTime - self.lastTime
			print('delta_t:', self.currentTime - self.lastTime)
			#print('actual delta t:', currentTime - self.baseEnv.lastTime)
			self.global_time = self.global_time + self.delta_t
		self.step_counter=self.step_counter+1
		self.lastTime = self.currentTime


		# generate new observation
		ob=self.generate_ob()


		# calculate reward
		reward = 0

		# determine if the episode ends
		done=False
		reaching_goal = norm(np.array([self.robot.gx, self.robot.gy]) - np.array([self.robot.px, self.robot.py]))  < 0.6
		if self.global_time >= self.time_limit:
			done = True
			print("Timeout")
		elif reaching_goal:
			done = True
			print("Goal Achieved")
		elif self.load_act and self.record:
			if self.step_counter >= len(self.episodeRecoder.v_list):
				done = True
		else:
			done = False


		info = {'info': None}

		if self.record:
			self.episodeRecoder.wheelVelList.append(self.jointVel) # it is the calculated wheel velocity, not the measured
			self.episodeRecoder.actionList.append([v_smooth, w_smooth])
			self.episodeRecoder.positionList.append([self.robot.px, self.robot.py])
			self.episodeRecoder.orientationList.append(self.robot.theta)

		if done:
			print('DOne!')
			if self.record:
				self.episodeRecoder.saveEpisode(self.case_counter['test'])


		return ob, reward, done, info

	def render(self, mode='human'):
		import matplotlib.pyplot as plt
		import matplotlib.lines as mlines
		from matplotlib import patches

		plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'

		robot_color = 'yellow'
		goal_color = 'red'
		arrow_color = 'red'
		arrow_style = patches.ArrowStyle("->", head_length=4, head_width=2)

		def calcFOVLineEndPoint(ang, point, extendFactor):
			# choose the extendFactor big enough
			# so that the endPoints of the FOVLine is out of xlim and ylim of the figure
			FOVLineRot = np.array([[np.cos(ang), -np.sin(ang), 0],
								   [np.sin(ang), np.cos(ang), 0],
								   [0, 0, 1]])
			point.extend([1])
			# apply rotation matrix
			newPoint = np.matmul(FOVLineRot, np.reshape(point, [3, 1]))
			# increase the distance between the line start point and the end point
			newPoint = [extendFactor * newPoint[0, 0], extendFactor * newPoint[1, 0], 1]
			return newPoint

		ax = self.render_axis
		artists = []

		# add goal
		goal = mlines.Line2D([self.robot.gx], [self.robot.gy], color=goal_color, marker='*', linestyle='None',
							 markersize=15, label='Goal')
		ax.add_artist(goal)
		artists.append(goal)

		# add robot
		robotX, robotY = self.robot.px, self.robot.py

		robot = plt.Circle((robotX, robotY), self.robot.radius, fill=True, color=robot_color)
		ax.add_artist(robot)
		artists.append(robot)

		plt.legend([robot, goal], ['Robot', 'Goal'], fontsize=16)

		# compute orientation in each step and add arrow to show the direction
		radius = 0.1
		arrowStartEnd = []

		robot_theta = self.robot.theta

		arrowStartEnd.append(
			((robotX, robotY), (robotX + radius * np.cos(robot_theta), robotY + radius * np.sin(robot_theta))))

		arrows = [patches.FancyArrowPatch(*arrow, color=arrow_color, arrowstyle=arrow_style)
				  for arrow in arrowStartEnd]
		for arrow in arrows:
			ax.add_artist(arrow)
			artists.append(arrow)


		# add humans and change the color of them based on visibility
		# print(self.current_human_states.shape)
		# print(self.track_num)
		# render tracked humans
		human_circles = [plt.Circle(self.current_human_states[i] + np.array([self.robot.px, self.robot.py]), 0.2, fill=False) \
						 for i in range(self.max_human_num)]
		# render untracked humans
		# human_circles = [
		# 	plt.Circle(self.current_human_states_raw[i] + np.array([self.robot.px, self.robot.py]), 0.2, fill=False) for i
		# 	in range(self.detectedHumanNum)]


		for i in range(self.max_human_num):
		# for i in range(self.detectedHumanNum):
			ax.add_artist(human_circles[i])
			artists.append(human_circles[i])

			# green: visible; red: invisible
			if self.human_visibility[i]:
				human_circles[i].set_color(c='g')
			else:
				human_circles[i].set_color(c='r')


		plt.pause(0.1)
		for item in artists:
			item.remove()  # there should be a better way to do this. For example,
		# initially use add_artist and draw_artist later on
		for t in ax.texts:
			t.set_visible(False)

	def shutdown(self):
		self.smoothStop()
		print("You are stopping the robot!")
		self.reset()
		

	def smoothStop(self):
		realAction = Twist()
		self.actionPublisher.publish(Twist())





================================================
FILE: crowd_sim/envs/utils/__init__.py
================================================


================================================
FILE: crowd_sim/envs/utils/action.py
================================================
from collections import namedtuple

ActionXY = namedtuple('ActionXY', ['vx', 'vy'])
ActionRot = namedtuple('ActionRot', ['v', 'r'])


================================================
FILE: crowd_sim/envs/utils/agent.py
================================================
import numpy as np
from numpy.linalg import norm
import abc
import logging
from crowd_nav.policy.policy_factory import policy_factory
from crowd_sim.envs.utils.action import ActionXY, ActionRot
from crowd_sim.envs.utils.state import ObservableState, FullState


class Agent(object):
    def __init__(self, config, section):
        """
        Base class for robot and human. Have the physical attributes of an agent.

        """
        subconfig = config.robot if section == 'robot' else config.humans
        self.visible = subconfig.visible
        self.v_pref = subconfig.v_pref
        self.radius = subconfig.radius
        # randomize neighbor_dist of ORCA
        if config.env.randomize_attributes:
            config.orca.neighbor_dist = np.random.uniform(5, 10)
        self.policy = policy_factory[subconfig.policy](config)
        self.sensor = subconfig.sensor
        self.FOV = np.pi * subconfig.FOV
        # for humans: we only have holonomic kinematics; for robot: depend on config
        self.kinematics = 'holonomic' if section == 'humans' else config.action_space.kinematics
        self.px = None
        self.py = None
        self.gx = None
        self.gy = None
        self.vx = None
        self.vy = None
        self.theta = None
        self.time_step = config.env.time_step
        self.policy.time_step = config.env.time_step


    def print_info(self):
        logging.info('Agent is {} and has {} kinematic constraint'.format(
            'visible' if self.visible else 'invisible', self.kinematics))


    def sample_random_attributes(self):
        """
        Sample agent radius and v_pref attribute from certain distribution
        :return:
        """
        self.v_pref = np.random.uniform(0.5, 1.5)
        self.radius = np.random.uniform(0.3, 0.5)

    def set(self, px, py, gx, gy, vx, vy, theta, radius=None, v_pref=None):
        self.px = px
        self.py = py
        self.gx = gx
        self.gy = gy
        self.vx = vx
        self.vy = vy
        self.theta = theta

        if radius is not None:
            self.radius = radius
        if v_pref is not None:
            self.v_pref = v_pref

    # self.px, self.py, self.vx, self.vy, self.radius, self.gx, self.gy, self.v_pref, self.theta
    def set_list(self, px, py, vx, vy, radius, gx, gy, v_pref, theta):
        self.px = px
        self.py = py
        self.gx = gx
        self.gy = gy
        self.vx = vx
        self.vy = vy
        self.theta = theta
        self.radius = radius
        self.v_pref = v_pref

    def get_observable_state(self):
        return ObservableState(self.px, self.py, self.vx, self.vy, self.radius)

    def get_observable_state_list(self):
        return [self.px, self.py, self.vx, self.vy, self.radius]

    def get_observable_state_list_noV(self):
        return [self.px, self.py, self.radius]

    def get_next_observable_state(self, action):
        self.check_validity(action)
        pos = self.compute_position(action, self.time_step)
        next_px, next_py = pos
        if self.kinematics == 'holonomic':
            next_vx = action.vx
            next_vy = action.vy
        else:
            next_theta = self.theta + action.r
            next_vx = action.v * np.cos(next_theta)
            next_vy = action.v * np.sin(next_theta)
        return ObservableState(next_px, next_py, next_vx, next_vy, self.radius)

    def get_full_state(self):
        return FullState(self.px, self.py, self.vx, self.vy, self.radius, self.gx, self.gy, self.v_pref, self.theta)

    def get_full_state_list(self):
        return [self.px, self.py, self.vx, self.vy, self.radius, self.gx, self.gy, self.v_pref, self.theta]

    def get_full_state_list_noV(self):
        return [self.px, self.py, self.radius, self.gx, self.gy, self.v_pref, self.theta]
        # return [self.px, self.py, self.radius, self.gx, self.gy, self.v_pref]

    def get_position(self):
        return self.px, self.py

    def set_position(self, position):
        self.px = position[0]
        self.py = position[1]

    def get_goal_position(self):
        return self.gx, self.gy

    def get_velocity(self):
        return self.vx, self.vy


    def set_velocity(self, velocity):
        self.vx = velocity[0]
        self.vy = velocity[1]


    @abc.abstractmethod
    def act(self, ob):
        """
        Compute state using received observation and pass it to policy

        """
        return

    def check_validity(self, action):
        if self.kinematics == 'holonomic':
            assert isinstance(action, ActionXY)
        else:
            assert isinstance(action, ActionRot)

    def compute_position(self, action, delta_t):
        self.check_validity(action)
        if self.kinematics == 'holonomic':
            px = self.px + action.vx * delta_t
            py = self.py + action.vy * delta_t
        # unicycle
        else:
            # naive dynamics
            # theta = self.theta + action.r * delta_t # if action.r is w
            # # theta = self.theta + action.r # if action.r is delta theta
            # px = self.px + np.cos(theta) * action.v * delta_t
            # py = self.py + np.sin(theta) * action.v * delta_t

            # differential drive
            epsilon = 0.0001
            if abs(action.r) < epsilon:
                R = 0
            else:
                w = action.r/delta_t # action.r is delta theta
                R = action.v/w

            px = self.px - R * np.sin(self.theta) + R * np.sin(self.theta + action.r)
            py = self.py + R * np.cos(self.theta) - R * np.cos(self.theta + action.r)


        return px, py

    def step(self, action):
        """
        Perform an action and update the state
        """
        self.check_validity(action)
        pos = self.compute_position(action, self.time_step)
        self.px, self.py = pos
        if self.kinematics == 'holonomic':
            self.vx = action.vx
            self.vy = action.vy
        else:
            self.theta = (self.theta + action.r) % (2 * np.pi)
            self.vx = action.v * np.cos(self.theta)
            self.vy = action.v * np.sin(self.theta)

    def one_step_lookahead(self, pos, action):
        px, py = pos
        self.check_validity(action)
        new_px = px + action.vx * self.time_step
        new_py = py + action.vy * self.time_step
        new_vx = action.vx
        new_vy = action.vy
        return [new_px, new_py, new_vx, new_vy]

    def reached_destination(self):
        return norm(np.array(self.get_position()) - np.array(self.get_goal_position())) < self.radius



================================================
FILE: crowd_sim/envs/utils/human.py
================================================
from crowd_sim.envs.utils.agent import Agent
from crowd_sim.envs.utils.state import JointState


class Human(Agent):
    # see Agent class in agent.py for details!!!
    def __init__(self, config, section):
        super().__init__(config, section)
        self.isObstacle = False # whether the human is a static obstacle (part of wall) or a moving agent
        self.id = None # the human's ID, used for collecting traj data
        self.observed_id = -1 # if it is observed, give it a tracking ID

    # ob: a list of observable states
    def act(self, ob):
        """
        The state for human is its full state and all other agents' observable states
        :param ob:
        :return:
        """

        state = JointState(self.get_full_state(), ob)
        action = self.policy.predict(state)
        return action

    # ob: a joint state (ego agent's full state + other agents' observable states)
    def act_joint_state(self, ob):
        """
        The state for human is its full state and all other agents' observable states
        :param ob:
        :return:
        """
        action = self.policy.predict(ob)
        return action


================================================
FILE: crowd_sim/envs/utils/info.py
================================================
class Timeout(object):
    def __init__(self):
        pass

    def __str__(self):
        return 'Timeout'


class ReachGoal(object):
    def __init__(self):
        pass

    def __str__(self):
        return 'Reaching goal'


class Danger(object):
    def __init__(self, min_dist):
        self.min_dist = min_dist

    def __str__(self):
        return 'Too close'


class Collision(object):
    def __init__(self):
        pass

    def __str__(self):
        return 'Collision'

class OutRoad(object):
    def __init__(self):
        pass

    def __str__(self):
        return 'Out of road'

class Nothing(object):
    def __init__(self):
        pass

    def __str__(self):
        return ''


================================================
FILE: crowd_sim/envs/utils/recorder.py
================================================
import pandas as pd
import os
import numpy as np

class Recoder(object):
	def __init__(self):

		self.unsmoothed_actions = []
		self.actionList = []  # recorded in turtlebot_env.step. [trans,rot]
		self.wheelVelList = []  # recorded in apply_action(). [left,right]
		self.orientationList = []  # recorded in calc_state() [angle]
		self.positionList = []  # recorded in calc_state() [x,y]
		self.robot_goal = []

		self.saveTo='data/unicycle/selfAttn_truePred_noiseActt/record/real_recordings/'
		# self.loadFrom = 'data/unicycle/selfAttn_truePred_noiseActt/record/sim_recordings/ep0-1/action.csv'
		self.loadFrom = 'data/unicycle/selfAttn_truePred_noiseActt/record/sim_recordings/ep0-1point5/unsmoothed_action.csv'
		self.loadedAction=None
		self.episodeInitNum=0


	def saveEpisode(self,episodeCounter):

		savePath=os.path.join(self.saveTo,'ep'+str(self.episodeInitNum+episodeCounter))
		#savePath = os.path.join(self.saveTo, 'ep' + str(4))
		if not os.path.exists(savePath):
			#shutil.rmtree(savePath)
			os.makedirs(savePath)

		if len(self.actionList) > 0:
			action = pd.DataFrame({'trans': np.array(self.actionList)[:,0], 'rot': np.array(self.actionList)[:,1]})
			action.to_csv(os.path.join(savePath, 'action.csv'), index=False)
		if len(self.wheelVelList) > 0:
			wheelVel=pd.DataFrame({'left': np.array(self.wheelVelList)[:,0], 'right': np.array(self.wheelVelList)[:,1]})
			wheelVel.to_csv(os.path.join(savePath, 'wheelVel.csv'), index=False)
		if len(self.orientationList) > 0:
			orientation = pd.DataFrame({'ori': np.array(self.orientationList)})
			orientation.to_csv(os.path.join(savePath, 'orientation.csv'), index=False)
		if len(self.positionList) > 0:
			position = pd.DataFrame({'x': np.array(self.positionList)[:, 0], 'y': np.array(self.positionList)[:, 1]})
			position.to_csv(os.path.join(savePath, 'position.csv'), index=False)
		if len(self.unsmoothed_actions) > 0:
			unsmooth_action = pd.DataFrame({'trans': np.array(self.unsmoothed_actions)[:,0], 'rot': np.array(self.unsmoothed_actions)[:,1]})
			unsmooth_action.to_csv(os.path.join(savePath, 'unsmoothed_action.csv'), index=False)
		if len(self.robot_goal) > 0:
			robot_goal = pd.DataFrame({'x': np.array(self.robot_goal)[:, 0], 'y': np.array(self.robot_goal)[:, 1]})
			robot_goal.to_csv(os.path.join(savePath, 'robot_goal.csv'), index=False)

		print("csv written")
		self.clear()

	def loadActions(self):
		self.loadedAction = pd.read_csv(self.loadFrom)
		self.v_list = self.loadedAction['trans']
		self.delta_theta_list = self.loadedAction['rot']
		print("Reading actions from", self.loadFrom)

	def clear(self):
		self.actionList = []  # recorded in turtlebot_env.step. [trans,rot]
		self.wheelVelList = []  # recorded in apply_action(). [left,right]
		self.orientationList = []  # recorded in calc_state() [angle]
		self.positionList = []  # recorded in calc_state() [x,y]
		self.unsmoothed_actions = []
		self.robot_goal = []

class humanRecoder(object):
	def __init__(self, human_num):
		self.human_num = human_num
		# list of [px, py, radius, v_pref] with length = human_num
		self.initPosList = [[] for i in range(self.human_num)]
		# list of goal lists with length = human_num
		# i-th element is a list i-th human's goals in the episode
		self.goalPosList = [[] for i in range(self.human_num)]

	# update self.initPosList
	# human_id: 0~4
	# pos: list of [px, py, radius, v_pref]
	def addInitPos(self, human_id, px, py, radius, v_pref):
		self.initPosList[human_id] = [px, py, radius, v_pref]

	# append [px, py] at the end of i-th element of list
	def addGoalPos(self, human_id, px, py):
		self.goalPosList[human_id].append([px, py])

	def loadLists(self, initPos, goalPos):
		self.initPosList = initPos
		self.goalPosList = goalPos

	def getInitList(self):
		return self.initPosList

	def getGoalList(self):
		return self.goalPosList

	def getInitPos(self, human_id):
		return self.initPosList[human_id]

	# get next goal position for human i
	# if the last goal is already gotten before, return None
	def getNextGoalPos(self, human_id):
		# if the list is empty
		if not self.goalPosList[human_id]:
			return None
		return self.goalPosList[human_id].pop(0)

	# check whether human i has used up its goal
	def goalIsEmpty(self, human_id):
		if not self.goalPosList[human_id]:
			return True
		else:
			return False

	def clear(self):
		self.initPosList = [[] for i in range(self.human_num)]
		self.goalPosList = [[] for i in range(self.human_num)]

class jointStateRecoder(object):
	def __init__(self, human_num):
		self.human_num = human_num

		# px, py, r, gx, gy, v_pref, theta, vx, vy
		self.robot_s = []

		# nested list of [px1, ..., px5]
		self.humans_px = []
		# nested list of [py1, ..., py5]
		self.humans_py = []

		# record the random seed for this episode
		self.episode_num = []

		# data labels:
		self.episode_num_label = []
		self.traj_ratio = []
		self.time_ratio = []
		self.idle_time = []


	# append a joint state to the lists
	# ob should be the ob returned from env.step()
	def add_traj(self, seed, ob, srnn):
		# if ob is a dictionary from crowd_sim_dict.py
		if srnn:
			robot_s_noV = ob['robot_node'][0, 0].cpu().numpy()
			robot_v = ob['edges'][0, 0].cpu().numpy()
			self.robot_s.append(np.concatenate((robot_s_noV, robot_v)))
			self.humans_px.append(ob['edges'][0, 1:, 0].cpu().numpy())
			self.humans_py.append(ob['edges'][0, 1:, 1].cpu().numpy())
		# if ob is a list from crowd_sim.py
		else:
			ob = ob[0].cpu().numpy()
			# px, py, r, gx, gy, v_pref, theta, vx, vy
			self.robot_s.append(np.concatenate((ob[1:3], ob[5:10], ob[3:5])))
			px_list, py_list = [], []
			# human num = (ob.shape - 10) // 5
			for i in range((ob.shape[0] - 10) // 5):
				px_idx = 5 * i + 10
				py_idx = 5 * i + 11
				px_list.append(ob[px_idx])
				py_list.append(ob[py_idx])
			self.humans_px.append(px_list)
			self.humans_py.append(py_list)
		self.episode_num.append(seed)

	# seed: the random seed of this episode from env
	# respond_traj_len, respond_time: the total traj length and total timesteps when the user makes last choice
	# traj_len, tot_time: total traj length and timesteps of the whole episode
	def add_label(self, seed, respond_traj_len, traj_len, respond_time, tot_time, idle_time):
		self.traj_ratio.append(respond_traj_len/traj_len)
		self.time_ratio.append(respond_time/tot_time)
		self.idle_time.append(idle_time)
		self.episode_num_label.append(seed)



	def save_to_file(self, directory):
		if not os.path.exists(os.path.join('trajectories', directory)):
			os.makedirs(os.path.join('trajectories', directory))
		# 1. save all trajectories
		# add robot data
		robot_s = np.array(self.robot_s)
		data_dict = {'epi_num': np.array(self.episode_num), 'robot_px': robot_s[:, 0], 'robot_py': robot_s[:, 1],
					 'robot_r': robot_s[:, 2],
					 'robot_gx': robot_s[:, 3], 'robot_gy': robot_s[:, 4], 'robot_vpref': robot_s[:, 5],
					 'robot_theta': robot_s[:, 6], 'robot_vx': robot_s[:, 7], 'robot_vy': robot_s[:, 8]}
		# add humans data
		humans_px = np.array(self.humans_px)
		humans_py = np.array(self.humans_py)
		for i in range(self.human_num):
			data_dict['human'+str(i)+'px'] = humans_px[:, i]
			data_dict['human'+str(i)+'py'] = humans_py[:, i]

		all_data = pd.DataFrame(data_dict)
		all_data.to_csv(os.path.join('trajectories', directory, 'states.csv'), index=False)

		# 2. save all labels
		data_dict = {'epi_num': np.array(self.episode_num_label), 'traj_ratio': np.array(self.traj_ratio),
					 'time_ratio': np.array(self.time_ratio), 'idle_time': np.array(self.idle_time)}
		all_data = pd.DataFrame(data_dict)
		all_data.to_csv(os.path.join('trajectories', directory, 'labels.csv'), index=False)

		self.clear()

	def clear(self):
		# px, py, r, gx, gy, v_pref, theta, vx, vy
		self.robot_s = []

		# nested list of [px1, ..., px5]
		self.humans_px = []
		# nested list of [py1, ..., py5]
		self.humans_py = []

		# record the random seed for this episode
		self.episode_num = []

		# data labels:
		self.episode_num_label = []
		self.traj_ratio = []
		self.time_ratio = []
		self.idle_time = []



================================================
FILE: crowd_sim/envs/utils/robot.py
================================================
from crowd_sim.envs.utils.agent import Agent
from crowd_sim.envs.utils.state import JointState


class Robot(Agent):
    def __init__(self, config,section):
        super().__init__(config,section)
        self.sensor_range = config.robot.sensor_range

    def act(self, ob):
        if self.policy is None:
            raise AttributeError('Policy attribute has to be set!')

        state = JointState(self.get_full_state(), ob)
        action = self.policy.predict(state)
        return action


    def actWithJointState(self,ob):
        action = self.policy.predict(ob)
        return action


================================================
FILE: crowd_sim/envs/utils/state.py
================================================
from collections import namedtuple
import numpy as np

FullState = namedtuple('FullState', ['px', 'py', 'vx', 'vy', 'radius', 'gx', 'gy', 'v_pref', 'theta'])
ObservableState = namedtuple('ObservableState', ['px', 'py', 'vx', 'vy', 'radius'])

# JointState has 2 attributes:
# self.self_state is a FullState
# self.human_states is a list of ObservableStates
class JointState(object):
    # self_state: list of length 9
    # human_states: list of length human_num*5 or nested list [human_num, 5]
    def __init__(self, self_state, human_states):
        assert len(self_state) == 9
        human_states_namedtuple = []
        # if human states is a nested list [human_num, 5]
        if len(np.shape(human_states)) == 2:
            for human_state in human_states:
                assert len(human_state) == 5
                human_states_namedtuple.append(ObservableState(*human_state))
        # if human states is a flatten list of length human_num*5
        else:
            assert len(human_states) % 5 == 0
            human_num = len(human_states) // 5
            for i in range(human_num):
                human_states_namedtuple.append(ObservableState(*human_states[int(i*5):(int((i+1)*5))]))

        self.self_state = FullState(*self_state)
        self.human_states = human_states_namedtuple

    # convert a joint state to a flattened list of length 9 + 5 * human_num
    def to_flatten_list(self):
        flatten_list = list(self.self_state)
        for human_state in self.human_states:
            flatten_list.extend(list(human_state))
        return flatten_list

================================================
FILE: gst_updated/.gitignore
================================================
results/
datasets/
logs/
myenv/
*.pickle
*.png
*.pt

================================================
FILE: gst_updated/LICENSE
================================================
MIT License

Copyright (c) 2021 Zhe Huang

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: gst_updated/README-old.md
================================================
# gst
Gumbel Social Transformer.

This is the implementation for the paper
### Learning Sparse Interaction Graphs of Partially Observed Pedestrians for Trajectory Prediction
GST is the abbreviation of our model Gumbel Social Transformer. All code was developed and tested on Ubuntu 18.04 with CUDA 10.2, Python 3.6.9, and PyTorch 1.7.1. <br/>

### Setup
##### 1. Create a Virtual Environment. (Optional)
```
virtualenv -p /usr/bin/python3 myenv
source myenv/bin/activate
```
##### 2. Install Packages
You can run either <br/>
```
pip install -r requirements.txt
```
or <br/>
```
pip install numpy
pip install scipy
pip install matplotlib
pip install tensorboardX
pip install pandas
pip install pyyaml
pip install jupyterlab
pip install torch==1.7.1
pip install tensorflow
```
or for the machine with 3090s,
```
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
```
##### 3. Create Folders and Dataset Files.
```
sh scripts/make_dirs.sh
sh scripts/data/create_batch_datasets_eth_ucy.sh
python scripts/data/create_batch_datasets_sdd.py 
```

### Training and Evaluation on Various Configurations
To train and evaluate a model with n=1, i.e., the target pedestrian pays attention to at most one partially observed pedestrian, run
```
sh scripts/train_sparse.sh
sh scripts/eval_sparse.sh
```
To train and evaluate a model with n=1 and temporal component as a temporal convolution network, run
```
sh scripts/train_sparse_tcn.sh
sh scripts/eval_sparse_tcn.sh
```
To train and evaluate a model with full connection, i.e., the target pedestrian pays attention to all partially observed pedestrians in the scene, run
```
sh scripts/train_full_connection.sh
sh scripts/eval_full_connection.sh
```
To train and evaluate a model in which the target pedestrian pays attention to all fully observed pedestrians in the scene, run
```
sh scripts/train_full_connection_fully_observed.sh
sh scripts/eval_full_connection_fully_observed.sh
```

### Important Arguments for Building Customized Configurations
- `--spatial_num_heads_edges`: n, i.e., the upperbound number of pedestrians that the target pedestrian can pay attention to in the scene. When n=0, it is defined as full connection, i.e., the target pedestrian pays attention to all pedestrians in the scene. Default is 4.
- `--only_observe_full_period`: The target pedestrian only pays attention to fully observed pedestrians. Default is False.
- `--temporal`: Temporal component. `lstm` is Masked LSTM, and `temporal_convolution_net` is temporal convolution network. Default is `lstm`.
- `--decode_style`: Decoding style. It has to match the option `--temporal`. `recursive` matches `lstm`, and `readout` matches `temporal_convolution_net`. Default is `recursive`.
- `--ghost`: Add a ghost pedestrian in the scene to encourage sparsity. When `--spatial_num_heads_edges` is set as zero, i.e., the target pedestrian pays attention to all pedestrians in the scene, `--ghost` has to be set as False. Default is False.


================================================
FILE: gst_updated/README.md
================================================
# gst
Gumbel Social Transformer trained with data provided in Nov, 2021 by Shuijing.

## How to set up
##### 1. Create a Virtual Environment.
```
virtualenv -p /usr/bin/python3 myenv
source myenv/bin/activate
```
##### 2. Install Packages
You can run either <br/>
```
pip install -r requirements.txt
```
or <br/>
```
pip install numpy
pip install scipy
pip install matplotlib
pip install tensorboardX
pip install pandas
pip install pyyaml
pip install jupyterlab
pip install torch==1.7.1
pip install tensorflow
```
or for the machine with 3090s,
```
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
```
##### 3. Create Folders, and download datasets and pretrained models.
```
sh run/make_dirs.sh
sh run/download_datasets_models.sh
```

## How to run
##### 1. Test pretrained models.
```
source myenv/bin/activate
sh tuning/211209-test_shuijing.sh 
```
And you should see outputs which look like below:
```
--------------------------------------------------
dataset:  sj
No ghost version.
new gst
new st model
Loaded configuration:  100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000
Test loss from loaded model:  -4.0132044252296755
dataset: sj | test aoe: 0.1579 | test aoe std: 0.0175 | test foe: 0.3216 | test foe std: 0.0408 | min aoe: 0.1339, min foe: 0.2638
```
##### 2. Test wrapper.
Run
```
cd gst_updated
sh run/download_wrapper_data_model.sh
```
to collect model and demo data for wrapper. Then run
```
python scripts/wrapper/crowd_nav_interface_parallel.py
```
And you should see outputs which look like below:
```
INPUT DATA
number of environments:  4
input_traj shape:  torch.Size([4, 15, 5, 2])
input_binary_mask shape: torch.Size([4, 15, 5, 1])

No ghost version.
new gst
new st model
LOADED MODEL
device:  cuda:0


OUTPUT DATA
output_traj shape:  torch.Size([4, 15, 5, 5])
output_binary_mask shape: torch.Size([4, 15, 1])

0.png is created.
1.png is created.
2.png is created.
3.png is created.
```

## Observation dimensions for traj pred (by Shuijing)
As far as I know, the wrapper for SRNN model does not need to handle multiple timesteps simultaneously. So the time dimension is not mentioned here.
### Output format
The output of traj pred model (or the input of the planner model) at each timestep t is a 3D float array with size = [number_of_pedestrains, number of prediction steps, 5], where 5 includes [mu_x, mu_y, sigma_x, sigma_y, correlation coefficient].

If number of parallel environments > 1, the array becomes 4D with size = [number of env, number_of_pedestrains, number of prediction steps, 5].

The order of the dimensions can be swapped, and the dimensions can be combined in row-major order if needed. For example, [number of env, number_of_pedestrains, number of prediction steps * 5] is good as long as the last dimension is in row-major order.

### Input format 
The input format of traj pred model at each timestep t is a 2D float array with size = [number of pedestrains, 2], where 2 includes [px, py] of each pedestrain. 

If needed, the displacements of all pedestrains can also be added. In this case, input size = [number of pedestrains, 4], where 4 includes [px, py, delta_px, delta_py] of each pedestrain. 

In addition, there is a binary mask with size = [number of pedestrains, 1] that indicates the presence of each human. If a human with ID = i is present, then mask[i] = True, else mask[i] = False.

If number of parallel environments > 1, similarly, the number_of_env is added as the first dimension in addition to the above sizes.



================================================
FILE: gst_updated/__init__.py
================================================


================================================
FILE: gst_updated/requirements.txt
================================================
absl-py==0.13.0
anyio==3.3.0
argon2-cffi==21.1.0
astunparse==1.6.3
async-generator==1.10
attrs==21.2.0
Babel==2.9.1
backcall==0.2.0
bleach==4.1.0
cached-property==1.5.2
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.6
charset-normalizer==2.0.4
clang==5.0
contextvars==2.4
cycler==0.10.0
dataclasses==0.8
decorator==5.0.9
defusedxml==0.7.1
entrypoints==0.3
flatbuffers==1.12
gast==0.4.0
google-auth==1.35.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.39.0
h5py==3.1.0
idna==3.2
immutables==0.16
importlib-metadata==4.8.1
ipykernel==5.5.5
ipython==7.16.1
ipython-genutils==0.2.0
jedi==0.18.0
Jinja2==3.0.1
json5==0.9.6
jsonschema==3.2.0
jupyter-client==7.0.2
jupyter-core==4.7.1
jupyter-server==1.10.2
jupyterlab==3.1.10
jupyterlab-pygments==0.1.2
jupyterlab-server==2.7.2
keras==2.6.0
Keras-Preprocessing==1.1.2
kiwisolver==1.3.1
Markdown==3.3.4
MarkupSafe==2.0.1
matplotlib==3.3.4
mistune==0.8.4
nbclassic==0.3.1
nbclient==0.5.4
nbconvert==6.0.7
nbformat==5.1.3
nest-asyncio==1.5.1
notebook==6.4.3
numpy==1.19.5
oauthlib==3.1.1
opt-einsum==3.3.0
packaging==21.0
pandas==1.1.5
pandocfilters==1.4.3
parso==0.8.2
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.3.1
prometheus-client==0.11.0
prompt-toolkit==3.0.20
protobuf==3.17.3
ptyprocess==0.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
Pygments==2.10.0
pyparsing==2.4.7
pyrsistent==0.18.0
python-dateutil==2.8.2
pytz==2021.1
PyYAML==5.4.1
pyzmq==22.2.1
requests==2.26.0
requests-oauthlib==1.3.0
requests-unixsocket==0.2.0
rsa==4.7.2
scipy==1.5.4
Send2Trash==1.8.0
six==1.15.0
sniffio==1.2.0
tensorboard==2.6.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorboardX==2.4
tensorflow==2.6.0
tensorflow-estimator==2.6.0
termcolor==1.1.0
terminado==0.11.1
testpath==0.5.0
torch==1.7.1
tornado==6.1
traitlets==4.3.3
typing-extensions==3.7.4.3
urllib3==1.26.6
wcwidth==0.2.5
webencodings==0.5.1
websocket-client==1.2.1
Werkzeug==2.0.1
wrapt==1.12.1
zipp==3.5.0


================================================
FILE: gst_updated/run/create_batch_datasets_eth_ucy.sh
================================================
for dataset in eth hotel univ zara1 zara2; do
    python -u scripts/data/create_datasets_eth_ucy.py --dataset $dataset | tee -a logs/create_batch_datasets.txt
    python -u scripts/data/create_batch_datasets_eth_ucy.py --dataset $dataset | tee -a logs/create_batch_datasets.txt
    rm -rf datasets/eth_ucy/${dataset}/${dataset}_dset_train_trajectories.pt
    rm -rf datasets/eth_ucy/${dataset}/${dataset}_dset_val_trajectories.pt
    rm -rf datasets/eth_ucy/${dataset}/${dataset}_dset_test_trajectories.pt
done

================================================
FILE: gst_updated/run/create_batch_datasets_self_eth_ucy.sh
================================================
for dataset in eth hotel univ zara1 zara2; do
    python -u scripts/data/create_datasets_self_eth_ucy.py --dataset $dataset | tee -a logs/create_batch_datasets.txt
    python -u scripts/data/create_batch_datasets_self_eth_ucy.py --dataset $dataset | tee -a logs/create_batch_datasets.txt
    rm -rf datasets/self_eth_ucy/${dataset}/${dataset}_dset_train_trajectories.pt
    rm -rf datasets/self_eth_ucy/${dataset}/${dataset}_dset_val_trajectories.pt
    rm -rf datasets/self_eth_ucy/${dataset}/${dataset}_dset_test_trajectories.pt
done

================================================
FILE: gst_updated/run/create_batch_datasets_synth.sh
================================================
python -u scripts/data/create_datasets_trajnet.py --dataset synth | tee -a logs/create_datasets.txt
python -u scripts/data/create_batch_datasets_trajnet.py --dataset synth | tee -a logs/create_batch_datasets.txt

================================================
FILE: gst_updated/run/download_datasets_models.sh
================================================
cd datasets/shuijing/orca_20humans_fov
cd results/visual
# use curl for big-sized google drive files to redirect
curl -c /tmp/cookies "https://drive.google.com/uc?export=download&id=17QLGgTcCWatIrF4EE1Gai8BaMbqxz6kk" > /tmp/intermezzo.html
curl -L -b /tmp/cookies "https://drive.google.com$(cat /tmp/intermezzo.html | grep -Po 'uc-download-link" [^>]* href="\K[^"]*' | sed 's/\&amp;/\&/g')" > sj_dset_test_batch_trajectories.pt
cd ../../../results
# use wget for small-sized google drive files
wget -O 211203-model.zip "https://drive.google.com/uc?export=download&id=1ukON4cAwM63gKKlJJwgjcdLf7-4-f1KH" > /tmp/intermezzo.html
unzip 211203-model.zip && rm -rf 211203-model.zip

================================================
FILE: gst_updated/run/download_wrapper_data_model.sh
================================================
mkdir -p datasets/wrapper_demo
cd datasets/wrapper_demo
# use wget for small-sized google drive files
wget -O wrapper_demo_data.pt "https://drive.google.com/uc?export=download&id=1Fjc4nFAeqgCTd9UEUsXhJkESqgErNtlA" > /tmp/intermezzo.html

cd ../../results
# use wget for small-sized google drive files
wget -O 211222-model.zip "https://drive.google.com/uc?export=download&id=1mm364PGbp7hFHJ8uXN_uZI4Nq_3JIHPj" > /tmp/intermezzo.html
unzip 211222-model.zip && rm -rf 211222-model.zip

================================================
FILE: gst_updated/run/make_dirs.sh
================================================
mkdir -p datasets/shuijing/orca_20humans_fov
mkdir results
mkdir logs

================================================
FILE: gst_updated/scripts/experiments/draft.py
================================================
import pathhack
import pickle
import time
from os.path import join, isdir
from os import makedirs
import torch
import numpy as np
from tensorboardX import SummaryWriter
from torch.optim.lr_scheduler import StepLR
from src.mgnn.utils import arg_parse, average_offset_error, final_offset_error, \
    negative_log_likelihood, random_rotate_graph, args2writername
from src.gumbel_social_transformer.temperature_scheduler import Temp_Scheduler
from torch.utils.data import DataLoader
from datetime import datetime

num_epochs = 100
init_temp = 1.
checkpoint_epoch = 50
temperature_scheduler = Temp_Scheduler(num_epochs, init_temp, init_temp, \
        temp_min=0.03, last_epoch=checkpoint_epoch-1)
tau_list = []
for epoch in range(51, num_epochs+1):
    tau = temperature_scheduler.step()
    tau_list.append(tau)

temperature_scheduler_new = Temp_Scheduler(num_epochs, init_temp, init_temp, temp_min=0.03)  
tau_list_new = []
for epoch in range(1, num_epochs+1):
    tau = temperature_scheduler_new.step()
    if epoch > 50:
        tau_list_new.append(tau)
tau_list, tau_list_new = np.array(tau_list), np.array(tau_list_new)
print(tau_list-tau_list_new)
# print(tau_list_new)



# def load_batch_dataset(args, pkg_path, subfolder='train', num_workers=4, shuffle=None):
#     result_filename = args.dataset+'_dset_'+subfolder+'_batch_trajectories.pt'
#     if args.dataset == 'sdd':
#         dataset_folderpath = join(pkg_path, 'datasets/sdd/social_pool_data')
#     else:
#         dataset_folderpath = join(pkg_path, 'datasets/eth_ucy', args.dataset)
#     dset = torch.load(join(dataset_folderpath, result_filename))
#     if shuffle is None:
#         if subfolder == 'train':
#             shuffle = True
#         else:
#             shuffle = False
#     dloader = DataLoader(
#         dset,
#         batch_size=1,
#         shuffle=shuffle,
#         num_workers=num_workers)
#     return dloader

# def main(args):
#     print('\n\n')
#     print('-'*50)
#     print('arguments: ', args)
#     torch.manual_seed(args.random_seed)
#     np.random.seed(args.random_seed)
#     if args.batch_size != 1:
#         raise RuntimeError("Batch size must be 1 for BatchTrajectoriesDataset.")
#     if args.dataset == 'sdd' and args.rotation_pattern is not None:
#         raise RuntimeError("SDD should not allow rotation since it uses pixels.")
#     device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#     print('device: ', device)
#     loader_train = load_batch_dataset(args, pathhack.pkg_path, subfolder='train')
#     if args.dataset == 'sdd':
#         loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='test') # no val for sdd
#     else:
#         loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='val')
#     train_data_loaders = [loader_train, loader_val]
#     print('dataset: ', args.dataset)
#     writername = args2writername(args)
#     print('Config: ', writername)
#     logdir = join(pathhack.pkg_path, 'results', writername, args.dataset)
#     if isdir(logdir) and not args.resume_training:
#         print('Error: The result directory was already created and used.')
#         print('-'*50)
#         print('\n\n')
#         return
#     writer = SummaryWriter(logdir=logdir)
#     print('-'*50)
#     print('\n\n')
#     train(args, train_data_loaders, writer, logdir, device=device)
#     writer.close()

# def load_checkpoint(args, logdir, model, optimizer, lr_scheduler):
#     # ! to be finished
#     checkpoint = torch.load(checkpoint_filepath)
    
#     model.load_state_dict(checkpoint['state_dict'])
#     optimizer.load_state_dict(checkpoint['optimizer'])
#     lr_scheduler.load_state_dict(checkpoint['scheduler'])
#     return model, optimizer, lr_scheduler, temperature_scheduler
    

# def train(args, data_loaders, writer, logdir, device='cuda:0'):
#     print('-'*50)
#     print('Training Phase')
#     print('-'*50, '\n')
#     loader_train, loader_val = data_loaders
#     model = st_model(args, device=device).to(device)
#     optimizer = torch.optim.Adam(model.parameters(), lr=args.lr)
#     lr_scheduler = StepLR(optimizer, step_size=50, gamma=0.3)
#     if args.resume_training:
#         temperature_scheduler = Temp_Scheduler(args.num_epochs, args.init_temp, args.init
Download .txt
gitextract_a_pq4yd8/

├── .gitignore
├── CHANGELOG.md
├── LICENSE
├── README.md
├── arguments.py
├── collect_data.py
├── crowd_nav/
│   ├── __init__.py
│   ├── configs/
│   │   ├── __init__.py
│   │   └── config.py
│   └── policy/
│       ├── orca.py
│       ├── policy.py
│       ├── policy_factory.py
│       ├── social_force.py
│       └── srnn.py
├── crowd_sim/
│   ├── README.md
│   ├── __init__.py
│   └── envs/
│       ├── __init__.py
│       ├── crowd_sim.py
│       ├── crowd_sim_pred.py
│       ├── crowd_sim_pred_real_gst.py
│       ├── crowd_sim_var_num.py
│       ├── crowd_sim_var_num_collect.py
│       ├── ros_turtlebot2i_env.py
│       └── utils/
│           ├── __init__.py
│           ├── action.py
│           ├── agent.py
│           ├── human.py
│           ├── info.py
│           ├── recorder.py
│           ├── robot.py
│           └── state.py
├── gst_updated/
│   ├── .gitignore
│   ├── LICENSE
│   ├── README-old.md
│   ├── README.md
│   ├── __init__.py
│   ├── requirements.txt
│   ├── run/
│   │   ├── create_batch_datasets_eth_ucy.sh
│   │   ├── create_batch_datasets_self_eth_ucy.sh
│   │   ├── create_batch_datasets_synth.sh
│   │   ├── download_datasets_models.sh
│   │   ├── download_wrapper_data_model.sh
│   │   └── make_dirs.sh
│   ├── scripts/
│   │   ├── experiments/
│   │   │   ├── draft.py
│   │   │   ├── eval.py
│   │   │   ├── eval_trajnet.py
│   │   │   ├── load_trajnet_test_data.py
│   │   │   ├── pathhack.py
│   │   │   ├── test.py
│   │   │   ├── test_gst.py
│   │   │   └── train.py
│   │   └── wrapper/
│   │       ├── crowd_nav_interface_multi_env_parallel.py
│   │       ├── crowd_nav_interface_multi_env_visualization_test_single_batch.py
│   │       ├── crowd_nav_interface_parallel.py
│   │       └── pathhack.py
│   ├── src/
│   │   ├── gumbel_social_transformer/
│   │   │   ├── edge_selector_ghost.py
│   │   │   ├── edge_selector_no_ghost.py
│   │   │   ├── gumbel_social_transformer.py
│   │   │   ├── mha.py
│   │   │   ├── node_encoder_layer_ghost.py
│   │   │   ├── node_encoder_layer_no_ghost.py
│   │   │   ├── pathhack.py
│   │   │   ├── st_model.py
│   │   │   ├── temperature_scheduler.py
│   │   │   ├── temporal_convolution_net.py
│   │   │   └── utils.py
│   │   ├── mgnn/
│   │   │   ├── batch_trajectories.py
│   │   │   ├── trajectories.py
│   │   │   ├── trajectories_sdd.py
│   │   │   ├── trajectories_trajnet.py
│   │   │   ├── trajectories_trajnet_testset.py
│   │   │   └── utils.py
│   │   └── pec_net/
│   │       ├── config/
│   │       │   └── optimal.yaml
│   │       ├── sdd_trajectories.py
│   │       └── social_utils.py
│   └── tuning/
│       ├── 211130-train_shuijing.sh
│       ├── 211203-eval_shuijing.sh
│       ├── 211203-train_shuijing.sh
│       └── 211209-test_shuijing.sh
├── plot.py
├── requirements.txt
├── rl/
│   ├── .gitignore
│   ├── __init__.py
│   ├── evaluation.py
│   ├── networks/
│   │   ├── __init__.py
│   │   ├── distributions.py
│   │   ├── dummy_vec_env.py
│   │   ├── envs.py
│   │   ├── model.py
│   │   ├── network_utils.py
│   │   ├── selfAttn_srnn_temp_node.py
│   │   ├── shmem_vec_env.py
│   │   ├── srnn_model.py
│   │   └── storage.py
│   ├── ppo/
│   │   ├── __init__.py
│   │   └── ppo.py
│   └── vec_env/
│       ├── __init__.py
│       ├── dummy_vec_env.py
│       ├── envs.py
│       ├── logger.py
│       ├── running_mean_std.py
│       ├── shmem_vec_env.py
│       ├── tile_images.py
│       ├── util.py
│       ├── vec_env.py
│       ├── vec_frame_stack.py
│       ├── vec_normalize.py
│       ├── vec_pretext_normalize.py
│       └── vec_remove_dict_obs.py
├── test.py
└── train.py
Download .txt
SYMBOL INDEX (584 symbols across 71 files)

FILE: arguments.py
  function get_args (line 6) | def get_args():

FILE: collect_data.py
  function collectData (line 13) | def collectData(device, train_data, config):

FILE: crowd_nav/configs/config.py
  class BaseConfig (line 4) | class BaseConfig(object):
    method __init__ (line 5) | def __init__(self):
  class Config (line 9) | class Config(object):

FILE: crowd_nav/policy/orca.py
  class ORCA (line 7) | class ORCA(Policy):
    method __init__ (line 8) | def __init__(self, config):
    method predict (line 64) | def predict(self, state):

FILE: crowd_nav/policy/policy.py
  class Policy (line 5) | class Policy(object):
    method __init__ (line 6) | def __init__(self, config):
    method predict (line 22) | def predict(self, state):
    method reach_destination (line 30) | def reach_destination(state):

FILE: crowd_nav/policy/policy_factory.py
  function none_policy (line 2) | def none_policy():

FILE: crowd_nav/policy/social_force.py
  class SOCIAL_FORCE (line 5) | class SOCIAL_FORCE(Policy):
    method __init__ (line 6) | def __init__(self, config):
    method predict (line 11) | def predict(self, state):

FILE: crowd_nav/policy/srnn.py
  class SRNN (line 7) | class SRNN(Policy):
    method __init__ (line 8) | def __init__(self, config):
    method clip_action (line 17) | def clip_action(self, raw_action, v_pref):
  class selfAttn_merge_SRNN (line 47) | class selfAttn_merge_SRNN(SRNN):
    method __init__ (line 48) | def __init__(self, config):

FILE: crowd_sim/envs/crowd_sim.py
  class CrowdSim (line 19) | class CrowdSim(gym.Env):
    method __init__ (line 24) | def __init__(self):
    method configure (line 88) | def configure(self, config):
    method set_robot (line 188) | def set_robot(self, robot):
    method generate_random_human_position (line 192) | def generate_random_human_position(self, human_num):
    method generate_circle_crossing_human (line 203) | def generate_circle_crossing_human(self):
    method update_last_human_states (line 243) | def update_last_human_states(self, human_visibility, reset):
    method get_true_human_states (line 276) | def get_true_human_states(self):
    method generate_robot_humans (line 287) | def generate_robot_humans(self, phase, human_num=None):
    method smooth_action (line 315) | def smooth_action(self, action):
    method reset (line 361) | def reset(self, phase='train', test_case=None):
    method update_human_goals_randomly (line 415) | def update_human_goals_randomly(self):
    method update_human_goal (line 453) | def update_human_goal(self, human):
    method calc_offset_angle (line 488) | def calc_offset_angle(self, state1, state2):
    method detect_visible (line 513) | def detect_visible(self, state1, state2, robot1 = False, custom_fov=No...
    method get_num_human_in_fov (line 558) | def get_num_human_in_fov(self):
    method last_human_states_obj (line 576) | def last_human_states_obj(self):
    method calc_reward (line 588) | def calc_reward(self, action):
    method generate_ob (line 663) | def generate_ob(self, reset):
    method get_human_actions (line 680) | def get_human_actions(self):
    method step (line 705) | def step(self, action, update=True):
    method render (line 750) | def render(self, mode='human'):

FILE: crowd_sim/envs/crowd_sim_pred.py
  class CrowdSimPred (line 11) | class CrowdSimPred(CrowdSimVarNum):
    method __init__ (line 16) | def __init__(self):
    method configure (line 27) | def configure(self, config):
    method reset (line 33) | def reset(self, phase='train', test_case=None):
    method set_robot (line 38) | def set_robot(self, robot):
    method generate_ob (line 62) | def generate_ob(self, reset, sort=True):
    method step (line 100) | def step(self, action, update=True):
    method calc_reward (line 216) | def calc_reward(self, action, danger_zone='future'):
    method render (line 236) | def render(self, mode='human'):

FILE: crowd_sim/envs/crowd_sim_pred_real_gst.py
  class CrowdSimPredRealGST (line 7) | class CrowdSimPredRealGST(CrowdSimPred):
    method __init__ (line 13) | def __init__(self):
    method set_robot (line 27) | def set_robot(self, robot):
    method talk2Env (line 64) | def talk2Env(self, data):
    method generate_ob (line 76) | def generate_ob(self, reset, sort=False):
    method calc_reward (line 97) | def calc_reward(self, action, danger_zone='future'):
    method render (line 104) | def render(self, mode='human'):

FILE: crowd_sim/envs/crowd_sim_var_num.py
  class CrowdSimVarNum (line 13) | class CrowdSimVarNum(CrowdSim):
    method __init__ (line 18) | def __init__(self):
    method configure (line 31) | def configure(self, config):
    method set_robot (line 37) | def set_robot(self, robot):
    method generate_robot_humans (line 64) | def generate_robot_humans(self, phase, human_num=None):
    method generate_circle_crossing_human (line 116) | def generate_circle_crossing_human(self):
    method calc_human_future_traj (line 152) | def calc_human_future_traj(self, method):
    method generate_ob (line 233) | def generate_ob(self, reset, sort=False):
    method update_human_pos_goal (line 284) | def update_human_pos_goal(self, human):
    method reset (line 303) | def reset(self, phase='train', test_case=None):
    method step (line 366) | def step(self, action, update=True):
    method calc_reward (line 465) | def calc_reward(self, action, danger_zone='circle'):
    method render (line 564) | def render(self, mode='human'):

FILE: crowd_sim/envs/crowd_sim_var_num_collect.py
  class CrowdSimVarNumCollect (line 11) | class CrowdSimVarNumCollect(CrowdSimVarNum):
    method __init__ (line 17) | def __init__(self):
    method set_robot (line 27) | def set_robot(self, robot):
    method reset (line 43) | def reset(self, phase='train', test_case=None):
    method generate_ob (line 98) | def generate_ob(self, reset, sort=False):
    method calc_reward (line 136) | def calc_reward(self, action, danger_zone='circle'):

FILE: crowd_sim/envs/ros_turtlebot2i_env.py
  class rosTurtlebot2iEnv (line 27) | class rosTurtlebot2iEnv(CrowdSimPredRealGST):
    method __init__ (line 34) | def __init__(self):
    method configure (line 66) | def configure(self, config):
    method set_robot (line 103) | def set_robot(self, robot):
    method set_ob_act_space (line 106) | def set_ob_act_space(self):
    method state_cb (line 147) | def state_cb(self, jointStateMsg, humanArrayMsg):
    method state_cb_dummy (line 154) | def state_cb_dummy(self, jointStateMsg):
    method readMsg (line 157) | def readMsg(self):
    method reset (line 248) | def reset(self):
    method smooth (line 292) | def smooth(self, v, w):
    method generate_ob (line 312) | def generate_ob(self):
    method step (line 339) | def step(self, action, update=True):
    method render (line 430) | def render(self, mode='human'):
    method shutdown (line 520) | def shutdown(self):
    method smoothStop (line 526) | def smoothStop(self):

FILE: crowd_sim/envs/utils/agent.py
  class Agent (line 10) | class Agent(object):
    method __init__ (line 11) | def __init__(self, config, section):
    method print_info (line 39) | def print_info(self):
    method sample_random_attributes (line 44) | def sample_random_attributes(self):
    method set (line 52) | def set(self, px, py, gx, gy, vx, vy, theta, radius=None, v_pref=None):
    method set_list (line 67) | def set_list(self, px, py, vx, vy, radius, gx, gy, v_pref, theta):
    method get_observable_state (line 78) | def get_observable_state(self):
    method get_observable_state_list (line 81) | def get_observable_state_list(self):
    method get_observable_state_list_noV (line 84) | def get_observable_state_list_noV(self):
    method get_next_observable_state (line 87) | def get_next_observable_state(self, action):
    method get_full_state (line 100) | def get_full_state(self):
    method get_full_state_list (line 103) | def get_full_state_list(self):
    method get_full_state_list_noV (line 106) | def get_full_state_list_noV(self):
    method get_position (line 110) | def get_position(self):
    method set_position (line 113) | def set_position(self, position):
    method get_goal_position (line 117) | def get_goal_position(self):
    method get_velocity (line 120) | def get_velocity(self):
    method set_velocity (line 124) | def set_velocity(self, velocity):
    method act (line 130) | def act(self, ob):
    method check_validity (line 137) | def check_validity(self, action):
    method compute_position (line 143) | def compute_position(self, action, delta_t):
    method step (line 170) | def step(self, action):
    method one_step_lookahead (line 185) | def one_step_lookahead(self, pos, action):
    method reached_destination (line 194) | def reached_destination(self):

FILE: crowd_sim/envs/utils/human.py
  class Human (line 5) | class Human(Agent):
    method __init__ (line 7) | def __init__(self, config, section):
    method act (line 14) | def act(self, ob):
    method act_joint_state (line 26) | def act_joint_state(self, ob):

FILE: crowd_sim/envs/utils/info.py
  class Timeout (line 1) | class Timeout(object):
    method __init__ (line 2) | def __init__(self):
    method __str__ (line 5) | def __str__(self):
  class ReachGoal (line 9) | class ReachGoal(object):
    method __init__ (line 10) | def __init__(self):
    method __str__ (line 13) | def __str__(self):
  class Danger (line 17) | class Danger(object):
    method __init__ (line 18) | def __init__(self, min_dist):
    method __str__ (line 21) | def __str__(self):
  class Collision (line 25) | class Collision(object):
    method __init__ (line 26) | def __init__(self):
    method __str__ (line 29) | def __str__(self):
  class OutRoad (line 32) | class OutRoad(object):
    method __init__ (line 33) | def __init__(self):
    method __str__ (line 36) | def __str__(self):
  class Nothing (line 39) | class Nothing(object):
    method __init__ (line 40) | def __init__(self):
    method __str__ (line 43) | def __str__(self):

FILE: crowd_sim/envs/utils/recorder.py
  class Recoder (line 5) | class Recoder(object):
    method __init__ (line 6) | def __init__(self):
    method saveEpisode (line 22) | def saveEpisode(self,episodeCounter):
    method loadActions (line 52) | def loadActions(self):
    method clear (line 58) | def clear(self):
  class humanRecoder (line 66) | class humanRecoder(object):
    method __init__ (line 67) | def __init__(self, human_num):
    method addInitPos (line 78) | def addInitPos(self, human_id, px, py, radius, v_pref):
    method addGoalPos (line 82) | def addGoalPos(self, human_id, px, py):
    method loadLists (line 85) | def loadLists(self, initPos, goalPos):
    method getInitList (line 89) | def getInitList(self):
    method getGoalList (line 92) | def getGoalList(self):
    method getInitPos (line 95) | def getInitPos(self, human_id):
    method getNextGoalPos (line 100) | def getNextGoalPos(self, human_id):
    method goalIsEmpty (line 107) | def goalIsEmpty(self, human_id):
    method clear (line 113) | def clear(self):
  class jointStateRecoder (line 117) | class jointStateRecoder(object):
    method __init__ (line 118) | def __init__(self, human_num):
    method add_traj (line 141) | def add_traj(self, seed, ob, srnn):
    method add_label (line 168) | def add_label(self, seed, respond_traj_len, traj_len, respond_time, to...
    method save_to_file (line 176) | def save_to_file(self, directory):
    method clear (line 204) | def clear(self):

FILE: crowd_sim/envs/utils/robot.py
  class Robot (line 5) | class Robot(Agent):
    method __init__ (line 6) | def __init__(self, config,section):
    method act (line 10) | def act(self, ob):
    method actWithJointState (line 19) | def actWithJointState(self,ob):

FILE: crowd_sim/envs/utils/state.py
  class JointState (line 10) | class JointState(object):
    method __init__ (line 13) | def __init__(self, self_state, human_states):
    method to_flatten_list (line 32) | def to_flatten_list(self):

FILE: gst_updated/scripts/experiments/eval.py
  function main (line 12) | def main(args):
  function inference (line 56) | def inference(loader, model, args, mode='val', tau=1., device='cuda:0'):

FILE: gst_updated/scripts/experiments/eval_trajnet.py
  function main (line 12) | def main(args):
  function test (line 56) | def test(loader, model, tau=0.03, device='cuda:0'):
  function inference (line 85) | def inference(loader, model, args, mode='val', tau=1., device='cuda:0'):

FILE: gst_updated/scripts/experiments/test.py
  function main (line 12) | def main(args):
  function inference (line 49) | def inference(loader, model, args, mode='val', tau=1., device='cuda:0'):

FILE: gst_updated/scripts/experiments/test_gst.py
  function main (line 12) | def main(args):
  function inference (line 49) | def inference(loader, model, args, mode='val', tau=1., device='cuda:0'):

FILE: gst_updated/scripts/experiments/train.py
  function main (line 17) | def main(args):
  function train (line 50) | def train(args, data_loaders, writer, logdir, device='cuda:0'):

FILE: gst_updated/scripts/wrapper/crowd_nav_interface_multi_env_parallel.py
  class CrowdNavPredInterfaceMultiEnv (line 11) | class CrowdNavPredInterfaceMultiEnv(object):
    method __init__ (line 13) | def __init__(self, load_path, device, config, num_env):
    method forward (line 35) | def forward(self, input_traj,input_binary_mask, sampling = True):

FILE: gst_updated/scripts/wrapper/crowd_nav_interface_multi_env_visualization_test_single_batch.py
  function load_batch_dataset (line 15) | def load_batch_dataset(pkg_path, subfolder='test', num_workers=4, shuffl...
  function visualize_trajectory (line 101) | def visualize_trajectory(obs_traj, loss_mask, sample_index, obs_seq_len=...
  function CrowdNavPredInterfaceMultiEnv (line 130) | def CrowdNavPredInterfaceMultiEnv(
  function visualize_output_trajectory_deterministic (line 331) | def visualize_output_trajectory_deterministic(input_traj, input_binary_m...

FILE: gst_updated/scripts/wrapper/crowd_nav_interface_parallel.py
  function seq_to_graph (line 9) | def seq_to_graph(seq_, seq_rel, attn_mech='rel_conv'):
  class CrowdNavPredInterfaceMultiEnv (line 23) | class CrowdNavPredInterfaceMultiEnv(object):
    method __init__ (line 25) | def __init__(self, load_path, device, config, num_env):
    method forward (line 45) | def forward(self, input_traj,input_binary_mask, sampling = True):
  function visualize_output_trajectory_deterministic (line 117) | def visualize_output_trajectory_deterministic(input_traj, input_binary_m...

FILE: gst_updated/src/gumbel_social_transformer/edge_selector_ghost.py
  class EdgeSelector (line 8) | class EdgeSelector(nn.Module):
    method __init__ (line 10) | def __init__(self, d_motion, d_model, nhead=4, dropout=0.1, activation...
    method forward (line 27) | def forward(self, x, A, attn_mask, tau=1., hard=False, device='cuda:0'):
    method edge_sampler (line 101) | def edge_sampler(self, edge_multinomial, tau=1., hard=False):

FILE: gst_updated/src/gumbel_social_transformer/edge_selector_no_ghost.py
  class EdgeSelector (line 8) | class EdgeSelector(nn.Module):
    method __init__ (line 10) | def __init__(self, d_motion, d_model, nhead=4, dropout=0.1, activation...
    method forward (line 26) | def forward(self, x, A, attn_mask, tau=1., hard=False, device='cuda:0'):
    method edge_sampler (line 91) | def edge_sampler(self, edge_multinomial, tau=1., hard=False):

FILE: gst_updated/src/gumbel_social_transformer/gumbel_social_transformer.py
  class GumbelSocialTransformer (line 5) | class GumbelSocialTransformer(nn.Module):
    method __init__ (line 6) | def __init__(self, d_motion, d_model, nhead_nodes, nhead_edges, nlayer...
    method forward (line 43) | def forward(self, x, A, attn_mask, tau=1., hard=False, device='cuda:0'):

FILE: gst_updated/src/gumbel_social_transformer/mha.py
  function multi_head_attention_forward (line 12) | def multi_head_attention_forward(
  class VanillaMultiheadAttention (line 259) | class VanillaMultiheadAttention(Module):
    method __init__ (line 263) | def __init__(self, embed_dim, num_heads, dropout=0., bias=True, add_bi...
    method _reset_parameters (line 303) | def _reset_parameters(self):
    method __setstate__ (line 319) | def __setstate__(self, state):
    method forward (line 325) | def forward(self, query: Tensor, key: Tensor, value: Tensor, key_paddi...

FILE: gst_updated/src/gumbel_social_transformer/node_encoder_layer_ghost.py
  class NodeEncoderLayer (line 6) | class NodeEncoderLayer(nn.Module):
    method __init__ (line 8) | def __init__(self, d_model, nhead, dim_feedforward=512, dropout=0.1, a...
    method forward (line 26) | def forward(self, x, sampled_edges, attn_mask, device="cuda:0"):

FILE: gst_updated/src/gumbel_social_transformer/node_encoder_layer_no_ghost.py
  class NodeEncoderLayer (line 6) | class NodeEncoderLayer(nn.Module):
    method __init__ (line 8) | def __init__(self, d_model, nhead, dim_feedforward=512, dropout=0.1, a...
    method forward (line 25) | def forward(self, x, sampled_edges, attn_mask, device="cuda:0"):

FILE: gst_updated/src/gumbel_social_transformer/st_model.py
  function offset_error_square_full_partial (line 15) | def offset_error_square_full_partial(x_pred, x_target, loss_mask_ped, lo...
  function negative_log_likelihood_full_partial (line 62) | def negative_log_likelihood_full_partial(gaussian_params, x_target, loss...
  class st_model (line 115) | class st_model(nn.Module):
    method __init__ (line 117) | def __init__(self, args, device='cuda:0'):
    method raw2gaussian (line 192) | def raw2gaussian(self, prob_raw):
    method sample_gaussian (line 211) | def sample_gaussian(self, gaussian_params, device='cuda:0', detach_sam...
    method edge_evolution (line 246) | def edge_evolution(self, xt_plus, At, device='cuda:0'):
    method forward (line 271) | def forward(self, x, A, attn_mask, loss_mask_rel, tau=1., hard=False, ...

FILE: gst_updated/src/gumbel_social_transformer/temperature_scheduler.py
  class Temp_Scheduler (line 1) | class Temp_Scheduler(object):
    method __init__ (line 2) | def __init__(self, total_epochs, curr_temp, base_temp, temp_min=0.33, ...
    method step (line 10) | def step(self, epoch=None):
    method decay_whole_process (line 13) | def decay_whole_process(self, epoch=None):

FILE: gst_updated/src/gumbel_social_transformer/temporal_convolution_net.py
  class TemporalConvolutionNet (line 4) | class TemporalConvolutionNet(nn.Module):
    method __init__ (line 5) | def __init__(
    method forward (line 40) | def forward(self, x):

FILE: gst_updated/src/gumbel_social_transformer/utils.py
  function _get_activation_fn (line 8) | def _get_activation_fn(activation):
  function _get_clones (line 15) | def _get_clones(module, N):
  function gumbel_softmax (line 18) | def gumbel_softmax(logits, tau=1, hard=False, eps=1e-10):
  function gumbel_softmax_sample (line 32) | def gumbel_softmax_sample(logits, tau=1, eps=1e-10):
  function sample_gumbel (line 39) | def sample_gumbel(shape, eps=1e-10):

FILE: gst_updated/src/mgnn/batch_trajectories.py
  class BatchTrajectoriesDataset (line 3) | class BatchTrajectoriesDataset(Dataset):
    method __init__ (line 4) | def __init__(
    method add_batch (line 22) | def add_batch(
    method __len__ (line 51) | def __len__(self):
    method __getitem__ (line 55) | def __getitem__(self, index):

FILE: gst_updated/src/mgnn/trajectories.py
  class TrajectoriesDataset (line 8) | class TrajectoriesDataset(Dataset):
    method __init__ (line 9) | def __init__(
    method __len__ (line 145) | def __len__(self):
    method __getitem__ (line 149) | def __getitem__(self, index):
  function read_file (line 163) | def read_file(_path, delim='\t'):

FILE: gst_updated/src/mgnn/trajectories_sdd.py
  class TrajectoriesDataset (line 8) | class TrajectoriesDataset(Dataset):
    method __init__ (line 9) | def __init__(
    method __len__ (line 146) | def __len__(self):
    method __getitem__ (line 150) | def __getitem__(self, index):
  function read_file (line 164) | def read_file(_path, delim='\t'):
  function read_sdd_file (line 177) | def read_sdd_file(_path):

FILE: gst_updated/src/mgnn/trajectories_trajnet.py
  class TrajectoriesDataset (line 9) | class TrajectoriesDataset(Dataset):
    method __init__ (line 10) | def __init__(
    method __len__ (line 177) | def __len__(self):
    method __getitem__ (line 181) | def __getitem__(self, index):
  function read_file (line 195) | def read_file(_path, delim='\t'):
  function read_trajnet_file (line 209) | def read_trajnet_file(_path):

FILE: gst_updated/src/mgnn/trajectories_trajnet_testset.py
  class TrajectoriesDataset (line 9) | class TrajectoriesDataset(Dataset):
    method __init__ (line 10) | def __init__(
    method __len__ (line 175) | def __len__(self):
    method __getitem__ (line 179) | def __getitem__(self, index):
  function read_file (line 203) | def read_file(_path, delim='\t'):
  function read_trajnet_file (line 217) | def read_trajnet_file(_path):

FILE: gst_updated/src/mgnn/utils.py
  function average_offset_error (line 8) | def average_offset_error(x_pred, x_target, loss_mask=None):
  function final_offset_error (line 18) | def final_offset_error(x_pred, x_target, loss_mask=None):
  function negative_log_likelihood (line 29) | def negative_log_likelihood(gaussian_params, x_target, loss_mask=None):
  function seq_to_graph (line 43) | def seq_to_graph(seq_, seq_rel, attn_mech='glob_kip'):
  function rotate_graph (line 75) | def rotate_graph(vtx, adj, theta):
  function random_rotate_graph (line 86) | def random_rotate_graph(args, vtx_obs, adj_obs, vtx_pred_gt, adj_pred_gt):
  function load_batch_dataset (line 100) | def load_batch_dataset(args, pkg_path, subfolder='train', num_workers=4,...
  function args2writername (line 127) | def args2writername(args):
  function arg_parse (line 154) | def arg_parse():

FILE: gst_updated/src/pec_net/sdd_trajectories.py
  class SDDTrajectoriesDataset (line 4) | class SDDTrajectoriesDataset(Dataset):
    method __init__ (line 5) | def __init__(
    method add_batch (line 23) | def add_batch(
    method __len__ (line 52) | def __len__(self):
    method __getitem__ (line 55) | def __getitem__(self, index):

FILE: gst_updated/src/pec_net/social_utils.py
  function naive_social (line 13) | def naive_social(p1_key, p2_key, all_data_dict):
  function find_min_time (line 19) | def find_min_time(t1, t2):
  function find_min_dist (line 34) | def find_min_dist(p1x, p1y, p2x, p2y):
  function social_and_temporal_filter (line 47) | def social_and_temporal_filter(p1_key, p2_key, all_data_dict, time_thres...
  function mark_similar (line 60) | def mark_similar(mask, sim_list):
  function collect_data (line 66) | def collect_data(set_name, dataset_type = 'image', batch_size=512, time_...
  function generate_pooled_data (line 137) | def generate_pooled_data(b_size, t_tresh, d_tresh, train=True, scene=Non...
  function initial_pos (line 152) | def initial_pos(traj_batches):
  function calculate_loss (line 160) | def calculate_loss(x, reconstructed_x, mean, log_var, criterion, future,...
  class SocialDataset (line 171) | class SocialDataset(data.Dataset):
    method __init__ (line 173) | def __init__(self, set_name, pkg_path, b_size=4096, t_tresh=60, d_tres...
  function split_square_block_matrix (line 227) | def split_square_block_matrix(block_mat):

FILE: rl/evaluation.py
  function evaluate (line 7) | def evaluate(actor_critic, eval_envs, num_processes, device, test_size, ...

FILE: rl/networks/distributions.py
  class FixedCategorical (line 18) | class FixedCategorical(torch.distributions.Categorical):
    method sample (line 19) | def sample(self):
    method log_probs (line 22) | def log_probs(self, actions):
    method mode (line 31) | def mode(self):
  class FixedNormal (line 36) | class FixedNormal(torch.distributions.Normal):
    method log_probs (line 37) | def log_probs(self, actions):
    method entrop (line 40) | def entrop(self):
    method mode (line 43) | def mode(self):
  class FixedBernoulli (line 48) | class FixedBernoulli(torch.distributions.Bernoulli):
    method log_probs (line 49) | def log_probs(self, actions):
    method entropy (line 52) | def entropy(self):
    method mode (line 55) | def mode(self):
  class Categorical (line 59) | class Categorical(nn.Module):
    method __init__ (line 60) | def __init__(self, num_inputs, num_outputs):
    method forward (line 71) | def forward(self, x):
  class DiagGaussian (line 76) | class DiagGaussian(nn.Module):
    method __init__ (line 77) | def __init__(self, num_inputs, num_outputs):
    method forward (line 86) | def forward(self, x):
  class Bernoulli (line 98) | class Bernoulli(nn.Module):
    method __init__ (line 99) | def __init__(self, num_inputs, num_outputs):
    method forward (line 107) | def forward(self, x):

FILE: rl/networks/dummy_vec_env.py
  class DummyVecEnv (line 5) | class DummyVecEnv(VecEnv):
    method __init__ (line 12) | def __init__(self, env_fns):
    method step_async (line 31) | def step_async(self, actions):
    method step_wait (line 45) | def step_wait(self):
    method reset (line 58) | def reset(self):
    method talk2Env_async (line 64) | def talk2Env_async(self, data):
    method talk2Env_wait (line 68) | def talk2Env_wait(self):
    method _save_obs (line 71) | def _save_obs(self, e, obs):
    method _obs_from_buf (line 78) | def _obs_from_buf(self):
    method get_images (line 81) | def get_images(self):
    method render (line 84) | def render(self, mode='human'):

FILE: rl/networks/envs.py
  function make_env (line 36) | def make_env(env_id, seed, rank, log_dir, allow_early_resets, config=Non...
  function make_vec_envs (line 97) | def make_vec_envs(env_name,
  class TimeLimitMask (line 144) | class TimeLimitMask(gym.Wrapper):
    method step (line 145) | def step(self, action):
    method reset (line 152) | def reset(self, **kwargs):
  class MaskGoal (line 157) | class MaskGoal(gym.ObservationWrapper):
    method observation (line 158) | def observation(self, observation):
  class TransposeObs (line 164) | class TransposeObs(gym.ObservationWrapper):
    method __init__ (line 165) | def __init__(self, env=None):
  class TransposeImage (line 172) | class TransposeImage(TransposeObs):
    method __init__ (line 173) | def __init__(self, env=None, op=[2, 0, 1]):
    method observation (line 189) | def observation(self, ob):
  class VecPyTorch (line 193) | class VecPyTorch(VecEnvWrapper):
    method __init__ (line 194) | def __init__(self, venv, device):
    method reset (line 200) | def reset(self):
    method step_async (line 209) | def step_async(self, actions):
    method step_wait (line 216) | def step_wait(self):
    method render_traj (line 226) | def render_traj(self, path, episode_num):
  class VecNormalize (line 234) | class VecNormalize(VecNormalize_):
    method __init__ (line 235) | def __init__(self, *args, **kwargs):
    method _obfilt (line 239) | def _obfilt(self, obs, update=True):
    method train (line 250) | def train(self):
    method eval (line 253) | def eval(self):
  class VecPyTorchFrameStack (line 259) | class VecPyTorchFrameStack(VecEnvWrapper):
    method __init__ (line 260) | def __init__(self, venv, nstack, device=None):
    method step_wait (line 279) | def step_wait(self):
    method reset (line 289) | def reset(self):
    method close (line 298) | def close(self):

FILE: rl/networks/model.py
  class Flatten (line 9) | class Flatten(nn.Module):
    method forward (line 10) | def forward(self, x):
  class Policy (line 14) | class Policy(nn.Module):
    method __init__ (line 16) | def __init__(self, obs_shape, action_space, base=None, base_kwargs=None):
    method is_recurrent (line 45) | def is_recurrent(self):
    method recurrent_hidden_state_size (line 49) | def recurrent_hidden_state_size(self):
    method forward (line 53) | def forward(self, inputs, rnn_hxs, masks):
    method act (line 56) | def act(self, inputs, rnn_hxs, masks, deterministic=False):
    method get_value (line 76) | def get_value(self, inputs, rnn_hxs, masks):
    method evaluate_actions (line 82) | def evaluate_actions(self, inputs, rnn_hxs, masks, action):

FILE: rl/networks/network_utils.py
  function get_render_func (line 10) | def get_render_func(venv):
  function get_vec_normalize (line 21) | def get_vec_normalize(venv):
  class AddBias (line 31) | class AddBias(nn.Module):
    method __init__ (line 32) | def __init__(self, bias):
    method forward (line 36) | def forward(self, x):
  function update_linear_schedule (line 45) | def update_linear_schedule(optimizer, epoch, total_num_epochs, initial_lr):
  function init (line 52) | def init(module, weight_init, bias_init, gain=1):
  function cleanup_log_dir (line 58) | def cleanup_log_dir(log_dir):

FILE: rl/networks/selfAttn_srnn_temp_node.py
  class SpatialEdgeSelfAttn (line 5) | class SpatialEdgeSelfAttn(nn.Module):
    method __init__ (line 10) | def __init__(self, args):
    method create_attn_mask (line 49) | def create_attn_mask(self, each_seq_len, seq_len, nenv, max_human_num):
    method forward (line 63) | def forward(self, inp, each_seq_len):
  class EdgeAttention_M (line 95) | class EdgeAttention_M(nn.Module):
    method __init__ (line 99) | def __init__(self, args):
    method create_attn_mask (line 132) | def create_attn_mask(self, each_seq_len, seq_len, nenv, max_human_num):
    method att_func (line 145) | def att_func(self, temporal_embed, spatial_embed, h_spatials, attn_mas...
    method forward (line 184) | def forward(self, h_temporal, h_spatials, each_seq_len):
  class EndRNN (line 225) | class EndRNN(RNNBase):
    method __init__ (line 229) | def __init__(self, args):
    method forward (line 262) | def forward(self, robot_s, h_spatial_other, h, masks):
  class selfAttn_merge_SRNN (line 287) | class selfAttn_merge_SRNN(nn.Module):
    method __init__ (line 291) | def __init__(self, obs_space_dict, args, infer=False):
    method forward (line 360) | def forward(self, inputs, rnn_hxs, masks, infer=False):
  function reshapeT (line 452) | def reshapeT(T, seq_length, nenv):

FILE: rl/networks/shmem_vec_env.py
  class ShmemVecEnv (line 21) | class ShmemVecEnv(VecEnv):
    method __init__ (line 26) | def __init__(self, env_fns, spaces=None, context='spawn'):
    method reset (line 62) | def reset(self):
    method step_async (line 70) | def step_async(self, actions):
    method step_wait (line 76) | def step_wait(self):
    method talk2Env_async (line 82) | def talk2Env_async(self, data):
    method talk2Env_wait (line 88) | def talk2Env_wait(self):
    method close_extras (line 93) | def close_extras(self):
    method get_images (line 104) | def get_images(self, mode='human'):
    method _decode_obses (line 109) | def _decode_obses(self, obs):
  function _subproc_worker (line 119) | def _subproc_worker(pipe, parent_pipe, env_fn_wrapper, obs_bufs, obs_sha...

FILE: rl/networks/srnn_model.py
  class RNNBase (line 10) | class RNNBase(nn.Module):
    method __init__ (line 15) | def __init__(self, args, edge):
    method _forward_gru (line 35) | def _forward_gru(self, x, hxs, masks):
  class HumanNodeRNN (line 108) | class HumanNodeRNN(RNNBase):
    method __init__ (line 112) | def __init__(self, args):
    method forward (line 149) | def forward(self, pos, h_temporal, h_spatial_other, h, masks):
  class HumanHumanEdgeRNN (line 177) | class HumanHumanEdgeRNN(RNNBase):
    method __init__ (line 181) | def __init__(self, args):
    method forward (line 202) | def forward(self, inp, h, masks):
  class EdgeAttention (line 219) | class EdgeAttention(nn.Module):
    method __init__ (line 223) | def __init__(self, args):
    method att_func (line 256) | def att_func(self, temporal_embed, spatial_embed, h_spatials):
    method forward (line 291) | def forward(self, h_temporal, h_spatials):
  class SRNN (line 326) | class SRNN(nn.Module):
    method __init__ (line 330) | def __init__(self, obs_space_dict, args, infer=False):
    method forward (line 389) | def forward(self, inputs, rnn_hxs, masks, infer=False):
  function reshapeT (line 471) | def reshapeT(T, seq_length, nenv):

FILE: rl/networks/storage.py
  function _flatten_helper (line 5) | def _flatten_helper(T, N, _tensor):
  class RolloutStorage (line 13) | class RolloutStorage(object):
    method __init__ (line 15) | def __init__(self, num_steps, num_processes, obs_shape, action_space,
    method to (line 56) | def to(self, device):
    method insert (line 70) | def insert(self, obs, recurrent_hidden_states, actions, action_log_probs,
    method after_update (line 88) | def after_update(self):
    method compute_returns (line 98) | def compute_returns(self,
    method feed_forward_generator (line 139) | def feed_forward_generator(self,
    method recurrent_generator (line 184) | def recurrent_generator(self, advantages, num_mini_batch):

FILE: rl/ppo/ppo.py
  class PPO (line 6) | class PPO():
    method __init__ (line 8) | def __init__(self,
    method update (line 36) | def update(self, rollouts):

FILE: rl/vec_env/dummy_vec_env.py
  class DummyVecEnv (line 6) | class DummyVecEnv(VecEnv):
    method __init__ (line 13) | def __init__(self, env_fns):
    method step_async (line 33) | def step_async(self, actions):
    method step_wait (line 47) | def step_wait(self):
    method reset (line 60) | def reset(self):
    method talk2Env_async (line 66) | def talk2Env_async(self, data):
    method talk2Env_wait (line 70) | def talk2Env_wait(self):
    method _save_obs (line 73) | def _save_obs(self, e, obs):
    method _obs_from_buf (line 83) | def _obs_from_buf(self):
    method get_images (line 86) | def get_images(self):
    method render (line 89) | def render(self, mode='human'):

FILE: rl/vec_env/envs.py
  function make_env (line 18) | def make_env(env_id, seed, rank):
  function make_vec_envs (line 33) | def make_vec_envs(env_name,
  class TimeLimitMask (line 62) | class TimeLimitMask(gym.Wrapper):
    method step (line 63) | def step(self, action):
    method reset (line 70) | def reset(self, **kwargs):
  class VecPyTorch (line 73) | class VecPyTorch(VecEnvWrapper):
    method __init__ (line 74) | def __init__(self, venv, device):
    method reset (line 80) | def reset(self):
    method step_async (line 89) | def step_async(self, actions):
    method step_wait (line 96) | def step_wait(self):

FILE: rl/vec_env/logger.py
  class KVWriter (line 19) | class KVWriter(object):
    method writekvs (line 20) | def writekvs(self, kvs):
  class SeqWriter (line 23) | class SeqWriter(object):
    method writeseq (line 24) | def writeseq(self, seq):
  class HumanOutputFormat (line 27) | class HumanOutputFormat(KVWriter, SeqWriter):
    method __init__ (line 28) | def __init__(self, filename_or_file):
    method writekvs (line 37) | def writekvs(self, kvs):
    method _truncate (line 71) | def _truncate(self, s):
    method writeseq (line 75) | def writeseq(self, seq):
    method close (line 84) | def close(self):
  class JSONOutputFormat (line 88) | class JSONOutputFormat(KVWriter):
    method __init__ (line 89) | def __init__(self, filename):
    method writekvs (line 92) | def writekvs(self, kvs):
    method close (line 99) | def close(self):
  class CSVOutputFormat (line 102) | class CSVOutputFormat(KVWriter):
    method __init__ (line 103) | def __init__(self, filename):
    method writekvs (line 108) | def writekvs(self, kvs):
    method close (line 135) | def close(self):
  class TensorBoardOutputFormat (line 139) | class TensorBoardOutputFormat(KVWriter):
    method __init__ (line 143) | def __init__(self, dir):
    method writekvs (line 158) | def writekvs(self, kvs):
    method close (line 169) | def close(self):
  function make_output_format (line 174) | def make_output_format(format, ev_dir, log_suffix=''):
  function logkv (line 193) | def logkv(key, val):
  function logkv_mean (line 201) | def logkv_mean(key, val):
  function logkvs (line 207) | def logkvs(d):
  function dumpkvs (line 214) | def dumpkvs():
  function getkvs (line 220) | def getkvs():
  function log (line 224) | def log(*args, level=INFO):
  function debug (line 230) | def debug(*args):
  function info (line 233) | def info(*args):
  function warn (line 236) | def warn(*args):
  function error (line 239) | def error(*args):
  function set_level (line 243) | def set_level(level):
  function set_comm (line 249) | def set_comm(comm):
  function get_dir (line 252) | def get_dir():
  function profile_kv (line 263) | def profile_kv(scopename):
  function profile (line 271) | def profile(n):
  function get_current (line 289) | def get_current():
  class Logger (line 296) | class Logger(object):
    method __init__ (line 301) | def __init__(self, dir, output_formats, comm=None):
    method logkv (line 311) | def logkv(self, key, val):
    method logkv_mean (line 314) | def logkv_mean(self, key, val):
    method dumpkvs (line 319) | def dumpkvs(self):
    method log (line 337) | def log(self, *args, level=INFO):
    method set_level (line 343) | def set_level(self, level):
    method set_comm (line 346) | def set_comm(self, comm):
    method get_dir (line 349) | def get_dir(self):
    method close (line 352) | def close(self):
    method _do_log (line 358) | def _do_log(self, args):
  function get_rank_without_mpi_import (line 363) | def get_rank_without_mpi_import():
  function configure (line 372) | def configure(dir=None, format_strs=None, comm=None, log_suffix=''):
  function _configure_default_logger (line 401) | def _configure_default_logger():
  function reset (line 405) | def reset():
  function scoped_configure (line 412) | def scoped_configure(dir=None, format_strs=None, comm=None):
  function _demo (line 423) | def _demo():
  function read_json (line 456) | def read_json(fname):
  function read_csv (line 464) | def read_csv(fname):
  function read_tb (line 468) | def read_tb(path):

FILE: rl/vec_env/running_mean_std.py
  class RunningMeanStd (line 6) | class RunningMeanStd(object):
    method __init__ (line 7) | def __init__(self, epsilon = 1e-4, shape = ()):
    method update (line 18) | def update(self, arr):
    method update_from_moments (line 24) | def update_from_moments(self, batch_mean, batch_var, batch_count):

FILE: rl/vec_env/shmem_vec_env.py
  class ShmemVecEnv (line 20) | class ShmemVecEnv(VecEnv):
    method __init__ (line 25) | def __init__(self, env_fns, spaces=None, context='spawn'):
    method reset (line 61) | def reset(self):
    method step_async (line 69) | def step_async(self, actions):
    method step_wait (line 75) | def step_wait(self):
    method talk2Env_async (line 81) | def talk2Env_async(self, data):
    method talk2Env_wait (line 87) | def talk2Env_wait(self):
    method close_extras (line 92) | def close_extras(self):
    method get_images (line 103) | def get_images(self, mode='human'):
    method _decode_obses (line 108) | def _decode_obses(self, obs):
  function _subproc_worker (line 124) | def _subproc_worker(pipe, parent_pipe, env_fn_wrapper, obs_bufs, obs_sha...

FILE: rl/vec_env/tile_images.py
  function tile_images (line 3) | def tile_images(img_nhwc):

FILE: rl/vec_env/util.py
  function copy_obs_dict (line 11) | def copy_obs_dict(obs):
  function dict_to_obs (line 18) | def dict_to_obs(obs_dict):
  function obs_space_info (line 28) | def obs_space_info(obs_space):
  function obs_to_dict (line 56) | def obs_to_dict(obs):

FILE: rl/vec_env/vec_env.py
  class AlreadySteppingError (line 10) | class AlreadySteppingError(Exception):
    method __init__ (line 16) | def __init__(self):
  class NotSteppingError (line 21) | class NotSteppingError(Exception):
    method __init__ (line 27) | def __init__(self):
  class VecEnv (line 32) | class VecEnv(ABC):
    method __init__ (line 46) | def __init__(self, num_envs, observation_space, action_space):
    method reset (line 52) | def reset(self):
    method step_async (line 64) | def step_async(self, actions):
    method step_wait (line 76) | def step_wait(self):
    method talk2Env_async (line 90) | def talk2Env_async(self, data):
    method talk2Env_wait (line 94) | def talk2Env_wait(self):
    method close_extras (line 97) | def close_extras(self):
    method close (line 104) | def close(self):
    method step (line 112) | def step(self, actions):
    method render (line 121) | def render(self, mode='human'):
    method talk2Env (line 132) | def talk2Env(self, data):
    method get_images (line 136) | def get_images(self):
    method unwrapped (line 143) | def unwrapped(self):
    method get_viewer (line 149) | def get_viewer(self):
  class VecEnvWrapper (line 155) | class VecEnvWrapper(VecEnv):
    method __init__ (line 161) | def __init__(self, venv, observation_space=None, action_space=None):
    method step_async (line 167) | def step_async(self, actions):
    method talk2Env_async (line 170) | def talk2Env_async(self, data):
    method talk2Env_wait (line 174) | def talk2Env_wait(self):
    method reset (line 178) | def reset(self):
    method step_wait (line 182) | def step_wait(self):
    method close (line 185) | def close(self):
    method render (line 188) | def render(self, mode='human'):
    method get_images (line 191) | def get_images(self):
    method __getattr__ (line 194) | def __getattr__(self, name):
  class VecEnvObservationWrapper (line 199) | class VecEnvObservationWrapper(VecEnvWrapper):
    method process (line 201) | def process(self, obs):
    method reset (line 204) | def reset(self):
    method step_wait (line 208) | def step_wait(self):
  class CloudpickleWrapper (line 212) | class CloudpickleWrapper(object):
    method __init__ (line 217) | def __init__(self, x):
    method __getstate__ (line 220) | def __getstate__(self):
    method __setstate__ (line 224) | def __setstate__(self, ob):
  function clear_mpi_env_vars (line 230) | def clear_mpi_env_vars():

FILE: rl/vec_env/vec_frame_stack.py
  class VecFrameStack (line 6) | class VecFrameStack(VecEnvWrapper):
    method __init__ (line 7) | def __init__(self, venv, nstack):
    method step_wait (line 17) | def step_wait(self):
    method reset (line 26) | def reset(self):

FILE: rl/vec_env/vec_normalize.py
  class VecNormalize (line 4) | class VecNormalize(VecEnvWrapper):
    method __init__ (line 10) | def __init__(self, venv, ob=True, ret=True, clipob=10., cliprew=10., g...
    method step_wait (line 26) | def step_wait(self):
    method _obfilt (line 36) | def _obfilt(self, obs):
    method reset (line 44) | def reset(self):

FILE: rl/vec_env/vec_pretext_normalize.py
  class VecPretextNormalize (line 14) | class VecPretextNormalize(VecEnvWrapper):
    method __init__ (line 22) | def __init__(self, venv, ob=False, ret=False, clipob=10., cliprew=10.,...
    method talk2Env_async (line 61) | def talk2Env_async(self, data):
    method talk2Env_wait (line 65) | def talk2Env_wait(self):
    method step_wait (line 69) | def step_wait(self):
    method _obfilt (line 77) | def _obfilt(self, obs):
    method reset (line 85) | def reset(self):
    method process_obs_rew (line 112) | def process_obs_rew(self, O, done, rews=0.):

FILE: rl/vec_env/vec_remove_dict_obs.py
  class VecExtractDictObs (line 3) | class VecExtractDictObs(VecEnvObservationWrapper):
    method __init__ (line 4) | def __init__(self, venv, key):
    method process (line 9) | def process(self, obs):

FILE: test.py
  function main (line 16) | def main():

FILE: train.py
  function main (line 23) | def main():
Condensed preview — 111 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (646K chars).
[
  {
    "path": ".gitignore",
    "chars": 130,
    "preview": "gst_datasets/\ngst_updated/scripts/data\ntrained_models/\n.idea/\nvenv/\n*.pyc \n*__pycache__*\n\ntrain1.py\ntest1.py\ntest2.py\nch"
  },
  {
    "path": "CHANGELOG.md",
    "chars": 476,
    "preview": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\n## 2023-03-28\n### Changed\n- Changed i"
  },
  {
    "path": "LICENSE",
    "chars": 1069,
    "preview": "MIT License\n\nCopyright (c) 2023 Shuijing Liu\n\nPermission is hereby granted, free of charge, to any person obtaining a co"
  },
  {
    "path": "README.md",
    "chars": 10179,
    "preview": "# CrowdNav++\nThis repository contains the codes for our paper titled \"Intention Aware Robot Crowd Navigation with Attent"
  },
  {
    "path": "arguments.py",
    "chars": 7347,
    "preview": "import argparse\n\nimport torch\n\n\ndef get_args():\n    parser = argparse.ArgumentParser(description='RL')\n\n    # the saving"
  },
  {
    "path": "collect_data.py",
    "chars": 2846,
    "preview": "import matplotlib.pyplot as plt\nimport torch\nimport copy\nimport os\nimport numpy as np\nfrom tqdm import trange\n\nfrom crow"
  },
  {
    "path": "crowd_nav/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crowd_nav/configs/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crowd_nav/configs/config.py",
    "chars": 5537,
    "preview": "import numpy as np\nfrom arguments import get_args\n\nclass BaseConfig(object):\n    def __init__(self):\n        pass\n\n\nclas"
  },
  {
    "path": "crowd_nav/policy/orca.py",
    "chars": 6080,
    "preview": "import numpy as np\nimport rvo2\nfrom crowd_nav.policy.policy import Policy\nfrom crowd_sim.envs.utils.action import Action"
  },
  {
    "path": "crowd_nav/policy/policy.py",
    "chars": 889,
    "preview": "import abc\nimport numpy as np\n\n\nclass Policy(object):\n    def __init__(self, config):\n        \"\"\"\n        Base class for"
  },
  {
    "path": "crowd_nav/policy/policy_factory.py",
    "chars": 417,
    "preview": "policy_factory = dict()\ndef none_policy():\n    return None\n\nfrom crowd_nav.policy.orca import ORCA\nfrom crowd_nav.policy"
  },
  {
    "path": "crowd_nav/policy/social_force.py",
    "chars": 2362,
    "preview": "import numpy as np\nfrom crowd_nav.policy.policy import Policy\nfrom crowd_sim.envs.utils.action import ActionXY\n\nclass SO"
  },
  {
    "path": "crowd_nav/policy/srnn.py",
    "chars": 1714,
    "preview": "\nfrom crowd_nav.policy.policy import Policy\nimport numpy as np\nfrom crowd_sim.envs.utils.action import ActionRot, Action"
  },
  {
    "path": "crowd_sim/README.md",
    "chars": 2312,
    "preview": "# Simulation Framework\n## Environment\nThe environment contains n+1 agents. N of them are humans controlled by certain a "
  },
  {
    "path": "crowd_sim/__init__.py",
    "chars": 616,
    "preview": "from gym.envs.registration import register\n\nregister(\n    id='CrowdSim-v0',\n    entry_point='crowd_sim.envs:CrowdSim',\n)"
  },
  {
    "path": "crowd_sim/envs/__init__.py",
    "chars": 236,
    "preview": "from .crowd_sim import CrowdSim\nfrom .crowd_sim_pred import CrowdSimPred\nfrom .crowd_sim_var_num import CrowdSimVarNum\nf"
  },
  {
    "path": "crowd_sim/envs/crowd_sim.py",
    "chars": 34713,
    "preview": "import logging\r\nimport gym\r\nimport numpy as np\r\nimport rvo2\r\nimport random\r\nimport copy\r\n\r\nfrom numpy.linalg import norm"
  },
  {
    "path": "crowd_sim/envs/crowd_sim_pred.py",
    "chars": 17028,
    "preview": "import gym\nimport numpy as np\nfrom numpy.linalg import norm\nimport copy\n\nfrom crowd_sim.envs.utils.action import ActionR"
  },
  {
    "path": "crowd_sim/envs/crowd_sim_pred_real_gst.py",
    "chars": 10584,
    "preview": "import gym\nimport numpy as np\n\nfrom crowd_sim.envs.crowd_sim_pred import CrowdSimPred\n\n\nclass CrowdSimPredRealGST(CrowdS"
  },
  {
    "path": "crowd_sim/envs/crowd_sim_var_num.py",
    "chars": 30357,
    "preview": "import gym\nimport numpy as np\nfrom numpy.linalg import norm\nimport copy\n\nfrom crowd_sim.envs.utils.action import ActionR"
  },
  {
    "path": "crowd_sim/envs/crowd_sim_var_num_collect.py",
    "chars": 7381,
    "preview": "import gym\nimport numpy as np\nfrom numpy.linalg import norm\nimport copy\n\nfrom crowd_sim.envs import *\nfrom crowd_sim.env"
  },
  {
    "path": "crowd_sim/envs/ros_turtlebot2i_env.py",
    "chars": 16507,
    "preview": "import gym\nimport numpy as np\nfrom numpy.linalg import norm\nimport pandas as pd\nimport os\n\n# prevent import error if oth"
  },
  {
    "path": "crowd_sim/envs/utils/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "crowd_sim/envs/utils/action.py",
    "chars": 132,
    "preview": "from collections import namedtuple\n\nActionXY = namedtuple('ActionXY', ['vx', 'vy'])\nActionRot = namedtuple('ActionRot', "
  },
  {
    "path": "crowd_sim/envs/utils/agent.py",
    "chars": 6579,
    "preview": "import numpy as np\nfrom numpy.linalg import norm\nimport abc\nimport logging\nfrom crowd_nav.policy.policy_factory import p"
  },
  {
    "path": "crowd_sim/envs/utils/human.py",
    "chars": 1156,
    "preview": "from crowd_sim.envs.utils.agent import Agent\nfrom crowd_sim.envs.utils.state import JointState\n\n\nclass Human(Agent):\n   "
  },
  {
    "path": "crowd_sim/envs/utils/info.py",
    "chars": 702,
    "preview": "class Timeout(object):\n    def __init__(self):\n        pass\n\n    def __str__(self):\n        return 'Timeout'\n\n\nclass Rea"
  },
  {
    "path": "crowd_sim/envs/utils/recorder.py",
    "chars": 8045,
    "preview": "import pandas as pd\nimport os\nimport numpy as np\n\nclass Recoder(object):\n\tdef __init__(self):\n\n\t\tself.unsmoothed_actions"
  },
  {
    "path": "crowd_sim/envs/utils/robot.py",
    "chars": 598,
    "preview": "from crowd_sim.envs.utils.agent import Agent\nfrom crowd_sim.envs.utils.state import JointState\n\n\nclass Robot(Agent):\n   "
  },
  {
    "path": "crowd_sim/envs/utils/state.py",
    "chars": 1584,
    "preview": "from collections import namedtuple\nimport numpy as np\n\nFullState = namedtuple('FullState', ['px', 'py', 'vx', 'vy', 'rad"
  },
  {
    "path": "gst_updated/.gitignore",
    "chars": 51,
    "preview": "results/\ndatasets/\nlogs/\nmyenv/\n*.pickle\n*.png\n*.pt"
  },
  {
    "path": "gst_updated/LICENSE",
    "chars": 1066,
    "preview": "MIT License\n\nCopyright (c) 2021 Zhe Huang\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\n"
  },
  {
    "path": "gst_updated/README-old.md",
    "chars": 3034,
    "preview": "# gst\nGumbel Social Transformer.\n\nThis is the implementation for the paper\n### Learning Sparse Interaction Graphs of Par"
  },
  {
    "path": "gst_updated/README.md",
    "chars": 3622,
    "preview": "# gst\nGumbel Social Transformer trained with data provided in Nov, 2021 by Shuijing.\n\n## How to set up\n##### 1. Create a"
  },
  {
    "path": "gst_updated/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "gst_updated/requirements.txt",
    "chars": 1946,
    "preview": "absl-py==0.13.0\nanyio==3.3.0\nargon2-cffi==21.1.0\nastunparse==1.6.3\nasync-generator==1.10\nattrs==21.2.0\nBabel==2.9.1\nback"
  },
  {
    "path": "gst_updated/run/create_batch_datasets_eth_ucy.sh",
    "chars": 510,
    "preview": "for dataset in eth hotel univ zara1 zara2; do\n    python -u scripts/data/create_datasets_eth_ucy.py --dataset $dataset |"
  },
  {
    "path": "gst_updated/run/create_batch_datasets_self_eth_ucy.sh",
    "chars": 535,
    "preview": "for dataset in eth hotel univ zara1 zara2; do\n    python -u scripts/data/create_datasets_self_eth_ucy.py --dataset $data"
  },
  {
    "path": "gst_updated/run/create_batch_datasets_synth.sh",
    "chars": 211,
    "preview": "python -u scripts/data/create_datasets_trajnet.py --dataset synth | tee -a logs/create_datasets.txt\npython -u scripts/da"
  },
  {
    "path": "gst_updated/run/download_datasets_models.sh",
    "chars": 674,
    "preview": "cd datasets/shuijing/orca_20humans_fov\ncd results/visual\n# use curl for big-sized google drive files to redirect\ncurl -c"
  },
  {
    "path": "gst_updated/run/download_wrapper_data_model.sh",
    "chars": 481,
    "preview": "mkdir -p datasets/wrapper_demo\ncd datasets/wrapper_demo\n# use wget for small-sized google drive files\nwget -O wrapper_de"
  },
  {
    "path": "gst_updated/run/make_dirs.sh",
    "chars": 69,
    "preview": "mkdir -p datasets/shuijing/orca_20humans_fov\nmkdir results\nmkdir logs"
  },
  {
    "path": "gst_updated/scripts/experiments/draft.py",
    "chars": 14419,
    "preview": "import pathhack\nimport pickle\nimport time\nfrom os.path import join, isdir\nfrom os import makedirs\nimport torch\nimport nu"
  },
  {
    "path": "gst_updated/scripts/experiments/eval.py",
    "chars": 9384,
    "preview": "import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport csv\nimport numpy as np\nfrom src.mgnn.u"
  },
  {
    "path": "gst_updated/scripts/experiments/eval_trajnet.py",
    "chars": 10865,
    "preview": "import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport csv\nimport numpy as np\nfrom src.mgnn.u"
  },
  {
    "path": "gst_updated/scripts/experiments/load_trajnet_test_data.py",
    "chars": 18336,
    "preview": "import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport csv\nimport numpy as np\nfrom src.mgnn.u"
  },
  {
    "path": "gst_updated/scripts/experiments/pathhack.py",
    "chars": 154,
    "preview": "import sys\nfrom os.path import abspath, join, split\nfile_path = split(abspath(__file__))[0]\npkg_path = join(file_path, '"
  },
  {
    "path": "gst_updated/scripts/experiments/test.py",
    "chars": 9152,
    "preview": "from gst_updated.scripts.data import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport csv\nimpo"
  },
  {
    "path": "gst_updated/scripts/experiments/test_gst.py",
    "chars": 9152,
    "preview": "from gst_updated.scripts.data import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport csv\nimpo"
  },
  {
    "path": "gst_updated/scripts/experiments/train.py",
    "chars": 11208,
    "preview": "import pathhack\nimport pickle\nimport time\nfrom os.path import join, isdir\nfrom os import makedirs\nimport torch\nimport nu"
  },
  {
    "path": "gst_updated/scripts/wrapper/crowd_nav_interface_multi_env_parallel.py",
    "chars": 12430,
    "preview": "from gst_updated.scripts.data import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport numpy as"
  },
  {
    "path": "gst_updated/scripts/wrapper/crowd_nav_interface_multi_env_visualization_test_single_batch.py",
    "chars": 24049,
    "preview": "import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport numpy as np\nfrom src.mgnn.utils import"
  },
  {
    "path": "gst_updated/scripts/wrapper/crowd_nav_interface_parallel.py",
    "chars": 10306,
    "preview": "\nfrom gst_updated.src.gumbel_social_transformer.st_model import st_model\nfrom os.path import join, isdir\nimport pickle\ni"
  },
  {
    "path": "gst_updated/scripts/wrapper/pathhack.py",
    "chars": 154,
    "preview": "import sys\nfrom os.path import abspath, join, split\nfile_path = split(abspath(__file__))[0]\npkg_path = join(file_path, '"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/edge_selector_ghost.py",
    "chars": 7563,
    "preview": "# import pathhack\nimport torch\nimport torch.nn as nn\nfrom torch.nn.functional import softmax\nfrom src.gumbel_social_tran"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/edge_selector_no_ghost.py",
    "chars": 6168,
    "preview": "import torch\nimport torch.nn as nn\nfrom torch.nn.functional import softmax\nfrom gst_updated.src.gumbel_social_transforme"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/gumbel_social_transformer.py",
    "chars": 5157,
    "preview": "import torch\nimport torch.nn as nn\nfrom gst_updated.src.gumbel_social_transformer.utils import _get_clones\n\nclass Gumbel"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/mha.py",
    "chars": 14447,
    "preview": "import torch\nfrom torch.nn.functional import softmax, dropout, linear\nfrom torch import Tensor\nfrom typing import Option"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/node_encoder_layer_ghost.py",
    "chars": 4009,
    "preview": "import torch\nimport torch.nn as nn\nfrom src.gumbel_social_transformer.mha import VanillaMultiheadAttention\nfrom src.gumb"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/node_encoder_layer_no_ghost.py",
    "chars": 3671,
    "preview": "import torch\nimport torch.nn as nn\nfrom gst_updated.src.gumbel_social_transformer.mha import VanillaMultiheadAttention\nf"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/pathhack.py",
    "chars": 154,
    "preview": "import sys\nfrom os.path import abspath, join, split\nfile_path = split(abspath(__file__))[0]\npkg_path = join(file_path, '"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/st_model.py",
    "chars": 28281,
    "preview": "\"\"\"\nst_model.py\nA multi-pedestrian trajectory prediction model \nthat follows spatial -> temporal encoding manners.\n\"\"\"\n\n"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/temperature_scheduler.py",
    "chars": 789,
    "preview": "class Temp_Scheduler(object):\n    def __init__(self, total_epochs, curr_temp, base_temp, temp_min=0.33, last_epoch=-1):\n"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/temporal_convolution_net.py",
    "chars": 2386,
    "preview": "import torch.nn as nn\nfrom gst_updated.src.gumbel_social_transformer.utils import _get_clones, _get_activation_fn\n\nclass"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/utils.py",
    "chars": 1318,
    "preview": "import copy\nimport torch\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nfrom torch.nn.modules.conta"
  },
  {
    "path": "gst_updated/src/mgnn/batch_trajectories.py",
    "chars": 1942,
    "preview": "from torch.utils.data import Dataset\n\nclass BatchTrajectoriesDataset(Dataset):\n    def __init__(\n        self,\n        )"
  },
  {
    "path": "gst_updated/src/mgnn/trajectories.py",
    "chars": 8216,
    "preview": "from torch.utils.data import Dataset\nimport os\nimport numpy as np\nimport math\nimport torch\nfrom src.mgnn.utils import se"
  },
  {
    "path": "gst_updated/src/mgnn/trajectories_sdd.py",
    "chars": 8994,
    "preview": "from torch.utils.data import Dataset\nimport os\nimport numpy as np\nimport math\nimport torch\nfrom src.mgnn.utils import se"
  },
  {
    "path": "gst_updated/src/mgnn/trajectories_trajnet.py",
    "chars": 13295,
    "preview": "from torch.utils.data import Dataset\nimport os\nimport numpy as np\nimport math\nimport torch\nfrom src.mgnn.utils import se"
  },
  {
    "path": "gst_updated/src/mgnn/trajectories_trajnet_testset.py",
    "chars": 13303,
    "preview": "from torch.utils.data import Dataset\nimport os\nimport numpy as np\nimport math\nimport torch\nfrom src.mgnn.utils import se"
  },
  {
    "path": "gst_updated/src/mgnn/utils.py",
    "chars": 9521,
    "preview": "from os.path import join\nimport argparse\nimport torch\nimport numpy as np\nfrom torch.utils.data import DataLoader\n\n\ndef a"
  },
  {
    "path": "gst_updated/src/pec_net/config/optimal.yaml",
    "chars": 600,
    "preview": "adl_reg: 1\ndata_scale: 1.86\ndataset_type: image\ndec_size:\n- 1024\n- 512\n- 1024\ndist_thresh: 100\nenc_dest_size:\n- 8\n- 16\ne"
  },
  {
    "path": "gst_updated/src/pec_net/sdd_trajectories.py",
    "chars": 1991,
    "preview": "from torch.utils.data import Dataset\nimport numpy as np\n\nclass SDDTrajectoriesDataset(Dataset):\n    def __init__(\n      "
  },
  {
    "path": "gst_updated/src/pec_net/social_utils.py",
    "chars": 7565,
    "preview": "from IPython import embed\nimport glob\nimport pandas as pd\nimport pickle\nimport os\nimport torch\nfrom torch import nn\nfrom"
  },
  {
    "path": "gst_updated/tuning/211130-train_shuijing.sh",
    "chars": 4090,
    "preview": "temporal=faster_lstm\ndecode_style=recursive\nonly_observe_full_period=\n\nsave_epochs=10\nspatial=gumbel_social_transformer\n"
  },
  {
    "path": "gst_updated/tuning/211203-eval_shuijing.sh",
    "chars": 1587,
    "preview": "temporal=faster_lstm\ndecode_style=recursive\nonly_observe_full_period=\n\nsave_epochs=10\nspatial=gumbel_social_transformer\n"
  },
  {
    "path": "gst_updated/tuning/211203-train_shuijing.sh",
    "chars": 4098,
    "preview": "temporal=faster_lstm\ndecode_style=recursive\nonly_observe_full_period=\n\nsave_epochs=10\nspatial=gumbel_social_transformer\n"
  },
  {
    "path": "gst_updated/tuning/211209-test_shuijing.sh",
    "chars": 1587,
    "preview": "temporal=faster_lstm\ndecode_style=recursive\nonly_observe_full_period=\n\nsave_epochs=10\nspatial=gumbel_social_transformer\n"
  },
  {
    "path": "plot.py",
    "chars": 1068,
    "preview": "import pandas as pd\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\n\r\n\r\nlegends = ['Non-randomized humans', 'rando"
  },
  {
    "path": "requirements.txt",
    "chars": 87,
    "preview": "numpy==1.20.3\npandas==1.5.2\ngym==0.15.7\ntensorflow-gpu==2.11.0\nmatplotlib==3.6.2\nCython"
  },
  {
    "path": "rl/.gitignore",
    "chars": 1213,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
  },
  {
    "path": "rl/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "rl/evaluation.py",
    "chars": 5901,
    "preview": "import numpy as np\nimport torch\n\nfrom crowd_sim.envs.utils.info import *\n\n\ndef evaluate(actor_critic, eval_envs, num_pro"
  },
  {
    "path": "rl/networks/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "rl/networks/distributions.py",
    "chars": 2831,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom rl.networks.network_utils import A"
  },
  {
    "path": "rl/networks/dummy_vec_env.py",
    "chars": 3116,
    "preview": "import numpy as np\nfrom baselines.common.vec_env.vec_env import VecEnv\nfrom baselines.common.vec_env.util import dict_to"
  },
  {
    "path": "rl/networks/envs.py",
    "chars": 9843,
    "preview": "import os\n\nimport gym\nimport numpy as np\nimport torch\nfrom gym.spaces.box import Box\nfrom gym.spaces.dict import Dict\n\nf"
  },
  {
    "path": "rl/networks/model.py",
    "chars": 2812,
    "preview": "import torch\nimport torch.nn as nn\n\n\nfrom rl.networks.distributions import Bernoulli, Categorical, DiagGaussian\nfrom .sr"
  },
  {
    "path": "rl/networks/network_utils.py",
    "chars": 1560,
    "preview": "import glob\nimport os\n\nimport torch.nn as nn\n\nfrom rl.networks.envs import VecNormalize\n\n\n# Get a render function\ndef ge"
  },
  {
    "path": "rl/networks/selfAttn_srnn_temp_node.py",
    "chars": 17871,
    "preview": "import torch.nn.functional as F\n\nfrom .srnn_model import *\n\nclass SpatialEdgeSelfAttn(nn.Module):\n    \"\"\"\n    Class for "
  },
  {
    "path": "rl/networks/shmem_vec_env.py",
    "chars": 5746,
    "preview": "\"\"\"\nAn interface for asynchronous vectorized environments.\n\"\"\"\n\nimport multiprocessing as mp\nimport numpy as np\nfrom bas"
  },
  {
    "path": "rl/networks/srnn_model.py",
    "chars": 17614,
    "preview": "import torch.nn as nn\nfrom torch.autograd import Variable\nimport torch\nimport numpy as np\nimport copy\n\nfrom rl.networks."
  },
  {
    "path": "rl/networks/storage.py",
    "chars": 11658,
    "preview": "import torch\nfrom torch.utils.data.sampler import BatchSampler, SubsetRandomSampler\n\n\ndef _flatten_helper(T, N, _tensor)"
  },
  {
    "path": "rl/ppo/__init__.py",
    "chars": 21,
    "preview": "\nfrom .ppo import PPO"
  },
  {
    "path": "rl/ppo/ppo.py",
    "chars": 3834,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.optim as optim\n\n\nclass PPO():\n    \"\"\" Class for the PPO optimizer \"\"\"\n  "
  },
  {
    "path": "rl/vec_env/__init__.py",
    "chars": 559,
    "preview": "from .vec_env import AlreadySteppingError, NotSteppingError, VecEnv, VecEnvWrapper, VecEnvObservationWrapper, Cloudpickl"
  },
  {
    "path": "rl/vec_env/dummy_vec_env.py",
    "chars": 3226,
    "preview": "import numpy as np\nfrom .vec_env import VecEnv\nfrom .util import copy_obs_dict, dict_to_obs, obs_space_info\nimport copy\n"
  },
  {
    "path": "rl/vec_env/envs.py",
    "chars": 2987,
    "preview": "import os\r\n\r\nimport gym\r\nimport numpy as np\r\n\r\nfrom gym.spaces.box import Box\r\n\r\nfrom .vec_env import VecEnvWrapper\r\nimp"
  },
  {
    "path": "rl/vec_env/logger.py",
    "chars": 14802,
    "preview": "import os\nimport sys\nimport shutil\nimport os.path as osp\nimport json\nimport time\nimport datetime\nimport tempfile\nfrom co"
  },
  {
    "path": "rl/vec_env/running_mean_std.py",
    "chars": 1371,
    "preview": "#from typing import Tuple\r\n\r\nimport numpy as np\r\n\r\n\r\nclass RunningMeanStd(object):\r\n    def __init__(self, epsilon = 1e-"
  },
  {
    "path": "rl/vec_env/shmem_vec_env.py",
    "chars": 5779,
    "preview": "\"\"\"\nAn interface for asynchronous vectorized environments.\n\"\"\"\n\nimport multiprocessing as mp\nimport numpy as np\nfrom .ve"
  },
  {
    "path": "rl/vec_env/tile_images.py",
    "chars": 763,
    "preview": "import numpy as np\n\ndef tile_images(img_nhwc):\n    \"\"\"\n    Tile N images into one big PxQ image\n    (P,Q) are chosen to "
  },
  {
    "path": "rl/vec_env/util.py",
    "chars": 1513,
    "preview": "\"\"\"\nHelpers for dealing with vectorized environments.\n\"\"\"\n\nfrom collections import OrderedDict\n\nimport gym\nimport numpy "
  },
  {
    "path": "rl/vec_env/vec_env.py",
    "chars": 6633,
    "preview": "import contextlib\nimport os\nimport abc\nfrom abc import abstractmethod\nimport abc\nABC = abc.ABCMeta('ABC', (), {})\n\n#from"
  },
  {
    "path": "rl/vec_env/vec_frame_stack.py",
    "chars": 1150,
    "preview": "from .vec_env import VecEnvWrapper\nimport numpy as np\nfrom gym import spaces\n\n\nclass VecFrameStack(VecEnvWrapper):\n    d"
  },
  {
    "path": "rl/vec_env/vec_normalize.py",
    "chars": 1854,
    "preview": "from . import VecEnvWrapper\nimport numpy as np\n\nclass VecNormalize(VecEnvWrapper):\n    \"\"\"\n    A vectorized wrapper that"
  },
  {
    "path": "rl/vec_env/vec_pretext_normalize.py",
    "chars": 8502,
    "preview": "from . import VecEnvWrapper\r\nimport numpy as np\r\nfrom .running_mean_std import RunningMeanStd\r\nimport torch\r\nimport os\r\n"
  },
  {
    "path": "rl/vec_env/vec_remove_dict_obs.py",
    "chars": 321,
    "preview": "from .vec_env import VecEnvObservationWrapper\n\nclass VecExtractDictObs(VecEnvObservationWrapper):\n    def __init__(self,"
  },
  {
    "path": "test.py",
    "chars": 5306,
    "preview": "import logging\nimport argparse\nimport os\nimport sys\nfrom matplotlib import pyplot as plt\nimport torch\nimport torch.nn as"
  },
  {
    "path": "train.py",
    "chars": 7844,
    "preview": "import os\nimport shutil\nimport time\nfrom collections import deque\nimport numpy as np\nimport torch\nimport torch.nn as nn\n"
  }
]

About this extraction

This page contains the full source code of the Shuijing725/CrowdNav_Prediction_AttnGraph GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 111 files (603.4 KB), approximately 157.9k tokens, and a symbol index with 584 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.

Copied to clipboard!