[
  {
    "path": ".gitignore",
    "content": "gst_datasets/\ngst_updated/scripts/data\ntrained_models/\n.idea/\nvenv/\n*.pyc\n*__pycache__*\n\ntrain1.py\ntest1.py\ntest2.py\ncheck_env.py"
  },
  {
    "path": "CHANGELOG.md",
    "content": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\n## 2023-03-28\n### Changed\n- Changed the instructions for PyTorch installation (moved PyTorch out of requirements.txt and added a link to the official website in readme.md)\n- Fixed a bug with 'visible_masks' in the generate_ob function in crowd_sim_var_num.py and crowd_sim_pred_real_gst.py, so that the observation shape no longer causes errors when config.sim.human_num_range > 0\n\n## 2023-01\nInitial commit\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2023 Shuijing Liu\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# CrowdNav++\nThis repository contains the code for our paper titled \"Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph\" in ICRA 2023. \nFor more details, please refer to the [project website](https://sites.google.com/view/intention-aware-crowdnav/home) and the [arXiv preprint](https://arxiv.org/abs/2203.01821).\nFor experiment demonstrations, please refer to the [YouTube video](https://www.youtube.com/watch?v=nxpxhF019VA).\n\n**[News]**\n- Please check out our most recent follow-up work below:\n  - [HEIGHT: Heterogeneous Interaction Graph Transformer for Robot Navigation in Crowded and Constrained Environments](https://github.com/Shuijing725/CrowdNav_HEIGHT)\n- Please check out our open-sourced sim2real tutorial [here](https://github.com/Shuijing725/CrowdNav_Sim2Real_Turtlebot)\n- Please check out my curated paper list for robot social navigation [here](https://github.com/Shuijing725/awesome-robot-social-navigation) (it is under active development) \n\n## Abstract\nWe study the problem of safe and intention-aware robot navigation in dense and interactive crowds. \nMost previous reinforcement learning (RL) based methods fail to consider different types of interactions among all agents or ignore the intentions of people, which results in performance degradation. \nIn this paper, we propose a novel recurrent graph neural network with attention mechanisms to capture heterogeneous interactions among agents through space and time. \nTo encourage long-sighted robot behaviors, we infer the intentions of dynamic agents by predicting their future trajectories for several timesteps. \nThe predictions are incorporated into a model-free RL framework to prevent the robot from intruding into the intended paths of other agents. \nWe demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios. We successfully transfer the policy learned in simulation to a real-world TurtleBot 2i.\n\n<p align=\"center\">\n<img src=\"/figures/open.png\" width=\"450\" />\n</p>\n\n## Setup\n1. In a conda environment or virtual environment with Python 3.x, install the required Python packages\n```\npip install -r requirements.txt\n```\n\n2. Install PyTorch 1.12.1 following the instructions [here](https://pytorch.org/get-started/previous-versions/#v1121)\n\n3. Install [OpenAI Baselines](https://github.com/openai/baselines#installation) \n```\ngit clone https://github.com/openai/baselines.git\ncd baselines\npip install -e .\n```\n\n4. Install the [Python-RVO2](https://github.com/sybrenstuvel/Python-RVO2) library\n
\n## Overview\nThis repository is organized into five parts: \n- `crowd_nav/` folder contains configurations and policies used in the simulator.\n- `crowd_sim/` folder contains the simulation environment. \n- `gst_updated/` folder contains the code for running inference of a human trajectory predictor, named Gumbel Social Transformer (GST) [2].\n- `rl/` contains the code for the RL policy networks, wrappers for the prediction network, and the PPO algorithm. \n- `trained_models/` contains some pretrained models provided by us. \n\nNote that this repository does not include the code for training a trajectory prediction network. Please refer to [this repo](https://github.com/tedhuang96/gst) instead.\n\n## Run the code\n### Training\n- Modify the configurations.\n  1. Environment configurations: Modify `crowd_nav/configs/config.py`. In particular:\n     - Choice of human trajectory predictor: \n       - Set `sim.predict_method = 'inferred'` if a learning-based GST predictor is used [2]. Please also change `pred.model_dir` to be the directory of a trained GST model. We provide two pretrained models [here](https://github.com/Shuijing725/CrowdNav_Prediction_AttnGraph/tree/main/gst_updated/results/).\n       - Set `sim.predict_method = 'const_vel'` if a constant velocity model is used.\n       - Set `sim.predict_method = 'truth'` if the ground truth predictor is used.\n       - Set `sim.predict_method = 'none'` if you do not want to use future trajectories to change the observation and reward.\n     - Randomization of human behaviors: If you want to randomize the ORCA humans, \n       - set `env.randomize_attributes` to True to randomize the preferred velocity and radius of humans;\n       - set `humans.random_goal_changing` to True to let humans randomly change goals before they arrive at their original goals.\n\n  2. PPO and network configurations: Modify `arguments.py`.\n     - `env_name` (must be consistent with `sim.predict_method` in `crowd_nav/configs/config.py`; see the consistency sketch at the end of this section): \n        - If you use the GST predictor, set to `CrowdSimPredRealGST-v0`.\n        - If you use the ground truth predictor or constant velocity predictor, set to `CrowdSimPred-v0`.\n        - If you don't want to use prediction, set to `CrowdSimVarNum-v0`. \n     - `use_self_attn`: the human-human attention network will be included if set to True; otherwise there will be no human-human attention.\n     - `use_hr_attn`: the robot-human attention network will be included if set to True; otherwise there will be no robot-human attention.\n- After you change the configurations, run\n  ```\n  python train.py \n  ```\n- The checkpoints and configuration files will be saved to the folder specified by `output_dir` in `arguments.py`.\n
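\nSince `env_name` in `arguments.py` and `sim.predict_method` in `crowd_nav/configs/config.py` must stay consistent, a minimal sanity-check sketch is shown below (an illustrative snippet, not part of the training pipeline):\n```\n# illustrative consistency check between config.py and arguments.py\nfrom crowd_nav.configs.config import Config\nfrom arguments import get_args\n\n# expected pairing of predictor choice and gym environment name\npairing = {'inferred': 'CrowdSimPredRealGST-v0',\n           'const_vel': 'CrowdSimPred-v0',\n           'truth': 'CrowdSimPred-v0',\n           'none': 'CrowdSimVarNum-v0'}\n\nconfig, args = Config(), get_args()\nassert args.env_name == pairing[config.sim.predict_method], 'config.py and arguments.py disagree'\n```\n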
\n### Testing\nPlease modify the test arguments in lines 20-33 of `test.py` (**Don't set the argument values in the terminal!**), and run   \n```\npython test.py \n```\nNote that the `config.py` and `arguments.py` in the testing folder will be loaded, instead of those in the root directory.  \nThe testing results are logged in the `trained_models/your_output_dir/test/` folder, and are also printed to the terminal.  \nIf you set `visualize=True` in `test.py`, you will be able to see visualizations like this:  \n<img src=\"/figures/visual.gif\" width=\"420\" />\n\n#### Test pre-trained models provided by us\n| Method                                 | `--model_dir` in test.py               | `--test_model` in test.py |\n|----------------------------------------|----------------------------------------|---------------------------|\n| Ours without randomized humans         | `trained_models/GST_predictor_no_rand` | `41200.pt`                |\n| ORCA without randomized humans         | `trained_models/ORCA_no_rand`          | `00000.pt`                |\n| Social force without randomized humans | `trained_models/SF_no_rand`            | `00000.pt`                |\n| Ours with randomized humans            | `trained_models/GST_predictor_rand`    | `41665.pt`                |\n\n#### Plot predicted future human positions\nTo visualize the episodes with predicted human trajectories, as well as save the visualizations to disk, please refer to the [save_slides branch](https://github.com/Shuijing725/CrowdNav_Prediction_AttnGraph/tree/save_slides).  \nNote that the above visualization and file saving will slow down testing significantly!   \n- Set `save_slides=True` in `test.py`, and all rendered frames will be saved in a subfolder inside `trained_models/your_output_dir/social_eval/`.   \n\n### Plot the training curves\n```\npython plot.py\n```\nHere are example learning curves of our proposed network model with the GST predictor.\n\n<img src=\"/figures/rewards.png\" width=\"370\" /> <img src=\"/figures/losses.png\" width=\"370\" />\n\n## Sim2Real\nWe are happy to announce that our sim2real tutorial and code are released [here](https://github.com/Shuijing725/CrowdNav_Sim2Real_Turtlebot)!  \n**Note:** This repo only serves as a reference point for the sim2real transfer of crowd navigation. Since there are lots of uncertainties in real-world experiments that may affect performance, we cannot guarantee that it is reproducible in all cases. \n\n## Disclaimer\n1. We only tested our code on Ubuntu with Python 3.6 and Python 3.8. The code may work on other operating systems or other versions of Python, but we do not have any guarantee.  \n\n2. The performance of our code can vary depending on the choice of hyperparameters and random seeds (see [this reddit post](https://www.reddit.com/r/MachineLearning/comments/rkewa3/d_what_are_your_machine_learning_superstitions/)). \nUnfortunately, we do not have the time or resources for a thorough hyperparameter search. Thus, it is normal if your results are slightly worse than those claimed in the paper. \nTo achieve the best performance, we recommend some manual hyperparameter tuning.\n\n## Citation\nIf you find the code or the paper useful for your research, please cite the following papers:\n```\n@inproceedings{liu2022intention,\n  title={Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph},\n  author={Liu, Shuijing and Chang, Peixin and Huang, Zhe and Chakraborty, Neeloy and Hong, Kaiwen and Liang, Weihang and Livingston McPherson, D. and Geng, Junyi and Driggs-Campbell, Katherine},\n  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},\n  year={2023},\n  pages={12015-12021}\n}\n\n@inproceedings{liu2020decentralized,\n  title={Decentralized Structural-RNN for Robot Crowd Navigation with Deep Reinforcement Learning},\n  author={Liu, Shuijing and Chang, Peixin and Liang, Weihang and Chakraborty, Neeloy and Driggs-Campbell, Katherine},\n  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},\n  year={2021},\n  pages={3517-3524}\n}\n```\n\n## Credits\nOther contributors:  \n[Peixin Chang](https://github.com/PeixinC)  \n[Zhe Huang](https://github.com/tedhuang96)   \n[Neeloy Chakraborty](https://github.com/TheNeeloy)  \n\nPart of the code is based on the following repositories:  \n\n[1] S. Liu, P. Chang, W. Liang, N. Chakraborty, and K. Driggs-Campbell, \"Decentralized Structural-RNN for Robot Crowd Navigation with Deep Reinforcement Learning,\" in IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 3517-3524. (Github: https://github.com/Shuijing725/CrowdNav_DSRNN)  \n\n[2] Z. Huang, R. Li, K. Shin, and K. Driggs-Campbell. \"Learning Sparse Interaction Graphs of Partially Detected Pedestrians for Trajectory Prediction,\" in IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 1198–1205, 2022. (Github: https://github.com/tedhuang96/gst)  \n\n## Contact\nIf you have any questions or find any bugs, please feel free to open an issue or pull request.\n"
  },
  {
    "path": "arguments.py",
    "content": "import argparse\n\nimport torch\n\n\ndef get_args():\n    parser = argparse.ArgumentParser(description='RL')\n\n    # the saving directory for train.py\n    parser.add_argument(\n        '--output_dir', type=str, default='trained_models/my_model')\n\n    # resume training from an existing checkpoint or not\n    parser.add_argument(\n        '--resume', default=False, action='store_true')\n    # if resume = True, load from the following checkpoint\n    parser.add_argument(\n        '--load-path', default='trained_models/GST_predictor_non_rand/checkpoints/41200.pt',\n        help='path of weights for resume training')\n    parser.add_argument(\n        '--overwrite',\n        default=True,\n        action='store_true',\n        help = \"whether to overwrite the output directory in training\")\n    parser.add_argument(\n        '--num_threads',\n        type=int,\n        default=1,\n        help=\"number of threads used for intraop parallelism on CPU\")\n    # only used in testing\n    parser.add_argument(\n        '--phase', type=str, default='test')\n\n    parser.add_argument(\n        '--cuda-deterministic',\n        action='store_true',\n        default=False,\n        help=\"sets flags for determinism when using CUDA (potentially slow!)\")\n\n    # only works on GPU (although you can make it work on CPU after some minor fixes)\n    parser.add_argument(\n        '--no-cuda',\n        action='store_true',\n        default=False,\n        help='disables CUDA training')\n    parser.add_argument(\n        '--seed', type=int, default=425, help='random seed (default: 425)')\n\n    parser.add_argument(\n        '--num-processes',\n        type=int,\n        default=16,\n        help='how many training processes to use (default: 16)')\n\n    parser.add_argument(\n        '--num-mini-batch',\n        type=int,\n        default=2,\n        help='number of batches for ppo (default: 2)')\n    parser.add_argument(\n        '--num-steps',\n        type=int,\n        default=30,\n        help='number of forward steps per update (default: 30)')\n    parser.add_argument(\n        '--recurrent-policy',\n        action='store_true',\n        default=True,\n        help='use a recurrent policy')\n\n    parser.add_argument(\n        '--ppo-epoch',\n        type=int,\n        default=5,\n        help='number of ppo epochs (default: 5)')\n    parser.add_argument(\n        '--clip-param',\n        type=float,\n        default=0.2,\n        help='ppo clip parameter (default: 0.2)')\n    parser.add_argument(\n        '--value-loss-coef',\n        type=float,\n        default=0.5,\n        help='value loss coefficient (default: 0.5)')\n    parser.add_argument(\n        '--entropy-coef',\n        type=float,\n        default=0.0,\n        help='entropy term coefficient (default: 0.0)')\n    parser.add_argument(\n        '--lr', type=float, default=4e-5, help='learning rate (default: 4e-5)')\n    parser.add_argument(\n        '--eps',\n        type=float,\n        default=1e-5,\n        help='RMSprop optimizer epsilon (default: 1e-5)')\n    parser.add_argument(\n        '--alpha',\n        type=float,\n        default=0.99,\n        help='RMSprop optimizer alpha (default: 0.99)')\n    parser.add_argument(\n        '--gamma',\n        type=float,\n        default=0.99,\n        help='discount factor for rewards (default: 0.99)')\n    parser.add_argument(\n        '--max-grad-norm',\n        type=float,\n        default=0.5,\n        help='max norm of gradients (default: 0.5)')\n    # 10e6 for holonomic, 20e6 for unicycle\n    parser.add_argument(\n        '--num-env-steps',\n        type=int,\n        default=20e6,\n        help='number of environment steps to train (default: 20e6)')\n    # True for unicycle, False for holonomic\n    parser.add_argument(\n        '--use-linear-lr-decay',\n        action='store_true',\n        default=False,\n        help='use a linear schedule on the learning rate')\n    parser.add_argument(\n        '--algo', default='ppo', help='algorithm to use: a2c | ppo | acktr')\n    parser.add_argument(\n        '--save-interval',\n        type=int,\n        default=200,\n        help='save interval, one save per n updates (default: 200)')\n    parser.add_argument(\n        '--use-gae',\n        action='store_true',\n        default=True,\n        help='use generalized advantage estimation')\n    parser.add_argument(\n        '--gae-lambda',\n        type=float,\n        default=0.95,\n        help='gae lambda parameter (default: 0.95)')\n    parser.add_argument(\n        '--log-interval',\n        type=int,\n        default=20,\n        help='log interval, one log per n updates (default: 20)')\n    parser.add_argument(\n        '--use-proper-time-limits',\n        action='store_true',\n        default=False,\n        help='compute returns taking into account time limits')\n\n    # for srnn only\n    # RNN size\n    parser.add_argument('--human_node_rnn_size', type=int, default=128,\n                        help='Size of Human Node RNN hidden state')\n    parser.add_argument('--human_human_edge_rnn_size', type=int, default=256,\n                        help='Size of Human Human Edge RNN hidden state')\n    parser.add_argument(\n        '--aux-loss',\n        action='store_true',\n        default=False,\n        help='auxiliary loss on human nodes outputs')\n\n\n    # Input and output size\n    parser.add_argument('--human_node_input_size', type=int, default=3,\n                        help='Dimension of the node features')\n    parser.add_argument('--human_human_edge_input_size', type=int, default=2,\n                        help='Dimension of the edge features')\n    parser.add_argument('--human_node_output_size', type=int, default=256,\n                        help='Dimension of the node output')\n\n    # Embedding size\n    parser.add_argument('--human_node_embedding_size', type=int, default=64,\n                        help='Embedding size of node features')\n    parser.add_argument('--human_human_edge_embedding_size', type=int, default=64,\n                        help='Embedding size of edge features')\n\n    # Attention vector dimension\n    parser.add_argument('--attention_size', type=int, default=64,\n                        help='Attention size')\n\n    # Sequence length\n    parser.add_argument('--seq_length', type=int, default=30,\n                        help='Sequence length')\n\n    # use self attn in human states or not\n    parser.add_argument('--use_self_attn', type=bool, default=True,\n                        help='whether to use the human-human self-attention network')\n\n    # use attn between humans and robots or not (todo: implement this in network models)\n    parser.add_argument('--use_hr_attn', type=bool, default=True,\n                        help='whether to use the robot-human attention network')\n\n    # No prediction: for orca, sf, old_srnn, selfAttn_srnn_noPred ablation: 'CrowdSimVarNum-v0',\n    # for constant velocity Pred, ground truth Pred: 'CrowdSimPred-v0'\n    # gst pred: 'CrowdSimPredRealGST-v0'\n    parser.add_argument(\n        '--env-name',\n        default='CrowdSimPredRealGST-v0',\n        help='name of the environment')\n\n\n    # sort all humans and squeeze them to the front or not\n    parser.add_argument('--sort_humans', type=bool, default=True)\n\n\n    args = parser.parse_args()\n\n    args.cuda = not args.no_cuda and torch.cuda.is_available()\n\n    assert args.algo in ['a2c', 'ppo', 'acktr']\n    if args.recurrent_policy:\n        assert args.algo in ['a2c', 'ppo'], \\\n            'Recurrent policy is not implemented for ACKTR'\n\n    return args"
  },
  {
    "path": "collect_data.py",
    "content": "import matplotlib.pyplot as plt\nimport torch\nimport copy\nimport os\nimport numpy as np\nfrom tqdm import trange\n\nfrom crowd_nav.configs.config import Config\nfrom rl.networks.envs import make_vec_envs\nfrom crowd_sim.envs import *\n\n# train_data: True if collect training data, False if collect testing data\ndef collectData(device, train_data, config):\n    # set robot policy to orca\n    config.robot.policy = 'orca'\n\n    env_name = 'CrowdSimVarNumCollect-v0'\n    # for render\n    env_num = 1 if config.data.render else config.data.num_processes\n\n    if config.data.render:\n        fig, ax = plt.subplots(figsize=(7, 7))\n        ax.set_xlim(-10, 10)\n        ax.set_ylim(-10, 10)\n        ax.set_xlabel('x(m)', fontsize=16)\n        ax.set_ylabel('y(m)', fontsize=16)\n        plt.ion()\n        plt.show()\n    else:\n        ax = None\n\n    # create parallel envs\n    seed = np.random.randint(0, np.iinfo(np.uint32).max)\n    envs = make_vec_envs(env_name, seed, env_num,\n                         config.reward.gamma, None, device, allow_early_resets=True, config=config,\n                         ax=ax, wrap_pytorch=False)\n\n\n    # collect data for pretext training\n    data = [] # list for all data collected\n    for i in range(env_num):\n        data.append([])\n\n    obs = envs.reset()\n\n    # 1 epoch -> 1 file\n    # todo: add pretext arguments to config.py\n    # make one prediction every \"pred_interval\" simulation steps\n    pred_interval = config.data.pred_timestep // config.env.time_step\n    tot_steps = int(config.data.tot_steps * pred_interval)\n    for step in trange(tot_steps):\n\n        if config.data.render:\n            envs.render()\n        if step % pred_interval == 0:\n            # append the data points from each env one by one\n            for i in range(env_num):\n                non_empty_obs = obs['pred_info'][i][np.logical_not(np.isinf(obs['pred_info'][i, :, -1]))]\n                non_empty_obs = non_empty_obs.reshape((-1, 4)).tolist()\n                if non_empty_obs:\n                    data[i].extend(copy.deepcopy(non_empty_obs))\n\n        action = np.zeros((env_num, 2))\n        # the action is a dummy action!\n        obs, rew, done, info = envs.step(action)\n\n    # save observations as txt files\n    filePath = os.path.join(config.data.data_save_dir, 'train') if train_data \\\n        else os.path.join(config.data.data_save_dir, 'test')\n    if not os.path.isdir(filePath):\n        os.makedirs(filePath)\n\n    for i in range(env_num):\n        with open(os.path.join(filePath, str(i)+'.txt'), 'w') as f:\n            for item in data[i]:\n                item = str(item[0]) + '\\t' + str(item[1]) + '\\t' + str(item[2]) + '\\t' + str(item[3])\n                f.write(\"%s\\n\" % item)\n\n\n    envs.close()\n\nif __name__ == '__main__':\n    config = Config()\n    device = torch.device(\"cuda\")\n    collectData(device, config.data.collect_train_data, config)\n"
  },
  {
    "path": "crowd_nav/__init__.py",
    "content": ""
  },
  {
    "path": "crowd_nav/configs/__init__.py",
    "content": ""
  },
  {
    "path": "crowd_nav/configs/config.py",
    "content": "import numpy as np\nfrom arguments import get_args\n\nclass BaseConfig(object):\n    def __init__(self):\n        pass\n\n\nclass Config(object):\n    # for now, import all args from arguments.py\n    args = get_args()\n\n    training = BaseConfig()\n    training.device = \"cuda:0\" if args.cuda else \"cpu\"\n\n    # general configs for OpenAI gym env\n    env = BaseConfig()\n    env.time_limit = 50\n    env.time_step = 0.25\n    env.val_size = 100\n    env.test_size = 500\n    # if randomize human behaviors, set to True, else set to False\n    env.randomize_attributes = True\n    env.num_processes = args.num_processes\n    # record robot states and actions an episode for system identification in sim2real\n    env.record = False\n    env.load_act = False\n\n    # config for reward function\n    reward = BaseConfig()\n    reward.success_reward = 10\n    reward.collision_penalty = -20\n    # discomfort distance\n    reward.discomfort_dist = 0.25\n    reward.discomfort_penalty_factor = 10\n    reward.gamma = 0.99\n\n    # config for simulation\n    sim = BaseConfig()\n    sim.circle_radius = 6 * np.sqrt(2)\n    sim.arena_size = 6\n    sim.human_num = 20\n    # actual human num in each timestep, in [human_num-human_num_range, human_num+human_num_range]\n    sim.human_num_range = 0\n    sim.predict_steps = 5\n    # 'const_vel': constant velocity model,\n    # 'truth': ground truth future traj (with info in robot's fov)\n    # 'inferred': inferred future traj from GST network\n    # 'none': no prediction\n    sim.predict_method = 'inferred'\n    # render the simulation during training or not\n    sim.render = False\n\n    # for save_traj only\n    render_traj = False\n    save_slides = False\n    save_path = None\n\n    # whether wrap the vec env with VecPretextNormalize class\n    # = True only if we are using a network for human trajectory prediction (sim.predict_method = 'inferred')\n    if sim.predict_method == 'inferred':\n        env.use_wrapper = True\n    else:\n        env.use_wrapper = False\n\n    # human config\n    humans = BaseConfig()\n    humans.visible = True\n    # orca or social_force for now\n    humans.policy = \"orca\"\n    humans.radius = 0.3\n    humans.v_pref = 1\n    humans.sensor = \"coordinates\"\n    # FOV = this values * PI\n    humans.FOV = 2.\n\n    # a human may change its goal before it reaches its old goal\n    # if randomize human behaviors, set to True, else set to False\n    humans.random_goal_changing = True\n    humans.goal_change_chance = 0.5\n\n    # a human may change its goal after it reaches its old goal\n    humans.end_goal_changing = True\n    humans.end_goal_change_chance = 1.0\n\n    # a human may change its radius and/or v_pref after it reaches its current goal\n    humans.random_radii = False\n    humans.random_v_pref = False\n\n    # one human may have a random chance to be blind to other agents at every time step\n    humans.random_unobservability = False\n    humans.unobservable_chance = 0.3\n\n    humans.random_policy_changing = False\n\n    # robot config\n    robot = BaseConfig()\n    # whether robot is visible to humans (whether humans respond to the robot's motion)\n    robot.visible = False\n    # For baseline: srnn; our method: selfAttn_merge_srnn\n    robot.policy = 'selfAttn_merge_srnn'\n    robot.radius = 0.3\n    robot.v_pref = 1\n    robot.sensor = \"coordinates\"\n    # FOV = this values * PI\n    robot.FOV = 2\n    # radius of perception range\n    robot.sensor_range = 5\n\n    # action space of the robot\n    
action_space = BaseConfig()\n    # holonomic or unicycle\n    action_space.kinematics = \"holonomic\"\n\n    # config for ORCA\n    orca = BaseConfig()\n    orca.neighbor_dist = 10\n    orca.safety_space = 0.15\n    orca.time_horizon = 5\n    orca.time_horizon_obst = 5\n\n    # config for social force\n    sf = BaseConfig()\n    sf.A = 2.\n    sf.B = 1\n    sf.KI = 1\n\n    # config for data collection for training the GST predictor\n    data = BaseConfig()\n    data.tot_steps = 40000\n    data.render = False\n    data.collect_train_data = False\n    data.num_processes = 5\n    data.data_save_dir = 'gst_updated/datasets/orca_20humans_no_rand'\n    # number of seconds between each position in traj pred model\n    data.pred_timestep = 0.25\n\n    # config for the GST predictor\n    pred = BaseConfig()\n    # see 'gst_updated/results/README.md' for how to set this variable\n    # If randomized humans: gst_updated/results/100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000_rand/sj\n    # else: gst_updated/results/100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000/sj\n    pred.model_dir = 'gst_updated/results/100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000_rand/sj'\n\n    # LIDAR config\n    lidar = BaseConfig()\n    # angular resolution (offset angle between neighboring rays) in degrees\n    lidar.angular_res = 5\n    # range in meters\n    lidar.range = 10\n\n    # config for sim2real\n    sim2real = BaseConfig()\n    # use dummy robot and human states or not\n    sim2real.use_dummy_detect = True\n    sim2real.record = False\n    sim2real.load_act = False\n    sim2real.ROSStepInterval = 0.03\n    sim2real.fixed_time_interval = 0.1\n    sim2real.use_fixed_time_interval = True\n\n    if sim.predict_method == 'inferred' and env.use_wrapper == False:\n        raise ValueError(\"If using inferred prediction, you must wrap the envs!\")\n    if sim.predict_method != 'inferred' and env.use_wrapper:\n        raise ValueError(\"If not using inferred prediction, you must NOT wrap the envs!\")\n"
  },
  {
    "path": "crowd_nav/policy/orca.py",
    "content": "import numpy as np\nimport rvo2\nfrom crowd_nav.policy.policy import Policy\nfrom crowd_sim.envs.utils.action import ActionXY\n\n\nclass ORCA(Policy):\n    def __init__(self, config):\n        \"\"\"\n        timeStep        The time step of the simulation.\n                        Must be positive.\n        neighborDist    The default maximum distance (center point\n                        to center point) to other agents a new agent\n                        takes into account in the navigation. The\n                        larger this number, the longer the running\n                        time of the simulation. If the number is too\n                        low, the simulation will not be safe. Must be\n                        non-negative.\n        maxNeighbors    The default maximum number of other agents a\n                        new agent takes into account in the\n                        navigation. The larger this number, the\n                        longer the running time of the simulation.\n                        If the number is too low, the simulation\n                        will not be safe.\n        timeHorizon     The default minimal amount of time for which\n                        a new agent's velocities that are computed\n                        by the simulation are safe with respect to\n                        other agents. The larger this number, the\n                        sooner an agent will respond to the presence\n                        of other agents, but the less freedom the\n                        agent has in choosing its velocities.\n                        Must be positive.\n        timeHorizonObst The default minimal amount of time for which\n                        a new agent's velocities that are computed\n                        by the simulation are safe with respect to\n                        obstacles. The larger this number, the\n                        sooner an agent will respond to the presence\n                        of obstacles, but the less freedom the agent\n                        has in choosing its velocities.\n                        Must be positive.\n        radius          The default radius of a new agent.\n                        Must be non-negative.\n        maxSpeed        The default maximum speed of a new agent.\n                        Must be non-negative.\n        velocity        The default initial two-dimensional linear\n                        velocity of a new agent (optional).\n\n        ORCA first uses neighborDist and maxNeighbors to find neighbors that need to be taken into account.\n        Here set them to be large enough so that all agents will be considered as neighbors.\n        Time_horizon should be set that at least it's safe for one time step\n\n        In this work, obstacles are not considered. 
So the value of time_horizon_obst doesn't matter.\n\n        \"\"\"\n        super().__init__(config)\n        self.name = 'ORCA'\n        self.max_neighbors = None\n        self.radius = None\n        self.max_speed = 1 # the ego agent assumes that all other agents have this max speed\n        self.sim = None\n        self.safety_space = self.config.orca.safety_space\n\n\n    def predict(self, state):\n        \"\"\"\n        Create a rvo2 simulation at each time step and run one step\n        Python-RVO2 API: https://github.com/sybrenstuvel/Python-RVO2/blob/master/src/rvo2.pyx\n        How simulation is done in RVO2: https://github.com/sybrenstuvel/Python-RVO2/blob/master/src/Agent.cpp\n\n        Agent doesn't stop moving after it reaches the goal, because once it stops moving, the reciprocal rule is broken\n\n        :param state:\n        :return:\n        \"\"\"\n        self_state = state.self_state\n        # max number of humans = current number of humans\n        self.max_neighbors = len(state.human_states)\n        self.radius = state.self_state.radius\n        params = self.config.orca.neighbor_dist, self.max_neighbors, self.config.orca.time_horizon, self.config.orca.time_horizon_obst\n        if self.sim is not None and self.sim.getNumAgents() != len(state.human_states) + 1:\n            del self.sim\n            self.sim = None\n        if self.sim is None:\n            self.sim = rvo2.PyRVOSimulator(self.time_step, *params, self.radius, self.max_speed)\n            self.sim.addAgent((self_state.px, self_state.py), *params, self_state.radius + 0.01 + self.safety_space,\n                              self_state.v_pref, (self_state.vx, self_state.vy))\n            for human_state in state.human_states:\n                self.sim.addAgent((human_state.px, human_state.py), *params, human_state.radius + 0.01 + self.config.orca.safety_space,\n                                  self.max_speed, (human_state.vx, human_state.vy))\n        else:\n            self.sim.setAgentPosition(0, (self_state.px, self_state.py))\n            self.sim.setAgentVelocity(0, (self_state.vx, self_state.vy))\n            for i, human_state in enumerate(state.human_states):\n                self.sim.setAgentPosition(i + 1, (human_state.px, human_state.py))\n                self.sim.setAgentVelocity(i + 1, (human_state.vx, human_state.vy))\n\n        # Set the preferred velocity to be a vector of unit magnitude (speed) in the direction of the goal.\n        velocity = np.array((self_state.gx - self_state.px, self_state.gy - self_state.py))\n        speed = np.linalg.norm(velocity)\n        pref_vel = velocity / speed if speed > 1 else velocity\n\n        # Perturb a little to avoid deadlocks due to perfect symmetry.\n        # perturb_angle = np.random.random() * 2 * np.pi\n        # perturb_dist = np.random.random() * 0.01\n        # perturb_vel = np.array((np.cos(perturb_angle), np.sin(perturb_angle))) * perturb_dist\n        # pref_vel += perturb_vel\n\n        self.sim.setAgentPrefVelocity(0, tuple(pref_vel))\n        for i, human_state in enumerate(state.human_states):\n            # unknown goal position of other humans\n            self.sim.setAgentPrefVelocity(i + 1, (0, 0))\n\n        self.sim.doStep()\n        action = ActionXY(*self.sim.getAgentVelocity(0))\n        self.last_state = state\n\n        return action\n"
  },
  {
    "path": "crowd_nav/policy/policy.py",
    "content": "import abc\nimport numpy as np\n\n\nclass Policy(object):\n    def __init__(self, config):\n        \"\"\"\n        Base class for all policies; it has an abstract method predict().\n        \"\"\"\n        self.trainable = False\n        self.phase = None\n        self.model = None\n        self.device = None\n        self.last_state = None\n        self.time_step = None\n        # if agent is assumed to know the dynamics of real world\n        self.env = None\n        self.config = config\n\n\n    @abc.abstractmethod\n    def predict(self, state):\n        \"\"\"\n        A policy takes a state as input and outputs an action\n\n        \"\"\"\n        return\n\n    @staticmethod\n    def reach_destination(state):\n        self_state = state.self_state\n        if np.linalg.norm((self_state.py - self_state.gy, self_state.px - self_state.gx)) < self_state.radius:\n            return True\n        else:\n            return False\n\n"
  },
  {
    "path": "crowd_nav/policy/policy_factory.py",
    "content": "policy_factory = dict()\ndef none_policy():\n    return None\n\nfrom crowd_nav.policy.orca import ORCA\nfrom crowd_nav.policy.social_force import SOCIAL_FORCE\nfrom crowd_nav.policy.srnn import SRNN, selfAttn_merge_SRNN\n\npolicy_factory['orca'] = ORCA\npolicy_factory['none'] = none_policy\npolicy_factory['social_force'] = SOCIAL_FORCE\npolicy_factory['srnn'] = SRNN\npolicy_factory['selfAttn_merge_srnn'] = selfAttn_merge_SRNN"
  },
  {
    "path": "crowd_nav/policy/social_force.py",
    "content": "import numpy as np\nfrom crowd_nav.policy.policy import Policy\nfrom crowd_sim.envs.utils.action import ActionXY\n\nclass SOCIAL_FORCE(Policy):\n    def __init__(self, config):\n        super().__init__(config)\n        self.name = 'social_force'\n\n\n    def predict(self, state):\n        \"\"\"\n        Produce action for agent with circular specification of social force model.\n        \"\"\"\n        # Pull force to goal\n        delta_x = state.self_state.gx - state.self_state.px\n        delta_y = state.self_state.gy - state.self_state.py\n        dist_to_goal = np.sqrt(delta_x**2 + delta_y**2)\n        desired_vx = (delta_x / dist_to_goal) * state.self_state.v_pref\n        desired_vy = (delta_y / dist_to_goal) * state.self_state.v_pref\n        KI = self.config.sf.KI # Inverse of relocation time K_i\n        curr_delta_vx = KI * (desired_vx - state.self_state.vx)\n        curr_delta_vy = KI * (desired_vy - state.self_state.vy)\n        \n        # Push force(s) from other agents\n        A = self.config.sf.A # Other observations' interaction strength: 1.5\n        B = self.config.sf.B # Other observations' interaction range: 1.0\n        interaction_vx = 0\n        interaction_vy = 0\n        for other_human_state in state.human_states:\n            delta_x = state.self_state.px - other_human_state.px\n            delta_y = state.self_state.py - other_human_state.py\n            dist_to_human = np.sqrt(delta_x**2 + delta_y**2)\n            interaction_vx += A * np.exp((state.self_state.radius + other_human_state.radius - dist_to_human) / B) * (delta_x / dist_to_human)\n            interaction_vy += A * np.exp((state.self_state.radius + other_human_state.radius - dist_to_human) / B) * (delta_y / dist_to_human)\n\n        # Sum of push & pull forces\n        total_delta_vx = (curr_delta_vx + interaction_vx) * self.config.env.time_step\n        total_delta_vy = (curr_delta_vy + interaction_vy) * self.config.env.time_step\n\n        # clip the speed so that sqrt(vx^2 + vy^2) <= v_pref\n        new_vx = state.self_state.vx + total_delta_vx\n        new_vy = state.self_state.vy + total_delta_vy\n        act_norm = np.linalg.norm([new_vx, new_vy])\n\n        if act_norm > state.self_state.v_pref:\n            return ActionXY(new_vx / act_norm * state.self_state.v_pref, new_vy / act_norm * state.self_state.v_pref)\n        else:\n            return ActionXY(new_vx, new_vy)\n"
  },
  {
    "path": "crowd_nav/policy/srnn.py",
    "content": "\nfrom crowd_nav.policy.policy import Policy\nimport numpy as np\nfrom crowd_sim.envs.utils.action import ActionRot, ActionXY\n\n\nclass SRNN(Policy):\n\tdef __init__(self, config):\n\t\tsuper().__init__(config)\n\t\tself.time_step = self.config.env.time_step # Todo: is this needed?\n\t\tself.name = 'srnn'\n\t\tself.trainable = True\n\t\tself.multiagent_training = True\n\n\n\t# clip the self.raw_action and return the clipped action\n\tdef clip_action(self, raw_action, v_pref):\n\t\t\"\"\"\n        Input state is the joint state of robot concatenated by the observable state of other agents\n\n        To predict the best action, agent samples actions and propagates one step to see how good the next state is\n        thus the reward function is needed\n\n        \"\"\"\n\t\t# quantize the action\n\t\tholonomic = True if self.config.action_space.kinematics == 'holonomic' else False\n\t\t# clip the action\n\t\tif holonomic:\n\t\t\tact_norm = np.linalg.norm(raw_action)\n\t\t\tif act_norm > v_pref:\n\t\t\t\traw_action[0] = raw_action[0] / act_norm * v_pref\n\t\t\t\traw_action[1] = raw_action[1] / act_norm * v_pref\n\t\t\treturn ActionXY(raw_action[0], raw_action[1])\n\t\telse:\n\t\t\t# for sim2real\n\t\t\t# old value: -0.1, 0.1\n\t\t\traw_action[0] = np.clip(raw_action[0], -0.1, 0.087) # action[0] is change of v\n\t\t\t# raw[0, 1] = np.clip(raw[0, 1], -0.25, 0.25) # action[1] is change of w\n\t\t\t# raw[0, 0] = np.clip(raw[0, 0], -state.self_state.v_pref, state.self_state.v_pref) # action[0] is v\n\t\t\t# old value: -0.1, 0.1\n\t\t\t# 0.073\n\t\t\traw_action[1] = np.clip(raw_action[1], -0.06, 0.06) # action[1] is change of theta\n\n\t\t\treturn ActionRot(raw_action[0], raw_action[1])\n\n\nclass selfAttn_merge_SRNN(SRNN):\n\tdef __init__(self, config):\n\t\tsuper().__init__(config)\n\t\tself.name = 'selfAttn_merge_srnn'"
  },
  {
    "path": "crowd_sim/README.md",
    "content": "# Simulation Framework\n## Environment\nThe environment contains n+1 agents. n of them are humans controlled by a certain fixed\npolicy. The other one is the robot, which is controlled by a trainable policy.\nThe environment is built on top of the OpenAI gym library and implements two abstract methods.\n* reset(): the environment resets the positions of all the agents and returns the observation \nfor the robot. The observation for one agent is the observable states of all other agents.\n* step(action): taking the action of the robot as input, the environment computes the observation\nfor each agent and calls agent.act(observation) to get the actions of the agents. Then the environment detects\nwhether there is a collision between agents. If not, the states of the agents are updated. Then \nobservation, reward, done, info are returned.\n\n\n## Agent\nAgent is a base class with two derived classes, Human and Robot. The Agent class holds\nall the physical properties of an agent, including position, velocity, orientation, policy, etc.\n* visibility: humans and the robot can be set to be visible or invisible\n* sensor: can be either visual input or coordinate input\n* kinematics: can be either holonomic (move in any direction) or unicycle (has rotation constraints)\n* act(observation): transform the observation to a state and pass it to the policy\n\n\n## Policy\nA policy takes a state as input and outputs an action. Currently available policies:\n* ORCA: models other agents as velocity obstacles to find optimal collision-free velocities under a reciprocal assumption\n* Social force: models the interactions in crowds using attractive and repulsive forces\n* DS-RNN: our method\n\n\n## State\nThere are multiple definitions of states for different cases. The state of an agent, representing all\nthe agent's knowledge of the environment, is defined as JointState, and it is different from the state of the whole environment.\n* ObservableState: position, velocity, radius of one agent\n* FullState: position, velocity, radius, goal position, preferred velocity, rotation of one agent\n* JointState: concatenation of one agent's full state and all other agents' observable states \n\n\n## Action\nThere are two types of actions depending on what kinematics constraint the agent has.\n* ActionXY: (vx, vy) if kinematics == 'holonomic'\n* ActionRot: (velocity, rotation angle) if kinematics == 'unicycle'"
  },
  {
    "path": "crowd_sim/__init__.py",
    "content": "from gym.envs.registration import register\n\nregister(\n    id='CrowdSim-v0',\n    entry_point='crowd_sim.envs:CrowdSim',\n)\n\nregister(\n    id='CrowdSimPred-v0',\n    entry_point='crowd_sim.envs:CrowdSimPred',\n)\n\nregister(\n    id='CrowdSimVarNum-v0',\n    entry_point='crowd_sim.envs:CrowdSimVarNum',\n)\n\nregister(\n    id='CrowdSimVarNumCollect-v0',\n    entry_point='crowd_sim.envs:CrowdSimVarNumCollect',\n)\n\nregister(\n    id='CrowdSimPredRealGST-v0',\n    entry_point='crowd_sim.envs:CrowdSimPredRealGST',\n)\n\nregister(\n    id='rosTurtlebot2iEnv-v0',\n    entry_point='crowd_sim.envs.ros_turtlebot2i_env:rosTurtlebot2iEnv',\n)"
  },
  {
    "path": "crowd_sim/envs/__init__.py",
    "content": "from .crowd_sim import CrowdSim\nfrom .crowd_sim_pred import CrowdSimPred\nfrom .crowd_sim_var_num import CrowdSimVarNum\nfrom .crowd_sim_var_num_collect import CrowdSimVarNumCollect\nfrom .crowd_sim_pred_real_gst import CrowdSimPredRealGST"
  },
  {
    "path": "crowd_sim/envs/crowd_sim.py",
    "content": "import logging\r\nimport gym\r\nimport numpy as np\r\nimport rvo2\r\nimport random\r\nimport copy\r\n\r\nfrom numpy.linalg import norm\r\nfrom crowd_sim.envs.utils.human import Human\r\nfrom crowd_sim.envs.utils.robot import Robot\r\nfrom crowd_sim.envs.utils.info import *\r\nfrom crowd_nav.policy.orca import ORCA\r\nfrom crowd_sim.envs.utils.state import *\r\nfrom crowd_sim.envs.utils.action import ActionRot, ActionXY\r\nfrom crowd_sim.envs.utils.recorder import Recoder\r\n\r\n\r\n\r\nclass CrowdSim(gym.Env):\r\n    \"\"\"\r\n    A base environment\r\n    treat it as an abstract class, all other environments inherit from this one\r\n    \"\"\"\r\n    def __init__(self):\r\n        \"\"\"\r\n        Movement simulation for n+1 agents\r\n        Agent can either be human or robot.\r\n        humans are controlled by a unknown and fixed policy.\r\n        robot is controlled by a known and learnable policy.\r\n        \"\"\"\r\n        self.time_limit = None\r\n        self.time_step = None\r\n        self.robot = None # a Robot instance representing the robot\r\n        self.humans = None # a list of Human instances, representing all humans in the environment\r\n        self.global_time = None\r\n        self.step_counter=0\r\n\r\n        # reward function\r\n        self.success_reward = None\r\n        self.collision_penalty = None\r\n        self.discomfort_dist = None\r\n        self.discomfort_penalty_factor = None\r\n        # simulation configuration\r\n        self.config = None\r\n        self.case_capacity = None\r\n        self.case_size = None\r\n        self.case_counter = None\r\n        self.randomize_attributes = None\r\n\r\n        self.circle_radius = None\r\n        self.human_num = None\r\n\r\n\r\n        self.action_space=None\r\n        self.observation_space=None\r\n\r\n        # limit FOV\r\n        self.robot_fov = None\r\n        self.human_fov = None\r\n\r\n        self.dummy_human = None\r\n        self.dummy_robot = None\r\n\r\n        #seed\r\n        self.thisSeed=None # the seed will be set when the env is created\r\n\r\n        #nenv\r\n        self.nenv=None # the number of env will be set when the env is created.\r\n        # Because the human crossing cases are controlled by random seed, we will calculate unique random seed for each\r\n        # parallel env.\r\n\r\n        self.phase=None # set the phase to be train, val or test\r\n        self.test_case=None # the test case ID, which will be used to calculate a seed to generate a human crossing case\r\n\r\n        # for render\r\n        self.render_axis=None\r\n\r\n        self.humans = []\r\n\r\n        self.potential = None\r\n\r\n        self.desiredVelocity = [0.0, 0.0]\r\n\r\n        self.last_left = 0.\r\n        self.last_right = 0.\r\n\r\n\r\n    def configure(self, config):\r\n        \"\"\" read the config to the environment variables \"\"\"\r\n\r\n        self.config = config\r\n\r\n        self.time_limit = config.env.time_limit\r\n        self.time_step = config.env.time_step\r\n        self.randomize_attributes = config.env.randomize_attributes\r\n\r\n        self.success_reward = config.reward.success_reward\r\n        self.collision_penalty = config.reward.collision_penalty\r\n        self.discomfort_dist = config.reward.discomfort_dist\r\n        self.discomfort_penalty_factor = config.reward.discomfort_penalty_factor\r\n\r\n\r\n        self.case_capacity = {'train': np.iinfo(np.uint32).max - 2000, 'val': 1000, 'test': 1000}\r\n        self.case_size = {'train': 
np.iinfo(np.uint32).max - 2000, 'val': self.config.env.val_size,\r\n                          'test': self.config.env.test_size}\r\n        self.circle_radius = config.sim.circle_radius\r\n        self.human_num = config.sim.human_num\r\n\r\n        self.arena_size = config.sim.arena_size\r\n\r\n        self.case_counter = {'train': 0, 'test': 0, 'val': 0}\r\n\r\n        logging.info('human number: {}'.format(self.human_num))\r\n        if self.randomize_attributes:\r\n            logging.info(\"Randomize human's radius and preferred speed\")\r\n        else:\r\n            logging.info(\"Not randomize human's radius and preferred speed\")\r\n\r\n        logging.info('Circle width: {}'.format(self.circle_radius))\r\n\r\n\r\n        self.robot_fov = np.pi * config.robot.FOV\r\n        self.human_fov = np.pi * config.humans.FOV\r\n        logging.info('robot FOV %f', self.robot_fov)\r\n        logging.info('humans FOV %f', self.human_fov)\r\n\r\n\r\n        # set dummy human and dummy robot\r\n        # dummy humans, used if any human is not in view of other agents\r\n        self.dummy_human = Human(self.config, 'humans')\r\n        # if a human is not in view, set its state to (px = 100, py = 100, vx = 0, vy = 0, theta = 0, radius = 0)\r\n        self.dummy_human.set(7, 7, 7, 7, 0, 0, 0) # (7, 7, 7, 7, 0, 0, 0)\r\n        self.dummy_human.time_step = config.env.time_step\r\n\r\n        self.dummy_robot = Robot(self.config, 'robot')\r\n        self.dummy_robot.set(7, 7, 7, 7, 0, 0, 0)\r\n        self.dummy_robot.time_step = config.env.time_step\r\n        self.dummy_robot.kinematics = 'holonomic'\r\n        self.dummy_robot.policy = ORCA(config)\r\n\r\n        self.r = self.config.humans.radius\r\n\r\n\r\n        # configure randomized goal changing of humans midway through episode\r\n        self.random_goal_changing = config.humans.random_goal_changing\r\n        if self.random_goal_changing:\r\n            self.goal_change_chance = config.humans.goal_change_chance\r\n\r\n        # configure randomized goal changing of humans after reaching their respective goals\r\n        self.end_goal_changing = config.humans.end_goal_changing\r\n        if self.end_goal_changing:\r\n            self.end_goal_change_chance = config.humans.end_goal_change_chance\r\n\r\n        self.last_human_states = np.zeros((self.human_num, 5))\r\n\r\n        self.predict_steps = config.sim.predict_steps\r\n        self.human_num_range = config.sim.human_num_range\r\n        assert self.human_num > self.human_num_range\r\n\r\n        self.max_human_num = self.human_num + self.human_num_range\r\n        self.min_human_num = self.human_num - self.human_num_range\r\n\r\n        # for sim2real dynamics check\r\n        self.record=config.sim2real.record\r\n        self.load_act=config.sim2real.load_act\r\n        self.ROSStepInterval=config.sim2real.ROSStepInterval\r\n        self.fixed_time_interval=config.sim2real.fixed_time_interval\r\n        self.use_fixed_time_interval = config.sim2real.use_fixed_time_interval\r\n        if self.record:\r\n            self.episodeRecoder=Recoder()\r\n            self.load_act=config.sim2real.load_act\r\n            if self.load_act:\r\n                self.episodeRecoder.loadActions()\r\n        # use dummy robot and human states or use detected states from sensors\r\n        self.use_dummy_detect = config.sim2real.use_dummy_detect\r\n\r\n\r\n\r\n        # prediction period / control (or simulation) period\r\n        self.pred_interval = int(config.data.pred_timestep // 
config.env.time_step)\r\n        self.buffer_len = self.predict_steps * self.pred_interval\r\n\r\n        # set robot for this envs\r\n        rob_RL = Robot(config, 'robot')\r\n        self.set_robot(rob_RL)\r\n\r\n\r\n    def set_robot(self, robot):\r\n        raise NotImplementedError\r\n\r\n\r\n    def generate_random_human_position(self, human_num):\r\n        \"\"\"\r\n        Calls generate_circle_crossing_human function to generate a certain number of random humans\r\n        :param human_num: the total number of humans to be generated\r\n        :return: None\r\n        \"\"\"\r\n        # initial min separation distance to avoid danger penalty at beginning\r\n        for i in range(human_num):\r\n            self.humans.append(self.generate_circle_crossing_human())\r\n\r\n\r\n    def generate_circle_crossing_human(self):\r\n        \"\"\"Generate a human: generate start position on a circle, goal position is at the opposite side\"\"\"\r\n        human = Human(self.config, 'humans')\r\n        if self.randomize_attributes:\r\n            human.sample_random_attributes()\r\n\r\n        while True:\r\n            angle = np.random.random() * np.pi * 2\r\n            # add some noise to simulate all the possible cases robot could meet with human\r\n            v_pref = 1.0 if human.v_pref == 0 else human.v_pref\r\n            px_noise = (np.random.random() - 0.5) * v_pref\r\n            py_noise = (np.random.random() - 0.5) * v_pref\r\n            px = self.circle_radius * np.cos(angle) + px_noise\r\n            py = self.circle_radius * np.sin(angle) + py_noise\r\n            collide = False\r\n\r\n            for i, agent in enumerate([self.robot] + self.humans):\r\n                # keep human at least 3 meters away from robot\r\n                if self.robot.kinematics == 'unicycle' and i == 0:\r\n                    min_dist = self.circle_radius / 2 # Todo: if circle_radius <= 4, it will get stuck here\r\n                else:\r\n                    min_dist = human.radius + agent.radius + self.discomfort_dist\r\n                if norm((px - agent.px, py - agent.py)) < min_dist or \\\r\n                        norm((px - agent.gx, py - agent.gy)) < min_dist:\r\n                    collide = True\r\n                    break\r\n            if not collide:\r\n                break\r\n\r\n        # px = np.random.uniform(-6, 6)\r\n        # py = np.random.uniform(-3, 3.5)\r\n        # human.set(px, py, px, py, 0, 0, 0)\r\n        human.set(px, py, -px, -py, 0, 0, 0)\r\n        return human\r\n\r\n\r\n\r\n    # update the robot belief of human states\r\n    # if a human is visible, its state is updated to its current ground truth state\r\n    # else we assume it keeps going in a straight line with last observed velocity\r\n    def update_last_human_states(self, human_visibility, reset):\r\n        \"\"\"\r\n        update the self.last_human_states array\r\n        human_visibility: list of booleans returned by get_human_in_fov (e.x. 
[T, F, F, T, F])\r\n        reset: True if this function is called by reset, False if called by step\r\n        :return:\r\n        \"\"\"\r\n        # keep the order of 5 humans at each timestep\r\n        for i in range(self.human_num):\r\n            if human_visibility[i]:\r\n                humanS = np.array(self.humans[i].get_observable_state_list())\r\n                self.last_human_states[i, :] = humanS\r\n\r\n            else:\r\n                if reset:\r\n                    humanS = np.array([15., 15., 0., 0., 0.3])\r\n                    self.last_human_states[i, :] = humanS\r\n\r\n                else:\r\n                    # Plan A: linear approximation of human's next position\r\n                    px, py, vx, vy, r = self.last_human_states[i, :]\r\n                    px = px + vx * self.time_step\r\n                    py = py + vy * self.time_step\r\n                    self.last_human_states[i, :] = np.array([px, py, vx, vy, r])\r\n\r\n                    # Plan B: assume the human doesn't move, use last observation\r\n                    # self.last_human_states[i, :] = np.array([px, py, 0., 0., r])\r\n\r\n                    # Plan C: put invisible humans in a dummy position\r\n                    # humanS = np.array([15., 15., 0., 0., 0.3])\r\n                    # self.last_human_states[i, :] = humanS\r\n\r\n    # return the ground truth locations of all humans\r\n    def get_true_human_states(self):\r\n        true_human_states = np.zeros((self.human_num, 2))\r\n        for i in range(self.human_num):\r\n            humanS = np.array(self.humans[i].get_observable_state_list())\r\n            true_human_states[i, :] = humanS[:2]\r\n        return true_human_states\r\n\r\n\r\n    # set robot initial state and generate all humans for reset function\r\n    # for crowd nav: human_num == self.human_num\r\n    # for leader follower: human_num = self.human_num - 1\r\n    def generate_robot_humans(self, phase, human_num=None):\r\n        if human_num is None:\r\n            human_num = self.human_num\r\n\r\n        if self.robot.kinematics == 'unicycle':\r\n            angle = np.random.uniform(0, np.pi * 2)\r\n            px = self.circle_radius * np.cos(angle)\r\n            py = self.circle_radius * np.sin(angle)\r\n            while True:\r\n                gx, gy = np.random.uniform(-self.circle_radius, self.circle_radius, 2)\r\n                if np.linalg.norm([px - gx, py - gy]) >= 6:  # 1 was 6\r\n                    break\r\n            self.robot.set(px, py, gx, gy, 0, 0, np.random.uniform(0, 2*np.pi)) # randomize init orientation\r\n\r\n        # randomize starting position and goal position\r\n        else:\r\n            while True:\r\n                px, py, gx, gy = np.random.uniform(-self.circle_radius, self.circle_radius, 4)\r\n                if np.linalg.norm([px - gx, py - gy]) >= 6:\r\n                    break\r\n            self.robot.set(px, py, gx, gy, 0, 0, np.pi/2)\r\n\r\n\r\n        # generate humans\r\n        self.generate_random_human_position(human_num=human_num)\r\n\r\n\r\n\r\n    def smooth_action(self, action):\r\n        \"\"\" mimic the dynamics of Turtlebot2i for sim2real \"\"\"\r\n        # if action.r is delta theta\r\n        w = action.r / self.time_step\r\n        # if action.r is w\r\n        # w = action.r\r\n        beta = 0.1\r\n        left = (2 * action.v - 0.23 * w) / (2 * 0.035)\r\n        right = (2 * action.v + 0.23 * w) / (2 * 0.035)\r\n\r\n        left = np.clip(left, -17.5, 17.5)\r\n        right = np.clip(right, 
-17.5, 17.5)\r\n\r\n        # print('Before: left:', left, 'right:', right)\r\n        if self.phase == 'test':\r\n            left = (1. - beta) * self.last_left + beta * left\r\n            right = (1. - beta) * self.last_right + beta * right\r\n\r\n        self.last_left = copy.deepcopy(left)\r\n        self.last_right = copy.deepcopy(right)\r\n\r\n        # subtract a noisy amount of delay from wheel speeds to simulate the delay in tb2\r\n        # do this in the last step because this happens after we send action commands to tb2\r\n        if left > 0:\r\n            adjust_left = left - np.random.normal(loc=1.8, scale=0.15)\r\n            left = max(0., adjust_left)\r\n        else:\r\n            adjust_left = left + np.random.normal(loc=1.8, scale=0.15)\r\n            left = min(0., adjust_left)\r\n\r\n        if right > 0:\r\n            adjust_right = right - np.random.normal(loc=1.8, scale=0.15)\r\n            right = max(0., adjust_right)\r\n        else:\r\n            adjust_right = right + np.random.normal(loc=1.8, scale=0.15)\r\n            right = min(0., adjust_right)\r\n\r\n        if self.record:\r\n            self.episodeRecoder.wheelVelList.append([left, right])\r\n        # print('After: left:', left, 'right:', right)\r\n\r\n        v = 0.035 / 2 * (left + right)\r\n        r = 0.035 / 0.23 * (right - left) * self.time_step\r\n        return ActionRot(v, r)\r\n\r\n\r\n    def reset(self, phase='train', test_case=None):\r\n        \"\"\"\r\n        Reset the environment\r\n        :return:\r\n        \"\"\"\r\n\r\n        if self.phase is not None:\r\n            phase = self.phase\r\n        if self.test_case is not None:\r\n            test_case=self.test_case\r\n\r\n        if self.robot is None:\r\n            raise AttributeError('robot has to be set!')\r\n        assert phase in ['train', 'val', 'test']\r\n        if test_case is not None:\r\n            self.case_counter[phase] = test_case # test case is passed in to calculate specific seed to generate case\r\n        self.global_time = 0\r\n        self.step_counter=0\r\n\r\n\r\n        self.humans = []\r\n        # train, val, and test phase should start with different seed.\r\n        # case capacity: the maximum number for train(max possible int -2000), val(1000), and test(1000)\r\n        # val start from seed=0, test start from seed=case_capacity['val']=1000\r\n        # train start from self.case_capacity['val'] + self.case_capacity['test']=2000\r\n        counter_offset = {'train': self.case_capacity['val'] + self.case_capacity['test'],\r\n                          'val': 0, 'test': self.case_capacity['val']}\r\n\r\n        np.random.seed(counter_offset[phase] + self.case_counter[phase] + self.thisSeed)\r\n\r\n        self.generate_robot_humans(phase)\r\n\r\n\r\n        for agent in [self.robot] + self.humans:\r\n            agent.time_step = self.time_step\r\n            agent.policy.time_step = self.time_step\r\n\r\n\r\n        # case size is used to make sure that the case_counter is always between 0 and case_size[phase]\r\n        self.case_counter[phase] = (self.case_counter[phase] + int(1*self.nenv)) % self.case_size[phase]\r\n\r\n\r\n        # get current observation\r\n        ob = self.generate_ob(reset=True)\r\n\r\n        # initialize potential\r\n        self.potential = -abs(np.linalg.norm(np.array([self.robot.px, self.robot.py]) - np.array([self.robot.gx, self.robot.gy])))\r\n\r\n\r\n        return ob\r\n\r\n\r\n    # Update the humans' end goals in the environment\r\n    # Produces valid 
end goals for each human\r\n    def update_human_goals_randomly(self):\r\n        # Update humans' goals randomly\r\n        for human in self.humans:\r\n            if human.isObstacle or human.v_pref == 0:\r\n                continue\r\n            if np.random.random() <= self.goal_change_chance:\r\n                humans_copy = []\r\n                for h in self.humans:\r\n                    if h != human:\r\n                        humans_copy.append(h)\r\n\r\n\r\n                # Produce valid goal for human in case of circle setting\r\n                while True:\r\n                    angle = np.random.random() * np.pi * 2\r\n                    # add some noise to simulate all the possible cases robot could meet with human\r\n                    v_pref = 1.0 if human.v_pref == 0 else human.v_pref\r\n                    gx_noise = (np.random.random() - 0.5) * v_pref\r\n                    gy_noise = (np.random.random() - 0.5) * v_pref\r\n                    gx = self.circle_radius * np.cos(angle) + gx_noise\r\n                    gy = self.circle_radius * np.sin(angle) + gy_noise\r\n                    collide = False\r\n\r\n                    for agent in [self.robot] + humans_copy:\r\n                        min_dist = human.radius + agent.radius + self.discomfort_dist\r\n                        if norm((gx - agent.px, gy - agent.py)) < min_dist or \\\r\n                                norm((gx - agent.gx, gy - agent.gy)) < min_dist:\r\n                            collide = True\r\n                            break\r\n                    if not collide:\r\n                        break\r\n\r\n                # Give human new goal\r\n                human.gx = gx\r\n                human.gy = gy\r\n        return\r\n\r\n    # Update the specified human's end goals in the environment randomly\r\n    def update_human_goal(self, human):\r\n\r\n        # Update human's goals randomly\r\n        if np.random.random() <= self.end_goal_change_chance:\r\n            humans_copy = []\r\n            for h in self.humans:\r\n                if h != human:\r\n                    humans_copy.append(h)\r\n\r\n\r\n            while True:\r\n                angle = np.random.random() * np.pi * 2\r\n                # add some noise to simulate all the possible cases robot could meet with human\r\n                v_pref = 1.0 if human.v_pref == 0 else human.v_pref\r\n                gx_noise = (np.random.random() - 0.5) * v_pref\r\n                gy_noise = (np.random.random() - 0.5) * v_pref\r\n                gx = self.circle_radius * np.cos(angle) + gx_noise\r\n                gy = self.circle_radius * np.sin(angle) + gy_noise\r\n                collide = False\r\n\r\n                for agent in [self.robot] + humans_copy:\r\n                    min_dist = human.radius + agent.radius + self.discomfort_dist\r\n                    if norm((gx - agent.px, gy - agent.py)) < min_dist or \\\r\n                            norm((gx - agent.gx, gy - agent.gy)) < min_dist:\r\n                        collide = True\r\n                        break\r\n                if not collide:\r\n                    break\r\n\r\n            # Give human new goal\r\n            human.gx = gx\r\n            human.gy = gy\r\n        return\r\n\r\n    # calculate the angle between the direction vector of state1 and the vector from state1 to state2\r\n    def calc_offset_angle(self, state1, state2):\r\n        if self.robot.kinematics == 'holonomic':\r\n            real_theta = np.arctan2(state1.vy, state1.vx)\r\n    
    else:\r\n            real_theta = state1.theta\r\n        # angle of center line of FOV of agent1\r\n        v_fov = [np.cos(real_theta), np.sin(real_theta)]\r\n\r\n        # angle between agent1 and agent2\r\n        v_12 = [state2.px - state1.px, state2.py - state1.py]\r\n        # angle between center of FOV and agent 2\r\n\r\n        v_fov = v_fov / np.linalg.norm(v_fov)\r\n        v_12 = v_12 / np.linalg.norm(v_12)\r\n\r\n        offset = np.arccos(np.clip(np.dot(v_fov, v_12), a_min=-1, a_max=1))\r\n        return offset\r\n\r\n    # Caculate whether agent2 is in agent1's FOV\r\n    # Not the same as whether agent1 is in agent2's FOV!!!!\r\n    # arguments:\r\n    # state1, state2: can be agent instance OR state instance\r\n    # robot1: is True if state1 is robot, else is False\r\n    # return value:\r\n    # return True if state2 is visible to state1, else return False\r\n    def detect_visible(self, state1, state2, robot1 = False, custom_fov=None, custom_sensor_range=None):\r\n        if self.robot.kinematics == 'holonomic':\r\n            real_theta = np.arctan2(state1.vy, state1.vx)\r\n        else:\r\n            real_theta = state1.theta\r\n        # angle of center line of FOV of agent1\r\n        v_fov = [np.cos(real_theta), np.sin(real_theta)]\r\n\r\n        # angle between agent1 and agent2\r\n        v_12 = [state2.px - state1.px, state2.py - state1.py]\r\n        # angle between center of FOV and agent 2\r\n\r\n        v_fov = v_fov / np.linalg.norm(v_fov)\r\n        v_12 = v_12 / np.linalg.norm(v_12)\r\n\r\n        offset = np.arccos(np.clip(np.dot(v_fov, v_12), a_min=-1, a_max=1))\r\n        if custom_fov:\r\n            fov = custom_fov\r\n        else:\r\n            if robot1:\r\n                fov = self.robot_fov\r\n            else:\r\n                fov = self.human_fov\r\n\r\n        if np.abs(offset) <= fov / 2:\r\n            inFov = True\r\n        else:\r\n            inFov = False\r\n\r\n        # detect whether state2 is in state1's sensor_range\r\n        dist = np.linalg.norm([state1.px - state2.px, state1.py - state2.py]) - state1.radius - state2.radius\r\n        if custom_sensor_range:\r\n            inSensorRange = dist <= custom_sensor_range\r\n        else:\r\n            if robot1:\r\n                inSensorRange = dist <= self.robot.sensor_range\r\n            else:\r\n                inSensorRange = True\r\n\r\n        return (inFov and inSensorRange)\r\n\r\n\r\n    # for robot:\r\n    # return only visible humans to robot and number of visible humans\r\n    # and a list of True/False to indicate whether each human is visible\r\n    def get_num_human_in_fov(self):\r\n        human_ids = []\r\n        humans_in_view = []\r\n        num_humans_in_view = 0\r\n\r\n        for i in range(self.human_num):\r\n            visible = self.detect_visible(self.robot, self.humans[i], robot1=True)\r\n            if visible:\r\n                humans_in_view.append(self.humans[i])\r\n                num_humans_in_view = num_humans_in_view + 1\r\n                human_ids.append(True)\r\n            else:\r\n                human_ids.append(False)\r\n\r\n        return humans_in_view, num_humans_in_view, human_ids\r\n\r\n\r\n\r\n    def last_human_states_obj(self):\r\n        '''\r\n        convert self.last_human_states to a list of observable state objects for old algorithms to use\r\n        '''\r\n        humans = []\r\n        for i in range(self.human_num):\r\n            h = ObservableState(*self.last_human_states[i])\r\n            
humans.append(h)\r\n        return humans\r\n\r\n\r\n    # calculate the reward at current timestep R(s, a)\r\n    def calc_reward(self, action):\r\n        # collision detection\r\n        dmin = float('inf')\r\n\r\n        danger_dists = []\r\n        collision = False\r\n\r\n        for i, human in enumerate(self.humans):\r\n            dx = human.px - self.robot.px\r\n            dy = human.py - self.robot.py\r\n            closest_dist = (dx ** 2 + dy ** 2) ** (1 / 2) - human.radius - self.robot.radius\r\n\r\n            if closest_dist < self.discomfort_dist:\r\n                danger_dists.append(closest_dist)\r\n            if closest_dist < 0:\r\n                collision = True\r\n                # logging.debug(\"Collision: distance between robot and p{} is {:.2E}\".format(i, closest_dist))\r\n                break\r\n            elif closest_dist < dmin:\r\n                dmin = closest_dist\r\n\r\n\r\n        # check if reaching the goal\r\n        reaching_goal = norm(np.array(self.robot.get_position()) - np.array(self.robot.get_goal_position())) < self.robot.radius\r\n\r\n        if self.global_time >= self.time_limit - 1:\r\n            reward = 0\r\n            done = True\r\n            episode_info = Timeout()\r\n        elif collision:\r\n            reward = self.collision_penalty\r\n            done = True\r\n            episode_info = Collision()\r\n        elif reaching_goal:\r\n            reward = self.success_reward\r\n            done = True\r\n            episode_info = ReachGoal()\r\n\r\n        elif dmin < self.discomfort_dist:\r\n            # only penalize agent for getting too close if it's visible\r\n            # adjust the reward based on FPS\r\n            # print(dmin)\r\n            reward = (dmin - self.discomfort_dist) * self.discomfort_penalty_factor * self.time_step\r\n            done = False\r\n            episode_info = Danger(dmin)\r\n\r\n        else:\r\n            # potential reward\r\n            potential_cur = np.linalg.norm(\r\n                np.array([self.robot.px, self.robot.py]) - np.array(self.robot.get_goal_position()))\r\n            reward = 2 * (-abs(potential_cur) - self.potential)\r\n            self.potential = -abs(potential_cur)\r\n\r\n            done = False\r\n            episode_info = Nothing()\r\n\r\n\r\n        # if the robot is near collision/arrival, it should be able to turn a large angle\r\n        if self.robot.kinematics == 'unicycle':\r\n            # add a rotational penalty\r\n            # if action.r is w, factor = -0.02 if w in [-1.5, 1.5], factor = -0.045 if w in [-1, 1];\r\n            # if action.r is delta theta, factor = -2 if r in [-0.15, 0.15], factor = -4.5 if r in [-0.1, 0.1]\r\n            r_spin = -5 * action.r**2\r\n\r\n            # add a penalty for going backwards\r\n            if action.v < 0:\r\n                r_back = -2 * abs(action.v)\r\n            else:\r\n                r_back = 0.\r\n\r\n            reward = reward + r_spin + r_back\r\n\r\n        return reward, done, episode_info\r\n\r\n    # compute the observation\r\n    def generate_ob(self, reset):\r\n        visible_human_states, num_visible_humans, human_visibility = self.get_num_human_in_fov()\r\n        self.update_last_human_states(human_visibility, reset=reset)\r\n        if self.robot.policy.name in ['lstm_ppo', 'srnn']:\r\n            ob = [num_visible_humans]\r\n            # append robot's state\r\n            robotS = np.array(self.robot.get_full_state_list())\r\n            ob.extend(list(robotS))\r\n\r\n    
        ob.extend(list(np.ravel(self.last_human_states)))\r\n            ob = np.array(ob)\r\n\r\n        else: # for orca and sf\r\n            ob = self.last_human_states_obj()\r\n\r\n        return ob\r\n\r\n    def get_human_actions(self):\r\n        # step all humans\r\n        human_actions = []  # a list of all humans' actions\r\n\r\n        for i, human in enumerate(self.humans):\r\n            # observation for humans is always coordinates\r\n            ob = []\r\n            for other_human in self.humans:\r\n                if other_human != human:\r\n                    # Detectable humans are always observable to each other\r\n                    if self.detect_visible(human, other_human):\r\n                        ob.append(other_human.get_observable_state())\r\n                    else:\r\n                        ob.append(self.dummy_human.get_observable_state())\r\n\r\n            if self.robot.visible:\r\n                if self.detect_visible(self.humans[i], self.robot):\r\n                    ob += [self.robot.get_observable_state()]\r\n                else:\r\n                    ob += [self.dummy_robot.get_observable_state()]\r\n\r\n            human_actions.append(human.act(ob))\r\n\r\n        return human_actions\r\n\r\n    def step(self, action, update=True):\r\n        \"\"\"\r\n        Compute actions for all agents, detect collision, update environment and return (ob, reward, done, info)\r\n        \"\"\"\r\n\r\n        # clip the action to obey robot's constraint\r\n        action = self.robot.policy.clip_action(action, self.robot.v_pref)\r\n\r\n        human_actions = self.get_human_actions()\r\n\r\n\r\n        # compute reward and episode info\r\n        reward, done, episode_info = self.calc_reward(action)\r\n\r\n\r\n        # apply action and update all agents\r\n        self.robot.step(action)\r\n        for i, human_action in enumerate(human_actions):\r\n            self.humans[i].step(human_action)\r\n        self.global_time += self.time_step # max episode length=time_limit/time_step\r\n        self.step_counter=self.step_counter+1\r\n\r\n        ##### compute_ob goes here!!!!!\r\n        ob = self.generate_ob(reset=False)\r\n\r\n\r\n        if self.robot.policy.name in ['srnn']:\r\n            info={'info':episode_info}\r\n        else: # for orca and sf\r\n            info=episode_info\r\n\r\n        # Update all humans' goals randomly midway through episode\r\n        if self.random_goal_changing:\r\n            if self.global_time % 5 == 0:\r\n                self.update_human_goals_randomly()\r\n\r\n\r\n        # Update a specific human's goal once it has reached its original goal\r\n        if self.end_goal_changing:\r\n            for human in self.humans:\r\n                if not human.isObstacle and human.v_pref != 0 and norm((human.gx - human.px, human.gy - human.py)) < human.radius:\r\n                    self.update_human_goal(human)\r\n\r\n        return ob, reward, done, info\r\n\r\n    def render(self, mode='human'):\r\n        \"\"\" Render the current status of the environment using matplotlib \"\"\"\r\n        import matplotlib.pyplot as plt\r\n        import matplotlib.lines as mlines\r\n        from matplotlib import patches\r\n\r\n        plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'\r\n\r\n        robot_color = 'yellow'\r\n        goal_color = 'red'\r\n        arrow_color = 'red'\r\n        arrow_style = patches.ArrowStyle(\"->\", head_length=4, head_width=2)\r\n\r\n        def calcFOVLineEndPoint(ang, point, 
extendFactor):\r\n            # choose the extendFactor big enough\r\n            # so that the endPoints of the FOVLine is out of xlim and ylim of the figure\r\n            FOVLineRot = np.array([[np.cos(ang), -np.sin(ang), 0],\r\n                                   [np.sin(ang), np.cos(ang), 0],\r\n                                   [0, 0, 1]])\r\n            point.extend([1])\r\n            # apply rotation matrix\r\n            newPoint = np.matmul(FOVLineRot, np.reshape(point, [3, 1]))\r\n            # increase the distance between the line start point and the end point\r\n            newPoint = [extendFactor * newPoint[0, 0], extendFactor * newPoint[1, 0], 1]\r\n            return newPoint\r\n\r\n\r\n\r\n        ax=self.render_axis\r\n        artists=[]\r\n\r\n        # add goal\r\n        goal=mlines.Line2D([self.robot.gx], [self.robot.gy], color=goal_color, marker='*', linestyle='None', markersize=15, label='Goal')\r\n        ax.add_artist(goal)\r\n        artists.append(goal)\r\n\r\n        # add robot\r\n        robotX,robotY=self.robot.get_position()\r\n\r\n        robot=plt.Circle((robotX,robotY), self.robot.radius, fill=True, color=robot_color)\r\n        ax.add_artist(robot)\r\n        artists.append(robot)\r\n\r\n        plt.legend([robot, goal], ['Robot', 'Goal'], fontsize=16)\r\n\r\n\r\n        # compute orientation in each step and add arrow to show the direction\r\n        radius = self.robot.radius\r\n        arrowStartEnd=[]\r\n\r\n        robot_theta = self.robot.theta if self.robot.kinematics == 'unicycle' else np.arctan2(self.robot.vy, self.robot.vx)\r\n\r\n        arrowStartEnd.append(((robotX, robotY), (robotX + radius * np.cos(robot_theta), robotY + radius * np.sin(robot_theta))))\r\n\r\n        for i, human in enumerate(self.humans):\r\n            theta = np.arctan2(human.vy, human.vx)\r\n            arrowStartEnd.append(((human.px, human.py), (human.px + radius * np.cos(theta), human.py + radius * np.sin(theta))))\r\n\r\n        arrows = [patches.FancyArrowPatch(*arrow, color=arrow_color, arrowstyle=arrow_style)\r\n                  for arrow in arrowStartEnd]\r\n        for arrow in arrows:\r\n            ax.add_artist(arrow)\r\n            artists.append(arrow)\r\n\r\n\r\n        # draw FOV for the robot\r\n        # add robot FOV\r\n\r\n        if self.robot.FOV < 2 * np.pi:\r\n            FOVAng = self.robot_fov / 2\r\n            FOVLine1 = mlines.Line2D([0, 0], [0, 0], linestyle='--')\r\n            FOVLine2 = mlines.Line2D([0, 0], [0, 0], linestyle='--')\r\n\r\n\r\n            startPointX = robotX\r\n            startPointY = robotY\r\n            endPointX = robotX + radius * np.cos(robot_theta)\r\n            endPointY = robotY + radius * np.sin(robot_theta)\r\n\r\n            # transform the vector back to world frame origin, apply rotation matrix, and get end point of FOVLine\r\n            # the start point of the FOVLine is the center of the robot\r\n            FOVEndPoint1 = calcFOVLineEndPoint(FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)\r\n            FOVLine1.set_xdata(np.array([startPointX, startPointX + FOVEndPoint1[0]]))\r\n            FOVLine1.set_ydata(np.array([startPointY, startPointY + FOVEndPoint1[1]]))\r\n            FOVEndPoint2 = calcFOVLineEndPoint(-FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. 
/ self.robot.radius)\r\n            FOVLine2.set_xdata(np.array([startPointX, startPointX + FOVEndPoint2[0]]))\r\n            FOVLine2.set_ydata(np.array([startPointY, startPointY + FOVEndPoint2[1]]))\r\n\r\n            ax.add_artist(FOVLine1)\r\n            ax.add_artist(FOVLine2)\r\n            artists.append(FOVLine1)\r\n            artists.append(FOVLine2)\r\n\r\n        # add an arc of robot's sensor range\r\n        sensor_range = plt.Circle(self.robot.get_position(), self.robot.sensor_range, fill=False, linestyle='--')\r\n        ax.add_artist(sensor_range)\r\n        artists.append(sensor_range)\r\n        # add humans and change the color of them based on visibility\r\n        human_circles = [plt.Circle(human.get_position(), human.radius, fill=False) for human in self.humans]\r\n\r\n\r\n        for i in range(len(self.humans)):\r\n            ax.add_artist(human_circles[i])\r\n            artists.append(human_circles[i])\r\n\r\n            # green: visible; red: invisible\r\n            if self.detect_visible(self.robot, self.humans[i], robot1=True):\r\n                human_circles[i].set_color(c='g')\r\n            else:\r\n                human_circles[i].set_color(c='r')\r\n\r\n            # label numbers on each human\r\n            # plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, str(self.humans[i].id), color='black', fontsize=12)\r\n            plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, i, color='black', fontsize=12)\r\n\r\n\r\n\r\n        plt.pause(0.1)\r\n        for item in artists:\r\n            item.remove() # there should be a better way to do this. For example,\r\n            # initially use add_artist and draw_artist later on\r\n        for t in ax.texts:\r\n            t.set_visible(False)\r\n\r\n"
  },
  {
    "path": "crowd_sim/envs/crowd_sim_pred.py",
    "content": "import gym\nimport numpy as np\nfrom numpy.linalg import norm\nimport copy\n\nfrom crowd_sim.envs.utils.action import ActionRot, ActionXY\nfrom crowd_sim.envs.crowd_sim_var_num import CrowdSimVarNum\n\n\n\nclass CrowdSimPred(CrowdSimVarNum):\n    \"\"\"\n    The environment for our model with non-neural network trajectory predictors, including const vel predictor and ground truth predictor\n    The number of humans at each timestep can change within a range\n    \"\"\"\n    def __init__(self):\n        \"\"\"\n        Movement simulation for n+1 agents\n        Agent can either be human or robot.\n        humans are controlled by a unknown and fixed policy.\n        robot is controlled by a known and learnable policy.\n        \"\"\"\n        super(CrowdSimPred, self).__init__()\n        self.pred_method = None\n        self.cur_human_states=None\n\n    def configure(self, config):\n        \"\"\" read the config to the environment variables \"\"\"\n        super().configure(config)\n        # 'const_vel', 'truth', or 'inferred'\n        self.pred_method = config.sim.predict_method\n\n    def reset(self, phase='train', test_case=None):\n        ob = super().reset(phase=phase, test_case=test_case)\n        return ob\n\n    # set observation space and action space\n    def set_robot(self, robot):\n        self.robot = robot\n\n        # we set the max and min of action/observation space as inf\n        # clip the action and observation as you need\n\n        d = {}\n        # robot node: num_visible_humans, px, py, r, gx, gy, v_pref, theta\n        d['robot_node'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 7,), dtype=np.float32)\n        # only consider all temporal edges (human_num+1) and spatial edges pointing to robot (human_num)\n        d['temporal_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 2,), dtype=np.float32)\n\n        d['spatial_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf,\n                            shape=(self.config.sim.human_num + self.config.sim.human_num_range, int(2*(self.predict_steps+1))),\n                            dtype=np.float32)\n        # number of humans detected at each timestep\n        d['detected_human_num'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1,), dtype=np.float32)\n        self.observation_space = gym.spaces.Dict(d)\n\n        high = np.inf * np.ones([2, ])\n        self.action_space = gym.spaces.Box(-high, high, dtype=np.float32)\n\n\n    # reset = True: reset calls this function; reset = False: step calls this function\n    def generate_ob(self, reset, sort=True):\n        \"\"\"Generate observation for reset and step functions\"\"\"\n        ob = {}\n\n        # nodes\n        visible_humans, num_visibles, self.human_visibility = self.get_num_human_in_fov()\n\n        ob['robot_node'] = self.robot.get_full_state_list_noV()\n\n        self.prev_human_pos = copy.deepcopy(self.last_human_states)\n        self.update_last_human_states(self.human_visibility, reset=reset)\n\n        # edges\n        ob['temporal_edges'] = np.array([self.robot.vx, self.robot.vy])\n\n        # initialize storage space for max_human_num humans\n        ob['spatial_edges'] = np.ones((self.config.sim.human_num + self.config.sim.human_num_range, int(2*(self.predict_steps+1)))) * np.inf\n\n        # calculate the current and future positions of present humans (first self.human_num humans in ob['spatial_edges'])\n        predicted_states = self.calc_human_future_traj(method=self.pred_method) # [pred_steps + 1, human_num, 
4]\n        # [human_num, pred_steps + 1, 2]\n        predicted_pos = np.transpose(predicted_states[:, :, :2], (1, 0, 2)) - np.array([self.robot.px, self.robot.py])\n\n        # [human_num, (pred_steps + 1)*2]\n        ob['spatial_edges'][:self.human_num][self.human_visibility] = predicted_pos.reshape((self.human_num, -1))[self.human_visibility]\n        # sort all humans by distance (invisible humans will be in the end automatically)\n        if self.config.args.sort_humans:\n            ob['spatial_edges'] = np.array(sorted(ob['spatial_edges'], key=lambda x: np.linalg.norm(x[:2])))\n        ob['spatial_edges'][np.isinf(ob['spatial_edges'])] = 15\n\n        ob['detected_human_num'] = num_visibles\n        # if no human is detected, assume there is one dummy human at (15, 15) to make the pack_padded_sequence work\n        if ob['detected_human_num'] == 0:\n            ob['detected_human_num'] = 1\n\n        return ob\n\n\n    def step(self, action, update=True):\n        \"\"\"\n        step function\n        Compute actions for all agents, detect collision, update environment and return (ob, reward, done, info)\n        \"\"\"\n        if self.robot.policy.name == 'ORCA':\n            # assemble observation for orca: px, py, vx, vy, r\n            # include all observable humans from t to t+t_pred\n            _, _, human_visibility = self.get_num_human_in_fov()\n            # [self.predict_steps + 1, self.human_num, 4]\n            human_states = copy.deepcopy(self.calc_human_future_traj(method='truth'))\n            # append the radius, convert it to [human_num*(self.predict_steps+1), 5] by treating each predicted pos as a new human\n            human_states = np.concatenate((human_states.reshape((-1, 4)),\n                                           np.tile(self.last_human_states[:, -1], self.predict_steps+1).reshape((-1, 1))),\n                                          axis=1)\n            # get orca action\n            action = self.robot.act(human_states.tolist())\n        else:\n            action = self.robot.policy.clip_action(action, self.robot.v_pref)\n\n        if self.robot.kinematics == 'unicycle':\n            self.desiredVelocity[0] = np.clip(self.desiredVelocity[0] + action.v, -self.robot.v_pref, self.robot.v_pref)\n            action = ActionRot(self.desiredVelocity[0], action.r)\n\n            # if action.r is delta theta\n            action = ActionRot(self.desiredVelocity[0], action.r)\n            if self.record:\n                self.episodeRecoder.unsmoothed_actions.append(list(action))\n\n            action = self.smooth_action(action)\n\n\n\n        human_actions = self.get_human_actions()\n\n        # need to update self.human_future_traj in testing to calculate number of intrusions\n        if self.phase == 'test':\n            # use ground truth future positions of humans\n            self.calc_human_future_traj(method='truth')\n\n        # compute reward and episode info\n        reward, done, episode_info = self.calc_reward(action, danger_zone='future')\n\n\n        if self.record:\n\n            self.episodeRecoder.actionList.append(list(action))\n            self.episodeRecoder.positionList.append([self.robot.px, self.robot.py])\n            self.episodeRecoder.orientationList.append(self.robot.theta)\n\n            if done:\n                self.episodeRecoder.robot_goal.append([self.robot.gx, self.robot.gy])\n                self.episodeRecoder.saveEpisode(self.case_counter['test'])\n\n        # apply action and update all agents\n        
self.robot.step(action)\n        for i, human_action in enumerate(human_actions):\n            self.humans[i].step(human_action)\n            self.cur_human_states[i] = np.array([self.humans[i].px, self.humans[i].py, self.humans[i].radius])\n\n        self.global_time += self.time_step # max episode length=time_limit/time_step\n        self.step_counter = self.step_counter+1\n\n        info={'info':episode_info}\n\n        # Add or remove at most self.human_num_range humans\n        # if self.human_num_range == 0 -> human_num is fixed at all times\n        if self.human_num_range > 0 and self.global_time % 5 == 0:\n            # remove humans\n            if np.random.rand() < 0.5:\n                # if no human is visible, anyone can be removed\n                if len(self.observed_human_ids) == 0:\n                    max_remove_num = self.human_num - 1\n                else:\n                    max_remove_num = (self.human_num - 1) - max(self.observed_human_ids)\n                remove_num = np.random.randint(low=0, high=min(self.human_num_range, max_remove_num) + 1)\n                for _ in range(remove_num):\n                    self.humans.pop()\n                self.human_num = self.human_num - remove_num\n                self.last_human_states = self.last_human_states[:self.human_num]\n            # add humans\n            else:\n                add_num = np.random.randint(low=0, high=self.human_num_range + 1)\n                if add_num > 0:\n                    # set human ids\n                    true_add_num = 0\n                    for i in range(self.human_num, self.human_num + add_num):\n                        if i == self.config.sim.human_num + self.human_num_range:\n                            break\n                        self.generate_random_human_position(human_num=1)\n                        self.humans[i].id = i\n                        true_add_num = true_add_num + 1\n                    self.human_num = self.human_num + true_add_num\n                    if true_add_num > 0:\n                        self.last_human_states = np.concatenate((self.last_human_states, np.array([[15, 15, 0, 0, 0.3]]*true_add_num)), axis=0)\n\n\n        # compute the observation\n        ob = self.generate_ob(reset=False)\n\n\n        # Update all humans' goals randomly midway through episode\n        if self.random_goal_changing:\n            if self.global_time % 5 == 0:\n                self.update_human_goals_randomly()\n\n        # Update a specific human's goal once its reached its original goal\n        if self.end_goal_changing and not self.record:\n            for i, human in enumerate(self.humans):\n                if norm((human.gx - human.px, human.gy - human.py)) < human.radius:\n                    self.humans[i] = self.generate_circle_crossing_human()\n                    self.humans[i].id = i\n\n        return ob, reward, done, info\n\n\n    def calc_reward(self, action, danger_zone='future'):\n        \"\"\"Calculate the reward, which includes the functionality reward and social reward\"\"\"\n        # compute the functionality reward, same as the reward function with no prediction\n        reward, done, episode_info = super().calc_reward(action, danger_zone=danger_zone)\n        # compute the social reward\n        # if the robot collides with future states, give it a collision penalty\n        relative_pos = self.human_future_traj[1:, :, :2] - np.array([self.robot.px, self.robot.py])\n        # assumes constant radius of each personal zone circle\n        
collision_idx = np.linalg.norm(relative_pos, axis=-1) < self.robot.radius + self.config.humans.radius # [predict_steps, human_num]\n\n        coefficients = 2. ** np.arange(2, self.predict_steps + 2).reshape((self.predict_steps, 1))  # 4, 8, 16, 32\n\n        collision_penalties = self.collision_penalty / coefficients\n\n        reward_future = collision_idx * collision_penalties # [predict_steps, human_num]\n        reward_future = np.min(reward_future)\n\n        return reward + reward_future, done, episode_info\n\n\n    def render(self, mode='human'):\n        \"\"\" Render the current status of the environment using matplotlib \"\"\"\n        import matplotlib.pyplot as plt\n        import matplotlib.lines as mlines\n        from matplotlib import patches\n\n        plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'\n\n        robot_color = 'gold'\n        goal_color = 'red'\n        arrow_color = 'red'\n        arrow_style = patches.ArrowStyle(\"->\", head_length=4, head_width=2)\n\n        def calcFOVLineEndPoint(ang, point, extendFactor):\n            # choose the extendFactor big enough\n            # so that the endPoints of the FOVLine is out of xlim and ylim of the figure\n            FOVLineRot = np.array([[np.cos(ang), -np.sin(ang), 0],\n                                   [np.sin(ang), np.cos(ang), 0],\n                                   [0, 0, 1]])\n            point.extend([1])\n            # apply rotation matrix\n            newPoint = np.matmul(FOVLineRot, np.reshape(point, [3, 1]))\n            # increase the distance between the line start point and the end point\n            newPoint = [extendFactor * newPoint[0, 0], extendFactor * newPoint[1, 0], 1]\n            return newPoint\n\n\n\n        ax=self.render_axis\n        artists=[]\n\n        # add goal\n        goal=mlines.Line2D([self.robot.gx], [self.robot.gy], color=goal_color, marker='*', linestyle='None', markersize=15, label='Goal')\n        ax.add_artist(goal)\n        artists.append(goal)\n\n        # add robot\n        robotX,robotY=self.robot.get_position()\n\n        robot=plt.Circle((robotX,robotY), self.robot.radius, fill=True, color=robot_color)\n        ax.add_artist(robot)\n        artists.append(robot)\n\n        # plt.legend([robot, goal], ['Robot', 'Goal'], fontsize=16)\n\n\n        # compute orientation in each step and add arrow to show the direction\n        radius = self.robot.radius\n        arrowStartEnd=[]\n\n        robot_theta = self.robot.theta if self.robot.kinematics == 'unicycle' else np.arctan2(self.robot.vy, self.robot.vx)\n\n        arrowStartEnd.append(((robotX, robotY), (robotX + radius * np.cos(robot_theta), robotY + radius * np.sin(robot_theta))))\n\n        for i, human in enumerate(self.humans):\n            theta = np.arctan2(human.vy, human.vx)\n            arrowStartEnd.append(((human.px, human.py), (human.px + radius * np.cos(theta), human.py + radius * np.sin(theta))))\n\n        arrows = [patches.FancyArrowPatch(*arrow, color=arrow_color, arrowstyle=arrow_style)\n                  for arrow in arrowStartEnd]\n        for arrow in arrows:\n            ax.add_artist(arrow)\n            artists.append(arrow)\n\n\n        # draw FOV for the robot\n        # add robot FOV\n        if self.robot.FOV < 2 * np.pi:\n            FOVAng = self.robot_fov / 2\n            FOVLine1 = mlines.Line2D([0, 0], [0, 0], linestyle='--')\n            FOVLine2 = mlines.Line2D([0, 0], [0, 0], linestyle='--')\n\n\n            startPointX = robotX\n            startPointY = robotY\n         
   endPointX = robotX + radius * np.cos(robot_theta)\n            endPointY = robotY + radius * np.sin(robot_theta)\n\n            # transform the vector back to world frame origin, apply rotation matrix, and get end point of FOVLine\n            # the start point of the FOVLine is the center of the robot\n            FOVEndPoint1 = calcFOVLineEndPoint(FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)\n            FOVLine1.set_xdata(np.array([startPointX, startPointX + FOVEndPoint1[0]]))\n            FOVLine1.set_ydata(np.array([startPointY, startPointY + FOVEndPoint1[1]]))\n            FOVEndPoint2 = calcFOVLineEndPoint(-FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)\n            FOVLine2.set_xdata(np.array([startPointX, startPointX + FOVEndPoint2[0]]))\n            FOVLine2.set_ydata(np.array([startPointY, startPointY + FOVEndPoint2[1]]))\n\n            ax.add_artist(FOVLine1)\n            ax.add_artist(FOVLine2)\n            artists.append(FOVLine1)\n            artists.append(FOVLine2)\n\n        # add an arc of robot's sensor range\n        sensor_range = plt.Circle(self.robot.get_position(), self.robot.sensor_range + self.robot.radius+self.config.humans.radius, fill=False, linestyle='--')\n        ax.add_artist(sensor_range)\n        artists.append(sensor_range)\n\n        # add humans and change the color of them based on visibility\n        human_circles = [plt.Circle(human.get_position(), human.radius, fill=False, linewidth=1.5) for human in self.humans]\n        # human_circles = [plt.Circle(human.get_position(), 2, fill=False) for human in self.humans]\n\n        # hardcoded for now\n        actual_arena_size = self.arena_size + 0.5\n        # plot current human states\n        for i in range(len(self.humans)):\n            ax.add_artist(human_circles[i])\n            artists.append(human_circles[i])\n\n            # green: visible; red: invisible\n            if self.detect_visible(self.robot, self.humans[i], robot1=True):\n                human_circles[i].set_color(c='b')\n            else:\n                human_circles[i].set_color(c='r')\n\n\n            if -actual_arena_size <= self.humans[i].px <= actual_arena_size and -actual_arena_size <= self.humans[\n                i].py <= actual_arena_size:\n                # label numbers on each human\n                # plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, str(self.humans[i].id), color='black', fontsize=12)\n                plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, i, color='black', fontsize=12)\n\n\n        # save plot for slide show\n        if self.config.save_slides:\n            import os\n            folder_path = os.path.join(self.config.save_path, str(self.rand_seed)+'pred')\n            if not os.path.isdir(folder_path):\n                os.makedirs(folder_path, exist_ok = True)\n            plt.savefig(os.path.join(folder_path, str(self.step_counter)+'.png'), dpi=300)\n\n        plt.pause(0.1)\n        for item in artists:\n            item.remove() # there should be a better way to do this. For example,\n            # initially use add_artist and draw_artist later on\n        for t in ax.texts:\n            t.set_visible(False)\n"
  },
  {
    "path": "crowd_sim/envs/crowd_sim_pred_real_gst.py",
    "content": "import gym\nimport numpy as np\n\nfrom crowd_sim.envs.crowd_sim_pred import CrowdSimPred\n\n\nclass CrowdSimPredRealGST(CrowdSimPred):\n    '''\n    Same as CrowdSimPred, except that\n    The future human traj in 'spatial_edges' are dummy placeholders\n    and will be replaced by the outputs of a real GST pred model in the wrapper function in vec_pretext_normalize.py\n    '''\n    def __init__(self):\n        \"\"\"\n        Movement simulation for n+1 agents\n        Agent can either be human or robot.\n        humans are controlled by a unknown and fixed policy.\n        robot is controlled by a known and learnable policy.\n        \"\"\"\n        super(CrowdSimPredRealGST, self).__init__()\n        self.pred_method = None\n\n        # to receive data from gst pred model\n        self.gst_out_traj = None\n\n\n    def set_robot(self, robot):\n        \"\"\"set observation space and action space\"\"\"\n        self.robot = robot\n\n        # we set the max and min of action/observation space as inf\n        # clip the action and observation as you need\n\n        d = {}\n        # robot node: num_visible_humans, px, py, r, gx, gy, v_pref, theta\n        d['robot_node'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 7,), dtype=np.float32)\n        # only consider all temporal edges (human_num+1) and spatial edges pointing to robot (human_num)\n        d['temporal_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 2,), dtype=np.float32)\n        '''\n        format of spatial_edges: [max_human_num, [state_t, state_(t+1), ..., state(t+self.pred_steps)]]\n        '''\n\n        # predictions only include mu_x, mu_y (or px, py)\n        self.spatial_edge_dim = int(2*(self.predict_steps+1))\n\n        d['spatial_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf,\n                            shape=(self.config.sim.human_num + self.config.sim.human_num_range, self.spatial_edge_dim),\n                            dtype=np.float32)\n\n        # masks for gst pred model\n        # whether each human is visible to robot (ordered by human ID, should not be sorted)\n        d['visible_masks'] = gym.spaces.Box(low=-np.inf, high=np.inf,\n                                            shape=(self.config.sim.human_num + self.config.sim.human_num_range,),\n                                            dtype=np.bool)\n\n        # number of humans detected at each timestep\n        d['detected_human_num'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1,), dtype=np.float32)\n\n        self.observation_space = gym.spaces.Dict(d)\n\n        high = np.inf * np.ones([2, ])\n        self.action_space = gym.spaces.Box(-high, high, dtype=np.float32)\n\n    def talk2Env(self, data):\n        \"\"\"\n        Call this function when you want extra information to send to/recv from the env\n        :param data: data that is sent from gst_predictor network to the env, it has 2 parts:\n        output predicted traj and output masks\n        :return: True means received\n        \"\"\"\n        self.gst_out_traj=data\n        return True\n\n\n    # reset = True: reset calls this function; reset = False: step calls this function\n    def generate_ob(self, reset, sort=False):\n        \"\"\"Generate observation for reset and step functions\"\"\"\n        # since gst pred model needs ID tracking, don't sort all humans\n        # inherit from crowd_sim_lstm, not crowd_sim_pred to avoid computation of true pred!\n        # sort=False because we will sort in wrapper in vec_pretext_normalize.py later\n  
      parent_ob = super(CrowdSimPred, self).generate_ob(reset=reset, sort=False)\n\n        # add additional keys, removed unused keys\n        ob = {}\n\n        ob['visible_masks'] = parent_ob['visible_masks']\n        ob['robot_node'] = parent_ob['robot_node']\n        ob['temporal_edges'] = parent_ob['temporal_edges']\n\n        ob['spatial_edges'] = np.tile(parent_ob['spatial_edges'], self.predict_steps+1)\n\n        ob['detected_human_num'] = parent_ob['detected_human_num']\n\n        return ob\n\n\n    def calc_reward(self, action, danger_zone='future'):\n        # inherit from crowd_sim_lstm, not crowd_sim_pred to prevent social reward calculation\n        # since we don't know the GST predictions yet\n        reward, done, episode_info = super(CrowdSimPred, self).calc_reward(action, danger_zone=danger_zone)\n        return reward, done, episode_info\n\n\n    def render(self, mode='human'):\n        \"\"\"\n        render function\n        use talk2env to plot the predicted future traj of humans\n        \"\"\"\n        import matplotlib.pyplot as plt\n        import matplotlib.lines as mlines\n        from matplotlib import patches\n\n        plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'\n\n        robot_color = 'gold'\n        goal_color = 'red'\n        arrow_color = 'red'\n        arrow_style = patches.ArrowStyle(\"->\", head_length=4, head_width=2)\n\n        def calcFOVLineEndPoint(ang, point, extendFactor):\n            # choose the extendFactor big enough\n            # so that the endPoints of the FOVLine is out of xlim and ylim of the figure\n            FOVLineRot = np.array([[np.cos(ang), -np.sin(ang), 0],\n                                   [np.sin(ang), np.cos(ang), 0],\n                                   [0, 0, 1]])\n            point.extend([1])\n            # apply rotation matrix\n            newPoint = np.matmul(FOVLineRot, np.reshape(point, [3, 1]))\n            # increase the distance between the line start point and the end point\n            newPoint = [extendFactor * newPoint[0, 0], extendFactor * newPoint[1, 0], 1]\n            return newPoint\n\n\n\n        ax=self.render_axis\n        artists=[]\n\n        # add goal\n        goal=mlines.Line2D([self.robot.gx], [self.robot.gy], color=goal_color, marker='*', linestyle='None', markersize=15, label='Goal')\n        ax.add_artist(goal)\n        artists.append(goal)\n\n        # add robot\n        robotX,robotY=self.robot.get_position()\n\n        robot=plt.Circle((robotX,robotY), self.robot.radius, fill=True, color=robot_color)\n        ax.add_artist(robot)\n        artists.append(robot)\n\n\n        # compute orientation in each step and add arrow to show the direction\n        radius = self.robot.radius\n        arrowStartEnd=[]\n\n        robot_theta = self.robot.theta if self.robot.kinematics == 'unicycle' else np.arctan2(self.robot.vy, self.robot.vx)\n\n        arrowStartEnd.append(((robotX, robotY), (robotX + radius * np.cos(robot_theta), robotY + radius * np.sin(robot_theta))))\n\n        for i, human in enumerate(self.humans):\n            theta = np.arctan2(human.vy, human.vx)\n            arrowStartEnd.append(((human.px, human.py), (human.px + radius * np.cos(theta), human.py + radius * np.sin(theta))))\n\n        arrows = [patches.FancyArrowPatch(*arrow, color=arrow_color, arrowstyle=arrow_style)\n                  for arrow in arrowStartEnd]\n        for arrow in arrows:\n            ax.add_artist(arrow)\n            artists.append(arrow)\n\n\n        # draw FOV for the robot\n        
# add robot FOV\n        if self.robot.FOV < 2 * np.pi:\n            FOVAng = self.robot_fov / 2\n            FOVLine1 = mlines.Line2D([0, 0], [0, 0], linestyle='--')\n            FOVLine2 = mlines.Line2D([0, 0], [0, 0], linestyle='--')\n\n\n            startPointX = robotX\n            startPointY = robotY\n            endPointX = robotX + radius * np.cos(robot_theta)\n            endPointY = robotY + radius * np.sin(robot_theta)\n\n            # transform the vector back to world frame origin, apply rotation matrix, and get end point of FOVLine\n            # the start point of the FOVLine is the center of the robot\n            FOVEndPoint1 = calcFOVLineEndPoint(FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)\n            FOVLine1.set_xdata(np.array([startPointX, startPointX + FOVEndPoint1[0]]))\n            FOVLine1.set_ydata(np.array([startPointY, startPointY + FOVEndPoint1[1]]))\n            FOVEndPoint2 = calcFOVLineEndPoint(-FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)\n            FOVLine2.set_xdata(np.array([startPointX, startPointX + FOVEndPoint2[0]]))\n            FOVLine2.set_ydata(np.array([startPointY, startPointY + FOVEndPoint2[1]]))\n\n            ax.add_artist(FOVLine1)\n            ax.add_artist(FOVLine2)\n            artists.append(FOVLine1)\n            artists.append(FOVLine2)\n\n        # add an arc of robot's sensor range\n        sensor_range = plt.Circle(self.robot.get_position(), self.robot.sensor_range + self.robot.radius+self.config.humans.radius, fill=False, linestyle='--')\n        ax.add_artist(sensor_range)\n        artists.append(sensor_range)\n\n        # add humans and change the color of them based on visibility\n        human_circles = [plt.Circle(human.get_position(), human.radius, fill=False, linewidth=1.5) for human in self.humans]\n\n        # hardcoded for now\n        actual_arena_size = self.arena_size + 0.5\n\n        # plot the current human states\n        for i in range(len(self.humans)):\n            ax.add_artist(human_circles[i])\n            artists.append(human_circles[i])\n\n            # green: visible; red: invisible\n            if self.human_visibility[i]:\n                human_circles[i].set_color(c='b')\n            else:\n                human_circles[i].set_color(c='r')\n\n            if -actual_arena_size <= self.humans[i].px <= actual_arena_size and -actual_arena_size <= self.humans[\n                i].py <= actual_arena_size:\n                # label numbers on each human\n                # plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, str(self.humans[i].id), color='black', fontsize=12)\n                plt.text(self.humans[i].px , self.humans[i].py , i, color='black', fontsize=12)\n\n        # plot predicted human positions\n        for i in range(len(self.humans)):\n            # add future predicted positions of each human\n            if self.gst_out_traj is not None:\n                for j in range(self.predict_steps):\n                    circle = plt.Circle(self.gst_out_traj[i, (2 * j):(2 * j + 2)] + np.array([robotX, robotY]),\n                                        self.config.humans.radius, fill=False, color='tab:orange', linewidth=1.5)\n                    # circle = plt.Circle(np.array([robotX, robotY]),\n                    #                     self.humans[i].radius, fill=False)\n                    ax.add_artist(circle)\n                    artists.append(circle)\n\n        plt.pause(0.1)\n        for item in 
artists:\n            item.remove() # there should be a better way to do this. For example,\n            # initially use add_artist and draw_artist later on\n        for t in ax.texts:\n            t.set_visible(False)\n"
  },
  {
    "path": "crowd_sim/envs/crowd_sim_var_num.py",
    "content": "import gym\nimport numpy as np\nfrom numpy.linalg import norm\nimport copy\n\nfrom crowd_sim.envs.utils.action import ActionRot, ActionXY\nfrom crowd_sim.envs import *\nfrom crowd_sim.envs.utils.info import *\nfrom crowd_sim.envs.utils.human import Human\nfrom crowd_sim.envs.utils.state import JointState\n\n\nclass CrowdSimVarNum(CrowdSim):\n    \"\"\"\n    The environment for our model with no trajectory prediction, or the baseline models with no prediction\n    The number of humans at each timestep can change within a range\n    \"\"\"\n    def __init__(self):\n        \"\"\"\n        Movement simulation for n+1 agents\n        Agent can either be human or robot.\n        humans are controlled by a unknown and fixed policy.\n        robot is controlled by a known and learnable policy.\n        \"\"\"\n        super().__init__()\n        self.id_counter = None\n        self.observed_human_ids = None\n        self.pred_method = None\n\n\n    def configure(self, config):\n        \"\"\" read the config to the environment variables \"\"\"\n        super(CrowdSimVarNum, self).configure(config)\n        self.action_type=config.action_space.kinematics\n\n    # set observation space and action space\n    def set_robot(self, robot):\n        self.robot = robot\n\n        # we set the max and min of action/observation space as inf\n        # clip the action and observation as you need\n\n        d={}\n        # robot node: px, py, r, gx, gy, v_pref, theta\n        d['robot_node'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1,7,), dtype = np.float32)\n        # only consider all temporal edges (human_num+1) and spatial edges pointing to robot (human_num)\n        d['temporal_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 2,), dtype=np.float32)\n        d['spatial_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(self.max_human_num, 2), dtype=np.float32)\n        # number of humans detected at each timestep\n        d['detected_human_num'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, ), dtype=np.float32)\n        # whether each human is visible to robot (ordered by human ID, should not be sorted)\n        d['visible_masks'] = gym.spaces.Box(low=-np.inf, high=np.inf,\n                                            shape=(self.max_human_num,),\n                                            dtype=np.bool)\n        self.observation_space=gym.spaces.Dict(d)\n\n        high = np.inf * np.ones([2, ])\n        self.action_space = gym.spaces.Box(-high, high, dtype=np.float32)\n\n\n    # set robot initial state and generate all humans for reset function\n    # for crowd nav: human_num == self.human_num\n    # for leader follower: human_num = self.human_num - 1\n    def generate_robot_humans(self, phase, human_num=None):\n        if self.record:\n            px, py = 0, 0\n            gx, gy = 0, -1.5\n            self.robot.set(px, py, gx, gy, 0, 0, np.pi / 2)\n            # generate a dummy human\n            for i in range(self.max_human_num):\n                human = Human(self.config, 'humans')\n                human.set(15, 15, 15, 15, 0, 0, 0)\n                human.isObstacle = True\n                self.humans.append(human)\n\n        else:\n            # for sim2real\n            if self.robot.kinematics == 'unicycle':\n                # generate robot\n                angle = np.random.uniform(0, np.pi * 2)\n                px = self.arena_size * np.cos(angle)\n                py = self.arena_size * np.sin(angle)\n                while True:\n           
         gx, gy = np.random.uniform(-self.arena_size, self.arena_size, 2)\n                    if np.linalg.norm([px - gx, py - gy]) >= 4:  # 1 was 6\n                        break\n                self.robot.set(px, py, gx, gy, 0, 0, np.random.uniform(0, 2 * np.pi))  # randomize init orientation\n                # 1 to 4 humans\n                self.human_num = np.random.randint(1, self.config.sim.human_num + self.human_num_range + 1)\n                # print('human_num:', self.human_num)\n                # self.human_num = 4\n\n\n            # for sim exp\n            else:\n                # generate robot\n                while True:\n                    px, py, gx, gy = np.random.uniform(-self.arena_size, self.arena_size, 4)\n                    if np.linalg.norm([px - gx, py - gy]) >= 8: # 6\n                        break\n                self.robot.set(px, py, gx, gy, 0, 0, np.pi / 2)\n                # generate humans\n                self.human_num = np.random.randint(low=self.config.sim.human_num - self.human_num_range,\n                                                   high=self.config.sim.human_num + self.human_num_range + 1)\n\n\n            self.generate_random_human_position(human_num=self.human_num)\n            self.last_human_states = np.zeros((self.human_num, 5))\n            # set human ids\n            for i in range(self.human_num):\n                self.humans[i].id = i\n\n\n\n    # generate a human that starts on a circle, and its goal is on the opposite side of the circle\n    def generate_circle_crossing_human(self):\n        human = Human(self.config, 'humans')\n        if self.randomize_attributes:\n            human.sample_random_attributes()\n\n        while True:\n            angle = np.random.random() * np.pi * 2\n            # add some noise to simulate all the possible cases robot could meet with human\n            noise_range = 2\n            px_noise = np.random.uniform(0, 1) * noise_range\n            py_noise = np.random.uniform(0, 1) * noise_range\n            px = self.circle_radius * np.cos(angle) + px_noise\n            py = self.circle_radius * np.sin(angle) + py_noise\n            collide = False\n\n            for i, agent in enumerate([self.robot] + self.humans):\n                # keep human at least 3 meters away from robot\n                if self.robot.kinematics == 'unicycle' and i == 0:\n                    min_dist = self.circle_radius / 2  # Todo: if circle_radius <= 4, it will get stuck here\n                else:\n                    min_dist = human.radius + agent.radius + self.discomfort_dist\n                if norm((px - agent.px, py - agent.py)) < min_dist or \\\n                        norm((px - agent.gx, py - agent.gy)) < min_dist:\n                    collide = True\n                    break\n            if not collide:\n                break\n\n        human.set(px, py, -px, -py, 0, 0, 0)\n\n        return human\n\n    # calculate the ground truth future trajectory of humans\n    # if robot is visible: assume linear motion for robot\n    # ret val: [self.predict_steps + 1, self.human_num, 4]\n    # method: 'truth' or 'const_vel' or 'inferred'\n    def calc_human_future_traj(self, method):\n        # if the robot is invisible, it won't affect human motions\n        # else it will\n        human_num = self.human_num + 1 if self.robot.visible else self.human_num\n        # buffer to store predicted future traj of all humans [px, py, vx, vy]\n        # [time, human id, features]\n        if method == 'truth':\n            
self.human_future_traj = np.zeros((self.buffer_len + 1, human_num, 4))\n        elif method == 'const_vel':\n            self.human_future_traj = np.zeros((self.predict_steps + 1, human_num, 4))\n        else:\n            raise NotImplementedError\n\n        # initialize the 0-th position with current states\n        for i in range(self.human_num):\n            # use true states for now, to count for invisible humans' influence on visible humans\n            # take px, py, vx, vy, remove radius\n            self.human_future_traj[0, i] = np.array(self.humans[i].get_observable_state_list()[:-1])\n\n        # if we are using constant velocity model, we need to use displacement to approximate velocity (pos_t - pos_t-1)\n        # we shouldn't use true velocity for fair comparison with GST inferred pred\n        if method == 'const_vel':\n            self.human_future_traj[0, :, 2:4] = self.prev_human_pos[:, 2:4]\n\n        # add robot to the end of the array\n        if self.robot.visible:\n            self.human_future_traj[0, -1] = np.array(self.robot.get_observable_state_list()[:-1])\n\n        if method == 'truth':\n            for i in range(1, self.buffer_len + 1):\n                for j in range(self.human_num):\n                    # prepare joint state for all humans\n                    full_state = np.concatenate(\n                        (self.human_future_traj[i - 1, j], self.humans[j].get_full_state_list()[4:]))\n                    observable_states = []\n                    for k in range(self.human_num):\n                        if j == k:\n                            continue\n                        observable_states.append(\n                            np.concatenate((self.human_future_traj[i - 1, k], [self.humans[k].radius])))\n\n                    # use joint states to get actions from the states in the last step (i-1)\n                    action = self.humans[j].act_joint_state(JointState(full_state, observable_states))\n\n                    # step all humans with action\n                    self.human_future_traj[i, j] = self.humans[j].one_step_lookahead(\n                        self.human_future_traj[i - 1, j, :2], action)\n\n                if self.robot.visible:\n                    action = ActionXY(*self.human_future_traj[i - 1, -1, 2:])\n                    # update px, py, vx, vy\n                    self.human_future_traj[i, -1] = self.robot.one_step_lookahead(self.human_future_traj[i - 1, -1, :2],\n                                                                                  action)\n            # only take predictions every self.pred_interval steps\n            self.human_future_traj = self.human_future_traj[::self.pred_interval]\n        # for const vel model\n        elif method == 'const_vel':\n            # [self.pred_steps+1, human_num, 4]\n            self.human_future_traj = np.tile(self.human_future_traj[0].reshape(1, human_num, 4), (self.predict_steps+1, 1, 1))\n            # [self.pred_steps+1, human_num, 2]\n            pred_timestep = np.tile(np.arange(0, self.predict_steps+1, dtype=float).reshape((self.predict_steps+1, 1, 1)) * self.time_step * self.pred_interval,\n                                    [1, human_num, 2])\n            pred_disp = pred_timestep * self.human_future_traj[:, :, 2:]\n            self.human_future_traj[:, :, :2] = self.human_future_traj[:, :, :2] + pred_disp\n        else:\n            raise NotImplementedError\n\n        # remove the robot if it is visible\n        if self.robot.visible:\n            
self.human_future_traj = self.human_future_traj[:, :-1]\n\n\n        # remove invisible humans\n        self.human_future_traj[:, np.logical_not(self.human_visibility), :2] = 15\n        self.human_future_traj[:, np.logical_not(self.human_visibility), 2:] = 0\n\n        return self.human_future_traj\n\n\n    # reset = True: reset calls this function; reset = False: step calls this function\n    # sorted: sort all humans by distance to robot or not\n    def generate_ob(self, reset, sort=False):\n        \"\"\"Generate observation for reset and step functions\"\"\"\n        ob = {}\n\n        # nodes\n        visible_humans, num_visibles, self.human_visibility = self.get_num_human_in_fov()\n\n        ob['robot_node'] = self.robot.get_full_state_list_noV()\n\n        prev_human_pos = copy.deepcopy(self.last_human_states)\n        self.update_last_human_states(self.human_visibility, reset=reset)\n\n        # edges\n        ob['temporal_edges'] = np.array([self.robot.vx, self.robot.vy])\n\n        # ([relative px, relative py, disp_x, disp_y], human id)\n        all_spatial_edges = np.ones((self.max_human_num, 2)) * np.inf\n\n        for i in range(self.human_num):\n            if self.human_visibility[i]:\n                # vector pointing from human i to robot\n                relative_pos = np.array(\n                    [self.last_human_states[i, 0] - self.robot.px, self.last_human_states[i, 1] - self.robot.py])\n                all_spatial_edges[self.humans[i].id, :2] = relative_pos\n\n        ob['visible_masks'] = np.zeros(self.max_human_num, dtype=np.bool)\n        # sort all humans by distance (invisible humans will be in the end automatically)\n        if sort:\n            ob['spatial_edges'] = np.array(sorted(all_spatial_edges, key=lambda x: np.linalg.norm(x)))\n            # after sorting, the visible humans must be in the front\n            if num_visibles > 0:\n                ob['visible_masks'][:num_visibles] = True\n        else:\n            ob['spatial_edges'] = all_spatial_edges\n            ob['visible_masks'][:self.human_num] = self.human_visibility\n        ob['spatial_edges'][np.isinf(ob['spatial_edges'])] = 15\n        ob['detected_human_num'] = num_visibles\n        # if no human is detected, assume there is one dummy human at (15, 15) to make the pack_padded_sequence work\n        if ob['detected_human_num'] == 0:\n            ob['detected_human_num'] = 1\n\n        # update self.observed_human_ids\n        self.observed_human_ids = np.where(self.human_visibility)[0]\n\n        self.ob = ob\n\n        return ob\n\n\n\n    # Update the specified human's end goals in the environment randomly\n    def update_human_pos_goal(self, human):\n        while True:\n            angle = np.random.random() * np.pi * 2\n            # add some noise to simulate all the possible cases robot could meet with human\n            v_pref = 1.0 if human.v_pref == 0 else human.v_pref\n            gx_noise = (np.random.random() - 0.5) * v_pref\n            gy_noise = (np.random.random() - 0.5) * v_pref\n            gx = self.circle_radius * np.cos(angle) + gx_noise\n            gy = self.circle_radius * np.sin(angle) + gy_noise\n            collide = False\n\n            if not collide:\n                break\n\n        # Give human new goal\n        human.gx = gx\n        human.gy = gy\n\n\n    def reset(self, phase='train', test_case=None):\n        \"\"\"\n        Reset the environment\n        :return:\n        \"\"\"\n\n        if self.phase is not None:\n            phase = self.phase\n 
       if self.test_case is not None:\n            test_case=self.test_case\n\n        if self.robot is None:\n            raise AttributeError('robot has to be set!')\n        assert phase in ['train', 'val', 'test']\n        if test_case is not None:\n            self.case_counter[phase] = test_case # test case is passed in to calculate specific seed to generate case\n        self.global_time = 0\n        self.step_counter = 0\n        self.id_counter = 0\n\n\n        self.humans = []\n        # self.human_num = self.config.sim.human_num\n        # initialize a list to store observed humans' IDs\n        self.observed_human_ids = []\n\n        # train, val, and test phase should start with different seed.\n        # case capacity: the maximum number for train(max possible int -2000), val(1000), and test(1000)\n        # val start from seed=0, test start from seed=case_capacity['val']=1000\n        # train start from self.case_capacity['val'] + self.case_capacity['test']=2000\n        counter_offset = {'train': self.case_capacity['val'] + self.case_capacity['test'],\n                          'val': 0, 'test': self.case_capacity['val']}\n\n        # here we use a counter to calculate seed. The seed=counter_offset + case_counter\n        self.rand_seed = counter_offset[phase] + self.case_counter[phase] + self.thisSeed\n        np.random.seed(self.rand_seed)\n\n        self.generate_robot_humans(phase)\n\n        # record px, py, r of each human, used for crowd_sim_pc env\n        self.cur_human_states = np.zeros((self.max_human_num, 3))\n        for i in range(self.human_num):\n            self.cur_human_states[i] = np.array([self.humans[i].px, self.humans[i].py, self.humans[i].radius])\n\n        # case size is used to make sure that the case_counter is always between 0 and case_size[phase]\n        self.case_counter[phase] = (self.case_counter[phase] + int(1*self.nenv)) % self.case_size[phase]\n\n        # initialize potential and angular potential\n        rob_goal_vec = np.array([self.robot.gx, self.robot.gy]) - np.array([self.robot.px, self.robot.py])\n        self.potential = -abs(np.linalg.norm(rob_goal_vec))\n        self.angle = np.arctan2(rob_goal_vec[1], rob_goal_vec[0]) - self.robot.theta\n        if self.angle > np.pi:\n            # self.abs_angle = np.pi * 2 - self.abs_angle\n            self.angle = self.angle - 2 * np.pi\n        elif self.angle < -np.pi:\n            self.angle = self.angle + 2 * np.pi\n\n        # get robot observation\n        ob = self.generate_ob(reset=True, sort=self.config.args.sort_humans)\n\n        return ob\n\n\n    def step(self, action, update=True):\n        \"\"\"\n        Step the environment forward for one timestep\n        Compute actions for all agents, detect collision, update environment and return (ob, reward, done, info)\n        \"\"\"\n        if self.robot.policy.name in ['ORCA', 'social_force']:\n            # assemble observation for orca: px, py, vx, vy, r\n            human_states = copy.deepcopy(self.last_human_states)\n            # get orca action\n            action = self.robot.act(human_states.tolist())\n        else:\n            action = self.robot.policy.clip_action(action, self.robot.v_pref)\n\n        if self.robot.kinematics == 'unicycle':\n            self.desiredVelocity[0] = np.clip(self.desiredVelocity[0] + action.v, -self.robot.v_pref, self.robot.v_pref)\n            action = ActionRot(self.desiredVelocity[0], action.r)\n\n        human_actions = self.get_human_actions()\n\n        # need to update 
self.human_future_traj in testing to calculate number of intrusions\n        if self.phase == 'test':\n            # use ground truth future positions of humans\n            self.calc_human_future_traj(method='truth')\n\n        # compute reward and episode info\n        reward, done, episode_info = self.calc_reward(action, danger_zone='future')\n\n\n        # apply action and update all agents\n        self.robot.step(action)\n        for i, human_action in enumerate(human_actions):\n            self.humans[i].step(human_action)\n\n        self.global_time += self.time_step # max episode length=time_limit/time_step\n        self.step_counter =self.step_counter+1\n\n        info={'info':episode_info}\n\n        # Add or remove at most self.human_num_range humans\n        # if self.human_num_range == 0 -> human_num is fixed at all times\n        if self.human_num_range > 0 and self.global_time % 5 == 0:\n            # remove humans\n            if np.random.rand() < 0.5:\n                # print('before:', self.human_num,', self.min_human_num:', self.min_human_num)\n                # if no human is visible, anyone can be removed\n                if len(self.observed_human_ids) == 0:\n                    max_remove_num = self.human_num - self.min_human_num\n                    # print('max_remove_num, invisible', max_remove_num)\n                else:\n                    max_remove_num = min(self.human_num - self.min_human_num, (self.human_num - 1) - max(self.observed_human_ids))\n                    # print('max_remove_num, visible', max_remove_num)\n                remove_num = np.random.randint(low=0, high=max_remove_num + 1)\n                for _ in range(remove_num):\n                    self.humans.pop()\n                self.human_num = self.human_num - remove_num\n                # print('after:', self.human_num)\n                self.last_human_states = self.last_human_states[:self.human_num]\n            # add humans\n            else:\n                add_num = np.random.randint(low=0, high=self.human_num_range + 1)\n                if add_num > 0:\n                    # set human ids\n                    true_add_num = 0\n                    for i in range(self.human_num, self.human_num + add_num):\n                        if i == self.config.sim.human_num + self.human_num_range:\n                            break\n                        self.generate_random_human_position(human_num=1)\n                        self.humans[i].id = i\n                        true_add_num = true_add_num + 1\n                    self.human_num = self.human_num + true_add_num\n                    if true_add_num > 0:\n                        self.last_human_states = np.concatenate((self.last_human_states, np.array([[15, 15, 0, 0, 0.3]]*true_add_num)), axis=0)\n\n        assert self.min_human_num <= self.human_num <= self.max_human_num\n\n        # compute the observation\n        ob = self.generate_ob(reset=False, sort=self.config.args.sort_humans)\n\n\n        # Update all humans' goals randomly midway through episode\n        if self.random_goal_changing:\n            if self.global_time % 5 == 0:\n                self.update_human_goals_randomly()\n\n        # Update a specific human's goal once its reached its original goal\n        if self.end_goal_changing:\n            for i, human in enumerate(self.humans):\n                if norm((human.gx - human.px, human.gy - human.py)) < human.radius:\n                    if self.robot.kinematics == 'holonomic':\n                        self.humans[i] 
= self.generate_circle_crossing_human()\n                        self.humans[i].id = i\n                    else:\n                        self.update_human_goal(human)\n\n        return ob, reward, done, info\n\n    # find R(s, a)\n    # danger_zone: how to define the personal_zone (if the robot intrudes into this zone, the info will be Danger)\n    # circle (traditional) or future (based on true future traj of humans)\n    def calc_reward(self, action, danger_zone='circle'):\n        # collision detection\n        dmin = float('inf')\n\n        danger_dists = []\n        collision = False\n\n        # collision check with humans\n        for i, human in enumerate(self.humans):\n            dx = human.px - self.robot.px\n            dy = human.py - self.robot.py\n            closest_dist = (dx ** 2 + dy ** 2) ** (1 / 2) - human.radius - self.robot.radius\n\n            if closest_dist < self.discomfort_dist:\n                danger_dists.append(closest_dist)\n            if closest_dist < 0:\n                collision = True\n                break\n            elif closest_dist < dmin:\n                dmin = closest_dist\n\n\n        # check if reaching the goal\n        if self.robot.kinematics == 'unicycle':\n            goal_radius = 0.6\n        else:\n            goal_radius = self.robot.radius\n        reaching_goal = norm(\n            np.array(self.robot.get_position()) - np.array(self.robot.get_goal_position())) < goal_radius\n\n        # use danger_zone to determine the condition for Danger\n        if danger_zone == 'circle' or self.phase == 'train':\n            danger_cond = dmin < self.discomfort_dist\n            min_danger_dist = 0\n        else:\n            # if the robot collides with future states, give it a collision penalty\n            relative_pos = self.human_future_traj[1:, :, :2] - np.array([self.robot.px, self.robot.py])\n            relative_dist = np.linalg.norm(relative_pos, axis=-1)\n\n            collision_idx = relative_dist < self.robot.radius + self.config.humans.radius  # [predict_steps, human_num]\n\n            danger_cond = np.any(collision_idx)\n            # if robot is dangerously close to any human, calculate the min distance between robot and its closest human\n            if danger_cond:\n                min_danger_dist = np.amin(relative_dist[collision_idx])\n            else:\n                min_danger_dist = 0\n\n        if self.global_time >= self.time_limit - 1:\n            reward = 0\n            done = True\n            episode_info = Timeout()\n        elif collision:\n            reward = self.collision_penalty\n            done = True\n            episode_info = Collision()\n        elif reaching_goal:\n            reward = self.success_reward\n            done = True\n            episode_info = ReachGoal()\n\n        elif danger_cond:\n            # only penalize agent for getting too close if it's visible\n            # adjust the reward based on FPS\n            # print(dmin)\n            reward = (dmin - self.discomfort_dist) * self.discomfort_penalty_factor * self.time_step\n            done = False\n            episode_info = Danger(min_danger_dist)\n\n        else:\n            # potential reward\n            if self.robot.kinematics == 'holonomic':\n                pot_factor = 2\n            else:\n                pot_factor = 3\n            potential_cur = np.linalg.norm(\n                np.array([self.robot.px, self.robot.py]) - np.array(self.robot.get_goal_position()))\n            reward = pot_factor * 
(-abs(potential_cur) - self.potential)\n            self.potential = -abs(potential_cur)\n\n            done = False\n            episode_info = Nothing()\n\n        # if the robot is near collision/arrival, it should be able to turn a large angle\n        if self.robot.kinematics == 'unicycle':\n            # add a rotational penalty\n            r_spin = -4.5 * action.r ** 2\n\n            # add a penalty for going backwards\n            if action.v < 0:\n                r_back = -2 * abs(action.v)\n            else:\n                r_back = 0.\n\n            reward = reward + r_spin + r_back\n\n        return reward, done, episode_info\n\n\n    def render(self, mode='human'):\n        \"\"\" Render the current status of the environment using matplotlib \"\"\"\n        import matplotlib.pyplot as plt\n        import matplotlib.lines as mlines\n        from matplotlib import patches\n\n        plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'\n\n        robot_color = 'gold'\n        goal_color = 'red'\n        arrow_color = 'red'\n        arrow_style = patches.ArrowStyle(\"->\", head_length=4, head_width=2)\n\n        def calcFOVLineEndPoint(ang, point, extendFactor):\n            # choose the extendFactor big enough\n            # so that the endPoints of the FOVLine is out of xlim and ylim of the figure\n            FOVLineRot = np.array([[np.cos(ang), -np.sin(ang), 0],\n                                   [np.sin(ang), np.cos(ang), 0],\n                                   [0, 0, 1]])\n            point.extend([1])\n            # apply rotation matrix\n            newPoint = np.matmul(FOVLineRot, np.reshape(point, [3, 1]))\n            # increase the distance between the line start point and the end point\n            newPoint = [extendFactor * newPoint[0, 0], extendFactor * newPoint[1, 0], 1]\n            return newPoint\n\n\n\n        ax=self.render_axis\n        artists=[]\n\n        # add goal\n        goal=mlines.Line2D([self.robot.gx], [self.robot.gy], color=goal_color, marker='*', linestyle='None', markersize=15, label='Goal')\n        ax.add_artist(goal)\n        artists.append(goal)\n\n        # add robot\n        robotX,robotY=self.robot.get_position()\n\n        robot=plt.Circle((robotX,robotY), self.robot.radius, fill=True, color=robot_color)\n        ax.add_artist(robot)\n        artists.append(robot)\n\n        # plt.legend([robot, goal], ['Robot', 'Goal'], fontsize=16)\n\n\n        # compute orientation in each step and add arrow to show the direction\n        radius = self.robot.radius\n        arrowStartEnd=[]\n\n        robot_theta = self.robot.theta if self.robot.kinematics == 'unicycle' else np.arctan2(self.robot.vy, self.robot.vx)\n\n        arrowStartEnd.append(((robotX, robotY), (robotX + radius * np.cos(robot_theta), robotY + radius * np.sin(robot_theta))))\n\n        for i, human in enumerate(self.humans):\n            theta = np.arctan2(human.vy, human.vx)\n            arrowStartEnd.append(((human.px, human.py), (human.px + radius * np.cos(theta), human.py + radius * np.sin(theta))))\n\n        arrows = [patches.FancyArrowPatch(*arrow, color=arrow_color, arrowstyle=arrow_style)\n                  for arrow in arrowStartEnd]\n        for arrow in arrows:\n            ax.add_artist(arrow)\n            artists.append(arrow)\n\n\n        # draw FOV for the robot\n        # add robot FOV\n        if self.robot.FOV < 2 * np.pi:\n            FOVAng = self.robot_fov / 2\n            FOVLine1 = mlines.Line2D([0, 0], [0, 0], linestyle='--')\n            FOVLine2 = 
mlines.Line2D([0, 0], [0, 0], linestyle='--')\n\n\n            startPointX = robotX\n            startPointY = robotY\n            endPointX = robotX + radius * np.cos(robot_theta)\n            endPointY = robotY + radius * np.sin(robot_theta)\n\n            # transform the vector back to world frame origin, apply rotation matrix, and get end point of FOVLine\n            # the start point of the FOVLine is the center of the robot\n            FOVEndPoint1 = calcFOVLineEndPoint(FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)\n            FOVLine1.set_xdata(np.array([startPointX, startPointX + FOVEndPoint1[0]]))\n            FOVLine1.set_ydata(np.array([startPointY, startPointY + FOVEndPoint1[1]]))\n            FOVEndPoint2 = calcFOVLineEndPoint(-FOVAng, [endPointX - startPointX, endPointY - startPointY], 20. / self.robot.radius)\n            FOVLine2.set_xdata(np.array([startPointX, startPointX + FOVEndPoint2[0]]))\n            FOVLine2.set_ydata(np.array([startPointY, startPointY + FOVEndPoint2[1]]))\n\n            ax.add_artist(FOVLine1)\n            ax.add_artist(FOVLine2)\n            artists.append(FOVLine1)\n            artists.append(FOVLine2)\n\n        # add an arc of robot's sensor range\n        sensor_range = plt.Circle(self.robot.get_position(), self.robot.sensor_range + self.robot.radius+self.config.humans.radius, fill=False, linestyle='--')\n        ax.add_artist(sensor_range)\n        artists.append(sensor_range)\n\n        # add humans and change the color of them based on visibility\n        human_circles = [plt.Circle(human.get_position(), human.radius, fill=False, linewidth=1.5) for human in self.humans]\n\n        # hardcoded for now\n        actual_arena_size = self.arena_size + 0.5\n        for i in range(len(self.humans)):\n            ax.add_artist(human_circles[i])\n            artists.append(human_circles[i])\n\n            # green: visible; red: invisible\n            # if self.detect_visible(self.robot, self.humans[i], robot1=True):\n            if self.human_visibility[i]:\n                human_circles[i].set_color(c='g')\n            else:\n                human_circles[i].set_color(c='r')\n            if self.humans[i].id in self.observed_human_ids:\n                human_circles[i].set_color(c='b')\n\n            plt.text(self.humans[i].px - 0.1, self.humans[i].py - 0.1, str(self.humans[i].id), color='black', fontsize=12)\n\n\n\n\n\n        plt.pause(0.01)\n        for item in artists:\n            item.remove() # there should be a better way to do this. For example,\n            # initially use add_artist and draw_artist later on\n        for t in ax.texts:\n            t.set_visible(False)\n\n"
  },
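  {
    "path": "docs/editor_examples/const_vel_prediction_sketch.py",
    "content": "# Editorial sketch (hypothetical file, not part of the original repository).\n# It illustrates, in isolation, the constant-velocity rollout that\n# CrowdSimVarNum.calc_human_future_traj(method='const_vel') uses to fill\n# self.human_future_traj: the current [px, py, vx, vy] of every human is tiled\n# over (predict_steps + 1) frames and the positions are advanced by\n# displacement = elapsed_time * velocity. All numeric values below are\n# illustrative assumptions, not values taken from the config.\nimport numpy as np\n\npredict_steps = 5    # number of future frames to predict\npred_interval = 4    # env steps between consecutive predicted frames\ntime_step = 0.1      # simulation timestep in seconds\nhuman_num = 3\n\n# current state of each human: [px, py, vx, vy]\ncur = np.array([[0.0, 0.0, 1.0, 0.0],\n                [2.0, 1.0, 0.0, -0.5],\n                [-1.0, 3.0, 0.5, 0.5]])\n\n# tile the current state over all future frames ...\ntraj = np.tile(cur.reshape(1, human_num, 4), (predict_steps + 1, 1, 1))\n# ... then add elapsed_time * velocity to the positions of every frame\nelapsed = np.arange(predict_steps + 1, dtype=float).reshape(-1, 1, 1) * time_step * pred_interval\ntraj[:, :, :2] += elapsed * traj[:, :, 2:]\n\nprint(traj[:, 0, :2])  # straight-line rollout of the first human\n"
  },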
  {
    "path": "crowd_sim/envs/crowd_sim_var_num_collect.py",
    "content": "import gym\nimport numpy as np\nfrom numpy.linalg import norm\nimport copy\n\nfrom crowd_sim.envs import *\nfrom crowd_sim.envs.utils.info import *\n\n\n\nclass CrowdSimVarNumCollect(CrowdSimVarNum):\n    \"\"\"\n    An environment for collecting a dataset of simulated humans to train GST predictor (used in collect_data.py)\n    The observation contains all detected humans\n    A key in ob indicates how many humans are detected\n    \"\"\"\n    def __init__(self):\n        \"\"\"\n        Movement simulation for n+1 agents\n        Agent can either be human or robot.\n        humans are controlled by a unknown and fixed policy.\n        robot is controlled by a known and learnable policy.\n        \"\"\"\n        super().__init__()\n\n\n    def set_robot(self, robot):\n        self.robot = robot\n\n        # set observation space and action space\n        # we set the max and min of action/observation space as inf\n        # clip the action and observation as you need\n\n        d={}\n        # frame id, human prediction id, px, py\n        d['pred_info'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(self.config.sim.human_num + self.config.sim.human_num_range, 4), dtype=np.float32)\n        self.observation_space=gym.spaces.Dict(d)\n\n        high = np.inf * np.ones([2, ])\n        self.action_space = gym.spaces.Box(-high, high, dtype=np.float32)\n\n\n    def reset(self, phase='train', test_case=None):\n        \"\"\"\n        Set px, py, gx, gy, vx, vy, theta for robot and humans\n        :return:\n        \"\"\"\n\n        if self.phase is not None:\n            phase = self.phase\n        if self.test_case is not None:\n            test_case=self.test_case\n\n        if self.robot is None:\n            raise AttributeError('robot has to be set!')\n        assert phase in ['train', 'val', 'test']\n        if test_case is not None:\n            self.case_counter[phase] = test_case # test case is passed in to calculate specific seed to generate case\n        self.global_time = 0\n        self.id_counter = 0\n\n\n        self.humans = []\n        # self.human_num = self.config.sim.human_num\n        # initialize a list to store observed humans' IDs\n        self.observed_human_ids = []\n\n        # train, val, and test phase should start with different seed.\n        # case capacity: the maximum number for train(max possible int -2000), val(1000), and test(1000)\n        # val start from seed=0, test start from seed=case_capacity['val']=1000\n        # train start from self.case_capacity['val'] + self.case_capacity['test']=2000\n        counter_offset = {'train': self.case_capacity['val'] + self.case_capacity['test'],\n                          'val': 0, 'test': self.case_capacity['val']}\n\n        # here we use a counter to calculate seed. 
The seed=counter_offset + case_counter\n        np.random.seed(counter_offset[phase] + self.case_counter[phase] + self.thisSeed)\n        self.rand_seed = counter_offset[phase] + self.case_counter[phase] + self.thisSeed\n        # print(counter_offset[phase] + self.case_counter[phase] + self.thisSeed)\n        # np.random.seed(1038)\n\n        self.generate_robot_humans(phase)\n        self.last_human_observability = np.zeros(self.human_num, dtype=bool)\n        self.human_pred_id = np.arange(0, self.human_num)\n        self.max_human_id = self.human_num\n\n        # case size is used to make sure that the case_counter is always between 0 and case_size[phase]\n        self.case_counter[phase] = (self.case_counter[phase] + int(1*self.nenv)) % self.case_size[phase]\n\n        # get robot observation\n        ob = self.generate_ob(reset=True)\n\n        # initialize potential\n        self.potential = -abs(np.linalg.norm(np.array([self.robot.px, self.robot.py]) - np.array([self.robot.gx, self.robot.gy])))\n\n        return ob\n\n    # reset = True: reset calls this function; reset = False: step calls this function\n    def generate_ob(self, reset, sort=False):\n        # we should keep human ID tracking for traj pred!\n        #assert sort == False\n        ob = {}\n\n        # nodes\n        visible_humans, num_visibles, human_visibility = self.get_num_human_in_fov()\n\n        humans_out = np.logical_and(self.last_human_observability, np.logical_not(human_visibility))\n        num_humans_out = np.sum(humans_out)\n\n        self.human_pred_id[humans_out] = np.arange(self.max_human_id, self.max_human_id+num_humans_out)\n        self.max_human_id = self.max_human_id + num_humans_out\n\n        self.update_last_human_states(human_visibility, reset=reset)\n\n        # ([relative px, relative py, disp_x, disp_y], human id)\n        all_spatial_edges = np.ones((self.config.sim.human_num + self.config.sim.human_num_range, 2)) * np.inf\n\n        for i in range(self.human_num):\n            if human_visibility[i]:\n                all_spatial_edges[self.humans[i].id, :2] = self.last_human_states[i, :2]\n\n        frame_array = np.repeat(self.global_time/self.config.data.pred_timestep, self.human_num)\n        ob['pred_info'] = np.concatenate((frame_array.reshape((self.human_num, 1)),\n                          self.human_pred_id.reshape((self.human_num, 1)), all_spatial_edges), axis=1)\n\n        # update self.observed_human_ids\n        self.observed_human_ids = np.where(human_visibility)[0]\n        self.ob = ob\n\n        self.last_human_observability = copy.deepcopy(human_visibility)\n\n        return ob\n\n    # find R(s, a)\n    # danger_zone: how to define the personal_zone (if the robot intrudes into this zone, the info will be Danger)\n    # circle (traditional) or future (based on true future traj of humans)\n    def calc_reward(self, action, danger_zone='circle'):\n        # collision detection\n        dmin = float('inf')\n\n        danger_dists = []\n        collision = False\n\n        for i, human in enumerate(self.humans):\n            dx = human.px - self.robot.px\n            dy = human.py - self.robot.py\n            closest_dist = (dx ** 2 + dy ** 2) ** (1 / 2) - human.radius - self.robot.radius\n\n            if closest_dist < self.discomfort_dist:\n                danger_dists.append(closest_dist)\n            if closest_dist < 0:\n                collision = True\n                # logging.debug(\"Collision: distance between robot and p{} is {:.2E}\".format(i, closest_dist))\n 
               break\n            elif closest_dist < dmin:\n                dmin = closest_dist\n\n        # check if reaching the goal\n        reaching_goal = norm(\n            np.array(self.robot.get_position()) - np.array(self.robot.get_goal_position())) < self.robot.radius\n\n\n        if self.global_time >= 40000:\n            done = True\n            episode_info = Timeout()\n        elif collision:\n            done = False\n            episode_info = Collision()\n        elif reaching_goal:\n            done = False\n            episode_info = ReachGoal()\n            # randomize human attributes\n\n            # median of all humans\n            if np.random.uniform(0, 1) < 0.5:\n                human_pos = np.zeros((self.human_num, 2))\n                for i, human in enumerate(self.humans):\n                    human_pos[i] = np.array([human.px, human.py])\n                self.robot.gx, self.robot.gy = np.median(human_pos, axis=0)\n            # random goal\n            else:\n                self.robot.gx, self.robot.gy = np.random.uniform(-self.arena_size, self.arena_size, size=2)\n\n        else:\n            done = False\n            episode_info = Nothing()\n\n\n\n        return 0, done, episode_info"
  },
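  {
    "path": "docs/editor_examples/episode_seed_sketch.py",
    "content": "# Editorial sketch (hypothetical file, not part of the original repository).\n# It spells out the per-episode seeding rule described in the reset() comments of\n# the simulation environments: train, val, and test cases draw from disjoint seed\n# ranges via seed = counter_offset[phase] + case_counter[phase] + base_seed.\n# The capacities below follow those comments (val = 1000, test = 1000); the name\n# 'base_seed' stands in for self.thisSeed and is an assumption.\nimport numpy as np\n\ncase_capacity = {'val': 1000, 'test': 1000}\ncounter_offset = {'train': case_capacity['val'] + case_capacity['test'],  # 2000\n                  'val': 0,\n                  'test': case_capacity['val']}  # 1000\n\ndef episode_seed(phase, case_counter, base_seed=0):\n    # each phase starts from a different offset, so its episodes never share a seed\n    return counter_offset[phase] + case_counter + base_seed\n\nnp.random.seed(episode_seed('test', 7))\nprint(np.random.uniform(-6, 6, 4))  # e.g. a reproducible px, py, gx, gy draw\n"
  },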
  {
    "path": "crowd_sim/envs/ros_turtlebot2i_env.py",
    "content": "import gym\nimport numpy as np\nfrom numpy.linalg import norm\nimport pandas as pd\nimport os\n\n# prevent import error if other code is run in conda env\ntry:\n\t# import ROS related packages\n\timport rospy\n\timport tf2_ros\n\tfrom geometry_msgs.msg import Twist, TransformStamped, PoseArray\n\timport tf\n\tfrom sensor_msgs.msg import JointState\n\tfrom threading import Lock\n\tfrom message_filters import ApproximateTimeSynchronizer, TimeSynchronizer, Subscriber\n\nexcept:\n\tpass\n\nimport copy\nimport sys\n\nfrom crowd_sim.envs.crowd_sim_pred_real_gst import CrowdSimPredRealGST\n\n\nclass rosTurtlebot2iEnv(CrowdSimPredRealGST):\n\t'''\n\tEnvironment for testing a simulated policy on a real Turtlebot2i\n\tTo use it, change the env_name in arguments.py in the tested model folder to 'rosTurtlebot2iEnv-v0'\n\t'''\n\tmetadata = {'render.modes': ['human']}\n\n\tdef __init__(self):\n\t\tsuper(CrowdSimPredRealGST, self).__init__()\n\n\t\t# subscriber callback function will change these two variables\n\t\tself.robotMsg=None # robot state message\n\t\tself.humanMsg=None # human state message\n\t\tself.jointMsg=None # joint state message\n\n\t\tself.currentTime=0.0\n\t\tself.lastTime=0.0 # store time for calculating time interval\n\n\t\tself.human_visibility=None\n\t\tself.current_human_states = None  # (px,py)\n\t\tself.detectedHumanNum=0\n\n\t\t# goal positions will be set manually in self.reset()\n\t\tself.goal_x = 0.0\n\t\tself.goal_y = 0.0\n\n\t\tself.last_left = 0.\n\t\tself.last_right = 0.\n\t\tself.last_w = 0.0\n\t\tself.jointVel=None\n\n\t\t# to calculate vx, vy\n\t\tself.last_v = 0.0\n\t\tself.desiredVelocity=[0.0,0.0]\n\n\t\tself.mutex = Lock()\n\n\n\n\tdef configure(self, config):\n\t\tsuper().configure(config)\n\t\t# kalman filter for human tracking\n\n\t\tif config.sim.predict_method == 'none':\n\t\t\tself.add_pred = False\n\t\telse:\n\t\t\tself.add_pred = True\n\n\t\t# define ob space and action space\n\t\tself.set_ob_act_space()\n\t\t# ROS\n\t\trospy.init_node('ros_turtlebot2i_env_node', anonymous=True)\n\n\t\tself.actionPublisher = rospy.Publisher('/cmd_vel_mux/input/navi', Twist, queue_size=1)\n\t\tself.tfBuffer = tf2_ros.Buffer()\n\t\tself.transformListener = tf2_ros.TransformListener(self.tfBuffer)\n\t\t# ROS subscriber\n\t\tjointStateSub = Subscriber(\"/joint_states\", JointState)\n\t\thumanStatesSub = Subscriber('/dr_spaam_detections', PoseArray)  # human px, py, visible\n\t\tif self.use_dummy_detect:\n\t\t\tsubList = [jointStateSub]\n\t\telse:\n\t\t\tsubList = [jointStateSub, humanStatesSub]\n\n\t\t# synchronize the robot base joint states and humnan detections with at most 1 seconds of difference\n\t\tself.ats = ApproximateTimeSynchronizer(subList, queue_size=1, slop=1)\n\n\t\t# if ignore sensor inputs and use fake human detections\n\t\tif self.use_dummy_detect:\n\t\t\tself.ats.registerCallback(self.state_cb_dummy)\n\t\telse:\n\t\t\tself.ats.registerCallback(self.state_cb)\n\n\t\trospy.on_shutdown(self.shutdown)\n\n\n\tdef set_robot(self, robot):\n\t\tself.robot = robot\n\n\tdef set_ob_act_space(self):\n\t\t# set observation space and action space\n\t\t# we set the max and min of action/observation space as inf\n\t\t# clip the action and observation as you need\n\n\t\td = {}\n\t\t# robot node: num_visible_humans, px, py, r, gx, gy, v_pref, theta\n\t\td['robot_node'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 7,), dtype=np.float32)\n\t\t# only consider all temporal edges (human_num+1) and spatial edges pointing to robot 
(human_num)\n\t\td['temporal_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1, 2,), dtype=np.float32)\n\t\t# add prediction\n\t\tif self.add_pred:\n\t\t\tself.spatial_edge_dim = int(2 * (self.predict_steps + 1))\n\t\t\td['spatial_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf,\n\t\t\t\t\t\t\t\t\t\t\t\tshape=(self.config.sim.human_num + self.config.sim.human_num_range,\n\t\t\t\t\t\t\t\t\t\t\t\t\t   self.spatial_edge_dim),\n\t\t\t\t\t\t\t\t\t\t\t\tdtype=np.float32)\n\t\t# no prediction\n\t\telse:\n\t\t\td['spatial_edges'] = gym.spaces.Box(low=-np.inf, high=np.inf,\n\t\t\t\t\t\t\t\t\t\t\t\tshape=(self.config.sim.human_num + self.config.sim.human_num_range, 2),\n\t\t\t\t\t\t\t\t\t\t\t\tdtype=np.float32)\n\t\t# number of humans detected at each timestep\n\n\t\t# masks for gst pred model\n\t\t# whether each human is visible to robot\n\t\td['visible_masks'] = gym.spaces.Box(low=-np.inf, high=np.inf,\n\t\t\t\t\t\t\t\t\t\t\tshape=(self.config.sim.human_num + self.config.sim.human_num_range,),\n\t\t\t\t\t\t\t\t\t\t\tdtype=np.bool)\n\n\t\t# number of humans detected at each timestep\n\t\td['detected_human_num'] = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(1,), dtype=np.float32)\n\n\t\tself.observation_space = gym.spaces.Dict(d)\n\n\t\thigh = np.inf * np.ones([2, ])\n\t\tself.action_space = gym.spaces.Box(-high, high, dtype=np.float32)\n\n\n\t# (used if self.use_dummy_detect is False)\n\t# callback function to store the realtime messages from the robot to this env\n\tdef state_cb(self, jointStateMsg, humanArrayMsg):\n\t\tself.humanMsg=humanArrayMsg.poses\n\t\tself.jointMsg=jointStateMsg\n\n\t# (used if self.use_dummy_detect is True)\n\t# callback function to store the realtime messages from the robot to this env\n\t# no need to real human message\n\tdef state_cb_dummy(self, jointStateMsg):\n\t\tself.jointMsg = jointStateMsg\n\n\tdef readMsg(self):\n\t\t\"\"\"\n\t\tread messages passed through ROS & prepare for generating obervations\n\t\tthis function should be called right before generate_ob() is called\n\t\t\"\"\"\n\t\tself.mutex.acquire()\n\t\t# get time\n\t\t# print(self.jointMsg.header.stamp.secs, self.jointMsg.header.stamp.nsecs)\n\t\tself.currentTime = self.jointMsg.header.stamp.secs + self.jointMsg.header.stamp.nsecs / 1e9\n\n\t\t# get robot pose from T265 SLAM camera\n\t\ttry:\n\t\t\tself.robotMsg = self.tfBuffer.lookup_transform('t265_odom_frame', 't265_pose_frame', rospy.Time.now(), rospy.Duration(1.0))\n\t\texcept:\n\t\t\tprint(\"problem in getting transform\")\n\n\t\t# get robot base velocity from the base\n\t\ttry:\n\t\t\tself.jointVel=self.jointMsg.velocity\n\t\texcept:\n\t\t\tprint(\"problem in getting joint velocity\")\n\n\t\t# print(self.robotMsg, \"ROBOT mSG\")\n\t\t# store the robot pose and robot base velocity in self variables\n\t\tself.robot.px = -self.robotMsg.transform.translation.y\n\t\tself.robot.py = self.robotMsg.transform.translation.x\n\t\tprint('robot pos:', self.robot.px, self.robot.py)\n\n\t\tquaternion = (\n\t\t\tself.robotMsg.transform.rotation.x,\n\t\t\tself.robotMsg.transform.rotation.y,\n\t\t\tself.robotMsg.transform.rotation.z,\n\t\t\tself.robotMsg.transform.rotation.w\n\t\t)\n\n\t\tif self.use_dummy_detect:\n\t\t\tself.detectedHumanNum = 1\n\t\t\tself.human_visibility = np.zeros((self.max_human_num,), dtype=np.bool)\n\t\telse:\n\t\t\t# human states\n\n\t\t\tself.detectedHumanNum=min(len(self.humanMsg), self.max_human_num)\n\t\t\tself.current_human_states_raw = np.ones((self.detectedHumanNum, 2)) * 
15\n\t\t\tself.human_visibility=np.zeros((self.max_human_num, ), dtype=np.bool)\n\n\n\t\t\tfor i in range(self.detectedHumanNum):\n\t\t\t\t# use hard coded map to filter out obstacles\n\t\t\t\tglobal_x = self.robot.px + self.humanMsg[i].position.x\n\t\t\t\tglobal_y = self.robot.py + self.humanMsg[i].position.y\n\t\t\t\t# if -2.5 < global_x < 2.5 and -2.5 < global_y < 2.5:\n\t\t\t\tif True:\n\t\t\t\t\tself.current_human_states_raw[i,0]=self.humanMsg[i].position.x\n\t\t\t\t\tself.current_human_states_raw[i,1] = self.humanMsg[i].position.y\n\n\t\tself.mutex.release()\n\n\t\t# robot orientation\n\t\tself.robot.theta = tf.transformations.euler_from_quaternion(quaternion)[2] + np.pi / 2\n\n\t\tif self.robot.theta < 0:\n\t\t\tself.robot.theta = self.robot.theta + 2 * np.pi\n\n\t\t# add 180 degrees because of the transform from lidar frame to t265 camera frame\n\t\thMatrix = np.array([[np.cos(self.robot.theta+np.pi), -np.sin(self.robot.theta+np.pi), 0, 0],\n\t\t\t\t\t\t\t  [np.sin(self.robot.theta+np.pi), np.cos(self.robot.theta+np.pi), 0, 0],\n\t\t\t\t\t\t\t [0,0,1,0], [0,0,0,1]])\n\n\t\t# if we detected at least one person\n\t\tself.current_human_states = np.ones((self.max_human_num, 2)) * 15\n\n\t\tif not self.use_dummy_detect:\n\t\t\tfor j in range(self.detectedHumanNum):\n\t\t\t\txy=np.matmul(hMatrix,np.array([[self.current_human_states_raw[j,0],\n\t\t\t\t\t\t\t\t\t\t\t\tself.current_human_states_raw[j,1],\n\t\t\t\t\t\t\t\t\t\t\t\t0,\n\t\t\t\t\t\t\t\t\t\t\t\t1]]).T)\n\n\t\t\t\tself.current_human_states[j]=xy[:2,0]\n\n\t\telse:\n\t\t\tself.current_human_states[0] = np.array([0, 1]) - np.array([self.robot.px, self.robot.py])\n\n\n\n\t\tself.robot.vx = self.last_v * np.cos(self.robot.theta)\n\t\tself.robot.vy = self.last_v * np.sin(self.robot.theta)\n\n\n\n\n\tdef reset(self):\n\t\t\"\"\"\n\t\tReset function\n\t\t\"\"\"\n\n\t\t# stop the turtlebot\n\t\tself.smoothStop()\n\t\tself.step_counter=0\n\t\tself.currentTime=0.0\n\t\tself.lastTime=0.0\n\t\tself.global_time = 0.\n\n\t\tself.human_visibility = np.zeros((self.max_human_num,), dtype=np.bool)\n\t\tself.detectedHumanNum=0\n\t\tself.current_human_states = np.ones((self.max_human_num, 2)) * 15\n\t\tself.desiredVelocity = [0.0, 0.0]\n\t\tself.last_left  = 0.\n\t\tself.last_right = 0.\n\t\tself.last_w = 0.0\n\n\t\tself.last_v = 0.0\n\n\t\twhile True:\n\t\t\ta = input(\"Press y for the next episode \\t\")\n\t\t\tif a == \"y\":\n\t\t\t\tself.robot.gx=float(input(\"Input goal location in x-axis\\t\"))\n\t\t\t\tself.robot.gy=float(input(\"Input goal location in y-axis\\t\"))\n\t\t\t\tbreak\n\t\t\telse:\n\t\t\t\tsys.exit()\n\n\n\t\tif self.record:\n\t\t\tself.episodeRecoder.robot_goal.append([self.robot.gx, self.robot.gy])\n\n\n\t\tself.readMsg()\n\t\tob=self.generate_ob() # generate initial obs\n\n\n\t\treturn ob\n\n\t# input: v, w\n\t# output: v, w\n\tdef smooth(self, v, w):\n\t\tbeta = 0.1 #TODO: you use 0.2 in the simulator\n\t\tleft = (2. * v - 0.23 * w) / (2. * 0.035)\n\t\tright = (2. * v + 0.23 * w) / (2. 
* 0.035)\n\t\t# print('199:', left, right)\n\t\tleft = np.clip(left, -17.5, 17.5)\n\t\tright = np.clip(right, -17.5, 17.5)\n\t\t# print('202:', left, right)\n\t\tleft = (1.-beta) * self.last_left + beta * left\n\t\tright = (1.-beta) * self.last_right + beta * right\n\t\t# print('205:', left, right)\n\t\t\n\t\tself.last_left = copy.deepcopy(left)\n\t\tself.last_right = copy.deepcopy(right)\n\n\t\tv_smooth = 0.035 / 2 * (left + right)\n\t\tw_smooth = 0.035 / 0.23 * (right - left)\n\n\t\treturn v_smooth, w_smooth\n\n\tdef generate_ob(self):\n\t\tob = {}\n\n\t\tob['robot_node'] = np.array([[self.robot.px, self.robot.py, self.robot.radius,\n\t\t\t\t\t\t\t\tself.robot.gx, self.robot.gy, self.robot.v_pref, self.robot.theta]])\n\t\tob['temporal_edges']=np.array([[self.robot.vx, self.robot.vy]])\n\t\t# print(self.current_human_states.shape)\n\t\tspatial_edges=self.current_human_states\n\t\tif self.add_pred:\n\t\t\t# predicted steps will be filled in the vec_pretext_normalize wrapper\n\t\t\tspatial_edges=np.concatenate([spatial_edges, np.zeros((self.max_human_num, 2 * self.predict_steps))], axis=1)\n\n\t\telse:\n\t\t\t# sort humans by distance to robot\n\t\t\tspatial_edges = np.array(sorted(spatial_edges, key=lambda x: np.linalg.norm(x)))\n\t\t\tprint(spatial_edges)\n\t\tob['spatial_edges'] = spatial_edges\n\t\tob['visible_masks'] = self.human_visibility\n\n\t\tob['detected_human_num'] = self.detectedHumanNum\n\t\tif ob['detected_human_num'] == 0:\n\t\t\tob['detected_human_num'] = 1\n\t\tprint(ob['detected_human_num'])\n\t\t\t\n\t\treturn ob\n\t\t\n\n\tdef step(self, action, update=True):\n\t\t\"\"\" Step function \"\"\"\n\t\tprint(\"Step\", self.step_counter)\n\t\t# process action\n\t\trealAction = Twist()\n\n\t\tif self.load_act: # load action from file for robot dynamics checking\n\t\t\tv_unsmooth= self.episodeRecoder.v_list[self.step_counter]\n\t\t\t# in the simulator we use and recrod delta theta. 
We convert it to omega by dividing it by the time interval\n\t\t\tw_unsmooth = self.episodeRecoder.delta_theta_list[self.step_counter] / self.delta_t\n\t\t\t# v_smooth, w_smooth = self.desiredVelocity[0], self.desiredVelocity[1]\n\t\t\tv_smooth, w_smooth = self.smooth(v_unsmooth, w_unsmooth)\n\t\telse:\n\t\t\taction = self.robot.policy.clip_action(action, None)\n\n\t\t\tself.desiredVelocity[0] = np.clip(self.desiredVelocity[0] + action.v, -self.robot.v_pref, self.robot.v_pref)\n\t\t\tself.desiredVelocity[1] = action.r / self.fixed_time_interval # TODO: dynamic time step is not supported now\n\n\n\t\t\tv_smooth, w_smooth = self.smooth(self.desiredVelocity[0], self.desiredVelocity[1])\n\n\t\t\n\t\tself.last_v = v_smooth\n\n\t\trealAction.linear.x = v_smooth\n\t\trealAction.angular.z = w_smooth\n\n\t\tself.actionPublisher.publish(realAction)\n\n\n\t\trospy.sleep(self.ROSStepInterval)  # act as frame skip\n\n\t\t# get the latest states\n\n\t\tself.readMsg()\n\n\n\t\t# update time\n\t\tif self.step_counter==0: # if it is the first step of the episode\n\t\t\tself.delta_t = np.inf\n\t\telse:\n\t\t\t# time interval between two steps\n\t\t\tif self.use_fixed_time_interval:\n\t\t\t\tself.delta_t=self.fixed_time_interval\n\t\t\telse: self.delta_t = self.currentTime - self.lastTime\n\t\t\tprint('delta_t:', self.currentTime - self.lastTime)\n\t\t\t#print('actual delta t:', currentTime - self.baseEnv.lastTime)\n\t\t\tself.global_time = self.global_time + self.delta_t\n\t\tself.step_counter=self.step_counter+1\n\t\tself.lastTime = self.currentTime\n\n\n\t\t# generate new observation\n\t\tob=self.generate_ob()\n\n\n\t\t# calculate reward\n\t\treward = 0\n\n\t\t# determine if the episode ends\n\t\tdone=False\n\t\treaching_goal = norm(np.array([self.robot.gx, self.robot.gy]) - np.array([self.robot.px, self.robot.py]))  < 0.6\n\t\tif self.global_time >= self.time_limit:\n\t\t\tdone = True\n\t\t\tprint(\"Timeout\")\n\t\telif reaching_goal:\n\t\t\tdone = True\n\t\t\tprint(\"Goal Achieved\")\n\t\telif self.load_act and self.record:\n\t\t\tif self.step_counter >= len(self.episodeRecoder.v_list):\n\t\t\t\tdone = True\n\t\telse:\n\t\t\tdone = False\n\n\n\t\tinfo = {'info': None}\n\n\t\tif self.record:\n\t\t\tself.episodeRecoder.wheelVelList.append(self.jointVel) # it is the calculated wheel velocity, not the measured\n\t\t\tself.episodeRecoder.actionList.append([v_smooth, w_smooth])\n\t\t\tself.episodeRecoder.positionList.append([self.robot.px, self.robot.py])\n\t\t\tself.episodeRecoder.orientationList.append(self.robot.theta)\n\n\t\tif done:\n\t\t\tprint('DOne!')\n\t\t\tif self.record:\n\t\t\t\tself.episodeRecoder.saveEpisode(self.case_counter['test'])\n\n\n\t\treturn ob, reward, done, info\n\n\tdef render(self, mode='human'):\n\t\timport matplotlib.pyplot as plt\n\t\timport matplotlib.lines as mlines\n\t\tfrom matplotlib import patches\n\n\t\tplt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'\n\n\t\trobot_color = 'yellow'\n\t\tgoal_color = 'red'\n\t\tarrow_color = 'red'\n\t\tarrow_style = patches.ArrowStyle(\"->\", head_length=4, head_width=2)\n\n\t\tdef calcFOVLineEndPoint(ang, point, extendFactor):\n\t\t\t# choose the extendFactor big enough\n\t\t\t# so that the endPoints of the FOVLine is out of xlim and ylim of the figure\n\t\t\tFOVLineRot = np.array([[np.cos(ang), -np.sin(ang), 0],\n\t\t\t\t\t\t\t\t   [np.sin(ang), np.cos(ang), 0],\n\t\t\t\t\t\t\t\t   [0, 0, 1]])\n\t\t\tpoint.extend([1])\n\t\t\t# apply rotation matrix\n\t\t\tnewPoint = np.matmul(FOVLineRot, np.reshape(point, [3, 1]))\n\t\t\t# 
increase the distance between the line start point and the end point\n\t\t\tnewPoint = [extendFactor * newPoint[0, 0], extendFactor * newPoint[1, 0], 1]\n\t\t\treturn newPoint\n\n\t\tax = self.render_axis\n\t\tartists = []\n\n\t\t# add goal\n\t\tgoal = mlines.Line2D([self.robot.gx], [self.robot.gy], color=goal_color, marker='*', linestyle='None',\n\t\t\t\t\t\t\t markersize=15, label='Goal')\n\t\tax.add_artist(goal)\n\t\tartists.append(goal)\n\n\t\t# add robot\n\t\trobotX, robotY = self.robot.px, self.robot.py\n\n\t\trobot = plt.Circle((robotX, robotY), self.robot.radius, fill=True, color=robot_color)\n\t\tax.add_artist(robot)\n\t\tartists.append(robot)\n\n\t\tplt.legend([robot, goal], ['Robot', 'Goal'], fontsize=16)\n\n\t\t# compute orientation in each step and add arrow to show the direction\n\t\tradius = 0.1\n\t\tarrowStartEnd = []\n\n\t\trobot_theta = self.robot.theta\n\n\t\tarrowStartEnd.append(\n\t\t\t((robotX, robotY), (robotX + radius * np.cos(robot_theta), robotY + radius * np.sin(robot_theta))))\n\n\t\tarrows = [patches.FancyArrowPatch(*arrow, color=arrow_color, arrowstyle=arrow_style)\n\t\t\t\t  for arrow in arrowStartEnd]\n\t\tfor arrow in arrows:\n\t\t\tax.add_artist(arrow)\n\t\t\tartists.append(arrow)\n\n\n\t\t# add humans and change the color of them based on visibility\n\t\t# print(self.current_human_states.shape)\n\t\t# print(self.track_num)\n\t\t# render tracked humans\n\t\thuman_circles = [plt.Circle(self.current_human_states[i] + np.array([self.robot.px, self.robot.py]), 0.2, fill=False) \\\n\t\t\t\t\t\t for i in range(self.max_human_num)]\n\t\t# render untracked humans\n\t\t# human_circles = [\n\t\t# \tplt.Circle(self.current_human_states_raw[i] + np.array([self.robot.px, self.robot.py]), 0.2, fill=False) for i\n\t\t# \tin range(self.detectedHumanNum)]\n\n\n\t\tfor i in range(self.max_human_num):\n\t\t# for i in range(self.detectedHumanNum):\n\t\t\tax.add_artist(human_circles[i])\n\t\t\tartists.append(human_circles[i])\n\n\t\t\t# green: visible; red: invisible\n\t\t\tif self.human_visibility[i]:\n\t\t\t\thuman_circles[i].set_color(c='g')\n\t\t\telse:\n\t\t\t\thuman_circles[i].set_color(c='r')\n\n\n\t\tplt.pause(0.1)\n\t\tfor item in artists:\n\t\t\titem.remove()  # there should be a better way to do this. For example,\n\t\t# initially use add_artist and draw_artist later on\n\t\tfor t in ax.texts:\n\t\t\tt.set_visible(False)\n\n\tdef shutdown(self):\n\t\tself.smoothStop()\n\t\tprint(\"You are stopping the robot!\")\n\t\tself.reset()\n\t\t\n\n\tdef smoothStop(self):\n\t\trealAction = Twist()\n\t\tself.actionPublisher.publish(Twist())\n\n\n\n"
  },
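  {
    "path": "docs/editor_examples/wheel_velocity_smoothing_sketch.py",
    "content": "# Editorial sketch (hypothetical file, not part of the original repository).\n# It isolates the command smoothing performed by rosTurtlebot2iEnv.smooth():\n# a (v, w) command is converted to left/right wheel speeds using the wheel\n# radius (0.035 m) and wheel base (0.23 m) that appear in that method, the wheel\n# speeds are clipped and low-pass filtered against the previous command, and the\n# result is converted back to a smoothed (v, w). The constant names and the\n# module-level state are assumptions made to keep the example standalone.\nimport numpy as np\n\nWHEEL_RADIUS = 0.035    # m\nWHEEL_BASE = 0.23       # m\nMAX_WHEEL_SPEED = 17.5  # rad/s, clipping limit used in smooth()\nBETA = 0.1              # low-pass filter coefficient\n\nlast_left, last_right = 0.0, 0.0  # previous wheel commands (self.last_left/right in the env)\n\ndef smooth(v, w):\n    global last_left, last_right\n    # (v, w) -> left/right wheel angular velocities\n    left = (2.0 * v - WHEEL_BASE * w) / (2.0 * WHEEL_RADIUS)\n    right = (2.0 * v + WHEEL_BASE * w) / (2.0 * WHEEL_RADIUS)\n    # saturate, then blend with the previous command to avoid jerky motion\n    left = (1.0 - BETA) * last_left + BETA * np.clip(left, -MAX_WHEEL_SPEED, MAX_WHEEL_SPEED)\n    right = (1.0 - BETA) * last_right + BETA * np.clip(right, -MAX_WHEEL_SPEED, MAX_WHEEL_SPEED)\n    last_left, last_right = left, right\n    # convert the smoothed wheel speeds back to (v, w)\n    return WHEEL_RADIUS / 2.0 * (left + right), WHEEL_RADIUS / WHEEL_BASE * (right - left)\n\nprint(smooth(0.5, 0.0))  # first call is damped toward zero by the filter\nprint(smooth(0.5, 0.0))  # repeated calls converge toward the requested command\n"
  },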
  {
    "path": "crowd_sim/envs/utils/__init__.py",
    "content": ""
  },
  {
    "path": "crowd_sim/envs/utils/action.py",
    "content": "from collections import namedtuple\n\nActionXY = namedtuple('ActionXY', ['vx', 'vy'])\nActionRot = namedtuple('ActionRot', ['v', 'r'])\n"
  },
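  {
    "path": "docs/editor_examples/action_types_sketch.py",
    "content": "# Editorial sketch (hypothetical file, not part of the original repository).\n# It shows how the two action namedtuples defined in\n# crowd_sim/envs/utils/action.py are used: holonomic agents are stepped with an\n# ActionXY velocity command, while the unicycle robot uses ActionRot, where 'v'\n# is a forward speed and 'r' is the change in heading for the step (see\n# Agent.compute_position). The numbers below are arbitrary examples.\nfrom crowd_sim.envs.utils.action import ActionXY, ActionRot\n\nholonomic_cmd = ActionXY(vx=0.3, vy=-0.1)  # desired x/y velocity in m/s\nunicycle_cmd = ActionRot(v=0.5, r=0.1)     # forward speed in m/s, delta heading in rad\n\nprint(holonomic_cmd.vx, holonomic_cmd.vy)\nprint(unicycle_cmd.v, unicycle_cmd.r)\n"
  },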
  {
    "path": "crowd_sim/envs/utils/agent.py",
    "content": "import numpy as np\nfrom numpy.linalg import norm\nimport abc\nimport logging\nfrom crowd_nav.policy.policy_factory import policy_factory\nfrom crowd_sim.envs.utils.action import ActionXY, ActionRot\nfrom crowd_sim.envs.utils.state import ObservableState, FullState\n\n\nclass Agent(object):\n    def __init__(self, config, section):\n        \"\"\"\n        Base class for robot and human. Have the physical attributes of an agent.\n\n        \"\"\"\n        subconfig = config.robot if section == 'robot' else config.humans\n        self.visible = subconfig.visible\n        self.v_pref = subconfig.v_pref\n        self.radius = subconfig.radius\n        # randomize neighbor_dist of ORCA\n        if config.env.randomize_attributes:\n            config.orca.neighbor_dist = np.random.uniform(5, 10)\n        self.policy = policy_factory[subconfig.policy](config)\n        self.sensor = subconfig.sensor\n        self.FOV = np.pi * subconfig.FOV\n        # for humans: we only have holonomic kinematics; for robot: depend on config\n        self.kinematics = 'holonomic' if section == 'humans' else config.action_space.kinematics\n        self.px = None\n        self.py = None\n        self.gx = None\n        self.gy = None\n        self.vx = None\n        self.vy = None\n        self.theta = None\n        self.time_step = config.env.time_step\n        self.policy.time_step = config.env.time_step\n\n\n    def print_info(self):\n        logging.info('Agent is {} and has {} kinematic constraint'.format(\n            'visible' if self.visible else 'invisible', self.kinematics))\n\n\n    def sample_random_attributes(self):\n        \"\"\"\n        Sample agent radius and v_pref attribute from certain distribution\n        :return:\n        \"\"\"\n        self.v_pref = np.random.uniform(0.5, 1.5)\n        self.radius = np.random.uniform(0.3, 0.5)\n\n    def set(self, px, py, gx, gy, vx, vy, theta, radius=None, v_pref=None):\n        self.px = px\n        self.py = py\n        self.gx = gx\n        self.gy = gy\n        self.vx = vx\n        self.vy = vy\n        self.theta = theta\n\n        if radius is not None:\n            self.radius = radius\n        if v_pref is not None:\n            self.v_pref = v_pref\n\n    # self.px, self.py, self.vx, self.vy, self.radius, self.gx, self.gy, self.v_pref, self.theta\n    def set_list(self, px, py, vx, vy, radius, gx, gy, v_pref, theta):\n        self.px = px\n        self.py = py\n        self.gx = gx\n        self.gy = gy\n        self.vx = vx\n        self.vy = vy\n        self.theta = theta\n        self.radius = radius\n        self.v_pref = v_pref\n\n    def get_observable_state(self):\n        return ObservableState(self.px, self.py, self.vx, self.vy, self.radius)\n\n    def get_observable_state_list(self):\n        return [self.px, self.py, self.vx, self.vy, self.radius]\n\n    def get_observable_state_list_noV(self):\n        return [self.px, self.py, self.radius]\n\n    def get_next_observable_state(self, action):\n        self.check_validity(action)\n        pos = self.compute_position(action, self.time_step)\n        next_px, next_py = pos\n        if self.kinematics == 'holonomic':\n            next_vx = action.vx\n            next_vy = action.vy\n        else:\n            next_theta = self.theta + action.r\n            next_vx = action.v * np.cos(next_theta)\n            next_vy = action.v * np.sin(next_theta)\n        return ObservableState(next_px, next_py, next_vx, next_vy, self.radius)\n\n    def get_full_state(self):\n        
return FullState(self.px, self.py, self.vx, self.vy, self.radius, self.gx, self.gy, self.v_pref, self.theta)\n\n    def get_full_state_list(self):\n        return [self.px, self.py, self.vx, self.vy, self.radius, self.gx, self.gy, self.v_pref, self.theta]\n\n    def get_full_state_list_noV(self):\n        return [self.px, self.py, self.radius, self.gx, self.gy, self.v_pref, self.theta]\n        # return [self.px, self.py, self.radius, self.gx, self.gy, self.v_pref]\n\n    def get_position(self):\n        return self.px, self.py\n\n    def set_position(self, position):\n        self.px = position[0]\n        self.py = position[1]\n\n    def get_goal_position(self):\n        return self.gx, self.gy\n\n    def get_velocity(self):\n        return self.vx, self.vy\n\n\n    def set_velocity(self, velocity):\n        self.vx = velocity[0]\n        self.vy = velocity[1]\n\n\n    @abc.abstractmethod\n    def act(self, ob):\n        \"\"\"\n        Compute state using received observation and pass it to policy\n\n        \"\"\"\n        return\n\n    def check_validity(self, action):\n        if self.kinematics == 'holonomic':\n            assert isinstance(action, ActionXY)\n        else:\n            assert isinstance(action, ActionRot)\n\n    def compute_position(self, action, delta_t):\n        self.check_validity(action)\n        if self.kinematics == 'holonomic':\n            px = self.px + action.vx * delta_t\n            py = self.py + action.vy * delta_t\n        # unicycle\n        else:\n            # naive dynamics\n            # theta = self.theta + action.r * delta_t # if action.r is w\n            # # theta = self.theta + action.r # if action.r is delta theta\n            # px = self.px + np.cos(theta) * action.v * delta_t\n            # py = self.py + np.sin(theta) * action.v * delta_t\n\n            # differential drive\n            epsilon = 0.0001\n            if abs(action.r) < epsilon:\n                R = 0\n            else:\n                w = action.r/delta_t # action.r is delta theta\n                R = action.v/w\n\n            px = self.px - R * np.sin(self.theta) + R * np.sin(self.theta + action.r)\n            py = self.py + R * np.cos(self.theta) - R * np.cos(self.theta + action.r)\n\n\n        return px, py\n\n    def step(self, action):\n        \"\"\"\n        Perform an action and update the state\n        \"\"\"\n        self.check_validity(action)\n        pos = self.compute_position(action, self.time_step)\n        self.px, self.py = pos\n        if self.kinematics == 'holonomic':\n            self.vx = action.vx\n            self.vy = action.vy\n        else:\n            self.theta = (self.theta + action.r) % (2 * np.pi)\n            self.vx = action.v * np.cos(self.theta)\n            self.vy = action.v * np.sin(self.theta)\n\n    def one_step_lookahead(self, pos, action):\n        px, py = pos\n        self.check_validity(action)\n        new_px = px + action.vx * self.time_step\n        new_py = py + action.vy * self.time_step\n        new_vx = action.vx\n        new_vy = action.vy\n        return [new_px, new_py, new_vx, new_vy]\n\n    def reached_destination(self):\n        return norm(np.array(self.get_position()) - np.array(self.get_goal_position())) < self.radius\n\n"
  },
  {
    "path": "crowd_sim/envs/utils/human.py",
    "content": "from crowd_sim.envs.utils.agent import Agent\nfrom crowd_sim.envs.utils.state import JointState\n\n\nclass Human(Agent):\n    # see Agent class in agent.py for details!!!\n    def __init__(self, config, section):\n        super().__init__(config, section)\n        self.isObstacle = False # whether the human is a static obstacle (part of wall) or a moving agent\n        self.id = None # the human's ID, used for collecting traj data\n        self.observed_id = -1 # if it is observed, give it a tracking ID\n\n    # ob: a list of observable states\n    def act(self, ob):\n        \"\"\"\n        The state for human is its full state and all other agents' observable states\n        :param ob:\n        :return:\n        \"\"\"\n\n        state = JointState(self.get_full_state(), ob)\n        action = self.policy.predict(state)\n        return action\n\n    # ob: a joint state (ego agent's full state + other agents' observable states)\n    def act_joint_state(self, ob):\n        \"\"\"\n        The state for human is its full state and all other agents' observable states\n        :param ob:\n        :return:\n        \"\"\"\n        action = self.policy.predict(ob)\n        return action\n"
  },
  {
    "path": "crowd_sim/envs/utils/info.py",
    "content": "class Timeout(object):\n    def __init__(self):\n        pass\n\n    def __str__(self):\n        return 'Timeout'\n\n\nclass ReachGoal(object):\n    def __init__(self):\n        pass\n\n    def __str__(self):\n        return 'Reaching goal'\n\n\nclass Danger(object):\n    def __init__(self, min_dist):\n        self.min_dist = min_dist\n\n    def __str__(self):\n        return 'Too close'\n\n\nclass Collision(object):\n    def __init__(self):\n        pass\n\n    def __str__(self):\n        return 'Collision'\n\nclass OutRoad(object):\n    def __init__(self):\n        pass\n\n    def __str__(self):\n        return 'Out of road'\n\nclass Nothing(object):\n    def __init__(self):\n        pass\n\n    def __str__(self):\n        return ''\n"
  },
  {
    "path": "crowd_sim/envs/utils/recorder.py",
    "content": "import pandas as pd\nimport os\nimport numpy as np\n\nclass Recoder(object):\n\tdef __init__(self):\n\n\t\tself.unsmoothed_actions = []\n\t\tself.actionList = []  # recorded in turtlebot_env.step. [trans,rot]\n\t\tself.wheelVelList = []  # recorded in apply_action(). [left,right]\n\t\tself.orientationList = []  # recorded in calc_state() [angle]\n\t\tself.positionList = []  # recorded in calc_state() [x,y]\n\t\tself.robot_goal = []\n\n\t\tself.saveTo='data/unicycle/selfAttn_truePred_noiseActt/record/real_recordings/'\n\t\t# self.loadFrom = 'data/unicycle/selfAttn_truePred_noiseActt/record/sim_recordings/ep0-1/action.csv'\n\t\tself.loadFrom = 'data/unicycle/selfAttn_truePred_noiseActt/record/sim_recordings/ep0-1point5/unsmoothed_action.csv'\n\t\tself.loadedAction=None\n\t\tself.episodeInitNum=0\n\n\n\tdef saveEpisode(self,episodeCounter):\n\n\t\tsavePath=os.path.join(self.saveTo,'ep'+str(self.episodeInitNum+episodeCounter))\n\t\t#savePath = os.path.join(self.saveTo, 'ep' + str(4))\n\t\tif not os.path.exists(savePath):\n\t\t\t#shutil.rmtree(savePath)\n\t\t\tos.makedirs(savePath)\n\n\t\tif len(self.actionList) > 0:\n\t\t\taction = pd.DataFrame({'trans': np.array(self.actionList)[:,0], 'rot': np.array(self.actionList)[:,1]})\n\t\t\taction.to_csv(os.path.join(savePath, 'action.csv'), index=False)\n\t\tif len(self.wheelVelList) > 0:\n\t\t\twheelVel=pd.DataFrame({'left': np.array(self.wheelVelList)[:,0], 'right': np.array(self.wheelVelList)[:,1]})\n\t\t\twheelVel.to_csv(os.path.join(savePath, 'wheelVel.csv'), index=False)\n\t\tif len(self.orientationList) > 0:\n\t\t\torientation = pd.DataFrame({'ori': np.array(self.orientationList)})\n\t\t\torientation.to_csv(os.path.join(savePath, 'orientation.csv'), index=False)\n\t\tif len(self.positionList) > 0:\n\t\t\tposition = pd.DataFrame({'x': np.array(self.positionList)[:, 0], 'y': np.array(self.positionList)[:, 1]})\n\t\t\tposition.to_csv(os.path.join(savePath, 'position.csv'), index=False)\n\t\tif len(self.unsmoothed_actions) > 0:\n\t\t\tunsmooth_action = pd.DataFrame({'trans': np.array(self.unsmoothed_actions)[:,0], 'rot': np.array(self.unsmoothed_actions)[:,1]})\n\t\t\tunsmooth_action.to_csv(os.path.join(savePath, 'unsmoothed_action.csv'), index=False)\n\t\tif len(self.robot_goal) > 0:\n\t\t\trobot_goal = pd.DataFrame({'x': np.array(self.robot_goal)[:, 0], 'y': np.array(self.robot_goal)[:, 1]})\n\t\t\trobot_goal.to_csv(os.path.join(savePath, 'robot_goal.csv'), index=False)\n\n\t\tprint(\"csv written\")\n\t\tself.clear()\n\n\tdef loadActions(self):\n\t\tself.loadedAction = pd.read_csv(self.loadFrom)\n\t\tself.v_list = self.loadedAction['trans']\n\t\tself.delta_theta_list = self.loadedAction['rot']\n\t\tprint(\"Reading actions from\", self.loadFrom)\n\n\tdef clear(self):\n\t\tself.actionList = []  # recorded in turtlebot_env.step. [trans,rot]\n\t\tself.wheelVelList = []  # recorded in apply_action(). 
[left,right]\n\t\tself.orientationList = []  # recorded in calc_state() [angle]\n\t\tself.positionList = []  # recorded in calc_state() [x,y]\n\t\tself.unsmoothed_actions = []\n\t\tself.robot_goal = []\n\nclass humanRecoder(object):\n\tdef __init__(self, human_num):\n\t\tself.human_num = human_num\n\t\t# list of [px, py, radius, v_pref] with length = human_num\n\t\tself.initPosList = [[] for i in range(self.human_num)]\n\t\t# list of goal lists with length = human_num\n\t\t# i-th element is a list i-th human's goals in the episode\n\t\tself.goalPosList = [[] for i in range(self.human_num)]\n\n\t# update self.initPosList\n\t# human_id: 0~4\n\t# pos: list of [px, py, radius, v_pref]\n\tdef addInitPos(self, human_id, px, py, radius, v_pref):\n\t\tself.initPosList[human_id] = [px, py, radius, v_pref]\n\n\t# append [px, py] at the end of i-th element of list\n\tdef addGoalPos(self, human_id, px, py):\n\t\tself.goalPosList[human_id].append([px, py])\n\n\tdef loadLists(self, initPos, goalPos):\n\t\tself.initPosList = initPos\n\t\tself.goalPosList = goalPos\n\n\tdef getInitList(self):\n\t\treturn self.initPosList\n\n\tdef getGoalList(self):\n\t\treturn self.goalPosList\n\n\tdef getInitPos(self, human_id):\n\t\treturn self.initPosList[human_id]\n\n\t# get next goal position for human i\n\t# if the last goal is already gotten before, return None\n\tdef getNextGoalPos(self, human_id):\n\t\t# if the list is empty\n\t\tif not self.goalPosList[human_id]:\n\t\t\treturn None\n\t\treturn self.goalPosList[human_id].pop(0)\n\n\t# check whether human i has used up its goal\n\tdef goalIsEmpty(self, human_id):\n\t\tif not self.goalPosList[human_id]:\n\t\t\treturn True\n\t\telse:\n\t\t\treturn False\n\n\tdef clear(self):\n\t\tself.initPosList = [[] for i in range(self.human_num)]\n\t\tself.goalPosList = [[] for i in range(self.human_num)]\n\nclass jointStateRecoder(object):\n\tdef __init__(self, human_num):\n\t\tself.human_num = human_num\n\n\t\t# px, py, r, gx, gy, v_pref, theta, vx, vy\n\t\tself.robot_s = []\n\n\t\t# nested list of [px1, ..., px5]\n\t\tself.humans_px = []\n\t\t# nested list of [py1, ..., py5]\n\t\tself.humans_py = []\n\n\t\t# record the random seed for this episode\n\t\tself.episode_num = []\n\n\t\t# data labels:\n\t\tself.episode_num_label = []\n\t\tself.traj_ratio = []\n\t\tself.time_ratio = []\n\t\tself.idle_time = []\n\n\n\t# append a joint state to the lists\n\t# ob should be the ob returned from env.step()\n\tdef add_traj(self, seed, ob, srnn):\n\t\t# if ob is a dictionary from crowd_sim_dict.py\n\t\tif srnn:\n\t\t\trobot_s_noV = ob['robot_node'][0, 0].cpu().numpy()\n\t\t\trobot_v = ob['edges'][0, 0].cpu().numpy()\n\t\t\tself.robot_s.append(np.concatenate((robot_s_noV, robot_v)))\n\t\t\tself.humans_px.append(ob['edges'][0, 1:, 0].cpu().numpy())\n\t\t\tself.humans_py.append(ob['edges'][0, 1:, 1].cpu().numpy())\n\t\t# if ob is a list from crowd_sim.py\n\t\telse:\n\t\t\tob = ob[0].cpu().numpy()\n\t\t\t# px, py, r, gx, gy, v_pref, theta, vx, vy\n\t\t\tself.robot_s.append(np.concatenate((ob[1:3], ob[5:10], ob[3:5])))\n\t\t\tpx_list, py_list = [], []\n\t\t\t# human num = (ob.shape - 10) // 5\n\t\t\tfor i in range((ob.shape[0] - 10) // 5):\n\t\t\t\tpx_idx = 5 * i + 10\n\t\t\t\tpy_idx = 5 * i + 11\n\t\t\t\tpx_list.append(ob[px_idx])\n\t\t\t\tpy_list.append(ob[py_idx])\n\t\t\tself.humans_px.append(px_list)\n\t\t\tself.humans_py.append(py_list)\n\t\tself.episode_num.append(seed)\n\n\t# seed: the random seed of this episode from env\n\t# respond_traj_len, respond_time: the total traj length and 
total timesteps when the user makes last choice\n\t# traj_len, tot_time: total traj length and timesteps of the whole episode\n\tdef add_label(self, seed, respond_traj_len, traj_len, respond_time, tot_time, idle_time):\n\t\tself.traj_ratio.append(respond_traj_len/traj_len)\n\t\tself.time_ratio.append(respond_time/tot_time)\n\t\tself.idle_time.append(idle_time)\n\t\tself.episode_num_label.append(seed)\n\n\n\n\tdef save_to_file(self, directory):\n\t\tif not os.path.exists(os.path.join('trajectories', directory)):\n\t\t\tos.makedirs(os.path.join('trajectories', directory))\n\t\t# 1. save all trajectories\n\t\t# add robot data\n\t\trobot_s = np.array(self.robot_s)\n\t\tdata_dict = {'epi_num': np.array(self.episode_num), 'robot_px': robot_s[:, 0], 'robot_py': robot_s[:, 1],\n\t\t\t\t\t 'robot_r': robot_s[:, 2],\n\t\t\t\t\t 'robot_gx': robot_s[:, 3], 'robot_gy': robot_s[:, 4], 'robot_vpref': robot_s[:, 5],\n\t\t\t\t\t 'robot_theta': robot_s[:, 6], 'robot_vx': robot_s[:, 7], 'robot_vy': robot_s[:, 8]}\n\t\t# add humans data\n\t\thumans_px = np.array(self.humans_px)\n\t\thumans_py = np.array(self.humans_py)\n\t\tfor i in range(self.human_num):\n\t\t\tdata_dict['human'+str(i)+'px'] = humans_px[:, i]\n\t\t\tdata_dict['human'+str(i)+'py'] = humans_py[:, i]\n\n\t\tall_data = pd.DataFrame(data_dict)\n\t\tall_data.to_csv(os.path.join('trajectories', directory, 'states.csv'), index=False)\n\n\t\t# 2. save all labels\n\t\tdata_dict = {'epi_num': np.array(self.episode_num_label), 'traj_ratio': np.array(self.traj_ratio),\n\t\t\t\t\t 'time_ratio': np.array(self.time_ratio), 'idle_time': np.array(self.idle_time)}\n\t\tall_data = pd.DataFrame(data_dict)\n\t\tall_data.to_csv(os.path.join('trajectories', directory, 'labels.csv'), index=False)\n\n\t\tself.clear()\n\n\tdef clear(self):\n\t\t# px, py, r, gx, gy, v_pref, theta, vx, vy\n\t\tself.robot_s = []\n\n\t\t# nested list of [px1, ..., px5]\n\t\tself.humans_px = []\n\t\t# nested list of [py1, ..., py5]\n\t\tself.humans_py = []\n\n\t\t# record the random seed for this episode\n\t\tself.episode_num = []\n\n\t\t# data labels:\n\t\tself.episode_num_label = []\n\t\tself.traj_ratio = []\n\t\tself.time_ratio = []\n\t\tself.idle_time = []\n\n"
  },
  {
    "path": "crowd_sim/envs/utils/robot.py",
    "content": "from crowd_sim.envs.utils.agent import Agent\nfrom crowd_sim.envs.utils.state import JointState\n\n\nclass Robot(Agent):\n    def __init__(self, config,section):\n        super().__init__(config,section)\n        self.sensor_range = config.robot.sensor_range\n\n    def act(self, ob):\n        if self.policy is None:\n            raise AttributeError('Policy attribute has to be set!')\n\n        state = JointState(self.get_full_state(), ob)\n        action = self.policy.predict(state)\n        return action\n\n\n    def actWithJointState(self,ob):\n        action = self.policy.predict(ob)\n        return action\n"
  },
  {
    "path": "crowd_sim/envs/utils/state.py",
    "content": "from collections import namedtuple\nimport numpy as np\n\nFullState = namedtuple('FullState', ['px', 'py', 'vx', 'vy', 'radius', 'gx', 'gy', 'v_pref', 'theta'])\nObservableState = namedtuple('ObservableState', ['px', 'py', 'vx', 'vy', 'radius'])\n\n# JointState has 2 attributes:\n# self.self_state is a FullState\n# self.human_states is a list of ObservableStates\nclass JointState(object):\n    # self_state: list of length 9\n    # human_states: list of length human_num*5 or nested list [human_num, 5]\n    def __init__(self, self_state, human_states):\n        assert len(self_state) == 9\n        human_states_namedtuple = []\n        # if human states is a nested list [human_num, 5]\n        if len(np.shape(human_states)) == 2:\n            for human_state in human_states:\n                assert len(human_state) == 5\n                human_states_namedtuple.append(ObservableState(*human_state))\n        # if human states is a flatten list of length human_num*5\n        else:\n            assert len(human_states) % 5 == 0\n            human_num = len(human_states) // 5\n            for i in range(human_num):\n                human_states_namedtuple.append(ObservableState(*human_states[int(i*5):(int((i+1)*5))]))\n\n        self.self_state = FullState(*self_state)\n        self.human_states = human_states_namedtuple\n\n    # convert a joint state to a flattened list of length 9 + 5 * human_num\n    def to_flatten_list(self):\n        flatten_list = list(self.self_state)\n        for human_state in self.human_states:\n            flatten_list.extend(list(human_state))\n        return flatten_list"
  },
  {
    "path": "gst_updated/.gitignore",
    "content": "results/\ndatasets/\nlogs/\nmyenv/\n*.pickle\n*.png\n*.pt"
  },
  {
    "path": "gst_updated/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2021 Zhe Huang\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "gst_updated/README-old.md",
    "content": "# gst\nGumbel Social Transformer.\n\nThis is the implementation for the paper\n### Learning Sparse Interaction Graphs of Partially Observed Pedestrians for Trajectory Prediction\nGST is the abbreviation of our model Gumbel Social Transformer. All code was developed and tested on Ubuntu 18.04 with CUDA 10.2, Python 3.6.9, and PyTorch 1.7.1. <br/>\n\n### Setup\n##### 1. Create a Virtual Environment. (Optional)\n```\nvirtualenv -p /usr/bin/python3 myenv\nsource myenv/bin/activate\n```\n##### 2. Install Packages\nYou can run either <br/>\n```\npip install -r requirements.txt\n```\nor <br/>\n```\npip install numpy\npip install scipy\npip install matplotlib\npip install tensorboardX\npip install pandas\npip install pyyaml\npip install jupyterlab\npip install torch==1.7.1\npip install tensorflow\n```\nor for the machine with 3090s,\n```\npip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html\n```\n##### 3. Create Folders and Dataset Files.\n```\nsh scripts/make_dirs.sh\nsh scripts/data/create_batch_datasets_eth_ucy.sh\npython scripts/data/create_batch_datasets_sdd.py \n```\n\n### Training and Evaluation on Various Configurations\nTo train and evaluate a model with n=1, i.e., the target pedestrian pays attention to at most one partially observed pedestrian, run\n```\nsh scripts/train_sparse.sh\nsh scripts/eval_sparse.sh\n```\nTo train and evaluate a model with n=1 and temporal component as a temporal convolution network, run\n```\nsh scripts/train_sparse_tcn.sh\nsh scripts/eval_sparse_tcn.sh\n```\nTo train and evaluate a model with full connection, i.e., the target pedestrian pays attention to all partially observed pedestrians in the scene, run\n```\nsh scripts/train_full_connection.sh\nsh scripts/eval_full_connection.sh\n```\nTo train and evaluate a model in which the target pedestrian pays attention to all fully observed pedestrians in the scene, run\n```\nsh scripts/train_full_connection_fully_observed.sh\nsh scripts/eval_full_connection_fully_observed.sh\n```\n\n### Important Arguments for Building Customized Configurations\n- `--spatial_num_heads_edges`: n, i.e., the upperbound number of pedestrians that the target pedestrian can pay attention to in the scene. When n=0, it is defined as full connection, i.e., the target pedestrian pays attention to all pedestrians in the scene. Default is 4.\n- `--only_observe_full_period`: The target pedestrian only pays attention to fully observed pedestrians. Default is False.\n- `--temporal`: Temporal component. `lstm` is Masked LSTM, and `temporal_convolution_net` is temporal convolution network. Default is `lstm`.\n- `--decode_style`: Decoding style. It has to match the option `--temporal`. `recursive` matches `lstm`, and `readout` matches `temporal_convolution_net`. Default is `recursive`.\n- `--ghost`: Add a ghost pedestrian in the scene to encourage sparsity. When `--spatial_num_heads_edges` is set as zero, i.e., the target pedestrian pays attention to all pedestrians in the scene, `--ghost` has to be set as False. Default is False.\n"
  },
  {
    "path": "gst_updated/README.md",
    "content": "# gst\nGumbel Social Transformer trained with data provided in Nov, 2021 by Shuijing.\n\n## How to set up\n##### 1. Create a Virtual Environment.\n```\nvirtualenv -p /usr/bin/python3 myenv\nsource myenv/bin/activate\n```\n##### 2. Install Packages\nYou can run either <br/>\n```\npip install -r requirements.txt\n```\nor <br/>\n```\npip install numpy\npip install scipy\npip install matplotlib\npip install tensorboardX\npip install pandas\npip install pyyaml\npip install jupyterlab\npip install torch==1.7.1\npip install tensorflow\n```\nor for the machine with 3090s,\n```\npip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html\n```\n##### 3. Create Folders, and download datasets and pretrained models.\n```\nsh run/make_dirs.sh\nsh run/download_datasets_models.sh\n```\n\n## How to run\n##### 1. Test pretrained models.\n```\nsource myenv/bin/activate\nsh tuning/211209-test_shuijing.sh \n```\nAnd you should see outputs which look like below:\n```\n--------------------------------------------------\ndataset:  sj\nNo ghost version.\nnew gst\nnew st model\nLoaded configuration:  100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000\nTest loss from loaded model:  -4.0132044252296755\ndataset: sj | test aoe: 0.1579 | test aoe std: 0.0175 | test foe: 0.3216 | test foe std: 0.0408 | min aoe: 0.1339, min foe: 0.2638\n```\n##### 2. Test wrapper.\nRun\n```\ncd gst_updated\nsh run/download_wrapper_data_model.sh\n```\nto collect model and demo data for wrapper. Then run\n```\npython scripts/wrapper/crowd_nav_interface_parallel.py\n```\nAnd you should see outputs which look like below:\n```\nINPUT DATA\nnumber of environments:  4\ninput_traj shape:  torch.Size([4, 15, 5, 2])\ninput_binary_mask shape: torch.Size([4, 15, 5, 1])\n\nNo ghost version.\nnew gst\nnew st model\nLOADED MODEL\ndevice:  cuda:0\n\n\nOUTPUT DATA\noutput_traj shape:  torch.Size([4, 15, 5, 5])\noutput_binary_mask shape: torch.Size([4, 15, 1])\n\n0.png is created.\n1.png is created.\n2.png is created.\n3.png is created.\n```\n\n## Observation dimensions for traj pred (by Shuijing)\nAs far as I know, the wrapper for SRNN model does not need to handle multiple timesteps simultaneously. So the time dimension is not mentioned here.\n### Output format\nThe output of traj pred model (or the input of the planner model) at each timestep t is a 3D float array with size = [number_of_pedestrains, number of prediction steps, 5], where 5 includes [mu_x, mu_y, sigma_x, sigma_y, correlation coefficient].\n\nIf number of parallel environments > 1, the array becomes 4D with size = [number of env, number_of_pedestrains, number of prediction steps, 5].\n\nThe order of the dimensions can be swapped, and the dimensions can be combined in row-major order if needed. For example, [number of env, number_of_pedestrains, number of prediction steps * 5] is good as long as the last dimension is in row-major order.\n\n### Input format \nThe input format of traj pred model at each timestep t is a 2D float array with size = [number of pedestrains, 2], where 2 includes [px, py] of each pedestrain. \n\nIf needed, the displacements of all pedestrains can also be added. In this case, input size = [number of pedestrains, 4], where 4 includes [px, py, delta_px, delta_py] of each pedestrain. \n\nIn addition, there is a binary mask with size = [number of pedestrains, 1] that indicates the presence of each human. 
If a human with ID = i is present, then mask[i] = True; otherwise mask[i] = False.\n\nIf the number of parallel environments > 1, the number of environments is similarly added as the first dimension of all of the arrays above.\n
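### Shape example\nA minimal sketch of the shapes described above, assuming PyTorch is installed and using arbitrary dummy sizes (all variable names here are hypothetical and not part of the wrapper API):\n```\nimport torch\n\nnum_env, num_peds, pred_steps = 4, 5, 5  # dummy sizes for illustration\n\n# input at one timestep: [px, py] of each pedestrian, plus a presence mask\ninput_pos = torch.zeros(num_env, num_peds, 2)\ninput_mask = torch.ones(num_env, num_peds, 1, dtype=torch.bool)\n\n# output: one Gaussian per pedestrian per prediction step,\n# ordered as [mu_x, mu_y, sigma_x, sigma_y, correlation coefficient]\noutput_pred = torch.zeros(num_env, num_peds, pred_steps, 5)\nassert output_pred.shape == (num_env, num_peds, pred_steps, 5)\n```\n"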
  },
  {
    "path": "gst_updated/__init__.py",
    "content": ""
  },
  {
    "path": "gst_updated/requirements.txt",
    "content": "absl-py==0.13.0\nanyio==3.3.0\nargon2-cffi==21.1.0\nastunparse==1.6.3\nasync-generator==1.10\nattrs==21.2.0\nBabel==2.9.1\nbackcall==0.2.0\nbleach==4.1.0\ncached-property==1.5.2\ncachetools==4.2.2\ncertifi==2021.5.30\ncffi==1.14.6\ncharset-normalizer==2.0.4\nclang==5.0\ncontextvars==2.4\ncycler==0.10.0\ndataclasses==0.8\ndecorator==5.0.9\ndefusedxml==0.7.1\nentrypoints==0.3\nflatbuffers==1.12\ngast==0.4.0\ngoogle-auth==1.35.0\ngoogle-auth-oauthlib==0.4.6\ngoogle-pasta==0.2.0\ngrpcio==1.39.0\nh5py==3.1.0\nidna==3.2\nimmutables==0.16\nimportlib-metadata==4.8.1\nipykernel==5.5.5\nipython==7.16.1\nipython-genutils==0.2.0\njedi==0.18.0\nJinja2==3.0.1\njson5==0.9.6\njsonschema==3.2.0\njupyter-client==7.0.2\njupyter-core==4.7.1\njupyter-server==1.10.2\njupyterlab==3.1.10\njupyterlab-pygments==0.1.2\njupyterlab-server==2.7.2\nkeras==2.6.0\nKeras-Preprocessing==1.1.2\nkiwisolver==1.3.1\nMarkdown==3.3.4\nMarkupSafe==2.0.1\nmatplotlib==3.3.4\nmistune==0.8.4\nnbclassic==0.3.1\nnbclient==0.5.4\nnbconvert==6.0.7\nnbformat==5.1.3\nnest-asyncio==1.5.1\nnotebook==6.4.3\nnumpy==1.19.5\noauthlib==3.1.1\nopt-einsum==3.3.0\npackaging==21.0\npandas==1.1.5\npandocfilters==1.4.3\nparso==0.8.2\npexpect==4.8.0\npickleshare==0.7.5\nPillow==8.3.1\nprometheus-client==0.11.0\nprompt-toolkit==3.0.20\nprotobuf==3.17.3\nptyprocess==0.7.0\npyasn1==0.4.8\npyasn1-modules==0.2.8\npycparser==2.20\nPygments==2.10.0\npyparsing==2.4.7\npyrsistent==0.18.0\npython-dateutil==2.8.2\npytz==2021.1\nPyYAML==5.4.1\npyzmq==22.2.1\nrequests==2.26.0\nrequests-oauthlib==1.3.0\nrequests-unixsocket==0.2.0\nrsa==4.7.2\nscipy==1.5.4\nSend2Trash==1.8.0\nsix==1.15.0\nsniffio==1.2.0\ntensorboard==2.6.0\ntensorboard-data-server==0.6.1\ntensorboard-plugin-wit==1.8.0\ntensorboardX==2.4\ntensorflow==2.6.0\ntensorflow-estimator==2.6.0\ntermcolor==1.1.0\nterminado==0.11.1\ntestpath==0.5.0\ntorch==1.7.1\ntornado==6.1\ntraitlets==4.3.3\ntyping-extensions==3.7.4.3\nurllib3==1.26.6\nwcwidth==0.2.5\nwebencodings==0.5.1\nwebsocket-client==1.2.1\nWerkzeug==2.0.1\nwrapt==1.12.1\nzipp==3.5.0\n"
  },
  {
    "path": "gst_updated/run/create_batch_datasets_eth_ucy.sh",
    "content": "for dataset in eth hotel univ zara1 zara2; do\n    python -u scripts/data/create_datasets_eth_ucy.py --dataset $dataset | tee -a logs/create_batch_datasets.txt\n    python -u scripts/data/create_batch_datasets_eth_ucy.py --dataset $dataset | tee -a logs/create_batch_datasets.txt\n    rm -rf datasets/eth_ucy/${dataset}/${dataset}_dset_train_trajectories.pt\n    rm -rf datasets/eth_ucy/${dataset}/${dataset}_dset_val_trajectories.pt\n    rm -rf datasets/eth_ucy/${dataset}/${dataset}_dset_test_trajectories.pt\ndone"
  },
  {
    "path": "gst_updated/run/create_batch_datasets_self_eth_ucy.sh",
    "content": "for dataset in eth hotel univ zara1 zara2; do\n    python -u scripts/data/create_datasets_self_eth_ucy.py --dataset $dataset | tee -a logs/create_batch_datasets.txt\n    python -u scripts/data/create_batch_datasets_self_eth_ucy.py --dataset $dataset | tee -a logs/create_batch_datasets.txt\n    rm -rf datasets/self_eth_ucy/${dataset}/${dataset}_dset_train_trajectories.pt\n    rm -rf datasets/self_eth_ucy/${dataset}/${dataset}_dset_val_trajectories.pt\n    rm -rf datasets/self_eth_ucy/${dataset}/${dataset}_dset_test_trajectories.pt\ndone"
  },
  {
    "path": "gst_updated/run/create_batch_datasets_synth.sh",
    "content": "python -u scripts/data/create_datasets_trajnet.py --dataset synth | tee -a logs/create_datasets.txt\npython -u scripts/data/create_batch_datasets_trajnet.py --dataset synth | tee -a logs/create_batch_datasets.txt"
  },
  {
    "path": "gst_updated/run/download_datasets_models.sh",
    "content": "cd datasets/shuijing/orca_20humans_fov\ncd results/visual\n# use curl for big-sized google drive files to redirect\ncurl -c /tmp/cookies \"https://drive.google.com/uc?export=download&id=17QLGgTcCWatIrF4EE1Gai8BaMbqxz6kk\" > /tmp/intermezzo.html\ncurl -L -b /tmp/cookies \"https://drive.google.com$(cat /tmp/intermezzo.html | grep -Po 'uc-download-link\" [^>]* href=\"\\K[^\"]*' | sed 's/\\&amp;/\\&/g')\" > sj_dset_test_batch_trajectories.pt\ncd ../../../results\n# use wget for small-sized google drive files\nwget -O 211203-model.zip \"https://drive.google.com/uc?export=download&id=1ukON4cAwM63gKKlJJwgjcdLf7-4-f1KH\" > /tmp/intermezzo.html\nunzip 211203-model.zip && rm -rf 211203-model.zip"
  },
  {
    "path": "gst_updated/run/download_wrapper_data_model.sh",
    "content": "mkdir -p datasets/wrapper_demo\ncd datasets/wrapper_demo\n# use wget for small-sized google drive files\nwget -O wrapper_demo_data.pt \"https://drive.google.com/uc?export=download&id=1Fjc4nFAeqgCTd9UEUsXhJkESqgErNtlA\" > /tmp/intermezzo.html\n\ncd ../../results\n# use wget for small-sized google drive files\nwget -O 211222-model.zip \"https://drive.google.com/uc?export=download&id=1mm364PGbp7hFHJ8uXN_uZI4Nq_3JIHPj\" > /tmp/intermezzo.html\nunzip 211222-model.zip && rm -rf 211222-model.zip"
  },
  {
    "path": "gst_updated/run/make_dirs.sh",
    "content": "mkdir -p datasets/shuijing/orca_20humans_fov\nmkdir results\nmkdir logs"
  },
  {
    "path": "gst_updated/scripts/experiments/draft.py",
    "content": "import pathhack\nimport pickle\nimport time\nfrom os.path import join, isdir\nfrom os import makedirs\nimport torch\nimport numpy as np\nfrom tensorboardX import SummaryWriter\nfrom torch.optim.lr_scheduler import StepLR\nfrom src.mgnn.utils import arg_parse, average_offset_error, final_offset_error, \\\n    negative_log_likelihood, random_rotate_graph, args2writername\nfrom src.gumbel_social_transformer.temperature_scheduler import Temp_Scheduler\nfrom torch.utils.data import DataLoader\nfrom datetime import datetime\n\nnum_epochs = 100\ninit_temp = 1.\ncheckpoint_epoch = 50\ntemperature_scheduler = Temp_Scheduler(num_epochs, init_temp, init_temp, \\\n        temp_min=0.03, last_epoch=checkpoint_epoch-1)\ntau_list = []\nfor epoch in range(51, num_epochs+1):\n    tau = temperature_scheduler.step()\n    tau_list.append(tau)\n\ntemperature_scheduler_new = Temp_Scheduler(num_epochs, init_temp, init_temp, temp_min=0.03)  \ntau_list_new = []\nfor epoch in range(1, num_epochs+1):\n    tau = temperature_scheduler_new.step()\n    if epoch > 50:\n        tau_list_new.append(tau)\ntau_list, tau_list_new = np.array(tau_list), np.array(tau_list_new)\nprint(tau_list-tau_list_new)\n# print(tau_list_new)\n\n\n\n# def load_batch_dataset(args, pkg_path, subfolder='train', num_workers=4, shuffle=None):\n#     result_filename = args.dataset+'_dset_'+subfolder+'_batch_trajectories.pt'\n#     if args.dataset == 'sdd':\n#         dataset_folderpath = join(pkg_path, 'datasets/sdd/social_pool_data')\n#     else:\n#         dataset_folderpath = join(pkg_path, 'datasets/eth_ucy', args.dataset)\n#     dset = torch.load(join(dataset_folderpath, result_filename))\n#     if shuffle is None:\n#         if subfolder == 'train':\n#             shuffle = True\n#         else:\n#             shuffle = False\n#     dloader = DataLoader(\n#         dset,\n#         batch_size=1,\n#         shuffle=shuffle,\n#         num_workers=num_workers)\n#     return dloader\n\n# def main(args):\n#     print('\\n\\n')\n#     print('-'*50)\n#     print('arguments: ', args)\n#     torch.manual_seed(args.random_seed)\n#     np.random.seed(args.random_seed)\n#     if args.batch_size != 1:\n#         raise RuntimeError(\"Batch size must be 1 for BatchTrajectoriesDataset.\")\n#     if args.dataset == 'sdd' and args.rotation_pattern is not None:\n#         raise RuntimeError(\"SDD should not allow rotation since it uses pixels.\")\n#     device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n#     print('device: ', device)\n#     loader_train = load_batch_dataset(args, pathhack.pkg_path, subfolder='train')\n#     if args.dataset == 'sdd':\n#         loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='test') # no val for sdd\n#     else:\n#         loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='val')\n#     train_data_loaders = [loader_train, loader_val]\n#     print('dataset: ', args.dataset)\n#     writername = args2writername(args)\n#     print('Config: ', writername)\n#     logdir = join(pathhack.pkg_path, 'results', writername, args.dataset)\n#     if isdir(logdir) and not args.resume_training:\n#         print('Error: The result directory was already created and used.')\n#         print('-'*50)\n#         print('\\n\\n')\n#         return\n#     writer = SummaryWriter(logdir=logdir)\n#     print('-'*50)\n#     print('\\n\\n')\n#     train(args, train_data_loaders, writer, logdir, device=device)\n#     writer.close()\n\n# def load_checkpoint(args, logdir, model, 
optimizer, lr_scheduler):\n#     # ! to be finished\n#     checkpoint = torch.load(checkpoint_filepath)\n    \n#     model.load_state_dict(checkpoint['state_dict'])\n#     optimizer.load_state_dict(checkpoint['optimizer'])\n#     lr_scheduler.load_state_dict(checkpoint['scheduler'])\n#     return model, optimizer, lr_scheduler, temperature_scheduler\n    \n\n# def train(args, data_loaders, writer, logdir, device='cuda:0'):\n#     print('-'*50)\n#     print('Training Phase')\n#     print('-'*50, '\\n')\n#     loader_train, loader_val = data_loaders\n#     model = st_model(args, device=device).to(device)\n#     optimizer = torch.optim.Adam(model.parameters(), lr=args.lr)\n#     lr_scheduler = StepLR(optimizer, step_size=50, gamma=0.3)\n#     if args.resume_training:\n#         temperature_scheduler = Temp_Scheduler(args.num_epochs, args.init_temp, args.init_temp, temp_min=0.03)      \n#     else:\n#         model, optimizer, lr_scheduler, temperature_scheduler = \\\n#             load_checkpoint(args, logdir, model, optimizer, lr_scheduler)\n\n#     # def load_ckp(checkpoint_fpath, model, optimizer):\n#     # checkpoint = torch.load(checkpoint_fpath)\n#     # model.load_state_dict(checkpoint['state_dict'])\n#     # optimizer.load_state_dict(checkpoint['optimizer'])\n#     # return model, optimizer, checkpoint['epoch']\n\n\n#     print('Model is initialized.')\n#     print('learning rate: ', args.lr)\n#     checkpoint_dir = join(logdir, 'checkpoint')\n#     if not isdir(checkpoint_dir):\n#         makedirs(checkpoint_dir)\n#     with open(join(checkpoint_dir, 'args.pickle'), 'wb') as f:\n#         pickle.dump(args, f)\n#     print('EPOCHS: ', args.num_epochs)\n#     print('Training started.\\n')\n#     train_loss_task, train_aoe_task, train_foe_task = [], [], []\n#     val_loss_task, val_aoe_task, val_foe_task = [], [], []\n#     hist = {}\n#     for epoch in range(1, args.num_epochs+1):\n#         model.train()\n#         epoch_start_time = time.time()\n#         tau = temperature_scheduler.step()\n#         train_loss_epoch, train_aoe_epoch, train_foe_epoch, train_loss_mask_epoch = [], [], [], []\n#         for batch_idx, batch in enumerate(loader_train):\n#             obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n#             v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt = batch\n#             if args.rotation_pattern is not None:\n#                 (v_obs, A_obs, v_pred_gt, A_pred_gt), _ = \\\n#                     random_rotate_graph(args, v_obs, A_obs, v_pred_gt, A_pred_gt)\n#             v_obs, A_obs, v_pred_gt, attn_mask_obs, loss_mask_rel = \\\n#                 v_obs.to(device), A_obs.to(device), v_pred_gt.to(device), \\\n#                 attn_mask_obs.to(device), loss_mask_rel.to(device)\n#             if args.deterministic:\n#                 sampling = False\n#             else:\n#                 sampling = True\n#             results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=False, sampling=sampling, device=device)\n#             gaussian_params_pred, x_sample_pred, info = results\n#             loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n#             loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n#             if args.deterministic:\n#                 loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n#                 offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, 
loss_mask_rel_pred)\n#                 loss = offset_error_sq.sum()/eventual_loss_mask.sum()\n#             else:\n#                 loss = negative_log_likelihood(gaussian_params_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)  \n#             train_loss_epoch.append(loss.detach().to('cpu').item())\n#             loss = loss / args.batch_size\n#             loss.backward()\n#             aoe = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n#             foe = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n#             train_aoe_epoch.append(aoe.detach().to('cpu').numpy())\n#             train_foe_epoch.append(foe.detach().to('cpu').numpy())\n#             train_loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n\n#             if args.clip_grad is not None:\n#                 torch.nn.utils.clip_grad_norm_(\n#                     model.parameters(), args.clip_grad)\n#             optimizer.step()\n#             optimizer.zero_grad()\n\n#         lr_scheduler.step()\n#         train_aoe_epoch, train_foe_epoch, train_loss_mask_epoch = \\\n#             np.concatenate(train_aoe_epoch, axis=0), \\\n#             np.concatenate(train_foe_epoch, axis=0), \\\n#             np.concatenate(train_loss_mask_epoch, axis=0)\n#         train_loss_epoch, train_aoe_epoch, train_foe_epoch = \\\n#             np.mean(train_loss_epoch), \\\n#             train_aoe_epoch.sum()/train_loss_mask_epoch.sum(), \\\n#             train_foe_epoch.sum()/train_loss_mask_epoch.sum()\n#         train_loss_task.append(train_loss_epoch)\n#         train_aoe_task.append(train_aoe_epoch)\n#         train_foe_task.append(train_foe_epoch)\n#         training_epoch_period = time.time() - epoch_start_time\n#         training_epoch_period_per_sample = training_epoch_period/len(loader_train)\n\n#         val_loss_epoch, val_aoe_epoch, val_foe_epoch = inference(loader_val, model, args, mode='val', tau=tau, device=device)\n#         val_loss_task.append(val_loss_epoch)\n#         val_aoe_task.append(val_aoe_epoch)\n#         val_foe_task.append(val_foe_epoch)\n#         print('Epoch: {0} | train loss: {1:.4f} | val loss: {2:.4f} | train aoe: {3:.4f} | val aoe: {4:.4f} | train foe: {5:.4f} | val foe: {6:.4f} | period: {7:.2f} sec | time per sample: {8:.4f} sec'\\\n#                         .format(epoch, train_loss_epoch, val_loss_epoch,\\\n#                         train_aoe_epoch, val_aoe_epoch,\\\n#                         train_foe_epoch, val_foe_epoch,\\\n#                         training_epoch_period, training_epoch_period_per_sample))\n#         if epoch % 10 == 0:\n#             model_filename = join(checkpoint_dir, 'epoch_'+str(epoch)+'.pt')\n#             torch.save({\n#                     'epoch': epoch,\n#                     'model_state_dict': model.state_dict(),\n#                     'optimizer_state_dict': optimizer.state_dict(),\n#                     'train_loss_task': train_loss_task,\n#                     'val_loss_task': val_loss_task,\n#                     'train_aoe_task': train_aoe_task,\n#                     'val_aoe_task': val_aoe_task, \n#                     'train_foe_task': train_foe_task,\n#                     'val_foe_task': val_foe_task,\n#                     'training_date': datetime.today().strftime('%y%m%d'),\n#                     }, model_filename)\n#             print('epoch_'+str(epoch)+'.pt is saved.')\n#             hist['epoch'] = epoch\n#             hist['train_loss'], 
hist['val_loss'] = train_loss_task, val_loss_task\n#             hist['train_aoe'], hist['val_aoe'] = train_aoe_task, val_aoe_task\n#             hist['train_foe'], hist['val_foe'] = train_foe_task, val_foe_task\n#             with open(join(checkpoint_dir, 'train_hist.pickle'), 'wb') as f:\n#                 pickle.dump(hist, f)\n#                 print(join(checkpoint_dir, 'train_hist.pickle')+' is saved.')\n#         writer.add_scalars('loss', {'train': train_loss_task[-1], 'val': val_loss_task[-1]}, epoch)\n#         writer.add_scalars('aoe', {'train': train_aoe_task[-1], 'val': val_aoe_task[-1]}, epoch)\n#         writer.add_scalars('foe', {'train': train_foe_task[-1], 'val': val_foe_task[-1]}, epoch)\n#     return\n\n    \n\n\n# def inference(loader, model, args, mode='val', tau=1., device='cuda:0'):\n#     with torch.no_grad():\n#         model.eval()\n#         epoch_start_time = time.time()\n#         loss_epoch, aoe_epoch, foe_epoch, loss_mask_epoch = [], [], [], []\n\n#         for batch_idx, batch in enumerate(loader):\n#             obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n#             v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt = batch\n#             v_obs, A_obs, v_pred_gt, attn_mask_obs, loss_mask_rel = \\\n#                 v_obs.to(device), A_obs.to(device), v_pred_gt.to(device), \\\n#                 attn_mask_obs.to(device), loss_mask_rel.to(device)\n#             if mode == 'val':\n#                 if args.deterministic:\n#                     sampling = False\n#                 else:\n#                     sampling = True\n#                 results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=False, sampling=sampling, device=device)\n#                 gaussian_params_pred, x_sample_pred, info = results\n#                 loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n#                 loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n#                 if args.deterministic:\n#                     loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n#                     offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n#                     loss = offset_error_sq.sum()/eventual_loss_mask.sum()\n#                 else:\n#                     loss = negative_log_likelihood(gaussian_params_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n#                 aoe = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n#                 foe = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n#             else:\n#                 raise RuntimeError('We now only support val mode.')\n#             loss_epoch.append(loss.detach().to('cpu').item())\n#             aoe_epoch.append(aoe.detach().to('cpu').numpy())\n#             foe_epoch.append(foe.detach().to('cpu').numpy())\n#             loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n\n#         aoe_epoch, foe_epoch, loss_mask_epoch = \\\n#             np.concatenate(aoe_epoch, axis=0), \\\n#             np.concatenate(foe_epoch, axis=0), \\\n#             np.concatenate(loss_mask_epoch, axis=0)\n\n#         loss_epoch, aoe_epoch, foe_epoch = \\\n#             np.mean(loss_epoch), \\\n#             aoe_epoch.sum()/loss_mask_epoch.sum(), \\\n#             foe_epoch.sum()/loss_mask_epoch.sum()\n#         return 
loss_epoch, aoe_epoch, foe_epoch\n\n\n# if __name__ == \"__main__\":\n#     args = arg_parse()\n#     if args.temporal == \"lstm\" or args.temporal == \"faster_lstm\":\n#         from src.gumbel_social_transformer.st_model import st_model, offset_error_square_full_partial\n#     elif args.temporal == \"temporal_convolution_net\":\n#         from src.gumbel_social_transformer.st_model_tcn import st_model, offset_error_square_full_partial\n#     else:\n#         raise RuntimeError('The temporal component is not lstm nor tcn nor faster_lstm.')\n#     main(args)"
  },
  {
    "path": "gst_updated/scripts/experiments/eval.py",
    "content": "import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport csv\nimport numpy as np\nfrom src.mgnn.utils import arg_parse, average_offset_error, final_offset_error, args2writername, load_batch_dataset\nfrom src.gumbel_social_transformer.st_model import st_model, offset_error_square_full_partial,\\\n    negative_log_likelihood_full_partial\n\n\ndef main(args):\n    print('\\n\\n')\n    print('-'*50)\n    torch.manual_seed(args.random_seed)\n    np.random.seed(args.random_seed)\n    device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n    loader_test = load_batch_dataset(args, pathhack.pkg_path, subfolder='test')\n    if args.dataset == 'sdd':\n        loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='test') # no val for sdd\n    else:\n        loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='val')\n    print('dataset: ', args.dataset)\n    writername = args2writername(args)\n    logdir = join(pathhack.pkg_path, 'results', writername, args.dataset)\n    if not isdir(logdir):\n        raise RuntimeError('The result directory was not found.')\n    checkpoint_dir = join(logdir, 'checkpoint')\n    model_filename = 'epoch_'+str(args.num_epochs)+'.pt'\n    with open(join(checkpoint_dir, 'args.pickle'), 'rb') as f:\n        args_eval = pickle.load(f)\n    model = st_model(args_eval, device=device).to(device)\n    model_checkpoint = torch.load(join(checkpoint_dir, model_filename), map_location=device)\n    model.load_state_dict(model_checkpoint['model_state_dict'])\n    print('Loaded configuration: ', writername)\n    print('The best validation losses printed below should be the same.')\n    print('Validation loss in the checkpoint: ', model_checkpoint['val_loss_epoch'])\n    val_loss_epoch, val_aoe_epoch, val_foe_epoch = inference(loader_val, model, args, mode='val', tau=0.03, device=device)\n    print('Validation loss from loaded model: ', val_loss_epoch)\n    test_loss_epoch, test_aoe, test_foe, test_aoe_std, test_foe_std, test_aoe_min, test_foe_min = inference(loader_test, model, args, mode='test', tau=0.03, device=device)\n    print('Test loss from loaded model: ', test_loss_epoch)\n    print('dataset: {0} | test aoe: {1:.4f} | test aoe std: {2:.4f} | test foe: {3:.4f} | test foe std: {4:.4f} | min aoe: {5:.4f}, min foe: {6:.4f}'\\\n                        .format(args.dataset, test_aoe, test_aoe_std, test_foe, test_foe_std, test_aoe_min, test_foe_min))\n    # dataset_list = ['eth', 'hotel', 'univ', 'zara1', 'zara2']\n    # dataset_idx = dataset_list.index(args.dataset)\n    # csv_row_data = np.ones((4,5))*999999.\n    # csv_row_data[:,dataset_idx] = np.array([test_aoe, test_foe, test_aoe_std, test_foe_std])\n    # csv_row_data = csv_row_data.reshape(-1)\n    # csv_filename = join(checkpoint_dir, '../..', 'test_performance.csv')\n    # with open(csv_filename, mode='a') as test_file:\n    #     test_writer = csv.writer(test_file, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL)\n    #     test_writer.writerow(csv_row_data)\n    #     print(csv_filename+' is written.')\n\n\ndef inference(loader, model, args, mode='val', tau=1., device='cuda:0'):\n    with torch.no_grad():\n        model.eval()\n        loss_epoch, aoe_epoch, foe_epoch, loss_mask_epoch = [], [], [], []\n        aoe_mean_epoch, aoe_std_epoch, foe_mean_epoch, foe_std_epoch = [], [], [], []\n        aoe_min_epoch, foe_min_epoch = [], []\n\n        for batch_idx, batch in enumerate(loader):\n            obs_traj, 
pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n            v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt = batch\n            v_obs, A_obs, v_pred_gt, attn_mask_obs, loss_mask_rel = \\\n                v_obs.to(device), A_obs.to(device), v_pred_gt.to(device), \\\n                attn_mask_obs.to(device), loss_mask_rel.to(device)\n            if mode == 'val':\n                results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=False, sampling=False, device=device)\n                gaussian_params_pred, x_sample_pred, info = results\n                loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n                loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n                loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n                if args.deterministic:\n                    offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                    loss = offset_error_sq.sum()/eventual_loss_mask.sum()\n                else:\n                    prob_loss, eventual_loss_mask = negative_log_likelihood_full_partial(gaussian_params_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                    loss = prob_loss.sum()/eventual_loss_mask.sum()\n                aoe = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                foe = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n            \n            elif mode == 'test':\n                if args.deterministic:\n                    sampling = False\n                else:\n                    sampling = True\n                best_aoe_mean = 999999.\n                aoe, foe, loss = [], [], []\n                for _ in range(20):\n                    results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=True, sampling=sampling, device=device)\n                    gaussian_params_pred, x_sample_pred, info = results\n                    loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n                    loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n                    loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n                    if args.deterministic:\n                        offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                        loss_tmp = offset_error_sq.sum()/eventual_loss_mask.sum()\n                    else:\n                        prob_loss, eventual_loss_mask = negative_log_likelihood_full_partial(gaussian_params_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                        loss_tmp = prob_loss.sum()/eventual_loss_mask.sum()\n                    aoe_tmp = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                    foe_tmp = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                    aoe.append(aoe_tmp)\n                    foe.append(foe_tmp)\n                    loss.append(loss_tmp)\n                aoe = torch.stack(aoe, dim=0)\n                foe = torch.stack(foe, dim=0)\n                aoe = aoe.sum(1)\n                foe = foe.sum(1)\n                aoe_mean = aoe.mean()\n                aoe_std = aoe.std()\n                
foe_mean = foe.mean()\n                foe_std = foe.std()\n                aoe_min, foe_min = aoe.min(), foe.min()\n                loss = sum(loss)/len(loss)\n            else:\n                raise RuntimeError('We now only support val and test mode.')\n            \n            if mode == 'val':\n                loss_epoch.append(loss.detach().to('cpu').item())\n                aoe_epoch.append(aoe.detach().to('cpu').numpy())\n                foe_epoch.append(foe.detach().to('cpu').numpy())\n                loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n            elif mode == 'test':\n                loss_epoch.append(loss.detach().to('cpu').item())\n                aoe_mean_epoch.append(aoe_mean.item())\n                foe_mean_epoch.append(foe_mean.item())\n                aoe_std_epoch.append(aoe_std.item())\n                foe_std_epoch.append(foe_std.item())\n                aoe_min_epoch.append(aoe_min.item())\n                foe_min_epoch.append(foe_min.item())\n                loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n            else:\n                raise RuntimeError('We only support val and test mode.')\n        \n        if mode == 'val':\n            aoe_epoch, foe_epoch, loss_mask_epoch = \\\n                np.concatenate(aoe_epoch, axis=0), \\\n                np.concatenate(foe_epoch, axis=0), \\\n                np.concatenate(loss_mask_epoch, axis=0)\n            loss_epoch, aoe_epoch, foe_epoch = \\\n                np.mean(loss_epoch), \\\n                aoe_epoch.sum()/loss_mask_epoch.sum(), \\\n                foe_epoch.sum()/loss_mask_epoch.sum()\n            return loss_epoch, aoe_epoch, foe_epoch\n        elif mode == 'test':\n            loss_mask_epoch = np.concatenate(loss_mask_epoch, axis=0)\n            aoe = sum(aoe_mean_epoch)/loss_mask_epoch.sum()\n            foe = sum(foe_mean_epoch)/loss_mask_epoch.sum()\n            aoe_std = sum(aoe_std_epoch)/loss_mask_epoch.sum()\n            foe_std = sum(foe_std_epoch)/loss_mask_epoch.sum()\n            aoe_min = sum(aoe_min_epoch)/loss_mask_epoch.sum()\n            foe_min = sum(foe_min_epoch)/loss_mask_epoch.sum()\n            loss_epoch = np.mean(loss_epoch)\n            return loss_epoch, aoe, foe, aoe_std, foe_std, aoe_min, foe_min\n        else:\n            raise RuntimeError('We now only support val mode.')    \n\n\nif __name__ == \"__main__\":\n    args = arg_parse()\n    main(args)"
  },
  {
    "path": "gst_updated/scripts/experiments/eval_trajnet.py",
    "content": "import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport csv\nimport numpy as np\nfrom src.mgnn.utils import arg_parse, average_offset_error, final_offset_error, args2writername, load_batch_dataset\nfrom src.gumbel_social_transformer.st_model import st_model, offset_error_square_full_partial,\\\n    negative_log_likelihood_full_partial\n\n\ndef main(args):\n    print('\\n\\n')\n    print('-'*50)\n    torch.manual_seed(args.random_seed)\n    np.random.seed(args.random_seed)\n    device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n    loader_test = load_batch_dataset(args, pathhack.pkg_path, subfolder='test')\n    if args.dataset == 'sdd':\n        loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='test') # no val for sdd\n    else:\n        loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='val')\n    print('dataset: ', args.dataset)\n    writername = args2writername(args)\n    logdir = join(pathhack.pkg_path, 'results', writername, args.dataset)\n    if not isdir(logdir):\n        raise RuntimeError('The result directory was not found.')\n    checkpoint_dir = join(logdir, 'checkpoint')\n    model_filename = 'epoch_'+str(args.num_epochs)+'.pt'\n    with open(join(checkpoint_dir, 'args.pickle'), 'rb') as f:\n        args_eval = pickle.load(f)\n    model = st_model(args_eval, device=device).to(device)\n    model_checkpoint = torch.load(join(checkpoint_dir, model_filename), map_location=device)\n    model.load_state_dict(model_checkpoint['model_state_dict'])\n    print('Loaded configuration: ', writername)\n    print('The best validation losses printed below should be the same.')\n    print('Validation loss in the checkpoint: ', model_checkpoint['val_loss_epoch'])\n    val_loss_epoch, val_aoe_epoch, val_foe_epoch = inference(loader_val, model, args, mode='val', tau=0.03, device=device)\n    print('Validation loss from loaded model: ', val_loss_epoch)\n    test_loss_epoch, test_aoe, test_foe, test_aoe_std, test_foe_std, test_aoe_min, test_foe_min = inference(loader_test, model, args, mode='test', tau=0.03, device=device)\n    print('Test loss from loaded model: ', test_loss_epoch)\n    print('dataset: {0} | test aoe: {1:.4f} | test aoe std: {2:.4f} | test foe: {3:.4f} | test foe std: {4:.4f} | min aoe: {5:.4f}, min foe: {6:.4f}'\\\n                        .format(args.dataset, test_aoe, test_aoe_std, test_foe, test_foe_std, test_aoe_min, test_foe_min))\n    dataset_list = ['eth', 'hotel', 'univ', 'zara1', 'zara2']\n    dataset_idx = dataset_list.index(args.dataset)\n    csv_row_data = np.ones((4,5))*999999.\n    csv_row_data[:,dataset_idx] = np.array([test_aoe, test_foe, test_aoe_std, test_foe_std])\n    csv_row_data = csv_row_data.reshape(-1)\n    csv_filename = join(checkpoint_dir, '../..', 'test_performance.csv')\n    with open(csv_filename, mode='a') as test_file:\n        test_writer = csv.writer(test_file, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL)\n        test_writer.writerow(csv_row_data)\n        print(csv_filename+' is written.')\n\n\ndef test(loader, model, tau=0.03, device='cuda:0'):\n    with torch.no_grad():\n        model.eval()\n        for batch_idx, batch in enumerate(loader):\n            obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n            v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt, frame_id_seq, ped_id_list = batch\n            v_obs, A_obs, v_pred_gt, attn_mask_obs, loss_mask_rel = 
\\\n                v_obs.to(device), A_obs.to(device), v_pred_gt.to(device), \\\n                attn_mask_obs.to(device), loss_mask_rel.to(device)\n            \n            results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=False, sampling=False, device=device)\n            gaussian_params_pred, x_sample_pred, info = results\n            import pdb; pdb.set_trace()\n            # loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n            # loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n            # pos_pred = torch.cumsum(x_sample_pred, dim=1)\n            # pos_target = torch.cumsum(x_target_m, dim=1)\n\n\n                # loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n                # loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n\n\n            # gaussian_params_pred, x_sample_pred, info = results\n            #     loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n            #     loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n    \n            break\n\ndef inference(loader, model, args, mode='val', tau=1., device='cuda:0'):\n    with torch.no_grad():\n        model.eval()\n        loss_epoch, aoe_epoch, foe_epoch, loss_mask_epoch = [], [], [], []\n        aoe_mean_epoch, aoe_std_epoch, foe_mean_epoch, foe_std_epoch = [], [], [], []\n        aoe_min_epoch, foe_min_epoch = [], []\n\n        for batch_idx, batch in enumerate(loader):\n            obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n            v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt = batch\n            v_obs, A_obs, v_pred_gt, attn_mask_obs, loss_mask_rel = \\\n                v_obs.to(device), A_obs.to(device), v_pred_gt.to(device), \\\n                attn_mask_obs.to(device), loss_mask_rel.to(device)\n            if mode == 'val':\n                results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=False, sampling=False, device=device)\n                gaussian_params_pred, x_sample_pred, info = results\n                loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n                loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n                loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n                if args.deterministic:\n                    offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                    loss = offset_error_sq.sum()/eventual_loss_mask.sum()\n                else:\n                    prob_loss, eventual_loss_mask = negative_log_likelihood_full_partial(gaussian_params_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                    loss = prob_loss.sum()/eventual_loss_mask.sum()\n                aoe = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                foe = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n            \n            elif mode == 'test':\n                if args.deterministic:\n                    sampling = False\n                else:\n                    sampling = True\n                best_aoe_mean = 999999.\n                aoe, foe, loss = [], [], []\n                for _ in range(20):\n                    results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=True, sampling=sampling, device=device)\n    
                gaussian_params_pred, x_sample_pred, info = results\n                    loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n                    loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n                    loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n                    if args.deterministic:\n                        offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                        loss_tmp = offset_error_sq.sum()/eventual_loss_mask.sum()\n                    else:\n                        prob_loss, eventual_loss_mask = negative_log_likelihood_full_partial(gaussian_params_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                        loss_tmp = prob_loss.sum()/eventual_loss_mask.sum()\n                    aoe_tmp = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                    foe_tmp = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                    aoe.append(aoe_tmp)\n                    foe.append(foe_tmp)\n                    loss.append(loss_tmp)\n                aoe = torch.stack(aoe, dim=0)\n                foe = torch.stack(foe, dim=0)\n                aoe = aoe.sum(1)\n                foe = foe.sum(1)\n                aoe_mean = aoe.mean()\n                aoe_std = aoe.std()\n                foe_mean = foe.mean()\n                foe_std = foe.std()\n                aoe_min, foe_min = aoe.min(), foe.min()\n                loss = sum(loss)/len(loss)\n            else:\n                raise RuntimeError('We now only support val and test mode.')\n            \n            if mode == 'val':\n                loss_epoch.append(loss.detach().to('cpu').item())\n                aoe_epoch.append(aoe.detach().to('cpu').numpy())\n                foe_epoch.append(foe.detach().to('cpu').numpy())\n                loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n            elif mode == 'test':\n                loss_epoch.append(loss.detach().to('cpu').item())\n                aoe_mean_epoch.append(aoe_mean.item())\n                foe_mean_epoch.append(foe_mean.item())\n                aoe_std_epoch.append(aoe_std.item())\n                foe_std_epoch.append(foe_std.item())\n                aoe_min_epoch.append(aoe_min.item())\n                foe_min_epoch.append(foe_min.item())\n                loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n            else:\n                raise RuntimeError('We only support val and test mode.')\n        \n        if mode == 'val':\n            aoe_epoch, foe_epoch, loss_mask_epoch = \\\n                np.concatenate(aoe_epoch, axis=0), \\\n                np.concatenate(foe_epoch, axis=0), \\\n                np.concatenate(loss_mask_epoch, axis=0)\n            loss_epoch, aoe_epoch, foe_epoch = \\\n                np.mean(loss_epoch), \\\n                aoe_epoch.sum()/loss_mask_epoch.sum(), \\\n                foe_epoch.sum()/loss_mask_epoch.sum()\n            return loss_epoch, aoe_epoch, foe_epoch\n        elif mode == 'test':\n            loss_mask_epoch = np.concatenate(loss_mask_epoch, axis=0)\n            aoe = sum(aoe_mean_epoch)/loss_mask_epoch.sum()\n            foe = sum(foe_mean_epoch)/loss_mask_epoch.sum()\n            aoe_std = sum(aoe_std_epoch)/loss_mask_epoch.sum()\n            foe_std = 
sum(foe_std_epoch)/loss_mask_epoch.sum()\n            aoe_min = sum(aoe_min_epoch)/loss_mask_epoch.sum()\n            foe_min = sum(foe_min_epoch)/loss_mask_epoch.sum()\n            loss_epoch = np.mean(loss_epoch)\n            return loss_epoch, aoe, foe, aoe_std, foe_std, aoe_min, foe_min\n        else:\n            raise RuntimeError('We now only support val and test mode.')\n\n\nif __name__ == \"__main__\":\n    args = arg_parse()\n    main(args)"
  },
  {
    "path": "gst_updated/scripts/experiments/load_trajnet_test_data.py",
    "content": "import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport csv\nimport numpy as np\nfrom src.mgnn.utils import arg_parse, average_offset_error, final_offset_error, args2writername, load_batch_dataset\nfrom src.gumbel_social_transformer.st_model import st_model, offset_error_square_full_partial,\\\n    negative_log_likelihood_full_partial\nfrom torch.utils.data import DataLoader\nimport ndjson\n\n# def load_single_batch_dataset_eth_ucy(args, pkg_path, subfolder='train', num_workers=4, shuffle=None):\nshuffle = None\nsubfolder = 'test'\n# dset_name = 'orca_synth'\n# dset_name = 'collision_test'\nwritername = \"100-gumbel_social_transformer-faster_lstm-lr_0.001-deterministic-init_temp_0.5-edge_head_4-ebd_64-snl_1-snh_8-ghost-seed_1000\"\n# \"100-gumbel_social_transformer-faster_lstm-lr_0.001-deterministic-init_temp_0.5-edge_head_1-ebd_64-snl_1-snh_8-ghost-seed_1000\"\n\n# \"100-gumbel_social_transformer-faster_lstm-lr_0.001-deterministic-init_temp_0.5-edge_head_8-ebd_64-snl_1-snh_8-ghost-seed_1000\"\nfor dset_name in ['orca_synth', 'collision_test']:\n    result_filename = dset_name+'_dset_test_trajectories.pt'\n    # result_filename = 'orca_synth_dset_test_trajectories.pt'\n    # result_filename = 'collision_test_dset_test_trajectories.pt'\n    dataset_folderpath = join(pathhack.pkg_path, 'datasets/trajnet++/test')\n    dset = torch.load(join(dataset_folderpath, result_filename))\n\n\n    dataset_folderpath = join(pathhack.pkg_path, 'datasets/trajnet++/test')\n    ndjson_filepath = join(dataset_folderpath, 'synth_data', dset_name+'.ndjson')\n    # ndjson_filepath = join(dataset_folderpath, 'synth_data', 'orca_synth.ndjson')\n    # ndjson_filepath = join(dataset_folderpath, 'synth_data', 'collision_test.ndjson')\n    scene_posts = []\n    with open(ndjson_filepath) as f:\n        reader = ndjson.reader(f)\n        for post in reader:\n            if \"scene\" in post.keys():\n                scene_posts.append(post)\n\n    if shuffle is None:\n        if subfolder == 'train':\n            shuffle = True\n        else:\n            shuffle = False\n    dloader = DataLoader(\n        dset,\n        batch_size=1,\n        shuffle=shuffle,\n        num_workers=4)\n\n    frame_diff = 1\n    for batch_idx, batch in enumerate(dloader):\n        obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n        v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt, frame_id_seq, ped_id_list = batch\n        print(v_obs)\n        print(frame_id_seq)\n        print(ped_id_list)\n        print(pred_traj_gt)\n        break\n    device = \"cuda:0\"\n    \n    logdir = join(pathhack.pkg_path, 'results', writername, \"synth\") \n    checkpoint_dir = join(logdir, 'checkpoint')\n    model_filename = 'epoch_'+str(100)+'.pt'\n    with open(join(checkpoint_dir, 'args.pickle'), 'rb') as f:\n        args_eval = pickle.load(f)\n\n    model = st_model(args_eval, device=device).to(device)\n    model_checkpoint = torch.load(join(checkpoint_dir, model_filename), map_location=device)\n    model.load_state_dict(model_checkpoint['model_state_dict'])\n    print('Loaded configuration: ', writername)\n    print('The best validation losses printed below should be the same.')\n    print('Validation loss in the checkpoint: ', model_checkpoint['val_loss_epoch'])\n\n    ndjson_lines = []\n    with torch.no_grad():\n        model.eval()\n        for batch_idx, batch in enumerate(dloader):\n            if batch_idx % 100 == 0:\n                
print(batch_idx)\n            obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n            v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt, frame_id_seq, ped_id_list = batch\n            v_obs, A_obs, attn_mask_obs, loss_mask_rel = \\\n                v_obs.to(device), A_obs.to(device), \\\n                attn_mask_obs.to(device), loss_mask_rel.to(device)\n            \n            results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=0.03, hard=True, sampling=False, device=device)\n            gaussian_params_pred, x_sample_pred, info = results\n\n            obs_traj = obs_traj.to(\"cpu\")\n            x_sample_pred = x_sample_pred.to(\"cpu\").permute(0, 2, 3, 1) #[1, 6, 2, 12]\n            pred_traj = torch.cat((obs_traj[:,:,:,-1:], x_sample_pred), dim=3) # [1, 6, 2, 13]\n            pred_traj = torch.cumsum(pred_traj, dim=3)\n\n            start_frame_id = int(frame_id_seq.item())\n            pred_traj = pred_traj[:,:,:,1:] # [1, 6, 2, 12]\n            loss_mask_rel_full_partial = info['loss_mask_rel_full_partial'][0]\n            # {\"track\": {\"f\": 370, \"p\": 3, \"x\": -1.17, \"y\": 4.24, \"prediction_number\": 0, \"scene_id\": 0}}\n            for i in range(len(loss_mask_rel_full_partial)):\n                if loss_mask_rel_full_partial[i]:\n                    for tt in range(9, 21): # 9-20\n                        ndjson_line = {}\n                        ndjson_line[\"track\"] = {}\n                        ndjson_line[\"track\"][\"f\"] = start_frame_id + int(frame_diff)*tt\n                        ndjson_line[\"track\"][\"p\"] = int(ped_id_list[i].item())\n                        ndjson_line[\"track\"][\"x\"] = round(pred_traj[0,i,0,tt-9].item(), 2)\n                        ndjson_line[\"track\"][\"y\"] = round(pred_traj[0,i,1,tt-9].item(), 2)\n                        ndjson_line[\"track\"][\"prediction_number\"] = 0\n                        ndjson_line[\"track\"][\"scene_id\"] = batch_idx\n                        ndjson_lines.append(ndjson_line)\n        print(\"final scene id: \", batch_idx)\n\n    # with open('./collision_test_predictions.ndjson', 'w') as f:\n    # with open('./orca_synth_predictions.ndjson', 'w') as f:\n    with open('./'+dset_name+'_predictions.ndjson', 'w') as f:\n        writer = ndjson.writer(f, ensure_ascii=False)\n        for post in scene_posts:\n            writer.writerow(post)\n        for post in ndjson_lines:\n            writer.writerow(post)\n\n# import pdb; pdb.set_trace()\n\"\"\"     \n# import pdb; pdb.set_trace()\n# break\nndjson_lines = []\n# ndjson_line = {}\n# ndjson_line[\"track\"] = {}\nstart_frame_id = int(frame_id_seq.item())\n# ndjson_line[\"track\"][\"p\"] = int(frame_id_seq.item()[0])\n\n\n# {\"track\": {\"f\": 113, \"p\": 19973, \"x\": -0.95, \"y\": -1.86}}\npred_traj = pred_traj[:,:,:,1:] # [1, 6, 2, 12]\nloss_mask_rel_full_partial = info['loss_mask_rel_full_partial'][0]\n# {\"track\": {\"f\": 370, \"p\": 3, \"x\": -1.17, \"y\": 4.24, \"prediction_number\": 0, \"scene_id\": 0}}\nfor i in range(len(loss_mask_rel_full_partial)):\n    if loss_mask_rel_full_partial[i]:\n        for tt in range(9, 21): # 9-20\n            ndjson_line = {}\n            ndjson_line[\"track\"] = {}\n            ndjson_line[\"track\"][\"f\"] = start_frame_id + int(frame_diff)*tt\n            ndjson_line[\"track\"][\"p\"] = int(ped_id_list[i].item())\n            ndjson_line[\"track\"][\"x\"] = round(pred_traj[0,i,0,tt-9], 2)\n            ndjson_line[\"track\"][\"y\"] = 
round(pred_traj[0,i,0,tt-9], 2)\n            ndjson_line[\"track\"][\"prediction_number\"] = 0\n            ndjson_line[\"track\"][\"scene_id\"] = batch_idx\n\"\"\"\n\n\n# import matplotlib .pyplot as plt\n# (Pdb) fig, ax = plt.subplots()\n# (Pdb) [ax.plot(pred_traj[0,i,0,:], pred_traj[0,i,1,:], '.-') for i in range(6)]\n# [[<matplotlib.lines.Line2D object at 0x7f9317e107b8>], [<matplotlib.lines.Line2D object at 0x7f93bc3eab38>], [<matplotlib.lines.Line2D object at 0x7f93bc3eada0>], [<matplotlib.lines.Line2D object at 0x7f93bc403048>], [<matplotlib.lines.Line2D object at 0x7f93bc403320>], [<matplotlib.lines.Line2D object at 0x7f93bc4035f8>]]\n# (Pdb) fig.savefig(\"tmp4.jpg\")\n\n\n# import ndjson\n\n# # Streaming lines from ndjson file:\n# with open('./posts.ndjson') as f:\n#     reader = ndjson.reader(f)\n\n#     for post in reader:\n#         print(post)\n\n# # Writing items to a ndjson file\n# with open('./posts.ndjson', 'w') as f:\n#     writer = ndjson.writer(f, ensure_ascii=False)\n\n#     for post in posts:\n#         writer.writerow(post)\n\n\n# print(pred_traj.shape)\n# obs_traj\n# obs_traj # (1,6,2,8)\n# x_sample_pred torch.Size([1, 12, 6, 2])\n# pos_pred = torch.cumsum(x_sample_pred, dim=1)\n# pos_target = torch.cumsum(x_target_m, dim=1)\n\n# self.obs_traj[start:end, :], self.pred_traj[start:end, :],\n#             self.obs_traj_rel[start:end, :], self.pred_traj_rel[start:end, :],\n#             self.loss_mask_rel[start:end, :], self.loss_mask[start:end, :],\n#             self.v_obs[index], self.A_obs[index],\n#             self.v_pred[index], self.A_pred[index],\n#             self.attn_mask_obs[index], self.attn_mask_pred[index],\n#         ]\n#     for mode in ['test']:\n#         dataset_filepath = join(dataset_folderpath, 'biwi_eth.ndjson')\n#         dset = TrajectoriesDataset(\n#             dataset_filepath,\n#             obs_seq_len=args.obs_seq_len,\n#             pred_seq_len=args.pred_seq_len,\n#             skip=1,\n#             invalid_value=args.invalid_value,\n#             mode=mode,\n#         )\n#         result_filename = 'biwi_eth_dset_'+mode+'_trajectories.pt'\n#         torch.save(dset, join(dataset_folderpath, '..', result_filename))\n#         print(join(dataset_folderpath, '..', result_filename)+' is created.')\n\n\n\n\n# def main(args):\n#     print('\\n\\n')\n#     print('-'*50)\n#     torch.manual_seed(args.random_seed)\n#     np.random.seed(args.random_seed)\n#     device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n#     loader_test = load_batch_dataset(args, pathhack.pkg_path, subfolder='test')\n#     if args.dataset == 'sdd':\n#         loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='test') # no val for sdd\n#     else:\n#         loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='val')\n#     print('dataset: ', args.dataset)\n#     writername = args2writername(args)\n#     logdir = join(pathhack.pkg_path, 'results', writername, args.dataset)\n#     if not isdir(logdir):\n#         raise RuntimeError('The result directory was not found.')\n#     checkpoint_dir = join(logdir, 'checkpoint')\n#     model_filename = 'epoch_'+str(args.num_epochs)+'.pt'\n#     with open(join(checkpoint_dir, 'args.pickle'), 'rb') as f:\n#         args_eval = pickle.load(f)\n#     model = st_model(args_eval, device=device).to(device)\n#     model_checkpoint = torch.load(join(checkpoint_dir, model_filename), map_location=device)\n#     model.load_state_dict(model_checkpoint['model_state_dict'])\n#     
print('Loaded configuration: ', writername)\n#     print('The best validation losses printed below should be the same.')\n#     print('Validation loss in the checkpoint: ', model_checkpoint['val_loss_epoch'])\n#     val_loss_epoch, val_aoe_epoch, val_foe_epoch = inference(loader_val, model, args, mode='val', tau=0.03, device=device)\n#     print('Validation loss from loaded model: ', val_loss_epoch)\n#     test_loss_epoch, test_aoe, test_foe, test_aoe_std, test_foe_std, test_aoe_min, test_foe_min = inference(loader_test, model, args, mode='test', tau=0.03, device=device)\n#     print('Test loss from loaded model: ', test_loss_epoch)\n#     print('dataset: {0} | test aoe: {1:.4f} | test aoe std: {2:.4f} | test foe: {3:.4f} | test foe std: {4:.4f} | min aoe: {5:.4f}, min foe: {6:.4f}'\\\n#                         .format(args.dataset, test_aoe, test_aoe_std, test_foe, test_foe_std, test_aoe_min, test_foe_min))\n#     dataset_list = ['eth', 'hotel', 'univ', 'zara1', 'zara2']\n#     dataset_idx = dataset_list.index(args.dataset)\n#     csv_row_data = np.ones((4,5))*999999.\n#     csv_row_data[:,dataset_idx] = np.array([test_aoe, test_foe, test_aoe_std, test_foe_std])\n#     csv_row_data = csv_row_data.reshape(-1)\n#     csv_filename = join(checkpoint_dir, '../..', 'test_performance.csv')\n#     with open(csv_filename, mode='a') as test_file:\n#         test_writer = csv.writer(test_file, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL)\n#         test_writer.writerow(csv_row_data)\n#         print(csv_filename+' is written.')\n\n\n# def inference(loader, model, args, mode='val', tau=1., device='cuda:0'):\n#     with torch.no_grad():\n#         model.eval()\n#         loss_epoch, aoe_epoch, foe_epoch, loss_mask_epoch = [], [], [], []\n#         aoe_mean_epoch, aoe_std_epoch, foe_mean_epoch, foe_std_epoch = [], [], [], []\n#         aoe_min_epoch, foe_min_epoch = [], []\n\n#         for batch_idx, batch in enumerate(loader):\n#             obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n#             v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt = batch\n#             v_obs, A_obs, v_pred_gt, attn_mask_obs, loss_mask_rel = \\\n#                 v_obs.to(device), A_obs.to(device), v_pred_gt.to(device), \\\n#                 attn_mask_obs.to(device), loss_mask_rel.to(device)\n#             if mode == 'val':\n#                 results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=False, sampling=False, device=device)\n#                 gaussian_params_pred, x_sample_pred, info = results\n#                 loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n#                 loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n#                 loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n#                 if args.deterministic:\n#                     offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n#                     loss = offset_error_sq.sum()/eventual_loss_mask.sum()\n#                 else:\n#                     prob_loss, eventual_loss_mask = negative_log_likelihood_full_partial(gaussian_params_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n#                     loss = prob_loss.sum()/eventual_loss_mask.sum()\n#                 aoe = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n#                 foe = 
final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n            \n#             elif mode == 'test':\n#                 if args.deterministic:\n#                     sampling = False\n#                 else:\n#                     sampling = True\n#                 best_aoe_mean = 999999.\n#                 aoe, foe, loss = [], [], []\n#                 for _ in range(20):\n#                     results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=True, sampling=sampling, device=device)\n#                     gaussian_params_pred, x_sample_pred, info = results\n#                     loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n#                     loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n#                     loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n#                     if args.deterministic:\n#                         offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n#                         loss_tmp = offset_error_sq.sum()/eventual_loss_mask.sum()\n#                     else:\n#                         prob_loss, eventual_loss_mask = negative_log_likelihood_full_partial(gaussian_params_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n#                         loss_tmp = prob_loss.sum()/eventual_loss_mask.sum()\n#                     aoe_tmp = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n#                     foe_tmp = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n#                     aoe.append(aoe_tmp)\n#                     foe.append(foe_tmp)\n#                     loss.append(loss_tmp)\n#                 aoe = torch.stack(aoe, dim=0)\n#                 foe = torch.stack(foe, dim=0)\n#                 aoe = aoe.sum(1)\n#                 foe = foe.sum(1)\n#                 aoe_mean = aoe.mean()\n#                 aoe_std = aoe.std()\n#                 foe_mean = foe.mean()\n#                 foe_std = foe.std()\n#                 aoe_min, foe_min = aoe.min(), foe.min()\n#                 loss = sum(loss)/len(loss)\n#             else:\n#                 raise RuntimeError('We now only support val and test mode.')\n            \n#             if mode == 'val':\n#                 loss_epoch.append(loss.detach().to('cpu').item())\n#                 aoe_epoch.append(aoe.detach().to('cpu').numpy())\n#                 foe_epoch.append(foe.detach().to('cpu').numpy())\n#                 loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n#             elif mode == 'test':\n#                 loss_epoch.append(loss.detach().to('cpu').item())\n#                 aoe_mean_epoch.append(aoe_mean.item())\n#                 foe_mean_epoch.append(foe_mean.item())\n#                 aoe_std_epoch.append(aoe_std.item())\n#                 foe_std_epoch.append(foe_std.item())\n#                 aoe_min_epoch.append(aoe_min.item())\n#                 foe_min_epoch.append(foe_min.item())\n#                 loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n#             else:\n#                 raise RuntimeError('We only support val and test mode.')\n        \n#         if mode == 'val':\n#             aoe_epoch, foe_epoch, loss_mask_epoch = \\\n#                 np.concatenate(aoe_epoch, axis=0), \\\n#                 np.concatenate(foe_epoch, axis=0), 
\\\n#                 np.concatenate(loss_mask_epoch, axis=0)\n#             loss_epoch, aoe_epoch, foe_epoch = \\\n#                 np.mean(loss_epoch), \\\n#                 aoe_epoch.sum()/loss_mask_epoch.sum(), \\\n#                 foe_epoch.sum()/loss_mask_epoch.sum()\n#             return loss_epoch, aoe_epoch, foe_epoch\n#         elif mode == 'test':\n#             loss_mask_epoch = np.concatenate(loss_mask_epoch, axis=0)\n#             aoe = sum(aoe_mean_epoch)/loss_mask_epoch.sum()\n#             foe = sum(foe_mean_epoch)/loss_mask_epoch.sum()\n#             aoe_std = sum(aoe_std_epoch)/loss_mask_epoch.sum()\n#             foe_std = sum(foe_std_epoch)/loss_mask_epoch.sum()\n#             aoe_min = sum(aoe_min_epoch)/loss_mask_epoch.sum()\n#             foe_min = sum(foe_min_epoch)/loss_mask_epoch.sum()\n#             loss_epoch = np.mean(loss_epoch)\n#             return loss_epoch, aoe, foe, aoe_std, foe_std, aoe_min, foe_min\n#         else:\n#             raise RuntimeError('We now only support val mode.')    \n\n\n# if __name__ == \"__main__\":\n#     args = arg_parse()\n#     main(args)"
  },
  {
    "path": "gst_updated/scripts/experiments/pathhack.py",
    "content": "import sys\nfrom os.path import abspath, join, split\nfile_path = split(abspath(__file__))[0]\npkg_path = join(file_path, '../..')\nsys.path.append(pkg_path)\n"
  },
  {
    "path": "gst_updated/scripts/experiments/test.py",
    "content": "from gst_updated.scripts.data import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport csv\nimport numpy as np\nfrom gst_updated.src.mgnn.utils import arg_parse, average_offset_error, final_offset_error, args2writername, load_batch_dataset\nfrom gst_updated.src.gumbel_social_transformer.st_model import st_model, offset_error_square_full_partial,\\\n    negative_log_likelihood_full_partial\n\n\ndef main(args):\n    print('\\n\\n')\n    print('-'*50)\n    torch.manual_seed(args.random_seed)\n    np.random.seed(args.random_seed)\n    device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n    loader_test = load_batch_dataset(args, pathhack.pkg_path, subfolder='test')\n    print('dataset: ', args.dataset)\n    writername = args2writername(args)\n    logdir = join(pathhack.pkg_path, 'results', writername, args.dataset)\n    logdir = '/home/shuijing/Desktop/CrowdNav_Prediction/gst_updated/results/100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000/sj'\n    if not isdir(logdir):\n        raise RuntimeError('The result directory was not found.')\n    checkpoint_dir = join(logdir, 'checkpoint')\n    model_filename = 'epoch_'+str(args.num_epochs)+'.pt'\n    with open(join(checkpoint_dir, 'args.pickle'), 'rb') as f:\n        args_eval = pickle.load(f)\n    model = st_model(args_eval, device=device).to(device)\n    model_checkpoint = torch.load(join(checkpoint_dir, model_filename), map_location=device)\n    model.load_state_dict(model_checkpoint['model_state_dict'])\n    print('Loaded configuration: ', writername)\n    test_loss_epoch, test_aoe, test_foe, test_aoe_std, test_foe_std, test_aoe_min, test_foe_min = inference(loader_test, model, args, mode='test', tau=0.03, device=device)\n    print('Test loss from loaded model: ', test_loss_epoch)\n    print('dataset: {0} | test aoe: {1:.4f} | test aoe std: {2:.4f} | test foe: {3:.4f} | test foe std: {4:.4f} | min aoe: {5:.4f}, min foe: {6:.4f}'\\\n                        .format(args.dataset, test_aoe, test_aoe_std, test_foe, test_foe_std, test_aoe_min, test_foe_min))\n    # dataset_list = ['eth', 'hotel', 'univ', 'zara1', 'zara2']\n    # dataset_idx = dataset_list.index(args.dataset)\n    # csv_row_data = np.ones((4,5))*999999.\n    # csv_row_data[:,dataset_idx] = np.array([test_aoe, test_foe, test_aoe_std, test_foe_std])\n    # csv_row_data = csv_row_data.reshape(-1)\n    # csv_filename = join(checkpoint_dir, '../..', 'test_performance.csv')\n    # with open(csv_filename, mode='a') as test_file:\n    #     test_writer = csv.writer(test_file, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL)\n    #     test_writer.writerow(csv_row_data)\n    #     print(csv_filename+' is written.')\n\n\ndef inference(loader, model, args, mode='val', tau=1., device='cuda:0'):\n    with torch.no_grad():\n        model.eval()\n        loss_epoch, aoe_epoch, foe_epoch, loss_mask_epoch = [], [], [], []\n        aoe_mean_epoch, aoe_std_epoch, foe_mean_epoch, foe_std_epoch = [], [], [], []\n        aoe_min_epoch, foe_min_epoch = [], []\n\n        for batch_idx, batch in enumerate(loader):\n            obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n            v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt = batch\n            v_obs, A_obs, v_pred_gt, attn_mask_obs, loss_mask_rel = \\\n                v_obs.to(device), A_obs.to(device), v_pred_gt.to(device), \\\n                
attn_mask_obs.to(device), loss_mask_rel.to(device)\n            if mode == 'val':\n                results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=False, sampling=False, device=device)\n                gaussian_params_pred, x_sample_pred, info = results\n                loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n                loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n                loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n                if args.deterministic:\n                    offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                    loss = offset_error_sq.sum()/eventual_loss_mask.sum()\n                else:\n                    prob_loss, eventual_loss_mask = negative_log_likelihood_full_partial(gaussian_params_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                    loss = prob_loss.sum()/eventual_loss_mask.sum()\n                aoe = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                foe = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n            \n            elif mode == 'test':\n                if args.deterministic:\n                    sampling = False\n                else:\n                    sampling = True\n                best_aoe_mean = 999999.\n                aoe, foe, loss = [], [], []\n                for _ in range(20):\n                    results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=True, sampling=sampling, device=device)\n                    gaussian_params_pred, x_sample_pred, info = results\n                    loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n                    loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n                    loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n                    if args.deterministic:\n                        offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                        loss_tmp = offset_error_sq.sum()/eventual_loss_mask.sum()\n                    else:\n                        prob_loss, eventual_loss_mask = negative_log_likelihood_full_partial(gaussian_params_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                        loss_tmp = prob_loss.sum()/eventual_loss_mask.sum()\n                    aoe_tmp = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                    foe_tmp = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                    aoe.append(aoe_tmp)\n                    foe.append(foe_tmp)\n                    loss.append(loss_tmp)\n                aoe = torch.stack(aoe, dim=0)\n                foe = torch.stack(foe, dim=0)\n                aoe = aoe.sum(1)\n                foe = foe.sum(1)\n                aoe_mean = aoe.mean()\n                aoe_std = aoe.std()\n                foe_mean = foe.mean()\n                foe_std = foe.std()\n                aoe_min, foe_min = aoe.min(), foe.min()\n                loss = sum(loss)/len(loss)\n            else:\n                raise RuntimeError('We now only support val and test mode.')\n            \n            if mode == 'val':\n                
loss_epoch.append(loss.detach().to('cpu').item())\n                aoe_epoch.append(aoe.detach().to('cpu').numpy())\n                foe_epoch.append(foe.detach().to('cpu').numpy())\n                loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n            elif mode == 'test':\n                loss_epoch.append(loss.detach().to('cpu').item())\n                aoe_mean_epoch.append(aoe_mean.item())\n                foe_mean_epoch.append(foe_mean.item())\n                aoe_std_epoch.append(aoe_std.item())\n                foe_std_epoch.append(foe_std.item())\n                aoe_min_epoch.append(aoe_min.item())\n                foe_min_epoch.append(foe_min.item())\n                loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n            else:\n                raise RuntimeError('We only support val and test mode.')\n        \n        if mode == 'val':\n            aoe_epoch, foe_epoch, loss_mask_epoch = \\\n                np.concatenate(aoe_epoch, axis=0), \\\n                np.concatenate(foe_epoch, axis=0), \\\n                np.concatenate(loss_mask_epoch, axis=0)\n            loss_epoch, aoe_epoch, foe_epoch = \\\n                np.mean(loss_epoch), \\\n                aoe_epoch.sum()/loss_mask_epoch.sum(), \\\n                foe_epoch.sum()/loss_mask_epoch.sum()\n            return loss_epoch, aoe_epoch, foe_epoch\n        elif mode == 'test':\n            loss_mask_epoch = np.concatenate(loss_mask_epoch, axis=0)\n            aoe = sum(aoe_mean_epoch)/loss_mask_epoch.sum()\n            foe = sum(foe_mean_epoch)/loss_mask_epoch.sum()\n            aoe_std = sum(aoe_std_epoch)/loss_mask_epoch.sum()\n            foe_std = sum(foe_std_epoch)/loss_mask_epoch.sum()\n            aoe_min = sum(aoe_min_epoch)/loss_mask_epoch.sum()\n            foe_min = sum(foe_min_epoch)/loss_mask_epoch.sum()\n            loss_epoch = np.mean(loss_epoch)\n            return loss_epoch, aoe, foe, aoe_std, foe_std, aoe_min, foe_min\n        else:\n            raise RuntimeError('We now only support val mode.')    \n\n\nif __name__ == \"__main__\":\n    # args = arg_parse()\n    # main(args)\n    from crowd_nav.configs.config import Config\n    config = Config()\n    main(config.pred)"
  },
  {
    "path": "gst_updated/scripts/experiments/test_gst.py",
    "content": "from gst_updated.scripts.data import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport csv\nimport numpy as np\nfrom gst_updated.src.mgnn.utils import arg_parse, average_offset_error, final_offset_error, args2writername, load_batch_dataset\nfrom gst_updated.src.gumbel_social_transformer.st_model import st_model, offset_error_square_full_partial,\\\n    negative_log_likelihood_full_partial\n\n\ndef main(args):\n    print('\\n\\n')\n    print('-'*50)\n    torch.manual_seed(args.random_seed)\n    np.random.seed(args.random_seed)\n    device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n    loader_test = load_batch_dataset(args, pathhack.pkg_path, subfolder='test')\n    print('dataset: ', args.dataset)\n    writername = args2writername(args)\n    logdir = join(pathhack.pkg_path, 'results', writername, args.dataset)\n    logdir = '/home/shuijing/Desktop/CrowdNav_Prediction/gst_updated/results/100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000/sj'\n    if not isdir(logdir):\n        raise RuntimeError('The result directory was not found.')\n    checkpoint_dir = join(logdir, 'checkpoint')\n    model_filename = 'epoch_'+str(args.num_epochs)+'.pt'\n    with open(join(checkpoint_dir, 'args.pickle'), 'rb') as f:\n        args_eval = pickle.load(f)\n    model = st_model(args_eval, device=device).to(device)\n    model_checkpoint = torch.load(join(checkpoint_dir, model_filename), map_location=device)\n    model.load_state_dict(model_checkpoint['model_state_dict'])\n    print('Loaded configuration: ', writername)\n    test_loss_epoch, test_aoe, test_foe, test_aoe_std, test_foe_std, test_aoe_min, test_foe_min = inference(loader_test, model, args, mode='test', tau=0.03, device=device)\n    print('Test loss from loaded model: ', test_loss_epoch)\n    print('dataset: {0} | test aoe: {1:.4f} | test aoe std: {2:.4f} | test foe: {3:.4f} | test foe std: {4:.4f} | min aoe: {5:.4f}, min foe: {6:.4f}'\\\n                        .format(args.dataset, test_aoe, test_aoe_std, test_foe, test_foe_std, test_aoe_min, test_foe_min))\n    # dataset_list = ['eth', 'hotel', 'univ', 'zara1', 'zara2']\n    # dataset_idx = dataset_list.index(args.dataset)\n    # csv_row_data = np.ones((4,5))*999999.\n    # csv_row_data[:,dataset_idx] = np.array([test_aoe, test_foe, test_aoe_std, test_foe_std])\n    # csv_row_data = csv_row_data.reshape(-1)\n    # csv_filename = join(checkpoint_dir, '../..', 'test_performance.csv')\n    # with open(csv_filename, mode='a') as test_file:\n    #     test_writer = csv.writer(test_file, delimiter=',', quotechar='\"', quoting=csv.QUOTE_MINIMAL)\n    #     test_writer.writerow(csv_row_data)\n    #     print(csv_filename+' is written.')\n\n\ndef inference(loader, model, args, mode='val', tau=1., device='cuda:0'):\n    with torch.no_grad():\n        model.eval()\n        loss_epoch, aoe_epoch, foe_epoch, loss_mask_epoch = [], [], [], []\n        aoe_mean_epoch, aoe_std_epoch, foe_mean_epoch, foe_std_epoch = [], [], [], []\n        aoe_min_epoch, foe_min_epoch = [], []\n\n        for batch_idx, batch in enumerate(loader):\n            obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n            v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt = batch\n            v_obs, A_obs, v_pred_gt, attn_mask_obs, loss_mask_rel = \\\n                v_obs.to(device), A_obs.to(device), v_pred_gt.to(device), \\\n                
attn_mask_obs.to(device), loss_mask_rel.to(device)\n            if mode == 'val':\n                results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=False, sampling=False, device=device)\n                gaussian_params_pred, x_sample_pred, info = results\n                loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n                loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n                loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n                if args.deterministic:\n                    offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                    loss = offset_error_sq.sum()/eventual_loss_mask.sum()\n                else:\n                    prob_loss, eventual_loss_mask = negative_log_likelihood_full_partial(gaussian_params_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                    loss = prob_loss.sum()/eventual_loss_mask.sum()\n                aoe = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                foe = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n            \n            elif mode == 'test':\n                if args.deterministic:\n                    sampling = False\n                else:\n                    sampling = True\n                best_aoe_mean = 999999.\n                aoe, foe, loss = [], [], []\n                for _ in range(20):\n                    results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=True, sampling=sampling, device=device)\n                    gaussian_params_pred, x_sample_pred, info = results\n                    loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n                    loss_mask_rel_full_partial = info['loss_mask_rel_full_partial']\n                    loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n                    if args.deterministic:\n                        offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                        loss_tmp = offset_error_sq.sum()/eventual_loss_mask.sum()\n                    else:\n                        prob_loss, eventual_loss_mask = negative_log_likelihood_full_partial(gaussian_params_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                        loss_tmp = prob_loss.sum()/eventual_loss_mask.sum()\n                    aoe_tmp = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                    foe_tmp = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n                    aoe.append(aoe_tmp)\n                    foe.append(foe_tmp)\n                    loss.append(loss_tmp)\n                aoe = torch.stack(aoe, dim=0)\n                foe = torch.stack(foe, dim=0)\n                aoe = aoe.sum(1)\n                foe = foe.sum(1)\n                aoe_mean = aoe.mean()\n                aoe_std = aoe.std()\n                foe_mean = foe.mean()\n                foe_std = foe.std()\n                aoe_min, foe_min = aoe.min(), foe.min()\n                loss = sum(loss)/len(loss)\n            else:\n                raise RuntimeError('We now only support val and test mode.')\n            \n            if mode == 'val':\n                
loss_epoch.append(loss.detach().to('cpu').item())\n                aoe_epoch.append(aoe.detach().to('cpu').numpy())\n                foe_epoch.append(foe.detach().to('cpu').numpy())\n                loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n            elif mode == 'test':\n                loss_epoch.append(loss.detach().to('cpu').item())\n                aoe_mean_epoch.append(aoe_mean.item())\n                foe_mean_epoch.append(foe_mean.item())\n                aoe_std_epoch.append(aoe_std.item())\n                foe_std_epoch.append(foe_std.item())\n                aoe_min_epoch.append(aoe_min.item())\n                foe_min_epoch.append(foe_min.item())\n                loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n            else:\n                raise RuntimeError('We only support val and test mode.')\n        \n        if mode == 'val':\n            aoe_epoch, foe_epoch, loss_mask_epoch = \\\n                np.concatenate(aoe_epoch, axis=0), \\\n                np.concatenate(foe_epoch, axis=0), \\\n                np.concatenate(loss_mask_epoch, axis=0)\n            loss_epoch, aoe_epoch, foe_epoch = \\\n                np.mean(loss_epoch), \\\n                aoe_epoch.sum()/loss_mask_epoch.sum(), \\\n                foe_epoch.sum()/loss_mask_epoch.sum()\n            return loss_epoch, aoe_epoch, foe_epoch\n        elif mode == 'test':\n            loss_mask_epoch = np.concatenate(loss_mask_epoch, axis=0)\n            aoe = sum(aoe_mean_epoch)/loss_mask_epoch.sum()\n            foe = sum(foe_mean_epoch)/loss_mask_epoch.sum()\n            aoe_std = sum(aoe_std_epoch)/loss_mask_epoch.sum()\n            foe_std = sum(foe_std_epoch)/loss_mask_epoch.sum()\n            aoe_min = sum(aoe_min_epoch)/loss_mask_epoch.sum()\n            foe_min = sum(foe_min_epoch)/loss_mask_epoch.sum()\n            loss_epoch = np.mean(loss_epoch)\n            return loss_epoch, aoe, foe, aoe_std, foe_std, aoe_min, foe_min\n        else:\n            raise RuntimeError('We now only support val mode.')    \n\n\nif __name__ == \"__main__\":\n    # args = arg_parse()\n    # main(args)\n    from crowd_nav.configs.config import Config\n    config = Config()\n    main(config.pred)"
  },
  {
    "path": "gst_updated/scripts/experiments/train.py",
    "content": "import pathhack\nimport pickle\nimport time\nfrom os.path import join, isdir\nfrom os import makedirs\nimport torch\nimport numpy as np\nfrom tensorboardX import SummaryWriter\nfrom torch.optim.lr_scheduler import StepLR\nfrom src.mgnn.utils import arg_parse, average_offset_error, final_offset_error, random_rotate_graph, args2writername, load_batch_dataset\nfrom src.gumbel_social_transformer.temperature_scheduler import Temp_Scheduler\nfrom scripts.experiments.eval import inference\nfrom datetime import datetime\n\n\n\ndef main(args):\n    print('\\n\\n')\n    print('-'*50)\n    print('arguments: ', args)\n    torch.manual_seed(args.random_seed)\n    np.random.seed(args.random_seed)\n    if args.batch_size != 1:\n        raise RuntimeError(\"Batch size must be 1 for BatchTrajectoriesDataset.\")\n    if args.dataset == 'sdd' and args.rotation_pattern is not None:\n        raise RuntimeError(\"SDD should not allow rotation since it uses pixels.\")\n    device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n    print('device: ', device)\n    loader_train = load_batch_dataset(args, pathhack.pkg_path, subfolder='train')\n    if args.dataset == 'sdd':\n        loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='test') # no val for sdd\n    else:\n        loader_val = load_batch_dataset(args, pathhack.pkg_path, subfolder='val')\n    train_data_loaders = [loader_train, loader_val]\n    print('dataset: ', args.dataset)\n    writername = args2writername(args)\n    print('Config: ', writername)\n    logdir = join(pathhack.pkg_path, 'results', writername, args.dataset)\n    if isdir(logdir) and not args.resume_training:\n        print('Error: The result directory was already created and used.')\n        print('-'*50)\n        print('\\n\\n')\n        return\n    writer = SummaryWriter(logdir=logdir)\n    print('-'*50)\n    print('\\n\\n')\n    train(args, train_data_loaders, writer, logdir, device=device)\n    writer.close()\n\ndef train(args, data_loaders, writer, logdir, device='cuda:0'):\n    print('-'*50)\n    print('Training Phase')\n    print('-'*50, '\\n')\n    loader_train, loader_val = data_loaders\n    model = st_model(args, device=device).to(device)\n    optimizer = torch.optim.Adam(model.parameters(), lr=args.lr)\n    lr_scheduler = StepLR(optimizer, step_size=int(args.temp_epochs/4), gamma=0.3)\n    checkpoint_dir = join(logdir, 'checkpoint')\n    if args.resume_training:\n        if not isdir(checkpoint_dir):\n            raise RuntimeError(\"Checkpoint folder does not exist.\")\n        with open(join(checkpoint_dir, 'train_hist.pickle'), 'rb') as f:\n            hist = pickle.load(f)\n            print(join(checkpoint_dir, 'train_hist.pickle')+' is loaded.')\n        if args.resume_epoch is None:\n            checkpoint_epoch = hist['epoch']\n        else:\n            checkpoint_epoch = args.resume_epoch\n            hist['train_loss_task'], hist['val_loss_task'] = \\\n                hist['train_loss_task'][:checkpoint_epoch//args.save_epochs], hist['val_loss_task'][:checkpoint_epoch//args.save_epochs]\n            hist['train_aoe_task'], hist['val_aoe_task'] = \\\n                hist['train_aoe_task'][:checkpoint_epoch//args.save_epochs], hist['val_aoe_task'][:checkpoint_epoch//args.save_epochs]\n            hist['train_foe_task'], hist['val_foe_task'] = \\\n                hist['train_foe_task'][:checkpoint_epoch//args.save_epochs], hist['val_foe_task'][:checkpoint_epoch//args.save_epochs]\n        checkpoint = 
torch.load(join(checkpoint_dir, 'epoch_'+str(checkpoint_epoch)+'.pt'))\n        # load checkpoint\n        model.load_state_dict(checkpoint['model_state_dict'])\n        optimizer.load_state_dict(checkpoint['optimizer_state_dict'])\n        lr_scheduler.load_state_dict(checkpoint['lr_scheduler_state_dict'])\n        temperature_scheduler = Temp_Scheduler(args.temp_epochs, args.init_temp, args.init_temp, \\\n            temp_min=0.03, last_epoch=checkpoint_epoch-1)\n        start_epoch = checkpoint_epoch+1\n        print('Model, optimizer, lr_scheduler, and temperature scheduler are loaded.')\n        print('EPOCHS: '+str(start_epoch)+' to '+str(args.num_epochs))\n    else:\n        if not isdir(checkpoint_dir):\n            makedirs(checkpoint_dir)\n        with open(join(checkpoint_dir, 'args.pickle'), 'wb') as f:\n            pickle.dump(args, f)\n        temperature_scheduler = Temp_Scheduler(args.temp_epochs, args.init_temp, args.init_temp, temp_min=0.03)\n        hist = {}\n        hist['epoch'] = 0\n        hist['train_loss_task'], hist['val_loss_task'] = [], []\n        hist['train_aoe_task'], hist['val_aoe_task'] = [], []\n        hist['train_foe_task'], hist['val_foe_task'] = [], []\n        start_epoch = 1\n        print('Model, optimizer, lr_scheduler, and temperature scheduler are initialized.')\n        print('EPOCHS: '+str(start_epoch)+' to '+str(args.num_epochs))\n    print('Training started.\\n')\n    for epoch in range(start_epoch, args.num_epochs+1):\n        model.train()\n        epoch_start_time = time.time()\n        tau = temperature_scheduler.step()\n        train_loss_epoch, train_aoe_epoch, train_foe_epoch, train_loss_mask_epoch = [], [], [], []\n        batch_len = 0\n        valid_batch_len = 0\n        for batch_idx, batch in enumerate(loader_train):\n            batch_len+=1\n            obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n            v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt = batch\n            # print(v_obs.shape)\n            if v_obs.shape[2]>128:\n                continue\n            valid_batch_len += 1\n            if args.rotation_pattern is not None:\n                (v_obs, A_obs, v_pred_gt, A_pred_gt), _ = \\\n                    random_rotate_graph(args, v_obs, A_obs, v_pred_gt, A_pred_gt)\n            v_obs, A_obs, v_pred_gt, attn_mask_obs, loss_mask_rel = \\\n                v_obs.to(device), A_obs.to(device), v_pred_gt.to(device), \\\n                attn_mask_obs.to(device), loss_mask_rel.to(device)\n            results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=False, sampling=False, device=device)\n            gaussian_params_pred, x_sample_pred, info = results\n            loss_mask_per_pedestrian = info['loss_mask_per_pedestrian']\n            loss_mask_rel_full_partial = info['loss_mask_rel_full_partial'] # value depends on only_observe_full_period\n            loss_mask_rel_pred = loss_mask_rel[:,:,-args.pred_seq_len:]\n            if args.deterministic:\n                offset_error_sq, eventual_loss_mask = offset_error_square_full_partial(x_sample_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                loss = offset_error_sq.sum()/eventual_loss_mask.sum()\n            else:\n                prob_loss, eventual_loss_mask = negative_log_likelihood_full_partial(gaussian_params_pred, v_pred_gt, loss_mask_rel_full_partial, loss_mask_rel_pred)\n                loss = prob_loss.sum()/eventual_loss_mask.sum()\n\n       
     train_loss_epoch.append(loss.detach().to('cpu').item())\n            loss = loss / args.batch_size\n            loss.backward()\n            # only consider the fully detected pedestrians\n            aoe = average_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n            foe = final_offset_error(x_sample_pred, v_pred_gt, loss_mask=loss_mask_per_pedestrian)\n            train_aoe_epoch.append(aoe.detach().to('cpu').numpy())\n            train_foe_epoch.append(foe.detach().to('cpu').numpy())\n            train_loss_mask_epoch.append(loss_mask_per_pedestrian[0].detach().to('cpu').numpy())\n            if args.clip_grad is not None:\n                torch.nn.utils.clip_grad_norm_(\n                    model.parameters(), args.clip_grad)\n            optimizer.step()\n            optimizer.zero_grad()\n        lr_scheduler.step()\n        print(\"valid batch ratio: \", valid_batch_len/batch_len)\n        train_aoe_epoch, train_foe_epoch, train_loss_mask_epoch = \\\n            np.concatenate(train_aoe_epoch, axis=0), \\\n            np.concatenate(train_foe_epoch, axis=0), \\\n            np.concatenate(train_loss_mask_epoch, axis=0)\n        train_loss_epoch, train_aoe_epoch, train_foe_epoch = \\\n            np.mean(train_loss_epoch), \\\n            train_aoe_epoch.sum()/train_loss_mask_epoch.sum(), \\\n            train_foe_epoch.sum()/train_loss_mask_epoch.sum()\n        hist['train_loss_task'].append(train_loss_epoch)\n        hist['train_aoe_task'].append(train_aoe_epoch)\n        hist['train_foe_task'].append(train_foe_epoch)\n        training_epoch_period = time.time() - epoch_start_time\n        training_epoch_period_per_sample = training_epoch_period/len(loader_train)\n\n        val_loss_epoch, val_aoe_epoch, val_foe_epoch = inference(loader_val, model, args, mode='val', tau=tau, device=device)\n        hist['val_loss_task'].append(val_loss_epoch)\n        hist['val_aoe_task'].append(val_aoe_epoch)\n        hist['val_foe_task'].append(val_foe_epoch)\n        hist['epoch'] = epoch\n        print('Epoch: {0} | train loss: {1:.4f} | val loss: {2:.4f} | train aoe: {3:.4f} | val aoe: {4:.4f} | train foe: {5:.4f} | val foe: {6:.4f} | period: {7:.2f} sec | time per sample: {8:.4f} sec'\\\n                        .format(epoch, train_loss_epoch, val_loss_epoch,\\\n                        train_aoe_epoch, val_aoe_epoch,\\\n                        train_foe_epoch, val_foe_epoch,\\\n                        training_epoch_period, training_epoch_period_per_sample))\n        if epoch % args.save_epochs == 0:\n            model_filename = join(checkpoint_dir, 'epoch_'+str(epoch)+'.pt')\n            torch.save({\n                    'epoch': epoch,\n                    'model_state_dict': model.state_dict(),\n                    'optimizer_state_dict': optimizer.state_dict(),\n                    'lr_scheduler_state_dict': lr_scheduler.state_dict(),\n                    'train_loss_epoch': train_loss_epoch,\n                    'val_loss_epoch': val_loss_epoch,\n                    'train_aoe_epoch': train_aoe_epoch,\n                    'val_aoe_epoch': val_aoe_epoch, \n                    'train_foe_epoch': train_foe_epoch,\n                    'val_foe_epoch': val_foe_epoch,\n                    'training_date': datetime.today().strftime('%y%m%d'),\n                    }, model_filename)\n            print('epoch_'+str(epoch)+'.pt is saved.')\n            with open(join(checkpoint_dir, 'train_hist.pickle'), 'wb') as f:\n                pickle.dump(hist, f)\n    
            print(join(checkpoint_dir, 'train_hist.pickle')+' is saved.')\n        writer.add_scalars('loss', {'train': hist['train_loss_task'][-1], 'val': hist['val_loss_task'][-1]}, epoch)\n        writer.add_scalars('aoe', {'train': hist['train_aoe_task'][-1], 'val': hist['val_aoe_task'][-1]}, epoch)\n        writer.add_scalars('foe', {'train': hist['train_foe_task'][-1], 'val': hist['val_foe_task'][-1]}, epoch)\n    return\n\nif __name__ == \"__main__\":\n    args = arg_parse()\n    if args.temporal == \"lstm\" or args.temporal == \"faster_lstm\":\n        from src.gumbel_social_transformer.st_model import st_model, offset_error_square_full_partial, \\\n            negative_log_likelihood_full_partial\n    else:\n        raise RuntimeError('The temporal component is not lstm nor faster_lstm.')\n    main(args)"
  },
  {
    "path": "gst_updated/scripts/wrapper/crowd_nav_interface_multi_env_parallel.py",
    "content": "from gst_updated.scripts.data import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport numpy as np\nfrom gst_updated.src.mgnn.utils import seq_to_graph\nfrom gst_updated.src.gumbel_social_transformer.st_model import st_model\n\n\n\nclass CrowdNavPredInterfaceMultiEnv(object):\n\n    def __init__(self, load_path, device, config, num_env):\n        # *** Load model\n        self.args = config\n        self.device = device\n        self.nenv = num_env\n        if not isdir(load_path):\n            raise RuntimeError('The result directory was not found.')\n        checkpoint_dir = join(load_path, 'checkpoint')\n        with open(join(checkpoint_dir, 'args.pickle'), 'rb') as f:\n            args_eval = pickle.load(f)\n        # Uncomment if you want a fixed random seed.\n        # torch.manual_seed(args_eval.random_seed)\n        # np.random.seed(args_eval.random_seed)\n        self.model = st_model(args_eval, device=device).to(device)\n        model_filename = 'epoch_'+str(args_eval.num_epochs)+'.pt'\n        model_checkpoint = torch.load(join(checkpoint_dir, model_filename), map_location=device)\n        self.model.load_state_dict(model_checkpoint['model_state_dict'])\n        self.model.eval()\n        print(\"LOADED MODEL\")\n        print(\"device: \", device)\n        print()\n\n    def forward(self, input_traj,input_binary_mask, sampling = True):\n        \"\"\"\n        inputs:\n            - input_traj:\n                # numpy\n                # (n_env, num_peds, obs_seq_len, 2)\n            - input_binary_mask:\n                # numpy\n                # (n_env, num_peds, obs_seq_len, 1)\n                # Zhe: I think we should not just have the binary mask of shape (n_env, number of pedestrains, 1)\n                # because some agents are partially detected, and they should not be simply ignored.\n            - sampling:\n                # bool\n                # True means you sample from Gaussian.\n                # False means you choose to use the mean of Gaussian as output.\n        outputs:\n            - output_traj:\n                # torch \"cpu\"\n                # (n_env, num_peds, pred_seq_len, 5)\n                # where 5 includes [mu_x, mu_y, sigma_x, sigma_y, correlation coefficient]\n            - output_binary_mask:\n                # torch \"cpu\"\n                # (n_env, num_peds, 1)\n                # Zhe: this means for prediction, if an agent does not show up in the last and second\n                # last observation time step, then the agent will not be predicted.\n        \"\"\"\n        input_traj = input_traj.cpu().numpy()\n        input_binary_mask = input_binary_mask.cpu().numpy()\n        obs_seq_len = 5\n        pred_seq_len = 5\n        invalid_value = -999.\n\n        # *** Process input data\n        num_peds_batch = 0\n        b_num_peds = []\n        b_obs_traj, b_obs_traj_rel, b_loss_mask_rel, b_loss_mask, b_v_obs, b_A_obs, b_attn_mask_obs = \\\n            [],[],[],[],[],[],[]\n        for i in range(self.nenv):\n            input_traj_i = input_traj[i]\n            input_binary_mask_i = input_binary_mask[i]\n            obs_traj = np.transpose(input_traj_i, (0,2,1)) # (num_peds, 2, obs_seq_len)\n            n_peds = obs_traj.shape[0]\n            loss_mask_obs = input_binary_mask_i[:,:,0] # (num_peds, obs_seq_len)\n            loss_mask_rel_obs = loss_mask_obs[:,:-1] * loss_mask_obs[:,-1:]\n            loss_mask_rel_obs = np.concatenate((loss_mask_obs[:,0:1], loss_mask_rel_obs), axis=1) # 
(num_peds, obs_seq_len)\n            loss_mask_rel_pred = (np.ones((n_peds, pred_seq_len)) * loss_mask_rel_obs[:,-1:]).astype('bool') # (num_peds)\n            obs_traj_rel = obs_traj[:,:,1:] - obs_traj[:,:,:-1]\n            obs_traj_rel = np.concatenate((np.zeros((n_peds, 2, 1)), obs_traj_rel), axis=2)\n            obs_traj_rel = invalid_value*np.ones_like(obs_traj_rel)*(1-loss_mask_rel_obs[:,np.newaxis,:]) \\\n                + obs_traj_rel*loss_mask_rel_obs[:,np.newaxis,:]\n            obs_traj = torch.from_numpy(obs_traj)\n            obs_traj_rel = torch.from_numpy(obs_traj_rel)\n            # v_obs, A_obs = seq_to_graph(obs_traj, obs_traj_rel, attn_mech='rel_conv')\n            # v_obs, A_obs = v_obs.unsqueeze(0), A_obs.unsqueeze(0)\n            loss_mask_rel = np.concatenate((loss_mask_rel_obs, loss_mask_rel_pred), axis=1)[np.newaxis,:,:] # (1, num_peds, seq_len)\n            loss_mask_rel = torch.from_numpy(loss_mask_rel)\n            loss_mask_rel_obs = torch.from_numpy(loss_mask_rel_obs[np.newaxis,:,:])\n            attn_mask_obs = []\n            for tt in range(obs_seq_len):\n                loss_mask_rel_tt = loss_mask_rel_obs[0,:,tt] # (n_peds,)\n                attn_mask_obs.append(torch.outer(loss_mask_rel_tt, loss_mask_rel_tt).float()) # (n_peds, n_peds)\n            attn_mask_obs = torch.stack(attn_mask_obs, dim=0).unsqueeze(0) # (1,obs_seq_len, n_peds, n_peds)\n            obs_traj = obs_traj.unsqueeze(0)\n            obs_traj_rel = obs_traj_rel.unsqueeze(0)\n            loss_mask_pred = loss_mask_rel_pred\n            loss_mask = np.concatenate((loss_mask_obs, loss_mask_pred), axis=1)\n            loss_mask = torch.from_numpy(loss_mask).unsqueeze(0)\n            num_peds_batch += obs_traj.shape[1]\n            b_num_peds.append(obs_traj.shape[1])\n            b_obs_traj.append(obs_traj[0].float())\n            b_obs_traj_rel.append(obs_traj_rel[0].float())\n            b_loss_mask_rel.append(loss_mask_rel[0].float())\n            b_loss_mask.append(loss_mask[0].float())\n            # b_v_obs.append(v_obs[0].float())\n            # b_A_obs.append(A_obs[0].float())\n            b_attn_mask_obs.append(attn_mask_obs[0].float())\n        b_obs_traj = torch.cat(b_obs_traj, dim=0)\n        b_obs_traj_rel = torch.cat(b_obs_traj_rel, dim=0)\n        b_loss_mask_rel = torch.cat(b_loss_mask_rel, dim=0)\n        b_loss_mask = torch.cat(b_loss_mask, dim=0)\n        b_v_obs, b_A_obs = seq_to_graph(b_obs_traj, b_obs_traj_rel, attn_mech='rel_conv')\n        b_attn_mask_obs_tensor = torch.zeros(obs_seq_len, num_peds_batch, num_peds_batch).float()\n        curr_ped_idx = 0\n        for num_peds, attn_mask_obs_single in zip(b_num_peds, b_attn_mask_obs):\n            b_attn_mask_obs_tensor[:,curr_ped_idx:curr_ped_idx+num_peds, \\\n                curr_ped_idx:curr_ped_idx+num_peds] = attn_mask_obs_single\n            curr_ped_idx += num_peds\n        b_obs_traj = b_obs_traj.unsqueeze(0)\n        b_obs_traj_rel = b_obs_traj_rel.unsqueeze(0)\n        b_v_obs = b_v_obs.unsqueeze(0)\n        b_A_obs = b_A_obs.unsqueeze(0)\n        b_loss_mask_rel = b_loss_mask_rel.unsqueeze(0)\n        b_attn_mask_obs_tensor = b_attn_mask_obs_tensor.unsqueeze(0)\n        b_loss_mask = b_loss_mask.unsqueeze(0)\n        # print(\"PROCESSED DATA\")\n        # print(\"b_obs_traj shape: \", b_obs_traj.shape)\n        # print(\"b_obs_traj_rel shape: \", b_obs_traj_rel.shape)\n        # print(\"b_v_obs shape: \", b_v_obs.shape)\n        # print(\"b_A_obs shape: \", b_A_obs.shape)\n        # print(\"b_loss_mask_rel 
shape: \", b_loss_mask_rel.shape)\n        # print(\"b_attn_mask_obs_tensor shape: \", b_attn_mask_obs_tensor.shape)\n        # print(\"b_loss_mask shape: \", b_loss_mask.shape)\n        # print()\n\n        # *** Perform trajectory prediction\n        sampling = False\n        with torch.no_grad():\n\n            b_v_obs, b_A_obs, b_attn_mask_obs_tensor, b_loss_mask_rel = \\\n                b_v_obs.to(self.device), b_A_obs.to(self.device), \\\n                b_attn_mask_obs_tensor.float().to(self.device), b_loss_mask_rel.float().to(self.device)\n            results = self.model(b_v_obs, b_A_obs, b_attn_mask_obs_tensor, b_loss_mask_rel, tau=0.03, hard=True, sampling=sampling, device=self.device)\n            gaussian_params_pred, x_sample_pred, info = results\n        mu, sx, sy, corr = gaussian_params_pred\n        # print(mu.shape) # torch.Size([1, 5, 48, 2])\n        # print(sx.shape)\n        # print(sy.shape)\n        # print(corr.shape)\n        # print(b_obs_traj.shape)\n        # print(b_loss_mask.shape)\n        mu = mu.cumsum(1)\n        sx_squared = sx**2.\n        sy_squared = sy**2.\n        corr_sx_sy = corr*sx*sy\n        sx_squared_cumsum = sx_squared.cumsum(1)\n        sy_squared_cumsum = sy_squared.cumsum(1)\n        corr_sx_sy_cumsum = corr_sx_sy.cumsum(1)\n        sx_cumsum = sx_squared_cumsum**(1./2)\n        sy_cumsum = sy_squared_cumsum**(1./2)\n        corr_cumsum = corr_sx_sy_cumsum/(sx_cumsum*sy_cumsum)\n        mu_cumsum = mu.detach().to(\"cpu\") + np.transpose(b_obs_traj[:,:,:,-1:], (0,3,1,2)) # (batch, time, node, 2)\n        loss_mask_pred = b_loss_mask[:,:,obs_seq_len:].permute(0,2,1).unsqueeze(-1).bool()\n        mu_cumsum = mu_cumsum * loss_mask_pred + invalid_value*(~loss_mask_pred)\n        output_traj = torch.cat((mu_cumsum.detach().to(\"cpu\"), sx_cumsum.detach().to(\"cpu\"), sy_cumsum.detach().to(\"cpu\"), corr_cumsum.detach().to(\"cpu\")), dim=3)\n        output_traj = output_traj.permute(0, 2, 1, 3)[0] # (ped, time, 5)\n        output_binary_mask = loss_mask_pred[0,0,:,:].detach().to(\"cpu\") # (ped, 1)\n        num_total_peds = output_traj.shape[0]\n        output_traj = output_traj.reshape(self.nenv, num_total_peds // self.nenv, output_traj.shape[1], output_traj.shape[2]) # (n_env, ped, time, 5)\n        output_binary_mask = output_binary_mask.reshape(self.nenv, num_total_peds // self.nenv, 1) # (n_env, ped, 1)\n\n        output_traj, output_binary_mask = output_traj.to(self.device), output_binary_mask.to(self.device)\n        # reshape has been tested\n        # print(\"PERFORMED PREDICTION\")\n        # print(\"mu_cumsum shape: \", mu_cumsum.shape)\n        # print(\"sx_cumsum shape: \", sx_cumsum.shape)\n        # print(\"sy_cumsum shape: \", sy_cumsum.shape)\n        # print(\"corr_cumsum shape: \", corr_cumsum.shape)\n        # print(\"output_traj device: \", output_traj.device)\n        # print(\"output_binary_mask device: \", output_binary_mask.device)\n        # print(\"output_traj shape: \", output_traj.shape)\n        # print(\"output_binary_mask shape: \", output_binary_mask.shape)\n        # print()\n        return output_traj, output_binary_mask\n\nif __name__ == '__main__':\n    # *** Create an input that aligns with the format of CrowdNav\n    obs_seq_len = 5\n    pred_seq_len = 5\n    invalid_value = -999.\n    ped_step = 0.56\n    x_init = 2.\n    ped_0 = np.stack((x_init+np.arange((obs_seq_len-1)*ped_step, -ped_step, -ped_step),np.zeros(obs_seq_len)), 0)\n    ped_1 = -ped_0 # (2,8)\n    ped_3 = np.stack((np.zeros(obs_seq_len), 
x_init+np.arange((obs_seq_len-1)*ped_step, -ped_step, -ped_step)), 0)\n    ped_2 = -ped_3\n    ped_4 = np.ones((2,obs_seq_len))*invalid_value\n    ped_5 = np.ones((2,obs_seq_len))*invalid_value\n    obs_traj_undamaged = np.stack((ped_0,ped_1,ped_2,ped_3,ped_4,ped_5), axis=0) # (num_peds, 2, obs_seq_len)\n    n_peds = obs_traj_undamaged.shape[0]\n    original_obs_traj = obs_traj_undamaged # (num_peds, 2, obs_seq_len)\n    # 0: right, 1: left, 2: up, 3: down, 4,5: invalid agent\n    original_obs_traj[2,:,:2] = invalid_value\n    original_obs_traj[3,:,:2] = invalid_value\n    # * input_traj and input_binary_mask are example inputs from CrowdNav\n    input_traj = np.transpose(original_obs_traj, (0,2,1)) # (num_peds, obs_seq_len, 2)\n    input_binary_mask = (input_traj!=invalid_value).prod(2).astype('bool')[:,:,np.newaxis] # (num_peds, obs_seq_len, 1)\n\n    n_env = 8\n    input_traj = np.stack([input_traj for i in range(n_env)], axis=0) # (n_env, num_peds, obs_seq_len, 2)\n    input_binary_mask = np.stack([input_binary_mask for i in range(n_env)],axis=0) # (n_env, num_peds, obs_seq_len, 1)\n    print()\n    print(\"INPUT DATA\")\n    print(\"input_traj shape: \", input_traj.shape)\n    print(\"input_binary_mask shape:\", input_binary_mask.shape)\n    print()\n\n    device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n    args = \"100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000\"\n    model = CrowdNavPredInterfaceMultiEnv(load_path='/home/shuijing/Desktop/CrowdNav_Prediction/gst_updated/results/100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000/sj',\n                                          device=device, config = args, num_env=n_env)\n    output_traj, output_binary_mask = model.forward(\n        input_traj,\n        input_binary_mask,\n        sampling = True,\n    )\n    print()\n    print(\"OUTPUT DATA\")\n    print(\"output_traj shape: \", output_traj.shape)\n    print(\"output_binary_mask shape:\", output_binary_mask.shape)\n    print()"
  },
  {
    "path": "gst_updated/scripts/wrapper/crowd_nav_interface_multi_env_visualization_test_single_batch.py",
    "content": "import pathhack\nimport pickle\nfrom os.path import join, isdir\nimport torch\nimport numpy as np\nfrom src.mgnn.utils import seq_to_graph\nfrom src.gumbel_social_transformer.st_model import st_model\nimport matplotlib.pyplot as plt\n\n# from gst_updated.src.mgnn.utils import average_offset_error, final_offset_error, args2writername, load_batch_dataset\n# from gst_updated.src.gumbel_social_transformer.st_model import st_model, offset_error_square_full_partial,\\\n#     negative_log_likelihood_full_partial\nfrom torch.utils.data import DataLoader\n\ndef load_batch_dataset(pkg_path, subfolder='test', num_workers=4, shuffle=None):\n    result_filename = 'sj_dset_'+subfolder+'_trajectories.pt'\n    dataset_folderpath = join(pkg_path, 'datasets/shuijing/orca_20humans_fov')\n    dset = torch.load(join(dataset_folderpath, result_filename))\n    if shuffle is None:\n        if subfolder == 'train':\n            shuffle = True\n        else:\n            shuffle = False\n    dloader = DataLoader(\n        dset,\n        batch_size=1,\n        shuffle=shuffle,\n        num_workers=num_workers)\n    return dloader\n\nloader_test = load_batch_dataset(pathhack.pkg_path, subfolder='test')\nprint(len(loader_test))\n\ntest_batches = []\nmax_num_peds = 0\nfor batch_idx, batch in enumerate(loader_test):\n    if batch_idx % 1000 == 0 and len(test_batches) < 4 and batch_idx != 0:\n        obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n        v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt = batch\n        test_batches.append(batch)\n        print(\"obs traj shape: \", obs_traj.shape)\n        print(\"loss mask shape: \", loss_mask.shape)\n        if max_num_peds < obs_traj.shape[1]:\n            max_num_peds = obs_traj.shape[1]\n            print(\"max number of pedestrians: \", max_num_peds)\n\nwrapper_input_traj = []\nwrapper_input_binary_mask = []\nwrapper_output_traj_gt = []\nwrapper_output_binary_mask_gt = []\n\ninvalid_value = -999.\nfor batch in test_batches:\n    obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_rel_gt, loss_mask_rel, loss_mask, \\\n        v_obs, A_obs, v_pred_gt, A_pred_gt, attn_mask_obs, attn_mask_pred_gt = batch\n    input_traj = obs_traj[0].permute(0,2,1) # (n_peds, obs_seq_len,  2)\n    input_binary_mask = loss_mask[0,:,:input_traj.shape[1]].unsqueeze(2) # (n_peds, obs_seq_len, 1)\n    # print(input_binary_mask.shape)\n    output_traj_gt = pred_traj_gt[0].permute(0,2,1) # (num_peds, pred_seq_len, 2)\n    output_binary_mask_gt = loss_mask[0,:,input_traj.shape[1]:].unsqueeze(2) # (n_peds, pred_seq_len, 1)\n    n_peds, obs_seq_len, pred_seq_len = input_traj.shape[0], input_traj.shape[1], output_traj_gt.shape[1]\n    if n_peds < max_num_peds:\n        input_traj_complement = torch.ones(max_num_peds-n_peds, obs_seq_len, 2)*invalid_value\n        input_binary_mask_complement = torch.zeros(max_num_peds-n_peds, obs_seq_len, 1)\n        output_traj_gt_complement = torch.ones(max_num_peds-n_peds, pred_seq_len, 2)*invalid_value\n        output_binary_mask_gt_complement = torch.zeros(max_num_peds-n_peds, pred_seq_len, 1)\n\n        input_traj = torch.cat((input_traj, input_traj_complement), dim=0)\n        input_binary_mask = torch.cat((input_binary_mask, input_binary_mask_complement), dim=0)\n        output_traj_gt = torch.cat((output_traj_gt, output_traj_gt_complement), dim=0)\n        output_binary_mask_gt = torch.cat((output_binary_mask_gt, output_binary_mask_gt_complement), dim=0)\n    \n    
wrapper_input_traj.append(input_traj)\n    wrapper_input_binary_mask.append(input_binary_mask)\n    wrapper_output_traj_gt.append(output_traj_gt)\n    wrapper_output_binary_mask_gt.append(output_binary_mask_gt)\n\nwrapper_input_traj = torch.stack(wrapper_input_traj, dim=0)\nwrapper_input_binary_mask = torch.stack(wrapper_input_binary_mask, dim=0)\nwrapper_output_traj_gt = torch.stack(wrapper_output_traj_gt, dim=0)\nwrapper_output_binary_mask_gt = torch.stack(wrapper_output_binary_mask_gt, dim=0)\nprint(\"wrapper_input_traj shape: \", wrapper_input_traj.shape)\nprint(\"wrapper_input_binary_mask shape: \", wrapper_input_binary_mask.shape)  \nprint(\"wrapper_output_traj_gt shape: \", wrapper_output_traj_gt.shape)\nprint(\"wrapper_output_binary_mask_gt shape: \", wrapper_output_binary_mask_gt.shape)  \n# wrapper_input_traj shape:  torch.Size([4, 15, 5, 2])\n# wrapper_input_binary_mask shape:  torch.Size([4, 15, 5, 1])\n# wrapper_output_traj_gt shape:  torch.Size([4, 15, 5, 2])\n# wrapper_output_binary_mask_gt shape:  torch.Size([4, 15, 5, 1])\n\nn_env = wrapper_input_traj.shape[0]\ninput_traj = wrapper_input_traj.numpy()\ninput_binary_mask = wrapper_input_binary_mask.numpy()\nprint()\nprint(\"INPUT DATA\")\nprint(\"input_traj shape: \", input_traj.shape)\nprint(\"input_binary_mask shape:\", input_binary_mask.shape)\nprint()\n\n# *** Visualize input data\ndef visualize_trajectory(obs_traj, loss_mask, sample_index, obs_seq_len=5, pred_seq_len=5):\n    ##### Print Visualization Started #####\n    n_peds, seq_len = obs_traj.shape[1], obs_seq_len+pred_seq_len\n    full_ped_idx = torch.where(loss_mask.sum(2)[0]==seq_len)[0] # loss_mask tensor: (1, num_peds, seq_len)\n    fig, ax = plt.subplots()\n    # ax.set_xlim((-8, 8))\n    # ax.set_ylim((-5, 5))\n    fig.set_tight_layout(True)\n    for ped_idx in range(n_peds):\n        if ped_idx in full_ped_idx:\n            ax.plot(obs_traj[0, ped_idx, 0, :obs_seq_len], obs_traj[0, ped_idx, 1, :obs_seq_len], '.-', c='k') # black for obs\n        else:\n            for t_idx in range(seq_len):\n                if loss_mask[0,ped_idx,t_idx] == 1:\n                    if t_idx < obs_seq_len:\n                        # obs blue for partially detected pedestrians\n                        ax.plot(obs_traj[0, ped_idx, 0, t_idx], obs_traj[0, ped_idx, 1, t_idx], '.', c='b')\n    ax.set_aspect('equal', adjustable='box')\n    ax.plot()\n    fig.savefig(\"tmp_img_to_be_deleted_\"+str(sample_index)+\".png\")\n    print(\"tmp_img_to_be_deleted_\"+str(sample_index)+\".png is created.\")\n    return\n\n\n# visualize_trajectory(obs_traj, loss_mask, sample_index, obs_seq_len=obs_seq_len, pred_seq_len=obs_seq_len)\n\n\n\n\ndef CrowdNavPredInterfaceMultiEnv(\n    input_traj,\n    input_binary_mask,\n    sampling = True,\n):\n    \"\"\"\n    inputs:\n        - input_traj:\n            # numpy\n            # (n_env, num_peds, obs_seq_len, 2)\n        - input_binary_mask:\n            # numpy\n            # (n_env, num_peds, obs_seq_len, 1)\n            # Zhe: I think we should not just have the binary mask of shape (n_env, number of pedestrains, 1)\n            # because some agents are partially detected, and they should not be simply ignored.\n        - sampling:\n            # bool \n            # True means you sample from Gaussian.\n            # False means you choose to use the mean of Gaussian as output.\n    outputs:\n        - output_traj:\n            # torch \"cpu\"\n            # (n_env, num_peds, pred_seq_len, 5)\n            # where 5 includes [mu_x, mu_y, sigma_x, 
sigma_y, correlation coefficient]\n        - output_binary_mask:\n            # torch \"cpu\"\n            # (n_env, num_peds, 1)\n            # Zhe: this means for prediction, if an agent does not show up in the last and second \n            # last observation time step, then the agent will not be predicted.\n    \"\"\"\n    # *** Process input data\n    num_peds_batch = 0\n    b_num_peds = []\n    b_obs_traj, b_obs_traj_rel, b_loss_mask_rel, b_loss_mask, b_v_obs, b_A_obs, b_attn_mask_obs = \\\n        [],[],[],[],[],[],[]\n    n_env = 1\n    for i in range(n_env):\n        input_traj_i = input_traj[i]\n        input_binary_mask_i = input_binary_mask[i]\n        obs_traj = np.transpose(input_traj_i, (0,2,1)) # (num_peds, 2, obs_seq_len)\n        n_peds = obs_traj.shape[0]\n        loss_mask_obs = input_binary_mask_i[:,:,0] # (num_peds, obs_seq_len)\n        loss_mask_rel_obs = loss_mask_obs[:,:-1] * loss_mask_obs[:,-1:]\n        loss_mask_rel_obs = np.concatenate((loss_mask_obs[:,0:1], loss_mask_rel_obs), axis=1) # (num_peds, obs_seq_len)\n        # import pdb; pdb.set_trace()\n        loss_mask_rel_pred = (np.ones((n_peds, pred_seq_len)) * loss_mask_rel_obs[:,-1:]).astype('bool') # (num_peds)\n        obs_traj_rel = obs_traj[:,:,1:] - obs_traj[:,:,:-1]\n        obs_traj_rel = np.concatenate((np.zeros((n_peds, 2, 1)), obs_traj_rel), axis=2)\n        obs_traj_rel = invalid_value*np.ones_like(obs_traj_rel)*(1-loss_mask_rel_obs[:,np.newaxis,:]) \\\n            + obs_traj_rel*loss_mask_rel_obs[:,np.newaxis,:]\n        obs_traj = torch.from_numpy(obs_traj)\n        obs_traj_rel = torch.from_numpy(obs_traj_rel)\n        v_obs, A_obs = seq_to_graph(obs_traj, obs_traj_rel, attn_mech='rel_conv')\n        v_obs, A_obs = v_obs.unsqueeze(0), A_obs.unsqueeze(0)\n        loss_mask_rel = np.concatenate((loss_mask_rel_obs, loss_mask_rel_pred), axis=1)[np.newaxis,:,:] # (1, num_peds, seq_len)\n        loss_mask_rel = torch.from_numpy(loss_mask_rel)\n        loss_mask_rel_obs = torch.from_numpy(loss_mask_rel_obs[np.newaxis,:,:])\n        attn_mask_obs = []\n        for tt in range(obs_seq_len):\n            loss_mask_rel_tt = loss_mask_rel_obs[0,:,tt] # (n_peds,)\n            attn_mask_obs.append(torch.outer(loss_mask_rel_tt, loss_mask_rel_tt).float()) # (n_peds, n_peds)\n        attn_mask_obs = torch.stack(attn_mask_obs, dim=0).unsqueeze(0) # (1,obs_seq_len, n_peds, n_peds)\n        obs_traj = obs_traj.unsqueeze(0)\n        obs_traj_rel = obs_traj_rel.unsqueeze(0)\n        loss_mask_pred = loss_mask_rel_pred\n        loss_mask = np.concatenate((loss_mask_obs, loss_mask_pred), axis=1)\n        loss_mask = torch.from_numpy(loss_mask).unsqueeze(0)\n        visualize_trajectory(obs_traj, loss_mask, i, obs_seq_len=obs_seq_len, pred_seq_len=obs_seq_len)\n        num_peds_batch += obs_traj.shape[1]\n        b_num_peds.append(obs_traj.shape[1])\n        b_obs_traj.append(obs_traj[0].float())\n        b_obs_traj_rel.append(obs_traj_rel[0].float())\n        b_loss_mask_rel.append(loss_mask_rel[0].float())\n        b_loss_mask.append(loss_mask[0].float())\n        b_v_obs.append(v_obs[0].float())\n        b_A_obs.append(A_obs[0].float())\n        b_attn_mask_obs.append(attn_mask_obs[0].float())\n    b_obs_traj = torch.cat(b_obs_traj, dim=0)\n    b_obs_traj_rel = torch.cat(b_obs_traj_rel, dim=0)\n    b_loss_mask_rel = torch.cat(b_loss_mask_rel, dim=0)\n    b_loss_mask = torch.cat(b_loss_mask, dim=0)\n    b_v_obs, b_A_obs = seq_to_graph(b_obs_traj, b_obs_traj_rel, attn_mech='rel_conv')\n    b_attn_mask_obs_tensor = 
torch.zeros(obs_seq_len, num_peds_batch, num_peds_batch).float()\n    curr_ped_idx = 0\n    for num_peds, attn_mask_obs_single in zip(b_num_peds, b_attn_mask_obs):\n        b_attn_mask_obs_tensor[:,curr_ped_idx:curr_ped_idx+num_peds, \\\n            curr_ped_idx:curr_ped_idx+num_peds] = attn_mask_obs_single\n        curr_ped_idx += num_peds\n    b_obs_traj = b_obs_traj.unsqueeze(0)\n    b_obs_traj_rel = b_obs_traj_rel.unsqueeze(0)\n    b_v_obs = b_v_obs.unsqueeze(0)\n    b_A_obs = b_A_obs.unsqueeze(0)\n    b_loss_mask_rel = b_loss_mask_rel.unsqueeze(0)\n    b_attn_mask_obs_tensor = b_attn_mask_obs_tensor.unsqueeze(0)\n    b_loss_mask = b_loss_mask.unsqueeze(0)\n    # print(\"PROCESSED DATA\")\n    # print(\"b_obs_traj shape: \", b_obs_traj.shape)\n    # print(\"b_obs_traj_rel shape: \", b_obs_traj_rel.shape)\n    # print(\"b_v_obs shape: \", b_v_obs.shape)\n    # print(\"b_A_obs shape: \", b_A_obs.shape)\n    # print(\"b_loss_mask_rel shape: \", b_loss_mask_rel.shape)\n    # print(\"b_attn_mask_obs_tensor shape: \", b_attn_mask_obs_tensor.shape)\n    # print(\"b_loss_mask shape: \", b_loss_mask.shape)\n    # print()\n    # *** Load model\n    writername = \"100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000\"\n    dataset = \"sj\" # Shui Jing\n    logdir = join(pathhack.pkg_path, 'results', writername, dataset)\n    device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n    if not isdir(logdir):\n        raise RuntimeError('The result directory was not found.')\n    checkpoint_dir = join(logdir, 'checkpoint')\n    with open(join(checkpoint_dir, 'args.pickle'), 'rb') as f:\n        args_eval = pickle.load(f)\n    # Uncomment if you want a fixed random seed.\n    # torch.manual_seed(args_eval.random_seed)\n    # np.random.seed(args_eval.random_seed)\n    model = st_model(args_eval, device=device).to(device)\n    model_filename = 'epoch_'+str(args_eval.num_epochs)+'.pt'\n    model_checkpoint = torch.load(join(checkpoint_dir, model_filename), map_location=device)\n    model.load_state_dict(model_checkpoint['model_state_dict'])\n    # print(\"LOADED MODEL\")\n    # print(\"device: \", device)\n    # print('Loaded configuration: ', writername)\n    # print()\n    # *** Perform trajectory prediction\n    sampling = False\n    with torch.no_grad():\n        model.eval()\n        b_v_obs, b_A_obs, b_attn_mask_obs_tensor, b_loss_mask_rel = \\\n            b_v_obs.to(device), b_A_obs.to(device), \\\n            b_attn_mask_obs_tensor.float().to(device), b_loss_mask_rel.float().to(device)\n        results = model(b_v_obs, b_A_obs, b_attn_mask_obs_tensor, b_loss_mask_rel, tau=0.03, hard=True, sampling=sampling, device=device)\n        gaussian_params_pred, x_sample_pred, info = results\n    mu, sx, sy, corr = gaussian_params_pred\n    # print(mu.shape) # torch.Size([1, 5, 48, 2])\n    # print(sx.shape)\n    # print(sy.shape)\n    # print(corr.shape)\n    # print(b_obs_traj.shape)\n    # print(b_loss_mask.shape)\n    mu = mu.cumsum(1)\n    sx_squared = sx**2.\n    sy_squared = sy**2.\n    corr_sx_sy = corr*sx*sy\n    sx_squared_cumsum = sx_squared.cumsum(1)\n    sy_squared_cumsum = sy_squared.cumsum(1)\n    corr_sx_sy_cumsum = corr_sx_sy.cumsum(1)\n    sx_cumsum = sx_squared_cumsum**(1./2)\n    sy_cumsum = sy_squared_cumsum**(1./2)\n    corr_cumsum = corr_sx_sy_cumsum/(sx_cumsum*sy_cumsum)\n    mu_cumsum = mu.detach().to(\"cpu\") + np.transpose(b_obs_traj[:,:,:,-1:], (0,3,1,2)) # (batch, time, node, 2)\n    loss_mask_pred = 
b_loss_mask[:,:,obs_seq_len:].permute(0,2,1).unsqueeze(-1).bool()\n    mu_cumsum = mu_cumsum * loss_mask_pred + invalid_value*(~loss_mask_pred)\n    output_traj = torch.cat((mu_cumsum.detach().to(\"cpu\"), sx_cumsum.detach().to(\"cpu\"), sy_cumsum.detach().to(\"cpu\"), corr_cumsum.detach().to(\"cpu\")), dim=3)\n    output_traj = output_traj.permute(0, 2, 1, 3)[0] # (ped, time, 5)\n    output_binary_mask = loss_mask_pred[0,0,:,:].detach().to(\"cpu\") # (ped, 1)\n    num_total_peds = output_traj.shape[0]\n    output_traj = output_traj.reshape(n_env, num_total_peds // n_env, output_traj.shape[1], output_traj.shape[2]) # (n_env, ped, time, 5)\n    output_binary_mask = output_binary_mask.reshape(n_env, num_total_peds // n_env, 1) # (n_env, ped, 1)\n    output_traj, output_binary_mask = output_traj.numpy(), output_binary_mask.numpy()\n    # reshape has been tested\n    # print(\"PERFORMED PREDICTION\")\n    # print(\"mu_cumsum shape: \", mu_cumsum.shape)\n    # print(\"sx_cumsum shape: \", sx_cumsum.shape)\n    # print(\"sy_cumsum shape: \", sy_cumsum.shape)\n    # print(\"corr_cumsum shape: \", corr_cumsum.shape)\n    # print(\"output_traj device: \", output_traj.device)\n    # print(\"output_binary_mask device: \", output_binary_mask.device)\n    # print(\"output_traj shape: \", output_traj.shape)\n    # print(\"output_binary_mask shape: \", output_binary_mask.shape)\n    # print()\n    return output_traj, output_binary_mask\n\noutput_traj, output_binary_mask = CrowdNavPredInterfaceMultiEnv(\n    input_traj[:1],\n    input_binary_mask[:1],\n    sampling = True,\n)\nprint()\nprint(\"OUTPUT DATA\")\nprint(\"output_traj shape: \", output_traj.shape)\nprint(\"output_binary_mask shape:\", output_binary_mask.shape)\nprint()\n\n\n# wrapper_input_traj shape:  torch.Size([4, 15, 5, 2])\n# wrapper_input_binary_mask shape:  torch.Size([4, 15, 5, 1])\n# wrapper_output_traj_gt shape:  torch.Size([4, 15, 5, 2])\n# wrapper_output_binary_mask_gt shape:  torch.Size([4, 15, 5, 1])\n# OUTPUT DATA\n# output_traj shape:  torch.Size([4, 15, 5, 5])\n# output_binary_mask shape: torch.Size([4, 15, 1])\n# n_env = wrapper_input_traj.shape[0]\n# input_traj = wrapper_input_traj.numpy()\n# input_binary_mask = wrapper_input_binary_mask.numpy()\n# print()\n# print(\"INPUT DATA\")\n# print(\"input_traj shape: \", input_traj.shape)\n# print(\"input_binary_mask shape:\", input_binary_mask.shape)\n# print()\ndef visualize_output_trajectory_deterministic(input_traj, input_binary_mask, output_traj, output_binary_mask, sample_index, obs_seq_len=5, pred_seq_len=5):\n    ##### Print Visualization Started #####\n    input_traj_i = input_traj[sample_index] #15, 5, 2])\n    input_binary_mask_i = input_binary_mask[sample_index] #15, 5, 1])\n    output_traj_i = output_traj[sample_index] #15, 5, 5])\n    output_binary_mask_i = output_binary_mask[sample_index] #15, 1])\n    n_peds, seq_len = input_traj_i.shape[0], obs_seq_len+pred_seq_len\n    full_obs_ped_idx = np.where(input_binary_mask_i.sum(1)[:,0]==obs_seq_len)[0]\n    full_traj = np.concatenate((input_traj_i, output_traj_i[:,:,:2]), axis=1) # (15, 10, 2)\n    output_binary_mask_i_pred_len = np.stack([output_binary_mask_i for j in range(pred_seq_len)], axis=1) # (15, 5, 1)\n\n    loss_mask = np.concatenate((input_binary_mask_i, output_binary_mask_i_pred_len), axis=1) # (15, 10, 1)\n\n    fig, ax = plt.subplots()\n    # ax.set_xlim((-8, 8))\n    # ax.set_ylim((-5, 5))\n    fig.set_tight_layout(True)\n    for ped_idx in range(n_peds):\n        if ped_idx in full_obs_ped_idx:\n            
ax.plot(full_traj[ped_idx, obs_seq_len:, 0], full_traj[ped_idx, obs_seq_len:, 1], '.-', c='r')\n            ax.plot(full_traj[ped_idx, :obs_seq_len, 0], full_traj[ped_idx, :obs_seq_len, 1], '.-', c='k') # black for obs   \n        else:\n            for t_idx in range(seq_len):\n                if loss_mask[ped_idx,t_idx,0] == 1:\n                    if t_idx < obs_seq_len:\n                        # obs blue for partially detected pedestrians\n                        ax.plot(full_traj[ped_idx, t_idx, 0], full_traj[ped_idx, t_idx, 1], '.', c='b')\n                    else:\n                        # pred orange for partially detected pedestrians\n                        ax.plot(full_traj[ped_idx, t_idx, 0], full_traj[ped_idx, t_idx, 1], '.', c='C1', alpha=0.2)\n\n    ax.set_aspect('equal', adjustable='box')\n    ax.plot()\n    fig.savefig(\"SINGLE_pred_tmp_img_to_be_deleted_\"+str(sample_index)+\".png\")\n    print(\"SINGLE_pred_tmp_img_to_be_deleted_\"+str(sample_index)+\".png is created.\")\n    return\n\nfor sample_index in range(n_env):\n    visualize_output_trajectory_deterministic(input_traj, input_binary_mask, output_traj, output_binary_mask, sample_index, obs_seq_len=5, pred_seq_len=5)\n\n\nwrapper_visualization_data = {}\nwrapper_visualization_data['input_traj'] = input_traj\nwrapper_visualization_data['input_mask'] = input_binary_mask\nwrapper_visualization_data['output_traj'] = output_traj\nwrapper_visualization_data['output_mask'] = output_binary_mask\nwrapper_visualization_data['output_traj_gt'] = wrapper_output_traj_gt.numpy()\nwrapper_visualization_data['output_mask_gt'] = wrapper_output_binary_mask_gt.numpy()\nwith open(join('wrapper_visualization_data.pickle'), 'wb') as f:\n    pickle.dump(wrapper_visualization_data, f)\n    print(\"wrapper_visualization_data.pickle is dumped.\")\n# output_traj, output_binary_mask\n\n# wrapper_input_traj = torch.stack(wrapper_input_traj, dim=0)\n# wrapper_input_binary_mask = torch.stack(wrapper_input_binary_mask, dim=0)\n# wrapper_output_traj_gt = torch.stack(wrapper_output_traj_gt, dim=0)\n# wrapper_output_binary_mask_gt = torch.stack(wrapper_output_binary_mask_gt, dim=0)\n# print(\"wrapper_input_traj shape: \", wrapper_input_traj.shape)\n# print(\"wrapper_input_binary_mask shape: \", wrapper_input_binary_mask.shape)  \n# print(\"wrapper_output_traj_gt shape: \", wrapper_output_traj_gt.shape)\n# print(\"wrapper_output_binary_mask_gt shape: \", wrapper_output_binary_mask_gt.shape)  \n# wrapper_input_traj shape:  torch.Size([4, 15, 5, 2])\n# wrapper_input_binary_mask shape:  torch.Size([4, 15, 5, 1])\n# wrapper_output_traj_gt shape:  torch.Size([4, 15, 5, 2])\n# wrapper_output_binary_mask_gt shape:  torch.Size([4, 15, 5, 1])\n\n# n_env = wrapper_input_traj.shape[0]\n# input_traj = wrapper_input_traj.numpy()\n# input_binary_mask = wrapper_input_binary_mask.numpy()\n\n# with open(join(checkpoint_dir, 'args.pickle'), 'wb') as f:\n#     pickle.dump(args, f)\n\n# def visualize_trajectory(obs_traj, loss_mask, sample_index, obs_seq_len=5, pred_seq_len=5):\n#     ##### Print Visualization Started #####\n#     n_peds, seq_len = obs_traj.shape[1], obs_seq_len+pred_seq_len\n#     full_ped_idx = torch.where(loss_mask.sum(2)[0]==seq_len)[0] # loss_mask tensor: (1, num_peds, seq_len)\n#     fig, ax = plt.subplots()\n#     # ax.set_xlim((-8, 8))\n#     # ax.set_ylim((-5, 5))\n#     fig.set_tight_layout(True)\n#     for ped_idx in range(n_peds):\n#         if ped_idx in full_ped_idx:\n#             ax.plot(obs_traj[0, ped_idx, 0, :obs_seq_len], obs_traj[0, 
ped_idx, 1, :obs_seq_len], '.-', c='k') # black for obs\n#         else:\n#             for t_idx in range(seq_len):\n#                 if loss_mask[0,ped_idx,t_idx] == 1:\n#                     if t_idx < obs_seq_len:\n#                         # obs blue for partially detected pedestrians\n#                         ax.plot(obs_traj[0, ped_idx, 0, t_idx], obs_traj[0, ped_idx, 1, t_idx], '.', c='b')\n#     ax.set_aspect('equal', adjustable='box')\n#     ax.plot()\n#     fig.savefig(\"tmp_img_to_be_deleted_\"+str(sample_index)+\".png\")\n#     print(\"tmp_img_to_be_deleted_\"+str(sample_index)+\".png is created.\")\n#     return\n\n\n# # lstm \n# torch.manual_seed(args.random_seed)\n# np.random.seed(args.random_seed)\n# target_ped_idx_from_full = 1#2#0#1#2 # 0-2 for zara1\n# # print(full_ped_idx[target_ped_idx_from_full])\n# with torch.no_grad():\n#     model.eval()\n#     v_obs, A_obs, attn_mask_obs, loss_mask_rel = \\\n#         v_obs.to(device), A_obs.to(device), \\\n#         attn_mask_obs.float().to(device), loss_mask_rel.float().to(device)\n\n#     results = model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=tau, hard=True, sampling=False, device=device)\n#     # gaussian_params_pred, x_sample_pred, _, loss_mask_per_pedestrian = results\n#     gaussian_params_pred, x_sample_pred, info = results\n    \n#     pred_traj_rel = torch.cat((obs_traj[:,:,:,-1:], x_sample_pred.permute(0,2,3,1)), dim=3) # (1, num_peds, 2, 1+pred_seq_len)\n#     pred_traj = pred_traj_rel.cumsum(dim=3).to('cpu')\n    \n    \n#     # full_traj = torch.cat((obs_traj, pred_traj_gt), dim=3) # (1, num_peds, 2, seq_len)  \n#     full_traj = torch.cat((obs_traj[:,:,:,:-1], pred_traj), dim=3) # (1, num_peds, 2, seq_len)  \n#     n_peds, seq_len = full_traj.shape[1], full_traj.shape[3]\n#     # full_ped_idx = torch.where(loss_mask.sum(2)[0]==seq_len)[0] # # loss_mask tensor: (1, num_peds, seq_len)\n# #     full_ped_idx = torch.where(info['loss_mask_rel_full_partial']==1)[1]\n# #     info['loss_mask_rel_full_partial']\n#     full_ped_idx = torch.where(info['loss_mask_per_pedestrian']==1)[1]\n#     partial_full_ped_idx = torch.where(info['loss_mask_rel_full_partial']==1)[1]\n\n#     fig, ax = plt.subplots()\n#     ax.set_xlim((-8, 8))\n#     ax.set_ylim((-5, 5))\n#     fig.set_tight_layout(True)\n#     for ped_idx in range(n_peds):\n#         if ped_idx in partial_full_ped_idx:\n#             if ped_idx in full_ped_idx:\n# #                 ax.plot(full_traj[0, ped_idx, 0, 7:], full_traj[0, ped_idx, 1, 7:], '.-', c='C2') # green for pred gt\n#                 ax.plot(full_traj[0, ped_idx, 0, :8], full_traj[0, ped_idx, 1, :8], '.-', c='k') # black for obs\n#                 ax.plot(pred_traj[0, ped_idx, 0], pred_traj[0, ped_idx, 1], '.-', c='C1',alpha=0.2) # orange for pred           \n#         if ped_idx not in full_ped_idx:\n#             for t_idx in range(seq_len):\n#                 if loss_mask[0,ped_idx,t_idx] == 1:\n#                     if t_idx < 8:\n#                         ax.plot(full_traj[0, ped_idx, 0, t_idx], full_traj[0, ped_idx, 1, t_idx], '.', c='b')\n#                         # obs blue\n#                     else:\n#                         ax.plot(full_traj[0, ped_idx, 0, t_idx], full_traj[0, ped_idx, 1, t_idx], '.', c='r')\n#                         # pred red\n#     ax.set_aspect('equal', adjustable='box')\n#     ax.plot()\n    "
  },
  {
    "path": "gst_updated/scripts/wrapper/crowd_nav_interface_parallel.py",
    "content": "\nfrom gst_updated.src.gumbel_social_transformer.st_model import st_model\nfrom os.path import join, isdir\nimport pickle\nimport torch\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef seq_to_graph(seq_, seq_rel, attn_mech='rel_conv'):\n    \"\"\"\n    inputs:\n        - seq_ # (n_env, num_peds, 2, obs_seq_len)\n        - seq_rel # (n_env, num_peds, 2, obs_seq_len)\n    outputs:\n        - V # (n_env, obs_seq_len, num_peds, 2)\n        - A # (n_env, obs_seq_len, num_peds, num_peds, 2)\n    \"\"\"\n    V = seq_rel.permute(0, 3, 1, 2) # (n_env, obs_seq_len, num_peds, 2)\n    seq_permute = seq_.permute(0, 3, 1, 2) # (n_env, obs_seq_len, num_peds, 2)\n    A = seq_permute.unsqueeze(3)-seq_permute.unsqueeze(2) # (n_env, obs_seq_len, num_peds, 1, 2) - (n_env, obs_seq_len, 1, num_peds, 2)\n    return V, A\n\nclass CrowdNavPredInterfaceMultiEnv(object):\n\n    def __init__(self, load_path, device, config, num_env):\n        # *** Load model\n        self.args = config\n        self.device = device\n        self.nenv = num_env\n\n        # Uncomment if you want a fixed random seed.\n        # torch.manual_seed(args_eval.random_seed)\n        # np.random.seed(args_eval.random_seed)\n        self.args_eval = config\n        checkpoint_dir = join(load_path, 'checkpoint')\n        self.model = st_model(self.args_eval, device=device).to(device)\n        model_filename = 'epoch_'+str(self.args_eval.num_epochs)+'.pt'\n        model_checkpoint = torch.load(join(checkpoint_dir, model_filename), map_location=device)\n        self.model.load_state_dict(model_checkpoint['model_state_dict'])\n        self.model.eval()\n        print(\"LOADED MODEL\")\n        print(\"device: \", device)\n        print()\n\n    def forward(self, input_traj,input_binary_mask, sampling = True):\n        \"\"\"\n        inputs:\n            - input_traj:\n                # numpy\n                # (n_env, num_peds, obs_seq_len, 2)\n            - input_binary_mask:\n                # numpy\n                # (n_env, num_peds, obs_seq_len, 1)\n                # Zhe: I think we should not just have the binary mask of shape (n_env, number of pedestrains, 1)\n                # because some agents are partially detected, and they should not be simply ignored.\n            - sampling:\n                # bool\n                # True means you sample from Gaussian.\n                # False means you choose to use the mean of Gaussian as output.\n        outputs:\n            - output_traj:\n                # torch \"cpu\"\n                # (n_env, num_peds, pred_seq_len, 5)\n                # where 5 includes [mu_x, mu_y, sigma_x, sigma_y, correlation coefficient]\n            - output_binary_mask:\n                # torch \"cpu\"\n                # (n_env, num_peds, 1)\n                # Zhe: this means for prediction, if an agent does not show up in the last and second\n                # last observation time step, then the agent will not be predicted.\n        \"\"\"\n\n        invalid_value = -999.\n        # *** Process input data\n        obs_traj = input_traj.permute(0,1,3,2) # (n_env, num_peds, 2, obs_seq_len)\n        n_env, num_peds = obs_traj.shape[:2]\n        loss_mask_obs = input_binary_mask[:,:,:,0] # (n_env, num_peds, obs_seq_len)\n        loss_mask_rel_obs = loss_mask_obs[:,:,:-1] * loss_mask_obs[:,:,-1:]\n        loss_mask_rel_obs = torch.cat((loss_mask_obs[:,:,:1], loss_mask_rel_obs), dim=2) # (n_env, num_peds, obs_seq_len)\n        loss_mask_rel_pred = (torch.ones((n_env, num_peds, 
self.args_eval.pred_seq_len), device=self.device) * loss_mask_rel_obs[:,:,-1:])\n        loss_mask_rel = torch.cat((loss_mask_rel_obs, loss_mask_rel_pred), dim=2) # (n_env, num_peds, seq_len)\n        loss_mask_pred = loss_mask_rel_pred\n        loss_mask_rel_obs_permute = loss_mask_rel_obs.permute(0,2,1).reshape(n_env*self.args_eval.obs_seq_len, num_peds) # (n_env*obs_seq_len, num_peds)\n        attn_mask_obs = torch.bmm(loss_mask_rel_obs_permute.unsqueeze(2), loss_mask_rel_obs_permute.unsqueeze(1)) # (n_env*obs_seq_len, num_peds, num_peds)\n        attn_mask_obs = attn_mask_obs.reshape(n_env, self.args_eval.obs_seq_len, num_peds, num_peds)\n        \n        obs_traj_rel = obs_traj[:,:,:,1:] - obs_traj[:,:,:,:-1]\n        obs_traj_rel = torch.cat((torch.zeros(n_env, num_peds, 2, 1, device=self.device), obs_traj_rel), dim=3)\n        obs_traj_rel = invalid_value*torch.ones_like(obs_traj_rel)*(1-loss_mask_rel_obs.unsqueeze(2)) \\\n            + obs_traj_rel*loss_mask_rel_obs.unsqueeze(2)\n        v_obs, A_obs = seq_to_graph(obs_traj, obs_traj_rel, attn_mech='rel_conv')\n        # *** Perform trajectory prediction\n        sampling = False\n        with torch.no_grad():\n            v_obs, A_obs, attn_mask_obs, loss_mask_rel = \\\n                v_obs.to(self.device), A_obs.to(self.device), attn_mask_obs.to(self.device), loss_mask_rel.to(self.device)\n            results = self.model(v_obs, A_obs, attn_mask_obs, loss_mask_rel, tau=0.03, hard=True, sampling=sampling, device=self.device)\n            gaussian_params_pred, x_sample_pred, info = results\n        mu, sx, sy, corr = gaussian_params_pred\n        mu = mu.cumsum(1)\n        sx_squared = sx**2.\n        sy_squared = sy**2.\n        corr_sx_sy = corr*sx*sy\n        sx_squared_cumsum = sx_squared.cumsum(1)\n        sy_squared_cumsum = sy_squared.cumsum(1)\n        corr_sx_sy_cumsum = corr_sx_sy.cumsum(1)\n        sx_cumsum = sx_squared_cumsum**(1./2)\n        sy_cumsum = sy_squared_cumsum**(1./2)\n        corr_cumsum = corr_sx_sy_cumsum/(sx_cumsum*sy_cumsum)\n        mu_cumsum = mu.detach().to(self.device) + obs_traj.permute(0,3,1,2)[:,-1:]# np.transpose(obs_traj[:,:,:,-1:], (0,3,1,2)) # (batch, time, node, 2)\n        mu_cumsum = mu_cumsum * loss_mask_pred.permute(0,2,1).unsqueeze(-1) + invalid_value*(1-loss_mask_pred.permute(0,2,1).unsqueeze(-1))\n        output_traj = torch.cat((mu_cumsum.detach().to(self.device), sx_cumsum.detach().to(self.device), sy_cumsum.detach().to(self.device), corr_cumsum.detach().to(self.device)), dim=3)\n        output_traj = output_traj.permute(0, 2, 1, 3) # (n_env, num_peds, pred_seq_len, 5)\n        output_binary_mask = loss_mask_pred[:,:,:1].detach().to(self.device) # (n_env, num_peds, 1) # first step same as following in prediction\n        return output_traj, output_binary_mask\n\n\ndef visualize_output_trajectory_deterministic(input_traj, input_binary_mask, output_traj, output_binary_mask, sample_index, obs_seq_len=5, pred_seq_len=5):\n    ##### Print Visualization Started #####\n    input_traj_i = input_traj[sample_index]\n    input_binary_mask_i = input_binary_mask[sample_index]\n    output_traj_i = output_traj[sample_index]\n    output_binary_mask_i = output_binary_mask[sample_index]\n    num_peds, seq_len = input_traj_i.shape[0], obs_seq_len+pred_seq_len\n    full_obs_ped_idx = np.where(input_binary_mask_i.sum(1)[:,0]==obs_seq_len)[0]\n    full_traj = np.concatenate((input_traj_i, output_traj_i[:,:,:2]), axis=1)\n    output_binary_mask_i_pred_len = np.stack([output_binary_mask_i for j in 
range(pred_seq_len)], axis=1)\n    loss_mask = np.concatenate((input_binary_mask_i, output_binary_mask_i_pred_len), axis=1)\n    fig, ax = plt.subplots()\n    fig.set_tight_layout(True)\n    for ped_idx in range(num_peds):\n        if ped_idx in full_obs_ped_idx:\n            ax.plot(full_traj[ped_idx, obs_seq_len:, 0], full_traj[ped_idx, obs_seq_len:, 1], '.-', c='r')\n            ax.plot(full_traj[ped_idx, :obs_seq_len, 0], full_traj[ped_idx, :obs_seq_len, 1], '.-', c='k') # black for obs   \n        else:\n            for t_idx in range(seq_len):\n                if loss_mask[ped_idx,t_idx,0] == 1:\n                    if t_idx < obs_seq_len:\n                        # obs blue for partially detected pedestrians\n                        ax.plot(full_traj[ped_idx, t_idx, 0], full_traj[ped_idx, t_idx, 1], '.', c='b')\n                    else:\n                        # pred orange for partially detected pedestrians\n                        ax.plot(full_traj[ped_idx, t_idx, 0], full_traj[ped_idx, t_idx, 1], '.', c='C1', alpha=0.2)\n\n    ax.set_aspect('equal', adjustable='box')\n    ax.plot()\n    fig.savefig(str(sample_index)+\".png\")\n    print(str(sample_index)+\".png is created.\")\n    return\n\n\nif __name__ == '__main__':\n    # *** Create an input that aligns with the format of CrowdNav\n    obs_seq_len = 5\n    pred_seq_len = 5\n    invalid_value = -999.\n    wrapper_demo_data = torch.load('/home/shuijing/Desktop/CrowdNav_Prediction/gst_updated/datasets/wrapper_demo/wrapper_demo_data.pt')\n    print(\"wrapper_demo_data.pt is loaded.\")\n    input_traj, input_binary_mask = wrapper_demo_data['input_traj'], wrapper_demo_data['input_mask']\n    n_env = input_traj.shape[0]\n    assert input_traj.shape[2] == obs_seq_len\n    \"\"\"\n    - input_traj:\n        # tensor\n        # (n_env, num_peds, obs_seq_len, 2)\n    - input_binary_mask:\n        # tensor\n        # (n_env, num_peds, obs_seq_len, 1)\n    \"\"\"\n    print()\n    print(\"INPUT DATA\")\n    print(\"number of environments: \", n_env)\n    print(\"input_traj shape: \", input_traj.shape)\n    print(\"input_binary_mask shape:\", input_binary_mask.shape)\n    print()\n    load_path = '/home/shuijing/Desktop/CrowdNav_Prediction/gst_updated/results/100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000/sj'\n    # load_path = join(pathhack.pkg_path, 'results/100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000/sj')\n    device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n    args = \"100-gumbel_social_transformer-faster_lstm-lr_0.001-init_temp_0.5-edge_head_0-ebd_64-snl_1-snh_8-seed_1000\"\n    model = CrowdNavPredInterfaceMultiEnv(load_path=load_path,\n                                          device=device, config = args, num_env=n_env)\n\n    input_traj = input_traj.cuda()\n    input_binary_mask = input_binary_mask.cuda()\n    output_traj, output_binary_mask = model.forward(\n        input_traj,\n        input_binary_mask,\n        sampling = True,\n    )\n    print()\n    print(\"OUTPUT DATA\")\n    print(\"output_traj shape: \", output_traj.shape)\n    print(\"output_binary_mask shape:\", output_binary_mask.shape)\n    print()\n    for sample_index in range(n_env):\n        visualize_output_trajectory_deterministic(input_traj, input_binary_mask, output_traj, output_binary_mask, sample_index, obs_seq_len=5, pred_seq_len=5)\n"
  },
  {
    "path": "gst_updated/scripts/wrapper/pathhack.py",
    "content": "import sys\nfrom os.path import abspath, join, split\nfile_path = split(abspath(__file__))[0]\npkg_path = join(file_path, '../..')\nsys.path.append(pkg_path)\n"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/edge_selector_ghost.py",
    "content": "# import pathhack\nimport torch\nimport torch.nn as nn\nfrom torch.nn.functional import softmax\nfrom src.gumbel_social_transformer.mha import VanillaMultiheadAttention\nfrom src.gumbel_social_transformer.utils import _get_activation_fn, gumbel_softmax\n\nclass EdgeSelector(nn.Module):\n    r\"\"\"Ghost version.\"\"\"\n    def __init__(self, d_motion, d_model, nhead=4, dropout=0.1, activation=\"relu\"):\n        super(EdgeSelector, self).__init__()\n        assert d_model % nhead == 0\n        self.nhead = nhead\n        self.head_dim = d_model // nhead\n        self.augmented_edge_embedding = nn.Linear(3*d_motion, d_model)\n        self.self_attn = VanillaMultiheadAttention(d_model, nhead, dropout=0.0)\n        self.norm_augmented_edge = nn.LayerNorm(d_model)\n        self.linear1 = nn.Linear(self.head_dim, self.head_dim)\n        self.linear2 = nn.Linear(self.head_dim, 1)\n        self.dropout1 = nn.Dropout(dropout)\n        self.activation = _get_activation_fn(activation)\n        self.d_model = d_model\n        self.d_motion = d_motion\n        print(\"new edge selector\")\n        \n    \n    def forward(self, x, A, attn_mask, tau=1., hard=False, device='cuda:0'):\n        \"\"\"\n        Encode pedestrian edge with node information.\n        inputs:\n            # * done: x, A need to be masked first before processing.\n            - x: vertices representing pedestrians of one sample. \n                # * bsz is batch size corresponding to Transformer setting.\n                # * In pedestrian setting, bsz = batch_size*time_step\n                # (bsz, node, d_motion)\n            - A: edges representation relationships between pedestrians of one sample.\n                # (bsz, node, node, d_motion)\n                # row -> neighbor, col -> target\n            - attn_mask: attention mask provided in advance.\n                # (bsz, target_node, neighbor_node)\n                # 1. means yes, i.e. attention exists.  0. means no.\n            - tau: temperature hyperparameter of gumbel softmax. \n                # ! 
Need annealing though training.\n            - hard: hard or soft sampling.\n                # True means one-hot sample for evaluation.\n                # False means soft sample for reparametrization.\n            - device: 'cuda:0' or 'cpu'.\n        outputs:\n            - edge_multinomial: The categorical distribution over the connections from targets to the neighbors\n                # (time_step, target_node, num_heads, neighbor_node)\n                # neighbor_node = nnode + 1 in ghost mode\n            - sampled_edges: The edges sampled from edge_multinomial\n                # (time_step, target_node, num_heads, neighbor_node)\n                # neighbor_node = nnode + 1 in ghost mode\n        \"\"\"\n        bsz, nnode, d_motion = x.shape\n        assert d_motion == self.d_motion\n        attn_mask = attn_mask.to(\"cpu\")\n        attn_mask_ped = (attn_mask.sum(-1) > 0).float().unsqueeze(-1) # (bsz, nnode, 1)\n        x = x * attn_mask_ped.to(device)\n        x_ghost = torch.zeros(bsz, 1, d_motion).to(device)\n        x_neighbor = torch.cat((x, x_ghost), dim=1) # (bsz, nnode+1, d_motion)\n        x_neighbor = torch.ones(bsz,nnode+1,nnode,d_motion).to(device)*x_neighbor.view(bsz,nnode+1,1,d_motion) # row -> neighbor\n        x_target = torch.ones(bsz,nnode+1,nnode,d_motion).to(device)*x.view(bsz,1,nnode,d_motion) # col -> target\n        x_neighbor_target = torch.cat((x_neighbor, x_target), dim=-1) # (bsz,nnode+1,nnode,2*d_motion)\n\n        A = A * attn_mask.permute(0,2,1).unsqueeze(-1).to(device) # (bsz, neighbor_node, target_node, d_motion)\n        A_ghost = torch.zeros(bsz, 1, nnode, d_motion).to(device)\n        A = torch.cat((A, A_ghost), dim=1) # (bsz,nnode+1,nnode,d_motion)\n        A = torch.cat((x_neighbor_target, A), dim=-1) # (bsz,nnode+1,nnode,3*d_motion) # n_node==t_node+1==nnode+1\n        A = self.augmented_edge_embedding(A) # (bsz,nnode+1,nnode,d_model)\n        A = self.norm_augmented_edge(A)\n        A_perm = A.permute(0,2,1,3) # (bsz,nnode,nnode+1,d_model)\n        A_perm = A_perm.reshape(A_perm.shape[0]*A_perm.shape[1], A_perm.shape[2], A_perm.shape[3]) # (time_step*target_node, neighbor_node, d_model)\n        A_perm = A_perm.permute(1,0,2) # (neighbor_node, time_step*target_node, d_model)\n        \n        attn_mask_ghost = torch.ones(bsz, nnode, 1) # ghost exists all the time, not missing\n        attn_mask = torch.cat((attn_mask, attn_mask_ghost), dim=2) #(bsz, nnode, nnode+1) # n_node==t_node+1==nnode+1\n        attn_mask_neighbors = attn_mask.view(bsz, nnode, nnode+1, 1) * attn_mask.view(bsz, nnode, 1, nnode+1) # (bsz, nnode, nnode+1, nnode+1)\n        attn_mask_neighbors = attn_mask_neighbors.view(bsz*nnode, nnode+1, nnode+1) # (time_step*target_node, neighbor_node, neighbor_node)\n        # attn_mask_neighbors = torch.cat([attn_mask_neighbors for _ in range(self.nhead)], dim=0) # (nhead*time_step*target_node, neighbor_node, neighbor_node) # ! 
bug fixed\n        attn_mask_neighbors = torch.stack([attn_mask_neighbors for _ in range(self.nhead)], dim=1) # (time_step*target_node, nhead, neighbor_node, neighbor_node)\n        attn_mask_neighbors = attn_mask_neighbors.view(attn_mask_neighbors.shape[0]*attn_mask_neighbors.shape[1], \\\n            attn_mask_neighbors.shape[2], attn_mask_neighbors.shape[3]) # (time_step*target_node*nhead, neighbor_node, neighbor_node)   \n        \n        _, _, A2 = self.self_attn(A_perm, A_perm, A_perm, attn_mask=attn_mask_neighbors.to(device))\n        # inputs: (L, N, E), (S, N, E), (S, N, E), (N*nhead, L, S)\n        # bsz, num_heads, tgt_len, head_dim # A2 # (time_step*target_node, num_heads, neighbor_node, 4*d_model/nhead) # we use head_dim = 4*d_model/nhead\n        A2 = A2.reshape(bsz, nnode, self.nhead, nnode+1, self.head_dim) # (time_step, target_node, num_heads, neighbor_node, head_dim)\n\n        A2 = self.linear2(self.dropout1(self.activation(self.linear1(A2)))).squeeze(-1) # (time_step, target_node, num_heads, neighbor_node)\n        edge_multinomial = softmax(A2, dim=-1) # (time_step, target_node, num_heads, neighbor_node)\n        \n        edge_multinomial = edge_multinomial * attn_mask.unsqueeze(2).to(device) # (time_step, target_node, num_heads, neighbor_node)\n        edge_multinomial = edge_multinomial / (edge_multinomial.sum(-1).unsqueeze(-1)+1e-10)\n        sampled_edges = self.edge_sampler(edge_multinomial, tau=tau, hard=hard)\n        \n        return edge_multinomial, sampled_edges\n\n\n    def edge_sampler(self, edge_multinomial, tau=1., hard=False):\n        r\"\"\"\n        Sample from edge_multinomial using gumbel softmax for differentiable search.\n        \"\"\"\n        logits = torch.log(edge_multinomial+1e-10) # (time_step, target_node, num_heads, neighbor_node)\n        sampled_edges = gumbel_softmax(logits, tau=tau, hard=hard, eps=1e-10) # (time_step, target_node, num_heads, neighbor_node)\n        return sampled_edges\n\nif __name__ == \"__main__\":\n    device = \"cuda:0\"\n    edge_selector = EdgeSelector(2, 32, nhead=2, dropout=0.1).to(device)\n    x = torch.randn(8, 3, 2)\n    position = torch.randn(8,3,2)\n    A = position.unsqueeze(2)-position.unsqueeze(1) # (8,3,3,2)\n    loss_mask = torch.ones(3,8)\n    loss_mask[0,:3] = 0.\n    loss_mask[1,:5] = 0.\n    attn_mask = []\n    x, A = x.to(device), A.to(device)\n    for i in range(8):\n        attn_mask.append(torch.outer(loss_mask[:,i], loss_mask[:,i]))\n    attn_mask = torch.stack(attn_mask, dim=0)\n    attn_mask = attn_mask.to(device)\n    print(attn_mask[3])\n    edge_multinomial, sampled_edges = edge_selector(x, A, attn_mask, tau=1., hard=False, device=device)\n    print(\"hello world.\")"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/edge_selector_no_ghost.py",
    "content": "import torch\nimport torch.nn as nn\nfrom torch.nn.functional import softmax\nfrom gst_updated.src.gumbel_social_transformer.mha import VanillaMultiheadAttention\nfrom gst_updated.src.gumbel_social_transformer.utils import _get_activation_fn, gumbel_softmax\n\n\nclass EdgeSelector(nn.Module):\n    r\"\"\"No ghost version.\"\"\"\n    def __init__(self, d_motion, d_model, nhead=4, dropout=0.1, activation=\"relu\"):\n        super(EdgeSelector, self).__init__()\n        assert d_model % nhead == 0\n        self.nhead = nhead\n        self.head_dim = d_model // nhead\n        self.augmented_edge_embedding = nn.Linear(3*d_motion, d_model)\n        self.self_attn = VanillaMultiheadAttention(d_model, nhead, dropout=0.0)\n        self.norm_augmented_edge = nn.LayerNorm(d_model)\n        self.linear1 = nn.Linear(self.head_dim, self.head_dim)\n        self.linear2 = nn.Linear(self.head_dim, 1)\n        self.dropout1 = nn.Dropout(dropout)\n        self.activation = _get_activation_fn(activation)\n        self.d_model = d_model\n        self.d_motion = d_motion\n\n    \n    def forward(self, x, A, attn_mask, tau=1., hard=False, device='cuda:0'):\n        \"\"\"\n        Encode pedestrian edge with node information.\n        inputs:\n            # * done: x, A need to be masked first before processing.\n            - x: vertices representing pedestrians of one sample. \n                # * bsz is batch size corresponding to Transformer setting.\n                # * In pedestrian setting, bsz = batch_size*time_step\n                # (bsz, node, d_motion)\n            - A: edges representation relationships between pedestrians of one sample.\n                # (bsz, node, node, 2*d_motion)\n                # row -> neighbor, col -> target\n            - attn_mask: attention mask provided in advance.\n                # (bsz, target_node, neighbor_node)\n                # 1. means yes, i.e. attention exists.  0. means no.\n            - tau: temperature hyperparameter of gumbel softmax. \n                # ! 
Need annealing though training.\n            - hard: hard or soft sampling.\n                # True means one-hot sample for evaluation.\n                # False means soft sample for reparametrization.\n            - device: 'cuda:0' or 'cpu'.\n        outputs:\n            - edge_multinomial: The categorical distribution over the connections from targets to the neighbors\n                # (time_step, target_node, num_heads, neighbor_node)\n                # neighbor_node = nnode in no ghost mode\n            - sampled_edges: The edges sampled from edge_multinomial\n                # (time_step, target_node, num_heads, neighbor_node)\n                # neighbor_node = nnode in no ghost mode\n        \"\"\"\n        bsz, nnode, d_motion = x.shape\n        assert d_motion == self.d_motion\n        attn_mask = attn_mask.to(\"cpu\")\n        attn_mask_ped = (attn_mask.sum(-1) > 0).float().unsqueeze(-1) # (bsz, nnode, 1)\n        x = x * attn_mask_ped.to(device)\n\n        x_neighbor = torch.ones(bsz,nnode,nnode,d_motion).to(device)*x.view(bsz,nnode,1,d_motion) # row -> neighbor\n        x_target = torch.ones(bsz,nnode,nnode,d_motion).to(device)*x.view(bsz,1,nnode,d_motion) # col -> target\n        x_neighbor_target = torch.cat((x_neighbor, x_target), dim=-1) # (bsz,nnode,nnode,2*d_motion)\n\n        A = A * attn_mask.permute(0,2,1).unsqueeze(-1).to(device) # (bsz, neighbor_node, target_node, d_motion)\n        A = torch.cat((x_neighbor_target, A), dim=-1) # (bsz,n_node,t_node,3*d_motion)\n        A = self.augmented_edge_embedding(A) # (bsz,n_node,t_node,d_model)\n        A = self.norm_augmented_edge(A)\n        A_perm = A.permute(0,2,1,3) # (bsz,t_node,n_node,d_model)\n        A_perm = A_perm.reshape(A_perm.shape[0]*A_perm.shape[1], A_perm.shape[2], A_perm.shape[3]) # (time_step*target_node, neighbor_node, d_model)\n        A_perm = A_perm.permute(1,0,2) # (neighbor_node, time_step*target_node, d_model)\n\n        attn_mask_neighbors = attn_mask.view(bsz, nnode, nnode, 1) * attn_mask.view(bsz, nnode, 1, nnode) # (bsz, t_node, n_node, n_nnode)\n        attn_mask_neighbors = attn_mask_neighbors.reshape(bsz*nnode, nnode, nnode) # (time_step*target_node, neighbor_node, neighbor_node)\n        # attn_mask_neighbors = torch.cat([attn_mask_neighbors for _ in range(self.nhead)], dim=0) # (nhead*time_step*target_node, neighbor_node, neighbor_node) # ! 
bug fixed\n        attn_mask_neighbors = torch.stack([attn_mask_neighbors for _ in range(self.nhead)], dim=1) # (time_step*target_node, nhead, neighbor_node, neighbor_node)\n        attn_mask_neighbors = attn_mask_neighbors.view(attn_mask_neighbors.shape[0]*attn_mask_neighbors.shape[1], \\\n            attn_mask_neighbors.shape[2], attn_mask_neighbors.shape[3]) # (time_step*target_node*nhead, neighbor_node, neighbor_node)   \n\n        _, _, A2 = self.self_attn(A_perm, A_perm, A_perm, attn_mask=attn_mask_neighbors.to(device)) \n        # bsz, num_heads, tgt_len, head_dim # A2 # (time_step*target_node, num_heads, neighbor_node, d_model/nhead) # we use head_dim = 4*d_model/nhead\n        A2 = A2.reshape(bsz, nnode, self.nhead, nnode, self.head_dim) # (time_step, target_node, num_heads, neighbor_node, head_dim)\n\n        A2 = self.linear2(self.dropout1(self.activation(self.linear1(A2)))).squeeze(-1) # (time_step, target_node, num_heads, neighbor_node)\n        edge_multinomial = softmax(A2, dim=-1) # (time_step, target_node, num_heads, neighbor_node)\n        edge_multinomial = edge_multinomial * attn_mask.unsqueeze(2).to(device) # (time_step, target_node, num_heads, neighbor_node)\n        edge_multinomial = edge_multinomial / (edge_multinomial.sum(-1).unsqueeze(-1)+1e-10)\n        sampled_edges = self.edge_sampler(edge_multinomial, tau=tau, hard=hard)\n        return edge_multinomial, sampled_edges\n\n    def edge_sampler(self, edge_multinomial, tau=1., hard=False):\n        r\"\"\"\n        Sample from edge_multinomial using gumbel softmax for differentiable search.\n        \"\"\"\n        logits = torch.log(edge_multinomial+1e-10) # (time_step, target_node, num_heads, neighbor_node)\n        sampled_edges = gumbel_softmax(logits, tau=tau, hard=hard, eps=1e-10) # (time_step, target_node, num_heads, neighbor_node)\n        return sampled_edges"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/gumbel_social_transformer.py",
    "content": "import torch\nimport torch.nn as nn\nfrom gst_updated.src.gumbel_social_transformer.utils import _get_clones\n\nclass GumbelSocialTransformer(nn.Module):\n    def __init__(self, d_motion, d_model, nhead_nodes, nhead_edges, nlayer, dim_feedforward=512, dim_hidden=32, \\\n        dropout=0.1, activation=\"relu\", attn_mech=\"vanilla\", ghost=True):\n        super(GumbelSocialTransformer, self).__init__()\n        if ghost:\n            if nhead_edges == 0:\n                raise RuntimeError(\"Full connectivity conflicts with the Ghost setting.\")\n            print(\"Ghost version.\")\n            from gst_updated.src.gumbel_social_transformer.edge_selector_ghost import EdgeSelector\n            from gst_updated.src.gumbel_social_transformer.node_encoder_layer_ghost import NodeEncoderLayer\n        else:\n            print(\"No ghost version.\")\n            from gst_updated.src.gumbel_social_transformer.edge_selector_no_ghost import EdgeSelector\n            from gst_updated.src.gumbel_social_transformer.node_encoder_layer_no_ghost import NodeEncoderLayer\n        if nhead_edges != 0:\n            # 0 means it is fully connected\n            self.edge_selector = EdgeSelector(\n                d_motion,\n                d_model,\n                nhead=nhead_edges,\n                dropout=dropout,\n                activation=activation,\n            )\n        self.node_embedding = nn.Linear(d_motion, d_model)\n        node_encoder_layer = NodeEncoderLayer(\n            d_model,\n            nhead_nodes,\n            dim_feedforward=dim_feedforward,\n            dropout=dropout,\n            activation=activation,\n            attn_mech=attn_mech,\n        )\n        self.node_encoder_layers = _get_clones(node_encoder_layer, nlayer)\n        self.nlayer = nlayer\n        self.nhead_nodes = nhead_nodes\n        self.nhead_edges = nhead_edges\n        print(\"new gst\")\n\n    def forward(self, x, A, attn_mask, tau=1., hard=False, device='cuda:0'):\n        r\"\"\"\n        Pass the input through the encoder layers in turn.\n        inputs:\n            - x: vertices representing pedestrians of one sample. \n                # * bsz is batch size corresponding to Transformer setting.\n                # * In pedestrian setting, bsz = batch_size*time_step\n                # (bsz, nnode, d_motion)\n            - A: edges representation relationships between pedestrians of one sample.\n                # (bsz, nnode <neighbor>, nnode <target>, d_motion)\n                # row -> neighbor, col -> target\n            - attn_mask: attention mask provided in advance.\n                # (bsz, nnode <target>, nnode <neighbor>)\n                # row -> target, col -> neighbor\n                # 1. means yes, i.e. attention exists.  0. means no.\n            - tau: temperature hyperparameter of gumbel softmax.\n                # ! Need annealing though training. 1 is considered really soft at the beginning.\n            - hard: hard or soft sampling.\n                # True means one-hot sample for evaluation.\n                # False means soft sample for reparametrization.\n            - device: 'cuda:0' or 'cpu'.\n        outputs:\n            - x: encoded vertices representing pedestrians of one sample. 
\n                # (bsz, nnode, d_model) # same as input\n            - sampled_edges: sampled adjacency matrix at the last column.\n                # (bsz, nnode <target>, nhead_edges, neighbor_node)\n                # * where neighbor_node = nnode+1 <neighbor> for ghost==True,\n                # * and   neighbor_node = nnode   <neighbor> for ghost==False.\n            - edge_multinomial: multinomial where sampled_edges are sampled.\n                # (bsz, nnode <target>, nhead_edges, neighbor_node)\n            - attn_weights: attention weights during self-attention for nodes x.\n                # (nlayer, bsz, nhead, nnode <target>, neighbor_node)\n        \"\"\"\n        if self.nhead_edges != 0:\n            # edge_multinomial # (bsz, nnode <target>, nhead_edges, neighbor_node)\n            # sampled_edges    # (bsz, nnode <target>, nhead_edges, neighbor_node)\n            # ! Remember here edges does not include self edges automatically.\n            edge_multinomial, sampled_edges = \\\n                self.edge_selector(x, A, attn_mask, tau=tau, hard=hard, device=device)\n        else:\n            # full connectivity\n            bsz, nnode = attn_mask.shape[0], attn_mask.shape[1]\n            sampled_edges = torch.ones(bsz, nnode, 1, nnode).to(device) * attn_mask.unsqueeze(2)\n            edge_multinomial = torch.ones(bsz, nnode, 1, nnode).to(device) * attn_mask.unsqueeze(2) # fake edge_multinomial, meaningless\n\n        attn_weights_list = []\n        x = self.node_embedding(x)\n        for i in range(self.nlayer):\n            x, attn_weights_layer = self.node_encoder_layers[i](x, sampled_edges, attn_mask, device=device)\n            attn_weights_list.append(attn_weights_layer)\n            # x (bsz, nnode, d_model)\n            # attn_weights_layer: (bsz, nhead, nnode <target>, neighbor_node)\n        attn_weights = torch.stack(attn_weights_list, dim=0) # (nlayer, bsz, nhead, nnode <target>, neighbor_node)\n        return x, sampled_edges, edge_multinomial, attn_weights\n"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/mha.py",
    "content": "import torch\nfrom torch.nn.functional import softmax, dropout, linear\nfrom torch import Tensor\nfrom typing import Optional, Tuple\nfrom torch.nn import Module, Parameter\n# from torch.nn.modules.linear import _LinearWithBias\nfrom torch.nn import Linear\nfrom torch.nn.init import xavier_uniform_, constant_\nfrom torch.overrides import has_torch_function\n\n\ndef multi_head_attention_forward(\n    query: Tensor,\n    key: Tensor,\n    value: Tensor,\n    embed_dim_to_check: int,\n    num_heads: int,\n    in_proj_weight: Tensor,\n    in_proj_bias: Optional[Tensor],\n    bias_k: Optional[Tensor],\n    bias_v: Optional[Tensor],\n    add_zero_attn: bool,\n    dropout_p: float,\n    out_proj_weight: Tensor,\n    out_proj_bias: Optional[Tensor],\n    training: bool = True,\n    key_padding_mask: Optional[Tensor] = None,\n    need_weights: bool = True,\n    attn_mask: Optional[Tensor] = None,\n    use_separate_proj_weight: bool = False,\n    q_proj_weight: Optional[Tensor] = None,\n    k_proj_weight: Optional[Tensor] = None,\n    v_proj_weight: Optional[Tensor] = None,\n    static_k: Optional[Tensor] = None,\n    static_v: Optional[Tensor] = None,\n) -> Tuple[Tensor, Optional[Tensor]]:\n    tens_ops = (query, key, value, in_proj_weight, in_proj_bias, bias_k, bias_v, out_proj_weight, out_proj_bias)\n    if has_torch_function(tens_ops):\n        return handle_torch_function(\n            multi_head_attention_forward,\n            tens_ops,\n            query,\n            key,\n            value,\n            embed_dim_to_check,\n            num_heads,\n            in_proj_weight,\n            in_proj_bias,\n            bias_k,\n            bias_v,\n            add_zero_attn,\n            dropout_p,\n            out_proj_weight,\n            out_proj_bias,\n            training=training,\n            key_padding_mask=key_padding_mask,\n            need_weights=need_weights,\n            attn_mask=attn_mask,\n            use_separate_proj_weight=use_separate_proj_weight,\n            q_proj_weight=q_proj_weight,\n            k_proj_weight=k_proj_weight,\n            v_proj_weight=v_proj_weight,\n            static_k=static_k,\n            static_v=static_v,\n        )\n    tgt_len, bsz, embed_dim = query.size()\n    assert embed_dim == embed_dim_to_check\n    assert key.size(0) == value.size(0) and key.size(1) == value.size(1)\n\n    if isinstance(embed_dim, torch.Tensor):\n        head_dim = embed_dim.div(num_heads, rounding_mode='trunc')\n    else:\n        head_dim = embed_dim // num_heads\n    assert head_dim * num_heads == embed_dim, \"embed_dim must be divisible by num_heads\"\n    scaling = float(head_dim) ** -0.5\n\n    if not use_separate_proj_weight:\n        if (query is key or torch.equal(query, key)) and (key is value or torch.equal(key, value)):\n            q, k, v = linear(query, in_proj_weight, in_proj_bias).chunk(3, dim=-1)\n\n        elif key is value or torch.equal(key, value):\n            _b = in_proj_bias\n            _start = 0\n            _end = embed_dim\n            _w = in_proj_weight[_start:_end, :]\n            if _b is not None:\n                _b = _b[_start:_end]\n            q = linear(query, _w, _b)\n\n            if key is None:\n                assert value is None\n                k = None\n                v = None\n            else:\n                _b = in_proj_bias\n                _start = embed_dim\n                _end = None\n                _w = in_proj_weight[_start:, :]\n                if _b is not None:\n                    _b = 
_b[_start:]\n                k, v = linear(key, _w, _b).chunk(2, dim=-1)\n        else:\n            _b = in_proj_bias\n            _start = 0\n            _end = embed_dim\n            _w = in_proj_weight[_start:_end, :]\n            if _b is not None:\n                _b = _b[_start:_end]\n            q = linear(query, _w, _b)\n            _b = in_proj_bias\n            _start = embed_dim\n            _end = embed_dim * 2\n            _w = in_proj_weight[_start:_end, :]\n            if _b is not None:\n                _b = _b[_start:_end]\n            k = linear(key, _w, _b)\n            _b = in_proj_bias\n            _start = embed_dim * 2\n            _end = None\n            _w = in_proj_weight[_start:, :]\n            if _b is not None:\n                _b = _b[_start:]\n            v = linear(value, _w, _b)\n    else:\n        q_proj_weight_non_opt = torch.jit._unwrap_optional(q_proj_weight)\n        len1, len2 = q_proj_weight_non_opt.size()\n        assert len1 == embed_dim and len2 == query.size(-1)\n\n        k_proj_weight_non_opt = torch.jit._unwrap_optional(k_proj_weight)\n        len1, len2 = k_proj_weight_non_opt.size()\n        assert len1 == embed_dim and len2 == key.size(-1)\n\n        v_proj_weight_non_opt = torch.jit._unwrap_optional(v_proj_weight)\n        len1, len2 = v_proj_weight_non_opt.size()\n        assert len1 == embed_dim and len2 == value.size(-1)\n\n        if in_proj_bias is not None:\n            q = linear(query, q_proj_weight_non_opt, in_proj_bias[0:embed_dim])\n            k = linear(key, k_proj_weight_non_opt, in_proj_bias[embed_dim : (embed_dim * 2)])\n            v = linear(value, v_proj_weight_non_opt, in_proj_bias[(embed_dim * 2) :])\n        else:\n            q = linear(query, q_proj_weight_non_opt, in_proj_bias)\n            k = linear(key, k_proj_weight_non_opt, in_proj_bias)\n            v = linear(value, v_proj_weight_non_opt, in_proj_bias)\n    q = q * scaling\n\n    if attn_mask is not None:\n        assert (\n            attn_mask.dtype == torch.float32\n            or attn_mask.dtype == torch.float64\n            or attn_mask.dtype == torch.float16\n            or attn_mask.dtype == torch.uint8\n            or attn_mask.dtype == torch.bool\n        ), \"Only float, byte, and bool types are supported for attn_mask, not {}\".format(attn_mask.dtype)\n        if attn_mask.dtype == torch.uint8:\n            warnings.warn(\"Byte tensor for attn_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.\")\n            attn_mask = attn_mask.to(torch.bool)\n\n        if attn_mask.dim() == 2:\n            attn_mask = attn_mask.unsqueeze(0)\n            if list(attn_mask.size()) != [1, query.size(0), key.size(0)]:\n                raise RuntimeError(\"The size of the 2D attn_mask is not correct.\")\n        elif attn_mask.dim() == 3:\n            if list(attn_mask.size()) != [bsz * num_heads, query.size(0), key.size(0)]:\n                raise RuntimeError(\"The size of the 3D attn_mask is not correct.\")\n        else:\n            raise RuntimeError(\"attn_mask's dimension {} is not supported\".format(attn_mask.dim()))\n\n    if key_padding_mask is not None and key_padding_mask.dtype == torch.uint8:\n        warnings.warn(\n            \"Byte tensor for key_padding_mask in nn.MultiheadAttention is deprecated. 
Use bool tensor instead.\"\n        )\n        key_padding_mask = key_padding_mask.to(torch.bool)\n\n    if bias_k is not None and bias_v is not None:\n        if static_k is None and static_v is None:\n            k = torch.cat([k, bias_k.repeat(1, bsz, 1)])\n            v = torch.cat([v, bias_v.repeat(1, bsz, 1)])\n            if attn_mask is not None:\n                attn_mask = pad(attn_mask, (0, 1))\n            if key_padding_mask is not None:\n                key_padding_mask = pad(key_padding_mask, (0, 1))\n        else:\n            assert static_k is None, \"bias cannot be added to static key.\"\n            assert static_v is None, \"bias cannot be added to static value.\"\n    else:\n        assert bias_k is None\n        assert bias_v is None\n\n    q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)\n    if k is not None:\n        k = k.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)\n    if v is not None:\n        v = v.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)\n\n    if static_k is not None:\n        assert static_k.size(0) == bsz * num_heads\n        assert static_k.size(2) == head_dim\n        k = static_k\n\n    if static_v is not None:\n        assert static_v.size(0) == bsz * num_heads\n        assert static_v.size(2) == head_dim\n        v = static_v\n\n    src_len = k.size(1)\n\n    if key_padding_mask is not None:\n        assert key_padding_mask.size(0) == bsz\n        assert key_padding_mask.size(1) == src_len\n\n    if add_zero_attn:\n        src_len += 1\n        k = torch.cat([k, torch.zeros((k.size(0), 1) + k.size()[2:], dtype=k.dtype, device=k.device)], dim=1)\n        v = torch.cat([v, torch.zeros((v.size(0), 1) + v.size()[2:], dtype=v.dtype, device=v.device)], dim=1)\n        if attn_mask is not None:\n            attn_mask = pad(attn_mask, (0, 1))\n        if key_padding_mask is not None:\n            key_padding_mask = pad(key_padding_mask, (0, 1))\n\n    attn_output_weights = torch.bmm(q, k.transpose(1, 2))\n    assert list(attn_output_weights.size()) == [bsz * num_heads, tgt_len, src_len]\n\n    if attn_mask is not None:\n        if attn_mask.dtype == torch.bool:\n            attn_output_weights.masked_fill_(attn_mask, float(\"-inf\"))\n\n    if key_padding_mask is not None:\n        attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)\n        attn_output_weights = attn_output_weights.masked_fill(\n            key_padding_mask.unsqueeze(1).unsqueeze(2),\n            float(\"-inf\"),\n        )\n        attn_output_weights = attn_output_weights.view(bsz * num_heads, tgt_len, src_len)\n\n    attn_output_weights = softmax(attn_output_weights, dim=-1)\n    if attn_mask is not None:\n        if attn_mask.dtype == torch.float32 \\\n            or attn_mask.dtype == torch.float64 \\\n            or attn_mask.dtype == torch.float16:\n            attn_output_weights = attn_output_weights * attn_mask\n            attn_output_weights = attn_output_weights / (attn_output_weights.sum(-1).unsqueeze(-1)+1e-10)\n    attn_output_weights = dropout(attn_output_weights, p=dropout_p, training=training)\n\n    attn_output = torch.bmm(attn_output_weights, v)\n    assert list(attn_output.size()) == [bsz * num_heads, tgt_len, head_dim]\n    attn_output_heads = attn_output.reshape(bsz, num_heads, tgt_len, head_dim)\n    attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)\n    attn_output = linear(attn_output, out_proj_weight, out_proj_bias)\n\n    if 
need_weights:\n        attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)\n        return attn_output, attn_output_weights, attn_output_heads\n\n    else:\n        return attn_output, None, attn_output_heads\n\n\nclass VanillaMultiheadAttention(Module):\n    bias_k: Optional[torch.Tensor]\n    bias_v: Optional[torch.Tensor]\n\n    def __init__(self, embed_dim, num_heads, dropout=0., bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None):\n        super(VanillaMultiheadAttention, self).__init__()\n        self.embed_dim = embed_dim\n        self.kdim = kdim if kdim is not None else embed_dim\n        self.vdim = vdim if vdim is not None else embed_dim\n        self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim\n\n        self.num_heads = num_heads\n        self.dropout = dropout\n        self.head_dim = embed_dim // num_heads\n        assert self.head_dim * num_heads == self.embed_dim, \"embed_dim must be divisible by num_heads\"\n\n        if self._qkv_same_embed_dim is False:\n            self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim))\n            self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim))\n            self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim))\n            self.register_parameter('in_proj_weight', None)\n        else:\n            self.in_proj_weight = Parameter(torch.empty(3 * embed_dim, embed_dim))\n            self.register_parameter('q_proj_weight', None)\n            self.register_parameter('k_proj_weight', None)\n            self.register_parameter('v_proj_weight', None)\n\n        if bias:\n            self.in_proj_bias = Parameter(torch.empty(3 * embed_dim))\n        else:\n            self.register_parameter('in_proj_bias', None)\n        # self.out_proj = _LinearWithBias(embed_dim, embed_dim)\n        self.out_proj = Linear(embed_dim, embed_dim)\n\n        if add_bias_kv:\n            self.bias_k = Parameter(torch.empty(1, 1, embed_dim))\n            self.bias_v = Parameter(torch.empty(1, 1, embed_dim))\n        else:\n            self.bias_k = self.bias_v = None\n\n        self.add_zero_attn = add_zero_attn\n\n        self._reset_parameters()\n\n    def _reset_parameters(self):\n        if self._qkv_same_embed_dim:\n            xavier_uniform_(self.in_proj_weight)\n        else:\n            xavier_uniform_(self.q_proj_weight)\n            xavier_uniform_(self.k_proj_weight)\n            xavier_uniform_(self.v_proj_weight)\n\n        if self.in_proj_bias is not None:\n            constant_(self.in_proj_bias, 0.)\n            constant_(self.out_proj.bias, 0.)\n        if self.bias_k is not None:\n            xavier_normal_(self.bias_k)\n        if self.bias_v is not None:\n            xavier_normal_(self.bias_v)\n\n    def __setstate__(self, state):\n        if '_qkv_same_embed_dim' not in state:\n            state['_qkv_same_embed_dim'] = True\n\n        super(VanillaMultiheadAttention, self).__setstate__(state)\n\n    def forward(self, query: Tensor, key: Tensor, value: Tensor, key_padding_mask: Optional[Tensor] = None,\n                need_weights: bool = True, attn_mask: Optional[Tensor] = None) -> Tuple[Tensor, Optional[Tensor]]:\n        if not self._qkv_same_embed_dim:\n            return multi_head_attention_forward(\n                query, key, value, self.embed_dim, self.num_heads,\n                self.in_proj_weight, self.in_proj_bias,\n                self.bias_k, self.bias_v, self.add_zero_attn,\n                
self.dropout, self.out_proj.weight, self.out_proj.bias,\n                training=self.training,\n                key_padding_mask=key_padding_mask, need_weights=need_weights,\n                attn_mask=attn_mask, use_separate_proj_weight=True,\n                q_proj_weight=self.q_proj_weight, k_proj_weight=self.k_proj_weight,\n                v_proj_weight=self.v_proj_weight)\n        else:\n            return multi_head_attention_forward(\n                query, key, value, self.embed_dim, self.num_heads,\n                self.in_proj_weight, self.in_proj_bias,\n                self.bias_k, self.bias_v, self.add_zero_attn,\n                self.dropout, self.out_proj.weight, self.out_proj.bias,\n                training=self.training,\n                key_padding_mask=key_padding_mask, need_weights=need_weights,\n                attn_mask=attn_mask)"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/node_encoder_layer_ghost.py",
    "content": "import torch\nimport torch.nn as nn\nfrom src.gumbel_social_transformer.mha import VanillaMultiheadAttention\nfrom src.gumbel_social_transformer.utils import _get_activation_fn\n\nclass NodeEncoderLayer(nn.Module):\n    r\"\"\"Ghost version\"\"\"\n    def __init__(self, d_model, nhead, dim_feedforward=512, dropout=0.1, activation=\"relu\", attn_mech='vanilla'):\n        super(NodeEncoderLayer, self).__init__()\n        self.attn_mech = attn_mech\n        if self.attn_mech == 'vanilla':\n            self.self_attn = VanillaMultiheadAttention(d_model, nhead, dropout=dropout)\n            self.norm_node = nn.LayerNorm(d_model)\n        else:\n            raise RuntimeError('NodeEncoderLayer currently only supports vanilla mode.')\n        self.norm1_node = nn.LayerNorm(d_model)\n        self.linear1 = nn.Linear(d_model, dim_feedforward)\n        self.linear2 = nn.Linear(dim_feedforward, d_model)\n        self.dropout = nn.Dropout(dropout)\n        self.dropout1 = nn.Dropout(dropout)\n        self.dropout2 = nn.Dropout(dropout)\n        self.activation = _get_activation_fn(activation)\n        self.nhead = nhead\n        self.d_model = d_model\n    \n    def forward(self, x, sampled_edges, attn_mask, device=\"cuda:0\"):\n        \"\"\"\n        Encode pedestrian edge with node information.\n        inputs:\n            - x: vertices representing pedestrians of one sample. \n                # bsz is batch size corresponding to Transformer setting. it corresponds to time steps in pedestrian setting.\n                # (bsz, nnode, d_model)\n            - sampled_edges: sampled adjacency matrix with ghost at the last column.\n                # (time_step, nnode <target>, nhead_edges, neighbor_node)\n                # where neighbor_node = nnode+1 <neighbor>\n            - attn_mask: attention mask provided in advance.\n                # (bsz, nnode <target>, nnode <neighbor>)\n                # row -> target, col -> neighbor\n                # 1. means yes, i.e. attention exists.  0. means no.\n            - device: 'cuda:0' or 'cpu'.\n        outputs:\n            - x: encoded vertices. # (bsz, nnode, d_model)\n            - attn_weights: attention weights. # (bsz, nhead, nnode <target>, neighbor_node)\n                # where neighbor_node = nnode+1 <neighbor>\n        \"\"\"\n        if self.attn_mech == 'vanilla':\n            bsz = x.shape[0]\n            attn_mask_ped = (attn_mask.sum(-1) > 0).float().unsqueeze(-1).to(device) # (bsz, nnode, 1)\n            x = self.norm_node(x)\n            x = x * attn_mask_ped\n            x_ghost = torch.zeros(bsz, 1, self.d_model).to(device)\n            # add zero vector for ghost at every node encoder layer.\n            x_w_ghost = torch.cat((x, x_ghost), dim=1) # (bsz, nnode+1, d_model) # x with ghost\n            x_w_ghost_perm = x_w_ghost.permute(1,0,2) # (nnode+1, bsz, d_model)\n            x_perm = x.permute(1, 0, 2) # (nnode, bsz, d_model)\n            adj_mat = sampled_edges.sum(2) # (bsz, nnode, nnode+1)\n            # adj_mat = torch.cat([adj_mat for _ in range(self.nhead)], dim=0) # (nhead*bsz, target_node, neighbor_node)\n            adj_mat = torch.stack([adj_mat for _ in range(self.nhead)], dim=1) # (bsz, nhead, target_node, neighbor_node)\n            adj_mat = adj_mat.view(bsz*self.nhead, adj_mat.shape[2], adj_mat.shape[3]) # (bsz*nhead, target_node, neighbor_node) # ! 
really important bug fix\n            x2, attn_weights, _ = self.self_attn(x_perm, x_w_ghost_perm, x_w_ghost_perm, attn_mask=adj_mat) \n            # inputs: (L, N, E), (S, N, E), (S, N, E), (N*nhead, L, S)\n            # x2: (nnode, bsz, d_model); attn_weights: (bsz, nhead, target_node, neighbor_node)\n            x2 = x2.permute(1, 0, 2) # (bsz, node, d_model)\n            x = x + self.dropout(x2)\n        else:\n            raise RuntimeError('NodeEncoderLayer currently only supports vanilla mode.')\n        \n        x2 = self.norm1_node(x)\n        x2 = self.dropout1(self.activation(self.linear1(x2)))\n        x2 = self.dropout2(self.linear2(x2))\n        x = x + x2\n        return x, attn_weights"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/node_encoder_layer_no_ghost.py",
    "content": "import torch\nimport torch.nn as nn\nfrom gst_updated.src.gumbel_social_transformer.mha import VanillaMultiheadAttention\nfrom gst_updated.src.gumbel_social_transformer.utils import _get_activation_fn\n\nclass NodeEncoderLayer(nn.Module):\n    r\"\"\"No ghost version\"\"\"\n    def __init__(self, d_model, nhead, dim_feedforward=512, dropout=0.1, activation=\"relu\", attn_mech='vanilla'):\n        super(NodeEncoderLayer, self).__init__()\n        self.attn_mech = attn_mech\n        if self.attn_mech == 'vanilla':\n            self.self_attn = VanillaMultiheadAttention(d_model, nhead, dropout=dropout)\n            self.norm_node = nn.LayerNorm(d_model)\n        else:\n            raise RuntimeError('NodeEncoderLayer currently only supports vanilla mode.')\n        self.norm1_node = nn.LayerNorm(d_model)\n        self.linear1 = nn.Linear(d_model, dim_feedforward)\n        self.linear2 = nn.Linear(dim_feedforward, d_model)\n        self.dropout = nn.Dropout(dropout)\n        self.dropout1 = nn.Dropout(dropout)\n        self.dropout2 = nn.Dropout(dropout)\n        self.activation = _get_activation_fn(activation)\n        self.nhead = nhead\n    \n    def forward(self, x, sampled_edges, attn_mask, device=\"cuda:0\"):\n        \"\"\"\n        Encode pedestrian edge with node information.\n        inputs:\n            - x: vertices representing pedestrians of one sample. \n                # bsz is batch size corresponding to Transformer setting. it corresponds to time steps in pedestrian setting.\n                # (bsz, nnode, d_model)\n            - sampled_edges: sampled adjacency matrix with ghost at the last column.\n                # (time_step, nnode <target>, nhead_edges, neighbor_node)\n                # where neighbor_node = nnode <neighbor>\n            - attn_mask: attention mask provided in advance.\n                # (bsz, nnode <target>, nnode <neighbor>)\n                # row -> target, col -> neighbor\n                # 1. means yes, i.e. attention exists.  0. means no.\n            - device: 'cuda:0' or 'cpu'.\n        outputs:\n            - x: encoded vertices. # (bsz, nnode, d_model)\n            - attn_weights: attention weights. # (bsz, nhead, nnode <target>, neighbor_node)\n                # where neighbor_node = nnode <neighbor>\n        \"\"\"\n        if self.attn_mech == 'vanilla':\n            bsz = x.shape[0]\n            attn_mask_ped = (attn_mask.sum(-1) > 0).float().unsqueeze(-1).to(device) # (bsz, nnode, 1)\n            x = self.norm_node(x)\n            x = x * attn_mask_ped\n            x_perm = x.permute(1, 0, 2) # (nnode, bsz, d_model)\n            adj_mat = sampled_edges.sum(2) # (bsz, nnode, nnode)\n            # adj_mat = torch.cat([adj_mat for _ in range(self.nhead)], dim=0) # (nhead*bsz, target_node, neighbor_node)\n            adj_mat = torch.stack([adj_mat for _ in range(self.nhead)], dim=1) # (bsz, nhead, target_node, neighbor_node)\n            adj_mat = adj_mat.view(bsz*self.nhead, adj_mat.shape[2], adj_mat.shape[3]) # (bsz*nhead, target_node, neighbor_node) # ! 
really important bug fix\n            x2, attn_weights, _ = self.self_attn(x_perm, x_perm, x_perm, attn_mask=adj_mat) \n            # inputs: (L, N, E), (S, N, E), (S, N, E), (N*nhead, L, S)\n            # x2: (nnode, bsz, d_model); attn_weights: (bsz, nhead, target_node, neighbor_node)\n            x2 = x2.permute(1, 0, 2) # (bsz, node, d_model)\n            x = x + self.dropout(x2)\n        else:\n            raise RuntimeError('NodeEncoderLayer currently only supports vanilla mode.')\n        \n        x2 = self.norm1_node(x)\n        x2 = self.dropout1(self.activation(self.linear1(x2)))\n        x2 = self.dropout2(self.linear2(x2))\n        x = x + x2\n        return x, attn_weights"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/pathhack.py",
    "content": "import sys\nfrom os.path import abspath, join, split\nfile_path = split(abspath(__file__))[0]\npkg_path = join(file_path, '../..')\nsys.path.append(pkg_path)\n"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/st_model.py",
    "content": "\"\"\"\nst_model.py\nA multi-pedestrian trajectory prediction model \nthat follows spatial -> temporal encoding manners.\n\"\"\"\n\nimport torch\nimport torch.nn as nn\n\n# from src.social_transformer.social_transformer import SpatialSocialTransformerEncoder\nfrom gst_updated.src.gumbel_social_transformer.gumbel_social_transformer import GumbelSocialTransformer\nfrom gst_updated.src.gumbel_social_transformer.temporal_convolution_net import TemporalConvolutionNet\n\n\ndef offset_error_square_full_partial(x_pred, x_target, loss_mask_ped, loss_mask_pred_seq):\n    \"\"\"\n    Offset Error Square between positions.\n    # * average_offset_error and final_offset_error in utils.py are computed for full pedestrians.\n    inputs:\n        - x_pred\n            # prediction on pedestrian displacements in prediction period.\n            # (batch, pred_seq_len, node, motion_dim)\n            # batch = 1\n        - x_target\n            # ground truth pedestrian displacements in prediction period.\n            # (batch, pred_seq_len, node, motion_dim)\n        - loss_mask_ped\n            # loss mask on each pedestrian. 1 means the pedestrian is valid, and 0 means not valid.\n            # * equivalent as loss_mask_rel_full_partial in st_model.\n            # * Used to filter out the ones we do not predict. (disappear early, not appear until prediction period.)\n            # (batch, node)\n        - loss_mask_pred_seq\n            # loss_mask_rel in prediction sequence. float32 tensor: (batch, num_peds, pred_seq_len)\n    outputs:\n        - offset_error_sq: offset error for each pedestrians. \n            # Already times eventual_loss_mask before output. shape: (pred_seq_len, node)\n        - eventual_loss_mask: eventual loss mask on each pedestrian and each prediction step. 
\n            # shape: (pred_seq_len, node)\n    \"\"\"\n    assert x_pred.shape[0] == loss_mask_ped.shape[0] == loss_mask_pred_seq.shape[0] == 1 # batch\n    assert x_pred.shape[1] == x_target.shape[1] == loss_mask_pred_seq.shape[2] # pred_seq_len\n    assert x_pred.shape[2] == x_target.shape[2] == loss_mask_ped.shape[1] == loss_mask_pred_seq.shape[1] # num_peds\n    assert x_pred.shape[3] == x_target.shape[3] == 2 # motion dim\n    # mask out invalid values\n    loss_mask_rel_pred = loss_mask_pred_seq.permute(0, 2, 1).unsqueeze(-1) # (batch, pred_seq_len, num_peds, 1)\n    x_pred_m = x_pred * loss_mask_rel_pred # (batch, pred_seq_len, node, motion_dim)\n    x_target_m = x_target * loss_mask_rel_pred\n    x_pred_m = x_pred_m * loss_mask_ped.unsqueeze(1).unsqueeze(-1) # (batch, pred_seq_len, node, motion_dim)\n    x_target_m = x_target_m * loss_mask_ped.unsqueeze(1).unsqueeze(-1) # (batch, pred_seq_len, node, motion_dim)\n\n    pos_pred = torch.cumsum(x_pred_m, dim=1)\n    pos_target = torch.cumsum(x_target_m, dim=1)\n    offset_error_sq = (((pos_pred-pos_target)**2.).sum(3))[0] # (pred_seq_len, node)\n\n    eventual_loss_mask = loss_mask_rel_pred[0,:,:,0] * loss_mask_ped[0] # (pred_seq_len, node)\n    offset_error_sq = offset_error_sq * eventual_loss_mask\n\n    return offset_error_sq, eventual_loss_mask\n\n\n\ndef negative_log_likelihood_full_partial(gaussian_params, x_target, loss_mask_ped, loss_mask_pred_seq):\n    \"\"\"\n    Compute negative log likelihood of gaussian parameters.\n    inputs:\n        - gaussian_params: tuple.\n            - mu: (batch, pred_seq_len, node, 2)\n            - sx: (batch, pred_seq_len, node, 1)\n            - sy: (batch, pred_seq_len, node, 1)\n            - corr: (batch, pred_seq_len, node, 1)\n        - x_target\n            # ground truth pedestrian displacements in prediction period.\n            # (batch, pred_seq_len, node, motion_dim)\n        - loss_mask_ped\n            # loss mask on each pedestrian. 1 means the pedestrian is valid, and 0 means not valid.\n            # * equivalent as loss_mask_rel_full_partial in st_model.\n            # * Used to filter out the ones we do not predict. (disappear early, not appear until prediction period.)\n            # (batch, node)\n        - loss_mask_pred_seq\n            # loss_mask_rel in prediction sequence. float32 tensor: (batch, num_peds, pred_seq_len)\n    outputs:\n        - prob_loss: (pred_seq_len, node)\n        - eventual_loss_mask: eventual loss mask on each pedestrian and each prediction step. 
\n            # shape: (pred_seq_len, node)\n    \"\"\"\n\n    mu, sx, sy, corr = gaussian_params\n    \n    loss_mask_rel_pred = loss_mask_pred_seq.permute(0, 2, 1).unsqueeze(-1) # (batch, pred_seq_len, num_peds, 1)\n    mu = mu * loss_mask_rel_pred # (batch, pred_seq_len, node, 2)\n    corr = corr * loss_mask_rel_pred # (batch, pred_seq_len, node, 1)\n    x_target = x_target * loss_mask_rel_pred\n    mu = mu * loss_mask_ped.unsqueeze(1).unsqueeze(-1) # (batch, pred_seq_len, node, 2)\n    corr = corr * loss_mask_ped.unsqueeze(1).unsqueeze(-1) # (batch, pred_seq_len, node, 1)\n    x_target = x_target * loss_mask_ped.unsqueeze(1).unsqueeze(-1) # (batch, pred_seq_len, node, 2)\n    \n    sx = sx * loss_mask_rel_pred + (1.-loss_mask_rel_pred) # sigma should not be zero, so mask with one\n    sy = sy * loss_mask_rel_pred + (1.-loss_mask_rel_pred) # sigma should not be zero, so mask with one\n    sx = sx * loss_mask_ped.unsqueeze(1).unsqueeze(-1) + (1.-loss_mask_ped.unsqueeze(1).unsqueeze(-1)) # (batch, pred_seq_len, node, 1)\n    sy = sy * loss_mask_ped.unsqueeze(1).unsqueeze(-1) + (1.-loss_mask_ped.unsqueeze(1).unsqueeze(-1)) # (batch, pred_seq_len, node, 1)\n    sigma = torch.cat((sx, sy), dim=3)\n\n    x_target_norm = (x_target-mu)/sigma\n    nx, ny = x_target_norm[:,:,:,0:1], x_target_norm[:,:,:,1:2]\n    loss_term_1 = torch.log(1.-corr**2.)/2.+torch.log(sx)+torch.log(sy)\n    loss_term_2 = (nx**2.-2.*corr*nx*ny+ny**2.)/(2.*(1.-corr**2.))\n    prob_loss = (loss_term_1+loss_term_2).squeeze(3).squeeze(0) # (pred_seq_len, node)\n    eventual_loss_mask = loss_mask_rel_pred[0,:,:,0] * loss_mask_ped[0] # (pred_seq_len, node)\n    prob_loss = prob_loss * eventual_loss_mask\n\n    return prob_loss, eventual_loss_mask\n\n\n\nclass st_model(nn.Module):\n\n    def __init__(self, args, device='cuda:0'):\n        \"\"\"\n        Initialize spatial and temporal encoding components.\n        inputs:\n            - args: arguments from user input. Here only list arguments used in st_model.\n                (in __init__)\n                ### * in function __init__() * ###\n                - spatial # spatial encoding methods. options: rel_conv.\n                - temporal # temporal encoding methods. options: lstm.\n                - motion_dim # pedestrian motion is 2D, so motion_dim is always 2.\n                - output_dim # 5 means probabilistic output (mu_x, mu_y, sigma_x, sigma_y, corr)\n                # 2 means deterministic output (x, y) # ! may not do output_dim=2 in our work\n                - embedding_size # size of pedstrian embeddings after spatial encoding.\n                - spatial_num_heads # number of heads for multi-head attention\n                # mechanism in spatial encoding.\n                - spatial_beta # beta used in skip connection as a percentage of original input.\n                # default can be None. If beta is not None, beta = 0.9 means\n                # out <- 0.9 * x + 0.1 * out\n                - lstm_hidden_size # hidden size of lstm.\n                - lstm_num_layers # number of layers of lstm.\n                - lstm_batch_first # batch first or not for lstm. \n                - lstm_dropout # dropout rate of lstm.\n                - decode_style # 'recursive' or 'readout'.\n                # 'recursive' means recursively encode and decode.\n                # 'readout' means encoding and decoding are separated.\n                - detach_sample # bool value on whether detach samples from gaussian_params or not.\n                # detach_sample=False is default. 
It means using reparametrization trick and enable gradient flow.\n                # detach_sample=True means to disable reparametrization trick.\n                # ! To add\n                # ! args.spatial_num_heads_edges\n                # ! args.ghost\n                ### * in function foward() * ###\n                - pred_seq_len # length of prediction period: 12\n\n            - device: 'cuda:0' or 'cpu'.\n        \"\"\"\n        super(st_model, self).__init__()\n        ## spatial\n        if args.spatial == 'gumbel_social_transformer':\n            # self.node_embedding = nn.Linear(args.motion_dim, args.embedding_size).to(device)\n            # self.edge_embedding = nn.Linear(args.motion_dim, 2 * args.embedding_size).to(device)\n            self.gumbel_social_transformer = GumbelSocialTransformer(\n                args.motion_dim,\n                args.embedding_size,\n                args.spatial_num_heads,\n                args.spatial_num_heads_edges,\n                args.spatial_num_layers,\n                dim_feedforward=128,\n                dim_hidden=32,\n                dropout=0.1,\n                activation=\"relu\",\n                attn_mech=\"vanilla\",\n                ghost=args.ghost,\n            ).to(device)\n            # self.spatial_social_transformer = SpatialSocialTransformerEncoder(args.embedding_size, args.spatial_num_heads, args.spatial_num_layers, \\\n            #     dim_feedforward=256, dim_hidden=32, dropout=0.1, activation=\"relu\", attn_mech=\"vanilla\").to(device)\n        else:\n            raise RuntimeError('The spatial component is not found.')\n        ## temporal\n        if args.temporal == 'lstm' or args.temporal == 'faster_lstm':\n            self.lstm = nn.LSTM(\n                    input_size=args.embedding_size,\n                    hidden_size=args.lstm_hidden_size,\n                    num_layers=args.lstm_num_layers,\n                    batch_first=False,\n                    dropout=0.,\n                    bidirectional=False,\n                    ).to(device)\n            self.hidden2pos = nn.Linear(args.lstm_num_layers*args.lstm_hidden_size, args.output_dim).to(device)\n        else:\n            raise RuntimeError('The temporal component is not lstm nor faster_lstm.')\n        ## others\n        self.args = args\n        print(\"new st model\")\n\n    def raw2gaussian(self, prob_raw):\n        \"\"\"\n        Turn raw values into gaussian parameters.\n        inputs:\n            - prob_raw: (batch, time, node, output_dim)\n            - device: 'cuda:0' or 'cpu'.\n        outputs:\n            - gaussian_params: tuple.\n                - mu: (batch, time, node, 2)\n                - sx: (batch, time, node, 1)\n                - sy: (batch, time, node, 1)\n                - corr: (batch, time, node, 1)\n        \"\"\"\n        mu = prob_raw[:,:,:,:2]\n        sx, sy = torch.exp(prob_raw[:,:,:,2:3]), torch.exp(prob_raw[:,:,:,3:4])\n        corr = torch.tanh(prob_raw[:,:,:,4:5])\n        gaussian_params = (mu, sx, sy, corr)\n        return gaussian_params\n\n    def sample_gaussian(self, gaussian_params, device='cuda:0', detach_sample=False, sampling=True):\n        \"\"\"\n        Generate a sample from Gaussian.\n        inputs:\n            - gaussian_params: tuple.\n                - mu: (batch, time, node, 2)\n                - sx: (batch, time, node, 1)\n                - sy: (batch, time, node, 1)\n                - corr: (batch, time, node, 1)\n            - device: 'cuda:0' or 'cpu'\n            - detach_sample: 
Bool. Default False.\n                # Detach is to cut the gradient flow between gaussian_params and the next sample.\n                # detach_sample=True means reparameterization trick is disabled.\n                # detach_sample=False means reparameterization trick is enabled.\n                # ! if it causes error, we need to manually turn detach_sample=False \n                # ! or we have to change args file for val_best before jan 4, 2021.\n            - sampling: \n                # True means sampling. # False means using mu.\n        outputs:\n            - sample: (batch, time, node, 2)\n        \"\"\"\n        mu, sx, sy, corr = gaussian_params\n        if detach_sample:\n            mu, sx, sy, corr = mu.detach(), sx.detach(), sy.detach(), corr.detach()\n        if sampling:\n            sample_unit = torch.empty(mu.shape).normal_().to(device) # N(0,1) with shape (batch, time, node, 2)\n            sample_unit_x, sample_unit_y = sample_unit[:,:,:,0:1], sample_unit[:,:,:,1:2] # (batch, time, node, 1)\n            sample_x = sx*sample_unit_x\n            sample_y = corr*sy*sample_unit_x+((1.-corr**2.)**0.5)*sy*sample_unit_y\n            sample = torch.cat((sample_x, sample_y), dim=3)+mu\n        else:\n            sample = mu\n        return sample\n\n\n    def edge_evolution(self, xt_plus, At, device='cuda:0'):\n        \"\"\"\n        Compute edges at the next time step (At_plus) based on \n        pedestrian displacements at the next time step (xt_plus)\n        and edges at the current time step (At).\n        inputs:\n            - xt_plus: vertices representing pedestrian displacement from t to t+1.\n            # (batch, unit_time, node, motion_dim)\n            - At: edges representing relative position between pedestrians at time t.\n            At(i, j) is the vector pos_i,t - pos_j,t. I.e. the vector from pedestrian j\n            to pedestrian i. \n            # (batch, unit_time, node, node, edge_feat)\n            # batch = unit_time = 1.\n            # edge_feat = 2.\n            - device: 'cuda:0' or 'cpu'.\n        outputs:\n            - At_plus: edges representing relative position between pedestrians at time t.\n            # (batch, unit_time, node, node, edge_feat)\n        \"\"\"\n        # xt_plus # (batch, unit_time, node, motion_dim)\n        # At # (batch, unit_time, node, node, edge_feat)\n        # (batch, unit_time, node, 1, motion_dim) - (batch, unit_time, 1, node, motion_dim)\n        At_plus = At + (xt_plus.unsqueeze(3) - xt_plus.unsqueeze(2))\n        return At_plus\n\n    def forward(self, x, A, attn_mask, loss_mask_rel, tau=1., hard=False, sampling=True, device='cuda:0'):\n        \"\"\"\n        Forward function.\n        inputs:\n            - x\n                # vertices representing pedestrians during observation period.\n                # (batch, obs_seq_len, node, in_feat)\n                # node: number of pedestrians\n                # in_feat: motion_dim, i.e. 2.\n                # Refer to V_obs in src.mgnn.utils.dataset_format().\n            - A\n                # edges representation relationships between pedestrians during observation period.\n                # (batch, obs_seq_len, node, node, edge_feat)\n                # edge_feat: feature dim of edges. if spatial encoding is rel_conv, edge_feat = 2. 
\n                # Refer to A_obs in src.mgnn.utils.dataset_format().\n            - attn_mask\n                # attention mask on pedestrian interactions in observation period.\n                # row -> neighbor, col -> target\n                # Should neighbor affect target?\n                # 1 means yes, i.e. attention exists.  0 means no.\n                # float32 tensor: (batch, obs_seq_len, neighbor_num_peds, target_num_peds)\n            - loss_mask_rel\n                # loss mask on displacement in the whole period\n                # float32 tensor: (batch, num_peds, seq_len)\n                # 1 means the displacement of pedestrian i at time t is valid. 0 means not valid.\n                # If the displacement of pedestrian i at time t is valid,\n                # then position of pedestrian i at time t and t-1 is valid.\n                # If t is zero, then it means position of pedestrian i at time t is valid.\n            - tau: temperature hyperparameter of gumbel softmax.\n                # ! Need annealing though training. 1 is considered really soft at the beginning.\n            - hard: hard or soft sampling.\n                # True means one-hot sample for evaluation.\n                # False means soft sample for reparametrization.\n            - sampling: sample gaussian (True) or use mean for prediction (False).\n            - device: 'cuda:0' or 'cpu'.\n        outputs:\n            # TODO\n        \"\"\"\n        # ! Start writing multiple batches\n        info = {}\n        # processing when missing pedestrians are included\n        batch_size, _, num_peds, _ = x.shape\n        loss_mask_per_pedestrian = (loss_mask_rel.sum(2)==loss_mask_rel.shape[2]).float() # (batch, num_peds)\n        if self.args.only_observe_full_period:\n            attn_mask_single_step = torch.bmm(loss_mask_per_pedestrian.unsqueeze(2), loss_mask_per_pedestrian.unsqueeze(1)) # (batch, num_peds, num_peds)\n            attn_mask = torch.ones(batch_size, self.args.obs_seq_len, num_peds, num_peds).to(device) * attn_mask_single_step.unsqueeze(1) # (batch, obs_seq_len, num_peds, num_peds)\n        ## observation period: spatial\n        if self.args.spatial == 'gumbel_social_transformer':\n            # x_embedding = self.node_embedding(x)[0] # (obs_seq_len, node, d_model)\n            # A_embedding = self.edge_embedding(A)[0] # (obs_seq_len, nnode <neighbor>, nnode <target>, 2*d_model)\n            attn_mask = attn_mask.permute(0, 1, 3, 2) # (batch, obs_seq_len, nnode <target>, nnode <neighbor>)\n            attn_mask_reshaped = attn_mask.reshape(batch_size*self.args.obs_seq_len, num_peds, num_peds)\n            x_reshaped = x.reshape(batch_size*self.args.obs_seq_len, num_peds, -1)\n            A_reshaped = A.reshape(batch_size*self.args.obs_seq_len, num_peds, num_peds, -1) #(batch, obs_seq_len, node, node, edge_feat)\n            # ! attn_mask = attn_mask[0].permute(0,2,1) # (obs_seq_len, nnode <target>, nnode <neighbor>)\n            # ! 
xs, sampled_edges, edge_multinomial, attn_weights = self.gumbel_social_transformer(x[0], A[0], attn_mask, tau=tau, hard=hard, device=device)\n            xs, sampled_edges, edge_multinomial, attn_weights = self.gumbel_social_transformer(x_reshaped, A_reshaped, attn_mask_reshaped, tau=tau, hard=hard, device=device)\n            xs = xs.reshape(batch_size, self.args.obs_seq_len, num_peds, -1) # (batch, obs_seq_len, nnode, embedding_size)\n            info['sampled_edges'], info['edge_multinomial'], info['attn_weights'] = [], [], []\n            sampled_edges = sampled_edges.reshape(batch_size, self.args.obs_seq_len,\n                sampled_edges.shape[1], sampled_edges.shape[2], sampled_edges.shape[3])\n            edge_multinomial = edge_multinomial.reshape(batch_size, self.args.obs_seq_len,\n                edge_multinomial.shape[1], edge_multinomial.shape[2], edge_multinomial.shape[3])\n            attn_weights = attn_weights.reshape(attn_weights.shape[0], batch_size, self.args.obs_seq_len,\n                attn_weights.shape[2], attn_weights.shape[3], attn_weights.shape[4])\n            info['sampled_edges'].append(sampled_edges.detach().to(\"cpu\"))\n            info['edge_multinomial'].append(edge_multinomial.detach().to(\"cpu\"))\n            info['attn_weights'].append(attn_weights.detach().to(\"cpu\")) \n        else:\n            raise RuntimeError(\"The spatial component is not found.\")\n        ## observation period: temporal\n        ht = torch.zeros(self.args.lstm_num_layers, batch_size*num_peds, self.args.lstm_hidden_size).to(device)\n        ct = torch.zeros(self.args.lstm_num_layers, batch_size*num_peds, self.args.lstm_hidden_size).to(device) \n        if self.args.temporal == 'lstm':\n            for tt in range(self.args.obs_seq_len):\n                loss_mask_rel_tt = loss_mask_rel[:,:,tt:tt+1].reshape(-1,1) # (batch*num_peds, 1)\n                xs_tt = xs[:, tt].reshape(batch_size*num_peds, -1).unsqueeze(0)*loss_mask_rel_tt # (1, batch*num_peds, embedding_size)\n                _, (htp, ctp) = self.lstm(xs_tt, (ht, ct)) # tp == tplus\n                ht = htp * loss_mask_rel_tt + ht * (1.-loss_mask_rel_tt)\n                ct = ctp * loss_mask_rel_tt + ct * (1.-loss_mask_rel_tt)\n        elif self.args.temporal == 'faster_lstm':\n            obs_mask = loss_mask_rel[:,:,:self.args.obs_seq_len].permute(0,2,1).unsqueeze(-1) # (batch, obs_seq_len, num_peds,1)\n            xs_masked = xs*obs_mask # (batch, obs_seq_len, num_peds, embedding_size)\n            xs_masked = xs_masked.permute(1,0,2,3).reshape(self.args.obs_seq_len, batch_size*num_peds, -1) # (obs_seq_len, batch*num_peds, embedding_size)\n            _, (ht, ct) = self.lstm(xs_masked, (ht, ct))\n        else:\n            raise RuntimeError('The temporal component is not lstm nor faster_lstm.')\n        if self.args.only_observe_full_period:\n            loss_mask_rel_full_partial = loss_mask_per_pedestrian # (batch, num_peds)\n        else:\n            # We predict motion of pedestrians that show up in observation period, and did not disappear after last observed time step. 
<-> equivalent as only predict pedestrians whose relative position is present at the last observed time step.\n            loss_mask_rel_full_partial = loss_mask_rel[:,:,self.args.obs_seq_len-1] # (batch, num_peds)\n        ht = ht * loss_mask_rel_full_partial.reshape(-1).unsqueeze(-1)\n        ct = ct * loss_mask_rel_full_partial.reshape(-1).unsqueeze(-1)\n        attn_mask_pred = torch.bmm(loss_mask_rel_full_partial.unsqueeze(2), loss_mask_rel_full_partial.unsqueeze(1)).permute(0,2,1) # (batch*unit_time, nnode <target>, nnode <neighbor>)\n        ## prediction period\n        if self.args.decode_style == 'recursive':\n            if self.args.temporal == 'lstm' or self.args.temporal == 'faster_lstm':\n                # ht # (num_layers, node, hidden_size)\n                # ht # (num_layers, batch_size*num_peds, hidden_size) # batch_size*num_peds, sth\n                prob_raw = self.hidden2pos(ht.permute(1,0,2).reshape(batch_size*num_peds, -1)).reshape(batch_size, num_peds, -1).unsqueeze(1) # (batch, unit_time, node, output_dim)\n                gaussian_params = self.raw2gaussian(prob_raw)\n                mu, sx, sy, corr = gaussian_params\n                x_sample = self.sample_gaussian(gaussian_params, device=device, detach_sample=self.args.detach_sample, sampling=sampling)\n                # x_sample: (batch, unit_time, node, motion_dim)\n                # loss_mask_rel_full_partial： (batch, num_peds)\n                x_sample = x_sample * loss_mask_rel_full_partial.unsqueeze(1).unsqueeze(-1) # mask x_sample here for edge evolution\n                # A: (batch, obs_seq_len, node, node, edge_feat)\n                A_sample = self.edge_evolution(x_sample, A[:,-1:], device=device) # (batch, unit_time, node, node, edge_feat)\n                # starts recursion\n                prob_raw_pred, x_sample_pred, A_sample_pred = [], [], []\n                mu_pred, sx_pred, sy_pred, corr_pred = [], [], [], []\n                prob_raw_pred.append(prob_raw)\n                x_sample_pred.append(x_sample)\n                A_sample_pred.append(A_sample)\n                mu_pred.append(mu)\n                sx_pred.append(sx)\n                sy_pred.append(sy)\n                corr_pred.append(corr)\n                for tt in range(1, self.args.pred_seq_len):\n                    if self.args.spatial == 'gumbel_social_transformer':\n                        # * spatial encoding at prediction step tt\n                        # # attn_mask_pred # (batch*unit_time, nnode <target>, nnode <neighbor>)\n                        x_sample_reshaped = x_sample.reshape(batch_size, num_peds, -1) # (batch*unit_time, node, motion_dim)\n                        A_sample_reshaped = A_sample.reshape(batch_size, num_peds, num_peds, -1) # (batch*unit_time, node, node, edge_feat)\n                        xs_tt, sampled_edges, edge_multinomial, attn_weights = \\\n                            self.gumbel_social_transformer(x_sample_reshaped, A_sample_reshaped, attn_mask_pred, \\\n                            tau=tau, hard=hard, device=device)\n                        sampled_edges = sampled_edges.reshape(batch_size, 1,\n                            sampled_edges.shape[1], sampled_edges.shape[2], sampled_edges.shape[3])\n                        edge_multinomial = edge_multinomial.reshape(batch_size, 1,\n                            edge_multinomial.shape[1], edge_multinomial.shape[2], edge_multinomial.shape[3])\n                        attn_weights = attn_weights.reshape(attn_weights.shape[0], batch_size, 1,\n                  
          attn_weights.shape[2], attn_weights.shape[3], attn_weights.shape[4])\n                        info['sampled_edges'].append(sampled_edges.detach().to(\"cpu\"))\n                        info['edge_multinomial'].append(edge_multinomial.detach().to(\"cpu\"))\n                        info['attn_weights'].append(attn_weights.detach().to(\"cpu\"))\n                        # xs_tt: # (batch*unit_time, node, d_model)\n                        # * temporal encoding at prediction step tt\n                        loss_mask_rel_tt = loss_mask_rel_full_partial.reshape(-1,1) # (batch*num_peds,1)\n                        xs_tt = xs_tt.reshape(batch_size*num_peds, -1).unsqueeze(0)*loss_mask_rel_tt # (unit_time, batch*num_peds, embedding_size)\n                        _, (htp, ctp) = self.lstm(xs_tt, (ht, ct)) # tp == tplus\n                        ht = htp * loss_mask_rel_tt + ht * (1.-loss_mask_rel_tt) # (num_layers, batch_size*num_peds, hidden_size)\n                        ct = ctp * loss_mask_rel_tt + ct * (1.-loss_mask_rel_tt) # (num_layers, batch_size*num_peds, hidden_size)\n                        # * prediction at prediction step tt\n                        prob_raw = self.hidden2pos(ht.permute(1,0,2).reshape(batch_size*num_peds, -1)).reshape(batch_size, num_peds, -1).unsqueeze(1) # (batch, unit_time, node, output_dim)\n                        gaussian_params = self.raw2gaussian(prob_raw)\n                        mu, sx, sy, corr = gaussian_params\n                        x_sample = self.sample_gaussian(gaussian_params, device=device, detach_sample=self.args.detach_sample, sampling=sampling)\n                        x_sample = x_sample * loss_mask_rel_full_partial.unsqueeze(1).unsqueeze(-1) # mask x_sample here for edge evolution\n                        A_sample = self.edge_evolution(x_sample, A_sample, device=device) # (batch, unit_time, node, node, edge_feat)\n                        # * append to results\n                        prob_raw_pred.append(prob_raw)\n                        x_sample_pred.append(x_sample)\n                        A_sample_pred.append(A_sample)\n                        mu_pred.append(mu)\n                        sx_pred.append(sx)\n                        sy_pred.append(sy)\n                        corr_pred.append(corr)\n                    else:\n                        raise RuntimeError(\"The spatial component is not found.\")\n                \n                # concatenate predictions together\n                prob_raw_pred = torch.cat(prob_raw_pred, dim=1) # (batch, pred_seq_len, node, output_dim)\n                x_sample_pred = torch.cat(x_sample_pred, dim=1) # (batch, pred_seq_len, node, motion_dim)\n                A_sample_pred = torch.cat(A_sample_pred, dim=1) # (batch, pred_seq_len, node, node, edge_feat)\n                mu_pred = torch.cat(mu_pred, dim=1) # (batch, pred_seq_len, node, 2)\n                sx_pred = torch.cat(sx_pred, dim=1) # (batch, pred_seq_len, node, 1)\n                sy_pred = torch.cat(sy_pred, dim=1) # (batch, pred_seq_len, node, 1)\n                corr_pred = torch.cat(corr_pred, dim=1) # (batch, pred_seq_len, node, 1)\n                gaussian_params_pred = (mu_pred, sx_pred, sy_pred, corr_pred)\n                # gaussian_params_pred = self.raw2gaussian(prob_raw_pred)\n                info['sampled_edges'] = torch.cat(info['sampled_edges'], dim=1) # (batch_size, seq_len-1, nnode <target>, nhead_edges, neighbor_node)\n                info['edge_multinomial'] = torch.cat(info['edge_multinomial'], dim=1) # (batch_size, 
seq_len-1, nnode <target>, nhead_edges, neighbor_node)\n                info['attn_weights'] = torch.cat(info['attn_weights'], dim=2) # (batch_size, nlayer, seq_len-1, nhead, nnode <target>, neighbor_node)\n                info['A_sample_pred'] = A_sample_pred # (batch, pred_seq_len, node, node, edge_feat)\n                info['loss_mask_rel_full_partial'] = loss_mask_rel_full_partial # (batch, num_peds)\n                info['loss_mask_per_pedestrian'] = loss_mask_per_pedestrian # (batch, num_peds)\n                # pack results\n                results = (gaussian_params_pred, x_sample_pred, info)\n                return results\n            else:\n                raise RuntimeError('The temporal component is not lstm nor faster_lstm.')\n        else:\n            raise RuntimeError(\"The decoder style is not recursive.\")"
  },
  {
    "path": "gst_updated/src/gumbel_social_transformer/temperature_scheduler.py",
    "content": "class Temp_Scheduler(object):\n    def __init__(self, total_epochs, curr_temp, base_temp, temp_min=0.33, last_epoch=-1):\n        self.curr_temp = curr_temp\n        self.base_temp = base_temp\n        self.temp_min = temp_min\n        self.last_epoch = last_epoch\n        self.total_epochs = total_epochs\n        self.step(last_epoch + 1)\n\n    def step(self, epoch=None):\n        return self.decay_whole_process()\n\n    def decay_whole_process(self, epoch=None):\n        if epoch is None:\n            epoch = self.last_epoch + 1\n        self.last_epoch = epoch\n        self.curr_temp = (1 - self.last_epoch / self.total_epochs) * (self.base_temp - self.temp_min) + self.temp_min\n        if self.curr_temp < self.temp_min:\n            self.curr_temp = self.temp_min\n        return self.curr_temp"
  },
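  {
    "path": "gst_updated/examples/temperature_scheduler_demo.py",
    "content": "\"\"\"Illustrative sketch only (this file and its path are additions, not part of the original release):\nshows how Temp_Scheduler anneals the Gumbel-Softmax temperature tau over training epochs.\nThe epoch count and temperature values below are assumed for demonstration.\"\"\"\nfrom gst_updated.src.gumbel_social_transformer.temperature_scheduler import Temp_Scheduler\n\ntotal_epochs = 100\nscheduler = Temp_Scheduler(total_epochs, curr_temp=1.0, base_temp=1.0, temp_min=0.33)\n# tau decays linearly from base_temp at epoch 0 toward temp_min at the final epoch\ntaus = [scheduler.decay_whole_process(epoch) for epoch in range(total_epochs)]\nprint(taus[0], taus[-1])"
  },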
  {
    "path": "gst_updated/src/gumbel_social_transformer/temporal_convolution_net.py",
    "content": "import torch.nn as nn\nfrom gst_updated.src.gumbel_social_transformer.utils import _get_clones, _get_activation_fn\n\nclass TemporalConvolutionNet(nn.Module):\n    def __init__(\n        self,\n        in_channels,\n        out_channels,\n        dim_hidden,\n        nconv=2,\n        obs_seq_len=8,\n        pred_seq_len=12,\n        kernel_size=3,\n        stride=1,\n        dropout=0.1,\n        activation=\"relu\",\n        ):\n        super(TemporalConvolutionNet,self).__init__()\n        assert kernel_size % 2 == 1\n        assert nconv >= 2\n        padding = ((kernel_size - 1) // 2, 0)\n        norm_layer = nn.LayerNorm(in_channels)\n        timeconv_layer = nn.Conv2d(in_channels, in_channels, (kernel_size, 1), (stride, 1), padding)\n\n        self.nconv = nconv\n        self.norms = _get_clones(norm_layer, nconv)\n        self.timeconvs = _get_clones(timeconv_layer, nconv)\n\n        self.timelinear1 = nn.Linear(obs_seq_len, pred_seq_len)\n        self.timelinear2 = nn.Linear(pred_seq_len, pred_seq_len)\n        self.timedropout1 = nn.Dropout(dropout)\n        self.timedropout2 = nn.Dropout(dropout)\n\n        self.linear1 = nn.Linear(in_channels, dim_hidden)\n        self.linear2 = nn.Linear(dim_hidden, out_channels)\n        self.dropout = nn.Dropout(dropout)\n\n        self.activation = _get_activation_fn(activation)\n\n    def forward(self, x):\n        # x # (batch, obs_seq_len, node, embedding_size) # in_channels = embedding_size\n        # out # (batch, pred_seq_len, node, output_size) # output_size = 5\n        for i in range(self.nconv):\n            x_norm = self.norms[i](x)\n            x_perm = x_norm.permute(0, 3, 1, 2) # (batch, embedding_size, obs_seq_len, node)\n            x_perm = self.activation(self.timeconvs[i](x_perm)) # (N, C, H, W) # (N, channels, T_{in}, V)\n            x_perm = x_perm.permute(0, 2, 3, 1) # (batch, obs_seq_len, node, embedding_size)\n            x = x + x_perm\n        x = x.permute(0, 2, 3, 1) # (batch, node, embedding_size, obs_seq_len)\n        x = self.timedropout1(self.activation(self.timelinear1(x)))\n        x = self.timedropout2(self.activation(self.timelinear2(x))) # (batch, node, embedding_size, pred_seq_len)\n        x = x.permute(0, 3, 1, 2) # (batch, pred_seq_len, node, embedding_size)\n        x = self.dropout(self.activation(self.linear1(x)))\n        out = self.linear2(x) # (batch, pred_seq_len, node, out_channels)\n        return out"
  },
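  {
    "path": "gst_updated/examples/temporal_convolution_net_demo.py",
    "content": "\"\"\"Illustrative sketch only (this file and its path are additions, not part of the original release):\nruns TemporalConvolutionNet on a random tensor to show the expected input/output shapes.\nThe batch size, node count, and embedding size below are assumed for demonstration.\"\"\"\nimport torch\nfrom gst_updated.src.gumbel_social_transformer.temporal_convolution_net import TemporalConvolutionNet\n\ntcn = TemporalConvolutionNet(in_channels=32, out_channels=5, dim_hidden=64, obs_seq_len=8, pred_seq_len=12)\nx = torch.randn(2, 8, 5, 32)  # (batch, obs_seq_len, node, embedding_size)\nout = tcn(x)\nprint(out.shape)  # torch.Size([2, 12, 5, 5]) -> (batch, pred_seq_len, node, output_dim)"
  },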
  {
    "path": "gst_updated/src/gumbel_social_transformer/utils.py",
    "content": "import copy\nimport torch\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nfrom torch.nn.modules.container import ModuleList\n\n\ndef _get_activation_fn(activation):\n    if activation == \"relu\":\n        return F.relu\n    elif activation == \"gelu\":\n        return F.gelu\n    raise RuntimeError(\"activation should be relu/gelu, not {}\".format(activation))\n\ndef _get_clones(module, N):\n    return ModuleList([copy.deepcopy(module) for i in range(N)])\n\ndef gumbel_softmax(logits, tau=1, hard=False, eps=1e-10):\n    y_soft = gumbel_softmax_sample(logits, tau=tau, eps=eps)\n    if hard:\n        shape = logits.size()\n        _, k = y_soft.data.max(-1)\n        y_hard = torch.zeros(*shape)\n        if y_soft.is_cuda:\n            y_hard = y_hard.cuda()\n        y_hard = y_hard.zero_().scatter_(-1, k.view(shape[:-1] + (1,)), 1.0)\n        y = Variable(y_hard - y_soft.data) + y_soft\n    else:\n        y = y_soft\n    return y\n\ndef gumbel_softmax_sample(logits, tau=1, eps=1e-10):\n    gumbel_noise = sample_gumbel(logits.size(), eps=eps)\n    if logits.is_cuda:\n        gumbel_noise = gumbel_noise.cuda()\n    y = logits + Variable(gumbel_noise)\n    return F.softmax(y / tau, dim=-1)\n\ndef sample_gumbel(shape, eps=1e-10):\n    uniform = torch.rand(shape).float()\n    return - torch.log(eps - torch.log(uniform + eps))"
  },
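  {
    "path": "gst_updated/examples/gumbel_softmax_demo.py",
    "content": "\"\"\"Illustrative sketch only (this file and its path are additions, not part of the original release):\nsamples relaxed and hard (straight-through) edges with the gumbel_softmax helper used by the\nGumbel Social Transformer. The logits shape and temperature below are assumed for demonstration.\"\"\"\nimport torch\nfrom gst_updated.src.gumbel_social_transformer.utils import gumbel_softmax\n\nlogits = torch.randn(4, 6, 6)  # e.g. (batch, target node, candidate neighbor) edge scores\nedges_soft = gumbel_softmax(logits, tau=0.5, hard=False)  # relaxed samples, rows sum to 1\nedges_hard = gumbel_softmax(logits, tau=0.5, hard=True)   # one-hot samples with straight-through gradients\nprint(edges_soft.shape, edges_hard.sum(-1))  # both (4, 6, 6); each hard row sums to exactly 1"
  },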
  {
    "path": "gst_updated/src/mgnn/batch_trajectories.py",
    "content": "from torch.utils.data import Dataset\n\nclass BatchTrajectoriesDataset(Dataset):\n    def __init__(\n        self,\n        ):\n        super(BatchTrajectoriesDataset, self).__init__()\n        self.num_seq = 0\n        self.obs_traj = []\n        self.pred_traj = []\n        self.obs_traj_rel = []\n        self.pred_traj_rel = []\n        self.loss_mask_rel = []\n        self.loss_mask = []\n        self.v_obs = []\n        self.A_obs = []\n        self.v_pred = []\n        self.A_pred = []\n        self.attn_mask_obs = []\n        self.attn_mask_pred = []\n\n    def add_batch(\n        self,\n        obs_traj,\n        pred_traj_gt,\n        obs_traj_rel,\n        pred_traj_rel_gt,\n        loss_mask_rel,\n        loss_mask,\n        v_obs, \n        A_obs,\n        v_pred,\n        A_pred,\n        attn_mask_obs,\n        attn_mask_pred,\n    ):\n        self.num_seq += 1\n        self.obs_traj.append(obs_traj.float())\n        self.pred_traj.append(pred_traj_gt.float())\n        self.obs_traj_rel.append(obs_traj_rel.float())\n        self.pred_traj_rel.append(pred_traj_rel_gt.float())\n        self.loss_mask_rel.append(loss_mask_rel.float())\n        self.loss_mask.append(loss_mask.float())\n        self.v_obs.append(v_obs.float())\n        self.A_obs.append(A_obs.float())\n        self.v_pred.append(v_pred.float())\n        self.A_pred.append(A_pred.float())\n        self.attn_mask_obs.append(attn_mask_obs.float())\n        self.attn_mask_pred.append(attn_mask_pred.float())\n\n    def __len__(self):\n        return self.num_seq\n\n\n    def __getitem__(self, index):\n        out = [\n            self.obs_traj[index], self.pred_traj[index],\n            self.obs_traj_rel[index], self.pred_traj_rel[index],\n            self.loss_mask_rel[index], self.loss_mask[index],\n            self.v_obs[index], self.A_obs[index],\n            self.v_pred[index], self.A_pred[index],\n            self.attn_mask_obs[index], self.attn_mask_pred[index],\n        ]\n        return out\n"
  },
  {
    "path": "gst_updated/src/mgnn/trajectories.py",
    "content": "from torch.utils.data import Dataset\nimport os\nimport numpy as np\nimport math\nimport torch\nfrom src.mgnn.utils import seq_to_graph\n\nclass TrajectoriesDataset(Dataset):\n    def __init__(\n        self,\n        data_dir,\n        obs_seq_len=8,\n        pred_seq_len=12,\n        skip=1,\n        delim='\\t',\n        invalid_value=-999.,\n        mode=None,\n        frame_diff=10.,\n        ):\n        super(TrajectoriesDataset, self).__init__()\n        self.data_dir = data_dir\n        self.obs_seq_len = obs_seq_len\n        self.pred_seq_len = pred_seq_len\n        self.skip = skip\n        self.seq_len = self.obs_seq_len + self.pred_seq_len\n        self.delim = delim\n        all_files = os.listdir(self.data_dir)\n        all_files = [os.path.join(self.data_dir, _path) for _path in all_files]\n        num_peds_in_seq = []\n        seq_list = []\n        seq_list_rel = []\n        loss_mask_list = []\n        loss_mask_rel_list = []\n        frame_id_seq = []\n        print(\"Files to be written into the dataset: \")\n        print(all_files)\n        for path in all_files:\n            print(path)\n            data = read_file(path, delim)\n            frames = np.unique(data[:, 0]).tolist()\n            frame_data = []\n            for frame in frames:\n                frame_data.append(data[data[:, 0]==frame, :])\n            num_sequences = math.floor((len(frames)-self.seq_len)/self.skip)+1\n            if mode is None:\n                idx_range = range(0, num_sequences * self.skip + 1, skip)\n            elif mode == 'train':\n                idx_range = range(0, int((num_sequences * self.skip + 1)*0.8), skip)\n            elif mode == 'val':\n                idx_range = range(int((num_sequences * self.skip + 1)*0.8), num_sequences * self.skip + 1, skip)\n            elif mode == 'test':\n                idx_range = range(int((num_sequences * self.skip + 1)*0.8), num_sequences * self.skip + 1, skip)\n            else:\n                raise RuntimeError(\"Wrong mode for TrajectoriesDataset.\")\n            # idx_range = range(0, num_sequences * self.skip + 1, skip)\n            for idx in idx_range:\n                curr_seq_data = np.concatenate(frame_data[idx:idx + self.seq_len], axis=0)\n                start_frame_id = curr_seq_data[0, 0]\n                peds_in_curr_seq = np.unique(curr_seq_data[:, 1])\n                ped_survive_all_time = False\n                for _, ped_id in enumerate(peds_in_curr_seq):\n                    curr_ped_seq = curr_seq_data[curr_seq_data[:, 1] == ped_id, :]\n                    frames_curr_ped_seq = np.unique(curr_ped_seq[:,0])\n                    if len(frames_curr_ped_seq) == self.seq_len and np.all(frames_curr_ped_seq[1:]-frames_curr_ped_seq[:-1]==frame_diff):\n                        ped_survive_all_time = True\n                        break\n                if not ped_survive_all_time:\n                    continue\n                curr_seq = np.ones((len(peds_in_curr_seq), 2, self.seq_len)) * invalid_value\n                curr_seq_rel = np.ones((len(peds_in_curr_seq), 2, self.seq_len)) * invalid_value\n                curr_loss_mask = np.zeros((len(peds_in_curr_seq), self.seq_len))\n                curr_loss_mask_rel = np.zeros((len(peds_in_curr_seq), self.seq_len))\n                for ped_id_curr_seq, ped_id in enumerate(peds_in_curr_seq):\n                    for tt in range(self.seq_len):\n                        frame_id = start_frame_id + tt * frame_diff\n                        frame_ped_id = 
(curr_seq_data[:,0]==frame_id) * (curr_seq_data[:,1]==ped_id)\n                        if len(curr_seq_data[frame_ped_id]) == 0:\n                            curr_loss_mask[ped_id_curr_seq, tt] = 0\n                            curr_loss_mask_rel[ped_id_curr_seq, tt] = 0\n                        elif len(curr_seq_data[frame_ped_id]) == 1:\n                            curr_seq[ped_id_curr_seq,:,tt] = curr_seq_data[frame_ped_id, 2:]\n                            curr_loss_mask[ped_id_curr_seq, tt] = 1\n                            if tt == 0:\n                                curr_seq_rel[ped_id_curr_seq,:,tt] = np.zeros((2,))\n                                curr_loss_mask_rel[ped_id_curr_seq, tt] = 1\n                            else:\n                                if curr_loss_mask[ped_id_curr_seq, tt-1] == 1:\n                                    curr_seq_rel[ped_id_curr_seq,:,tt] = curr_seq[ped_id_curr_seq,:,tt] - curr_seq[ped_id_curr_seq,:,tt-1]\n                                    curr_loss_mask_rel[ped_id_curr_seq, tt] = 1\n                                else:\n                                    curr_loss_mask_rel[ped_id_curr_seq, tt] = 0\n                        else:\n                            raise RuntimeError(\"The same pedestrian has multiple locations in the same frame.\")\n\n                num_peds_in_seq.append(len(peds_in_curr_seq))\n                seq_list.append(curr_seq)\n                seq_list_rel.append(curr_seq_rel)\n                loss_mask_list.append(curr_loss_mask)\n                loss_mask_rel_list.append(curr_loss_mask_rel)\n                frame_id_seq.append(start_frame_id)\n\n        self.num_seq = len(seq_list)\n        seq_list = np.concatenate(seq_list, axis=0)\n        seq_list_rel = np.concatenate(seq_list_rel, axis=0)\n        loss_mask_list = np.concatenate(loss_mask_list, axis=0)\n        loss_mask_rel_list = np.concatenate(loss_mask_rel_list, axis=0)\n        self.obs_traj = torch.from_numpy(\n            seq_list[:, :, :self.obs_seq_len]).type(torch.float)\n        self.pred_traj = torch.from_numpy(\n            seq_list[:, :, self.obs_seq_len:]).type(torch.float)\n        self.obs_traj_rel = torch.from_numpy(\n            seq_list_rel[:, :, :self.obs_seq_len]).type(torch.float)\n        self.pred_traj_rel = torch.from_numpy(\n            seq_list_rel[:, :, self.obs_seq_len:]).type(torch.float)\n        self.loss_mask = torch.from_numpy(loss_mask_list).type(torch.float)\n        self.loss_mask_rel = torch.from_numpy(loss_mask_rel_list).type(torch.float)\n        cum_start_idx = [0] + np.cumsum(num_peds_in_seq).tolist()\n        self.seq_start_end = [(start, end) for start, end in zip(cum_start_idx[:-1], cum_start_idx[1:])]\n        self.frame_id_seq = frame_id_seq\n        self.v_obs = []\n        self.A_obs = []\n        self.attn_mask_obs = []\n        self.v_pred = []\n        self.A_pred = []\n        self.attn_mask_pred = []\n        for ss in range(len(self.seq_start_end)):\n            start, end = self.seq_start_end[ss]\n            v_, a_ = seq_to_graph(\n                self.obs_traj[start:end, :], self.obs_traj_rel[start:end, :], attn_mech='rel_conv')\n            self.v_obs.append(v_.clone())\n            self.A_obs.append(a_.clone())\n            v_, a_ = seq_to_graph(\n                self.pred_traj[start:end, :], self.pred_traj_rel[start:end, :], attn_mech='rel_conv')\n            self.v_pred.append(v_.clone())\n            self.A_pred.append(a_.clone())\n            attn_mask = []\n            for tt in range(self.seq_len):\n  
              loss_mask_rel_tt = self.loss_mask_rel[start:end, tt]\n                attn_mask.append(torch.outer(loss_mask_rel_tt, loss_mask_rel_tt).float())\n            attn_mask = torch.stack(attn_mask, dim=0)\n            self.attn_mask_obs.append(attn_mask[:self.obs_seq_len])\n            self.attn_mask_pred.append(attn_mask[self.obs_seq_len:])\n\n\n    def __len__(self):\n        return self.num_seq\n\n\n    def __getitem__(self, index):\n        start, end = self.seq_start_end[index]\n        out = [\n            self.obs_traj[start:end, :], self.pred_traj[start:end, :],\n            self.obs_traj_rel[start:end, :], self.pred_traj_rel[start:end, :],\n            self.loss_mask_rel[start:end, :], self.loss_mask[start:end, :],\n            self.v_obs[index], self.A_obs[index],\n            self.v_pred[index], self.A_pred[index],\n            self.attn_mask_obs[index], self.attn_mask_pred[index],\n        ]\n        return out\n    \n    \n\ndef read_file(_path, delim='\\t'):\n    data = []\n    if delim == 'tab':\n        delim = '\\t'\n    elif delim == 'space':\n        delim = ' '\n    with open(_path, 'r') as f:\n        for line in f:\n            line = line.strip().split(delim)\n            line = [float(i) for i in line]\n            data.append(line)\n    return np.asarray(data)\n"
  },
  {
    "path": "gst_updated/src/mgnn/trajectories_sdd.py",
    "content": "from torch.utils.data import Dataset\nimport os\nimport numpy as np\nimport math\nimport torch\nfrom src.mgnn.utils import seq_to_graph\n\nclass TrajectoriesDataset(Dataset):\n    def __init__(\n        self,\n        data_dir,\n        obs_seq_len=8,\n        pred_seq_len=12,\n        skip=1,\n        delim='\\t',\n        invalid_value=-999.,\n        mode=None,\n        frame_diff=10.,\n        ):\n        super(TrajectoriesDataset, self).__init__()\n        self.data_dir = data_dir\n        self.obs_seq_len = obs_seq_len\n        self.pred_seq_len = pred_seq_len\n        self.skip = skip\n        self.seq_len = self.obs_seq_len + self.pred_seq_len\n        self.delim = delim\n        self.frame_diff = frame_diff\n        all_files = os.listdir(self.data_dir)\n        all_files = [os.path.join(self.data_dir, _path) for _path in all_files]\n        num_peds_in_seq = []\n        seq_list = []\n        seq_list_rel = []\n        loss_mask_list = []\n        loss_mask_rel_list = []\n        frame_id_seq = []\n        print(\"Files to be written into the dataset: \")\n        print(all_files)\n        for path in all_files:\n            print(path)\n            print(\"current sequence list: \", len(seq_list))\n            data = read_sdd_file(path)\n            frames = np.unique(data[:, 0]).tolist()\n            frame_data = []\n            for frame in frames:\n                frame_data.append(data[data[:, 0]==frame, :])\n            num_sequences = math.floor((len(frames)-self.seq_len)/self.skip)+1\n            if mode is None:\n                idx_range = range(0, num_sequences * self.skip + 1, skip)\n            elif mode == 'train':\n                idx_range = range(0, int((num_sequences * self.skip + 1)*0.8), skip)\n            elif mode == 'val':\n                idx_range = range(int((num_sequences * self.skip + 1)*0.8), int((num_sequences * self.skip + 1)*0.9), skip)\n            elif mode == 'test':\n                idx_range = range(int((num_sequences * self.skip + 1)*0.9), num_sequences * self.skip + 1, skip)\n            else:\n                raise RuntimeError(\"Wrong mode for TrajectoriesDataset.\")\n            for idx in idx_range:\n                curr_seq_data = np.concatenate(frame_data[idx:idx + self.seq_len], axis=0)\n                start_frame_id = curr_seq_data[0, 0]\n                peds_in_curr_seq = np.unique(curr_seq_data[:, 1])\n                ped_survive_all_time = False\n                for _, ped_id in enumerate(peds_in_curr_seq):\n                    curr_ped_seq = curr_seq_data[curr_seq_data[:, 1] == ped_id, :]\n                    frames_curr_ped_seq = np.unique(curr_ped_seq[:,0])\n                    if len(frames_curr_ped_seq) == self.seq_len and np.all(frames_curr_ped_seq[1:]-frames_curr_ped_seq[:-1]==self.frame_diff):\n                        ped_survive_all_time = True\n                        break\n                if not ped_survive_all_time:\n                    continue\n                curr_seq = np.ones((len(peds_in_curr_seq), 2, self.seq_len)) * invalid_value\n                curr_seq_rel = np.ones((len(peds_in_curr_seq), 2, self.seq_len)) * invalid_value\n                curr_loss_mask = np.zeros((len(peds_in_curr_seq), self.seq_len))\n                curr_loss_mask_rel = np.zeros((len(peds_in_curr_seq), self.seq_len))\n                for ped_id_curr_seq, ped_id in enumerate(peds_in_curr_seq):\n                    for tt in range(self.seq_len):\n                        frame_id = start_frame_id + tt * self.frame_diff\n   
                     frame_ped_id = (curr_seq_data[:,0]==frame_id) * (curr_seq_data[:,1]==ped_id)\n                        if len(curr_seq_data[frame_ped_id]) == 0:\n                            curr_loss_mask[ped_id_curr_seq, tt] = 0\n                            curr_loss_mask_rel[ped_id_curr_seq, tt] = 0\n                        elif len(curr_seq_data[frame_ped_id]) == 1:\n                            curr_seq[ped_id_curr_seq,:,tt] = curr_seq_data[frame_ped_id, 2:]\n                            curr_loss_mask[ped_id_curr_seq, tt] = 1\n                            if tt == 0:\n                                curr_seq_rel[ped_id_curr_seq,:,tt] = np.zeros((2,))\n                                curr_loss_mask_rel[ped_id_curr_seq, tt] = 1\n                            else:\n                                if curr_loss_mask[ped_id_curr_seq, tt-1] == 1:\n                                    curr_seq_rel[ped_id_curr_seq,:,tt] = curr_seq[ped_id_curr_seq,:,tt] - curr_seq[ped_id_curr_seq,:,tt-1]\n                                    curr_loss_mask_rel[ped_id_curr_seq, tt] = 1\n                                else:\n                                    curr_loss_mask_rel[ped_id_curr_seq, tt] = 0\n                        else:\n                            raise RuntimeError(\"The same pedestrian has multiple locations in the same frame.\")\n\n                num_peds_in_seq.append(len(peds_in_curr_seq))\n                seq_list.append(curr_seq)\n                seq_list_rel.append(curr_seq_rel)\n                loss_mask_list.append(curr_loss_mask)\n                loss_mask_rel_list.append(curr_loss_mask_rel)\n                frame_id_seq.append(start_frame_id)\n\n        self.num_seq = len(seq_list)\n        seq_list = np.concatenate(seq_list, axis=0)\n        seq_list_rel = np.concatenate(seq_list_rel, axis=0)\n        loss_mask_list = np.concatenate(loss_mask_list, axis=0)\n        loss_mask_rel_list = np.concatenate(loss_mask_rel_list, axis=0)\n        self.obs_traj = torch.from_numpy(\n            seq_list[:, :, :self.obs_seq_len]).type(torch.float)\n        self.pred_traj = torch.from_numpy(\n            seq_list[:, :, self.obs_seq_len:]).type(torch.float)\n        self.obs_traj_rel = torch.from_numpy(\n            seq_list_rel[:, :, :self.obs_seq_len]).type(torch.float)\n        self.pred_traj_rel = torch.from_numpy(\n            seq_list_rel[:, :, self.obs_seq_len:]).type(torch.float)\n        self.loss_mask = torch.from_numpy(loss_mask_list).type(torch.float)\n        self.loss_mask_rel = torch.from_numpy(loss_mask_rel_list).type(torch.float)\n        cum_start_idx = [0] + np.cumsum(num_peds_in_seq).tolist()\n        self.seq_start_end = [(start, end) for start, end in zip(cum_start_idx[:-1], cum_start_idx[1:])]\n        self.frame_id_seq = frame_id_seq\n        self.v_obs = []\n        self.A_obs = []\n        self.attn_mask_obs = []\n        self.v_pred = []\n        self.A_pred = []\n        self.attn_mask_pred = []\n        for ss in range(len(self.seq_start_end)):\n            start, end = self.seq_start_end[ss]\n            v_, a_ = seq_to_graph(\n                self.obs_traj[start:end, :], self.obs_traj_rel[start:end, :], attn_mech='rel_conv')\n            self.v_obs.append(v_.clone())\n            self.A_obs.append(a_.clone())\n            v_, a_ = seq_to_graph(\n                self.pred_traj[start:end, :], self.pred_traj_rel[start:end, :], attn_mech='rel_conv')\n            self.v_pred.append(v_.clone())\n            self.A_pred.append(a_.clone())\n            attn_mask = []\n          
  for tt in range(self.seq_len):\n                loss_mask_rel_tt = self.loss_mask_rel[start:end, tt]\n                attn_mask.append(torch.outer(loss_mask_rel_tt, loss_mask_rel_tt).float())\n            attn_mask = torch.stack(attn_mask, dim=0)\n            self.attn_mask_obs.append(attn_mask[:self.obs_seq_len])\n            self.attn_mask_pred.append(attn_mask[self.obs_seq_len:])\n\n\n    def __len__(self):\n        return self.num_seq\n\n\n    def __getitem__(self, index):\n        start, end = self.seq_start_end[index]\n        out = [\n            self.obs_traj[start:end, :], self.pred_traj[start:end, :],\n            self.obs_traj_rel[start:end, :], self.pred_traj_rel[start:end, :],\n            self.loss_mask_rel[start:end, :], self.loss_mask[start:end, :],\n            self.v_obs[index], self.A_obs[index],\n            self.v_pred[index], self.A_pred[index],\n            self.attn_mask_obs[index], self.attn_mask_pred[index],\n        ]\n        return out\n    \n    \n\ndef read_file(_path, delim='\\t'):\n    data = []\n    if delim == 'tab':\n        delim = '\\t'\n    elif delim == 'space':\n        delim = ' '\n    with open(_path, 'r') as f:\n        for line in f:\n            line = line.strip().split(delim)\n            line = [float(i) for i in line]\n            data.append(line)\n    return np.asarray(data)\n\ndef read_sdd_file(_path):\n    data = []\n    with open(_path, 'r') as f:\n        for line in f:\n            line = line.rstrip()\n            line = line.split()\n            line[-1] = line[-1].strip('\"')\n            if line[-1] == \"Car\":\n                continue\n            line = [float(i) for i in line[:-1]]\n            agent_id,xmin,ymin,xmax,ymax,frame,lost,occluded,generated = line\n            if lost == 1 or frame % 10 != 0:\n                # lost agent, and have to be selected every 10 frames. (30fps)\n                continue\n            f = frame\n            p = agent_id\n            x = (xmin+xmax)/2.\n            y = (ymin+ymax)/2.\n            line = [f,p,x,y]\n            data.append(line)\n    return np.asarray(data)\n"
  },
  {
    "path": "gst_updated/src/mgnn/trajectories_trajnet.py",
    "content": "from torch.utils.data import Dataset\nimport os\nimport numpy as np\nimport math\nimport torch\nfrom src.mgnn.utils import seq_to_graph\nimport ndjson\n\nclass TrajectoriesDataset(Dataset):\n    def __init__(\n        self,\n        data_dir,\n        obs_seq_len=8,\n        pred_seq_len=12,\n        skip=1,\n        delim='\\t',\n        invalid_value=-999.,\n        mode=None,\n        ):\n        super(TrajectoriesDataset, self).__init__()\n        self.data_dir = data_dir\n        self.obs_seq_len = obs_seq_len\n        self.pred_seq_len = pred_seq_len\n        self.skip = skip\n        self.seq_len = self.obs_seq_len + self.pred_seq_len\n        self.delim = delim\n        all_files = os.listdir(self.data_dir)\n        all_files = [os.path.join(self.data_dir, _path) for _path in all_files]\n        num_peds_in_seq = []\n        seq_list = []\n        seq_list_rel = []\n        loss_mask_list = []\n        loss_mask_rel_list = []\n        frame_id_seq = []\n        print(\"Files to be written into the dataset: \")\n        print(all_files)\n        for path in all_files:\n            print(\"current sequence length: \", len(seq_list))\n            print(path)\n            data, frame_diff, start_frames_considered, filename_str = read_trajnet_file(path)\n            if len(data) == 0:\n                continue\n            # data = read_file(path, delim)\n            frames_np = np.unique(data[:, 0])\n            frames = frames_np.tolist()\n            frame_data = []\n            for frame in frames:\n                frame_data.append(data[data[:, 0]==frame, :])\n            if filename_str[:3]==\"cff\": # too big\n                skip = 100\n            else:\n                skip = self.skip\n            print(\"skip: \", skip)\n            print(\"total length: \", len(start_frames_considered[::skip]))\n            if mode is None:\n                start_frames_np = start_frames_considered[::skip]\n            elif mode == 'train':\n                start_frames_np = start_frames_considered[:int(0.8*len(start_frames_considered)):skip]\n                # idx_range = range(0, int((num_sequences * self.skip + 1)*0.8), skip)\n            elif mode == 'val':\n                start_frames_np = start_frames_considered[int(0.8*len(start_frames_considered)):int(0.9*len(start_frames_considered)):skip]\n                # idx_range = range(int((num_sequences * self.skip + 1)*0.8), num_sequences * self.skip + 1, skip)\n            elif mode == 'test':\n                start_frames_np = start_frames_considered[int(0.9*len(start_frames_considered))::skip]\n                # idx_range = range(int((num_sequences * self.skip + 1)*0.8), num_sequences * self.skip + 1, skip)\n            else:\n                raise RuntimeError(\"Wrong mode for TrajectoriesDataset.\")\n\n            # num_sequences = math.floor((len(frames)-self.seq_len)/self.skip)+1\n            # if mode is None:\n            #     idx_range = range(0, num_sequences * self.skip + 1, skip)\n            # elif mode == 'train':\n            #     idx_range = range(0, int((num_sequences * self.skip + 1)*0.8), skip)\n            # elif mode == 'val':\n            #     idx_range = range(int((num_sequences * self.skip + 1)*0.8), num_sequences * self.skip + 1, skip)\n            # elif mode == 'test':\n            #     idx_range = range(int((num_sequences * self.skip + 1)*0.8), num_sequences * self.skip + 1, skip)\n            # else:\n            #     raise RuntimeError(\"Wrong mode for TrajectoriesDataset.\")\n           
 # idx_range = range(0, num_sequences * self.skip + 1, skip)\n            # print(len(idx_range))\n            # for idx in idx_range:\n            for start_frame in start_frames_np:\n                idx, = np.where(frames_np==start_frame)\n                idx = idx[0]\n                curr_seq_data = np.concatenate(frame_data[idx:idx + self.seq_len], axis=0)\n                start_frame_id = curr_seq_data[0, 0]\n                peds_in_curr_seq = np.unique(curr_seq_data[:, 1])\n                ped_survive_all_time = False\n                for _, ped_id in enumerate(peds_in_curr_seq):\n                    curr_ped_seq = curr_seq_data[curr_seq_data[:, 1] == ped_id, :]\n                    frames_curr_ped_seq = np.unique(curr_ped_seq[:,0])\n                    if len(frames_curr_ped_seq) == self.seq_len and \\\n                        np.all(frames_curr_ped_seq[1:]-frames_curr_ped_seq[:-1]==frame_diff):\n                            ped_survive_all_time = True\n                            break\n                # import pdb; pdb.set_trace()\n                if not ped_survive_all_time:\n                    continue\n                curr_seq = np.ones((len(peds_in_curr_seq), 2, self.seq_len)) * invalid_value\n                curr_seq_rel = np.ones((len(peds_in_curr_seq), 2, self.seq_len)) * invalid_value\n                curr_loss_mask = np.zeros((len(peds_in_curr_seq), self.seq_len))\n                curr_loss_mask_rel = np.zeros((len(peds_in_curr_seq), self.seq_len))\n                for ped_id_curr_seq, ped_id in enumerate(peds_in_curr_seq):\n                    for tt in range(self.seq_len):\n                        frame_id = start_frame_id + tt * frame_diff\n                        frame_ped_id = (curr_seq_data[:,0]==frame_id) * (curr_seq_data[:,1]==ped_id)\n                        if len(curr_seq_data[frame_ped_id]) == 0:\n                            curr_loss_mask[ped_id_curr_seq, tt] = 0\n                            curr_loss_mask_rel[ped_id_curr_seq, tt] = 0\n                        elif len(curr_seq_data[frame_ped_id]) == 1:\n                            curr_seq[ped_id_curr_seq,:,tt] = curr_seq_data[frame_ped_id, 2:]\n                            curr_loss_mask[ped_id_curr_seq, tt] = 1\n                            if tt == 0:\n                                curr_seq_rel[ped_id_curr_seq,:,tt] = np.zeros((2,))\n                                curr_loss_mask_rel[ped_id_curr_seq, tt] = 1\n                            else:\n                                if curr_loss_mask[ped_id_curr_seq, tt-1] == 1:\n                                    curr_seq_rel[ped_id_curr_seq,:,tt] = curr_seq[ped_id_curr_seq,:,tt] - curr_seq[ped_id_curr_seq,:,tt-1]\n                                    curr_loss_mask_rel[ped_id_curr_seq, tt] = 1\n                                else:\n                                    curr_loss_mask_rel[ped_id_curr_seq, tt] = 0\n                        else:\n                            raise RuntimeError(\"The same pedestrian has multiple locations in the same frame.\")\n\n                num_peds_in_seq.append(len(peds_in_curr_seq))\n                seq_list.append(curr_seq)\n                seq_list_rel.append(curr_seq_rel)\n                loss_mask_list.append(curr_loss_mask)\n                loss_mask_rel_list.append(curr_loss_mask_rel)\n                frame_id_seq.append(start_frame_id)\n\n        self.num_seq = len(seq_list)\n        print(\"total sequence length: \", len(seq_list))\n        seq_list = np.concatenate(seq_list, axis=0)\n        seq_list_rel = 
np.concatenate(seq_list_rel, axis=0)\n        loss_mask_list = np.concatenate(loss_mask_list, axis=0)\n        loss_mask_rel_list = np.concatenate(loss_mask_rel_list, axis=0)\n        self.obs_traj = torch.from_numpy(\n            seq_list[:, :, :self.obs_seq_len]).type(torch.float)\n        self.pred_traj = torch.from_numpy(\n            seq_list[:, :, self.obs_seq_len:]).type(torch.float)\n        self.obs_traj_rel = torch.from_numpy(\n            seq_list_rel[:, :, :self.obs_seq_len]).type(torch.float)\n        self.pred_traj_rel = torch.from_numpy(\n            seq_list_rel[:, :, self.obs_seq_len:]).type(torch.float)\n        self.loss_mask = torch.from_numpy(loss_mask_list).type(torch.float)\n        self.loss_mask_rel = torch.from_numpy(loss_mask_rel_list).type(torch.float)\n        cum_start_idx = [0] + np.cumsum(num_peds_in_seq).tolist()\n        self.seq_start_end = [(start, end) for start, end in zip(cum_start_idx[:-1], cum_start_idx[1:])]\n        self.frame_id_seq = frame_id_seq\n        self.v_obs = []\n        self.A_obs = []\n        self.attn_mask_obs = []\n        self.v_pred = []\n        self.A_pred = []\n        self.attn_mask_pred = []\n        for ss in range(len(self.seq_start_end)):\n            start, end = self.seq_start_end[ss]\n            v_, a_ = seq_to_graph(\n                self.obs_traj[start:end, :], self.obs_traj_rel[start:end, :], attn_mech='rel_conv')\n            self.v_obs.append(v_.clone())\n            self.A_obs.append(a_.clone())\n            v_, a_ = seq_to_graph(\n                self.pred_traj[start:end, :], self.pred_traj_rel[start:end, :], attn_mech='rel_conv')\n            self.v_pred.append(v_.clone())\n            self.A_pred.append(a_.clone())\n            attn_mask = []\n            for tt in range(self.seq_len):\n                loss_mask_rel_tt = self.loss_mask_rel[start:end, tt]\n                attn_mask.append(torch.outer(loss_mask_rel_tt, loss_mask_rel_tt).float())\n            attn_mask = torch.stack(attn_mask, dim=0)\n            self.attn_mask_obs.append(attn_mask[:self.obs_seq_len])\n            self.attn_mask_pred.append(attn_mask[self.obs_seq_len:])\n\n\n    def __len__(self):\n        return self.num_seq\n\n\n    def __getitem__(self, index):\n        start, end = self.seq_start_end[index]\n        out = [\n            self.obs_traj[start:end, :], self.pred_traj[start:end, :],\n            self.obs_traj_rel[start:end, :], self.pred_traj_rel[start:end, :],\n            self.loss_mask_rel[start:end, :], self.loss_mask[start:end, :],\n            self.v_obs[index], self.A_obs[index],\n            self.v_pred[index], self.A_pred[index],\n            self.attn_mask_obs[index], self.attn_mask_pred[index],\n        ]\n        return out\n    \n    \n\ndef read_file(_path, delim='\\t'):\n    data = []\n    if delim == 'tab':\n        delim = '\\t'\n    elif delim == 'space':\n        delim = ' '\n    with open(_path, 'r') as f:\n        for line in f:\n            line = line.strip().split(delim)\n            line = [float(i) for i in line]\n            data.append(line)\n    return np.asarray(data)\n\n\ndef read_trajnet_file(_path):\n    _, filename = os.path.split(_path)\n    print(filename)\n    filename_str, filename_ext = filename.split('.')\n    if filename_ext != \"ndjson\":\n        print(\"Not ndjson file for trajnet++.\")\n        data = []\n        frame_diff = 0.\n        start_frames_considered = []\n        return data, frame_diff, start_frames_considered, filename_str\n        # raise RuntimeError(\"Not ndjson file 
for trajnet++.\")\n    lines = []\n    frame_diff = 0.\n    start_frames_considered = []\n    with open(_path) as f:\n        reader = ndjson.reader(f)\n        for post in reader:\n            # print(type(post))\n            # print(post.keys())\n            # print(post['scene'])\n            if \"scene\" in post.keys():# and len(start_frames_considered) < 20:\n                start_frame = post[\"scene\"][\"s\"]\n                if frame_diff==0.:\n                    frame_diff = (post[\"scene\"][\"e\"]-post[\"scene\"][\"s\"])/20\n                start_frames_considered.append(start_frame)\n\n                # np.unique(data[:, 0]).tolist()\n                # {\"scene\": {\"id\": 0, \"p\": 24, \"s\": 500, \"e\": 700, \"fps\": 2.5, \"tag\": [3, [2]]}}\n                # {\"scene\": {\"id\": 1, \"p\": 24, \"s\": 520, \"e\": 720, \"fps\": 2.5, \"tag\": [4, []]}}\n                # {\"scene\": {\"id\": 2, \"p\": 24, \"s\": 540, \"e\": 740, \"fps\": 2.5, \"tag\": [4, []]}}\n                # {\"scene\": {\"id\": 3, \"p\": 24, \"s\": 560, \"e\": 760, \"fps\": 2.5, \"tag\": [4, []]}}\n            if \"track\" in post.keys():\n                f = post[\"track\"][\"f\"]\n                p = post[\"track\"][\"p\"]\n                x = post[\"track\"][\"x\"]\n                y = post[\"track\"][\"y\"]\n                line = [f,p,x,y] # str(f)+'\\t'+str(p)+'\\t'+str(x)+'\\t'+str(y)+'\\n'\n                lines.append(line)\n                # if len(lines)==3000:\n                #     break\n    data = np.asarray(lines)\n    start_frames_considered = np.unique(start_frames_considered)\n    return data, frame_diff, start_frames_considered, filename_str\n\n\n\n\n\n#     import pathhack\n# import ndjson\n# import os\n\n# raw_folder = 'real_data'\n# output_folder = raw_folder+'_txt'\n# data_dir = os.path.join(pathhack.pkg_path, \"datasets/trajnet++/train\", raw_folder)\n# output_dir = os.path.join(pathhack.pkg_path, \"datasets/trajnet++/train\", output_folder)\n# all_files = os.listdir(data_dir)\n# all_files = [os.path.join(data_dir, _path) for _path in all_files]\n\n# for path in all_files:\n#     print(path)\n#     _, filename = os.path.split(path)\n#     filename_str, filename_ext = filename.split('.')\n#     frame = None\n#     if filename_ext == \"ndjson\":\n#         lines = []\n#         with open(path) as f:\n#             reader = ndjson.reader(f)\n#             for post in reader:\n#                 # print(type(post))\n#                 # print(post.keys())\n#                 # print(post['scene'])\n#                 if \"track\" in post.keys():\n#                     f = post[\"track\"][\"f\"]\n#                     if frame is None:\n#                         frame = f\n#                     else:\n#                         print(f-frame)\n#                         break\n#                     p = post[\"track\"][\"p\"]\n#                     x = post[\"track\"][\"x\"]\n#                     y = post[\"track\"][\"y\"]\n#                     line = str(f)+'\\t'+str(p)+'\\t'+str(x)+'\\t'+str(y)+'\\n'\n#                     lines.append(line)\n"
  },
  {
    "path": "gst_updated/src/mgnn/trajectories_trajnet_testset.py",
    "content": "from torch.utils.data import Dataset\nimport os\nimport numpy as np\nimport math\nimport torch\nfrom src.mgnn.utils import seq_to_graph\nimport ndjson\n\nclass TrajectoriesDataset(Dataset):\n    def __init__(\n        self,\n        data_filepath,\n        obs_seq_len=8,\n        pred_seq_len=12,\n        skip=1,\n        delim='\\t',\n        invalid_value=-999.,\n        mode=None,\n        ):\n        super(TrajectoriesDataset, self).__init__()\n        self.data_filepath = data_filepath\n        self.obs_seq_len = obs_seq_len\n        self.pred_seq_len = pred_seq_len\n        self.skip = skip\n        self.seq_len = self.obs_seq_len + self.pred_seq_len\n        self.delim = delim\n        self.frame_diff = 0.\n        # ! all_files = os.listdir(self.data_dir)\n        # ! all_files = [os.path.join(self.data_dir, _path) for _path in all_files]\n        # ! Need to make it only one file!!!\n        all_files = [self.data_filepath]\n        num_peds_in_seq = []\n        seq_list = []\n        seq_list_rel = []\n        loss_mask_list = []\n        loss_mask_rel_list = []\n        frame_id_seq = []\n        ped_id_list = []\n        print(\"Files to be written into the dataset: \")\n        print(all_files)\n        for path in all_files:\n            print(\"current sequence length: \", len(seq_list))\n            print(path)\n            data, frame_diff, start_frames_considered, filename_str = read_trajnet_file(path)\n            self.frame_diff = frame_diff\n            if len(data) == 0:\n                continue\n            # data = read_file(path, delim)\n            frames_np = np.unique(data[:, 0])\n            frames = frames_np.tolist()\n            frame_data = []\n            for frame in frames:\n                frame_data.append(data[data[:, 0]==frame, :])\n            if filename_str[:3]==\"cff\": # too big\n                skip = 100\n            else:\n                skip = self.skip\n            print(\"skip: \", skip)\n            if mode == 'test':\n                start_frames_np = start_frames_considered\n            else:\n                raise RuntimeError(\"Wrong mode for TrajectoriesDataset.\")\n            for start_frame in start_frames_np:\n                idx, = np.where(frames_np==start_frame)\n                idx = idx[0]\n                curr_seq_data = np.concatenate(frame_data[idx:idx + self.obs_seq_len+1], axis=0) # only observation is available in testset\n                # ! REALLY IMPORTANT: they have 21 frames. 9 obs, 12 pred. i.e. 
still 8 offset in obs, 12 offset in prediction\n                \n                start_frame_id = curr_seq_data[0, 0]\n                peds_in_curr_seq = np.unique(curr_seq_data[:, 1])\n                ped_survive_all_time = False\n                for _, ped_id in enumerate(peds_in_curr_seq):\n                    curr_ped_seq = curr_seq_data[curr_seq_data[:, 1] == ped_id, :]\n                    frames_curr_ped_seq = np.unique(curr_ped_seq[:,0])\n                    # if len(frames_curr_ped_seq) == self.seq_len and \\\n                    if len(frames_curr_ped_seq) == self.obs_seq_len+1 and \\\n                        np.all(frames_curr_ped_seq[1:]-frames_curr_ped_seq[:-1]==frame_diff):\n                            ped_survive_all_time = True\n                            break\n                \n                if not ped_survive_all_time:\n                    print(\"Never\")\n                    continue\n                curr_seq = np.ones((len(peds_in_curr_seq), 2, self.obs_seq_len)) * invalid_value\n                curr_seq_rel = np.ones((len(peds_in_curr_seq), 2, self.obs_seq_len)) * invalid_value\n                curr_loss_mask = np.zeros((len(peds_in_curr_seq), self.obs_seq_len))\n                curr_loss_mask_rel = np.zeros((len(peds_in_curr_seq), self.obs_seq_len))\n                \n                ped_id_dict = {}\n                \n                for ped_id_curr_seq, ped_id in enumerate(peds_in_curr_seq):\n                    ped_id_dict[ped_id_curr_seq] = ped_id\n                    \n                    for tt in range(self.obs_seq_len):\n                        frame_id = start_frame_id + (tt+1) * frame_diff\n                        frame_ped_id = (curr_seq_data[:,0]==frame_id) * (curr_seq_data[:,1]==ped_id)\n                        if len(curr_seq_data[frame_ped_id]) == 0:\n                            curr_loss_mask[ped_id_curr_seq, tt] = 0\n                            curr_loss_mask_rel[ped_id_curr_seq, tt] = 0\n                        elif len(curr_seq_data[frame_ped_id]) == 1:\n                            curr_seq[ped_id_curr_seq,:,tt] = curr_seq_data[frame_ped_id, 2:]\n                            curr_loss_mask[ped_id_curr_seq, tt] = 1\n                            if tt == 0:\n                                start_frame_ped_id = (curr_seq_data[:,0]==start_frame_id) * (curr_seq_data[:,1]==ped_id)\n                                if len(curr_seq_data[start_frame_ped_id]) == 1:\n                                    curr_seq_rel[ped_id_curr_seq,:,0] = \\\n                                        curr_seq_data[frame_ped_id, 2:] - curr_seq_data[start_frame_ped_id, 2:]\n                                    curr_loss_mask_rel[ped_id_curr_seq, 0] = 1\n                                else:\n                                    curr_loss_mask_rel[ped_id_curr_seq, 0] = 0\n                            else:\n                                if curr_loss_mask[ped_id_curr_seq, tt-1] == 1:\n                                    curr_seq_rel[ped_id_curr_seq,:,tt] = curr_seq[ped_id_curr_seq,:,tt] - curr_seq[ped_id_curr_seq,:,tt-1]\n                                    curr_loss_mask_rel[ped_id_curr_seq, tt] = 1\n                                else:\n                                    curr_loss_mask_rel[ped_id_curr_seq, tt] = 0\n                        else:\n                            raise RuntimeError(\"The same pedestrian has multiple locations in the same frame.\")\n                \n\n                num_peds_in_seq.append(len(peds_in_curr_seq))\n                
seq_list.append(curr_seq)\n                seq_list_rel.append(curr_seq_rel)\n                loss_mask_list.append(curr_loss_mask)\n                loss_mask_rel_list.append(curr_loss_mask_rel)\n                frame_id_seq.append(start_frame_id)\n                ped_id_list.append(ped_id_dict)\n\n        self.num_seq = len(seq_list)\n        print(\"total sequence length: \", len(seq_list))\n        seq_list = np.concatenate(seq_list, axis=0)\n        seq_list_rel = np.concatenate(seq_list_rel, axis=0)\n        loss_mask_list = np.concatenate(loss_mask_list, axis=0)\n        loss_mask_rel_list = np.concatenate(loss_mask_rel_list, axis=0)\n        self.obs_traj = torch.from_numpy(\n            seq_list[:, :, :self.obs_seq_len]).type(torch.float)\n        # self.pred_traj = torch.from_numpy(\n        #     seq_list[:, :, self.obs_seq_len:]).type(torch.float)\n        self.obs_traj_rel = torch.from_numpy(\n            seq_list_rel[:, :, :self.obs_seq_len]).type(torch.float)\n        # self.pred_traj_rel = torch.from_numpy(\n        #     seq_list_rel[:, :, self.obs_seq_len:]).type(torch.float)\n        self.loss_mask = torch.from_numpy(loss_mask_list).type(torch.float)\n        self.loss_mask_rel = torch.from_numpy(loss_mask_rel_list).type(torch.float)\n        cum_start_idx = [0] + np.cumsum(num_peds_in_seq).tolist()\n        self.seq_start_end = [(start, end) for start, end in zip(cum_start_idx[:-1], cum_start_idx[1:])]\n        self.frame_id_seq = frame_id_seq\n        self.ped_id_list = ped_id_list\n        self.v_obs = []\n        self.A_obs = []\n        self.attn_mask_obs = []\n        # self.v_pred = []\n        # self.A_pred = []\n        # self.attn_mask_pred = []\n        for ss in range(len(self.seq_start_end)):\n            start, end = self.seq_start_end[ss]\n            v_, a_ = seq_to_graph(\n                self.obs_traj[start:end, :], self.obs_traj_rel[start:end, :], attn_mech='rel_conv')\n            self.v_obs.append(v_.clone())\n            self.A_obs.append(a_.clone())\n            # v_, a_ = seq_to_graph(\n            #     self.pred_traj[start:end, :], self.pred_traj_rel[start:end, :], attn_mech='rel_conv')\n            # self.v_pred.append(v_.clone())\n            # self.A_pred.append(a_.clone())\n            attn_mask = []\n            # for tt in range(self.seq_len):\n            for tt in range(self.obs_seq_len):\n                loss_mask_rel_tt = self.loss_mask_rel[start:end, tt]\n                attn_mask.append(torch.outer(loss_mask_rel_tt, loss_mask_rel_tt).float())\n            attn_mask = torch.stack(attn_mask, dim=0)\n            self.attn_mask_obs.append(attn_mask[:self.obs_seq_len])\n            # self.attn_mask_pred.append(attn_mask[self.obs_seq_len:])\n\n\n    def __len__(self):\n        return self.num_seq\n\n\n    def __getitem__(self, index):\n        start, end = self.seq_start_end[index]\n        # out = [\n        #     self.obs_traj[start:end, :], self.pred_traj[start:end, :],\n        #     self.obs_traj_rel[start:end, :], self.pred_traj_rel[start:end, :],\n        #     self.loss_mask_rel[start:end, :], self.loss_mask[start:end, :],\n        #     self.v_obs[index], self.A_obs[index],\n        #     self.v_pred[index], self.A_pred[index],\n        #     self.attn_mask_obs[index], self.attn_mask_pred[index],\n        #     self.frame_id_seq[index], self.ped_id_list[index]\n        # ]\n        out = [\n            self.obs_traj[start:end, :], [],\n            self.obs_traj_rel[start:end, :], [],\n            self.loss_mask_rel[start:end, :], 
self.loss_mask[start:end, :],\n            self.v_obs[index], self.A_obs[index],\n            [], [],\n            self.attn_mask_obs[index], [],\n            self.frame_id_seq[index], self.ped_id_list[index]\n        ]\n        return out\n    \n    \n\ndef read_file(_path, delim='\\t'):\n    data = []\n    if delim == 'tab':\n        delim = '\\t'\n    elif delim == 'space':\n        delim = ' '\n    with open(_path, 'r') as f:\n        for line in f:\n            line = line.strip().split(delim)\n            line = [float(i) for i in line]\n            data.append(line)\n    return np.asarray(data)\n\n\ndef read_trajnet_file(_path):\n    _, filename = os.path.split(_path)\n    print(filename)\n    filename_str, filename_ext = filename.split('.')\n    if filename_ext != \"ndjson\":\n        print(\"Not ndjson file for trajnet++.\")\n        data = []\n        frame_diff = 0.\n        start_frames_considered = []\n        return data, frame_diff, start_frames_considered, filename_str\n        # raise RuntimeError(\"Not ndjson file for trajnet++.\")\n    lines = []\n    frame_diff = 0.\n    start_frames_considered = []\n    with open(_path) as f:\n        reader = ndjson.reader(f)\n        for post in reader:\n            # print(type(post))\n            # print(post.keys())\n            # print(post['scene'])\n            if \"scene\" in post.keys():# and len(start_frames_considered) < 10:\n                start_frame = post[\"scene\"][\"s\"]\n                if frame_diff==0.:\n                    frame_diff = (post[\"scene\"][\"e\"]-post[\"scene\"][\"s\"])/20\n                start_frames_considered.append(start_frame)\n                # np.unique(data[:, 0]).tolist()\n                # {\"scene\": {\"id\": 0, \"p\": 24, \"s\": 500, \"e\": 700, \"fps\": 2.5, \"tag\": [3, [2]]}}\n                # {\"scene\": {\"id\": 1, \"p\": 24, \"s\": 520, \"e\": 720, \"fps\": 2.5, \"tag\": [4, []]}}\n                # {\"scene\": {\"id\": 2, \"p\": 24, \"s\": 540, \"e\": 740, \"fps\": 2.5, \"tag\": [4, []]}}\n                # {\"scene\": {\"id\": 3, \"p\": 24, \"s\": 560, \"e\": 760, \"fps\": 2.5, \"tag\": [4, []]}}\n            if \"track\" in post.keys():\n                f = post[\"track\"][\"f\"]\n                p = post[\"track\"][\"p\"]\n                x = post[\"track\"][\"x\"]\n                y = post[\"track\"][\"y\"]\n                line = [f,p,x,y] # str(f)+'\\t'+str(p)+'\\t'+str(x)+'\\t'+str(y)+'\\n'\n                lines.append(line)\n                # if len(lines)==3000:\n                #     break\n    data = np.asarray(lines)\n    start_frames_considered = np.unique(start_frames_considered)\n    return data, frame_diff, start_frames_considered, filename_str\n\n\n\n\n\n#     import pathhack\n# import ndjson\n# import os\n\n# raw_folder = 'real_data'\n# output_folder = raw_folder+'_txt'\n# data_dir = os.path.join(pathhack.pkg_path, \"datasets/trajnet++/train\", raw_folder)\n# output_dir = os.path.join(pathhack.pkg_path, \"datasets/trajnet++/train\", output_folder)\n# all_files = os.listdir(data_dir)\n# all_files = [os.path.join(data_dir, _path) for _path in all_files]\n\n# for path in all_files:\n#     print(path)\n#     _, filename = os.path.split(path)\n#     filename_str, filename_ext = filename.split('.')\n#     frame = None\n#     if filename_ext == \"ndjson\":\n#         lines = []\n#         with open(path) as f:\n#             reader = ndjson.reader(f)\n#             for post in reader:\n#                 # print(type(post))\n#                 # print(post.keys())\n#    
             # print(post['scene'])\n#                 if \"track\" in post.keys():\n#                     f = post[\"track\"][\"f\"]\n#                     if frame is None:\n#                         frame = f\n#                     else:\n#                         print(f-frame)\n#                         break\n#                     p = post[\"track\"][\"p\"]\n#                     x = post[\"track\"][\"x\"]\n#                     y = post[\"track\"][\"y\"]\n#                     line = str(f)+'\\t'+str(p)+'\\t'+str(x)+'\\t'+str(y)+'\\n'\n#                     lines.append(line)\n"
  },
  {
    "path": "gst_updated/src/mgnn/utils.py",
    "content": "from os.path import join\nimport argparse\nimport torch\nimport numpy as np\nfrom torch.utils.data import DataLoader\n\n\ndef average_offset_error(x_pred, x_target, loss_mask=None):\n    assert x_pred.shape[0] == 1\n    pos_pred = torch.cumsum(x_pred, dim=1)\n    pos_target = torch.cumsum(x_target, dim=1)\n    offset_error = torch.sqrt(((pos_pred-pos_target)**2.).sum(3))[0]\n    aoe = offset_error.mean(0)\n    if loss_mask is not None:\n        aoe = aoe * loss_mask[0]\n    return aoe\n\ndef final_offset_error(x_pred, x_target, loss_mask=None):\n    assert x_pred.shape[0] == 1\n    pos_pred = torch.cumsum(x_pred, dim=1)\n    pos_target = torch.cumsum(x_target, dim=1)\n    offset_error = torch.sqrt(((pos_pred-pos_target)**2.).sum(3))[0]\n    foe = offset_error[-1]\n    if loss_mask is not None:\n        foe = foe * loss_mask[0]\n    return foe\n\n\ndef negative_log_likelihood(gaussian_params, x_target, loss_mask=None):\n    mu, sx, sy, corr = gaussian_params\n    assert mu.shape[0] == 1\n    sigma = torch.cat((sx, sy), dim=3)\n    x_target_norm = (x_target-mu)/sigma\n    nx, ny = x_target_norm[:,:,:,0:1], x_target_norm[:,:,:,1:2]\n    loss_term_1 = torch.log(1.-corr**2.)/2.+torch.log(sx)+torch.log(sy)\n    loss_term_2 = (nx**2.-2.*corr*nx*ny+ny**2.)/(2.*(1.-corr**2.))\n    loss = loss_term_1+loss_term_2\n    loss = loss.squeeze(3).mean(1)\n    loss = (loss[0] * loss_mask[0]).sum()/loss_mask[0].sum()\n    return loss\n\n\ndef seq_to_graph(seq_, seq_rel, attn_mech='glob_kip'):\n    if len(seq_.shape) == 4:\n        seq_ = seq_[0]\n        seq_rel = seq_rel[0]\n    seq_len = seq_.shape[2]\n    max_nodes = seq_.shape[0]\n\n    V = np.zeros((seq_len, max_nodes, 2))\n\n    for s in range(seq_len):\n        step_rel = seq_rel[:, :, s]\n        for h in range(len(step_rel)):\n            V[s, h, :] = step_rel[h]\n\n    if attn_mech == 'plain':\n        A = np.ones((seq_len, max_nodes, max_nodes))*(1./max_nodes)\n    elif attn_mech == 'rel_conv':\n        A = []\n        for tt in range(seq_len):\n            x = seq_[:,:,tt]\n            x_row = torch.ones(max_nodes,max_nodes,2)*x.view(max_nodes,1,2)\n            x_col = torch.ones(max_nodes,max_nodes,2)*x.view(1,max_nodes,2)\n            edge_attr = x_row-x_col\n            A.append(edge_attr.numpy())\n        A = np.stack(A, axis=0)\n    else:\n        raise RuntimeError('Wrong attention mechanism.')\n\n    return torch.from_numpy(V).type(torch.float),\\\n        torch.from_numpy(A).type(torch.float)\n\n\ndef rotate_graph(vtx, adj, theta):\n    vtx_rotated_x = vtx[:,:,:,0:1]*np.cos(theta)-vtx[:,:,:,1:2]*np.sin(theta)\n    vtx_rotated_y = vtx[:,:,:,0:1]*np.sin(theta)+vtx[:,:,:,1:2]*np.cos(theta)\n    vtx_rotated = torch.cat((vtx_rotated_x, vtx_rotated_y), dim=3)\n\n    adj_rotated_x = adj[:,:,:,:,0:1]*np.cos(theta)-adj[:,:,:,:,1:2]*np.sin(theta)\n    adj_rotated_y = adj[:,:,:,:,0:1]*np.sin(theta)+adj[:,:,:,:,1:2]*np.cos(theta)\n    adj_rotated = torch.cat((adj_rotated_x, adj_rotated_y), dim=4)\n\n    return vtx_rotated, adj_rotated\n\ndef random_rotate_graph(args, vtx_obs, adj_obs, vtx_pred_gt, adj_pred_gt):\n    if args.rotation_pattern is None:\n        raise RuntimeError('random_rotate_seq should not be called when rotation_pattern is None.')\n\n    if args.rotation_pattern == 'right_angle':\n        theta = (torch.randint(0, 4, ()).float()/2.*np.pi).item()\n    elif args.rotation_pattern == 'random':\n        theta = (torch.rand(())*2.*np.pi).item()\n    else:\n        raise RuntimeError('Rotation pattern is not found in 
args.')\n    vtx_obs_rotated, adj_obs_rotated = rotate_graph(vtx_obs, adj_obs, theta)\n    vtx_pred_gt_rotated, adj_pred_gt_rotated = rotate_graph(vtx_pred_gt, adj_pred_gt, theta)\n    return (vtx_obs_rotated, adj_obs_rotated, vtx_pred_gt_rotated, adj_pred_gt_rotated), theta\n\ndef load_batch_dataset(args, pkg_path, subfolder='train', num_workers=4, shuffle=None):\n    result_filename = args.dataset+'_dset_'+subfolder+'_batch_trajectories.pt'\n    if args.dataset == 'sdd':\n        dataset_folderpath = join(pkg_path, 'datasets/sdd/social_pool_data')\n    elif args.dataset == 'real' or args.dataset == 'synth' or args.dataset == 'all':\n        dataset_folderpath = join(pkg_path, 'datasets/trajnet++/train/')\n    elif args.dataset == 'deathCircle' or args.dataset == 'hyang':\n        dataset_folderpath = join(pkg_path, 'datasets/sdd', args.dataset)\n    elif args.dataset == 'sj':\n        dataset_folderpath = join(pkg_path, 'datasets/shuijing/orca_20humans_fov')\n    else:\n        # dataset_folderpath = join(pkg_path, 'datasets/eth_ucy', args.dataset)\n        dataset_folderpath = join(pkg_path, 'datasets/self_eth_ucy', args.dataset)\n        print(\"self_eth_ucy\")\n    dset = torch.load(join(dataset_folderpath, result_filename))\n    if shuffle is None:\n        if subfolder == 'train':\n            shuffle = True\n        else:\n            shuffle = False\n    dloader = DataLoader(\n        dset,\n        batch_size=1,\n        shuffle=shuffle,\n        num_workers=num_workers)\n    return dloader\n\ndef args2writername(args):\n    writername = str(args.temp_epochs)+'-'+str(args.spatial)\\\n        +'-'+str(args.temporal)+'-lr_'+str(args.lr)\n    if args.deterministic:\n        writername = writername + '-deterministic'\n    # if args.init_temp != 1.:\n    writername = writername + '-init_temp_'+str(args.init_temp)\n    if args.clip_grad is not None:\n        writername = writername + '-clip_grad_'+str(args.clip_grad)\n    # if args.spatial_num_heads_edges != 4:\n    writername = writername + '-edge_head_'+str(args.spatial_num_heads_edges)\n    if args.only_observe_full_period:\n        writername = writername + '-only_full'\n    if args.detach_sample:\n        writername = writername + '-detach'\n    # if args.embedding_size != 64:\n    writername = writername + '-ebd_'+str(args.embedding_size)\n    # if args.spatial_num_layers != 3:\n    writername = writername + '-snl_'+str(args.spatial_num_layers)\n    # if args.spatial_num_heads != 8:\n    writername = writername + '-snh_'+str(args.spatial_num_heads)\n    if args.ghost:\n        writername = writername + '-ghost'\n    writername = writername + '-seed_'+str(args.random_seed)\n    return writername\n\n\ndef arg_parse():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--spatial', default='rel_conv',\n                        help='spatial encoding methods: rel_conv, plain')\n    parser.add_argument('--temporal', default='lstm',\n                        help='temporal encoding methods: lstm, temporal_convolution_net, faster_lstm.')                    \n    parser.add_argument('--output_dim', type=int, default=5,\n                        help='5 for mu_x, mu_y, sigma_x, sigma_y, corr')\n    parser.add_argument('--embedding_size', type=int, default=64)\n    parser.add_argument('--spatial_num_heads', type=int, default=8,\n                        help='number of heads for multi-head \\\n                        attention mechanism in spatial encoding')\n    parser.add_argument('--lstm_hidden_size', type=int, default=64)\n   
 parser.add_argument('--lstm_num_layers', type=int, default=1)          \n    parser.add_argument('--decode_style', default='recursive',\n                        help='recursive, readout')\n    parser.add_argument('--detach_sample', action='store_true',\n                        help='Default False means using reparameterization trick. \\\n                        True means disable the trick and cut gradient flow between \\\n                        gaussian parameters and samples in prediction period.')\n    parser.add_argument('--motion_dim', type=int, default=2,\n                        help='pedestrian motion is 2D')\n    parser.add_argument('--dataset', default='eth',\n                        help='eth,hotel,univ,zara1,zara2')\n    parser.add_argument('--obs_seq_len', type=int, default=8)\n    parser.add_argument('--pred_seq_len', type=int, default=12)\n    parser.add_argument('--rotation_pattern', default=None,\n                        help='rotation pattern used for data augmentation in training \\\n                        phase: right_angle, or random. None means no rotation.')\n    parser.add_argument('--batch_size', type=int, default=16,\n                        help='minibatch size')\n    parser.add_argument('--num_epochs', type=int, default=200,\n                        help='number of epochs')\n    parser.add_argument('--lr', type=float, default=1e-3,\n                        help='learning rate')\n    parser.add_argument('--clip_grad', type=float, default=None,\n                        help='gradient clipping')\n    parser.add_argument('--organize_csv', action='store_true',\n                        help='Organize test_performance.csv towards performance_organized.csv.')\n    parser.add_argument('--spatial_num_layers', type=int, default=3)\n    parser.add_argument('--only_observe_full_period', action='store_true',\n                        help='Only observe pedestrians that appear in the full period.')\n    parser.add_argument('--spatial_num_heads_edges', type=int, default=4)\n    parser.add_argument('--ghost', action='store_true')\n    parser.add_argument('--random_seed', type=int, default=1000)\n    parser.add_argument('--deterministic', action='store_true')\n    parser.add_argument('--init_temp', type=float, default=1.)\n    parser.add_argument('--resume_training', action='store_true')\n    parser.add_argument('--temp_epochs', type=int, default=200)\n    parser.add_argument('--save_epochs', type=int, default=10)\n    parser.add_argument('--resume_epoch', type=int, default=None,\n                        help='the index of epoch where we resume training.')\n    return parser.parse_args()\n"
  },
  {
    "path": "gst_updated/src/pec_net/config/optimal.yaml",
    "content": "adl_reg: 1\ndata_scale: 1.86\ndataset_type: image\ndec_size:\n- 1024\n- 512\n- 1024\ndist_thresh: 100\nenc_dest_size:\n- 8\n- 16\nenc_latent_size:\n- 8\n- 50\nenc_past_size:\n- 512\n- 256\nnon_local_theta_size:\n- 256\n- 128\n- 64\nnon_local_phi_size:\n- 256\n- 128\n- 64\nnon_local_g_size:\n- 256\n- 128\n- 64\nnon_local_dim: 128\nfdim: 16\nfuture_length: 12\ngpu_index: 0\nkld_reg: 1\nlearning_rate: 0.0003\nmu: 0\nn_values: 20\nnonlocal_pools: 3\nnormalize_type: shift_origin\nnum_epochs: 650\nnum_workers: 0\npast_length: 8\npredictor_hidden_size:\n- 1024\n- 512\n- 256\nsigma: 1.3\ntest_b_size: 4096\ntime_thresh: 0\ntrain_b_size: 512\nzdim: 16\n"
  },
  {
    "path": "gst_updated/src/pec_net/sdd_trajectories.py",
    "content": "from torch.utils.data import Dataset\nimport numpy as np\n\nclass SDDTrajectoriesDataset(Dataset):\n    def __init__(\n        self,\n        ):\n        super(SDDTrajectoriesDataset, self).__init__()\n        self.num_seq = 0\n        self.obs_traj = []\n        self.pred_traj = []\n        self.obs_traj_rel = []\n        self.pred_traj_rel = []\n        self.loss_mask_rel = []\n        self.loss_mask = []\n        self.v_obs = []\n        self.A_obs = []\n        self.v_pred = []\n        self.A_pred = []\n        self.attn_mask_obs = []\n        self.attn_mask_pred = []\n\n    def add_batch(\n        self,\n        obs_traj,\n        pred_traj_gt,\n        obs_traj_rel,\n        pred_traj_rel_gt,\n        loss_mask_rel,\n        loss_mask,\n        v_obs, \n        A_obs,\n        v_pred,\n        A_pred,\n        attn_mask_obs,\n        attn_mask_pred,\n    ):\n        self.num_seq += 1\n        self.obs_traj.append(obs_traj[0].float())\n        self.pred_traj.append(pred_traj_gt[0].float())\n        self.obs_traj_rel.append(obs_traj_rel[0].float())\n        self.pred_traj_rel.append(pred_traj_rel_gt[0].float())\n        self.loss_mask_rel.append(loss_mask_rel[0].float())\n        self.loss_mask.append(loss_mask[0].float())\n        self.v_obs.append(v_obs[0].float())\n        self.A_obs.append(A_obs[0].float())\n        self.v_pred.append(v_pred[0].float())\n        self.A_pred.append(A_pred[0].float())\n        self.attn_mask_obs.append(attn_mask_obs[0].float())\n        self.attn_mask_pred.append(attn_mask_pred[0].float())\n\n    def __len__(self):\n        return self.num_seq\n\n    def __getitem__(self, index):\n        out = [\n            self.obs_traj[index], self.pred_traj[index],\n            self.obs_traj_rel[index], self.pred_traj_rel[index],\n            self.loss_mask_rel[index], self.loss_mask[index],\n            self.v_obs[index], self.A_obs[index],\n            self.v_pred[index], self.A_pred[index],\n            self.attn_mask_obs[index], self.attn_mask_pred[index],\n        ]\n        return out"
  },
  {
    "path": "gst_updated/src/pec_net/social_utils.py",
    "content": "from IPython import embed\nimport glob\nimport pandas as pd\nimport pickle\nimport os\nimport torch\nfrom torch import nn\nfrom torch.utils import data\nimport random\nimport numpy as np\n\n'''for sanity check'''\ndef naive_social(p1_key, p2_key, all_data_dict):\n\tif abs(p1_key-p2_key)<4:\n\t\treturn True\n\telse:\n\t\treturn False\n\ndef find_min_time(t1, t2):\n\t'''given two time frame arrays, find then min dist (time)'''\n\tmin_d = 9e4\n\tt1, t2 = t1[:8], t2[:8]\n\n\tfor t in t2:\n\t\tif abs(t1[0]-t)<min_d:\n\t\t\tmin_d = abs(t1[0]-t)\n\n\tfor t in t1:\n\t\tif abs(t2[0]-t)<min_d:\n\t\t\tmin_d = abs(t2[0]-t)\n\n\treturn min_d\n\ndef find_min_dist(p1x, p1y, p2x, p2y):\n\t'''given two time frame arrays, find then min dist'''\n\tmin_d = 9e4\n\tp1x, p1y = p1x[:8], p1y[:8]\n\tp2x, p2y = p2x[:8], p2y[:8]\n\n\tfor i in range(len(p1x)):\n\t\tfor j in range(len(p1x)):\n\t\t\tif ((p2x[i]-p1x[j])**2 + (p2y[i]-p1y[j])**2)**0.5 < min_d:\n\t\t\t\tmin_d = ((p2x[i]-p1x[j])**2 + (p2y[i]-p1y[j])**2)**0.5\n\n\treturn min_d\n\ndef social_and_temporal_filter(p1_key, p2_key, all_data_dict, time_thresh=48, dist_tresh=100):\n\tp1_traj, p2_traj = np.array(all_data_dict[p1_key]), np.array(all_data_dict[p2_key])\n\tp1_time, p2_time = p1_traj[:,1], p2_traj[:,1]\n\tp1_x, p2_x = p1_traj[:,2], p2_traj[:,2]\n\tp1_y, p2_y = p1_traj[:,3], p2_traj[:,3]\n\n\tif find_min_time(p1_time, p2_time)>time_thresh:\n\t\treturn False\n\tif find_min_dist(p1_x, p1_y, p2_x, p2_y)>dist_tresh:\n\t\treturn False\n\n\treturn True\n\ndef mark_similar(mask, sim_list):\n\tfor i in range(len(sim_list)):\n\t\tfor j in range(len(sim_list)):\n\t\t\tmask[sim_list[i]][sim_list[j]] = 1\n\n\ndef collect_data(set_name, dataset_type = 'image', batch_size=512, time_thresh=48, dist_tresh=100, scene=None, verbose=True, root_path=\"./\"):\n\n\tassert set_name in ['train','val','test']\n\n\t'''Please specify the parent directory of the dataset. 
In our case data was stored in:\n\t\troot_path/trajnet_image/train/scene_name.txt\n\t\troot_path/trajnet_image/test/scene_name.txt\n\t'''\n\n\trel_path = '/trajnet_{0}/{1}/stanford'.format(dataset_type, set_name)\n\n\tfull_dataset = []\n\tfull_masks = []\n\n\tcurrent_batch = []\n\tmask_batch = [[0 for i in range(int(batch_size*1.5))] for j in range(int(batch_size*1.5))]\n\n\tcurrent_size = 0\n\tsocial_id = 0\n\tpart_file = '/{}.txt'.format('*' if scene == None else scene)\n\n\tfor file in glob.glob(root_path + rel_path + part_file):\n\t\tscene_name = file[len(root_path+rel_path)+1:-6] + file[-5]\n\t\tdata = np.loadtxt(fname = file, delimiter = ' ')\n\n\t\tdata_by_id = {}\n\t\tfor frame_id, person_id, x, y in data:\n\t\t\tif person_id not in data_by_id.keys():\n\t\t\t\tdata_by_id[person_id] = []\n\t\t\tdata_by_id[person_id].append([person_id, frame_id, x, y])\n\n\t\tall_data_dict = data_by_id.copy()\n\t\tif verbose:\n\t\t\tprint(\"Total People: \", len(list(data_by_id.keys())))\n\t\twhile len(list(data_by_id.keys()))>0:\n\t\t\trelated_list = []\n\t\t\tcurr_keys = list(data_by_id.keys())\n\n\t\t\tif current_size<batch_size:\n\t\t\t\tpass\n\t\t\telse:\n\t\t\t\tfull_dataset.append(current_batch.copy())\n\t\t\t\tmask_batch = np.array(mask_batch)\n\t\t\t\tfull_masks.append(mask_batch[0:len(current_batch), 0:len(current_batch)])\n\n\t\t\t\tcurrent_size = 0\n\t\t\t\tsocial_id = 0\n\t\t\t\tcurrent_batch = []\n\t\t\t\tmask_batch = [[0 for i in range(int(batch_size*1.5))] for j in range(int(batch_size*1.5))]\n\n\t\t\tcurrent_batch.append((all_data_dict[curr_keys[0]]))\n\t\t\trelated_list.append(current_size)\n\t\t\tcurrent_size+=1\n\t\t\tdel data_by_id[curr_keys[0]]\n\n\t\t\tfor i in range(1, len(curr_keys)):\n\t\t\t\tif social_and_temporal_filter(curr_keys[0], curr_keys[i], all_data_dict, time_thresh, dist_tresh):\n\t\t\t\t\tcurrent_batch.append((all_data_dict[curr_keys[i]]))\n\t\t\t\t\trelated_list.append(current_size)\n\t\t\t\t\tcurrent_size+=1\n\t\t\t\t\tdel data_by_id[curr_keys[i]]\n\n\t\t\tmark_similar(mask_batch, related_list)\n\t\t\tsocial_id +=1\n\n\n\tfull_dataset.append(current_batch)\n\tmask_batch = np.array(mask_batch)\n\tfull_masks.append(mask_batch[0:len(current_batch),0:len(current_batch)])\n\treturn full_dataset, full_masks\n\ndef generate_pooled_data(b_size, t_tresh, d_tresh, train=True, scene=None, verbose=True):\n\tif train:\n\t\tfull_train, full_masks_train = collect_data(\"train\", batch_size=b_size, time_thresh=t_tresh, dist_tresh=d_tresh, scene=scene, verbose=verbose)\n\t\ttrain = [full_train, full_masks_train]\n\t\ttrain_name = \"../social_pool_data/train_{0}_{1}_{2}_{3}.pickle\".format('all' if scene is None else scene[:-2] + scene[-1], b_size, t_tresh, d_tresh)\n\t\twith open(train_name, 'wb') as f:\n\t\t\tpickle.dump(train, f)\n\n\tif not train:\n\t\tfull_test, full_masks_test = collect_data(\"test\", batch_size=b_size, time_thresh=t_tresh, dist_tresh=d_tresh, scene=scene, verbose=verbose)\n\t\ttest = [full_test, full_masks_test]\n\t\ttest_name = \"../social_pool_data/test_{0}_{1}_{2}_{3}.pickle\".format('all' if scene is None else scene[:-2] + scene[-1], b_size, t_tresh, d_tresh)# + str(b_size) + \"_\" + str(t_tresh) + \"_\" + str(d_tresh) + \".pickle\"\n\t\twith open(test_name, 'wb') as f:\n\t\t\tpickle.dump(test, f)\n\ndef initial_pos(traj_batches):\n\tbatches = []\n\tfor b in traj_batches:\n\t\tstarting_pos = b[:,7,:].copy()/1000 #starting pos is end of past, start of future. 
scaled down.\n\t\tbatches.append(starting_pos)\n\n\treturn batches\n\ndef calculate_loss(x, reconstructed_x, mean, log_var, criterion, future, interpolated_future):\n\t# reconstruction loss\n\tRCL_dest = criterion(x, reconstructed_x)\n\n\tADL_traj = criterion(future, interpolated_future) # better with l2 loss\n\n\t# kl divergence loss\n\tKLD = -0.5 * torch.sum(1 + log_var - mean.pow(2) - log_var.exp())\n\n\treturn RCL_dest, KLD, ADL_traj\n\nclass SocialDataset(data.Dataset):\n\n\tdef __init__(self, set_name, pkg_path, b_size=4096, t_tresh=60, d_tresh=50, scene=None, id=False, verbose=True):\n\t\t# 'Initialization'\n\t\t# set_name = 'train' or 'test'\n\t\tload_name = os.path.join(pkg_path, \"datasets/sdd/social_pool_data/{0}_{1}{2}_{3}_{4}.pickle\".format(set_name, 'all_' if scene is None else scene[:-2] + scene[-1] + '_', b_size, t_tresh, d_tresh))\n\t\tprint(load_name)\n\t\twith open(load_name, 'rb') as f:\n\t\t\tdata = pickle.load(f)\n\n\t\ttraj, masks = data\n\t\ttraj_new = []\n\n\t\tif id==False:\n\t\t\tfor t in traj:\n\t\t\t\tt = np.array(t)\n\t\t\t\tt = t[:,:,2:]\n\t\t\t\ttraj_new.append(t)\n\t\t\t\tif set_name==\"train\":\n\t\t\t\t\t#augment training set with reversed tracklets...\n\t\t\t\t\treverse_t = np.flip(t, axis=1).copy()\n\t\t\t\t\ttraj_new.append(reverse_t)\n\t\telse:\n\t\t\tfor t in traj:\n\t\t\t\tt = np.array(t)\n\t\t\t\ttraj_new.append(t)\n\n\t\t\t\tif set_name==\"train\":\n\t\t\t\t\t#augment training set with reversed tracklets...\n\t\t\t\t\treverse_t = np.flip(t, axis=1).copy()\n\t\t\t\t\ttraj_new.append(reverse_t)\n\n\n\t\tmasks_new = []\n\t\tfor m in masks:\n\t\t\tmasks_new.append(m)\n\n\t\t\tif set_name==\"train\":\n\t\t\t\t#add second time for the reversed tracklets...\n\t\t\t\tmasks_new.append(m)\n\n\t\ttraj_new = np.array(traj_new)\n\t\tmasks_new = np.array(masks_new)\n\t\tself.trajectory_batches = traj_new.copy()\n\t\tself.mask_batches = masks_new.copy()\n\t\tself.initial_pos_batches = np.array(initial_pos(self.trajectory_batches)) #for relative positioning\n\t\tif verbose:\n\t\t\tprint(\"Initialized social dataloader...\")\n\n\"\"\"\nWe've provided pickle files, but to generate new files for different datasets or thresholds, please use a command like so:\nParameter1: batchsize, Parameter2: time_thresh, Param3: dist_thresh\n\"\"\"\n\n# generate_pooled_data(512,0,25, train=True, verbose=True, root_path=\"./\")\n\ndef split_square_block_matrix(block_mat):\n    block_size_list = []\n    start_idx = 0\n    curr_block_size = 1\n    for i in range(1, block_mat.shape[0]):\n        if block_mat[start_idx, i] != 0:\n            curr_block_size += 1\n        else:\n            block_size_list.append(curr_block_size)\n            curr_block_size = 1\n            start_idx = i\n    block_size_list.append(curr_block_size)\n    return block_size_list\n"
  },
  {
    "path": "gst_updated/tuning/211130-train_shuijing.sh",
    "content": "temporal=faster_lstm\ndecode_style=recursive\nonly_observe_full_period=\n\nsave_epochs=10\nspatial=gumbel_social_transformer\nrotation_pattern=random\nlr=1e-3\nspatial_num_heads=8\nembedding_size=64\nlstm_hidden_size=64\nbatch_size=1\n\ndeterministic=\ndetach_sample=\ninit_temp=0.5\nclip_grad=10.\ntemp_epochs=100\n\nfor random_seed in 1000; do\n    for num_epochs in 100; do\n        for spatial_num_layers in 1; do\n            for lr in 1e-3; do\n                for dataset in sj; do\n                    for spatial_num_heads_edges in 0; do\n                        CUDA_VISIBLE_DEVICES=1 python -u scripts/experiments/train.py --spatial $spatial --temporal $temporal --lr $lr\\\n                            --dataset $dataset --temp_epochs $temp_epochs --num_epochs $num_epochs --save_epochs $save_epochs\\\n                            $detach_sample\\\n                            --spatial_num_heads $spatial_num_heads --spatial_num_layers $spatial_num_layers\\\n                            --decode_style $decode_style \\\n                            --spatial_num_heads_edges $spatial_num_heads_edges --random_seed $random_seed\\\n                            $deterministic --lstm_hidden_size $lstm_hidden_size --embedding_size $embedding_size\\\n                            --batch_size $batch_size \\\n                            --init_temp $init_temp $only_observe_full_period\\\n                            --obs_seq_len 5 --pred_seq_len 5 \\\n                            | tee -a logs/211130.txt\n                    done\n                done\n            done\n        done\n    done\ndone\n\n# for random_seed in 1000; do\n#     for num_epochs in 100; do\n#         for spatial_num_layers in 1; do\n#             for lr in 1e-3; do\n#                 for dataset in deathCircle hyang; do\n#                     for spatial_num_heads_edges in 0; do\n#                         CUDA_VISIBLE_DEVICES=1 python -u scripts/experiments/train.py --spatial $spatial --temporal $temporal --lr $lr\\\n#                             --dataset $dataset --temp_epochs $temp_epochs --num_epochs $num_epochs --save_epochs $save_epochs\\\n#                             $detach_sample\\\n#                             --spatial_num_heads $spatial_num_heads --spatial_num_layers $spatial_num_layers\\\n#                             --decode_style $decode_style \\\n#                             --spatial_num_heads_edges $spatial_num_heads_edges --random_seed $random_seed\\\n#                             $deterministic --lstm_hidden_size $lstm_hidden_size --embedding_size $embedding_size\\\n#                             --batch_size $batch_size \\\n#                             --init_temp $init_temp --only_observe_full_period\\\n#                             | tee -a logs/211112.txt\n#                     done\n#                 done\n#             done\n#         done\n#     done\n# done\n\n\n# for random_seed in 1000; do\n#     for num_epochs in 100; do\n#         for spatial_num_layers in 1; do\n#             for lr in 1e-3; do\n#                 for dataset in deathCircle hyang; do\n#                     for spatial_num_heads_edges in 16 8 4 2 1 0; do\n#                         CUDA_VISIBLE_DEVICES=1 python -u scripts/experiments/train.py --spatial $spatial --temporal $temporal --lr $lr\\\n#                             --dataset $dataset --temp_epochs $temp_epochs --num_epochs $num_epochs --save_epochs $save_epochs\\\n#                             $detach_sample\\\n#                             
--spatial_num_heads $spatial_num_heads --spatial_num_layers $spatial_num_layers\\\n#                             --decode_style $decode_style \\\n#                             --spatial_num_heads_edges $spatial_num_heads_edges --random_seed $random_seed\\\n#                             $deterministic --lstm_hidden_size $lstm_hidden_size --embedding_size $embedding_size\\\n#                             --batch_size $batch_size \\\n#                             --init_temp $init_temp $only_observe_full_period\\\n#                             | tee -a logs/211112.txt\n#                     done\n#                 done\n#             done\n#         done\n#     done\n# done\n\n\n\n\n\n\n\n"
  },
  {
    "path": "gst_updated/tuning/211203-eval_shuijing.sh",
    "content": "temporal=faster_lstm\ndecode_style=recursive\nonly_observe_full_period=\n\nsave_epochs=10\nspatial=gumbel_social_transformer\nrotation_pattern=random\nlr=1e-3\nspatial_num_heads=8\nembedding_size=64\nlstm_hidden_size=64\nbatch_size=1\n\ndeterministic=\ndetach_sample=\ninit_temp=0.5\nclip_grad=10.\ntemp_epochs=100\n\nfor random_seed in 1000; do\n    for num_epochs in 100; do\n        for spatial_num_layers in 1; do\n            for lr in 1e-3; do\n                for dataset in sj; do\n                    for spatial_num_heads_edges in 0; do\n                        CUDA_VISIBLE_DEVICES=0 python -u scripts/experiments/eval.py --spatial $spatial --temporal $temporal --lr $lr\\\n                            --dataset $dataset --temp_epochs $temp_epochs --num_epochs $num_epochs --save_epochs $save_epochs\\\n                            $detach_sample\\\n                            --spatial_num_heads $spatial_num_heads --spatial_num_layers $spatial_num_layers\\\n                            --decode_style $decode_style \\\n                            --spatial_num_heads_edges $spatial_num_heads_edges --random_seed $random_seed\\\n                            $deterministic --lstm_hidden_size $lstm_hidden_size --embedding_size $embedding_size\\\n                            --batch_size $batch_size \\\n                            --init_temp $init_temp $only_observe_full_period\\\n                            --obs_seq_len 5 --pred_seq_len 5 \\\n                            | tee -a logs/211203_sj.txt\n                        \n                    done\n                done\n            done\n        done\n    done\ndone\n\n\n"
  },
  {
    "path": "gst_updated/tuning/211203-train_shuijing.sh",
    "content": "temporal=faster_lstm\ndecode_style=recursive\nonly_observe_full_period=\n\nsave_epochs=10\nspatial=gumbel_social_transformer\nrotation_pattern=random\nlr=1e-3\nspatial_num_heads=8\nembedding_size=64\nlstm_hidden_size=64\nbatch_size=1\n\ndeterministic=\ndetach_sample=\ninit_temp=0.5\nclip_grad=10.\ntemp_epochs=100\n\nfor random_seed in 1000; do\n    for num_epochs in 100; do\n        for spatial_num_layers in 1; do\n            for lr in 1e-3; do\n                for dataset in sj; do\n                    for spatial_num_heads_edges in 4; do\n                        CUDA_VISIBLE_DEVICES=1 python -u scripts/experiments/train.py --spatial $spatial --temporal $temporal --lr $lr\\\n                            --dataset $dataset --temp_epochs $temp_epochs --num_epochs $num_epochs --save_epochs $save_epochs\\\n                            $detach_sample\\\n                            --spatial_num_heads $spatial_num_heads --spatial_num_layers $spatial_num_layers\\\n                            --decode_style $decode_style \\\n                            --spatial_num_heads_edges $spatial_num_heads_edges --random_seed $random_seed\\\n                            $deterministic --lstm_hidden_size $lstm_hidden_size --embedding_size $embedding_size\\\n                            --batch_size $batch_size \\\n                            --init_temp $init_temp $only_observe_full_period --ghost\\\n                            --obs_seq_len 5 --pred_seq_len 5 \\\n                            | tee -a logs/211130.txt\n                    done\n                done\n            done\n        done\n    done\ndone\n\n# for random_seed in 1000; do\n#     for num_epochs in 100; do\n#         for spatial_num_layers in 1; do\n#             for lr in 1e-3; do\n#                 for dataset in deathCircle hyang; do\n#                     for spatial_num_heads_edges in 0; do\n#                         CUDA_VISIBLE_DEVICES=1 python -u scripts/experiments/train.py --spatial $spatial --temporal $temporal --lr $lr\\\n#                             --dataset $dataset --temp_epochs $temp_epochs --num_epochs $num_epochs --save_epochs $save_epochs\\\n#                             $detach_sample\\\n#                             --spatial_num_heads $spatial_num_heads --spatial_num_layers $spatial_num_layers\\\n#                             --decode_style $decode_style \\\n#                             --spatial_num_heads_edges $spatial_num_heads_edges --random_seed $random_seed\\\n#                             $deterministic --lstm_hidden_size $lstm_hidden_size --embedding_size $embedding_size\\\n#                             --batch_size $batch_size \\\n#                             --init_temp $init_temp --only_observe_full_period\\\n#                             | tee -a logs/211112.txt\n#                     done\n#                 done\n#             done\n#         done\n#     done\n# done\n\n\n# for random_seed in 1000; do\n#     for num_epochs in 100; do\n#         for spatial_num_layers in 1; do\n#             for lr in 1e-3; do\n#                 for dataset in deathCircle hyang; do\n#                     for spatial_num_heads_edges in 16 8 4 2 1 0; do\n#                         CUDA_VISIBLE_DEVICES=1 python -u scripts/experiments/train.py --spatial $spatial --temporal $temporal --lr $lr\\\n#                             --dataset $dataset --temp_epochs $temp_epochs --num_epochs $num_epochs --save_epochs $save_epochs\\\n#                             $detach_sample\\\n#                             
--spatial_num_heads $spatial_num_heads --spatial_num_layers $spatial_num_layers\\\n#                             --decode_style $decode_style \\\n#                             --spatial_num_heads_edges $spatial_num_heads_edges --random_seed $random_seed\\\n#                             $deterministic --lstm_hidden_size $lstm_hidden_size --embedding_size $embedding_size\\\n#                             --batch_size $batch_size \\\n#                             --init_temp $init_temp $only_observe_full_period\\\n#                             | tee -a logs/211112.txt\n#                     done\n#                 done\n#             done\n#         done\n#     done\n# done\n\n\n\n\n\n\n\n"
  },
  {
    "path": "gst_updated/tuning/211209-test_shuijing.sh",
    "content": "temporal=faster_lstm\ndecode_style=recursive\nonly_observe_full_period=\n\nsave_epochs=10\nspatial=gumbel_social_transformer\nrotation_pattern=random\nlr=1e-3\nspatial_num_heads=8\nembedding_size=64\nlstm_hidden_size=64\nbatch_size=1\n\ndeterministic=\ndetach_sample=\ninit_temp=0.5\nclip_grad=10.\ntemp_epochs=100\n\nfor random_seed in 1000; do\n    for num_epochs in 100; do\n        for spatial_num_layers in 1; do\n            for lr in 1e-3; do\n                for dataset in sj; do\n                    for spatial_num_heads_edges in 0; do\n                        CUDA_VISIBLE_DEVICES=0 python -u scripts/experiments/test.py --spatial $spatial --temporal $temporal --lr $lr\\\n                            --dataset $dataset --temp_epochs $temp_epochs --num_epochs $num_epochs --save_epochs $save_epochs\\\n                            $detach_sample\\\n                            --spatial_num_heads $spatial_num_heads --spatial_num_layers $spatial_num_layers\\\n                            --decode_style $decode_style \\\n                            --spatial_num_heads_edges $spatial_num_heads_edges --random_seed $random_seed\\\n                            $deterministic --lstm_hidden_size $lstm_hidden_size --embedding_size $embedding_size\\\n                            --batch_size $batch_size \\\n                            --init_temp $init_temp $only_observe_full_period\\\n                            --obs_seq_len 5 --pred_seq_len 5 \\\n                            | tee -a logs/211203_sj.txt\n                        \n                    done\n                done\n            done\n        done\n    done\ndone\n\n\n"
  },
  {
    "path": "plot.py",
    "content": "import pandas as pd\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\n\r\n\r\nlegends = ['Non-randomized humans', 'randomized humans', '', '', '']\r\n\r\n# add any folder directories here!\r\nlog_list = [\r\npd.read_csv(\"trained_models/GST_predictor_non_rand/progress.csv\"),\r\npd.read_csv(\"trained_models/GST_predictor_rand/progress.csv\"),\r\n\t]\r\n\r\n\r\nlogDicts = {}\r\nfor i in range(len(log_list)):\r\n\tlogDicts[i] = log_list[i]\r\n\r\ngraphDicts={0:'eprewmean', 1:'loss/value_loss'}\r\n\r\nlegendList=[]\r\n# summarize history for accuracy\r\n\r\n# for each metric\r\nfor i in range(len(graphDicts)):\r\n\tplt.figure(i)\r\n\tplt.title(graphDicts[i])\r\n\tj = 0\r\n\tfor key in logDicts:\r\n\t\tif graphDicts[i] not in logDicts[key]:\r\n\t\t\tcontinue\r\n\t\telse:\r\n\t\t\tplt.plot(logDicts[key]['misc/total_timesteps'],logDicts[key][graphDicts[i]])\r\n\r\n\t\t\tlegendList.append(legends[j])\r\n\t\t\tprint('avg', str(key), graphDicts[i], np.average(logDicts[key][graphDicts[i]]))\r\n\t\tj = j + 1\r\n\tprint('------------------------')\r\n\r\n\tplt.xlabel('total_timesteps')\r\n\tplt.legend(legendList, loc='lower right')\r\n\tlegendList=[]\r\n\r\nplt.show()\r\n"
  },
  {
    "path": "requirements.txt",
    "content": "numpy==1.20.3\npandas==1.5.2\ngym==0.15.7\ntensorflow-gpu==2.11.0\nmatplotlib==3.6.2\nCython"
  },
  {
    "path": "rl/.gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\ntrained_models/\n.fuse_hidden*\n"
  },
  {
    "path": "rl/__init__.py",
    "content": ""
  },
  {
    "path": "rl/evaluation.py",
    "content": "import numpy as np\nimport torch\n\nfrom crowd_sim.envs.utils.info import *\n\n\ndef evaluate(actor_critic, eval_envs, num_processes, device, test_size, logging, config, args, visualize=False):\n    \"\"\" function to run all testing episodes and log the testing metrics \"\"\"\n    # initializations\n    eval_episode_rewards = []\n\n    if config.robot.policy not in ['orca', 'social_force']:\n        eval_recurrent_hidden_states = {}\n\n        node_num = 1\n        edge_num = actor_critic.base.human_num + 1\n        eval_recurrent_hidden_states['human_node_rnn'] = torch.zeros(num_processes, node_num, actor_critic.base.human_node_rnn_size,\n                                                                     device=device)\n\n        eval_recurrent_hidden_states['human_human_edge_rnn'] = torch.zeros(num_processes, edge_num,\n                                                                           actor_critic.base.human_human_edge_rnn_size,\n                                                                           device=device)\n\n    eval_masks = torch.zeros(num_processes, 1, device=device)\n\n    success_times = []\n    collision_times = []\n    timeout_times = []\n\n    success = 0\n    collision = 0\n    timeout = 0\n    too_close_ratios = []\n    min_dist = []\n\n    collision_cases = []\n    timeout_cases = []\n\n    all_path_len = []\n\n    # to make it work with the virtualenv in sim2real\n    if hasattr(eval_envs.venv, 'envs'):\n        baseEnv = eval_envs.venv.envs[0].env\n    else:\n        baseEnv = eval_envs.venv.unwrapped.envs[0].env\n    time_limit = baseEnv.time_limit\n\n    # start the testing episodes\n    for k in range(test_size):\n        baseEnv.episode_k = k\n        done = False\n        rewards = []\n        stepCounter = 0\n        episode_rew = 0\n        obs = eval_envs.reset()\n        global_time = 0.0\n        path_len = 0.\n        too_close = 0.\n        last_pos = obs['robot_node'][0, 0, :2].cpu().numpy()\n\n\n        while not done:\n            stepCounter = stepCounter + 1\n            if config.robot.policy not in ['orca', 'social_force']:\n                # run inference on the NN policy\n                with torch.no_grad():\n                    _, action, _, eval_recurrent_hidden_states = actor_critic.act(\n                        obs,\n                        eval_recurrent_hidden_states,\n                        eval_masks,\n                        deterministic=True)\n            else:\n                action = torch.zeros([1, 2], device=device)\n            if not done:\n                global_time = baseEnv.global_time\n\n            # if the vec_pretext_normalize.py wrapper is used, send the predicted traj to env\n            if args.env_name == 'CrowdSimPredRealGST-v0' and config.env.use_wrapper:\n                out_pred = obs['spatial_edges'][:, :, 2:].to('cpu').numpy()\n                # send manager action to all processes\n                ack = eval_envs.talk2Env(out_pred)\n                assert all(ack)\n            # render\n            if visualize:\n                eval_envs.render()\n\n            # Obser reward and next obs\n            obs, rew, done, infos = eval_envs.step(action)\n\n            # record the info for calculating testing metrics\n            rewards.append(rew)\n\n            path_len = path_len + np.linalg.norm(obs['robot_node'][0, 0, :2].cpu().numpy() - last_pos)\n            last_pos = obs['robot_node'][0, 0, :2].cpu().numpy()\n\n\n            if isinstance(infos[0]['info'], Danger):\n      
          too_close = too_close + 1\n                min_dist.append(infos[0]['info'].min_dist)\n\n            episode_rew += rew[0]\n\n\n            eval_masks = torch.tensor(\n                [[0.0] if done_ else [1.0] for done_ in done],\n                dtype=torch.float32,\n                device=device)\n\n            for info in infos:\n                if 'episode' in info.keys():\n                    eval_episode_rewards.append(info['episode']['r'])\n\n        # an episode ends!\n        print('')\n        print('Reward={}'.format(episode_rew))\n        print('Episode', k, 'ends in', stepCounter)\n        all_path_len.append(path_len)\n        too_close_ratios.append(too_close/stepCounter*100)\n\n\n        if isinstance(infos[0]['info'], ReachGoal):\n            success += 1\n            success_times.append(global_time)\n            print('Success')\n        elif isinstance(infos[0]['info'], Collision):\n            collision += 1\n            collision_cases.append(k)\n            collision_times.append(global_time)\n            print('Collision')\n        elif isinstance(infos[0]['info'], Timeout):\n            timeout += 1\n            timeout_cases.append(k)\n            timeout_times.append(time_limit)\n            print('Time out')\n        elif infos[0]['info'] is None:\n            pass\n        else:\n            raise ValueError('Invalid end signal from environment')\n\n    # all episodes end\n    success_rate = success / test_size\n    collision_rate = collision / test_size\n    timeout_rate = timeout / test_size\n    assert success + collision + timeout == test_size\n    avg_nav_time = sum(success_times) / len(\n        success_times) if success_times else time_limit  # baseEnv.env.time_limit\n\n    # logging\n    logging.info(\n        'Testing success rate: {:.2f}, collision rate: {:.2f}, timeout rate: {:.2f}, '\n        'nav time: {:.2f}, path length: {:.2f}, average intrusion ratio: {:.2f}%, '\n        'average minimal distance during intrusions: {:.2f}'.\n            format(success_rate, collision_rate, timeout_rate, avg_nav_time, np.mean(all_path_len),\n                   np.mean(too_close_ratios), np.mean(min_dist)))\n\n    logging.info('Collision cases: ' + ' '.join([str(x) for x in collision_cases]))\n    logging.info('Timeout cases: ' + ' '.join([str(x) for x in timeout_cases]))\n    print(\" Evaluation using {} episodes: mean reward {:.5f}\\n\".format(\n        len(eval_episode_rewards), np.mean(eval_episode_rewards)))\n\n    eval_envs.close()"
  },
  {
    "path": "rl/networks/__init__.py",
    "content": ""
  },
  {
    "path": "rl/networks/distributions.py",
    "content": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom rl.networks.network_utils import AddBias, init\n\n\"\"\"\nModify standard PyTorch distributions so they are compatible with this code.\n\"\"\"\n\n#\n# Standardize distribution interfaces\n#\n\n# Categorical\nclass FixedCategorical(torch.distributions.Categorical):\n    def sample(self):\n        return super().sample().unsqueeze(-1)\n\n    def log_probs(self, actions):\n        return (\n            super()\n            .log_prob(actions.squeeze(-1))\n            .view(actions.size(0), -1)\n            .sum(-1)\n            .unsqueeze(-1)\n        )\n\n    def mode(self):\n        return self.probs.argmax(dim=-1, keepdim=True)\n\n\n# Normal\nclass FixedNormal(torch.distributions.Normal):\n    def log_probs(self, actions):\n        return super(FixedNormal, self).log_prob(actions).sum(-1, keepdim=True)\n\n    def entrop(self):\n        return super.entropy().sum(-1)\n\n    def mode(self):\n        return self.mean\n\n\n# Bernoulli\nclass FixedBernoulli(torch.distributions.Bernoulli):\n    def log_probs(self, actions):\n        return super.log_prob(actions).view(actions.size(0), -1).sum(-1).unsqueeze(-1)\n\n    def entropy(self):\n        return super().entropy().sum(-1)\n\n    def mode(self):\n        return torch.gt(self.probs, 0.5).float()\n\n\nclass Categorical(nn.Module):\n    def __init__(self, num_inputs, num_outputs):\n        super(Categorical, self).__init__()\n\n        init_ = lambda m: init(\n            m,\n            nn.init.orthogonal_,\n            lambda x: nn.init.constant_(x, 0),\n            gain=0.01)\n\n        self.linear = init_(nn.Linear(num_inputs, num_outputs))\n\n    def forward(self, x):\n        x = self.linear(x)\n        return FixedCategorical(logits=x)\n\n\nclass DiagGaussian(nn.Module):\n    def __init__(self, num_inputs, num_outputs):\n        super(DiagGaussian, self).__init__()\n\n        init_ = lambda m: init(m, nn.init.orthogonal_, lambda x: nn.init.\n                               constant_(x, 0))\n\n        self.fc_mean = init_(nn.Linear(num_inputs, num_outputs))\n        self.logstd = AddBias(torch.zeros(num_outputs))\n\n    def forward(self, x):\n        action_mean = self.fc_mean(x)\n\n        #  An ugly hack for my KFAC implementation.\n        zeros = torch.zeros(action_mean.size())\n        if x.is_cuda:\n            zeros = zeros.cuda()\n\n        action_logstd = self.logstd(zeros)\n        return FixedNormal(action_mean, action_logstd.exp())\n\n\nclass Bernoulli(nn.Module):\n    def __init__(self, num_inputs, num_outputs):\n        super(Bernoulli, self).__init__()\n\n        init_ = lambda m: init(m, nn.init.orthogonal_, lambda x: nn.init.\n                               constant_(x, 0))\n\n        self.linear = init_(nn.Linear(num_inputs, num_outputs))\n\n    def forward(self, x):\n        x = self.linear(x)\n        return FixedBernoulli(logits=x)\n"
  },
  {
    "path": "rl/networks/dummy_vec_env.py",
    "content": "import numpy as np\nfrom baselines.common.vec_env.vec_env import VecEnv\nfrom baselines.common.vec_env.util import dict_to_obs, obs_space_info, copy_obs_dict\n\nclass DummyVecEnv(VecEnv):\n    \"\"\"\n    VecEnv that does runs multiple environments sequentially, that is,\n    the step and reset commands are send to one environment at a time.\n    Useful when debugging and when num_env == 1 (in the latter case,\n    avoids communication overhead)\n    \"\"\"\n    def __init__(self, env_fns):\n        \"\"\"\n        Arguments:\n\n        env_fns: iterable of callables      functions that build environments\n        \"\"\"\n        self.envs = [fn() for fn in env_fns]\n        env = self.envs[0]\n        VecEnv.__init__(self, len(env_fns), env.observation_space, env.action_space)\n        obs_space = env.observation_space\n        self.keys, shapes, dtypes = obs_space_info(obs_space)\n\n        self.buf_obs = { k: np.zeros((self.num_envs,) + tuple(shapes[k]), dtype=dtypes[k]) for k in self.keys }\n        self.buf_dones = np.zeros((self.num_envs,), dtype=np.bool)\n        self.buf_rews  = np.zeros((self.num_envs,), dtype=np.float32)\n        self.buf_infos = [{} for _ in range(self.num_envs)]\n        self.actions = None\n        self.spec = self.envs[0].spec\n\n    def step_async(self, actions):\n        listify = True\n        try:\n            if len(actions) == self.num_envs:\n                listify = False\n        except TypeError:\n            pass\n\n        if not listify:\n            self.actions = actions\n        else:\n            assert self.num_envs == 1, \"actions {} is either not a list or has a wrong size - cannot match to {} environments\".format(actions, self.num_envs)\n            self.actions = [actions]\n\n    def step_wait(self):\n        for e in range(self.num_envs):\n            action = self.actions[e]\n            # if isinstance(self.envs[e].action_space, spaces.Discrete):\n            #    action = int(action)\n\n            obs, self.buf_rews[e], self.buf_dones[e], self.buf_infos[e] = self.envs[e].step(action)\n            if self.buf_dones[e]:\n                obs = self.envs[e].reset()\n            self._save_obs(e, obs)\n        return (self._obs_from_buf(), np.copy(self.buf_rews), np.copy(self.buf_dones),\n                self.buf_infos.copy())\n\n    def reset(self):\n        for e in range(self.num_envs):\n            obs = self.envs[e].reset()\n            self._save_obs(e, obs)\n        return self._obs_from_buf()\n\n    def talk2Env_async(self, data):\n        self.envs[0].env.talk2Env(data[0])\n        pass\n\n    def talk2Env_wait(self):\n        return [True]\n\n    def _save_obs(self, e, obs):\n        for k in self.keys:\n            if k is None:\n                self.buf_obs[k][e] = obs\n            else:\n                self.buf_obs[k][e] = obs[k]\n\n    def _obs_from_buf(self):\n        return dict_to_obs(copy_obs_dict(self.buf_obs))\n\n    def get_images(self):\n        return [env.render(mode='rgb_array') for env in self.envs]\n\n    def render(self, mode='human'):\n        if self.num_envs == 1:\n            return self.envs[0].render(mode=mode)\n        else:\n            return super().render(mode=mode)\n"
  },
  {
    "path": "rl/networks/envs.py",
    "content": "import os\n\nimport gym\nimport numpy as np\nimport torch\nfrom gym.spaces.box import Box\nfrom gym.spaces.dict import Dict\n\nfrom baselines import bench\nfrom baselines.common.atari_wrappers import make_atari, wrap_deepmind\nfrom baselines.common.vec_env import VecEnvWrapper\n# from baselines.common.vec_env.dummy_vec_env import DummyVecEnv\n# from baselines.common.vec_env.shmem_vec_env import ShmemVecEnv\nfrom rl.networks.dummy_vec_env import DummyVecEnv\nfrom rl.networks.shmem_vec_env import ShmemVecEnv\nfrom baselines.common.vec_env.vec_normalize import \\\n    VecNormalize as VecNormalize_\nfrom rl.vec_env.vec_pretext_normalize import VecPretextNormalize\n\ntry:\n    import dm_control2gym\nexcept ImportError:\n    pass\n\ntry:\n    import roboschool\nexcept ImportError:\n    pass\n\ntry:\n    import pybullet_envs\nexcept ImportError:\n    pass\n\n\ndef make_env(env_id, seed, rank, log_dir, allow_early_resets, config=None, envNum=1, ax=None, test_case=-1):\n    def _thunk():\n        if env_id.startswith(\"dm\"):\n            _, domain, task = env_id.split('.')\n            env = dm_control2gym.make(domain_name=domain, task_name=task)\n        else:\n            env = gym.make(env_id)\n\n        is_atari = hasattr(gym.envs, 'atari') and isinstance(\n            env.unwrapped, gym.envs.atari.atari_env.AtariEnv)\n        if is_atari:\n            env = make_atari(env_id)\n\n        env.configure(config)\n\n        envSeed = seed + rank if seed is not None else None\n        # environment.render_axis = ax\n        env.thisSeed = envSeed\n        env.nenv = envNum\n        if envNum > 1:\n            env.phase = 'train'\n        else:\n            env.phase = 'test'\n\n        if ax:\n            env.render_axis = ax\n            if test_case >= 0:\n                env.test_case = test_case\n        env.seed(seed + rank)\n\n        if str(env.__class__.__name__).find('TimeLimit') >= 0:\n            env = TimeLimitMask(env)\n\n        # if log_dir is not None:\n        env = bench.Monitor(\n            env,\n            None,\n            allow_early_resets=allow_early_resets)\n        print(env)\n\n        if isinstance(env.observation_space, Box):\n            if is_atari:\n                if len(env.observation_space.shape) == 3:\n                    env = wrap_deepmind(env)\n            elif len(env.observation_space.shape) == 3:\n                raise NotImplementedError(\n                    \"CNN models work only for atari,\\n\"\n                    \"please use a custom wrapper for a custom pixel input env.\\n\"\n                    \"See wrap_deepmind for an example.\")\n\n            # If the input has shape (W,H,3), wrap for PyTorch convolutions\n\n            obs_shape = env.observation_space.shape\n            if len(obs_shape) == 3 and obs_shape[2] in [1, 3]:\n                env = TransposeImage(env, op=[2, 0, 1])\n\n        return env\n\n    return _thunk\n\n\ndef make_vec_envs(env_name,\n                  seed,\n                  num_processes,\n                  gamma,\n                  log_dir,\n                  device,\n                  allow_early_resets,\n                  num_frame_stack=None,\n                  config=None,\n                  ax=None, test_case=-1, wrap_pytorch=True, pretext_wrapper=False):\n    envs = [\n        make_env(env_name, seed, i, log_dir, allow_early_resets, config=config,\n                 envNum=num_processes, ax=ax, test_case=test_case)\n        for i in range(num_processes)\n    ]\n    test = False if len(envs) > 1 
else True\n\n    if len(envs) > 1:\n        envs = ShmemVecEnv(envs, context='fork')\n    else:\n        envs = DummyVecEnv(envs)\n    # for collect data in supervised learning, we don't need to wrap pytorch\n    if wrap_pytorch:\n        if isinstance(envs.observation_space, Box):\n            if len(envs.observation_space.shape) == 1:\n                if gamma is None:\n                    envs = VecNormalize(envs, ret=False, ob=False)\n                else:\n                    envs = VecNormalize(envs, gamma=gamma, ob=False, ret=False)\n\n        envs = VecPyTorch(envs, device)\n    if pretext_wrapper:\n        if gamma is None:\n            envs = VecPretextNormalize(envs, ret=False, ob=False, config=config, test=test)\n        else:\n            envs = VecPretextNormalize(envs, gamma=gamma, ob=False, ret=False, config=config, test=test)\n\n    if num_frame_stack is not None:\n        envs = VecPyTorchFrameStack(envs, num_frame_stack, device)\n    elif isinstance(envs.observation_space, Box):\n        if len(envs.observation_space.shape) == 3:\n            envs = VecPyTorchFrameStack(envs, 4, device)\n\n    return envs\n\n\n# Checks whether done was caused my timit limits or not\nclass TimeLimitMask(gym.Wrapper):\n    def step(self, action):\n        obs, rew, done, info = self.env.step(action)\n        if done and self.env._max_episode_steps == self.env._elapsed_steps:\n            info['bad_transition'] = True\n\n        return obs, rew, done, info\n\n    def reset(self, **kwargs):\n        return self.env.reset(**kwargs)\n\n\n# Can be used to test recurrent policies for Reacher-v2\nclass MaskGoal(gym.ObservationWrapper):\n    def observation(self, observation):\n        if self.env._elapsed_steps > 0:\n            observation[-2:] = 0\n        return observation\n\n\nclass TransposeObs(gym.ObservationWrapper):\n    def __init__(self, env=None):\n        \"\"\"\n        Transpose observation space (base class)\n        \"\"\"\n        super(TransposeObs, self).__init__(env)\n\n\nclass TransposeImage(TransposeObs):\n    def __init__(self, env=None, op=[2, 0, 1]):\n        \"\"\"\n        Transpose observation space for images\n        \"\"\"\n        super(TransposeImage, self).__init__(env)\n        assert len(op) == 3, \"Error: Operation, \" + str(op) + \", must be dim3\"\n        self.op = op\n        obs_shape = self.observation_space.shape\n        self.observation_space = Box(\n            self.observation_space.low[0, 0, 0],\n            self.observation_space.high[0, 0, 0], [\n                obs_shape[self.op[0]], obs_shape[self.op[1]],\n                obs_shape[self.op[2]]\n            ],\n            dtype=self.observation_space.dtype)\n\n    def observation(self, ob):\n        return ob.transpose(self.op[0], self.op[1], self.op[2])\n\n\nclass VecPyTorch(VecEnvWrapper):\n    def __init__(self, venv, device):\n        \"\"\"Return only every `skip`-th frame\"\"\"\n        super(VecPyTorch, self).__init__(venv)\n        self.device = device\n        # TODO: Fix data types\n\n    def reset(self):\n        obs = self.venv.reset()\n        if isinstance(obs, dict):\n            for key in obs:\n                obs[key]=torch.from_numpy(obs[key]).to(self.device)\n        else:\n            obs = torch.from_numpy(obs).float().to(self.device)\n        return obs\n\n    def step_async(self, actions):\n        if isinstance(actions, torch.LongTensor):\n            # Squeeze the dimension for discrete actions\n            actions = actions.squeeze(1)\n        actions = 
actions.cpu().numpy()\n        self.venv.step_async(actions)\n\n    def step_wait(self):\n        obs, reward, done, info = self.venv.step_wait()\n        if isinstance(obs, dict):\n            for key in obs:\n                obs[key] = torch.from_numpy(obs[key]).to(self.device)\n        else:\n            obs = torch.from_numpy(obs).float().to(self.device)\n        reward = torch.from_numpy(reward).unsqueeze(dim=1).float()\n        return obs, reward, done, info\n\n    def render_traj(self, path, episode_num):\n        if self.venv.num_envs == 1:\n            return self.venv.envs[0].env.render_traj(path, episode_num)\n        else:\n            for i, curr_env in enumerate(self.venv.envs):\n                curr_env.env.render_traj(path, str(episode_num) + '.' + str(i))\n\n\nclass VecNormalize(VecNormalize_):\n    def __init__(self, *args, **kwargs):\n        super(VecNormalize, self).__init__(*args, **kwargs)\n        self.training = True\n\n    def _obfilt(self, obs, update=True):\n        if self.ob_rms:\n            if self.training and update:\n                self.ob_rms.update(obs)\n            obs = np.clip((obs - self.ob_rms.mean) /\n                          np.sqrt(self.ob_rms.var + self.epsilon),\n                          -self.clipob, self.clipob)\n            return obs\n        else:\n            return obs\n\n    def train(self):\n        self.training = True\n\n    def eval(self):\n        self.training = False\n\n\n# Derived from\n# https://github.com/openai/baselines/blob/master/baselines/common/vec_env/vec_frame_stack.py\nclass VecPyTorchFrameStack(VecEnvWrapper):\n    def __init__(self, venv, nstack, device=None):\n        self.venv = venv\n        self.nstack = nstack\n\n        wos = venv.observation_space  # wrapped ob space\n        self.shape_dim0 = wos.shape[0]\n\n        low = np.repeat(wos.low, self.nstack, axis=0)\n        high = np.repeat(wos.high, self.nstack, axis=0)\n\n        if device is None:\n            device = torch.device('cpu')\n        self.stacked_obs = torch.zeros((venv.num_envs, ) +\n                                       low.shape).to(device)\n\n        observation_space = gym.spaces.Box(\n            low=low, high=high, dtype=venv.observation_space.dtype)\n        VecEnvWrapper.__init__(self, venv, observation_space=observation_space)\n\n    def step_wait(self):\n        obs, rews, news, infos = self.venv.step_wait()\n        self.stacked_obs[:, :-self.shape_dim0] = \\\n            self.stacked_obs[:, self.shape_dim0:].clone()\n        for (i, new) in enumerate(news):\n            if new:\n                self.stacked_obs[i] = 0\n        self.stacked_obs[:, -self.shape_dim0:] = obs\n        return self.stacked_obs, rews, news, infos\n\n    def reset(self):\n        obs = self.venv.reset()\n        if torch.backends.cudnn.deterministic:\n            self.stacked_obs = torch.zeros(self.stacked_obs.shape)\n        else:\n            self.stacked_obs.zero_()\n        self.stacked_obs[:, -self.shape_dim0:] = obs\n        return self.stacked_obs\n\n    def close(self):\n        self.venv.close()\n"
  },
  {
    "path": "rl/networks/model.py",
    "content": "import torch\nimport torch.nn as nn\n\n\nfrom rl.networks.distributions import Bernoulli, Categorical, DiagGaussian\nfrom .srnn_model import SRNN\nfrom .selfAttn_srnn_temp_node import selfAttn_merge_SRNN\n\nclass Flatten(nn.Module):\n    def forward(self, x):\n        return x.view(x.size(0), -1)\n\n\nclass Policy(nn.Module):\n    \"\"\" Class for a robot policy network \"\"\"\n    def __init__(self, obs_shape, action_space, base=None, base_kwargs=None):\n        super(Policy, self).__init__()\n        if base_kwargs is None:\n            base_kwargs = {}\n\n        if base == 'srnn':\n            base=SRNN\n        elif base == 'selfAttn_merge_srnn':\n            base = selfAttn_merge_SRNN\n        else:\n            raise NotImplementedError\n\n        self.base = base(obs_shape, base_kwargs)\n        self.srnn = True\n\n        if action_space.__class__.__name__ == \"Discrete\":\n            num_outputs = action_space.n\n            self.dist = Categorical(self.base.output_size, num_outputs)\n        elif action_space.__class__.__name__ == \"Box\":\n            num_outputs = action_space.shape[0]\n\n            self.dist = DiagGaussian(self.base.output_size, num_outputs)\n        elif action_space.__class__.__name__ == \"MultiBinary\":\n            num_outputs = action_space.shape[0]\n            self.dist = Bernoulli(self.base.output_size, num_outputs)\n        else:\n            raise NotImplementedError\n\n    @property\n    def is_recurrent(self):\n        return self.base.is_recurrent\n\n    @property\n    def recurrent_hidden_state_size(self):\n        \"\"\"Size of rnn_hx.\"\"\"\n        return self.base.recurrent_hidden_state_size\n\n    def forward(self, inputs, rnn_hxs, masks):\n        raise NotImplementedError\n\n    def act(self, inputs, rnn_hxs, masks, deterministic=False):\n        if not hasattr(self, 'srnn'):\n            self.srnn = False\n        if self.srnn:\n            value, actor_features, rnn_hxs = self.base(inputs, rnn_hxs, masks, infer=True)\n\n        else:\n            value, actor_features, rnn_hxs = self.base(inputs, rnn_hxs, masks)\n        dist = self.dist(actor_features)\n\n        if deterministic:\n            action = dist.mode()\n        else:\n            action = dist.sample()\n\n        action_log_probs = dist.log_probs(action)\n        dist_entropy = dist.entropy().mean()\n\n        return value, action, action_log_probs, rnn_hxs\n\n    def get_value(self, inputs, rnn_hxs, masks):\n\n        value, _, _ = self.base(inputs, rnn_hxs, masks, infer=True)\n\n        return value\n\n    def evaluate_actions(self, inputs, rnn_hxs, masks, action):\n        value, actor_features, rnn_hxs = self.base(inputs, rnn_hxs, masks)\n\n        dist = self.dist(actor_features)\n\n        action_log_probs = dist.log_probs(action)\n        dist_entropy = dist.entropy().mean()\n\n        return value, action_log_probs, dist_entropy, rnn_hxs\n\n\n\n"
  },
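`Policy.act` expects a dict observation, a dict of recurrent hidden states, and a done mask; the hidden-state keys mirror `RolloutStorage` in `rl/networks/storage.py`. The sketch below only illustrates the expected shapes; the sizes are placeholders, not the repo's configured defaults.

```
import torch

# Hypothetical sizes, for illustration only.
nenv, human_num = 12, 20
node_rnn_size, edge_rnn_size = 128, 256

rnn_hxs = {
    'human_node_rnn': torch.zeros(nenv, 1, node_rnn_size),
    'human_human_edge_rnn': torch.zeros(nenv, human_num + 1, edge_rnn_size),
}
masks = torch.ones(nenv, 1)  # 0 for envs whose episode just ended

# During rollouts (actor_critic is a Policy instance, obs comes from the vectorized env):
# with torch.no_grad():
#     value, action, action_log_probs, rnn_hxs = actor_critic.act(obs, rnn_hxs, masks)
print({k: tuple(v.shape) for k, v in rnn_hxs.items()})
```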
  {
    "path": "rl/networks/network_utils.py",
    "content": "import glob\nimport os\n\nimport torch.nn as nn\n\nfrom rl.networks.envs import VecNormalize\n\n\n# Get a render function\ndef get_render_func(venv):\n    if hasattr(venv, 'envs'):\n        return venv.envs[0].render\n    elif hasattr(venv, 'venv'):\n        return get_render_func(venv.venv)\n    elif hasattr(venv, 'env'):\n        return get_render_func(venv.env)\n\n    return None\n\n\ndef get_vec_normalize(venv):\n    if isinstance(venv, VecNormalize):\n        return venv\n    elif hasattr(venv, 'venv'):\n        return get_vec_normalize(venv.venv)\n\n    return None\n\n\n# Necessary for my KFAC implementation.\nclass AddBias(nn.Module):\n    def __init__(self, bias):\n        super(AddBias, self).__init__()\n        self._bias = nn.Parameter(bias.unsqueeze(1))\n\n    def forward(self, x):\n        if x.dim() == 2:\n            bias = self._bias.t().view(1, -1)\n        else:\n            bias = self._bias.t().view(1, -1, 1, 1)\n\n        return x + bias\n\n\ndef update_linear_schedule(optimizer, epoch, total_num_epochs, initial_lr):\n    \"\"\"Decreases the learning rate linearly\"\"\"\n    lr = initial_lr - (initial_lr * (epoch / float(total_num_epochs)))\n    for param_group in optimizer.param_groups:\n        param_group['lr'] = lr\n\n\ndef init(module, weight_init, bias_init, gain=1):\n    weight_init(module.weight.data, gain=gain)\n    bias_init(module.bias.data)\n    return module\n\n\ndef cleanup_log_dir(log_dir):\n    try:\n        os.makedirs(log_dir)\n    except OSError:\n        files = glob.glob(os.path.join(log_dir, '*.monitor.csv'))\n        for f in files:\n            os.remove(f)\n"
  },
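A quick illustration (not from the repo) of the decay implemented by `update_linear_schedule`: the learning rate falls linearly from `initial_lr` toward zero over `total_num_epochs` updates.

```
import torch
import torch.nn as nn
from rl.networks.network_utils import update_linear_schedule

model = nn.Linear(4, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
total = 100
for epoch in (0, 50, 99):
    update_linear_schedule(opt, epoch, total, initial_lr=1e-3)
    print(epoch, opt.param_groups[0]['lr'])  # 1e-3, 5e-4, ~1e-5
```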
  {
    "path": "rl/networks/selfAttn_srnn_temp_node.py",
    "content": "import torch.nn.functional as F\n\nfrom .srnn_model import *\n\nclass SpatialEdgeSelfAttn(nn.Module):\n    \"\"\"\n    Class for the human-human attention,\n    uses a multi-head self attention proposed by https://arxiv.org/abs/1706.03762\n    \"\"\"\n    def __init__(self, args):\n        super(SpatialEdgeSelfAttn, self).__init__()\n        self.args = args\n\n        # Store required sizes\n        # todo: hard-coded for now\n        # with human displacement: + 2\n        # pred 4 steps + disp: 12\n        # pred 4 steps + no disp: 10\n        # pred 5 steps + no disp: 12\n        # pred 5 steps + no disp + probR: 17\n        # Gaussian pred 5 steps + no disp: 27\n        # pred 8 steps + no disp: 18\n        if args.env_name in ['CrowdSimPred-v0', 'CrowdSimPredRealGST-v0']:\n            self.input_size = 12\n        elif args.env_name == 'CrowdSimVarNum-v0':\n            self.input_size = 2 # 4\n        else:\n            raise NotImplementedError\n        self.num_attn_heads=8\n        self.attn_size=512\n\n\n        # Linear layer to embed input\n        self.embedding_layer = nn.Sequential(nn.Linear(self.input_size, 128), nn.ReLU(),\n                                             nn.Linear(128, self.attn_size), nn.ReLU()\n                                             )\n\n        self.q_linear = nn.Linear(self.attn_size, self.attn_size)\n        self.v_linear = nn.Linear(self.attn_size, self.attn_size)\n        self.k_linear = nn.Linear(self.attn_size, self.attn_size)\n\n        # multi-head self attention\n        self.multihead_attn=torch.nn.MultiheadAttention(self.attn_size, self.num_attn_heads)\n\n\n    # Given a list of sequence lengths, create a mask to indicate which indices are padded\n    # e.x. Input: [3, 1, 4], max_human_num = 5\n    # Output: [[1, 1, 1, 0, 0], [1, 0, 0, 0, 0], [1, 1, 1, 1, 0]]\n    def create_attn_mask(self, each_seq_len, seq_len, nenv, max_human_num):\n        # mask with value of False means padding and should be ignored by attention\n        # why +1: use a sentinel in the end to handle the case when each_seq_len = 18\n        if self.args.no_cuda:\n            mask = torch.zeros(seq_len * nenv, max_human_num + 1).cpu()\n        else:\n            mask = torch.zeros(seq_len*nenv, max_human_num+1).cuda()\n        mask[torch.arange(seq_len*nenv), each_seq_len.long()] = 1.\n        mask = torch.logical_not(mask.cumsum(dim=1))\n        # remove the sentinel\n        mask = mask[:, :-1].unsqueeze(-2) # seq_len*nenv, 1, max_human_num\n        return mask\n\n\n    def forward(self, inp, each_seq_len):\n        '''\n        Forward pass for the model\n        params:\n        inp : input edge features\n        each_seq_len:\n        if self.args.sort_humans is True, the true length of the sequence. 
Should be the number of detected humans\n        else, it is the mask itself\n        '''\n        # inp is padded sequence [seq_len, nenv, max_human_num, 2]\n        seq_len, nenv, max_human_num, _ = inp.size()\n        if self.args.sort_humans:\n            attn_mask = self.create_attn_mask(each_seq_len, seq_len, nenv, max_human_num)  # [seq_len*nenv, 1, max_human_num]\n            attn_mask = attn_mask.squeeze(1)  # if we use pytorch builtin function\n        else:\n            # combine the first two dimensions\n            attn_mask = each_seq_len.reshape(seq_len*nenv, max_human_num)\n\n\n        input_emb=self.embedding_layer(inp).view(seq_len*nenv, max_human_num, -1)\n        input_emb=torch.transpose(input_emb, dim0=0, dim1=1) # if we use pytorch builtin function, v1.7.0 has no batch first option\n        q=self.q_linear(input_emb)\n        k=self.k_linear(input_emb)\n        v=self.v_linear(input_emb)\n\n        #z=self.multihead_attn(q, k, v, mask=attn_mask)\n        z,_=self.multihead_attn(q, k, v, key_padding_mask=torch.logical_not(attn_mask)) # if we use pytorch builtin function\n        z=torch.transpose(z, dim0=0, dim1=1) # if we use pytorch builtin function\n        return z\n\n\n\nclass EdgeAttention_M(nn.Module):\n    '''\n    Class for the robot-human attention module\n    '''\n    def __init__(self, args):\n        '''\n        Initializer function\n        params:\n        args : Training arguments\n        infer : Training or test time (True at test time)\n        '''\n        super(EdgeAttention_M, self).__init__()\n\n        self.args = args\n\n        # Store required sizes\n        self.human_human_edge_rnn_size = args.human_human_edge_rnn_size\n        self.human_node_rnn_size = args.human_node_rnn_size\n        self.attention_size = args.attention_size\n\n\n\n        # Linear layer to embed temporal edgeRNN hidden state\n        self.temporal_edge_layer=nn.ModuleList()\n        self.spatial_edge_layer=nn.ModuleList()\n\n        self.temporal_edge_layer.append(nn.Linear(self.human_human_edge_rnn_size, self.attention_size))\n\n        # Linear layer to embed spatial edgeRNN hidden states\n        self.spatial_edge_layer.append(nn.Linear(self.human_human_edge_rnn_size, self.attention_size))\n\n\n\n        # number of agents who have spatial edges (complete graph: all 6 agents; incomplete graph: only the robot)\n        self.agent_num = 1\n        self.num_attention_head = 1\n\n    def create_attn_mask(self, each_seq_len, seq_len, nenv, max_human_num):\n        # mask with value of False means padding and should be ignored by attention\n        # why +1: use a sentinel in the end to handle the case when each_seq_len = 18\n        if self.args.no_cuda:\n            mask = torch.zeros(seq_len * nenv, max_human_num + 1).cpu()\n        else:\n            mask = torch.zeros(seq_len * nenv, max_human_num + 1).cuda()\n        mask[torch.arange(seq_len * nenv), each_seq_len.long()] = 1.\n        mask = torch.logical_not(mask.cumsum(dim=1))\n        # remove the sentinel\n        mask = mask[:, :-1].unsqueeze(-2)  # seq_len*nenv, 1, max_human_num\n        return mask\n\n    def att_func(self, temporal_embed, spatial_embed, h_spatials, attn_mask=None):\n        seq_len, nenv, num_edges, h_size = h_spatials.size()  # [1, 12, 30, 256] in testing,  [12, 30, 256] in training\n        attn = temporal_embed * spatial_embed\n        attn = torch.sum(attn, dim=3)\n\n        # Variable length\n        temperature = num_edges / np.sqrt(self.attention_size)\n        attn = 
torch.mul(attn, temperature)\n\n        # if we don't want to mask invalid humans, attn_mask is None and no mask will be applied\n        # else apply attn masks\n        if attn_mask is not None:\n            attn = attn.masked_fill(attn_mask == 0, -1e9)\n\n        # Softmax\n        attn = attn.view(seq_len, nenv, self.agent_num, self.human_num)\n        attn = torch.nn.functional.softmax(attn, dim=-1)\n        # print(attn[0, 0, 0].cpu().numpy())\n\n        # Compute weighted value\n        # weighted_value = torch.mv(torch.t(h_spatials), attn)\n\n        # reshape h_spatials and attn\n        # shape[0] = seq_len, shape[1] = num of spatial edges (6*5 = 30), shape[2] = 256\n        h_spatials = h_spatials.view(seq_len, nenv, self.agent_num, self.human_num, h_size)\n        h_spatials = h_spatials.view(seq_len * nenv * self.agent_num, self.human_num, h_size).permute(0, 2,\n                                                                                         1)  # [seq_len*nenv*6, 5, 256] -> [seq_len*nenv*6, 256, 5]\n\n        attn = attn.view(seq_len * nenv * self.agent_num, self.human_num).unsqueeze(-1)  # [seq_len*nenv*6, 5, 1]\n        weighted_value = torch.bmm(h_spatials, attn)  # [seq_len*nenv*6, 256, 1]\n\n        # reshape back\n        weighted_value = weighted_value.squeeze(-1).view(seq_len, nenv, self.agent_num, h_size)  # [seq_len, 12, 6 or 1, 256]\n        return weighted_value, attn\n\n\n\n    # h_temporal: [seq_len, nenv, 1, 256]\n    # h_spatials: [seq_len, nenv, 5, 256]\n    def forward(self, h_temporal, h_spatials, each_seq_len):\n        '''\n        Forward pass for the model\n        params:\n        h_temporal : Hidden state of the temporal edgeRNN\n        h_spatials : Hidden states of all spatial edgeRNNs connected to the node.\n        each_seq_len:\n            if self.args.sort_humans is True, the true length of the sequence. 
Should be the number of detected humans\n            else, it is the mask itself\n        '''\n        seq_len, nenv, max_human_num, _ = h_spatials.size()\n        # find the number of humans by the size of spatial edgeRNN hidden state\n        self.human_num = max_human_num // self.agent_num\n\n        weighted_value_list, attn_list=[],[]\n        for i in range(self.num_attention_head):\n\n            # Embed the temporal edgeRNN hidden state\n            temporal_embed = self.temporal_edge_layer[i](h_temporal)\n            # temporal_embed = temporal_embed.squeeze(0)\n\n            # Embed the spatial edgeRNN hidden states\n            spatial_embed = self.spatial_edge_layer[i](h_spatials)\n\n            # Dot based attention\n            temporal_embed = temporal_embed.repeat_interleave(self.human_num, dim=2)\n\n            if self.args.sort_humans:\n                attn_mask = self.create_attn_mask(each_seq_len, seq_len, nenv, max_human_num)  # [seq_len*nenv, 1, max_human_num]\n                attn_mask = attn_mask.squeeze(-2).view(seq_len, nenv, max_human_num)\n            else:\n                attn_mask = each_seq_len\n            weighted_value,attn=self.att_func(temporal_embed, spatial_embed, h_spatials, attn_mask=attn_mask)\n            weighted_value_list.append(weighted_value)\n            attn_list.append(attn)\n\n        if self.num_attention_head > 1:\n            return self.final_attn_linear(torch.cat(weighted_value_list, dim=-1)), attn_list\n        else:\n            return weighted_value_list[0], attn_list[0]\n\nclass EndRNN(RNNBase):\n    '''\n    Class for the GRU\n    '''\n    def __init__(self, args):\n        '''\n        Initializer function\n        params:\n        args : Training arguments\n        infer : Training or test time (True at test time)\n        '''\n        super(EndRNN, self).__init__(args, edge=False)\n\n        self.args = args\n\n        # Store required sizes\n        self.rnn_size = args.human_node_rnn_size\n        self.output_size = args.human_node_output_size\n        self.embedding_size = args.human_node_embedding_size\n        self.input_size = args.human_node_input_size\n        self.edge_rnn_size = args.human_human_edge_rnn_size\n\n        # Linear layer to embed input\n        self.encoder_linear = nn.Linear(256, self.embedding_size)\n\n        # ReLU and Dropout layers\n        self.relu = nn.ReLU()\n\n        # Linear layer to embed attention module output\n        self.edge_attention_embed = nn.Linear(self.edge_rnn_size, self.embedding_size)\n\n\n        # Output linear layer\n        self.output_linear = nn.Linear(self.rnn_size, self.output_size)\n\n\n\n    def forward(self, robot_s, h_spatial_other, h, masks):\n        '''\n        Forward pass for the model\n        params:\n        pos : input position\n        h_temporal : hidden state of the temporal edgeRNN corresponding to this node\n        h_spatial_other : output of the attention module\n        h : hidden state of the current nodeRNN\n        c : cell state of the current nodeRNN\n        '''\n        # Encode the input position\n        encoded_input = self.encoder_linear(robot_s)\n        encoded_input = self.relu(encoded_input)\n\n        h_edges_embedded = self.relu(self.edge_attention_embed(h_spatial_other))\n\n        concat_encoded = torch.cat((encoded_input, h_edges_embedded), -1)\n\n        x, h_new = self._forward_gru(concat_encoded, h, masks)\n\n        outputs = self.output_linear(x)\n\n\n        return outputs, h_new\n\nclass 
selfAttn_merge_SRNN(nn.Module):\n    \"\"\"\n    Class for the proposed network\n    \"\"\"\n    def __init__(self, obs_space_dict, args, infer=False):\n        \"\"\"\n        Initializer function\n        params:\n        args : Training arguments\n        infer : Training or test time (True at test time)\n        \"\"\"\n        super(selfAttn_merge_SRNN, self).__init__()\n        self.infer = infer\n        self.is_recurrent = True\n        self.args=args\n\n        self.human_num = obs_space_dict['spatial_edges'].shape[0]\n\n        self.seq_length = args.seq_length\n        self.nenv = args.num_processes\n        self.nminibatch = args.num_mini_batch\n\n        # Store required sizes\n        self.human_node_rnn_size = args.human_node_rnn_size\n        self.human_human_edge_rnn_size = args.human_human_edge_rnn_size\n        self.output_size = args.human_node_output_size\n\n        # Initialize the Node and Edge RNNs\n        self.humanNodeRNN = EndRNN(args)\n\n        # Initialize attention module\n        self.attn = EdgeAttention_M(args)\n\n\n        init_ = lambda m: init(m, nn.init.orthogonal_, lambda x: nn.init.\n                               constant_(x, 0), np.sqrt(2))\n\n        num_inputs = hidden_size = self.output_size\n\n        self.actor = nn.Sequential(\n            init_(nn.Linear(num_inputs, hidden_size)), nn.Tanh(),\n            init_(nn.Linear(hidden_size, hidden_size)), nn.Tanh())\n\n        self.critic = nn.Sequential(\n            init_(nn.Linear(num_inputs, hidden_size)), nn.Tanh(),\n            init_(nn.Linear(hidden_size, hidden_size)), nn.Tanh())\n\n\n        self.critic_linear = init_(nn.Linear(hidden_size, 1))\n        robot_size = 9\n        self.robot_linear = nn.Sequential(init_(nn.Linear(robot_size, 256)), nn.ReLU()) # todo: check dim\n        self.human_node_final_linear=init_(nn.Linear(self.output_size,2))\n\n        if self.args.use_self_attn:\n            self.spatial_attn = SpatialEdgeSelfAttn(args)\n            self.spatial_linear = nn.Sequential(init_(nn.Linear(512, 256)), nn.ReLU())\n        else:\n            self.spatial_linear = nn.Sequential(init_(nn.Linear(obs_space_dict['spatial_edges'].shape[1], 128)), nn.ReLU(),\n                                                init_(nn.Linear(128, 256)), nn.ReLU())\n\n\n        self.temporal_edges = [0]\n        self.spatial_edges = np.arange(1, self.human_num+1)\n\n        dummy_human_mask = [0] * self.human_num\n        dummy_human_mask[0] = 1\n        if self.args.no_cuda:\n            self.dummy_human_mask = Variable(torch.Tensor([dummy_human_mask]).cpu())\n        else:\n            self.dummy_human_mask = Variable(torch.Tensor([dummy_human_mask]).cuda())\n\n\n\n    def forward(self, inputs, rnn_hxs, masks, infer=False):\n        if infer:\n            # Test/rollout time\n            seq_length = 1\n            nenv = self.nenv\n\n        else:\n            # Training time\n            seq_length = self.seq_length\n            nenv = self.nenv // self.nminibatch\n\n        robot_node = reshapeT(inputs['robot_node'], seq_length, nenv)\n        temporal_edges = reshapeT(inputs['temporal_edges'], seq_length, nenv)\n        spatial_edges = reshapeT(inputs['spatial_edges'], seq_length, nenv)\n\n        # to prevent errors in old models that does not have sort_humans argument\n        if not hasattr(self.args, 'sort_humans'):\n            self.args.sort_humans = True\n        if self.args.sort_humans:\n            detected_human_num = inputs['detected_human_num'].squeeze(-1).cpu().int()\n        
else:\n            human_masks = reshapeT(inputs['visible_masks'], seq_length, nenv).float() # [seq_len, nenv, max_human_num]\n            # if no human is detected (human_masks are all False, set the first human to True)\n            human_masks[human_masks.sum(dim=-1)==0] = self.dummy_human_mask\n\n\n        hidden_states_node_RNNs = reshapeT(rnn_hxs['human_node_rnn'], 1, nenv)\n        masks = reshapeT(masks, seq_length, nenv)\n\n\n        if self.args.no_cuda:\n            all_hidden_states_edge_RNNs = Variable(\n                torch.zeros(1, nenv, 1+self.human_num, rnn_hxs['human_human_edge_rnn'].size()[-1]).cpu())\n        else:\n            all_hidden_states_edge_RNNs = Variable(\n                torch.zeros(1, nenv, 1+self.human_num, rnn_hxs['human_human_edge_rnn'].size()[-1]).cuda())\n\n        robot_states = torch.cat((temporal_edges, robot_node), dim=-1)\n        robot_states = self.robot_linear(robot_states)\n\n\n        # attention modules\n        if self.args.sort_humans:\n            # human-human attention\n            if self.args.use_self_attn:\n                spatial_attn_out=self.spatial_attn(spatial_edges, detected_human_num).view(seq_length, nenv, self.human_num, -1)\n            else:\n                spatial_attn_out = spatial_edges\n            output_spatial = self.spatial_linear(spatial_attn_out)\n\n            # robot-human attention\n            hidden_attn_weighted, _ = self.attn(robot_states, output_spatial, detected_human_num)\n        else:\n            # human-human attention\n            if self.args.use_self_attn:\n                spatial_attn_out = self.spatial_attn(spatial_edges, human_masks).view(seq_length, nenv, self.human_num, -1)\n            else:\n                spatial_attn_out = spatial_edges\n            output_spatial = self.spatial_linear(spatial_attn_out)\n\n            # robot-human attention\n            hidden_attn_weighted, _ = self.attn(robot_states, output_spatial, human_masks)\n\n\n        # Do a forward pass through GRU\n        outputs, h_nodes \\\n            = self.humanNodeRNN(robot_states, hidden_attn_weighted, hidden_states_node_RNNs, masks)\n\n\n        # Update the hidden and cell states\n        all_hidden_states_node_RNNs = h_nodes\n        outputs_return = outputs\n\n        rnn_hxs['human_node_rnn'] = all_hidden_states_node_RNNs\n        rnn_hxs['human_human_edge_rnn'] = all_hidden_states_edge_RNNs\n\n\n        # x is the output and will be sent to actor and critic\n        x = outputs_return[:, :, 0, :]\n\n        hidden_critic = self.critic(x)\n        hidden_actor = self.actor(x)\n\n        for key in rnn_hxs:\n            rnn_hxs[key] = rnn_hxs[key].squeeze(0)\n\n        if infer:\n            return self.critic_linear(hidden_critic).squeeze(0), hidden_actor.squeeze(0), rnn_hxs\n        else:\n            return self.critic_linear(hidden_critic).view(-1, 1), hidden_actor.view(-1, self.output_size), rnn_hxs\n\n\ndef reshapeT(T, seq_length, nenv):\n    shape = T.size()[1:]\n    return T.unsqueeze(0).reshape((seq_length, nenv, *shape))"
  },
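The sentinel/cumsum trick in `create_attn_mask` is easiest to see on the example given in the comment (`[3, 1, 4]` detected humans with `max_human_num = 5`). A standalone CPU reproduction of that logic:

```
import torch

def create_attn_mask(each_seq_len, max_human_num):
    batch = each_seq_len.shape[0]
    mask = torch.zeros(batch, max_human_num + 1)
    mask[torch.arange(batch), each_seq_len.long()] = 1.   # sentinel at the sequence end
    mask = torch.logical_not(mask.cumsum(dim=1))          # True before the sentinel
    return mask[:, :-1]                                    # drop the sentinel column

print(create_attn_mask(torch.tensor([3, 1, 4]), 5).int())
# tensor([[1, 1, 1, 0, 0],
#         [1, 0, 0, 0, 0],
#         [1, 1, 1, 1, 0]], dtype=torch.int32)
```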
  {
    "path": "rl/networks/shmem_vec_env.py",
    "content": "\"\"\"\nAn interface for asynchronous vectorized environments.\n\"\"\"\n\nimport multiprocessing as mp\nimport numpy as np\nfrom baselines.common.vec_env.vec_env import VecEnv, CloudpickleWrapper, clear_mpi_env_vars\nimport ctypes\nfrom baselines import logger\n\nfrom baselines.common.vec_env.util import dict_to_obs, obs_space_info, obs_to_dict\n\n_NP_TO_CT = {np.float32: ctypes.c_float,\n             np.int32: ctypes.c_int32,\n             np.int8: ctypes.c_int8,\n             np.uint8: ctypes.c_char,\n             np.bool: ctypes.c_bool,\n             np.bool_: ctypes.c_bool}\n\n\nclass ShmemVecEnv(VecEnv):\n    \"\"\"\n    Optimized version of SubprocVecEnv that uses shared variables to communicate observations.\n    \"\"\"\n\n    def __init__(self, env_fns, spaces=None, context='spawn'):\n        \"\"\"\n        If you don't specify observation_space, we'll have to create a dummy\n        environment to get it.\n        \"\"\"\n        ctx = mp.get_context(context)\n        if spaces:\n            observation_space, action_space = spaces\n        else:\n            logger.log('Creating dummy env object to get spaces')\n            with logger.scoped_configure(format_strs=[]):\n                dummy = env_fns[0]()\n                observation_space, action_space = dummy.observation_space, dummy.action_space\n                dummy.close()\n                del dummy\n        VecEnv.__init__(self, len(env_fns), observation_space, action_space)\n        self.obs_keys, self.obs_shapes, self.obs_dtypes = obs_space_info(observation_space)\n        self.obs_bufs = [\n            {k: ctx.Array(_NP_TO_CT[self.obs_dtypes[k].type], int(np.prod(self.obs_shapes[k]))) for k in self.obs_keys}\n            for _ in env_fns]\n        self.parent_pipes = []\n        self.procs = []\n        with clear_mpi_env_vars():\n            for env_fn, obs_buf in zip(env_fns, self.obs_bufs):\n                wrapped_fn = CloudpickleWrapper(env_fn)\n                parent_pipe, child_pipe = ctx.Pipe()\n                proc = ctx.Process(target=_subproc_worker,\n                            args=(child_pipe, parent_pipe, wrapped_fn, obs_buf, self.obs_shapes, self.obs_dtypes, self.obs_keys))\n                proc.daemon = True\n                self.procs.append(proc)\n                self.parent_pipes.append(parent_pipe)\n                proc.start()\n                child_pipe.close()\n        self.waiting_step = False\n        self.viewer = None\n\n    def reset(self):\n        if self.waiting_step:\n            logger.warn('Called reset() while waiting for the step to complete')\n            self.step_wait()\n        for pipe in self.parent_pipes:\n            pipe.send(('reset', None))\n        return self._decode_obses([pipe.recv() for pipe in self.parent_pipes])\n\n    def step_async(self, actions):\n        assert len(actions) == len(self.parent_pipes)\n        for pipe, act in zip(self.parent_pipes, actions):\n            pipe.send(('step', act))\n        self.waiting_step = True\n\n    def step_wait(self):\n        outs = [pipe.recv() for pipe in self.parent_pipes]\n        self.waiting_step = False\n        obs, rews, dones, infos = zip(*outs)\n        return self._decode_obses(obs), np.array(rews), np.array(dones), infos\n\n    def talk2Env_async(self, data):\n        assert len(data) == len(self.parent_pipes)\n        for pipe, d in zip(self.parent_pipes, data):\n            pipe.send(('talk2Env', d))\n        self.waiting_step = True\n\n    def talk2Env_wait(self):\n        outs = 
[pipe.recv() for pipe in self.parent_pipes]  # pipe.recv() is a blocking call\n        self.waiting_step = False\n        return np.array(outs)\n\n    def close_extras(self):\n        if self.waiting_step:\n            self.step_wait()\n        for pipe in self.parent_pipes:\n            pipe.send(('close', None))\n        for pipe in self.parent_pipes:\n            pipe.recv()\n            pipe.close()\n        for proc in self.procs:\n            proc.join()\n\n    def get_images(self, mode='human'):\n        for pipe in self.parent_pipes:\n            pipe.send(('render', None))\n        return [pipe.recv() for pipe in self.parent_pipes]\n\n    def _decode_obses(self, obs):\n        result = {}\n        for k in self.obs_keys:\n\n            bufs = [b[k] for b in self.obs_bufs]\n            o = [np.frombuffer(b.get_obj(), dtype=self.obs_dtypes[k]).reshape(self.obs_shapes[k]) for b in bufs]\n            result[k] = np.array(o)\n        return dict_to_obs(result)\n\n\ndef _subproc_worker(pipe, parent_pipe, env_fn_wrapper, obs_bufs, obs_shapes, obs_dtypes, keys):\n    \"\"\"\n    Control a single environment instance using IPC and\n    shared memory.\n    \"\"\"\n    def _write_obs(maybe_dict_obs):\n        flatdict = obs_to_dict(maybe_dict_obs)\n        for k in keys:\n            dst = obs_bufs[k].get_obj()\n            dst_np = np.frombuffer(dst, dtype=obs_dtypes[k]).reshape(obs_shapes[k])  # pylint: disable=W0212\n            np.copyto(dst_np, flatdict[k])\n\n    env = env_fn_wrapper.x()\n    parent_pipe.close()\n    try:\n        while True:\n            cmd, data = pipe.recv()\n            if cmd == 'reset':\n                pipe.send(_write_obs(env.reset()))\n            elif cmd == 'step':\n                obs, reward, done, info = env.step(data)\n                if done:\n                    obs = env.reset()\n                pipe.send((_write_obs(obs), reward, done, info))\n            elif cmd == 'render':\n                pipe.send(env.render(mode='rgb_array'))\n            elif cmd == 'close':\n                pipe.send(None)\n                break\n            elif cmd == 'talk2Env':\n                pipe.send(env.talk2Env(data))\n            else:\n                raise RuntimeError('Got unrecognized cmd %s' % cmd)\n    except KeyboardInterrupt:\n        print('ShmemVecEnv worker: got KeyboardInterrupt')\n    finally:\n        env.close()\n"
  },
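`ShmemVecEnv` takes a list of thunks, each of which builds one environment inside its own worker process. A hedged usage sketch follows; `CartPole-v1` is only a stand-in for the project's crowd simulation environments, and `env.seed` assumes the older gym API this codebase targets.

```
import gym
from rl.networks.shmem_vec_env import ShmemVecEnv

def make_env(seed):
    def _thunk():
        env = gym.make('CartPole-v1')  # stand-in environment for illustration
        env.seed(seed)
        return env
    return _thunk

if __name__ == '__main__':  # required because worker processes are spawned
    venv = ShmemVecEnv([make_env(i) for i in range(4)], context='spawn')
    obs = venv.reset()
    obs, rews, dones, infos = venv.step([venv.action_space.sample() for _ in range(4)])
    venv.close()
```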
  {
    "path": "rl/networks/srnn_model.py",
    "content": "import torch.nn as nn\nfrom torch.autograd import Variable\nimport torch\nimport numpy as np\nimport copy\n\nfrom rl.networks.network_utils import init\n\n\nclass RNNBase(nn.Module):\n    \"\"\"\n    The class for RNN with done masks\n    \"\"\"\n    # edge: True -> edge RNN, False -> node RNN\n    def __init__(self, args, edge):\n        super(RNNBase, self).__init__()\n        self.args = args\n\n        # if this is an edge RNN\n        if edge:\n            self.gru = nn.GRU(args.human_human_edge_embedding_size, args.human_human_edge_rnn_size)\n        # if this is a node RNN\n        else:\n            self.gru = nn.GRU(args.human_node_embedding_size*2, args.human_node_rnn_size)\n\n        for name, param in self.gru.named_parameters():\n            if 'bias' in name:\n                nn.init.constant_(param, 0)\n            elif 'weight' in name:\n                nn.init.orthogonal_(param)\n\n    # x: [seq_len, nenv, 6 or 30 or 36, ?]\n    # hxs: [1, nenv, human_num, ?]\n    # masks: [1, nenv, 1]\n    def _forward_gru(self, x, hxs, masks):\n        # for acting model, input shape[0] == hidden state shape[0]\n        if x.size(0) == hxs.size(0):\n            # use env dimension as batch\n            # [1, 12, 6, ?] -> [1, 12*6, ?] or [30, 6, 6, ?] -> [30, 6*6, ?]\n            seq_len, nenv, agent_num, _ = x.size()\n            x = x.view(seq_len, nenv*agent_num, -1)\n            mask_agent_num = masks.size()[-1]\n            hxs_times_masks = hxs * (masks.view(seq_len, nenv, mask_agent_num, 1))\n            hxs_times_masks = hxs_times_masks.view(seq_len, nenv*agent_num, -1)\n            x, hxs = self.gru(x, hxs_times_masks) # we already unsqueezed the inputs in SRNN forward function\n            x = x.view(seq_len, nenv, agent_num, -1)\n            hxs = hxs.view(seq_len, nenv, agent_num, -1)\n\n        # during update, input shape[0] * nsteps (30) = hidden state shape[0]\n        else:\n\n            # N: nenv, T: seq_len, agent_num: node num or edge num\n            T, N, agent_num, _ = x.size()\n            # x = x.view(T, N, agent_num, x.size(2))\n\n            # Same deal with masks\n            masks = masks.view(T, N)\n\n            # Let's figure out which steps in the sequence have a zero for any agent\n            # We will always assume t=0 has a zero in it as that makes the logic cleaner\n            # for the [29, num_env] boolean array, if any entry in the second axis (num_env) is True -> True\n            # to make it [29, 1], then select the indices of True entries\n            has_zeros = ((masks[1:] == 0.0) \\\n                            .any(dim=-1)\n                            .nonzero()\n                            .squeeze()\n                            .cpu())\n\n            # +1 to correct the masks[1:]\n            if has_zeros.dim() == 0:\n                # Deal with scalar\n                has_zeros = [has_zeros.item() + 1]\n            else:\n                has_zeros = (has_zeros + 1).numpy().tolist()\n\n            # add t=0 and t=T to the list\n            has_zeros = [0] + has_zeros + [T]\n\n            # hxs = hxs.unsqueeze(0)\n            # hxs = hxs.view(hxs.size(0), hxs.size(1)*hxs.size(2), hxs.size(3))\n            outputs = []\n            for i in range(len(has_zeros) - 1):\n                # We can now process steps that don't have any zeros in masks together!\n                # This is much faster\n                start_idx = has_zeros[i]\n                end_idx = has_zeros[i + 1]\n\n                # x and hxs have 4 
dimensions, merge the 2nd and 3rd dimension\n                x_in = x[start_idx:end_idx]\n                x_in = x_in.view(x_in.size(0), x_in.size(1)*x_in.size(2), x_in.size(3))\n                hxs = hxs.view(hxs.size(0), N, agent_num, -1)\n                hxs = hxs * (masks[start_idx].view(1, -1, 1, 1))\n                hxs = hxs.view(hxs.size(0), hxs.size(1) * hxs.size(2), hxs.size(3))\n                rnn_scores, hxs = self.gru(x_in, hxs)\n\n                outputs.append(rnn_scores)\n\n            # assert len(outputs) == T\n            # x is a (T, N, -1) tensor\n            x = torch.cat(outputs, dim=0)\n            # flatten\n            x = x.view(T, N, agent_num, -1)\n            hxs = hxs.view(1, N, agent_num, -1)\n\n        return x, hxs\n\n\nclass HumanNodeRNN(RNNBase):\n    '''\n    Class representing human Node RNNs in the st-graph\n    '''\n    def __init__(self, args):\n        '''\n        Initializer function\n        params:\n        args : Training arguments\n        infer : Training or test time (True at test time)\n        '''\n        super(HumanNodeRNN, self).__init__(args, edge=False)\n\n        self.args = args\n\n        # Store required sizes\n        self.rnn_size = args.human_node_rnn_size\n        self.output_size = args.human_node_output_size\n        self.embedding_size = args.human_node_embedding_size\n        self.input_size = args.human_node_input_size\n        self.edge_rnn_size = args.human_human_edge_rnn_size\n\n        # Linear layer to embed input\n        self.encoder_linear = nn.Linear(self.input_size, self.embedding_size)\n\n        # ReLU and Dropout layers\n        self.relu = nn.ReLU()\n\n\n        # Linear layer to embed edgeRNN hidden states\n        self.edge_embed = nn.Linear(self.edge_rnn_size, self.embedding_size)\n\n        # Linear layer to embed attention module output\n        self.edge_attention_embed = nn.Linear(self.edge_rnn_size*2, self.embedding_size)\n\n\n        # Output linear layer\n        self.output_linear = nn.Linear(self.rnn_size, self.output_size)\n\n\n\n    def forward(self, pos, h_temporal, h_spatial_other, h, masks):\n        '''\n        Forward pass for the model\n        params:\n        pos : input position\n        h_temporal : hidden state of the temporal edgeRNN corresponding to this node\n        h_spatial_other : output of the attention module\n        h : hidden state of the current nodeRNN\n        c : cell state of the current nodeRNN\n        '''\n        # Encode the input position\n        encoded_input = self.encoder_linear(pos)\n        encoded_input = self.relu(encoded_input)\n\n        # Concat both the embeddings\n        h_edges = torch.cat((h_temporal, h_spatial_other), -1)\n        h_edges_embedded = self.relu(self.edge_attention_embed(h_edges))\n\n        concat_encoded = torch.cat((encoded_input, h_edges_embedded), -1)\n\n        x, h_new = self._forward_gru(concat_encoded, h, masks)\n\n        outputs = self.output_linear(x)\n\n\n        return outputs, h_new\n\n\nclass HumanHumanEdgeRNN(RNNBase):\n    '''\n    Class representing the Human-Human Edge RNN in the s-t graph\n    '''\n    def __init__(self, args):\n        '''\n        Initializer function\n        params:\n        args : Training arguments\n        infer : Training or test time (True at test time)\n        '''\n        super(HumanHumanEdgeRNN, self).__init__(args, edge=True)\n\n        self.args = args\n\n        # Store required sizes\n        self.rnn_size = args.human_human_edge_rnn_size\n        self.embedding_size = 
args.human_human_edge_embedding_size\n        self.input_size = args.human_human_edge_input_size\n\n        # Linear layer to embed input\n        self.encoder_linear = nn.Linear(self.input_size, self.embedding_size)\n        self.relu = nn.ReLU()\n\n\n    def forward(self, inp, h, masks):\n        '''\n        Forward pass for the model\n        params:\n        inp : input edge features\n        h : hidden state of the current edgeRNN\n        c : cell state of the current edgeRNN\n        '''\n        # Encode the input position\n        encoded_input = self.encoder_linear(inp)\n        encoded_input = self.relu(encoded_input)\n\n        x, h_new = self._forward_gru(encoded_input, h, masks)\n\n        return x, encoded_input, h_new\n\n\nclass EdgeAttention(nn.Module):\n    '''\n    Class representing the attention module\n    '''\n    def __init__(self, args):\n        '''\n        Initializer function\n        params:\n        args : Training arguments\n        infer : Training or test time (True at test time)\n        '''\n        super(EdgeAttention, self).__init__()\n\n        self.args = args\n\n        # Store required sizes\n        self.human_human_edge_rnn_size = args.human_human_edge_rnn_size\n        self.human_node_rnn_size = args.human_node_rnn_size\n        self.attention_size = args.attention_size\n\n\n\n        # Linear layer to embed temporal edgeRNN hidden state\n        self.temporal_edge_layer=nn.ModuleList()\n        self.spatial_edge_layer=nn.ModuleList()\n\n        self.temporal_edge_layer.append(nn.Linear(self.human_human_edge_rnn_size, self.attention_size))\n\n        # Linear layer to embed spatial edgeRNN hidden states\n        self.spatial_edge_layer.append(nn.Linear(self.human_human_edge_rnn_size, self.attention_size))\n\n\n\n        # number of agents who have spatial edges (complete graph: all 6 agents; incomplete graph: only the robot)\n        self.agent_num = 1\n        self.num_attention_head = 1\n\n    def att_func(self, temporal_embed, spatial_embed, h_spatials):\n        seq_len, nenv, num_edges, h_size = h_spatials.size()  # [1, 12, 30, 256] in testing,  [12, 30, 256] in training\n        attn = temporal_embed * spatial_embed\n        attn = torch.sum(attn, dim=3)\n\n        # Variable length\n        temperature = num_edges / np.sqrt(self.attention_size)\n        attn = torch.mul(attn, temperature)\n\n        # Softmax\n\n        attn = attn.view(seq_len, nenv, self.agent_num, self.human_num)\n        attn = torch.nn.functional.softmax(attn, dim=-1)\n        # print(attn[0, 0, 0].cpu().numpy())\n\n        # Compute weighted value\n        # weighted_value = torch.mv(torch.t(h_spatials), attn)\n\n        # reshape h_spatials and attn\n        # shape[0] = seq_len, shape[1] = num of spatial edges (6*5 = 30), shape[2] = 256\n        h_spatials = h_spatials.view(seq_len, nenv, self.agent_num, self.human_num, h_size)\n        h_spatials = h_spatials.view(seq_len * nenv * self.agent_num, self.human_num, h_size).permute(0, 2,\n                                                                                         1)  # [seq_len*nenv*6, 5, 256] -> [seq_len*nenv*6, 256, 5]\n\n        attn = attn.view(seq_len * nenv * self.agent_num, self.human_num).unsqueeze(-1)  # [seq_len*nenv*6, 5, 1]\n        weighted_value = torch.bmm(h_spatials, attn)  # [seq_len*nenv*6, 256, 1]\n\n        # reshape back\n        weighted_value = weighted_value.squeeze(-1).view(seq_len, nenv, self.agent_num, h_size)  # [seq_len, 12, 6 or 1, 256]\n        return weighted_value, 
attn\n\n\n\n    # h_temporal: [seq_len, nenv, 1, 256]\n    # h_spatials: [seq_len, nenv, 5, 256]\n    def forward(self, h_temporal, h_spatials):\n        '''\n        Forward pass for the model\n        params:\n        h_temporal : Hidden state of the temporal edgeRNN\n        h_spatials : Hidden states of all spatial edgeRNNs connected to the node.\n        '''\n        # find the number of humans by the size of spatial edgeRNN hidden state\n        self.human_num = h_spatials.size()[2] // self.agent_num\n\n        weighted_value_list, attn_list=[],[]\n        for i in range(self.num_attention_head):\n\n            # Embed the temporal edgeRNN hidden state\n            temporal_embed = self.temporal_edge_layer[i](h_temporal)\n            # temporal_embed = temporal_embed.squeeze(0)\n\n            # Embed the spatial edgeRNN hidden states\n            spatial_embed = self.spatial_edge_layer[i](h_spatials)\n\n            # Dot based attention\n            try:\n                temporal_embed = temporal_embed.repeat_interleave(self.human_num, dim=2)\n            except RuntimeError:\n                print('hello')\n            weighted_value,attn=self.att_func(temporal_embed, spatial_embed, h_spatials)\n            weighted_value_list.append(weighted_value)\n            attn_list.append(attn)\n\n        if self.num_attention_head > 1:\n            return self.final_attn_linear(torch.cat(weighted_value_list, dim=-1)), attn_list\n        else:\n            return weighted_value_list[0], attn_list[0]\n\n\nclass SRNN(nn.Module):\n    \"\"\"\n    Class for the DS-RNN model, see https://arxiv.org/abs/2011.04820 for details\n    \"\"\"\n    def __init__(self, obs_space_dict, args, infer=False):\n        \"\"\"\n        Initializer function\n        params:\n        args : Training arguments\n        infer : Training or test time (True at test time)\n        \"\"\"\n        super(SRNN, self).__init__()\n        self.infer = infer\n        self.is_recurrent = True\n        self.args=args\n\n        self.human_num = obs_space_dict['spatial_edges'].shape[0]\n\n        self.seq_length = args.seq_length\n        self.nenv = args.num_processes\n        self.nminibatch = args.num_mini_batch\n\n        # Store required sizes\n        self.human_node_rnn_size = args.human_node_rnn_size\n        self.human_human_edge_rnn_size = args.human_human_edge_rnn_size\n        self.output_size = args.human_node_output_size\n\n        # Initialize the Node and Edge RNNs\n        self.humanNodeRNN = HumanNodeRNN(args)\n        spatial_args = copy.deepcopy(args)\n        spatial_args.human_human_edge_input_size = obs_space_dict['spatial_edges'].shape[1]\n        self.humanhumanEdgeRNN_spatial = HumanHumanEdgeRNN(spatial_args)\n        self.humanhumanEdgeRNN_temporal = HumanHumanEdgeRNN(args)\n\n        # Initialize attention module\n        self.attn = EdgeAttention(args)\n\n        init_ = lambda m: init(m, nn.init.orthogonal_, lambda x: nn.init.\n                               constant_(x, 0), np.sqrt(2))\n\n        num_inputs = hidden_size = self.output_size\n\n        self.actor = nn.Sequential(\n            init_(nn.Linear(num_inputs, hidden_size)), nn.Tanh(),\n            init_(nn.Linear(hidden_size, hidden_size)), nn.Tanh())\n\n        self.critic = nn.Sequential(\n            init_(nn.Linear(num_inputs, hidden_size)), nn.Tanh(),\n            init_(nn.Linear(hidden_size, hidden_size)), nn.Tanh())\n\n\n        self.critic_linear = init_(nn.Linear(hidden_size, 1))\n        robot_size = 7 if args.env_type == 
'crowd_sim' else 2\n        self.robot_linear = init_(nn.Linear(robot_size, 3))\n        self.human_node_final_linear=init_(nn.Linear(self.output_size,2))\n\n\n        self.temporal_edges = [0]\n        self.spatial_edges = np.arange(1, self.human_num+1)\n        # make sure the spatial edge embedding size is the same as temporal edge size\n        self.spatial_linear = init_(nn.Linear(obs_space_dict['spatial_edges'].shape[1], 2))\n\n\n    def forward(self, inputs, rnn_hxs, masks, infer=False):\n        if infer:\n            # Test time/rollout trajectory time\n            seq_length = 1\n            nenv = self.nenv\n\n        else:\n            # Training time\n            seq_length = self.seq_length\n            nenv = self.nenv // self.nminibatch\n\n        robot_node = reshapeT(inputs['robot_node'], seq_length, nenv)\n        temporal_edges = reshapeT(inputs['temporal_edges'], seq_length, nenv)\n        spatial_edges = reshapeT(inputs['spatial_edges'], seq_length, nenv)\n\n        hidden_states_node_RNNs = reshapeT(rnn_hxs['human_node_rnn'], 1, nenv)\n        hidden_states_edge_RNNs = reshapeT(rnn_hxs['human_human_edge_rnn'], 1, nenv)\n        masks = reshapeT(masks, seq_length, nenv)\n\n\n\n        if self.args.no_cuda:\n            all_hidden_states_edge_RNNs = Variable(\n                torch.zeros(1, nenv, self.human_num + 1, rnn_hxs['human_human_edge_rnn'].size()[-1]).cpu())\n        else:\n            all_hidden_states_edge_RNNs = Variable(\n                torch.zeros(1, nenv, self.human_num + 1, rnn_hxs['human_human_edge_rnn'].size()[-1]).cuda())\n\n\n        # Do forward pass through temporaledgeRNN\n        # todo: now edgeRNNs has 3 return values!\n        hidden_temporal_start_end=hidden_states_edge_RNNs[:,:,self.temporal_edges,:]\n        output_temporal, _, hidden_temporal = self.humanhumanEdgeRNN_temporal(temporal_edges, hidden_temporal_start_end, masks)\n\n        # Update the hidden state and cell state\n        all_hidden_states_edge_RNNs[:, :, self.temporal_edges,:] = hidden_temporal\n\n        # Spatial Edges\n        # spatial_edges = self.spatial_linear(spatial_edges)\n        hidden_spatial_start_end=hidden_states_edge_RNNs[:,:,self.spatial_edges,:]\n        # Do forward pass through spatialedgeRNN\n        output_spatial, _, hidden_spatial = self.humanhumanEdgeRNN_spatial(spatial_edges, hidden_spatial_start_end, masks)\n\n        # Update the hidden state and cell state\n        all_hidden_states_edge_RNNs[:, :,self.spatial_edges,: ] = hidden_spatial\n\n\n        # Do forward pass through attention module\n        hidden_attn_weighted, _ = self.attn(output_temporal, output_spatial)\n\n\n        # concatenate human node features with robot features\n        nodes_current_selected = self.robot_linear(robot_node)\n\n        # Do a forward pass through nodeRNN\n        outputs, h_nodes \\\n            = self.humanNodeRNN(nodes_current_selected, output_temporal, hidden_attn_weighted, hidden_states_node_RNNs, masks)\n\n\n        # Update the hidden and cell states\n        all_hidden_states_node_RNNs = h_nodes\n        outputs_return = outputs\n\n        rnn_hxs['human_node_rnn'] = all_hidden_states_node_RNNs\n        rnn_hxs['human_human_edge_rnn'] = all_hidden_states_edge_RNNs\n\n\n        # x is the output of the robot node and will be sent to actor and critic\n        x = outputs_return[:, :, 0, :]\n\n        hidden_critic = self.critic(x)\n        hidden_actor = self.actor(x)\n\n        for key in rnn_hxs:\n            rnn_hxs[key] = 
rnn_hxs[key].squeeze(0)\n\n        if infer:\n            return self.critic_linear(hidden_critic).squeeze(0), hidden_actor.squeeze(0), rnn_hxs\n        else:\n            return self.critic_linear(hidden_critic).view(-1, 1), hidden_actor.view(-1, self.output_size), rnn_hxs\n\n\ndef reshapeT(T, seq_length, nenv):\n    shape = T.size()[1:]\n    return T.unsqueeze(0).reshape((seq_length, nenv, *shape))"
  },
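`RNNBase._forward_gru` handles episode boundaries by splitting the rollout at steps where any environment's mask is zero and resetting the hidden state there, so contiguous chunks can still be pushed through the GRU in one call. A minimal standalone sketch of that segmentation, assuming toy sizes and a plain `nn.GRU`:

```
import torch
import torch.nn as nn

T, N, H = 6, 2, 8
gru = nn.GRU(H, H)
x = torch.randn(T, N, H)
masks = torch.ones(T, N)
masks[3, 1] = 0.                      # env 1 finished right before step 3

# Steps whose mask contains a zero become segment boundaries (offset by +1 for masks[1:]).
has_zeros = ((masks[1:] == 0.0).any(dim=-1).nonzero().squeeze(-1) + 1).tolist()
boundaries = [0] + has_zeros + [T]

hxs = torch.zeros(1, N, H)
outputs = []
for start, end in zip(boundaries[:-1], boundaries[1:]):
    hxs = hxs * masks[start].view(1, N, 1)   # zero the hidden state where an episode ended
    out, hxs = gru(x[start:end], hxs)
    outputs.append(out)
print(torch.cat(outputs, dim=0).shape)       # torch.Size([6, 2, 8])
```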
  {
    "path": "rl/networks/storage.py",
    "content": "import torch\nfrom torch.utils.data.sampler import BatchSampler, SubsetRandomSampler\n\n\ndef _flatten_helper(T, N, _tensor):\n    if isinstance(_tensor, dict):\n        for key in _tensor:\n            _tensor[key] = _tensor[key].view(T * N, *(_tensor[key].size()[2:]))\n        return _tensor\n    else:\n        return _tensor.view(T * N, *_tensor.size()[2:])\n\nclass RolloutStorage(object):\n    \"\"\" The rollout buffer to store the agent's experience for PPO \"\"\"\n    def __init__(self, num_steps, num_processes, obs_shape, action_space,\n                 human_node_rnn_size, human_human_edge_rnn_size):\n\n        if isinstance(obs_shape, dict):\n            self.obs = {}\n            for key in obs_shape:\n                self.obs[key] = torch.zeros(num_steps + 1, num_processes, *(obs_shape[key].shape))\n            self.human_num = obs_shape['spatial_edges'].shape[0]\n        else:\n            self.obs = torch.zeros(num_steps + 1, num_processes, *obs_shape)\n\n        self.recurrent_hidden_states = {} # a dict of tuple(hidden state, cell state)\n\n        node_num = 1\n        # todo: uncomment the next line for previous models!\n        edge_num = self.human_num + 1\n        # edge_num = 2\n\n        self.recurrent_hidden_states['human_node_rnn'] = torch.zeros(num_steps + 1, num_processes, node_num, human_node_rnn_size)\n        self.recurrent_hidden_states['human_human_edge_rnn'] = torch.zeros(num_steps + 1, num_processes, edge_num, human_human_edge_rnn_size)\n\n        self.rewards = torch.zeros(num_steps, num_processes, 1)\n        self.value_preds = torch.zeros(num_steps + 1, num_processes, 1)\n        self.returns = torch.zeros(num_steps + 1, num_processes, 1)\n        self.action_log_probs = torch.zeros(num_steps, num_processes, 1)\n        if action_space.__class__.__name__ == 'Discrete':\n            action_shape = 1\n        else:\n            action_shape = action_space.shape[0]\n        self.actions = torch.zeros(num_steps, num_processes, action_shape)\n        if action_space.__class__.__name__ == 'Discrete':\n            self.actions = self.actions.long()\n        self.masks = torch.ones(num_steps + 1, num_processes, 1)\n\n        # Masks that indicate whether it's a true terminal state\n        # or time limit end state\n        self.bad_masks = torch.ones(num_steps + 1, num_processes, 1)\n\n        self.num_steps = num_steps\n        self.step = 0\n\n    def to(self, device):\n        for key in self.obs:\n            self.obs[key] = self.obs[key].to(device)\n        for key in self.recurrent_hidden_states:\n            self.recurrent_hidden_states[key] = self.recurrent_hidden_states[key].to(device)\n\n        self.rewards = self.rewards.to(device)\n        self.value_preds = self.value_preds.to(device)\n        self.returns = self.returns.to(device)\n        self.action_log_probs = self.action_log_probs.to(device)\n        self.actions = self.actions.to(device)\n        self.masks = self.masks.to(device)\n        self.bad_masks = self.bad_masks.to(device)\n\n    def insert(self, obs, recurrent_hidden_states, actions, action_log_probs,\n               value_preds, rewards, masks, bad_masks):\n\n\n        for key in self.obs:\n            self.obs[key][self.step + 1].copy_(obs[key])\n        for key in recurrent_hidden_states:\n            self.recurrent_hidden_states[key][self.step + 1].copy_(recurrent_hidden_states[key])\n\n        self.actions[self.step].copy_(actions)\n        self.action_log_probs[self.step].copy_(action_log_probs)\n        
self.value_preds[self.step].copy_(value_preds)\n        self.rewards[self.step].copy_(rewards)\n        self.masks[self.step + 1].copy_(masks)\n        self.bad_masks[self.step + 1].copy_(bad_masks)\n\n        self.step = (self.step + 1) % self.num_steps\n\n    def after_update(self):\n\n        for key in self.obs:\n            self.obs[key][0].copy_(self.obs[key][-1])\n        for key in self.recurrent_hidden_states:\n            self.recurrent_hidden_states[key][0].copy_(self.recurrent_hidden_states[key][-1])\n\n        self.masks[0].copy_(self.masks[-1])\n        self.bad_masks[0].copy_(self.bad_masks[-1])\n\n    def compute_returns(self,\n                        next_value,\n                        use_gae,\n                        gamma,\n                        gae_lambda,\n                        use_proper_time_limits=True):\n        if use_proper_time_limits:\n            if use_gae:\n                self.value_preds[-1] = next_value\n                gae = 0\n                for step in reversed(range(self.rewards.size(0))):\n                    delta = self.rewards[step] + gamma * self.value_preds[\n                        step + 1] * self.masks[step +\n                                               1] - self.value_preds[step]\n                    gae = delta + gamma * gae_lambda * self.masks[step +\n                                                                  1] * gae\n                    gae = gae * self.bad_masks[step + 1]\n                    self.returns[step] = gae + self.value_preds[step]\n            else:\n                self.returns[-1] = next_value\n                for step in reversed(range(self.rewards.size(0))):\n                    self.returns[step] = (self.returns[step + 1] * \\\n                        gamma * self.masks[step + 1] + self.rewards[step]) * self.bad_masks[step + 1] \\\n                        + (1 - self.bad_masks[step + 1]) * self.value_preds[step]\n        else:\n            if use_gae:\n                self.value_preds[-1] = next_value\n                gae = 0\n                for step in reversed(range(self.rewards.size(0))):\n                    delta = self.rewards[step] + gamma * self.value_preds[\n                        step + 1] * self.masks[step +\n                                               1] - self.value_preds[step]\n                    gae = delta + gamma * gae_lambda * self.masks[step +\n                                                                  1] * gae\n                    self.returns[step] = gae + self.value_preds[step]\n            else:\n                self.returns[-1] = next_value\n                for step in reversed(range(self.rewards.size(0))):\n                    self.returns[step] = self.returns[step + 1] * \\\n                        gamma * self.masks[step + 1] + self.rewards[step]\n\n    def feed_forward_generator(self,\n                               advantages,\n                               num_mini_batch=None,\n                               mini_batch_size=None):\n        num_steps, num_processes = self.rewards.size()[0:2]\n        batch_size = num_processes * num_steps\n\n        if mini_batch_size is None:\n            assert batch_size >= num_mini_batch, (\n                \"PPO requires the number of processes ({}) \"\n                \"* number of steps ({}) = {} \"\n                \"to be greater than or equal to the number of PPO mini batches ({}).\"\n                \"\".format(num_processes, num_steps, num_processes * num_steps,\n                          num_mini_batch))\n           
 mini_batch_size = batch_size // num_mini_batch\n        sampler = BatchSampler(\n            SubsetRandomSampler(range(batch_size)),\n            mini_batch_size,\n            drop_last=True)\n        for indices in sampler:\n\n\n            obs_batch = {}\n            for key in self.obs:\n                obs_batch[key] = self.obs[key][:-1].view(-1, *self.obs[key].size()[2:])[indices]\n            recurrent_hidden_states_batch = {}\n            for key in self.recurrent_hidden_states:\n                recurrent_hidden_states_batch[key] = self.recurrent_hidden_states[key][:-1].view(\n                -1, self.recurrent_hidden_states[key].size(-1))[indices]\n\n            actions_batch = self.actions.view(-1,\n                                              self.actions.size(-1))[indices]\n            value_preds_batch = self.value_preds[:-1].view(-1, 1)[indices]\n            return_batch = self.returns[:-1].view(-1, 1)[indices]\n            masks_batch = self.masks[:-1].view(-1, 1)[indices]\n            old_action_log_probs_batch = self.action_log_probs.view(-1,\n                                                                    1)[indices]\n            if advantages is None:\n                adv_targ = None\n            else:\n                adv_targ = advantages.view(-1, 1)[indices]\n\n            yield obs_batch, recurrent_hidden_states_batch, actions_batch, \\\n                value_preds_batch, return_batch, masks_batch, old_action_log_probs_batch, adv_targ\n\n    def recurrent_generator(self, advantages, num_mini_batch):\n        num_processes = self.rewards.size(1)\n        assert num_processes >= num_mini_batch, (\n            \"PPO requires the number of processes ({}) \"\n            \"to be greater than or equal to the number of \"\n            \"PPO mini batches ({}).\".format(num_processes, num_mini_batch))\n        num_envs_per_batch = num_processes // num_mini_batch\n        perm = torch.randperm(num_processes)\n        for start_ind in range(0, num_processes, num_envs_per_batch):\n\n            obs_batch = {}\n            for key in self.obs:\n                obs_batch[key] = []\n            recurrent_hidden_states_batch = {}\n            for key in self.recurrent_hidden_states:\n                recurrent_hidden_states_batch[key] = []\n\n            actions_batch = []\n            value_preds_batch = []\n            return_batch = []\n            masks_batch = []\n            old_action_log_probs_batch = []\n            adv_targ = []\n\n            for offset in range(num_envs_per_batch):\n                ind = perm[start_ind + offset]\n\n\n                for key in self.obs:\n                    obs_batch[key].append(self.obs[key][:-1, ind])\n                for key in self.recurrent_hidden_states:\n                    recurrent_hidden_states_batch[key].append(self.recurrent_hidden_states[key][0:1, ind])\n\n                actions_batch.append(self.actions[:, ind])\n                value_preds_batch.append(self.value_preds[:-1, ind])\n                return_batch.append(self.returns[:-1, ind])\n                masks_batch.append(self.masks[:-1, ind])\n                old_action_log_probs_batch.append(\n                    self.action_log_probs[:, ind])\n                adv_targ.append(advantages[:, ind])\n\n            T, N = self.num_steps, num_envs_per_batch\n            # These are all tensors of size (T, N, -1)\n\n            actions_batch = torch.stack(actions_batch, 1)\n            value_preds_batch = torch.stack(value_preds_batch, 1)\n            return_batch = 
torch.stack(return_batch, 1)\n            masks_batch = torch.stack(masks_batch, 1)\n            old_action_log_probs_batch = torch.stack(\n                old_action_log_probs_batch, 1)\n            adv_targ = torch.stack(adv_targ, 1)\n\n            for key in obs_batch:\n                obs_batch[key] = torch.stack(obs_batch[key], 1)\n            for key in recurrent_hidden_states_batch:\n                temp = torch.stack(recurrent_hidden_states_batch[key], 1)\n                recurrent_hidden_states_batch[key] = temp.view(N, *(temp.size()[2:]))\n\n            # Flatten the (T, N, ...) tensors to (T * N, ...)\n            obs_batch = _flatten_helper(T, N, obs_batch)\n            actions_batch = _flatten_helper(T, N, actions_batch)\n            value_preds_batch = _flatten_helper(T, N, value_preds_batch)\n            return_batch = _flatten_helper(T, N, return_batch)\n            masks_batch = _flatten_helper(T, N, masks_batch)\n            old_action_log_probs_batch = _flatten_helper(T, N, \\\n                    old_action_log_probs_batch)\n            adv_targ = _flatten_helper(T, N, adv_targ)\n\n            yield obs_batch, recurrent_hidden_states_batch, actions_batch, \\\n                value_preds_batch, return_batch, masks_batch, old_action_log_probs_batch, adv_targ\n"
  },
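The GAE branch of `compute_returns` implements the standard recursion delta_t = r_t + gamma * V_{t+1} * mask_{t+1} - V_t, gae = delta_t + gamma * lambda * mask_{t+1} * gae, return_t = gae + V_t. A tiny worked example with made-up numbers (one process, no time-limit handling):

```
import torch

num_steps, gamma, lam = 4, 0.99, 0.95
rewards = torch.tensor([1., 0., 0., 1.]).view(-1, 1)
values  = torch.tensor([0.5, 0.4, 0.3, 0.2, 0.1]).view(-1, 1)  # length T + 1
masks   = torch.ones(num_steps + 1, 1)                          # no episode ends

returns = torch.zeros(num_steps + 1, 1)
gae = 0.
for step in reversed(range(num_steps)):
    delta = rewards[step] + gamma * values[step + 1] * masks[step + 1] - values[step]
    gae = delta + gamma * lam * masks[step + 1] * gae
    returns[step] = gae + values[step]
print(returns.squeeze(-1))
```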
  {
    "path": "rl/ppo/__init__.py",
    "content": "\nfrom .ppo import PPO"
  },
  {
    "path": "rl/ppo/ppo.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.optim as optim\n\n\nclass PPO():\n    \"\"\" Class for the PPO optimizer \"\"\"\n    def __init__(self,\n                 actor_critic,\n                 clip_param,\n                 ppo_epoch,\n                 num_mini_batch,\n                 value_loss_coef,\n                 entropy_coef,\n                 lr=None,\n                 eps=None,\n                 max_grad_norm=None,\n                 use_clipped_value_loss=True):\n\n        self.actor_critic = actor_critic\n\n        self.clip_param = clip_param\n        self.ppo_epoch = ppo_epoch\n        self.num_mini_batch = num_mini_batch\n\n        self.value_loss_coef = value_loss_coef\n        self.entropy_coef = entropy_coef\n\n        self.max_grad_norm = max_grad_norm\n        self.use_clipped_value_loss = use_clipped_value_loss\n\n        self.optimizer = optim.Adam(actor_critic.parameters(), lr=lr, eps=eps)\n\n\n\n    def update(self, rollouts):\n        advantages = rollouts.returns[:-1] - rollouts.value_preds[:-1]\n        advantages = (advantages - advantages.mean()) / (\n            advantages.std() + 1e-5)\n\n        value_loss_epoch = 0\n        action_loss_epoch = 0\n        dist_entropy_epoch = 0\n\n\n        for e in range(self.ppo_epoch):\n            if self.actor_critic.is_recurrent:\n                data_generator = rollouts.recurrent_generator(\n                    advantages, self.num_mini_batch)\n            else:\n                data_generator = rollouts.feed_forward_generator(\n                    advantages, self.num_mini_batch)\n\n            for sample in data_generator:\n                obs_batch, recurrent_hidden_states_batch, actions_batch, \\\n                   value_preds_batch, return_batch, masks_batch, old_action_log_probs_batch, \\\n                        adv_targ = sample\n\n                # Reshape to do in a single forward pass for all steps\n                values, action_log_probs, dist_entropy, _ = self.actor_critic.evaluate_actions(\n                    obs_batch, recurrent_hidden_states_batch, masks_batch,\n                    actions_batch)\n\n                ratio = torch.exp(action_log_probs -\n                                  old_action_log_probs_batch)\n                surr1 = ratio * adv_targ\n                surr2 = torch.clamp(ratio, 1.0 - self.clip_param,\n                                    1.0 + self.clip_param) * adv_targ\n                action_loss = -torch.min(surr1, surr2).mean()\n\n                if self.use_clipped_value_loss:\n                    value_pred_clipped = value_preds_batch + \\\n                        (values - value_preds_batch).clamp(-self.clip_param, self.clip_param)\n                    value_losses = (values - return_batch).pow(2)\n                    value_losses_clipped = (\n                        value_pred_clipped - return_batch).pow(2)\n                    value_loss = 0.5 * torch.max(value_losses,\n                                                 value_losses_clipped).mean()\n                else:\n                    value_loss = 0.5 * (return_batch - values).pow(2).mean()\n\n                self.optimizer.zero_grad()\n                total_loss=value_loss * self.value_loss_coef + action_loss - dist_entropy * self.entropy_coef\n                total_loss.backward()\n                nn.utils.clip_grad_norm_(self.actor_critic.parameters(),\n                                         self.max_grad_norm)\n                self.optimizer.step()\n\n                value_loss_epoch 
+= value_loss.item()\n                action_loss_epoch += action_loss.item()\n                dist_entropy_epoch += dist_entropy.item()\n\n\n        num_updates = self.ppo_epoch * self.num_mini_batch\n\n        value_loss_epoch /= num_updates\n        action_loss_epoch /= num_updates\n        dist_entropy_epoch /= num_updates\n\n\n        return value_loss_epoch, action_loss_epoch, dist_entropy_epoch\n"
  },
  {
    "path": "rl/vec_env/__init__.py",
    "content": "from .vec_env import AlreadySteppingError, NotSteppingError, VecEnv, VecEnvWrapper, VecEnvObservationWrapper, CloudpickleWrapper\nfrom .dummy_vec_env import DummyVecEnv\nfrom .shmem_vec_env import ShmemVecEnv\nfrom .vec_frame_stack import VecFrameStack\n\nfrom .vec_normalize import VecNormalize\nfrom .vec_remove_dict_obs import VecExtractDictObs\n\n__all__ = ['AlreadySteppingError', 'NotSteppingError', 'VecEnv', 'VecEnvWrapper', 'VecEnvObservationWrapper', 'CloudpickleWrapper', 'DummyVecEnv', 'ShmemVecEnv', 'VecFrameStack', 'VecNormalize', 'VecExtractDictObs']\n"
  },
  {
    "path": "rl/vec_env/dummy_vec_env.py",
    "content": "import numpy as np\nfrom .vec_env import VecEnv\nfrom .util import copy_obs_dict, dict_to_obs, obs_space_info\nimport copy\n\nclass DummyVecEnv(VecEnv):\n    \"\"\"\n    VecEnv that does runs multiple environments sequentially, that is,\n    the step and reset commands are send to one environment at a time.\n    Useful when debugging and when num_env == 1 (in the latter case,\n    avoids communication overhead)\n    \"\"\"\n    def __init__(self, env_fns):\n        \"\"\"\n        Arguments:\n\n        env_fns: iterable of callables      functions that build environments\n        \"\"\"\n        self.envs = [fn() for fn in env_fns]\n        env = self.envs[0]\n        VecEnv.__init__(self, len(env_fns), env.observation_space, env.action_space)\n        obs_space = env.observation_space\n        self.keys, shapes, dtypes = obs_space_info(obs_space)\n\n        self.buf_obs = { k: np.zeros((self.num_envs,) + tuple(shapes[k]), dtype=dtypes[k]) for k in self.keys }\n        self.obs_list = []\n        self.buf_dones = np.zeros((self.num_envs,), dtype=np.bool)\n        self.buf_rews  = np.zeros((self.num_envs,), dtype=np.float32)\n        self.buf_infos = [{} for _ in range(self.num_envs)]\n        self.actions = None\n        self.spec = self.envs[0].spec\n\n    def step_async(self, actions):\n        listify = True\n        try:\n            if len(actions) == self.num_envs:\n                listify = False\n        except TypeError:\n            pass\n\n        if not listify:\n            self.actions = actions\n        else:\n            assert self.num_envs == 1, \"actions {} is either not a list or has a wrong size - cannot match to {} environments\".format(actions, self.num_envs)\n            self.actions = [actions]\n\n    def step_wait(self):\n        for e in range(self.num_envs):\n            action = self.actions[e]\n            # if isinstance(self.envs[e].action_space, spaces.Discrete):\n            #    action = int(action)\n\n            obs, self.buf_rews[e], self.buf_dones[e], self.buf_infos[e] = self.envs[e].step(action)\n            if self.buf_dones[e]:\n                obs = self.envs[e].reset()\n            self._save_obs(e, obs)\n        return (self._obs_from_buf(), np.copy(self.buf_rews), np.copy(self.buf_dones),\n                copy.deepcopy(self.buf_infos))\n\n    def reset(self):\n        for e in range(self.num_envs):\n            obs = self.envs[e].reset()\n            self._save_obs(e, obs)\n        return self._obs_from_buf()\n\n    def talk2Env_async(self, data):\n        self.envs[0].env.talk2Env(data[0])\n        pass\n\n    def talk2Env_wait(self):\n        return [True]\n\n    def _save_obs(self, e, obs):\n        d={}\n        for k in self.keys:\n            if k is None:\n                self.buf_obs[k][e] = obs\n            else:\n                d[k] = self.buf_obs[k][0]\n                self.buf_obs[k][e] = obs[k]\n        self.obs_list = [dict_to_obs(copy_obs_dict(d))]\n\n    def _obs_from_buf(self):\n        return dict_to_obs(copy_obs_dict(self.buf_obs))\n\n    def get_images(self):\n        return [env.render(mode='rgb_array') for env in self.envs]\n\n    def render(self, mode='human'):\n        if self.num_envs == 1:\n            return self.envs[0].render(mode=mode)\n        else:\n            return super().render(mode=mode)\n"
  },
  {
    "path": "rl/vec_env/envs.py",
    "content": "import os\r\n\r\nimport gym\r\nimport numpy as np\r\n\r\nfrom gym.spaces.box import Box\r\n\r\nfrom .vec_env import VecEnvWrapper\r\nimport torch\r\n\r\nfrom .dummy_vec_env import DummyVecEnv\r\nfrom .shmem_vec_env import ShmemVecEnv\r\n\r\n\r\nfrom .vec_pretext_normalize import VecPretextNormalize\r\n\r\n\r\ndef make_env(env_id, seed, rank):\r\n    def _thunk():\r\n\r\n        env = gym.make(env_id)\r\n\r\n        env.seed(seed + rank)\r\n\r\n        if str(env.__class__.__name__).find('TimeLimit') >= 0:\r\n            env = TimeLimitMask(env)\r\n\r\n        return env\r\n\r\n    return _thunk\r\n\r\n\r\ndef make_vec_envs(env_name,\r\n                  seed,\r\n                  num_processes,\r\n                  gamma,\r\n                  device,\r\n                  randomCollect,\r\n                  config=None):\r\n    envs = [\r\n        make_env(env_name, seed, i)\r\n        for i in range(num_processes)\r\n    ]\r\n\r\n    if len(envs) > 1:\r\n        envs = ShmemVecEnv(envs) if os.name == 'nt' else ShmemVecEnv(envs, context='fork')\r\n    else:\r\n        envs = DummyVecEnv(envs)\r\n\r\n    if not randomCollect: # if it is not for random data collection phase\r\n        if gamma is None:\r\n            envs = VecPretextNormalize(envs, ob=False, ret=False, config=config)\r\n        else:\r\n            envs = VecPretextNormalize(envs, ob=False, gamma=gamma, config=config)\r\n\r\n        envs = VecPyTorch(envs, device)\r\n\r\n    return envs\r\n\r\n\r\n# Checks whether done was caused my timit limits or not\r\nclass TimeLimitMask(gym.Wrapper):\r\n    def step(self, action):\r\n        obs, rew, done, info = self.env.step(action)\r\n        if done and self.env._max_episode_steps == self.env._elapsed_steps:\r\n            info['bad_transition'] = True\r\n\r\n        return obs, rew, done, info\r\n\r\n    def reset(self, **kwargs):\r\n        return self.env.reset(**kwargs)\r\n\r\nclass VecPyTorch(VecEnvWrapper):\r\n    def __init__(self, venv, device):\r\n        \"\"\"Return only every `skip`-th frame\"\"\"\r\n        super(VecPyTorch, self).__init__(venv)\r\n        self.device = device\r\n        # TODO: Fix data types\r\n\r\n    def reset(self):\r\n        obs = self.venv.reset()\r\n        if isinstance(obs, dict):\r\n            for key in obs:\r\n                obs[key]=torch.from_numpy(obs[key]).to(self.device)\r\n        else:\r\n            obs = torch.from_numpy(obs).float().to(self.device)\r\n        return obs\r\n\r\n    def step_async(self, actions):\r\n        if isinstance(actions, torch.LongTensor):\r\n            # Squeeze the dimension for discrete actions\r\n            actions = actions.squeeze(1)\r\n        actions = actions.cpu().numpy()\r\n        self.venv.step_async(actions)\r\n\r\n    def step_wait(self):\r\n        obs, reward, done, info = self.venv.step_wait()\r\n        if isinstance(obs, dict):\r\n            for key in obs:\r\n                obs[key] = torch.from_numpy(obs[key]).to(self.device)\r\n        else:\r\n            obs = torch.from_numpy(obs).float().to(self.device)\r\n        reward = torch.from_numpy(reward).unsqueeze(dim=1).float()\r\n        return obs, reward, done, info\r\n\r\n\r\n\r\n\r\n\r\n"
  },
  {
    "path": "rl/vec_env/logger.py",
    "content": "import os\nimport sys\nimport shutil\nimport os.path as osp\nimport json\nimport time\nimport datetime\nimport tempfile\nfrom collections import defaultdict\nfrom contextlib import contextmanager\n\nDEBUG = 10\nINFO = 20\nWARN = 30\nERROR = 40\n\nDISABLED = 50\n\nclass KVWriter(object):\n    def writekvs(self, kvs):\n        raise NotImplementedError\n\nclass SeqWriter(object):\n    def writeseq(self, seq):\n        raise NotImplementedError\n\nclass HumanOutputFormat(KVWriter, SeqWriter):\n    def __init__(self, filename_or_file):\n        if isinstance(filename_or_file, str):\n            self.file = open(filename_or_file, 'wt')\n            self.own_file = True\n        else:\n            assert hasattr(filename_or_file, 'read'), 'expected file or str, got %s'%filename_or_file\n            self.file = filename_or_file\n            self.own_file = False\n\n    def writekvs(self, kvs):\n        # Create strings for printing\n        key2str = {}\n        for (key, val) in sorted(kvs.items()):\n            if hasattr(val, '__float__'):\n                valstr = '%-8.3g' % val\n            else:\n                valstr = str(val)\n            key2str[self._truncate(key)] = self._truncate(valstr)\n\n        # Find max widths\n        if len(key2str) == 0:\n            print('WARNING: tried to write empty key-value dict')\n            return\n        else:\n            keywidth = max(map(len, key2str.keys()))\n            valwidth = max(map(len, key2str.values()))\n\n        # Write out the data\n        dashes = '-' * (keywidth + valwidth + 7)\n        lines = [dashes]\n        for (key, val) in sorted(key2str.items(), key=lambda kv: kv[0].lower()):\n            lines.append('| %s%s | %s%s |' % (\n                key,\n                ' ' * (keywidth - len(key)),\n                val,\n                ' ' * (valwidth - len(val)),\n            ))\n        lines.append(dashes)\n        self.file.write('\\n'.join(lines) + '\\n')\n\n        # Flush the output to the file\n        self.file.flush()\n\n    def _truncate(self, s):\n        maxlen = 30\n        return s[:maxlen-3] + '...' 
if len(s) > maxlen else s\n\n    def writeseq(self, seq):\n        seq = list(seq)\n        for (i, elem) in enumerate(seq):\n            self.file.write(elem)\n            if i < len(seq) - 1: # add space unless this is the last one\n                self.file.write(' ')\n        self.file.write('\\n')\n        self.file.flush()\n\n    def close(self):\n        if self.own_file:\n            self.file.close()\n\nclass JSONOutputFormat(KVWriter):\n    def __init__(self, filename):\n        self.file = open(filename, 'wt')\n\n    def writekvs(self, kvs):\n        for k, v in sorted(kvs.items()):\n            if hasattr(v, 'dtype'):\n                kvs[k] = float(v)\n        self.file.write(json.dumps(kvs) + '\\n')\n        self.file.flush()\n\n    def close(self):\n        self.file.close()\n\nclass CSVOutputFormat(KVWriter):\n    def __init__(self, filename):\n        self.file = open(filename, 'w+t')\n        self.keys = []\n        self.sep = ','\n\n    def writekvs(self, kvs):\n        # Add our current row to the history\n        extra_keys = list(kvs.keys() - self.keys)\n        extra_keys.sort()\n        if extra_keys:\n            self.keys.extend(extra_keys)\n            self.file.seek(0)\n            lines = self.file.readlines()\n            self.file.seek(0)\n            for (i, k) in enumerate(self.keys):\n                if i > 0:\n                    self.file.write(',')\n                self.file.write(k)\n            self.file.write('\\n')\n            for line in lines[1:]:\n                self.file.write(line[:-1])\n                self.file.write(self.sep * len(extra_keys))\n                self.file.write('\\n')\n        for (i, k) in enumerate(self.keys):\n            if i > 0:\n                self.file.write(',')\n            v = kvs.get(k)\n            if v is not None:\n                self.file.write(str(v))\n        self.file.write('\\n')\n        self.file.flush()\n\n    def close(self):\n        self.file.close()\n\n\nclass TensorBoardOutputFormat(KVWriter):\n    \"\"\"\n    Dumps key/value pairs into TensorBoard's numeric format.\n    \"\"\"\n    def __init__(self, dir):\n        os.makedirs(dir, exist_ok=True)\n        self.dir = dir\n        self.step = 1\n        prefix = 'events'\n        path = osp.join(osp.abspath(dir), prefix)\n        import tensorflow as tf\n        from tensorflow.python import pywrap_tensorflow\n        from tensorflow.core.util import event_pb2\n        from tensorflow.python.util import compat\n        self.tf = tf\n        self.event_pb2 = event_pb2\n        self.pywrap_tensorflow = pywrap_tensorflow\n        self.writer = pywrap_tensorflow.EventsWriter(compat.as_bytes(path))\n\n    def writekvs(self, kvs):\n        def summary_val(k, v):\n            kwargs = {'tag': k, 'simple_value': float(v)}\n            return self.tf.Summary.Value(**kwargs)\n        summary = self.tf.Summary(value=[summary_val(k, v) for k, v in kvs.items()])\n        event = self.event_pb2.Event(wall_time=time.time(), summary=summary)\n        event.step = self.step # is there any reason why you'd want to specify the step?\n        self.writer.WriteEvent(event)\n        self.writer.Flush()\n        self.step += 1\n\n    def close(self):\n        if self.writer:\n            self.writer.Close()\n            self.writer = None\n\ndef make_output_format(format, ev_dir, log_suffix=''):\n    os.makedirs(ev_dir, exist_ok=True)\n    if format == 'stdout':\n        return HumanOutputFormat(sys.stdout)\n    elif format == 'log':\n        return 
HumanOutputFormat(osp.join(ev_dir, 'log%s.txt' % log_suffix))\n    elif format == 'json':\n        return JSONOutputFormat(osp.join(ev_dir, 'progress%s.json' % log_suffix))\n    elif format == 'csv':\n        return CSVOutputFormat(osp.join(ev_dir, 'progress%s.csv' % log_suffix))\n    elif format == 'tensorboard':\n        return TensorBoardOutputFormat(osp.join(ev_dir, 'tb%s' % log_suffix))\n    else:\n        raise ValueError('Unknown format specified: %s' % (format,))\n\n# ================================================================\n# API\n# ================================================================\n\ndef logkv(key, val):\n    \"\"\"\n    Log a value of some diagnostic\n    Call this once for each diagnostic quantity, each iteration\n    If called many times, last value will be used.\n    \"\"\"\n    get_current().logkv(key, val)\n\ndef logkv_mean(key, val):\n    \"\"\"\n    The same as logkv(), but if called many times, values averaged.\n    \"\"\"\n    get_current().logkv_mean(key, val)\n\ndef logkvs(d):\n    \"\"\"\n    Log a dictionary of key-value pairs\n    \"\"\"\n    for (k, v) in d.items():\n        logkv(k, v)\n\ndef dumpkvs():\n    \"\"\"\n    Write all of the diagnostics from the current iteration\n    \"\"\"\n    return get_current().dumpkvs()\n\ndef getkvs():\n    return get_current().name2val\n\n\ndef log(*args, level=INFO):\n    \"\"\"\n    Write the sequence of args, with no separators, to the console and output files (if you've configured an output file).\n    \"\"\"\n    get_current().log(*args, level=level)\n\ndef debug(*args):\n    log(*args, level=DEBUG)\n\ndef info(*args):\n    log(*args, level=INFO)\n\ndef warn(*args):\n    log(*args, level=WARN)\n\ndef error(*args):\n    log(*args, level=ERROR)\n\n\ndef set_level(level):\n    \"\"\"\n    Set logging threshold on current logger.\n    \"\"\"\n    get_current().set_level(level)\n\ndef set_comm(comm):\n    get_current().set_comm(comm)\n\ndef get_dir():\n    \"\"\"\n    Get directory that log files are being written to.\n    will be None if there is no output directory (i.e., if you didn't call start)\n    \"\"\"\n    return get_current().get_dir()\n\nrecord_tabular = logkv\ndump_tabular = dumpkvs\n\n@contextmanager\ndef profile_kv(scopename):\n    logkey = 'wait_' + scopename\n    tstart = time.time()\n    try:\n        yield\n    finally:\n        get_current().name2val[logkey] += time.time() - tstart\n\ndef profile(n):\n    \"\"\"\n    Usage:\n    @profile(\"my_func\")\n    def my_func(): code\n    \"\"\"\n    def decorator_with_name(func):\n        def func_wrapper(*args, **kwargs):\n            with profile_kv(n):\n                return func(*args, **kwargs)\n        return func_wrapper\n    return decorator_with_name\n\n\n# ================================================================\n# Backend\n# ================================================================\n\ndef get_current():\n    if Logger.CURRENT is None:\n        _configure_default_logger()\n\n    return Logger.CURRENT\n\n\nclass Logger(object):\n    DEFAULT = None  # A logger with no output files. 
(See right below class definition)\n                    # So that you can still log to the terminal without setting up any output files\n    CURRENT = None  # Current logger being used by the free functions above\n\n    def __init__(self, dir, output_formats, comm=None):\n        self.name2val = defaultdict(float)  # values this iteration\n        self.name2cnt = defaultdict(int)\n        self.level = INFO\n        self.dir = dir\n        self.output_formats = output_formats\n        self.comm = comm\n\n    # Logging API, forwarded\n    # ----------------------------------------\n    def logkv(self, key, val):\n        self.name2val[key] = val\n\n    def logkv_mean(self, key, val):\n        oldval, cnt = self.name2val[key], self.name2cnt[key]\n        self.name2val[key] = oldval*cnt/(cnt+1) + val/(cnt+1)\n        self.name2cnt[key] = cnt + 1\n\n    def dumpkvs(self):\n        if self.comm is None:\n            d = self.name2val\n        else:\n            from baselines.common import mpi_util\n            d = mpi_util.mpi_weighted_mean(self.comm,\n                {name : (val, self.name2cnt.get(name, 1))\n                    for (name, val) in self.name2val.items()})\n            if self.comm.rank != 0:\n                d['dummy'] = 1 # so we don't get a warning about empty dict\n        out = d.copy() # Return the dict for unit testing purposes\n        for fmt in self.output_formats:\n            if isinstance(fmt, KVWriter):\n                fmt.writekvs(d)\n        self.name2val.clear()\n        self.name2cnt.clear()\n        return out\n\n    def log(self, *args, level=INFO):\n        if self.level <= level:\n            self._do_log(args)\n\n    # Configuration\n    # ----------------------------------------\n    def set_level(self, level):\n        self.level = level\n\n    def set_comm(self, comm):\n        self.comm = comm\n\n    def get_dir(self):\n        return self.dir\n\n    def close(self):\n        for fmt in self.output_formats:\n            fmt.close()\n\n    # Misc\n    # ----------------------------------------\n    def _do_log(self, args):\n        for fmt in self.output_formats:\n            if isinstance(fmt, SeqWriter):\n                fmt.writeseq(map(str, args))\n\ndef get_rank_without_mpi_import():\n    # check environment variables here instead of importing mpi4py\n    # to avoid calling MPI_Init() when this module is imported\n    for varname in ['PMI_RANK', 'OMPI_COMM_WORLD_RANK']:\n        if varname in os.environ:\n            return int(os.environ[varname])\n    return 0\n\n\ndef configure(dir=None, format_strs=None, comm=None, log_suffix=''):\n    \"\"\"\n    If comm is provided, average all numerical stats across that comm\n    \"\"\"\n    if dir is None:\n        dir = os.getenv('OPENAI_LOGDIR')\n    if dir is None:\n        dir = osp.join(tempfile.gettempdir(),\n            datetime.datetime.now().strftime(\"openai-%Y-%m-%d-%H-%M-%S-%f\"))\n    assert isinstance(dir, str)\n    dir = os.path.expanduser(dir)\n    os.makedirs(os.path.expanduser(dir), exist_ok=True)\n\n    rank = get_rank_without_mpi_import()\n    if rank > 0:\n        log_suffix = log_suffix + \"-rank%03i\" % rank\n\n    if format_strs is None:\n        if rank == 0:\n            format_strs = os.getenv('OPENAI_LOG_FORMAT', 'stdout,log,csv').split(',')\n        else:\n            format_strs = os.getenv('OPENAI_LOG_FORMAT_MPI', 'log').split(',')\n    format_strs = filter(None, format_strs)\n    output_formats = [make_output_format(f, dir, log_suffix) for f in format_strs]\n\n    
Logger.CURRENT = Logger(dir=dir, output_formats=output_formats, comm=comm)\n    if output_formats:\n        log('Logging to %s'%dir)\n\ndef _configure_default_logger():\n    configure()\n    Logger.DEFAULT = Logger.CURRENT\n\ndef reset():\n    if Logger.CURRENT is not Logger.DEFAULT:\n        Logger.CURRENT.close()\n        Logger.CURRENT = Logger.DEFAULT\n        log('Reset logger')\n\n@contextmanager\ndef scoped_configure(dir=None, format_strs=None, comm=None):\n    prevlogger = Logger.CURRENT\n    configure(dir=dir, format_strs=format_strs, comm=comm)\n    try:\n        yield\n    finally:\n        Logger.CURRENT.close()\n        Logger.CURRENT = prevlogger\n\n# ================================================================\n\ndef _demo():\n    info(\"hi\")\n    debug(\"shouldn't appear\")\n    set_level(DEBUG)\n    debug(\"should appear\")\n    dir = \"/tmp/testlogging\"\n    if os.path.exists(dir):\n        shutil.rmtree(dir)\n    configure(dir=dir)\n    logkv(\"a\", 3)\n    logkv(\"b\", 2.5)\n    dumpkvs()\n    logkv(\"b\", -2.5)\n    logkv(\"a\", 5.5)\n    dumpkvs()\n    info(\"^^^ should see a = 5.5\")\n    logkv_mean(\"b\", -22.5)\n    logkv_mean(\"b\", -44.4)\n    logkv(\"a\", 5.5)\n    dumpkvs()\n    info(\"^^^ should see b = -33.3\")\n\n    logkv(\"b\", -2.5)\n    dumpkvs()\n\n    logkv(\"a\", \"longasslongasslongasslongasslongasslongassvalue\")\n    dumpkvs()\n\n\n# ================================================================\n# Readers\n# ================================================================\n\ndef read_json(fname):\n    import pandas\n    ds = []\n    with open(fname, 'rt') as fh:\n        for line in fh:\n            ds.append(json.loads(line))\n    return pandas.DataFrame(ds)\n\ndef read_csv(fname):\n    import pandas\n    return pandas.read_csv(fname, index_col=None, comment='#')\n\ndef read_tb(path):\n    \"\"\"\n    path : a tensorboard file OR a directory, where we will find all TB files\n           of the form events.*\n    \"\"\"\n    import pandas\n    import numpy as np\n    from glob import glob\n    import tensorflow as tf\n    if osp.isdir(path):\n        fnames = glob(osp.join(path, \"events.*\"))\n    elif osp.basename(path).startswith(\"events.\"):\n        fnames = [path]\n    else:\n        raise NotImplementedError(\"Expected tensorboard file or directory containing them. Got %s\"%path)\n    tag2pairs = defaultdict(list)\n    maxstep = 0\n    for fname in fnames:\n        for summary in tf.train.summary_iterator(fname):\n            if summary.step > 0:\n                for v in summary.summary.value:\n                    pair = (summary.step, v.simple_value)\n                    tag2pairs[v.tag].append(pair)\n                maxstep = max(summary.step, maxstep)\n    data = np.empty((maxstep, len(tag2pairs)))\n    data[:] = np.nan\n    tags = sorted(tag2pairs.keys())\n    for (colidx,tag) in enumerate(tags):\n        pairs = tag2pairs[tag]\n        for (step, value) in pairs:\n            data[step-1, colidx] = value\n    return pandas.DataFrame(data, columns=tags)\n\nif __name__ == \"__main__\":\n    _demo()\n"
  },
  {
    "path": "rl/vec_env/running_mean_std.py",
    "content": "#from typing import Tuple\r\n\r\nimport numpy as np\r\n\r\n\r\nclass RunningMeanStd(object):\r\n    def __init__(self, epsilon = 1e-4, shape = ()):\r\n        \"\"\"\r\n        Calulates the running mean and std of a data stream\r\n        https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Parallel_algorithm\r\n        :param epsilon: helps with arithmetic issues\r\n        :param shape: the shape of the data stream's output\r\n        \"\"\"\r\n        self.mean = np.zeros(shape, np.float64)\r\n        self.var = np.ones(shape, np.float64)\r\n        self.count = epsilon\r\n\r\n    def update(self, arr):\r\n        batch_mean = np.mean(arr, axis=0)\r\n        batch_var = np.var(arr, axis=0)\r\n        batch_count = arr.shape[0]\r\n        self.update_from_moments(batch_mean, batch_var, batch_count)\r\n\r\n    def update_from_moments(self, batch_mean, batch_var, batch_count):\r\n        delta = batch_mean - self.mean\r\n        tot_count = self.count + batch_count\r\n\r\n        new_mean = self.mean + delta * batch_count / tot_count\r\n        m_a = self.var * self.count\r\n        m_b = batch_var * batch_count\r\n        m_2 = m_a + m_b + np.square(delta) * self.count * batch_count / (self.count + batch_count)\r\n        new_var = m_2 / (self.count + batch_count)\r\n\r\n        new_count = batch_count + self.count\r\n\r\n        self.mean = new_mean\r\n        self.var = new_var\r\n        self.count = new_count"
  },
  {
    "path": "rl/vec_env/shmem_vec_env.py",
    "content": "\"\"\"\nAn interface for asynchronous vectorized environments.\n\"\"\"\n\nimport multiprocessing as mp\nimport numpy as np\nfrom .vec_env import VecEnv, CloudpickleWrapper, clear_mpi_env_vars\nimport ctypes\n#from . import logger\n\nfrom .util import dict_to_obs, obs_space_info, obs_to_dict\n\n_NP_TO_CT = {np.float32: ctypes.c_float,\n             np.int32: ctypes.c_int32,\n             np.int8: ctypes.c_int8,\n             np.uint8: ctypes.c_char,\n             np.bool: ctypes.c_bool}\n\n\nclass ShmemVecEnv(VecEnv):\n    \"\"\"\n    Optimized version of SubprocVecEnv that uses shared variables to communicate observations.\n    \"\"\"\n\n    def __init__(self, env_fns, spaces=None, context='spawn'):\n        \"\"\"\n        If you don't specify observation_space, we'll have to create a dummy\n        environment to get it.\n        \"\"\"\n        ctx = mp.get_context(context)\n        if spaces:\n            observation_space, action_space = spaces\n        else:\n            print('Creating dummy env object to get spaces')\n            dummy = env_fns[0]()\n            observation_space, action_space = dummy.observation_space, dummy.action_space\n            dummy.close()\n            del dummy\n        VecEnv.__init__(self, len(env_fns), observation_space, action_space)\n        self.obs_keys, self.obs_shapes, self.obs_dtypes = obs_space_info(observation_space)\n        self.obs_bufs = [\n            {k: ctx.Array(_NP_TO_CT[self.obs_dtypes[k].type], int(np.prod(self.obs_shapes[k]))) for k in self.obs_keys}\n            for _ in env_fns]\n        self.obs_list=[]\n        self.parent_pipes = []\n        self.procs = []\n        with clear_mpi_env_vars():\n            for env_fn, obs_buf in zip(env_fns, self.obs_bufs):\n                wrapped_fn = CloudpickleWrapper(env_fn)\n                parent_pipe, child_pipe = ctx.Pipe()\n                proc = ctx.Process(target=_subproc_worker,\n                            args=(child_pipe, parent_pipe, wrapped_fn, obs_buf, self.obs_shapes, self.obs_dtypes, self.obs_keys))\n                proc.daemon = True\n                self.procs.append(proc)\n                self.parent_pipes.append(parent_pipe)\n                proc.start()\n                child_pipe.close()\n        self.waiting_step = False\n        self.viewer = None\n\n    def reset(self):\n        if self.waiting_step:\n            print('Called reset() while waiting for the step to complete')\n            self.step_wait()\n        for pipe in self.parent_pipes:\n            pipe.send(('reset', None))\n        return self._decode_obses([pipe.recv() for pipe in self.parent_pipes])\n\n    def step_async(self, actions):\n        assert len(actions) == len(self.parent_pipes)\n        for pipe, act in zip(self.parent_pipes, actions):\n            pipe.send(('step', act))\n        self.waiting_step = True\n\n    def step_wait(self):\n        outs = [pipe.recv() for pipe in self.parent_pipes]\n        self.waiting_step = False\n        obs, rews, dones, infos = zip(*outs)\n        return self._decode_obses(obs), np.array(rews), np.array(dones), infos\n\n    def talk2Env_async(self, data):\n        assert len(data) == len(self.parent_pipes)\n        for pipe, d in zip(self.parent_pipes, data):\n            pipe.send(('talk2Env', d))\n        self.waiting_step = True\n\n    def talk2Env_wait(self):\n        outs = [pipe.recv() for pipe in self.parent_pipes]  # pipe.recv() is a blocking call\n        self.waiting_step = False\n        return np.array(outs)\n\n    def 
close_extras(self):\n        if self.waiting_step:\n            self.step_wait()\n        for pipe in self.parent_pipes:\n            pipe.send(('close', None))\n        for pipe in self.parent_pipes:\n            pipe.recv()\n            pipe.close()\n        for proc in self.procs:\n            proc.join()\n\n    def get_images(self, mode='human'):\n        for pipe in self.parent_pipes:\n            pipe.send(('render', None))\n        return [pipe.recv() for pipe in self.parent_pipes]\n\n    def _decode_obses(self, obs):\n        result = {}\n        self.obs_list=[{} for _ in range(len(self.obs_bufs))]\n        for k in self.obs_keys:\n            bufs = [b[k] for b in self.obs_bufs]\n            o=[]\n            for idx, b in enumerate(bufs):\n                item=np.frombuffer(b.get_obj(), dtype=self.obs_dtypes[k]).reshape(self.obs_shapes[k])\n                o.append(item)\n                self.obs_list[idx][k]=item\n            result[k] = np.array(o)\n        return dict_to_obs(result)\n\n\n\n\ndef _subproc_worker(pipe, parent_pipe, env_fn_wrapper, obs_bufs, obs_shapes, obs_dtypes, keys):\n    \"\"\"\n    Control a single environment instance using IPC and\n    shared memory.\n    \"\"\"\n    def _write_obs(maybe_dict_obs):\n        flatdict = obs_to_dict(maybe_dict_obs)\n        for k in keys:\n            dst = obs_bufs[k].get_obj()\n            dst_np = np.frombuffer(dst, dtype=obs_dtypes[k]).reshape(obs_shapes[k])  # pylint: disable=W0212\n            np.copyto(dst_np, flatdict[k])\n\n    env = env_fn_wrapper.x()\n    parent_pipe.close()\n    try:\n        while True:\n            cmd, data = pipe.recv()\n            if cmd == 'reset':\n                pipe.send(_write_obs(env.reset()))\n            elif cmd == 'step':\n                obs, reward, done, info = env.step(data)\n                if done:\n                    obs = env.reset()\n                pipe.send((_write_obs(obs), reward, done, info))\n            elif cmd == 'render':\n                pipe.send(env.render(mode='rgb_array'))\n            elif cmd == 'close':\n                pipe.send(None)\n                break\n            elif cmd == 'talk2Env':\n                pipe.send(env.talk2Env(data))\n            else:\n                raise RuntimeError('Got unrecognized cmd %s' % cmd)\n    except KeyboardInterrupt:\n        print('ShmemVecEnv worker: got KeyboardInterrupt')\n    finally:\n        env.close()\n"
  },
  {
    "path": "rl/vec_env/tile_images.py",
    "content": "import numpy as np\n\ndef tile_images(img_nhwc):\n    \"\"\"\n    Tile N images into one big PxQ image\n    (P,Q) are chosen to be as close as possible, and if N\n    is square, then P=Q.\n\n    input: img_nhwc, list or array of images, ndim=4 once turned into array\n        n = batch index, h = height, w = width, c = channel\n    returns:\n        bigim_HWc, ndarray with ndim=3\n    \"\"\"\n    img_nhwc = np.asarray(img_nhwc)\n    N, h, w, c = img_nhwc.shape\n    H = int(np.ceil(np.sqrt(N)))\n    W = int(np.ceil(float(N)/H))\n    img_nhwc = np.array(list(img_nhwc) + [img_nhwc[0]*0 for _ in range(N, H*W)])\n    img_HWhwc = img_nhwc.reshape(H, W, h, w, c)\n    img_HhWwc = img_HWhwc.transpose(0, 2, 1, 3, 4)\n    img_Hh_Ww_c = img_HhWwc.reshape(H*h, W*w, c)\n    return img_Hh_Ww_c\n\n"
  },
  {
    "path": "rl/vec_env/util.py",
    "content": "\"\"\"\nHelpers for dealing with vectorized environments.\n\"\"\"\n\nfrom collections import OrderedDict\n\nimport gym\nimport numpy as np\n\n\ndef copy_obs_dict(obs):\n    \"\"\"\n    Deep-copy an observation dict.\n    \"\"\"\n    return {k: np.copy(v) for k, v in obs.items()}\n\n\ndef dict_to_obs(obs_dict):\n    \"\"\"\n    Convert an observation dict into a raw array if the\n    original observation space was not a Dict space.\n    \"\"\"\n    if set(obs_dict.keys()) == {None}:\n        return obs_dict[None]\n    return obs_dict\n\n\ndef obs_space_info(obs_space):\n    \"\"\"\n    Get dict-structured information about a gym.Space.\n\n    Returns:\n      A tuple (keys, shapes, dtypes):\n        keys: a list of dict keys.\n        shapes: a dict mapping keys to shapes.\n        dtypes: a dict mapping keys to dtypes.\n    \"\"\"\n    if isinstance(obs_space, gym.spaces.Dict):\n        assert isinstance(obs_space.spaces, OrderedDict)\n        subspaces = obs_space.spaces\n    elif isinstance(obs_space, gym.spaces.Tuple):\n        assert isinstance(obs_space.spaces, tuple)\n        subspaces = {i: obs_space.spaces[i] for i in range(len(obs_space.spaces))}\n    else:\n        subspaces = {None: obs_space}\n    keys = []\n    shapes = {}\n    dtypes = {}\n    for key, box in subspaces.items():\n        keys.append(key)\n        shapes[key] = box.shape\n        dtypes[key] = box.dtype\n    return keys, shapes, dtypes\n\n\ndef obs_to_dict(obs):\n    \"\"\"\n    Convert an observation into a dict.\n    \"\"\"\n    if isinstance(obs, dict):\n        return obs\n    return {None: obs}\n"
  },
  {
    "path": "rl/vec_env/vec_env.py",
    "content": "import contextlib\nimport os\nimport abc\nfrom abc import abstractmethod\nimport abc\nABC = abc.ABCMeta('ABC', (), {})\n\n#from envs.vec_env.tile_images import tile_images\n\nclass AlreadySteppingError(Exception):\n    \"\"\"\n    Raised when an asynchronous step is running while\n    step_async() is called again.\n    \"\"\"\n\n    def __init__(self):\n        msg = 'already running an async step'\n        Exception.__init__(self, msg)\n\n\nclass NotSteppingError(Exception):\n    \"\"\"\n    Raised when an asynchronous step is not running but\n    step_wait() is called.\n    \"\"\"\n\n    def __init__(self):\n        msg = 'not running an async step'\n        Exception.__init__(self, msg)\n\n\nclass VecEnv(ABC):\n    \"\"\"\n    An abstract asynchronous, vectorized environment.\n    Used to batch data from multiple copies of an environment, so that\n    each observation becomes an batch of observations, and expected action is a batch of actions to\n    be applied per-environment.\n    \"\"\"\n    closed = False\n    viewer = None\n\n    metadata = {\n        'render.modes': ['human', 'rgb_array']\n    }\n\n    def __init__(self, num_envs, observation_space, action_space):\n        self.num_envs = num_envs\n        self.observation_space = observation_space\n        self.action_space = action_space\n\n    @abstractmethod\n    def reset(self):\n        \"\"\"\n        Reset all the environments and return an array of\n        observations, or a dict of observation arrays.\n\n        If step_async is still doing work, that work will\n        be cancelled and step_wait() should not be called\n        until step_async() is invoked again.\n        \"\"\"\n        pass\n\n    @abstractmethod\n    def step_async(self, actions):\n        \"\"\"\n        Tell all the environments to start taking a step\n        with the given actions.\n        Call step_wait() to get the results of the step.\n\n        You should not call this if a step_async run is\n        already pending.\n        \"\"\"\n        pass\n\n    @abstractmethod\n    def step_wait(self):\n        \"\"\"\n        Wait for the step taken with step_async().\n\n        Returns (obs, rews, dones, infos):\n         - obs: an array of observations, or a dict of\n                arrays of observations.\n         - rews: an array of rewards\n         - dones: an array of \"episode done\" booleans\n         - infos: a sequence of info objects\n        \"\"\"\n        pass\n\n    @abstractmethod\n    def talk2Env_async(self, data):\n        pass\n\n    @abstractmethod\n    def talk2Env_wait(self):\n        pass\n\n    def close_extras(self):\n        \"\"\"\n        Clean up the  extra resources, beyond what's in this base class.\n        Only runs when not self.closed.\n        \"\"\"\n        pass\n\n    def close(self):\n        if self.closed:\n            return\n        if self.viewer is not None:\n            self.viewer.close()\n        self.close_extras()\n        self.closed = True\n\n    def step(self, actions):\n        \"\"\"\n        Step the environments synchronously.\n\n        This is available for backwards compatibility.\n        \"\"\"\n        self.step_async(actions)\n        return self.step_wait()\n\n    def render(self, mode='human'):\n        imgs = self.get_images()\n        bigimg = tile_images(imgs)\n        if mode == 'human':\n            self.get_viewer().imshow(bigimg)\n            return self.get_viewer().isopen\n        elif mode == 'rgb_array':\n            return bigimg\n        else:\n      
      raise NotImplementedError\n\n    def talk2Env(self, data):\n        self.talk2Env_async(data)\n        return self.talk2Env_wait()\n\n    def get_images(self):\n        \"\"\"\n        Return RGB images from each environment\n        \"\"\"\n        raise NotImplementedError\n\n    @property\n    def unwrapped(self):\n        if isinstance(self, VecEnvWrapper):\n            return self.venv.unwrapped\n        else:\n            return self\n\n    def get_viewer(self):\n        if self.viewer is None:\n            from gym.envs.classic_control import rendering\n            self.viewer = rendering.SimpleImageViewer()\n        return self.viewer\n\nclass VecEnvWrapper(VecEnv):\n    \"\"\"\n    An environment wrapper that applies to an entire batch\n    of environments at once.\n    \"\"\"\n\n    def __init__(self, venv, observation_space=None, action_space=None):\n        self.venv = venv\n        super(VecEnvWrapper,self).__init__(num_envs=venv.num_envs,\n                        observation_space=observation_space or venv.observation_space,\n                        action_space=action_space or venv.action_space)\n\n    def step_async(self, actions):\n        self.venv.step_async(actions)\n\n    def talk2Env_async(self, data):\n        self.venv.talk2Env_async(data)\n\n    @abstractmethod\n    def talk2Env_wait(self):\n        pass\n\n    @abstractmethod\n    def reset(self):\n        pass\n\n    @abstractmethod\n    def step_wait(self):\n        pass\n\n    def close(self):\n        return self.venv.close()\n\n    def render(self, mode='human'):\n        return self.venv.render(mode=mode)\n\n    def get_images(self):\n        return self.venv.get_images()\n\n    def __getattr__(self, name):\n        if name.startswith('_'):\n            raise AttributeError(\"attempted to get missing private attribute '{}'\".format(name))\n        return getattr(self.venv, name)\n\nclass VecEnvObservationWrapper(VecEnvWrapper):\n    @abstractmethod\n    def process(self, obs):\n        pass\n\n    def reset(self):\n        obs = self.venv.reset()\n        return self.process(obs)\n\n    def step_wait(self):\n        obs, rews, dones, infos = self.venv.step_wait()\n        return self.process(obs), rews, dones, infos\n\nclass CloudpickleWrapper(object):\n    \"\"\"\n    Uses cloudpickle to serialize contents (otherwise multiprocessing tries to use pickle)\n    \"\"\"\n\n    def __init__(self, x):\n        self.x = x\n\n    def __getstate__(self):\n        import cloudpickle\n        return cloudpickle.dumps(self.x)\n\n    def __setstate__(self, ob):\n        import pickle\n        self.x = pickle.loads(ob)\n\n\n@contextlib.contextmanager\ndef clear_mpi_env_vars():\n    \"\"\"\n    from mpi4py import MPI will call MPI_Init by default.  If the child process has MPI environment variables, MPI will think that the child process is an MPI process just like the parent and do bad things such as hang.\n    This context manager is a hacky way to clear those environment variables temporarily such as when we are starting multiprocessing\n    Processes.\n    \"\"\"\n    removed_environment = {}\n    for k, v in list(os.environ.items()):\n        for prefix in ['OMPI_', 'PMI_']:\n            if k.startswith(prefix):\n                removed_environment[k] = v\n                del os.environ[k]\n    try:\n        yield\n    finally:\n        os.environ.update(removed_environment)\n"
  },
  {
    "path": "rl/vec_env/vec_frame_stack.py",
    "content": "from .vec_env import VecEnvWrapper\nimport numpy as np\nfrom gym import spaces\n\n\nclass VecFrameStack(VecEnvWrapper):\n    def __init__(self, venv, nstack):\n        self.venv = venv\n        self.nstack = nstack\n        wos = venv.observation_space  # wrapped ob space\n        low = np.repeat(wos.low, self.nstack, axis=-1)\n        high = np.repeat(wos.high, self.nstack, axis=-1)\n        self.stackedobs = np.zeros((venv.num_envs,) + low.shape, low.dtype)\n        observation_space = spaces.Box(low=low, high=high, dtype=venv.observation_space.dtype)\n        VecEnvWrapper.__init__(self, venv, observation_space=observation_space)\n\n    def step_wait(self):\n        obs, rews, news, infos = self.venv.step_wait()\n        self.stackedobs = np.roll(self.stackedobs, shift=-1, axis=-1)\n        for (i, new) in enumerate(news):\n            if new:\n                self.stackedobs[i] = 0\n        self.stackedobs[..., -obs.shape[-1]:] = obs\n        return self.stackedobs, rews, news, infos\n\n    def reset(self):\n        obs = self.venv.reset()\n        self.stackedobs[...] = 0\n        self.stackedobs[..., -obs.shape[-1]:] = obs\n        return self.stackedobs\n"
  },
  {
    "path": "rl/vec_env/vec_normalize.py",
    "content": "from . import VecEnvWrapper\nimport numpy as np\n\nclass VecNormalize(VecEnvWrapper):\n    \"\"\"\n    A vectorized wrapper that normalizes the observations\n    and returns from an environment.\n    \"\"\"\n\n    def __init__(self, venv, ob=True, ret=True, clipob=10., cliprew=10., gamma=0.99, epsilon=1e-8, use_tf=False):\n        VecEnvWrapper.__init__(self, venv)\n        if use_tf:\n            from baselines.common.running_mean_std import TfRunningMeanStd\n            self.ob_rms = TfRunningMeanStd(shape=self.observation_space.shape, scope='ob_rms') if ob else None\n            self.ret_rms = TfRunningMeanStd(shape=(), scope='ret_rms') if ret else None\n        else:\n            from baselines.common.running_mean_std import RunningMeanStd\n            self.ob_rms = RunningMeanStd(shape=self.observation_space.shape) if ob else None\n            self.ret_rms = RunningMeanStd(shape=()) if ret else None\n        self.clipob = clipob\n        self.cliprew = cliprew\n        self.ret = np.zeros(self.num_envs)\n        self.gamma = gamma\n        self.epsilon = epsilon\n\n    def step_wait(self):\n        obs, rews, news, infos = self.venv.step_wait()\n        self.ret = self.ret * self.gamma + rews\n        obs = self._obfilt(obs)\n        if self.ret_rms:\n            self.ret_rms.update(self.ret)\n            rews = np.clip(rews / np.sqrt(self.ret_rms.var + self.epsilon), -self.cliprew, self.cliprew)\n        self.ret[news] = 0.\n        return obs, rews, news, infos\n\n    def _obfilt(self, obs):\n        if self.ob_rms:\n            self.ob_rms.update(obs)\n            obs = np.clip((obs - self.ob_rms.mean) / np.sqrt(self.ob_rms.var + self.epsilon), -self.clipob, self.clipob)\n            return obs\n        else:\n            return obs\n\n    def reset(self):\n        self.ret = np.zeros(self.num_envs)\n        obs = self.venv.reset()\n        return self._obfilt(obs)\n"
  },
  {
    "path": "rl/vec_env/vec_pretext_normalize.py",
    "content": "from . import VecEnvWrapper\r\nimport numpy as np\r\nfrom .running_mean_std import RunningMeanStd\r\nimport torch\r\nimport os\r\nfrom collections import deque\r\n\r\nimport copy\r\nimport pickle\r\n\r\nfrom gst_updated.src.gumbel_social_transformer.temperature_scheduler import Temp_Scheduler\r\nfrom gst_updated.scripts.wrapper.crowd_nav_interface_parallel import CrowdNavPredInterfaceMultiEnv\r\n\r\nclass VecPretextNormalize(VecEnvWrapper):\r\n    \"\"\"\r\n    A vectorized wrapper that processes the observations and rewards, used for GST predictors\r\n    and returns from an environment.\r\n    config: a Config object\r\n    test: whether we are training or testing\r\n    \"\"\"\r\n\r\n    def __init__(self, venv, ob=False, ret=False, clipob=10., cliprew=10., gamma=0.99, epsilon=1e-8, config=None, test=False):\r\n        VecEnvWrapper.__init__(self, venv)\r\n\r\n        self.config = config\r\n        self.device=torch.device(self.config.training.device)\r\n        if test:\r\n            self.num_envs = 1\r\n        else:\r\n            self.num_envs = self.config.env.num_processes\r\n\r\n        self.max_human_num = config.sim.human_num + config.sim.human_num_range\r\n\r\n        self.ob_rms = RunningMeanStd(shape=self.observation_space.shape) if ob else None\r\n        self.ret_rms = RunningMeanStd(shape=()) if ret else None\r\n        self.clipob = clipob\r\n        self.cliprew = cliprew\r\n        self.ret = torch.zeros(self.num_envs).to(self.device)\r\n        self.gamma = gamma\r\n        self.epsilon = epsilon\r\n\r\n        # load and configure the prediction model\r\n        load_path = os.path.join(os.getcwd(), self.config.pred.model_dir)\r\n        if not os.path.isdir(load_path):\r\n            raise RuntimeError('The result directory was not found.')\r\n        checkpoint_dir = os.path.join(load_path, 'checkpoint')\r\n        with open(os.path.join(checkpoint_dir, 'args.pickle'), 'rb') as f:\r\n            self.args = pickle.load(f)\r\n\r\n        self.predictor = CrowdNavPredInterfaceMultiEnv(load_path=load_path, device=self.device, config = self.args, num_env = self.num_envs)\r\n\r\n        temperature_scheduler = Temp_Scheduler(self.args.num_epochs, self.args.init_temp, self.args.init_temp, temp_min=0.03)\r\n        self.tau = temperature_scheduler.decay_whole_process(epoch=100)\r\n\r\n        # handle different prediction and control frequency\r\n        self.pred_interval = int(self.config.data.pred_timestep//self.config.env.time_step)\r\n        self.buffer_len = (self.args.obs_seq_len - 1) * self.pred_interval + 1\r\n\r\n\r\n\r\n    def talk2Env_async(self, data):\r\n        self.venv.talk2Env_async(data)\r\n\r\n\r\n    def talk2Env_wait(self):\r\n        outs=self.venv.talk2Env_wait()\r\n        return outs\r\n\r\n    def step_wait(self):\r\n        obs, rews, done, infos = self.venv.step_wait()\r\n\r\n        # process the observations and reward\r\n        obs, rews = self.process_obs_rew(obs, done, rews=rews)\r\n\r\n        return obs, rews, done, infos\r\n\r\n    def _obfilt(self, obs):\r\n        if self.ob_rms and self.config.RLTrain:\r\n            self.ob_rms.update(obs)\r\n            obs = np.clip((obs - self.ob_rms.mean) / np.sqrt(self.ob_rms.var + self.epsilon), -self.clipob, self.clipob)\r\n            return obs\r\n        else:\r\n            return obs\r\n\r\n    def reset(self):\r\n        # queue for inputs to the pred model\r\n        # fill the queue with dummy values\r\n        self.traj_buffer = 
deque(list(-torch.ones((self.buffer_len, self.num_envs, self.max_human_num, 2), device=self.device)*999),\r\n                                 maxlen=self.buffer_len) # (n_env, num_peds, obs_seq_len, 2)\r\n        self.mask_buffer = deque(list(torch.zeros((self.buffer_len, self.num_envs, self.max_human_num, 1), dtype=torch.bool, device=self.device)),\r\n                                 maxlen=self.buffer_len) # (n_env, num_peds, obs_seq_len, 1)\r\n\r\n        self.step_counter = 0\r\n\r\n        # for calculating the displacement of human positions\r\n        self.last_pos = torch.zeros(self.num_envs, self.max_human_num, 2).to(self.device)\r\n\r\n        obs = self.venv.reset()\r\n        obs, _ = self.process_obs_rew(obs, np.zeros(self.num_envs))\r\n\r\n        return obs\r\n\r\n\r\n    '''\r\n    1. Process observations: \r\n    Run inference on pred model with past obs as inputs, fill in the predicted trajectory in O['spatial_edges']\r\n    \r\n    2. Process rewards: \r\n    Calculate reward for colliding with predicted future traj and add to the original reward, \r\n    same as calc_reward() function in crowd_sim_pred.py except the data are torch tensors\r\n    '''\r\n    def process_obs_rew(self, O, done, rews=0.):\r\n        # O: robot_node: [nenv, 1, 7], spatial_edges: [nenv, observed_human_num, 2*(1+predict_steps)],temporal_edges: [nenv, 1, 2],\r\n        # pos_mask: [nenv, max_human_num], pos_disp_mask: [nenv, max_human_num]\r\n        # prepare inputs for pred_model\r\n        # find humans' absolute positions\r\n        human_pos = O['robot_node'][:, :, :2] + O['spatial_edges'][:, :, :2]\r\n\r\n        # insert the new ob to deque\r\n        self.traj_buffer.append(human_pos)\r\n        self.mask_buffer.append(O['visible_masks'].unsqueeze(-1))\r\n        # [obs_seq_len, nenv, max_human_num, 2] -> [nenv, max_human_num, obs_seq_len, 2]\r\n        in_traj = torch.stack(list(self.traj_buffer)).permute(1, 2, 0, 3)\r\n        in_mask = torch.stack(list(self.mask_buffer)).permute(1, 2, 0, 3).float()\r\n\r\n        # index select the input traj and input mask for GST\r\n        in_traj = in_traj[:, :, ::self.pred_interval]\r\n        in_mask = in_mask[:, :, ::self.pred_interval]\r\n\r\n        # forward predictor model\r\n        out_traj, out_mask = self.predictor.forward(input_traj=in_traj, input_binary_mask=in_mask)\r\n        out_mask = out_mask.bool()\r\n\r\n        # add penalties if the robot collides with predicted future pos of humans\r\n        # deterministic reward, only uses mu_x, mu_y and a predefined radius\r\n        # constant radius of each personal zone circle\r\n        # [nenv, human_num, predict_steps]\r\n        hr_dist_future = out_traj[:, :, :, :2] - O['robot_node'][:, :, :2].unsqueeze(1)\r\n        # [nenv, human_num, predict_steps]\r\n        collision_idx = torch.norm(hr_dist_future, dim=-1) < self.config.robot.radius + self.config.humans.radius\r\n\r\n        # [1,1, predict_steps]\r\n        # mask out invalid predictions\r\n        # [nenv, human_num, predict_steps] AND [nenv, human_num, 1]\r\n        collision_idx = torch.logical_and(collision_idx, out_mask)\r\n        coefficients = 2. 
** torch.arange(2, self.config.sim.predict_steps + 2, device=self.device).reshape(\r\n            (1, 1, self.config.sim.predict_steps))  # 4, 8, 16, 32, 64\r\n\r\n        # [1, 1, predict_steps]\r\n        collision_penalties = self.config.reward.collision_penalty / coefficients\r\n\r\n        # [nenv, human_num, predict_steps]\r\n        reward_future = collision_idx.to(torch.float)*collision_penalties\r\n        # [nenv, human_num, predict_steps] -> [nenv, human_num*predict_steps] -> [nenv,]\r\n        # keep the values & discard indices\r\n        reward_future, _ = torch.min(reward_future.reshape(self.num_envs, -1), dim=1)\r\n        # print(reward_future)\r\n        # seems that rews is on cpu\r\n        rews = rews + reward_future.reshape(self.num_envs, 1).cpu().numpy()\r\n\r\n        # get observation back to env\r\n        robot_pos = O['robot_node'][:, :, :2].unsqueeze(1)\r\n\r\n        # convert from positions in world frame to robot frame\r\n        out_traj[:, :, :, :2] = out_traj[:, :, :, :2] - robot_pos\r\n\r\n        # only take mu_x and mu_y\r\n        out_mask = out_mask.repeat(1, 1, self.config.sim.predict_steps * 2)\r\n        new_spatial_edges = out_traj[:, :, :, :2].reshape(self.num_envs, self.max_human_num, -1)\r\n        O['spatial_edges'][:, :, 2:][out_mask] = new_spatial_edges[out_mask]\r\n\r\n        # sort all humans by distance to robot\r\n        # [nenv, human_num]\r\n        hr_dist_cur = torch.linalg.norm(O['spatial_edges'][:, :, :2], dim=-1)\r\n        sorted_idx = torch.argsort(hr_dist_cur, dim=1)\r\n        # sorted_idx = sorted_idx.unsqueeze(-1).repeat(1, 1, 2*(self.config.sim.predict_steps+1))\r\n        for i in range(self.num_envs):\r\n            O['spatial_edges'][i] = O['spatial_edges'][i][sorted_idx[i]]\r\n\r\n        obs={'robot_node':O['robot_node'],\r\n            'spatial_edges':O['spatial_edges'],\r\n            'temporal_edges':O['temporal_edges'],\r\n            'visible_masks':O['visible_masks'],\r\n             'detected_human_num': O['detected_human_num'],\r\n\r\n        }\r\n\r\n        self.last_pos = copy.deepcopy(human_pos)\r\n        self.step_counter = self.step_counter + 1\r\n\r\n        return obs, rews\r\n"
  },
  {
    "path": "rl/vec_env/vec_remove_dict_obs.py",
    "content": "from .vec_env import VecEnvObservationWrapper\n\nclass VecExtractDictObs(VecEnvObservationWrapper):\n    def __init__(self, venv, key):\n        self.key = key\n        super().__init__(venv=venv,\n            observation_space=venv.observation_space.spaces[self.key])\n\n    def process(self, obs):\n        return obs[self.key]\n"
  },
  {
    "path": "test.py",
    "content": "import logging\nimport argparse\nimport os\nimport sys\nfrom matplotlib import pyplot as plt\nimport torch\nimport torch.nn as nn\n\nfrom rl.networks.envs import make_vec_envs\nfrom rl.evaluation import evaluate\nfrom rl.networks.model import Policy\n\nfrom crowd_sim import *\n\n\ndef main():\n\t\"\"\"\n\tThe main function for testing a trained model\n\t\"\"\"\n\t# the following parameters will be determined for each test run\n\tparser = argparse.ArgumentParser('Parse configuration file')\n\t# the model directory that we are testing\n\tparser.add_argument('--model_dir', type=str, default='trained_models/GST_predictor_rand')\n\t# render the environment or not\n\tparser.add_argument('--visualize', default=True, action='store_true')\n\t# if -1, it will run 500 different cases; if >=0, it will run the specified test case repeatedly\n\tparser.add_argument('--test_case', type=int, default=-1)\n\t# model weight file you want to test\n\tparser.add_argument('--test_model', type=str, default='41665.pt')\n\t# whether to save trajectories of episodes\n\tparser.add_argument('--render_traj', default=False, action='store_true')\n\t# whether to save slide show of episodes\n\tparser.add_argument('--save_slides', default=False, action='store_true')\n\ttest_args = parser.parse_args()\n\tif test_args.save_slides:\n\t\ttest_args.visualize = True\n\n\tfrom importlib import import_module\n\tmodel_dir_temp = test_args.model_dir\n\tif model_dir_temp.endswith('/'):\n\t\tmodel_dir_temp = model_dir_temp[:-1]\n\t# import arguments.py from saved directory\n\t# if not found, import from the default directory\n\ttry:\n\t\tmodel_dir_string = model_dir_temp.replace('/', '.') + '.arguments'\n\t\tmodel_arguments = import_module(model_dir_string)\n\t\tget_args = getattr(model_arguments, 'get_args')\n\texcept:\n\t\tprint('Failed to get get_args function from ', test_args.model_dir, '/arguments.py')\n\t\tfrom arguments import get_args\n\n\talgo_args = get_args()\n\n\t# import config class from saved directory\n\t# if not found, import from the default directory\n\n\ttry:\n\t\tmodel_dir_string = model_dir_temp.replace('/', '.') + '.configs.config'\n\t\tmodel_arguments = import_module(model_dir_string)\n\t\tConfig = getattr(model_arguments, 'Config')\n\n\texcept:\n\t\tprint('Failed to get Config function from ', test_args.model_dir)\n\t\tfrom crowd_nav.configs.config import Config\n\tenv_config = config = Config()\n\n\n\t# configure logging and device\n\t# print test result in log file\n\tlog_file = os.path.join(test_args.model_dir,'test')\n\tif not os.path.exists(log_file):\n\t\tos.mkdir(log_file)\n\tif test_args.visualize:\n\t\tlog_file = os.path.join(test_args.model_dir, 'test', 'test_visual.log')\n\telse:\n\t\tlog_file = os.path.join(test_args.model_dir, 'test', 'test_' + test_args.test_model + '.log')\n\n\n\n\tfile_handler = logging.FileHandler(log_file, mode='w')\n\tstdout_handler = logging.StreamHandler(sys.stdout)\n\tlevel = logging.INFO\n\tlogging.basicConfig(level=level, handlers=[stdout_handler, file_handler],\n\t\t\t\t\t\tformat='%(asctime)s, %(levelname)s: %(message)s', datefmt=\"%Y-%m-%d %H:%M:%S\")\n\n\tlogging.info('robot FOV %f', config.robot.FOV)\n\tlogging.info('humans FOV %f', config.humans.FOV)\n\n\ttorch.manual_seed(algo_args.seed)\n\ttorch.cuda.manual_seed_all(algo_args.seed)\n\tif algo_args.cuda:\n\t\tif algo_args.cuda_deterministic:\n\t\t\t# reproducible but slower\n\t\t\ttorch.backends.cudnn.benchmark = False\n\t\t\ttorch.backends.cudnn.deterministic = True\n\t\telse:\n\t\t\t# not 
reproducible but faster\n\t\t\ttorch.backends.cudnn.benchmark = True\n\t\t\ttorch.backends.cudnn.deterministic = False\n\n\n\ttorch.set_num_threads(1)\n\tdevice = torch.device(\"cuda\" if algo_args.cuda else \"cpu\")\n\n\tlogging.info('Create other envs with new settings')\n\n\t# set up visualization\n\tif test_args.visualize:\n\t\tfig, ax = plt.subplots(figsize=(7, 7))\n\t\tax.set_xlim(-6.5, 6.5) # 6\n\t\tax.set_ylim(-6.5, 6.5)\n\t\tax.axes.xaxis.set_visible(False)\n\t\tax.axes.yaxis.set_visible(False)\n\t\t# ax.set_xlabel('x(m)', fontsize=16)\n\t\t# ax.set_ylabel('y(m)', fontsize=16)\n\t\tplt.ion()\n\t\tplt.show()\n\telse:\n\t\tax = None\n\n\n\tload_path=os.path.join(test_args.model_dir,'checkpoints', test_args.test_model)\n\tprint(load_path)\n\n\n\t# create an environment\n\tenv_name = algo_args.env_name\n\n\teval_dir = os.path.join(test_args.model_dir,'eval')\n\tif not os.path.exists(eval_dir):\n\t\tos.mkdir(eval_dir)\n\n\tenv_config.render_traj = test_args.render_traj\n\tenv_config.save_slides = test_args.save_slides\n\tenv_config.save_path = os.path.join(test_args.model_dir, 'social_eval', test_args.test_model[:-3])\n\tenvs = make_vec_envs(env_name, algo_args.seed, 1,\n\t\t\t\t\t\t algo_args.gamma, eval_dir, device, allow_early_resets=True,\n\t\t\t\t\t\t config=env_config, ax=ax, test_case=test_args.test_case, pretext_wrapper=config.env.use_wrapper)\n\n\tif config.robot.policy not in ['orca', 'social_force']:\n\t\t# load the policy weights\n\t\tactor_critic = Policy(\n\t\t\tenvs.observation_space.spaces,\n\t\t\tenvs.action_space,\n\t\t\tbase_kwargs=algo_args,\n\t\t\tbase=config.robot.policy)\n\t\tactor_critic.load_state_dict(torch.load(load_path, map_location=device))\n\t\tactor_critic.base.nenv = 1\n\n\t\t# allow the usage of multiple GPUs to increase the number of examples processed simultaneously\n\t\tnn.DataParallel(actor_critic).to(device)\n\telse:\n\t\tactor_critic = None\n\n\ttest_size = config.env.test_size\n\n\t# call the evaluation function\n\tevaluate(actor_critic, envs, 1, device, test_size, logging, config, algo_args, test_args.visualize)\n\n\nif __name__ == '__main__':\n\tmain()"
  },
  {
    "path": "train.py",
    "content": "import os\nimport shutil\nimport time\nfrom collections import deque\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom rl import ppo\nfrom rl.networks import network_utils\nfrom arguments import get_args\nfrom rl.networks.envs import make_vec_envs\nfrom rl.networks.model import Policy\nfrom rl.networks.storage import RolloutStorage\n\n\nfrom crowd_nav.configs.config import Config\nfrom crowd_sim import *\n\n\ndef main():\n\t\"\"\"\n\tmain function for training a robot policy network\n\t\"\"\"\n\t# read arguments\n\talgo_args = get_args()\n\n\t# create a directory for saving the logs and weights\n\tif not os.path.exists(algo_args.output_dir):\n\t\tos.makedirs(algo_args.output_dir)\n\t# if output_dir exists and overwrite = False\n\telif not algo_args.overwrite:\n\t\traise ValueError('output_dir already exists!')\n\n\tsave_config_dir = os.path.join(algo_args.output_dir, 'configs')\n\tif not os.path.exists(save_config_dir):\n\t\tos.makedirs(save_config_dir)\n\tshutil.copy('crowd_nav/configs/config.py', save_config_dir)\n\tshutil.copy('crowd_nav/configs/__init__.py', save_config_dir)\n\tshutil.copy('arguments.py', algo_args.output_dir)\n\n\n\tenv_config = config = Config()\n\n\ttorch.manual_seed(algo_args.seed)\n\ttorch.cuda.manual_seed_all(algo_args.seed)\n\tif algo_args.cuda:\n\t\tif algo_args.cuda_deterministic:\n\t\t\t# reproducible but slower\n\t\t\ttorch.backends.cudnn.benchmark = False\n\t\t\ttorch.backends.cudnn.deterministic = True\n\t\telse:\n\t\t\t# not reproducible but faster\n\t\t\ttorch.backends.cudnn.benchmark = True\n\t\t\ttorch.backends.cudnn.deterministic = False\n\n\n\n\ttorch.set_num_threads(algo_args.num_threads)\n\tdevice = torch.device(\"cuda\" if algo_args.cuda else \"cpu\")\n\n\n\tenv_name = algo_args.env_name\n\n\tif config.sim.render:\n\t\talgo_args.num_processes = 1\n\t\talgo_args.num_mini_batch = 1\n\n\t# for visualization\n\tif config.sim.render:\n\t\tfig, ax = plt.subplots(figsize=(7, 7))\n\t\tax.set_xlim(-10, 10)\n\t\tax.set_ylim(-10, 10)\n\t\tax.set_xlabel('x(m)', fontsize=16)\n\t\tax.set_ylabel('y(m)', fontsize=16)\n\t\tplt.ion()\n\t\tplt.show()\n\telse:\n\t\tax = None\n\n\n\t# Create a wrapped, monitored VecEnv\n\tenvs = make_vec_envs(env_name, algo_args.seed, algo_args.num_processes,\n\t\t\t\t\t\t algo_args.gamma, None, device, False, config=env_config, ax=ax, pretext_wrapper=config.env.use_wrapper)\n\n\n\t# create a policy network\n\tactor_critic = Policy(\n\t\tenvs.observation_space.spaces, # pass the Dict into policy to parse\n\t\tenvs.action_space,\n\t\tbase_kwargs=algo_args,\n\t\tbase=config.robot.policy)\n\n\t# storage buffer to store the agent's experience\n\trollouts = RolloutStorage(algo_args.num_steps,\n\t\t\t\t\t\t\t  algo_args.num_processes,\n\t\t\t\t\t\t\t  envs.observation_space.spaces,\n\t\t\t\t\t\t\t  envs.action_space,\n\t\t\t\t\t\t\t  algo_args.human_node_rnn_size,\n\t\t\t\t\t\t\t  algo_args.human_human_edge_rnn_size)\n\n\t# continue training from an existing model if resume = True\n\tif algo_args.resume:\n\t\tload_path = config.training.load_path\n\t\tactor_critic.load_state_dict(torch.load(load_path))\n\t\tprint(\"Loaded the following checkpoint:\", load_path)\n\n\n\t# allow the usage of multiple GPUs to increase the number of examples processed simultaneously\n\tnn.DataParallel(actor_critic).to(device)\n\n\t# create the ppo optimizer\n\tagent = 
ppo.PPO(\n\t\tactor_critic,\n\t\talgo_args.clip_param,\n\t\talgo_args.ppo_epoch,\n\t\talgo_args.num_mini_batch,\n\t\talgo_args.value_loss_coef,\n\t\talgo_args.entropy_coef,\n\t\tlr=algo_args.lr,\n\t\teps=algo_args.eps,\n\t\tmax_grad_norm=algo_args.max_grad_norm)\n\n\n\n\tobs = envs.reset()\n\tif isinstance(obs, dict):\n\t\tfor key in obs:\n\t\t\trollouts.obs[key][0].copy_(obs[key])\n\telse:\n\t\trollouts.obs[0].copy_(obs)\n\n\trollouts.to(device)\n\n\tepisode_rewards = deque(maxlen=100)\n\n\tstart = time.time()\n\tnum_updates = int(\n\t\talgo_args.num_env_steps) // algo_args.num_steps // algo_args.num_processes\n\n\t# start the training loop\n\tfor j in range(num_updates):\n\t\t# schedule learning rate if needed\n\t\tif algo_args.use_linear_lr_decay:\n\t\t\tnetwork_utils.update_linear_schedule(\n\t\t\t\tagent.optimizer, j, num_updates,\n\t\t\t\tagent.optimizer.lr if algo_args.algo == \"acktr\" else algo_args.lr)\n\n\t\t# step the environment for num_steps steps\n\t\tfor step in range(algo_args.num_steps):\n\t\t\t# Sample actions\n\t\t\twith torch.no_grad():\n\n\t\t\t\trollouts_obs = {}\n\t\t\t\tfor key in rollouts.obs:\n\t\t\t\t\trollouts_obs[key] = rollouts.obs[key][step]\n\t\t\t\trollouts_hidden_s = {}\n\t\t\t\tfor key in rollouts.recurrent_hidden_states:\n\t\t\t\t\trollouts_hidden_s[key] = rollouts.recurrent_hidden_states[key][step]\n\t\t\t\tvalue, action, action_log_prob, recurrent_hidden_states = actor_critic.act(\n\t\t\t\t\trollouts_obs, rollouts_hidden_s,\n\t\t\t\t\trollouts.masks[step])\n\n\t\t\t# if we use real prediction, send predictions to env for rendering\n\t\t\tif env_name == 'CrowdSimPredRealGST-v0' and env_config.env.use_wrapper:\n\t\t\t\t# [nenv, max_human_num, 2*(pred_steps+1)] -> [nenv, max_human_num, 2*pred_steps]\n\t\t\t\tout_pred = rollouts_obs['spatial_edges'][:, :, 2:].to('cpu').numpy()\n\t\t\t\t# send manager action to all processes\n\t\t\t\tack = envs.talk2Env(out_pred)\n\t\t\t\tassert all(ack)\n\n\t\t\tif config.sim.render:\n\t\t\t\tenvs.render()\n\t\t\t# Observe reward and next obs\n\t\t\tobs, reward, done, infos = envs.step(action)\n\n\n\t\t\tfor info in infos:\n\t\t\t\tif 'episode' in info.keys():\n\t\t\t\t\tepisode_rewards.append(info['episode']['r'])\n\n\t\t\t# If done then clean the history of observations.\n\t\t\tmasks = torch.FloatTensor(\n\t\t\t\t[[0.0] if done_ else [1.0] for done_ in done])\n\t\t\tbad_masks = torch.FloatTensor(\n\t\t\t\t[[0.0] if 'bad_transition' in info.keys() else [1.0]\n\t\t\t\t for info in infos])\n\t\t\trollouts.insert(obs, recurrent_hidden_states, action,\n\t\t\t\t\t\t\taction_log_prob, value, reward, masks, bad_masks)\n\t\t# compute the value of the last state to bootstrap the returns\n\t\twith torch.no_grad():\n\t\t\trollouts_obs = {}\n\t\t\tfor key in rollouts.obs:\n\t\t\t\trollouts_obs[key] = rollouts.obs[key][-1]\n\t\t\trollouts_hidden_s = {}\n\t\t\tfor key in rollouts.recurrent_hidden_states:\n\t\t\t\trollouts_hidden_s[key] = rollouts.recurrent_hidden_states[key][-1]\n\t\t\tnext_value = actor_critic.get_value(\n\t\t\t\trollouts_obs, rollouts_hidden_s,\n\t\t\t\trollouts.masks[-1]).detach()\n\n\t\t# compute advantage and gradient, and update the network parameters\n\t\trollouts.compute_returns(next_value, algo_args.use_gae, algo_args.gamma,\n\t\t\t\t\t\t\t\t algo_args.gae_lambda, algo_args.use_proper_time_limits)\n\n\t\tvalue_loss, action_loss, dist_entropy = agent.update(rollouts)\n\n\t\trollouts.after_update()\n\n\t\t# save the model for every interval-th episode or for the last epoch\n\t\tif (j % algo_args.save_interval == 0\n\t\t\tor j == 
num_updates - 1) :\n\t\t\tsave_path = os.path.join(algo_args.output_dir, 'checkpoints')\n\t\t\tif not os.path.exists(save_path):\n\t\t\t\tos.mkdir(save_path)\n\n\t\t\ttorch.save(actor_critic.state_dict(), os.path.join(save_path, '%.5i'%j + \".pt\"))\n\n\t\tif j % algo_args.log_interval == 0 and len(episode_rewards) > 1:\n\t\t\ttotal_num_steps = (j + 1) * algo_args.num_processes * algo_args.num_steps\n\t\t\tend = time.time()\n\t\t\tprint(\n\t\t\t\t\"Updates {}, num timesteps {}, FPS {} \\n Last {} training episodes: mean/median reward \"\n\t\t\t\t\"{:.1f}/{:.1f}, min/max reward {:.1f}/{:.1f}\\n\"\n\t\t\t\t\t.format(j, total_num_steps,\n\t\t\t\t\t\t\tint(total_num_steps / (end - start)),\n\t\t\t\t\t\t\tlen(episode_rewards), np.mean(episode_rewards),\n\t\t\t\t\t\t\tnp.median(episode_rewards), np.min(episode_rewards),\n\t\t\t\t\t\t\tnp.max(episode_rewards), dist_entropy, value_loss,\n\t\t\t\t\t\t\taction_loss))\n\n\t\t\tdf = pd.DataFrame({'misc/nupdates': [j], 'misc/total_timesteps': [total_num_steps],\n\t\t\t\t\t\t\t   'fps': int(total_num_steps / (end - start)), 'eprewmean': [np.mean(episode_rewards)],\n\t\t\t\t\t\t\t   'loss/policy_entropy': dist_entropy, 'loss/policy_loss': action_loss,\n\t\t\t\t\t\t\t   'loss/value_loss': value_loss})\n\n\t\t\tif os.path.exists(os.path.join(algo_args.output_dir, 'progress.csv')) and j > 20:\n\t\t\t\tdf.to_csv(os.path.join(algo_args.output_dir, 'progress.csv'), mode='a', header=False, index=False)\n\t\t\telse:\n\t\t\t\tdf.to_csv(os.path.join(algo_args.output_dir, 'progress.csv'), mode='w', header=True, index=False)\n\n\tenvs.close()\n\n\nif __name__ == '__main__':\n\tmain()\n"
  }
]