Full Code of ZheC/GTA-IM-Dataset for AI

Repository: ZheC/GTA-IM-Dataset
Branch: master
Commit: 31d7baaab027
Files: 11
Total size: 24.5 MB

Directory structure:
GTA-IM-Dataset/

├── .gitignore
├── LICENSE
├── README.md
├── demo/
│   ├── info_frames.npz
│   └── info_frames.pickle
├── environment.yml
├── gen_npz.py
├── gta_utils.py
├── vis_2d_pose_depth.py
├── vis_skeleton_pcd.py
└── vis_video.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Project specifics
2020*
GTA-IM*
data/

# Direnv stuffs
.direnv
.envrc

# Compiled source
*.class
*.dll
*.exe
*.o
*.so
*.pyc
**/__pycache__/

# Packages
# It's better to unpack these files and commit the raw source because
# git has its own built in compression methods.
*.7z
*.jar
*.rar
*.zip
*.gz
*.bzip
*.xz
*.lzma

# packing-only formats
*.iso
*.tar

# package management formats
*.dmg
*.xpi
*.gem
*.egg
*.egg-info
*.deb
*.rpm

# Logs and databases
*.log
*.sqlite

# OS generated files
.DS_Store
.Spotlight-V100
.Trashes
._*

# Linux
.fuse_hidden*
.nfs*

# Windows image file caches
Thumbs.db

# Folder config file
Desktop.ini

# Vim
.*.s[a-w][a-z]

# IDE stuffs
.idea/
*.iml
.project
.classpath
.settings/
.ipynb_checkpoints/


================================================
FILE: LICENSE
================================================
Copyright (c) 2020, Zhe Cao, Hang Gao, Qi-Zhi Cai
All rights reserved.

This dataset and code are licensed under the license found in the
LICENSE file in the root directory of this source tree.

Attribution-NonCommercial 4.0 International

=======================================================================

Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.

Using Creative Commons Public Licenses

Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.

     Considerations for licensors: Our public licenses are
     intended for use by those authorized to give the public
     permission to use material in ways otherwise restricted by
     copyright and certain other rights. Our licenses are
     irrevocable. Licensors should read and understand the terms
     and conditions of the license they choose before applying it.
     Licensors should also secure all rights necessary before
     applying our licenses so that the public can reuse the
     material as expected. Licensors should clearly mark any
     material not subject to the license. This includes other CC-
     licensed material, or material used under an exception or
     limitation to copyright. More considerations for licensors:
	wiki.creativecommons.org/Considerations_for_licensors

     Considerations for the public: By using one of our public
     licenses, a licensor grants the public permission to use the
     licensed material under specified terms and conditions. If
     the licensor's permission is not necessary for any reason--for
     example, because of any applicable exception or limitation to
     copyright--then that use is not regulated by the license. Our
     licenses grant only permissions under copyright and certain
     other rights that a licensor has authority to grant. Use of
     the licensed material may still be restricted for other
     reasons, including because others have copyright or other
     rights in the material. A licensor may make special requests,
     such as asking that all changes be marked or described.
     Although not required by our licenses, you are encouraged to
     respect those requests where reasonable. More considerations
     for the public: 
	wiki.creativecommons.org/Considerations_for_licensees

=======================================================================

Creative Commons Attribution-NonCommercial 4.0 International Public
License

By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution-NonCommercial 4.0 International Public License ("Public
License"). To the extent this Public License may be interpreted as a
contract, You are granted the Licensed Rights in consideration of Your
acceptance of these terms and conditions, and the Licensor grants You
such rights in consideration of benefits the Licensor receives from
making the Licensed Material available under these terms and
conditions.


Section 1 -- Definitions.

  a. Adapted Material means material subject to Copyright and Similar
     Rights that is derived from or based upon the Licensed Material
     and in which the Licensed Material is translated, altered,
     arranged, transformed, or otherwise modified in a manner requiring
     permission under the Copyright and Similar Rights held by the
     Licensor. For purposes of this Public License, where the Licensed
     Material is a musical work, performance, or sound recording,
     Adapted Material is always produced where the Licensed Material is
     synched in timed relation with a moving image.

  b. Adapter's License means the license You apply to Your Copyright
     and Similar Rights in Your contributions to Adapted Material in
     accordance with the terms and conditions of this Public License.

  c. Copyright and Similar Rights means copyright and/or similar rights
     closely related to copyright including, without limitation,
     performance, broadcast, sound recording, and Sui Generis Database
     Rights, without regard to how the rights are labeled or
     categorized. For purposes of this Public License, the rights
     specified in Section 2(b)(1)-(2) are not Copyright and Similar
     Rights.
  d. Effective Technological Measures means those measures that, in the
     absence of proper authority, may not be circumvented under laws
     fulfilling obligations under Article 11 of the WIPO Copyright
     Treaty adopted on December 20, 1996, and/or similar international
     agreements.

  e. Exceptions and Limitations means fair use, fair dealing, and/or
     any other exception or limitation to Copyright and Similar Rights
     that applies to Your use of the Licensed Material.

  f. Licensed Material means the artistic or literary work, database,
     or other material to which the Licensor applied this Public
     License.

  g. Licensed Rights means the rights granted to You subject to the
     terms and conditions of this Public License, which are limited to
     all Copyright and Similar Rights that apply to Your use of the
     Licensed Material and that the Licensor has authority to license.

  h. Licensor means the individual(s) or entity(ies) granting rights
     under this Public License.

  i. NonCommercial means not primarily intended for or directed towards
     commercial advantage or monetary compensation. For purposes of
     this Public License, the exchange of the Licensed Material for
     other material subject to Copyright and Similar Rights by digital
     file-sharing or similar means is NonCommercial provided there is
     no payment of monetary compensation in connection with the
     exchange.

  j. Share means to provide material to the public by any means or
     process that requires permission under the Licensed Rights, such
     as reproduction, public display, public performance, distribution,
     dissemination, communication, or importation, and to make material
     available to the public including in ways that members of the
     public may access the material from a place and at a time
     individually chosen by them.

  k. Sui Generis Database Rights means rights other than copyright
     resulting from Directive 96/9/EC of the European Parliament and of
     the Council of 11 March 1996 on the legal protection of databases,
     as amended and/or succeeded, as well as other essentially
     equivalent rights anywhere in the world.

  l. You means the individual or entity exercising the Licensed Rights
     under this Public License. Your has a corresponding meaning.


Section 2 -- Scope.

  a. License grant.

       1. Subject to the terms and conditions of this Public License,
          the Licensor hereby grants You a worldwide, royalty-free,
          non-sublicensable, non-exclusive, irrevocable license to
          exercise the Licensed Rights in the Licensed Material to:

            a. reproduce and Share the Licensed Material, in whole or
               in part, for NonCommercial purposes only; and

            b. produce, reproduce, and Share Adapted Material for
               NonCommercial purposes only.

       2. Exceptions and Limitations. For the avoidance of doubt, where
          Exceptions and Limitations apply to Your use, this Public
          License does not apply, and You do not need to comply with
          its terms and conditions.

       3. Term. The term of this Public License is specified in Section
          6(a).

       4. Media and formats; technical modifications allowed. The
          Licensor authorizes You to exercise the Licensed Rights in
          all media and formats whether now known or hereafter created,
          and to make technical modifications necessary to do so. The
          Licensor waives and/or agrees not to assert any right or
          authority to forbid You from making technical modifications
          necessary to exercise the Licensed Rights, including
          technical modifications necessary to circumvent Effective
          Technological Measures. For purposes of this Public License,
          simply making modifications authorized by this Section 2(a)
          (4) never produces Adapted Material.

       5. Downstream recipients.

            a. Offer from the Licensor -- Licensed Material. Every
               recipient of the Licensed Material automatically
               receives an offer from the Licensor to exercise the
               Licensed Rights under the terms and conditions of this
               Public License.

            b. No downstream restrictions. You may not offer or impose
               any additional or different terms or conditions on, or
               apply any Effective Technological Measures to, the
               Licensed Material if doing so restricts exercise of the
               Licensed Rights by any recipient of the Licensed
               Material.

       6. No endorsement. Nothing in this Public License constitutes or
          may be construed as permission to assert or imply that You
          are, or that Your use of the Licensed Material is, connected
          with, or sponsored, endorsed, or granted official status by,
          the Licensor or others designated to receive attribution as
          provided in Section 3(a)(1)(A)(i).

  b. Other rights.

       1. Moral rights, such as the right of integrity, are not
          licensed under this Public License, nor are publicity,
          privacy, and/or other similar personality rights; however, to
          the extent possible, the Licensor waives and/or agrees not to
          assert any such rights held by the Licensor to the limited
          extent necessary to allow You to exercise the Licensed
          Rights, but not otherwise.

       2. Patent and trademark rights are not licensed under this
          Public License.

       3. To the extent possible, the Licensor waives any right to
          collect royalties from You for the exercise of the Licensed
          Rights, whether directly or through a collecting society
          under any voluntary or waivable statutory or compulsory
          licensing scheme. In all other cases the Licensor expressly
          reserves any right to collect such royalties, including when
          the Licensed Material is used other than for NonCommercial
          purposes.


Section 3 -- License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the
following conditions.

  a. Attribution.

       1. If You Share the Licensed Material (including in modified
          form), You must:

            a. retain the following if it is supplied by the Licensor
               with the Licensed Material:

                 i. identification of the creator(s) of the Licensed
                    Material and any others designated to receive
                    attribution, in any reasonable manner requested by
                    the Licensor (including by pseudonym if
                    designated);

                ii. a copyright notice;

               iii. a notice that refers to this Public License;

                iv. a notice that refers to the disclaimer of
                    warranties;

                 v. a URI or hyperlink to the Licensed Material to the
                    extent reasonably practicable;

            b. indicate if You modified the Licensed Material and
               retain an indication of any previous modifications; and

            c. indicate the Licensed Material is licensed under this
               Public License, and include the text of, or the URI or
               hyperlink to, this Public License.

       2. You may satisfy the conditions in Section 3(a)(1) in any
          reasonable manner based on the medium, means, and context in
          which You Share the Licensed Material. For example, it may be
          reasonable to satisfy the conditions by providing a URI or
          hyperlink to a resource that includes the required
          information.

       3. If requested by the Licensor, You must remove any of the
          information required by Section 3(a)(1)(A) to the extent
          reasonably practicable.

       4. If You Share Adapted Material You produce, the Adapter's
          License You apply must not prevent recipients of the Adapted
          Material from complying with this Public License.


Section 4 -- Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:

  a. for the avoidance of doubt, Section 2(a)(1) grants You the right
     to extract, reuse, reproduce, and Share all or a substantial
     portion of the contents of the database for NonCommercial purposes
     only;

  b. if You include all or a substantial portion of the database
     contents in a database in which You have Sui Generis Database
     Rights, then the database in which You have Sui Generis Database
     Rights (but not its individual contents) is Adapted Material; and

  c. You must comply with the conditions in Section 3(a) if You Share
     all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.


Section 5 -- Disclaimer of Warranties and Limitation of Liability.

  a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
     EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
     AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
     ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
     IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
     WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
     PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
     ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
     KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
     ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.

  b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
     TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
     NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
     INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
     COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
     USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
     ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
     DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
     IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.

  c. The disclaimer of warranties and limitation of liability provided
     above shall be interpreted in a manner that, to the extent
     possible, most closely approximates an absolute disclaimer and
     waiver of all liability.


Section 6 -- Term and Termination.

  a. This Public License applies for the term of the Copyright and
     Similar Rights licensed here. However, if You fail to comply with
     this Public License, then Your rights under this Public License
     terminate automatically.

  b. Where Your right to use the Licensed Material has terminated under
     Section 6(a), it reinstates:

       1. automatically as of the date the violation is cured, provided
          it is cured within 30 days of Your discovery of the
          violation; or

       2. upon express reinstatement by the Licensor.

     For the avoidance of doubt, this Section 6(b) does not affect any
     right the Licensor may have to seek remedies for Your violations
     of this Public License.

  c. For the avoidance of doubt, the Licensor may also offer the
     Licensed Material under separate terms or conditions or stop
     distributing the Licensed Material at any time; however, doing so
     will not terminate this Public License.

  d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
     License.


Section 7 -- Other Terms and Conditions.

  a. The Licensor shall not be bound by any additional or different
     terms or conditions communicated by You unless expressly agreed.

  b. Any arrangements, understandings, or agreements regarding the
     Licensed Material not stated herein are separate from and
     independent of the terms and conditions of this Public License.


Section 8 -- Interpretation.

  a. For the avoidance of doubt, this Public License does not, and
     shall not be interpreted to, reduce, limit, restrict, or impose
     conditions on any use of the Licensed Material that could lawfully
     be made without permission under this Public License.

  b. To the extent possible, if any provision of this Public License is
     deemed unenforceable, it shall be automatically reformed to the
     minimum extent necessary to make it enforceable. If the provision
     cannot be reformed, it shall be severed from this Public License
     without affecting the enforceability of the remaining terms and
     conditions.

  c. No term or condition of this Public License will be waived and no
     failure to comply consented to unless expressly agreed to by the
     Licensor.

  d. Nothing in this Public License constitutes or may be interpreted
     as a limitation upon, or waiver of, any privileges and immunities
     that apply to the Licensor or You, including from the legal
     processes of any jurisdiction or authority.

=======================================================================

Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the “Licensor.” The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.

Creative Commons may be contacted at creativecommons.org.


================================================
FILE: README.md
================================================
# <b>GTA-IM Dataset</b> [[Website]](https://people.eecs.berkeley.edu/~zhecao/hmp/)

<div align=center>
<img src="assets/sample1.gif" width=32%>
<img src="assets/sample2.gif" width=32%>
<img src="assets/sample3.gif" width=32%>
</div>

<br>

**Long-term Human Motion Prediction with Scene Context, ECCV 2020 (Oral)** [PDF](https://arxiv.org/pdf/2007.03672.pdf)
<br>
[Zhe Cao](http://people.eecs.berkeley.edu/~zhecao/), [Hang Gao](http://people.eecs.berkeley.edu/~hangg/), [Karttikeya Mangalam](https://karttikeya.github.io/), [Qi-Zhi Cai](https://scholar.google.com/citations?user=oyh-YNwAAAAJ&hl=en), [Minh Vo](https://minhpvo.github.io/), [Jitendra Malik](https://people.eecs.berkeley.edu/~malik/). <br>

This repository maintains our GTA Indoor Motion dataset (GTA-IM), which emphasizes human-scene interactions in indoor environments. We collect HD RGB-D image sequences of 3D human motion from a realistic game engine. The dataset has clean 3D human pose and camera pose annotations, and large diversity in human appearances, indoor environments, camera views, and human activities.

**Table of contents**<br>
1. [A demo for playing with our dataset.](#demo)<br>
2. [Instructions to request our full dataset.](#requesting-dataset)<br>
3. [Documentation on our dataset structure and contents.](#dataset-contents)<br>


## Demo

### (0) Getting Started
Clone this repository, and create a local environment: `conda env create -f environment.yml`.

For your convenience, we provide a fragment of our data in the `demo` directory. In this section, you will be able to play with different parts of our data using the maintained tool scripts.

### (1) 3D skeleton & point cloud
```bash
$ python vis_skeleton_pcd.py -h
usage: vis_skeleton_pcd.py [-h] [-pa PATH] [-f FRAME] [-fw FUSION_WINDOW]

# now visualize demo 3d skeleton and point cloud!
$ python vis_skeleton_pcd.py -pa demo -f 2720 -fw 80
```

You should be able to see an Open3D viewer with our 3D skeleton and point cloud data; press 'h' in the viewer to see how to control the viewpoint:
<img src="assets/vis_skeleton_pcd.gif" width=100%>

Note that we use `open3d == 0.7.0`; the visualization code is not compatible with newer versions of open3d.
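
If you are unsure which version you have, a quick sanity check (module-level names such as `LineSet` and `draw_geometries`, which `vis_skeleton_pcd.py` imports, were moved into submodules in later open3d releases):

```python
import open3d as o3d

# the tool scripts expect the 0.7 API surface
print(o3d.__version__)  # should print 0.7.0
```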

### (2) 2D skeleton & depth map
```bash
$ python vis_2d_pose_depth.py -h
usage: vis_2d_pose_depth.py [-h] [-pa PATH]

# now visualize 2d skeleton and depth map!
$ python vis_2d_pose_depth.py -pa demo
```

You should be able to find a created `demo/vis/` directory with `*_vis.jpg` files that render to a movie strip like this:
<img src="assets/vis_2d_pose_depth.gif" width=80%>

### (3) RGB video
```bash
$ python vis_video.py -h
usage: vis_video.py [-h] [-pa PATH] [-s SCALE] [-fr FRAME_RATE]

# now visualize demo video!
$ python vis_video.py -pa demo -fr 15
```

You should be able to find a created `demo/vis/` directory with a `video.mp4`.

## Requesting Dataset

To obtain the Dataset, please send an email to [Zhe Cao](https://people.eecs.berkeley.edu/~zhecao/) (with the title "GTA-IM Dataset Download") stating:

- Your name, title, and affiliation
- Your intended use of the data
- The following statement:
    > With this email we declare that we will use the GTA-IM Dataset for non-commercial research purposes only. We also undertake to purchase a copy of Grand Theft Auto V. We will not redistribute the data in any form except in academic publications where necessary to present examples.

We will promptly reply with the download link.


## Dataset Contents

After you download the data from our link and unzip it, each sequence folder will contain the following files:

- `images`:
    - color images: `*.jpg`
    - depth images: `*.png`
    - instance masks: `*_id.png`
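
    To decode a depth image into metric depth, you can use `read_depthmap` from `gta_utils.py`. A minimal sketch, assuming a downloaded sequence folder (the name below is hypothetical) with a frame `00000.png`; the clip values come from the matching entry in `info_frames.pickle`, with 800.0 as the fallback far clip used by the visualization scripts:

    ````python
    import os
    import pickle

    from gta_utils import read_depthmap

    seq_dir = '2020-06-10-21-47-45'  # hypothetical sequence folder
    info = pickle.load(open(os.path.join(seq_dir, 'info_frames.pickle'), 'rb'))
    infot = info[0]
    cam_far_clip = infot.get('cam_far_clip', 800.0)  # fallback from the vis scripts
    depth = read_depthmap(
        os.path.join(seq_dir, '00000.png'), infot['cam_near_clip'], cam_far_clip
    )
    print(depth.shape)  # (1080, 1920, 1), depth in meters
    ````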

<br>

- `info_frames.pickle`: a pickle file that contains camera information, 3D human poses (98 joints) in the global coordinate system, the weather condition, the character ID, and so on.

    ````python
    import pickle
    info = pickle.load(open(data_path + 'info_frames.pickle', 'rb'))
    print(info[0].keys())
    ````
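
    Each per-frame dictionary also carries the camera parameters used by the tool scripts. A small sketch of recovering the HD camera's focal length with `get_focal_length` from `gta_utils.py` (the principal point is (960.0, 540.0) for the 1920x1080 images, as in `gen_npz.py`):

    ````python
    from gta_utils import get_focal_length

    infot = info[0]
    f = get_focal_length(infot['cam_near_clip'], infot['cam_field_of_view'])
    print(f)  # focal length in pixels
    ````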

<br>

- `info_frames.npz`: contains five arrays. 21 of the 98 human joints are extracted to form the minimal skeleton. [Here](gen_npz.py) is how we generate it from the raw captures.

    - `joints_2d`: 2D human poses on the HD image plane
    - `joints_3d_cam`: 3D human poses in the current frame's camera coordinate system
    - `joints_3d_world`: 3D human poses in the game/world coordinate system
    - `world2cam_trans`: the world-to-camera transformation matrix for each frame
    - `intrinsics`: camera intrinsics

    <br>

    ````python
    import numpy as np
    info_npz = np.load(rec_idx + 'info_frames.npz')
    print(info_npz.files)
    # 2d poses for frame 0
    print(info_npz['joints_2d'][0])
    ````
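
    As a consistency check, the arrays compose exactly as in `gen_npz.py`: homogeneous world-coordinate joints right-multiplied by `world2cam_trans` give camera-coordinate joints, and the intrinsics project those onto the image plane. A minimal sketch against the demo data:

    ````python
    import numpy as np

    info_npz = np.load('demo/info_frames.npz')
    i = 0  # frame index

    jw = info_npz['joints_3d_world'][i]                     # (21, 3)
    jw_h = np.concatenate([jw, np.ones((21, 1))], axis=-1)  # homogeneous
    jc = (jw_h @ info_npz['world2cam_trans'][i])[:, :3]     # camera coords

    uv = info_npz['intrinsics'][i] @ jc.T
    uv = (uv[:2] / uv[2]).T                                 # (21, 2)
    assert np.allclose(uv, info_npz['joints_2d'][i])
    ````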


<br>

- `realtimeinfo.pickle`: a backup pickle file that contains all the information from the data collection.

#### Joint Types

The human skeleton connection and joints index name:

```python
LIMBS = [
    (0, 1),  # head_center -> neck
    (1, 2),  # neck -> right_clavicle
    (2, 3),  # right_clavicle -> right_shoulder
    (3, 4),  # right_shoulder -> right_elbow
    (4, 5),  # right_elbow -> right_wrist
    (1, 6),  # neck -> left_clavicle
    (6, 7),  # left_clavicle -> left_shoulder
    (7, 8),  # left_shoulder -> left_elbow
    (8, 9),  # left_elbow -> left_wrist
    (1, 10),  # neck -> spine0
    (10, 11),  # spine0 -> spine1
    (11, 12),  # spine1 -> spine2
    (12, 13),  # spine2 -> spine3
    (13, 14),  # spine3 -> spine4
    (14, 15),  # spine4 -> right_hip
    (15, 16),  # right_hip -> right_knee
    (16, 17),  # right_knee -> right_ankle
    (14, 18),  # spine4 -> left_hip
    (18, 19),  # left_hip -> left_knee
    (19, 20)  # left_knee -> left_ankle
]
```
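
Since the comments above fix the index order, a lookup table for the 21 joints follows directly. Note that `JOINT_NAMES` below is not part of the repository, just a hypothetical helper derived from the `LIMBS` comments:

```python
# hypothetical helper: joint index -> joint name, taken from the LIMBS comments
JOINT_NAMES = [
    'head_center', 'neck',
    'right_clavicle', 'right_shoulder', 'right_elbow', 'right_wrist',
    'left_clavicle', 'left_shoulder', 'left_elbow', 'left_wrist',
    'spine0', 'spine1', 'spine2', 'spine3', 'spine4',
    'right_hip', 'right_knee', 'right_ankle',
    'left_hip', 'left_knee', 'left_ankle',
]

for i0, i1 in LIMBS:
    print(JOINT_NAMES[i0], '->', JOINT_NAMES[i1])
```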

## Important Note

This dataset is for non-commercial research purposes only. Due to public interest, I decided to reimplement the data generation pipeline from scratch to collect the GTA-IM dataset again. I did not use Facebook resources to reproduce the data.

## Citation

We believe in open research and we will be happy if you find this data useful.
If you use it, please consider citing our [work](https://people.eecs.berkeley.edu/~zhecao/hmp/preprint.pdf).

```latex
@incollection{caoHMP2020,
  author = {Zhe Cao and
    Hang Gao and
    Karttikeya Mangalam and
    Qizhi Cai and
    Minh Vo and
    Jitendra Malik},
  title = {Long-term human motion prediction with scene context},
  booktitle = {ECCV},
  year = {2020},
}
```

## Acknowledgement

Our data collection pipeline was built upon [this plugin](https://github.com/philkr/gamehook_gtav) and [this tool](https://github.com/fabbrimatteo/JTA-Mods).

## LICENSE
Our project is released under [CC-BY-NC 4.0](https://github.com/ZheC/GTA-IM-Dataset/tree/master/LICENSE).


================================================
FILE: demo/info_frames.pickle
================================================
[File too large to display: 24.5 MB]

================================================
FILE: environment.yml
================================================
name: gta-im
channels:
  - conda-forge
  - open3d-admin
dependencies:
  - python=3.6
  - tqdm
  - numpy
  - numba
  - pillow
  - matplotlib
  - opencv
  - open3d=0.7


================================================
FILE: gen_npz.py
================================================
"""
GTA-IM Dataset
"""

import glob
import os
import pickle

import numba
import numpy as np


@numba.jit(nopython=True, nogil=True)
def rot_axis(angle, axis):
    cg = np.cos(angle)
    sg = np.sin(angle)
    if axis == 0:  # X
        v = [0, 4, 5, 7, 8]
    elif axis == 1:  # Y
        v = [4, 0, 6, 2, 8]
    else:  # Z
        v = [8, 0, 1, 3, 4]
    RX = np.zeros(9, dtype=numba.float64)
    RX[v[0]] = 1.0
    RX[v[1]] = cg
    RX[v[2]] = -sg
    RX[v[3]] = sg
    RX[v[4]] = cg
    return RX.reshape(3, 3)


@numba.jit(nopython=True, nogil=True)
def rotate(vector, angle, inverse=False):
    """
    Rotation of x, y, z axis
    Forward rotate order: Z, Y, X
    Inverse rotate order: X^T, Y^T,Z^T
    Input:
        vector: vector in 3D coordinates
        angle: rotation along X, Y, Z (raw data from GTA)
    Output:
        out: rotated vector
    """
    gamma, beta, alpha = angle[0], angle[1], angle[2]

    # Rotation matrices around the X (gamma), Y (beta), and Z (alpha) axis
    RX = rot_axis(gamma, 0)
    RY = rot_axis(beta, 1)
    RZ = rot_axis(alpha, 2)

    # Composed rotation matrix with (RX, RY, RZ)
    if inverse:
        return np.dot(np.dot(np.dot(RX.T, RY.T), RZ.T), vector)
    else:
        return np.dot(np.dot(np.dot(RZ, RY), RX), vector)


def angle2rot(rotation, inverse=False):
    return rotate(np.eye(3), rotation, inverse=inverse)


class Pose:
    def __init__(self, position, rotation):
        # relative position to the 1st frame: (X, Y, Z)
        # relative rotation to the previous frame: (r_x, r_y, r_z)
        self.position = position
        self.rotation = angle2rot(rotation)
        magic_rot = angle2rot(np.array([np.pi / 2, 0, 0]), inverse=True)
        self.rotation = self.rotation.dot(magic_rot)


def get_focal_length(cam_near_clip, cam_field_of_view):
    near_clip_height = (
        2 * cam_near_clip * np.tan(cam_field_of_view / 2.0 * (np.pi / 180.0))
    )

    # camera focal length
    return 1080.0 / near_clip_height * cam_near_clip


def get_cam_extr(cam_pos, cam_rot):
    cam_pos = np.array(cam_pos)
    cam_rot = np.array(cam_rot)

    pose = Pose(cam_pos, cam_rot / 180.0 * np.pi)
    cam_extr = np.eye(4)
    cam_extr[:3, :3] = pose.rotation
    cam_extr[:3, -1] = pose.position

    return cam_extr


if __name__ == '__main__':
    rec_inds = glob.glob('2020*')
    for data_path in rec_inds:
        if '.zip' in data_path:
            continue
        print(data_path)
        data_path += '/'
        info_path = data_path + 'realtimeinfo.gz'
        info = pickle.load(open(info_path, 'rb'))['frames']

        new_info = []
        joints_2d_seq = []
        joints_3d_cam_seq = []
        joints_3d_world_seq = []
        world2cam_trans = []
        intrinsics = []
        count = 0
        for i in range(len(info)):
            infot = info[i]
            # Change the image names
            prefix = data_path + str(infot['time'])
            if os.path.exists(prefix + '_final.jpg') and os.path.exists(
                prefix + '_depth.png'
            ):
                os.rename(
                    prefix + '_final.jpg',
                    data_path + '{:05d}'.format(count) + '.jpg',
                )
                os.rename(
                    prefix + '_depth.png',
                    data_path + '{:05d}'.format(count) + '.png',
                )
                os.rename(
                    prefix + '_id.png',
                    data_path + '{:05d}'.format(count) + '_id.png',
                )
                count = count + 1

                # 3d keypoints
                keypoint = [
                    infot['head'],
                    infot['neck'],
                    infot['right_clavicle'],
                    infot['right_shoulder'],
                    infot['right_elbow'],
                    infot['right_wrist'],
                    infot['left_clavicle'],
                    infot['left_shoulder'],
                    infot['left_elbow'],
                    infot['left_wrist'],
                    infot['spine0'],
                    infot['spine1'],
                    infot['spine2'],
                    infot['spine3'],
                    infot['spine4'],
                    infot['right_hip'],
                    infot['right_knee'],
                    infot['right_ankle'],
                    infot['left_hip'],
                    infot['left_knee'],
                    infot['left_ankle'],
                ]

                # camera parameters
                cam_near_clip = infot['cam_near_clip']
                cam_field_of_view = infot['cam_field_of_view']
                focal_length = get_focal_length(
                    cam_near_clip, cam_field_of_view
                )
                intrinsic = np.asarray(
                    [
                        [focal_length, 0, 960.0],
                        [0, focal_length, 540.0],
                        [0, 0, 1],
                    ]
                )
                cam_extr_ref = get_cam_extr(infot['cam_pos'], infot['cam_rot'])

                joints = np.asarray(keypoint)
                jn = joints.shape[0]

                joints_world = np.concatenate(
                    [joints, np.ones((jn, 1))], axis=-1
                )
                joints_cam = joints_world.dot(np.linalg.inv(cam_extr_ref.T))[
                    :, :3
                ]
                joints_2d = np.matmul(intrinsic, joints_cam.T)
                joints_2d = (
                    joints_2d[0] / joints_2d[2],
                    joints_2d[1] / joints_2d[2],
                )
                gta_pose_2d = np.asarray(joints_2d).T.reshape(jn, 2)
                joints_cam = joints_cam.reshape(jn, 3)

                joints_2d_seq.append(np.asarray(joints_2d).T)
                joints_3d_cam_seq.append(joints_cam)
                joints_3d_world_seq.append(joints)
                world2cam_trans.append(np.linalg.inv(cam_extr_ref.T))
                intrinsics.append(intrinsic)

                new_info.append(infot)

        np.savez(
            data_path + 'info_frames.npz',
            joints_2d=np.asarray(joints_2d_seq),
            joints_3d_cam=np.asarray(joints_3d_cam_seq),
            joints_3d_world=np.asarray(joints_3d_world_seq),
            world2cam_trans=np.asarray(world2cam_trans),
            intrinsics=np.asarray(intrinsics),
        )

        fn = open(data_path + 'info_frames.pickle', 'wb')
        pickle.dump(new_info, fn)


================================================
FILE: gta_utils.py
================================================
"""
GTA-IM Dataset
"""

import cv2
import numpy as np

LIMBS = [
    (0, 1),  # head_center -> neck
    (1, 2),  # neck -> right_clavicle
    (2, 3),  # right_clavicle -> right_shoulder
    (3, 4),  # right_shoulder -> right_elbow
    (4, 5),  # right_elbow -> right_wrist
    (1, 6),  # neck -> left_clavicle
    (6, 7),  # left_clavicle -> left_shoulder
    (7, 8),  # left_shoulder -> left_elbow
    (8, 9),  # left_elbow -> left_wrist
    (1, 10),  # neck -> spine0
    (10, 11),  # spine0 -> spine1
    (11, 12),  # spine1 -> spine2
    (12, 13),  # spine2 -> spine3
    (13, 14),  # spine3 -> spine4
    (14, 15),  # spine4 -> right_hip
    (15, 16),  # right_hip -> right_knee
    (16, 17),  # right_knee -> right_ankle
    (14, 18),  # spine4 -> left_hip
    (18, 19),  # left_hip -> left_knee
    (19, 20),  # left_knee -> left_ankle
]


####################
# camera utils.
def get_focal_length(cam_near_clip, cam_field_of_view):
    near_clip_height = (
        2 * cam_near_clip * np.tan(cam_field_of_view / 2.0 * (np.pi / 180.0))
    )

    # camera focal length
    return 1080.0 / near_clip_height * cam_near_clip


def get_2d_from_3d(
    vertex,
    cam_coords,
    cam_rotation,
    cam_near_clip,
    cam_field_of_view,
    WIDTH=1920,
    HEIGHT=1080,
):
    WORLD_NORTH = np.array([0.0, 1.0, 0.0], 'double')
    WORLD_UP = np.array([0.0, 0.0, 1.0], 'double')
    WORLD_EAST = np.array([1.0, 0.0, 0.0], 'double')
    theta = (np.pi / 180.0) * cam_rotation
    cam_dir = rotate(WORLD_NORTH, theta)
    clip_plane_center = cam_coords + cam_near_clip * cam_dir
    camera_center = -cam_near_clip * cam_dir
    near_clip_height = (
        2 * cam_near_clip * np.tan(cam_field_of_view / 2.0 * (np.pi / 180.0))
    )
    near_clip_width = near_clip_height * WIDTH / HEIGHT

    cam_up = rotate(WORLD_UP, theta)
    cam_east = rotate(WORLD_EAST, theta)
    near_clip_to_target = vertex - clip_plane_center

    camera_to_target = near_clip_to_target - camera_center

    camera_to_target_unit_vector = camera_to_target * (
        1.0 / np.linalg.norm(camera_to_target)
    )

    view_plane_dist = cam_near_clip / cam_dir.dot(camera_to_target_unit_vector)

    new_origin = (
        clip_plane_center
        + (near_clip_height / 2.0) * cam_up
        - (near_clip_width / 2.0) * cam_east
    )

    view_plane_point = (
        view_plane_dist * camera_to_target_unit_vector
    ) + camera_center
    view_plane_point = (view_plane_point + clip_plane_center) - new_origin
    viewPlaneX = view_plane_point.dot(cam_east)
    viewPlaneZ = view_plane_point.dot(cam_up)
    screenX = viewPlaneX / near_clip_width
    screenY = -viewPlaneZ / near_clip_height

    # screenX and screenY between (0, 1)
    ret = np.array([screenX, screenY], 'double')
    return ret


def screen_x_to_view_plane(x, cam_near_clip, cam_field_of_view):
    # x in (0, 1)
    near_clip_height = (
        2 * cam_near_clip * np.tan(cam_field_of_view / 2.0 * (np.pi / 180.0))
    )
    near_clip_width = near_clip_height * 1920.0 / 1080.0

    viewPlaneX = x * near_clip_width

    return viewPlaneX


def generate_id_map(map_path):
    # instance ids are packed into the three 8-bit channels of the PNG; a zero
    # byte is appended below so the buffer can be reinterpreted as uint32
    id_map = cv2.imread(map_path, -1)
    h, w, _ = id_map.shape
    id_map = np.concatenate(
        (id_map, np.zeros((h, w, 1), dtype=np.uint8)), axis=2
    )
    id_map.dtype = np.uint32
    return id_map


def get_depth(
    vertex, cam_coords, cam_rotation, cam_near_clip, cam_field_of_view
):
    WORLD_NORTH = np.array([0.0, 1.0, 0.0], 'double')
    theta = (np.pi / 180.0) * cam_rotation
    cam_dir = rotate(WORLD_NORTH, theta)
    clip_plane_center = cam_coords + cam_near_clip * cam_dir
    camera_center = -cam_near_clip * cam_dir

    near_clip_to_target = vertex - clip_plane_center

    camera_to_target = near_clip_to_target - camera_center
    camera_to_target_unit_vector = camera_to_target * (
        1.0 / np.linalg.norm(camera_to_target)
    )

    depth = np.linalg.norm(camera_to_target) * cam_dir.dot(
        camera_to_target_unit_vector
    )
    depth = depth - cam_near_clip

    return depth


def get_kitti_format_camera_coords(
    vertex, cam_coords, cam_rotation, cam_near_clip
):
    cam_dir, cam_up, cam_east = get_cam_dir_vecs(cam_rotation)

    clip_plane_center = cam_coords + cam_near_clip * cam_dir

    camera_center = -cam_near_clip * cam_dir

    near_clip_to_target = vertex - clip_plane_center

    camera_to_target = near_clip_to_target - camera_center
    camera_to_target_unit_vector = camera_to_target * (
        1.0 / np.linalg.norm(camera_to_target)
    )

    z = np.linalg.norm(camera_to_target) * cam_dir.dot(
        camera_to_target_unit_vector
    )
    y = -np.linalg.norm(camera_to_target) * cam_up.dot(
        camera_to_target_unit_vector
    )
    x = np.linalg.norm(camera_to_target) * cam_east.dot(
        camera_to_target_unit_vector
    )

    return np.array([x, y, z])


def get_cam_dir_vecs(cam_rotation):
    WORLD_NORTH = np.array([0.0, 1.0, 0.0], 'double')
    WORLD_UP = np.array([0.0, 0.0, 1.0], 'double')
    WORLD_EAST = np.array([1.0, 0.0, 0.0], 'double')
    theta = (np.pi / 180.0) * cam_rotation
    cam_dir = rotate(WORLD_NORTH, theta)
    cam_up = rotate(WORLD_UP, theta)
    cam_east = rotate(WORLD_EAST, theta)

    return cam_dir, cam_up, cam_east


def is_before_clip_plane(
    vertex,
    cam_coords,
    cam_rotation,
    cam_near_clip,
    cam_field_of_view,
    WIDTH=1920,
    HEIGHT=1080,
):
    WORLD_NORTH = np.array([0.0, 1.0, 0.0], 'double')
    theta = (np.pi / 180.0) * cam_rotation
    cam_dir = rotate(WORLD_NORTH, theta)
    clip_plane_center = cam_coords + cam_near_clip * cam_dir
    camera_center = -cam_near_clip * cam_dir

    near_clip_to_target = vertex - clip_plane_center

    camera_to_target = near_clip_to_target - camera_center

    camera_to_target_unit_vector = camera_to_target * (
        1.0 / np.linalg.norm(camera_to_target)
    )

    if cam_dir.dot(camera_to_target_unit_vector) > 0:
        return True
    else:
        return False


def get_clip_center_and_dir(cam_coords, cam_rotation, cam_near_clip):
    WORLD_NORTH = np.array([0.0, 1.0, 0.0], 'double')
    theta = (np.pi / 180.0) * cam_rotation
    cam_dir = rotate(WORLD_NORTH, theta)
    clip_plane_center = cam_coords + cam_near_clip * cam_dir
    return clip_plane_center, cam_dir


def rotate(a, t):
    # apply R = Rz(t[2]) @ Ry(t[1]) @ Rx(t[0]) to vector a (angles in radians)
    d = np.zeros(3, 'double')
    d[0] = np.cos(t[2]) * (
        np.cos(t[1]) * a[0]
        + np.sin(t[1]) * (np.sin(t[0]) * a[1] + np.cos(t[0]) * a[2])
    ) - (np.sin(t[2]) * (np.cos(t[0]) * a[1] - np.sin(t[0]) * a[2]))
    d[1] = np.sin(t[2]) * (
        np.cos(t[1]) * a[0]
        + np.sin(t[1]) * (np.sin(t[0]) * a[1] + np.cos(t[0]) * a[2])
    ) + (np.cos(t[2]) * (np.cos(t[0]) * a[1] - np.sin(t[0]) * a[2]))
    d[2] = -np.sin(t[1]) * a[0] + np.cos(t[1]) * (
        np.sin(t[0]) * a[1] + np.cos(t[0]) * a[2]
    )
    return d


def get_intersect_point(center_pt, cam_dir, vertex1, vertex2):
    c1 = center_pt[0]
    c2 = center_pt[1]
    c3 = center_pt[2]
    a1 = cam_dir[0]
    a2 = cam_dir[1]
    a3 = cam_dir[2]
    x1 = vertex1[0]
    y1 = vertex1[1]
    z1 = vertex1[2]
    x2 = vertex2[0]
    y2 = vertex2[1]
    z2 = vertex2[2]

    k_up = a1 * (x1 - c1) + a2 * (y1 - c2) + a3 * (z1 - c3)
    k_down = a1 * (x1 - x2) + a2 * (y1 - y2) + a3 * (z1 - z2)
    k = k_up / k_down
    inter_point = (1 - k) * vertex1 + k * vertex2

    return inter_point


####################
# dataset utils.
def is_inside(x, y):
    return x >= 0 and x <= 1 and y >= 0 and y <= 1


def get_cut_edge(x1, y1, x2, y2):
    # (x1, y1) inside while (x2, y2) outside
    dx = x2 - x1
    dy = y2 - y1
    ratio_pool = []
    if x2 < 0:
        ratio = (x1 - 0) / (x1 - x2)
        ratio_pool.append(ratio)
    if x2 > 1:
        ratio = (1 - x1) / (x2 - x1)
        ratio_pool.append(ratio)
    if y2 < 0:
        ratio = (y1 - 0) / (y1 - y2)
        ratio_pool.append(ratio)
    if y2 > 1:
        ratio = (1 - y1) / (y2 - y1)
        ratio_pool.append(ratio)
    actual_ratio = min(ratio_pool)
    return x1 + actual_ratio * dx, y1 + actual_ratio * dy


def get_min_max_x_y_from_line(x1, y1, x2, y2):
    if is_inside(x1, y1) and is_inside(x2, y2):
        return min(x1, x2), max(x1, x2), min(y1, y2), max(y1, y2)
    if (not is_inside(x1, y1)) and (not is_inside(x2, y2)):
        return None, None, None, None
    if is_inside(x1, y1) and not is_inside(x2, y2):
        x2, y2 = get_cut_edge(x1, y1, x2, y2)
        return min(x1, x2), max(x1, x2), min(y1, y2), max(y1, y2)
    if is_inside(x2, y2) and not is_inside(x1, y1):
        x1, y1 = get_cut_edge(x2, y2, x1, y1)
        return min(x1, x2), max(x1, x2), min(y1, y2), max(y1, y2)


def get_angle_in_2pi(unit_vec):
    theta = np.arccos(unit_vec[0])
    if unit_vec[1] > 0:
        return theta
    else:
        return 2 * np.pi - theta


####################
# math utils.
def vec_cos(a, b):
    prod = a.dot(b)
    prod = prod * 1.0 / np.linalg.norm(a) / np.linalg.norm(b)
    return prod


def compute_bbox_ratio(bbox2, bbox):
    # bbox2 is inside bbox
    s = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
    s2 = (bbox2[2] - bbox2[0]) * (bbox2[3] - bbox2[1])
    return s2 * 1.0 / s


def compute_iou(boxA, boxB):
    if (
        boxA[0] > boxB[2]
        or boxB[0] > boxA[2]
        or boxA[1] > boxB[3]
        or boxB[1] > boxA[3]
    ):
        return 0
    # determine the (x, y)-coordinates of the intersection rectangle
    xA = max(boxA[0], boxB[0])
    yA = max(boxA[1], boxB[1])
    xB = min(boxA[2], boxB[2])
    yB = min(boxA[3], boxB[3])

    # compute the area of intersection rectangle
    interArea = (xB - xA + 1) * (yB - yA + 1)

    # compute the area of both the prediction and ground-truth
    # rectangles
    boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)
    boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1)

    # compute the intersection over union by taking the intersection
    # area and dividing it by the sum of prediction + ground-truth
    # areas - the interesection area
    iou = interArea / float(boxAArea + boxBArea - interArea)

    # return the intersection over union value
    return iou


def project2dline(
    p1,
    p2,
    cam_coords,
    cam_rotation,
    cam_near_clip=0.15,
    cam_field_of_view=50.0,
    WIDTH=1920,
    HEIGHT=1080,
):
    before1 = is_before_clip_plane(
        p1, cam_coords, cam_rotation, cam_near_clip, cam_field_of_view
    )
    before2 = is_before_clip_plane(
        p2, cam_coords, cam_rotation, cam_near_clip, cam_field_of_view
    )
    if not (before1 or before2):
        return None
    if before1 and before2:
        cp1 = get_2d_from_3d(
            p1,
            cam_coords,
            cam_rotation,
            cam_near_clip,
            cam_field_of_view,
            WIDTH,
            HEIGHT,
        )
        cp2 = get_2d_from_3d(
            p2,
            cam_coords,
            cam_rotation,
            cam_near_clip,
            cam_field_of_view,
            WIDTH,
            HEIGHT,
        )
        x1 = int(cp1[0] * WIDTH)
        x2 = int(cp2[0] * WIDTH)
        y1 = int(cp1[1] * HEIGHT)
        y2 = int(cp2[1] * HEIGHT)
        return [[x1, y1], [x2, y2]]
    center_pt, cam_dir = get_clip_center_and_dir(
        cam_coords, cam_rotation, cam_near_clip
    )
    if before1 and not before2:
        inter2 = get_intersect_point(center_pt, cam_dir, p1, p2)
        cp1 = get_2d_from_3d(
            p1,
            cam_coords,
            cam_rotation,
            cam_near_clip,
            cam_field_of_view,
            WIDTH,
            HEIGHT,
        )
        cp2 = get_2d_from_3d(
            inter2,
            cam_coords,
            cam_rotation,
            cam_near_clip,
            cam_field_of_view,
            WIDTH,
            HEIGHT,
        )
        x1 = int(cp1[0] * WIDTH)
        x2 = int(cp2[0] * WIDTH)
        y1 = int(cp1[1] * HEIGHT)
        y2 = int(cp2[1] * HEIGHT)
        return [[x1, y1], [x2, y2]]
    if before2 and not before1:
        inter1 = get_intersect_point(center_pt, cam_dir, p1, p2)
        cp2 = get_2d_from_3d(
            p2,
            cam_coords,
            cam_rotation,
            cam_near_clip,
            cam_field_of_view,
            WIDTH,
            HEIGHT,
        )
        cp1 = get_2d_from_3d(
            inter1,
            cam_coords,
            cam_rotation,
            cam_near_clip,
            cam_field_of_view,
            WIDTH,
            HEIGHT,
        )
        x1 = int(cp1[0] * WIDTH)
        x2 = int(cp2[0] * WIDTH)
        y1 = int(cp1[1] * HEIGHT)
        y2 = int(cp2[1] * HEIGHT)
        return [[x1, y1], [x2, y2]]


####################
# io utils.
def read_depthmap(name, cam_near_clip, cam_far_clip):
    # the depth PNG packs a 24-bit value into its three 8-bit channels; append
    # a zero byte and reinterpret the buffer as uint32 to recover it
    depth = cv2.imread(name)
    depth = np.concatenate(
        (depth, np.zeros_like(depth[:, :, 0:1], dtype=np.uint8)), axis=2
    )
    depth.dtype = np.uint32
    # invert the encoded value, then linearize it into metric depth
    # using the near and far clip planes
    depth = 0.05 * 1000 / depth.astype('float')
    depth = (
        cam_near_clip
        * cam_far_clip
        / (cam_near_clip + depth * (cam_far_clip - cam_near_clip))
    )
    return depth


================================================
FILE: vis_2d_pose_depth.py
================================================
"""
GTA-IM Dataset
"""

import argparse
import os
import pickle

import cv2
import matplotlib.pyplot as plt
import numpy as np

from gta_utils import LIMBS, read_depthmap


def single_vis(args):
    joints_2d = np.load(args.path + '/info_frames.npz')['joints_2d']
    info = pickle.load(open(args.path + '/info_frames.pickle', 'rb'))
    if not os.path.exists(args.outpath):
        os.mkdir(args.outpath)
    for idx in range(30, len(info)):
        if os.path.exists(
            os.path.join(args.path, '{:05d}'.format(idx) + '.jpg')
        ):
            keypoint = joints_2d[idx]

            # root
            root_pos = (int(keypoint[14, 0]), int(keypoint[14, 1]))
            # color image
            frame = cv2.imread(
                os.path.join(args.path, '{:05d}'.format(idx) + '.jpg')
            )
            frame = cv2.circle(frame, tuple(root_pos), 10, (0, 0, 255), 20)
            # depth map
            infot = info[idx]
            cam_near_clip = infot['cam_near_clip']
            if 'cam_far_clip' in infot.keys():
                cam_far_clip = infot['cam_far_clip']
            else:
                cam_far_clip = 800.    
            fname = os.path.join(args.path, '{:05d}'.format(idx) + '.png')
            depthmap = read_depthmap(fname, cam_near_clip, cam_far_clip)

            # plot joints
            for i0, i1 in LIMBS:
                p1 = (int(keypoint[i0, 0]), int(keypoint[i0, 1]))
                p2 = (int(keypoint[i1, 0]), int(keypoint[i1, 1]))
                frame = cv2.line(frame, tuple(p1), tuple(p2), (0, 255, 0), 20)


            fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(16, 9), sharey=True)
            ax1.imshow(frame[:, :, ::-1])
            ax1.axis('off')
            # visualize the disparity
            ax2.imshow(100.0 / depthmap[:, :, 0], cmap='plasma')
            ax2.axis('off')
            # tight figure
            plt.subplots_adjust(
                top=1, bottom=0, right=1, left=0, hspace=0, wspace=0
            )
            plt.margins(0, 0)
            plt.gca().xaxis.set_major_locator(plt.NullLocator())
            plt.gca().yaxis.set_major_locator(plt.NullLocator())
            plt.savefig(
                os.path.join(args.outpath, str(idx) + '_vis.jpg'), 
                bbox_inches='tight',
                pad_inches=0)
            plt.close()


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description=None)
    parser.add_argument('-pa', '--path', default='2020-06-10-09-27-04/')
    args = parser.parse_args()
    args.outpath = args.path + '/vis/'
    single_vis(args)


================================================
FILE: vis_skeleton_pcd.py
================================================
"""
GTA-IM Dataset
"""

import argparse
import os
import pickle
import sys

import cv2
import numpy as np
import open3d as o3d
from open3d import (LineSet, PinholeCameraIntrinsic, Vector2iVector,
                    Vector3dVector, draw_geometries)

from gta_utils import LIMBS, read_depthmap

sys.path.append('./')


def create_skeleton_viz_data(nskeletons, njoints):
    lines = []
    colors = []
    for i in range(nskeletons):
        cur_lines = np.asarray(LIMBS)
        cur_lines += i * njoints
        lines.append(cur_lines)

        single_color = np.zeros([njoints, 3])
        single_color[:] = [0.0, float(i) / nskeletons, 1.0]
        colors.append(single_color[1:])

    lines = np.concatenate(lines, axis=0)
    colors = np.asarray(colors).reshape(-1, 3)
    return lines, colors


def vis_skeleton_pcd(rec_idx, f_id, fusion_window=20):
    info = pickle.load(open(rec_idx + '/info_frames.pickle', 'rb'))
    info_npz = np.load(rec_idx + '/info_frames.npz')

    pcd = o3d.geometry.PointCloud()
    global_pcd = o3d.geometry.PointCloud()
    # use nearby RGBD frames to create the environment point cloud
    for i in range(f_id - fusion_window // 2, f_id + fusion_window // 2, 10):
        fname = rec_idx + '/' + '{:05d}'.format(i) + '.png'
        if os.path.exists(fname):
            infot = info[i]
            cam_near_clip = infot['cam_near_clip']
            if 'cam_far_clip' in infot.keys():
                cam_far_clip = infot['cam_far_clip']
            else:
                cam_far_clip = 800. 
            depth = read_depthmap(fname, cam_near_clip, cam_far_clip)
            # delete points that are more than 20 meters away
            depth[depth > 20.0] = 0

            # obtain the human mask
            p = info_npz['joints_2d'][i, 0]
            fname = rec_idx + '/' + '{:05d}'.format(i) + '_id.png'
            id_map = cv2.imread(fname, cv2.IMREAD_ANYDEPTH)
            human_id = id_map[
                np.clip(int(p[1]), 0, 1079), np.clip(int(p[0]), 0, 1919)
            ]

            mask = id_map == human_id
            kernel = np.ones((3, 3), np.uint8)
            mask_dilation = cv2.dilate(
                mask.astype(np.uint8), kernel, iterations=1
            )
            depth = depth * (1 - mask_dilation[..., None])
            depth = o3d.geometry.Image(depth.astype(np.float32))
            # cv2.imshow('tt', mask.astype(np.uint8)*255)
            # cv2.waitKey(0)

            fname = rec_idx + '/' + '{:05d}'.format(i) + '.jpg'
            color_raw = o3d.io.read_image(fname)

            focal_length = info_npz['intrinsics'][f_id, 0, 0]
            rgbd_image = o3d.geometry.create_rgbd_image_from_color_and_depth(
                color_raw,
                depth,
                depth_scale=1.0,
                depth_trunc=15.0,
                convert_rgb_to_intensity=False,
            )
            pcd = o3d.geometry.create_point_cloud_from_rgbd_image(
                rgbd_image,
                o3d.camera.PinholeCameraIntrinsic(
                    PinholeCameraIntrinsic(
                        1920, 1080, focal_length, focal_length, 960.0, 540.0
                    )
                ),
            )
            depth_pts = np.asarray(pcd.points)

            depth_pts_aug = np.hstack(
                [depth_pts, np.ones([depth_pts.shape[0], 1])]
            )
            cam_extr_ref = np.linalg.inv(info_npz['world2cam_trans'][i])
            depth_pts = depth_pts_aug.dot(cam_extr_ref)[:, :3]
            pcd.points = Vector3dVector(depth_pts)

            global_pcd.points.extend(pcd.points)
            global_pcd.colors.extend(pcd.colors)

    # read gt pose in world coordinate, visualize nearby frame as well
    joints = info_npz['joints_3d_world'][(f_id - 30) : (f_id + 30) : 10]
    tl, jn, _ = joints.shape
    joints = joints.reshape(-1, 3)

    # create skeletons in open3d
    nskeletons = tl
    lines, colors = create_skeleton_viz_data(nskeletons, jn)
    line_set = LineSet()
    line_set.points = Vector3dVector(joints)
    line_set.lines = Vector2iVector(lines)
    line_set.colors = Vector3dVector(colors)

    vis_list = [global_pcd, line_set]
    for j in range(joints.shape[0]):
        # spine joints
        if j % jn == 11 or j % jn == 12 or j % jn == 13:
            continue
        transformation = np.identity(4)
        transformation[:3, 3] = joints[j]
        # head joint
        if j % jn == 0:
            r = 0.07
        else:
            r = 0.03

        sphere = o3d.geometry.create_mesh_sphere(radius=r)
        sphere.paint_uniform_color([0.0, float(j // jn) / nskeletons, 1.0])
        vis_list.append(sphere.transform(transformation))

    draw_geometries(vis_list)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description=None)
    parser.add_argument('-pa', '--path', default='2020-06-10-21-47-45')
    parser.add_argument(
        '-f', '--frame', default=180, type=int, help='frame to visualize'
    )
    parser.add_argument(
        '-fw',
        '--fusion-window',
        default=20,
        type=int,
        help='timesteps of RGB frames for fusing',
    )
    args = parser.parse_args()

    vis_skeleton_pcd(args.path + '/', args.frame, args.fusion_window)


================================================
FILE: vis_video.py
================================================
"""
GTA-IM Dataset
"""

import argparse
import glob
import os
import os.path as osp
import sys

import cv2
from tqdm import tqdm

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description=None)
    parser.add_argument('-pa', '--path', default='2020-06-10-21-47-45')
    parser.add_argument('-s', '--scale', default=4, type=int, help='down scale')
    parser.add_argument(
        '-fr', '--frame_rate', default=5, type=int, help='frame_rate'
    )
    args = parser.parse_args()
    args.outpath = args.path + '/vis/'
    if not osp.exists(args.outpath):
        os.mkdir(args.outpath)

    ims = sorted(glob.glob(args.path + '/*.jpg'))
    if osp.exists(osp.join(args.outpath, 'video.mp4')):
        sys.exit()

    img_array = []
    for filename in tqdm(ims, desc='frame'):
        img = cv2.imread(filename)
        height, width, layers = img.shape
        size = (width // args.scale, height // args.scale)
        img = cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)
        img_array.append(img)

    out = cv2.VideoWriter(
        osp.join(args.outpath, 'video.mp4'),
        cv2.VideoWriter_fourcc(*'mp4v'),
        args.frame_rate,
        size,
    )
    for i in range(len(img_array)):
        out.write(img_array[i])
    out.release()
SYMBOL INDEX (30 symbols across 4 files)

FILE: gen_npz.py
  function rot_axis (line 14) | def rot_axis(angle, axis):
  function rotate (line 33) | def rotate(vector, angle, inverse=False):
  function angle2rot (line 58) | def angle2rot(rotation, inverse=False):
  class Pose (line 62) | class Pose:
    method __init__ (line 63) | def __init__(self, position, rotation):
  function get_focal_length (line 72) | def get_focal_length(cam_near_clip, cam_field_of_view):
  function get_cam_extr (line 81) | def get_cam_extr(cam_pos, cam_rot):

FILE: gta_utils.py
  function get_focal_length (line 34) | def get_focal_length(cam_near_clip, cam_field_of_view):
  function get_2d_from_3d (line 43) | def get_2d_from_3d(
  function screen_x_to_view_plane (line 96) | def screen_x_to_view_plane(x, cam_near_clip, cam_field_of_view):
  function generate_id_map (line 108) | def generate_id_map(map_path):
  function get_depth (line 118) | def get_depth(
  function get_kitti_format_camera_coords (line 142) | def get_kitti_format_camera_coords(
  function get_cam_dir_vecs (line 171) | def get_cam_dir_vecs(cam_rotation):
  function is_before_clip_plane (line 183) | def is_before_clip_plane(
  function get_clip_center_and_dir (line 212) | def get_clip_center_and_dir(cam_coords, cam_rotation, cam_near_clip):
  function rotate (line 220) | def rotate(a, t):
  function get_intersect_point (line 236) | def get_intersect_point(center_pt, cam_dir, vertex1, vertex2):
  function is_inside (line 260) | def is_inside(x, y):
  function get_cut_edge (line 264) | def get_cut_edge(x1, y1, x2, y2):
  function get_min_max_x_y_from_line (line 285) | def get_min_max_x_y_from_line(x1, y1, x2, y2):
  function get_angle_in_2pi (line 298) | def get_angle_in_2pi(unit_vec):
  function vec_cos (line 308) | def vec_cos(a, b):
  function compute_bbox_ratio (line 314) | def compute_bbox_ratio(bbox2, bbox):
  function compute_iou (line 321) | def compute_iou(boxA, boxB):
  function project2dline (line 352) | def project2dline(
  function read_depthmap (line 451) | def read_depthmap(name, cam_near_clip, cam_far_clip):

FILE: vis_2d_pose_depth.py
  function single_vis (line 16) | def single_vis(args):

FILE: vis_skeleton_pcd.py
  function create_skeleton_viz_data (line 21) | def create_skeleton_viz_data(nskeletons, njoints):
  function vis_skeleton_pcd (line 38) | def vis_skeleton_pcd(rec_idx, f_id, fusion_window=20):
