Full Code of linden713/humanoid_amp for AI

master 99efa6d60e35 cached
77 files
52.2 MB
54.2k tokens
80 symbols
1 requests
Download .txt
Showing preview only (203K chars total). Download the full file or copy to clipboard to get everything.
Repository: linden713/humanoid_amp
Branch: master
Commit: 99efa6d60e35
Files: 77
Total size: 52.2 MB

Directory structure:
gitextract_zeak4x_i/

├── .gitignore
├── LICENSE
├── README.md
├── __init__.py
├── agents/
│   ├── __init__.py
│   ├── skrl_dance_amp_cfg.yaml
│   ├── skrl_g1_dance_amp_cfg.yaml
│   ├── skrl_g1_walk_amp_cfg.yaml
│   ├── skrl_run_amp_cfg.yaml
│   └── skrl_walk_amp_cfg.yaml
├── g1_amp_env.py
├── g1_amp_env_cfg.py
├── g1_cfg.py
├── humanoid_amp_env.py
├── humanoid_amp_env_cfg.py
├── motions/
│   ├── G1_dance.npz
│   ├── G1_dance_old.npz
│   ├── G1_walk.npz
│   ├── README.md
│   ├── __init__.py
│   ├── data_convert.py
│   ├── humanoid_dance.npz
│   ├── humanoid_run.npz
│   ├── humanoid_walk.npz
│   ├── motion_loader.py
│   ├── motion_replayer.py
│   ├── motion_viewer.py
│   ├── record_data.py
│   ├── test/
│   │   └── get_joint_name.py
│   ├── update_pelvis_data.py
│   ├── verify_motion.py
│   └── visualize_motion.py
├── play.py
├── pyproject.toml
├── sync_skrl_scripts.sh
├── train.py
└── usd/
    ├── g1_29dof_rev_1_0.urdf
    ├── g1_29dof_rev_1_0.usd
    ├── instanceable_meshes.usd
    └── meshes/
        ├── head_link.STL
        ├── left_ankle_pitch_link.STL
        ├── left_ankle_roll_link.STL
        ├── left_elbow_link.STL
        ├── left_hip_pitch_link.STL
        ├── left_hip_roll_link.STL
        ├── left_hip_yaw_link.STL
        ├── left_knee_link.STL
        ├── left_rubber_hand.STL
        ├── left_shoulder_pitch_link.STL
        ├── left_shoulder_roll_link.STL
        ├── left_shoulder_yaw_link.STL
        ├── left_wrist_pitch_link.STL
        ├── left_wrist_roll_link.STL
        ├── left_wrist_roll_rubber_hand.STL
        ├── left_wrist_yaw_link.STL
        ├── logo_link.STL
        ├── pelvis.STL
        ├── pelvis_contour_link.STL
        ├── right_ankle_pitch_link.STL
        ├── right_ankle_roll_link.STL
        ├── right_elbow_link.STL
        ├── right_hip_pitch_link.STL
        ├── right_hip_roll_link.STL
        ├── right_hip_yaw_link.STL
        ├── right_knee_link.STL
        ├── right_rubber_hand.STL
        ├── right_shoulder_pitch_link.STL
        ├── right_shoulder_roll_link.STL
        ├── right_shoulder_yaw_link.STL
        ├── right_wrist_pitch_link.STL
        ├── right_wrist_roll_link.STL
        ├── right_wrist_roll_rubber_hand.STL
        ├── right_wrist_yaw_link.STL
        ├── torso_link_rev_1_0.STL
        ├── waist_roll_link_rev_1_0.STL
        ├── waist_support_link.STL
        └── waist_yaw_link_rev_1_0.STL

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
**/__pycache__/
*.pyc
*.html
logs/
outputs/*
*.egg-info

================================================
FILE: LICENSE
================================================
BSD 3-Clause License

Copyright (c) 2025, Linden

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from
   this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


================================================
FILE: README.md
================================================
![Humanoid AMP](docs/human_amp.png)

---

# Humanoid AMP
[![IsaacSim](https://img.shields.io/badge/IsaacSim-5.0.0-silver.svg)](https://docs.isaacsim.omniverse.nvidia.com/latest/index.html)
[![IsaacLab](https://img.shields.io/badge/IsaacLab-2.2.0-silver.svg)](https://isaac-sim.github.io/IsaacLab/main/index.html)
[![Python](https://img.shields.io/badge/python-3.10-blue.svg)](https://docs.python.org/3/whatsnew/3.10.html)
[![License](https://img.shields.io/badge/license-BSD--3-yellow.svg)](https://opensource.org/licenses/BSD-3-Clause)

## Prerequisites

This project requires Isaac Sim and Isaac Lab installed via pip. If you haven't installed them yet, pleanse follow the instructions in [Isaac Lab Installation Guide](https://isaac-sim.github.io/IsaacLab/main/source/setup/installation/pip_installation.html)



## Setup

No symbolic links or special setup required! This project works as a standalone Isaac Lab extension.

Install the repo in editable mode (mirrors the workflow used in unitree_rl_lab):

```bash
pip install -e .
```

We copy the train and play script from isaaclab, note you do not need to do it yourself.

```bash
bash ./sync_skrl_scripts.sh
```

## Train

```bash
python -m humanoid_amp.train --task Isaac-G1-AMP-Walk-Direct-v0 --headless --num_envs 4096
```

or for dance training:

```bash
python -m humanoid_amp.train --task Isaac-G1-AMP-Dance-Direct-v0 --headless --num_envs 4096
```

Additional training options:
```bash
# Resume from checkpoint
python -m humanoid_amp.train --task Isaac-G1-AMP-Walk-Direct-v0 --checkpoint logs/skrl/path/to/checkpoint
```

## Eval

```bash
python -m humanoid_amp.play --task Isaac-G1-AMP-Walk-Direct-v0 --num_envs 32 --checkpoint logs/skrl/<run>/checkpoints/Latest.ckpt
```

## TensorBoard

```bash
python -m tensorboard.main --logdir logs/skrl/
```

Then open your browser to http://localhost:6006

Walk training: `master` branch. Dance training: **`dance`** branch.

The usage of some helper script functions is explained at the beginning of the file.



## Motions Scripts
- `motion_loader.py` - Load motion data from npz files and provide sampling functionality
- `motion_viewer.py` - 3D visualization player for motion data
- `data_convert.py` - Convert CSV motion data to npz format with interpolation and forward kinematics
- `motion_replayer.py` - Replay motion data in Isaac Sim with optional recording
- `record_data.py` - Recording and managing motion data utility classes
- `verify_motion.py` - Verify and display npz file contents
- `visualize_motion.py` - Generate interactive HTML charts to visualize motion data
- `update_pelvis_data.py` - Fix pelvis posture and body center issues in replay motion by replacing pelvis data from source file


## Dataset & URDF

**Note**: The original dataset and URDF files from [Unitree Robotics](https://huggingface.co/datasets/unitreerobotics/LAFAN1_Retargeting_Dataset) have been removed by the official source.

If you're still looking for the dataset, a third-party mirror is currently available here:  
[lvhaidong/LAFAN1_Retargeting_Dataset](https://huggingface.co/datasets/lvhaidong/LAFAN1_Retargeting_Dataset)

Or you can also use the data from [AMASS](https://huggingface.co/datasets/ember-lab-berkeley/AMASS_Retargeted_for_G1)
## TODO

- ✅ Current: Dancing motion is working
- ✅ Test the `test` branch thoroughly and merge it into `dance` as soon as possible
- ✅ Write a more detailed README to cover the new features and usage
- [ ] Add clearer comments and explanations in all Python scripts


## Resources
[![Demo - Walk](https://img.shields.io/badge/Demo-Walk-ff69b4?style=for-the-badge&logo=bilibili)](https://www.bilibili.com/video/BV19cRvYhEL8/?vd_source=5159ce41348cd4fd3d83ef9169dc8dbc)
[![Demo - Dance (Bilibili)](https://img.shields.io/badge/Demo-Dance-ff69b4?style=for-the-badge&logo=bilibili)](https://www.bilibili.com/video/BV1vW36zEEG8/?share_source=copy_web&vd_source=0de36dd681c4f7ffab776ec97939e21f)
[![Demo - Dance (YouTube)](https://img.shields.io/badge/Demo-Dance-FF0000?style=for-the-badge&logo=youtube)](https://youtu.be/_ItIFkp-Xi4)

[![Documentation](https://img.shields.io/badge/Documentation-DeepWiki-blue?style=for-the-badge&logo=gitbook)](https://deepwiki.com/linden713/humanoid_amp)

**Contributions**, **discussions**, and **stars** are all welcome! ❥(^_-)


================================================
FILE: __init__.py
================================================
# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

"""
AMP Humanoid locomotion environment.
"""

import gymnasium as gym

from . import agents

##
# Register Gym environments.
##

gym.register(
    id="Isaac-Humanoid-AMP-Dance-Direct-v0",
    entry_point=f"{__name__}.humanoid_amp_env:HumanoidAmpEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.humanoid_amp_env_cfg:HumanoidAmpDanceEnvCfg",
        "skrl_amp_cfg_entry_point": f"{agents.__name__}:skrl_dance_amp_cfg.yaml",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_dance_amp_cfg.yaml",
    },
)

gym.register(
    id="Isaac-Humanoid-AMP-Run-Direct-v0",
    entry_point=f"{__name__}.humanoid_amp_env:HumanoidAmpEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.humanoid_amp_env_cfg:HumanoidAmpRunEnvCfg",
        "skrl_amp_cfg_entry_point": f"{agents.__name__}:skrl_run_amp_cfg.yaml",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_run_amp_cfg.yaml",
    },
)

gym.register(
    id="Isaac-Humanoid-AMP-Walk-Direct-v0",
    entry_point=f"{__name__}.humanoid_amp_env:HumanoidAmpEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.humanoid_amp_env_cfg:HumanoidAmpWalkEnvCfg",
        "skrl_amp_cfg_entry_point": f"{agents.__name__}:skrl_walk_amp_cfg.yaml",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_walk_amp_cfg.yaml",
    },
)

gym.register(
    id="Isaac-G1-AMP-Dance-Direct-v0",
    entry_point=f"{__name__}.g1_amp_env:G1AmpEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.g1_amp_env_cfg:G1AmpDanceEnvCfg",
        "skrl_amp_cfg_entry_point": f"{agents.__name__}:skrl_g1_dance_amp_cfg.yaml",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_g1_dance_amp_cfg.yaml",
    },
)

gym.register(
    id="Isaac-G1-AMP-Walk-Direct-v0",
    entry_point=f"{__name__}.g1_amp_env:G1AmpEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.g1_amp_env_cfg:G1AmpWalkEnvCfg",
        "skrl_amp_cfg_entry_point": f"{agents.__name__}:skrl_g1_walk_amp_cfg.yaml",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_g1_walk_amp_cfg.yaml",
    },
)

================================================
FILE: agents/__init__.py
================================================
# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause


================================================
FILE: agents/skrl_dance_amp_cfg.yaml
================================================
seed: 42


# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html
models:
  separate: True
  policy:  # see gaussian_model parameters
    class: GaussianMixin
    clip_actions: False
    clip_log_std: True
    min_log_std: -20.0
    max_log_std: 2.0
    initial_log_std: -2.9
    fixed_log_std: True
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ACTIONS
  value:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ONE
  discriminator:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ONE


# Rollout memory
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
memory:
  class: RandomMemory
  memory_size: -1  # automatically determined (same as agent:rollouts)

# AMP memory (reference motion dataset)
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
motion_dataset:
  class: RandomMemory
  memory_size: 200000

# AMP memory (preventing discriminator overfitting)
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
reply_buffer:
  class: RandomMemory
  memory_size: 1000000


# AMP agent configuration (field names are from AMP_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/api/agents/amp.html
agent:
  class: AMP
  rollouts: 16
  learning_epochs: 6
  mini_batches: 2
  discount_factor: 0.99
  lambda: 0.95
  learning_rate: 5.0e-05
  learning_rate_scheduler: null
  learning_rate_scheduler_kwargs: null
  state_preprocessor: RunningStandardScaler
  state_preprocessor_kwargs: null
  value_preprocessor: RunningStandardScaler
  value_preprocessor_kwargs: null
  amp_state_preprocessor: RunningStandardScaler
  amp_state_preprocessor_kwargs: null
  random_timesteps: 0
  learning_starts: 0
  grad_norm_clip: 0.0
  ratio_clip: 0.2
  value_clip: 0.2
  clip_predicted_values: True
  entropy_loss_scale: 0.0
  value_loss_scale: 2.5
  discriminator_loss_scale: 5.0
  amp_batch_size: 512
  task_reward_weight: 0.0
  style_reward_weight: 1.0
  discriminator_batch_size: 4096
  discriminator_reward_scale: 2.0
  discriminator_logit_regularization_scale: 0.05
  discriminator_gradient_penalty_scale: 5.0
  discriminator_weight_decay_scale: 1.0e-04
  # rewards_shaper_scale: 1.0
  time_limit_bootstrap: False
  # logging and checkpoint
  experiment:
    directory: "humanoid_amp_dance"
    experiment_name: ""
    write_interval: auto
    checkpoint_interval: auto


# Sequential trainer
# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html
trainer:
  class: SequentialTrainer
  timesteps: 80000
  environment_info: log


================================================
FILE: agents/skrl_g1_dance_amp_cfg.yaml
================================================
seed: 42


# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html
models:
  separate: True
  policy:  # see gaussian_model parameters
    class: GaussianMixin
    clip_actions: False
    clip_log_std: True
    min_log_std: -20.0
    max_log_std: 2.0
    initial_log_std: -2.9
    fixed_log_std: True
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ACTIONS
  value:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ONE
  discriminator:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ONE


# Rollout memory
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
memory:
  class: RandomMemory
  memory_size: -1  # automatically determined (same as agent:rollouts)

# AMP memory (reference motion dataset)
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
motion_dataset:
  class: RandomMemory
  memory_size: 200000

# AMP memory (preventing discriminator overfitting)
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
reply_buffer:
  class: RandomMemory
  memory_size: 1000000


# AMP agent configuration (field names are from AMP_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/api/agents/amp.html
agent:
  class: AMP
  rollouts: 16
  learning_epochs: 6
  mini_batches: 2
  discount_factor: 0.99
  lambda: 0.95
  learning_rate: 5.0e-05
  # learning_rate_scheduler: null
  # learning_rate_scheduler_kwargs: null
  learning_rate_scheduler: KLAdaptiveLR
  learning_rate_scheduler_kwargs: 
    kl_threshold: 0.008
  state_preprocessor: RunningStandardScaler
  state_preprocessor_kwargs: null
  value_preprocessor: RunningStandardScaler
  value_preprocessor_kwargs: null
  amp_state_preprocessor: RunningStandardScaler
  amp_state_preprocessor_kwargs: null
  random_timesteps: 0
  learning_starts: 0
  grad_norm_clip: 0.0
  ratio_clip: 0.2
  value_clip: 0.2
  clip_predicted_values: True
  entropy_loss_scale: 0.02
  value_loss_scale: 2.5
  discriminator_loss_scale: 5.0
  amp_batch_size: 512
  task_reward_weight: 0
  style_reward_weight: 1.0
  discriminator_batch_size: 4096
  discriminator_reward_scale: 2.0
  discriminator_logit_regularization_scale: 0.05
  discriminator_gradient_penalty_scale: 5.0
  discriminator_weight_decay_scale: 1.0e-04
  # rewards_shaper_scale: 1.0
  time_limit_bootstrap: False
  # logging and checkpoint
  experiment:
    directory: "g1_amp_dance"
    experiment_name: ""
    write_interval: auto
    checkpoint_interval: auto


# Sequential trainer
# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html
trainer:
  class: SequentialTrainer
  timesteps: 50000
  environment_info: log


================================================
FILE: agents/skrl_g1_walk_amp_cfg.yaml
================================================
seed: 42


# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html
models:
  separate: True
  policy:  # see gaussian_model parameters
    class: GaussianMixin
    clip_actions: False
    clip_log_std: True
    min_log_std: -20.0
    max_log_std: 2.0
    initial_log_std: -2.9
    fixed_log_std: True
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ACTIONS
  value:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ONE
  discriminator:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ONE


# Rollout memory
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
memory:
  class: RandomMemory
  memory_size: -1  # automatically determined (same as agent:rollouts)

# AMP memory (reference motion dataset)
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
motion_dataset:
  class: RandomMemory
  memory_size: 200000

# AMP memory (preventing discriminator overfitting)
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
reply_buffer:
  class: RandomMemory
  memory_size: 1000000


# AMP agent configuration (field names are from AMP_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/api/agents/amp.html
agent:
  class: AMP
  rollouts: 16
  learning_epochs: 6
  mini_batches: 2
  discount_factor: 0.99
  lambda: 0.95
  learning_rate: 5.0e-05
  learning_rate_scheduler: null
  learning_rate_scheduler_kwargs: null
  state_preprocessor: RunningStandardScaler
  state_preprocessor_kwargs: null
  value_preprocessor: RunningStandardScaler
  value_preprocessor_kwargs: null
  amp_state_preprocessor: RunningStandardScaler
  amp_state_preprocessor_kwargs: null
  random_timesteps: 0
  learning_starts: 0
  grad_norm_clip: 0.0
  ratio_clip: 0.2
  value_clip: 0.2
  clip_predicted_values: True
  entropy_loss_scale: 0.0
  value_loss_scale: 2.5
  discriminator_loss_scale: 5.0
  amp_batch_size: 512
  task_reward_weight: 0
  style_reward_weight: 1.0
  discriminator_batch_size: 4096
  discriminator_reward_scale: 2.0
  discriminator_logit_regularization_scale: 0.05
  discriminator_gradient_penalty_scale: 5.0
  discriminator_weight_decay_scale: 1.0e-04
  # rewards_shaper_scale: 1.0
  time_limit_bootstrap: False
  # logging and checkpoint
  experiment:
    directory: "g1_amp_walk"
    experiment_name: ""
    write_interval: auto
    checkpoint_interval: auto


# Sequential trainer
# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html
trainer:
  class: SequentialTrainer
  timesteps: 50000
  environment_info: log


================================================
FILE: agents/skrl_run_amp_cfg.yaml
================================================
seed: 42


# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html
models:
  separate: True
  policy:  # see gaussian_model parameters
    class: GaussianMixin
    clip_actions: False
    clip_log_std: True
    min_log_std: -20.0
    max_log_std: 2.0
    initial_log_std: -2.9
    fixed_log_std: True
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ACTIONS
  value:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ONE
  discriminator:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ONE


# Rollout memory
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
memory:
  class: RandomMemory
  memory_size: -1  # automatically determined (same as agent:rollouts)

# AMP memory (reference motion dataset)
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
motion_dataset:
  class: RandomMemory
  memory_size: 200000

# AMP memory (preventing discriminator overfitting)
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
reply_buffer:
  class: RandomMemory
  memory_size: 1000000


# AMP agent configuration (field names are from AMP_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/api/agents/amp.html
agent:
  class: AMP
  rollouts: 16
  learning_epochs: 6
  mini_batches: 2
  discount_factor: 0.99
  lambda: 0.95
  learning_rate: 5.0e-05
  learning_rate_scheduler: null
  learning_rate_scheduler_kwargs: null
  state_preprocessor: RunningStandardScaler
  state_preprocessor_kwargs: null
  value_preprocessor: RunningStandardScaler
  value_preprocessor_kwargs: null
  amp_state_preprocessor: RunningStandardScaler
  amp_state_preprocessor_kwargs: null
  random_timesteps: 0
  learning_starts: 0
  grad_norm_clip: 0.0
  ratio_clip: 0.2
  value_clip: 0.2
  clip_predicted_values: True
  entropy_loss_scale: 0.0
  value_loss_scale: 2.5
  discriminator_loss_scale: 5.0
  amp_batch_size: 512
  task_reward_weight: 0.0
  style_reward_weight: 1.0
  discriminator_batch_size: 4096
  discriminator_reward_scale: 2.0
  discriminator_logit_regularization_scale: 0.05
  discriminator_gradient_penalty_scale: 5.0
  discriminator_weight_decay_scale: 1.0e-04
  # rewards_shaper_scale: 1.0
  time_limit_bootstrap: False
  # logging and checkpoint
  experiment:
    directory: "humanoid_amp_run"
    experiment_name: ""
    write_interval: auto
    checkpoint_interval: auto


# Sequential trainer
# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html
trainer:
  class: SequentialTrainer
  timesteps: 80000
  environment_info: log


================================================
FILE: agents/skrl_walk_amp_cfg.yaml
================================================
seed: 42


# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html
models:
  separate: True
  policy:  # see gaussian_model parameters
    class: GaussianMixin
    clip_actions: False
    clip_log_std: True
    min_log_std: -20.0
    max_log_std: 2.0
    initial_log_std: -2.9
    fixed_log_std: True
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ACTIONS
  value:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ONE
  discriminator:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [1024, 512]
        activations: relu
    output: ONE


# Rollout memory
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
memory:
  class: RandomMemory
  memory_size: -1  # automatically determined (same as agent:rollouts)

# AMP memory (reference motion dataset)
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
motion_dataset:
  class: RandomMemory
  memory_size: 200000

# AMP memory (preventing discriminator overfitting)
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
reply_buffer:
  class: RandomMemory
  memory_size: 1000000


# AMP agent configuration (field names are from AMP_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/api/agents/amp.html
agent:
  class: AMP
  rollouts: 16
  learning_epochs: 6
  mini_batches: 2
  discount_factor: 0.99
  lambda: 0.95
  learning_rate: 5.0e-05
  learning_rate_scheduler: null
  learning_rate_scheduler_kwargs: null
  state_preprocessor: RunningStandardScaler
  state_preprocessor_kwargs: null
  value_preprocessor: RunningStandardScaler
  value_preprocessor_kwargs: null
  amp_state_preprocessor: RunningStandardScaler
  amp_state_preprocessor_kwargs: null
  random_timesteps: 0
  learning_starts: 0
  grad_norm_clip: 0.0
  ratio_clip: 0.2
  value_clip: 0.2
  clip_predicted_values: True
  entropy_loss_scale: 0.0
  value_loss_scale: 2.5
  discriminator_loss_scale: 5.0
  amp_batch_size: 512
  task_reward_weight: 0.0
  style_reward_weight: 1.0
  discriminator_batch_size: 4096
  discriminator_reward_scale: 2.0
  discriminator_logit_regularization_scale: 0.05
  discriminator_gradient_penalty_scale: 5.0
  discriminator_weight_decay_scale: 1.0e-04
  # rewards_shaper_scale: 1.0
  time_limit_bootstrap: False
  # logging and checkpoint
  experiment:
    directory: "humanoid_amp_walk"
    experiment_name: ""
    write_interval: auto
    checkpoint_interval: auto


# Sequential trainer
# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html
trainer:
  class: SequentialTrainer
  timesteps: 80000
  environment_info: log


================================================
FILE: g1_amp_env.py
================================================
# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

from __future__ import annotations

import gymnasium as gym
import numpy as np
import torch

import isaaclab.sim as sim_utils
from isaaclab.assets import Articulation
from isaaclab.envs import DirectRLEnv
from isaaclab.sim.spawners.from_files import GroundPlaneCfg, spawn_ground_plane
from isaaclab.utils.math import quat_apply

from .g1_amp_env_cfg import G1AmpEnvCfg
from .motions import MotionLoader


class G1AmpEnv(DirectRLEnv):
    cfg: G1AmpEnvCfg

    def __init__(self, cfg: G1AmpEnvCfg, render_mode: str | None = None, **kwargs):
        super().__init__(cfg, render_mode, **kwargs)

        # action offset and scale
        dof_lower_limits = self.robot.data.soft_joint_pos_limits[0, :, 0]
        dof_upper_limits = self.robot.data.soft_joint_pos_limits[0, :, 1]
        self.action_offset = 0.5 * (dof_upper_limits + dof_lower_limits)
        self.action_scale = dof_upper_limits - dof_lower_limits


        # load motion
        self._motion_loader = MotionLoader(motion_file=self.cfg.motion_file, device=self.device)

        # DOF and key body indexes  
        key_body_names = [ "left_shoulder_pitch_link",
            "right_shoulder_pitch_link",
            "left_elbow_link",
            "right_elbow_link",
            "right_hip_yaw_link",
            "left_hip_yaw_link",
            "right_rubber_hand",
            "left_rubber_hand",
            "right_ankle_roll_link",
            "left_ankle_roll_link"]

        self.ref_body_index = self.robot.data.body_names.index(self.cfg.reference_body)
        self.key_body_indexes = [self.robot.data.body_names.index(name) for name in key_body_names]
        # Used to for reset strategy
        self.motion_dof_indexes = self._motion_loader.get_dof_index(self.robot.data.joint_names)
        self.motion_ref_body_index = self._motion_loader.get_body_index([self.cfg.reference_body])[0]
        self.motion_key_body_indexes = self._motion_loader.get_body_index(key_body_names)

        # reconfigure AMP observation space according to the number of observations and create the buffer
        self.amp_observation_size = self.cfg.num_amp_observations * self.cfg.amp_observation_space
        self.amp_observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(self.amp_observation_size,))
        self.amp_observation_buffer = torch.zeros(
            (self.num_envs, self.cfg.num_amp_observations, self.cfg.amp_observation_space), device=self.device
        )

    def _setup_scene(self):
        self.robot = Articulation(self.cfg.robot)
        # add ground plane
        spawn_ground_plane(
            prim_path="/World/ground",
            cfg=GroundPlaneCfg(
                physics_material=sim_utils.RigidBodyMaterialCfg(
                    static_friction=1.0,
                    dynamic_friction=1.0,
                    restitution=0.0,
                ),
            ),
        )
        # clone and replicate
        self.scene.clone_environments(copy_from_source=False)
        # add articulation to scene
        self.scene.articulations["robot"] = self.robot
        # add lights
        light_cfg = sim_utils.DomeLightCfg(intensity=2000.0, color=(0.75, 0.75, 0.75))
        light_cfg.func("/World/Light", light_cfg)

    def _pre_physics_step(self, actions: torch.Tensor):
        self.actions = actions.clone()
        # self.pre_actions = actions.clone()

    def _apply_action(self):
        # self.pre_actions = self.actions.clone()
        target = self.action_offset + self.action_scale * self.actions
        self.robot.set_joint_position_target(target)

    def _get_observations(self) -> dict:
        # build task observation
        obs = compute_obs(
            self.robot.data.joint_pos,
            self.robot.data.joint_vel,
            self.robot.data.body_pos_w[:, self.ref_body_index],
            self.robot.data.body_quat_w[:, self.ref_body_index],
            self.robot.data.body_lin_vel_w[:, self.ref_body_index],
            self.robot.data.body_ang_vel_w[:, self.ref_body_index],
            self.robot.data.body_pos_w[:, self.key_body_indexes],
        )

        # update AMP observation history
        for i in reversed(range(self.cfg.num_amp_observations - 1)):
            self.amp_observation_buffer[:, i + 1] = self.amp_observation_buffer[:, i]
        # build AMP observation
        self.amp_observation_buffer[:, 0] = obs.clone()
        self.extras = {"amp_obs": self.amp_observation_buffer.view(-1, self.amp_observation_size)}

        return {"policy": obs}

    # def _get_rewards(self) -> torch.Tensor:
    #     return torch.ones((self.num_envs,), dtype=torch.float32, device=self.sim.device)
    def _get_rewards(self) -> torch.Tensor:
        total_reward, reward_log = compute_rewards(
            self.cfg.rew_termination,
            self.cfg.rew_action_l2,
            self.cfg.rew_joint_pos_limits,
            self.cfg.rew_joint_acc_l2,
            self.cfg.rew_joint_vel_l2,
            self.reset_terminated,
            self.actions,
            self.robot.data.joint_pos,
            self.robot.data.soft_joint_pos_limits,
            self.robot.data.joint_acc,
            self.robot.data.joint_vel,    
        )
        self.extras["log"] = reward_log
        return total_reward

    def _get_dones(self) -> tuple[torch.Tensor, torch.Tensor]:
        time_out = self.episode_length_buf >= self.max_episode_length - 1
        if self.cfg.early_termination:
            died = self.robot.data.body_pos_w[:, self.ref_body_index, 2] < self.cfg.termination_height
        else:
            died = torch.zeros_like(time_out)
        return died, time_out

    def _reset_idx(self, env_ids: torch.Tensor | None):
        if env_ids is None or len(env_ids) == self.num_envs:
            env_ids = self.robot._ALL_INDICES
        self.robot.reset(env_ids)
        super()._reset_idx(env_ids)

        if self.cfg.reset_strategy == "default":
            root_state, joint_pos, joint_vel = self._reset_strategy_default(env_ids)
        elif self.cfg.reset_strategy.startswith("random"):
            start = "start" in self.cfg.reset_strategy
            root_state, joint_pos, joint_vel = self._reset_strategy_random(env_ids, start)
        else:
            raise ValueError(f"Unknown reset strategy: {self.cfg.reset_strategy}")

        self.robot.write_root_link_pose_to_sim(root_state[:, :7], env_ids)
        self.robot.write_root_com_velocity_to_sim(root_state[:, 7:], env_ids)
        self.robot.write_joint_state_to_sim(joint_pos, joint_vel, None, env_ids)

    # reset strategies

    def _reset_strategy_default(self, env_ids: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        root_state = self.robot.data.default_root_state[env_ids].clone()
        root_state[:, :3] += self.scene.env_origins[env_ids]
        joint_pos = self.robot.data.default_joint_pos[env_ids].clone()
        joint_vel = self.robot.data.default_joint_vel[env_ids].clone()
        return root_state, joint_pos, joint_vel

    def _reset_strategy_random(
        self, env_ids: torch.Tensor, start: bool = False
    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        # sample random motion times (or zeros if start is True)
        num_samples = env_ids.shape[0]
        times = np.zeros(num_samples) if start else self._motion_loader.sample_times(num_samples)
        # sample random motions
        (
            dof_positions,
            dof_velocities,
            body_positions,
            body_rotations,
            body_linear_velocities,
            body_angular_velocities,
        ) = self._motion_loader.sample(num_samples=num_samples, times=times)

        # get root transforms (the humanoid torso)
        motion_torso_index = self._motion_loader.get_body_index(["pelvis"])[0]
        root_state = self.robot.data.default_root_state[env_ids].clone()
        root_state[:, 0:3] = body_positions[:, motion_torso_index] + self.scene.env_origins[env_ids]
        root_state[:, 2] += 0.05  # lift the humanoid slightly to avoid collisions with the ground
        root_state[:, 3:7] = body_rotations[:, motion_torso_index]
        root_state[:, 7:10] = body_linear_velocities[:, motion_torso_index]
        root_state[:, 10:13] = body_angular_velocities[:, motion_torso_index]
        # get DOFs state
        dof_pos = dof_positions[:, self.motion_dof_indexes]
        dof_vel = dof_velocities[:, self.motion_dof_indexes]

        # update AMP observation
        amp_observations = self.collect_reference_motions(num_samples, times)
        self.amp_observation_buffer[env_ids] = amp_observations.view(num_samples, self.cfg.num_amp_observations, -1)

        return root_state, dof_pos, dof_vel

    # env methods

    def collect_reference_motions(self, num_samples: int, current_times: np.ndarray | None = None) -> torch.Tensor:
        # sample random motion times (or use the one specified)
        if current_times is None:
            current_times = self._motion_loader.sample_times(num_samples)
        times = (
            np.expand_dims(current_times, axis=-1)
            - self._motion_loader.dt * np.arange(0, self.cfg.num_amp_observations)
        ).flatten()
        # get motions
        (
            dof_positions,
            dof_velocities,
            body_positions,
            body_rotations,
            body_linear_velocities,
            body_angular_velocities,
        ) = self._motion_loader.sample(num_samples=num_samples, times=times)
        # compute AMP observation
        amp_observation = compute_obs(
            dof_positions[:, self.motion_dof_indexes],
            dof_velocities[:, self.motion_dof_indexes],
            body_positions[:, self.motion_ref_body_index],
            body_rotations[:, self.motion_ref_body_index],
            body_linear_velocities[:, self.motion_ref_body_index],
            body_angular_velocities[:, self.motion_ref_body_index],
            body_positions[:, self.motion_key_body_indexes],
        )
        return amp_observation.view(-1, self.amp_observation_size)


@torch.jit.script
def quaternion_to_tangent_and_normal(q: torch.Tensor) -> torch.Tensor:
    ref_tangent = torch.zeros_like(q[..., :3])
    ref_normal = torch.zeros_like(q[..., :3])
    ref_tangent[..., 0] = 1
    ref_normal[..., -1] = 1
    tangent = quat_apply(q, ref_tangent)
    normal = quat_apply(q, ref_normal)
    return torch.cat([tangent, normal], dim=len(tangent.shape) - 1)


@torch.jit.script
def compute_obs(
    dof_positions: torch.Tensor,
    dof_velocities: torch.Tensor,
    root_positions: torch.Tensor,
    root_rotations: torch.Tensor,
    root_linear_velocities: torch.Tensor,
    root_angular_velocities: torch.Tensor,
    key_body_positions: torch.Tensor,
) -> torch.Tensor:
    obs = torch.cat(
        (
            dof_positions,
            dof_velocities,
            root_positions[:, 2:3],  # root body height
            quaternion_to_tangent_and_normal(root_rotations),
            root_linear_velocities,
            root_angular_velocities,
            (key_body_positions - root_positions.unsqueeze(-2)).view(key_body_positions.shape[0], -1),
        ),
        dim=-1,
    )
    return obs
@torch.jit.script
def compute_rewards(
    rew_scale_termination: float,
    rew_scale_action_l2: float,
    rew_scale_joint_pos_limits: float,
    rew_scale_joint_acc_l2: float,
    rew_scale_joint_vel_l2: float,
    reset_terminated: torch.Tensor,
    actions: torch.Tensor,
    joint_pos: torch.Tensor,
    soft_joint_pos_limits: torch.Tensor,
    joint_acc: torch.Tensor,
    joint_vel: torch.Tensor,
):
    rew_termination = rew_scale_termination * reset_terminated.float()
    rew_action_l2 = rew_scale_action_l2 * torch.sum(torch.square(actions), dim=1)
    
    out_of_limits = -(joint_pos - soft_joint_pos_limits[:,:,0]).clip(max=0.0)
    out_of_limits += (joint_pos - soft_joint_pos_limits[:,:,1]).clip(min=0.0)
    rew_joint_pos_limits = rew_scale_joint_pos_limits * torch.sum(out_of_limits, dim=1)
    
    rew_joint_acc_l2 = rew_scale_joint_acc_l2 * torch.sum(torch.square(joint_acc), dim=1)
    rew_joint_vel_l2 = rew_scale_joint_vel_l2 * torch.sum(torch.square(joint_vel), dim=1)
    total_reward = rew_termination + rew_action_l2 + rew_joint_pos_limits + rew_joint_acc_l2 + rew_joint_vel_l2
    
    log = {
        "rew_termination": (rew_termination).mean(),
        "rew_action_l2": (rew_action_l2).mean(),
        "rew_joint_pos_limits": (rew_joint_pos_limits).mean(),
        "rew_joint_acc_l2": (rew_joint_acc_l2).mean(),
        "rew_joint_vel_l2": (rew_joint_vel_l2).mean(),
        }
    return total_reward, log

================================================
FILE: g1_amp_env_cfg.py
================================================
# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

from __future__ import annotations

import os
from dataclasses import MISSING
from .g1_cfg import G1_CFG


from isaaclab.envs import DirectRLEnvCfg
from isaaclab.scene import InteractiveSceneCfg
from isaaclab.sim import PhysxCfg, SimulationCfg
from isaaclab.utils import configclass
from isaaclab.assets import ArticulationCfg

MOTIONS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "motions")


@configclass
class G1AmpEnvCfg(DirectRLEnvCfg):
    """Humanoid AMP environment config (base class)."""
    
    # reward
    rew_termination = -0
    rew_action_l2 = -0.00
    rew_joint_pos_limits = -0
    rew_joint_acc_l2 =-0.00
    rew_joint_vel_l2= -0.00

    # env
    episode_length_s = 10.0
    decimation = 2

    # spaces
    observation_space =  71 + 3 * 10 #TODO
    action_space = 29
    state_space = 0
    num_amp_observations = 2
    amp_observation_space = 71 + 3 * 10

    early_termination = True
    termination_height = 0.5

    motion_file: str = MISSING
    reference_body = "pelvis"
    reset_strategy = "random"  # default, random, random-start
    """Strategy to be followed when resetting each environment (humanoid's pose and joint states).

    * default: pose and joint states are set to the initial state of the asset.
    * random: pose and joint states are set by sampling motions at random, uniform times.
    * random-start: pose and joint states are set by sampling motion at the start (time zero).
    """

    # simulation
    sim: SimulationCfg = SimulationCfg(
        dt=1 / 60,
        render_interval=decimation,
        physx=PhysxCfg(
            gpu_found_lost_pairs_capacity=2**23,
            gpu_total_aggregate_pairs_capacity=2**23,
        ),
    )

    # scene
    scene: InteractiveSceneCfg = InteractiveSceneCfg(num_envs=4096, env_spacing=4.0, replicate_physics=True)

    # robot
    robot: ArticulationCfg = G1_CFG.replace(prim_path="/World/envs/env_.*/Robot")


@configclass
class G1AmpDanceEnvCfg(G1AmpEnvCfg):
    motion_file = os.path.join(MOTIONS_DIR, "G1_dance.npz")
    
@configclass
class G1AmpWalkEnvCfg(G1AmpEnvCfg):
    motion_file = os.path.join(MOTIONS_DIR, "G1_walk.npz")

================================================
FILE: g1_cfg.py
================================================
import isaaclab.sim as sim_utils

from isaaclab.actuators import ImplicitActuatorCfg
from isaaclab.assets import ArticulationCfg
import os

CURRENT_DIR = os.path.dirname(os.path.abspath(__file__))


G1_CFG = ArticulationCfg(
        prim_path="{ENV_REGEX_NS}/Robot",
        spawn=sim_utils.UsdFileCfg(
            usd_path=os.path.join(CURRENT_DIR, "usd/g1_29dof_rev_1_0.usd"),
            activate_contact_sensors=True,
            rigid_props=sim_utils.RigidBodyPropertiesCfg(
                disable_gravity=False,
                retain_accelerations=False,
                linear_damping=0.0,
                angular_damping=0.0,
                max_linear_velocity=3.0,
                max_angular_velocity=3.0,
                max_depenetration_velocity=10.0,
            ),
            articulation_props=sim_utils.ArticulationRootPropertiesCfg(
                enabled_self_collisions=False, solver_position_iteration_count=4, solver_velocity_iteration_count=0
            ),
        ),
        init_state=ArticulationCfg.InitialStateCfg(
            pos=(0.0, 0.0, 0.8),
            joint_pos={
                ".*_hip_pitch_joint": -0.20,
                ".*_knee_joint": 0.42,
                ".*_ankle_pitch_joint": -0.23,
                # ".*_elbow_pitch_joint": 0.87,
                "left_shoulder_roll_joint": 0.16,
                "left_shoulder_pitch_joint": 0.35,
                "right_shoulder_roll_joint": -0.16,
                "right_shoulder_pitch_joint": 0.35,
            },
            joint_vel={".*": 0.0},
        ),
        soft_joint_pos_limit_factor=0.9,
        actuators={
            "legs": ImplicitActuatorCfg(
                joint_names_expr=[
                    ".*_hip_yaw_joint",
                    ".*_hip_roll_joint",
                    ".*_hip_pitch_joint",
                    ".*_knee_joint",
                    "waist_yaw_joint",
                    "waist_roll_joint",
                    "waist_pitch_joint",
                ],
                effort_limit=300,
                velocity_limit=100.0,
                stiffness={
                    ".*_hip_yaw_joint": 150.0,
                    ".*_hip_roll_joint": 150.0,
                    ".*_hip_pitch_joint": 200.0,
                    ".*_knee_joint": 200.0,
                    "waist_yaw_joint": 200.0,
                    "waist_roll_joint": 200.0,
                    "waist_pitch_joint": 200.0,
                },
                damping={
                    ".*_hip_yaw_joint": 5.0,
                    ".*_hip_roll_joint": 5.0,
                    ".*_hip_pitch_joint": 5.0,
                    ".*_knee_joint": 5.0,
                    "waist_yaw_joint": 5.0,
                    "waist_roll_joint": 5.0,
                    "waist_pitch_joint": 5.0,
                },
                armature={
                    ".*_hip_.*": 0.01,
                    ".*_knee_joint": 0.01,
                    "waist_yaw_joint": 0.01,
                    "waist_roll_joint": 0.01,
                    "waist_pitch_joint": 0.01,
                },
            ),
            "feet": ImplicitActuatorCfg(
                effort_limit=20,
                joint_names_expr=[".*_ankle_pitch_joint", ".*_ankle_roll_joint"],
                stiffness=20.0,
                damping=2.0,
                armature=0.01,
            ),
            "arms": ImplicitActuatorCfg(
                joint_names_expr=[
                    ".*_shoulder_pitch_joint",
                    ".*_shoulder_roll_joint",
                    ".*_shoulder_yaw_joint",
                    ".*_elbow_joint",
                    ".*_wrist_.*",

                ],
                effort_limit=300,
                velocity_limit=100.0,
                stiffness=40.0,
                damping=10.0,
                armature={
                    ".*_shoulder_.*": 0.01,
                    ".*_elbow_.*": 0.01,
                    ".*_wrist_.*": 0.01,

                },
            ),
        },
    )

================================================
FILE: humanoid_amp_env.py
================================================
# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

from __future__ import annotations

import gymnasium as gym
import numpy as np
import torch

import isaaclab.sim as sim_utils
from isaaclab.assets import Articulation
from isaaclab.envs import DirectRLEnv
from isaaclab.sim.spawners.from_files import GroundPlaneCfg, spawn_ground_plane
from isaaclab.utils.math import quat_apply

from .humanoid_amp_env_cfg import HumanoidAmpEnvCfg
from .motions import MotionLoader


class HumanoidAmpEnv(DirectRLEnv):
    cfg: HumanoidAmpEnvCfg

    def __init__(self, cfg: HumanoidAmpEnvCfg, render_mode: str | None = None, **kwargs):
        super().__init__(cfg, render_mode, **kwargs)

        # action offset and scale
        dof_lower_limits = self.robot.data.soft_joint_pos_limits[0, :, 0]
        dof_upper_limits = self.robot.data.soft_joint_pos_limits[0, :, 1]
        self.action_offset = 0.5 * (dof_upper_limits + dof_lower_limits)
        self.action_scale = dof_upper_limits - dof_lower_limits

        # load motion
        self._motion_loader = MotionLoader(motion_file=self.cfg.motion_file, device=self.device)

        # DOF and key body indexes
        key_body_names = ["right_hand", "left_hand", "right_foot", "left_foot"]
        self.ref_body_index = self.robot.data.body_names.index(self.cfg.reference_body)
        self.key_body_indexes = [self.robot.data.body_names.index(name) for name in key_body_names]
        self.motion_dof_indexes = self._motion_loader.get_dof_index(self.robot.data.joint_names)
        self.motion_ref_body_index = self._motion_loader.get_body_index([self.cfg.reference_body])[0]
        self.motion_key_body_indexes = self._motion_loader.get_body_index(key_body_names)

        # reconfigure AMP observation space according to the number of observations and create the buffer
        self.amp_observation_size = self.cfg.num_amp_observations * self.cfg.amp_observation_space
        self.amp_observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(self.amp_observation_size,))
        self.amp_observation_buffer = torch.zeros(
            (self.num_envs, self.cfg.num_amp_observations, self.cfg.amp_observation_space), device=self.device
        )

    def _setup_scene(self):
        self.robot = Articulation(self.cfg.robot)
        # add ground plane
        spawn_ground_plane(
            prim_path="/World/ground",
            cfg=GroundPlaneCfg(
                physics_material=sim_utils.RigidBodyMaterialCfg(
                    static_friction=1.0,
                    dynamic_friction=1.0,
                    restitution=0.0,
                ),
            ),
        )
        # clone and replicate
        self.scene.clone_environments(copy_from_source=False)
        # we need to explicitly filter collisions for CPU simulation
        if self.device == "cpu":
            self.scene.filter_collisions(global_prim_paths=["/World/ground"])

        # add articulation to scene
        self.scene.articulations["robot"] = self.robot
        # add lights
        light_cfg = sim_utils.DomeLightCfg(intensity=2000.0, color=(0.75, 0.75, 0.75))
        light_cfg.func("/World/Light", light_cfg)

    def _pre_physics_step(self, actions: torch.Tensor):
        self.actions = actions.clone()

    def _apply_action(self):
        target = self.action_offset + self.action_scale * self.actions
        self.robot.set_joint_position_target(target)

    def _get_observations(self) -> dict:
        # build task observation
        obs = compute_obs(
            self.robot.data.joint_pos,
            self.robot.data.joint_vel,
            self.robot.data.body_pos_w[:, self.ref_body_index],
            self.robot.data.body_quat_w[:, self.ref_body_index],
            self.robot.data.body_lin_vel_w[:, self.ref_body_index],
            self.robot.data.body_ang_vel_w[:, self.ref_body_index],
            self.robot.data.body_pos_w[:, self.key_body_indexes],
        )

        # update AMP observation history
        for i in reversed(range(self.cfg.num_amp_observations - 1)):
            self.amp_observation_buffer[:, i + 1] = self.amp_observation_buffer[:, i]
        # build AMP observation
        self.amp_observation_buffer[:, 0] = obs.clone()
        self.extras = {"amp_obs": self.amp_observation_buffer.view(-1, self.amp_observation_size)}

        return {"policy": obs}

    def _get_rewards(self) -> torch.Tensor:
        return torch.ones((self.num_envs,), dtype=torch.float32, device=self.sim.device)

    def _get_dones(self) -> tuple[torch.Tensor, torch.Tensor]:
        time_out = self.episode_length_buf >= self.max_episode_length - 1
        if self.cfg.early_termination:
            died = self.robot.data.body_pos_w[:, self.ref_body_index, 2] < self.cfg.termination_height
        else:
            died = torch.zeros_like(time_out)
        return died, time_out

    def _reset_idx(self, env_ids: torch.Tensor | None):
        if env_ids is None or len(env_ids) == self.num_envs:
            env_ids = self.robot._ALL_INDICES
        self.robot.reset(env_ids)
        super()._reset_idx(env_ids)

        if self.cfg.reset_strategy == "default":
            root_state, joint_pos, joint_vel = self._reset_strategy_default(env_ids)
        elif self.cfg.reset_strategy.startswith("random"):
            start = "start" in self.cfg.reset_strategy
            root_state, joint_pos, joint_vel = self._reset_strategy_random(env_ids, start)
        else:
            raise ValueError(f"Unknown reset strategy: {self.cfg.reset_strategy}")

        self.robot.write_root_link_pose_to_sim(root_state[:, :7], env_ids)
        self.robot.write_root_com_velocity_to_sim(root_state[:, 7:], env_ids)
        self.robot.write_joint_state_to_sim(joint_pos, joint_vel, None, env_ids)

    # reset strategies

    def _reset_strategy_default(self, env_ids: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        root_state = self.robot.data.default_root_state[env_ids].clone()
        root_state[:, :3] += self.scene.env_origins[env_ids]
        joint_pos = self.robot.data.default_joint_pos[env_ids].clone()
        joint_vel = self.robot.data.default_joint_vel[env_ids].clone()
        return root_state, joint_pos, joint_vel

    def _reset_strategy_random(
        self, env_ids: torch.Tensor, start: bool = False
    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        # sample random motion times (or zeros if start is True)
        num_samples = env_ids.shape[0]
        times = np.zeros(num_samples) if start else self._motion_loader.sample_times(num_samples)
        # sample random motions
        (
            dof_positions,
            dof_velocities,
            body_positions,
            body_rotations,
            body_linear_velocities,
            body_angular_velocities,
        ) = self._motion_loader.sample(num_samples=num_samples, times=times)

        # get root transforms (the humanoid torso)
        motion_torso_index = self._motion_loader.get_body_index(["torso"])[0]
        root_state = self.robot.data.default_root_state[env_ids].clone()
        root_state[:, 0:3] = body_positions[:, motion_torso_index] + self.scene.env_origins[env_ids]
        root_state[:, 2] += 0.15  # lift the humanoid slightly to avoid collisions with the ground
        root_state[:, 3:7] = body_rotations[:, motion_torso_index]
        root_state[:, 7:10] = body_linear_velocities[:, motion_torso_index]
        root_state[:, 10:13] = body_angular_velocities[:, motion_torso_index]
        # get DOFs state
        dof_pos = dof_positions[:, self.motion_dof_indexes]
        dof_vel = dof_velocities[:, self.motion_dof_indexes]

        # update AMP observation
        amp_observations = self.collect_reference_motions(num_samples, times)
        self.amp_observation_buffer[env_ids] = amp_observations.view(num_samples, self.cfg.num_amp_observations, -1)

        return root_state, dof_pos, dof_vel

    # env methods

    def collect_reference_motions(self, num_samples: int, current_times: np.ndarray | None = None) -> torch.Tensor:
        # sample random motion times (or use the one specified)
        if current_times is None:
            current_times = self._motion_loader.sample_times(num_samples)
        times = (
            np.expand_dims(current_times, axis=-1)
            - self._motion_loader.dt * np.arange(0, self.cfg.num_amp_observations)
        ).flatten()
        # get motions
        (
            dof_positions,
            dof_velocities,
            body_positions,
            body_rotations,
            body_linear_velocities,
            body_angular_velocities,
        ) = self._motion_loader.sample(num_samples=num_samples, times=times)
        # compute AMP observation
        amp_observation = compute_obs(
            dof_positions[:, self.motion_dof_indexes],
            dof_velocities[:, self.motion_dof_indexes],
            body_positions[:, self.motion_ref_body_index],
            body_rotations[:, self.motion_ref_body_index],
            body_linear_velocities[:, self.motion_ref_body_index],
            body_angular_velocities[:, self.motion_ref_body_index],
            body_positions[:, self.motion_key_body_indexes],
        )
        return amp_observation.view(-1, self.amp_observation_size)


@torch.jit.script
def quaternion_to_tangent_and_normal(q: torch.Tensor) -> torch.Tensor:
    ref_tangent = torch.zeros_like(q[..., :3])
    ref_normal = torch.zeros_like(q[..., :3])
    ref_tangent[..., 0] = 1
    ref_normal[..., -1] = 1
    tangent = quat_apply(q, ref_tangent)
    normal = quat_apply(q, ref_normal)
    return torch.cat([tangent, normal], dim=len(tangent.shape) - 1)


@torch.jit.script
def compute_obs(
    dof_positions: torch.Tensor,
    dof_velocities: torch.Tensor,
    root_positions: torch.Tensor,
    root_rotations: torch.Tensor,
    root_linear_velocities: torch.Tensor,
    root_angular_velocities: torch.Tensor,
    key_body_positions: torch.Tensor,
) -> torch.Tensor:
    obs = torch.cat(
        (
            dof_positions,
            dof_velocities,
            root_positions[:, 2:3],  # root body height
            quaternion_to_tangent_and_normal(root_rotations),
            root_linear_velocities,
            root_angular_velocities,
            (key_body_positions - root_positions.unsqueeze(-2)).view(key_body_positions.shape[0], -1),
        ),
        dim=-1,
    )
    return obs


================================================
FILE: humanoid_amp_env_cfg.py
================================================
# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

from __future__ import annotations

import os
from dataclasses import MISSING

from isaaclab_assets import HUMANOID_28_CFG

from isaaclab.actuators import ImplicitActuatorCfg
from isaaclab.assets import ArticulationCfg
from isaaclab.envs import DirectRLEnvCfg
from isaaclab.scene import InteractiveSceneCfg
from isaaclab.sim import PhysxCfg, SimulationCfg
from isaaclab.utils import configclass

MOTIONS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "motions")


@configclass
class HumanoidAmpEnvCfg(DirectRLEnvCfg):
    """Humanoid AMP environment config (base class)."""

    # env
    episode_length_s = 10.0
    decimation = 2

    # spaces
    observation_space = 81
    action_space = 28
    state_space = 0
    num_amp_observations = 2
    amp_observation_space = 81

    early_termination = True
    termination_height = 0.5

    motion_file: str = MISSING
    reference_body = "torso"
    reset_strategy = "random"  # default, random, random-start
    """Strategy to be followed when resetting each environment (humanoid's pose and joint states).

    * default: pose and joint states are set to the initial state of the asset.
    * random: pose and joint states are set by sampling motions at random, uniform times.
    * random-start: pose and joint states are set by sampling motion at the start (time zero).
    """

    # simulation
    sim: SimulationCfg = SimulationCfg(
        dt=1 / 60,
        render_interval=decimation,
        physx=PhysxCfg(
            gpu_found_lost_pairs_capacity=2**23,
            gpu_total_aggregate_pairs_capacity=2**23,
        ),
    )

    # scene
    scene: InteractiveSceneCfg = InteractiveSceneCfg(num_envs=4096, env_spacing=10.0, replicate_physics=True)

    # robot
    robot: ArticulationCfg = HUMANOID_28_CFG.replace(prim_path="/World/envs/env_.*/Robot").replace(
        actuators={
            "body": ImplicitActuatorCfg(
                joint_names_expr=[".*"],
                velocity_limit=100.0,
                stiffness=None,
                damping=None,
            ),
        },
    )


@configclass
class HumanoidAmpDanceEnvCfg(HumanoidAmpEnvCfg):
    motion_file = os.path.join(MOTIONS_DIR, "humanoid_dance.npz")


@configclass
class HumanoidAmpRunEnvCfg(HumanoidAmpEnvCfg):
    motion_file = os.path.join(MOTIONS_DIR, "humanoid_run.npz")


@configclass
class HumanoidAmpWalkEnvCfg(HumanoidAmpEnvCfg):
    motion_file = os.path.join(MOTIONS_DIR, "humanoid_walk.npz")


================================================
FILE: motions/README.md
================================================
# Motion files

The motion files are in NumPy-file format that contains data from the skeleton DOFs and bodies that perform the motion.

The data (accessed by key) is described in the following table, where:

* `N` is the number of motion frames recorded
* `D` is the number of skeleton DOFs
* `B` is the number of skeleton bodies

| Key | Dtype | Shape | Description |
| --- | ---- | ----- | ----------- |
| `fps` | int64 | () | FPS at which motion was sampled |
| `dof_names` | unicode string | (D,) | Skeleton DOF names |
| `body_names` | unicode string | (B,) | Skeleton body names |
| `dof_positions` | float32 | (N, D) | Skeleton DOF positions |
| `dof_velocities` | float32 | (N, D) | Skeleton DOF velocities |
| `body_positions` | float32 | (N, B, 3) | Skeleton body positions |
| `body_rotations` | float32 | (N, B, 4) | Skeleton body rotations (as `wxyz` quaternion) |
| `body_linear_velocities` | float32 | (N, B, 3) | Skeleton body linear velocities |
| `body_angular_velocities` | float32 | (N, B, 3) | Skeleton body angular velocities |

## Motion visualization

The `motion_viewer.py` file allows to visualize the skeleton motion recorded in a motion file.

Open an terminal in the `motions` folder and run the following command.

```bash
python motion_viewer.py --file MOTION_FILE_NAME.npz
```

See `python motion_viewer.py --help` for available arguments.

Humanoid:
```
dof_names:
  Data type: <U16
  Data shape: (28,)
  Joint names:
    1. abdomen_x
    2. abdomen_y
    3. abdomen_z
    4. neck_x
    5. neck_y
    6. neck_z
    7. right_shoulder_x
    8. right_shoulder_y
    9. right_shoulder_z
    10. right_elbow
    11. left_shoulder_x
    12. left_shoulder_y
    13. left_shoulder_z
    14. left_elbow
    15. right_hip_x
    16. right_hip_y
    17. right_hip_z
    18. right_knee
    19. right_ankle_x
    20. right_ankle_y
    21. right_ankle_z
    22. left_hip_x
    23. left_hip_y
    24. left_hip_z
    25. left_knee
    26. left_ankle_x
    27. left_ankle_y
    28. left_ankle_z

body_names:
  Data type: <U15
  Data shape: (15,)
  Body part names:
    1. pelvis
    2. torso
    3. head
    4. right_upper_arm
    5. right_lower_arm
    6. right_hand
    7. left_upper_arm
    8. left_lower_arm
    9. left_hand
    10. right_thigh
    11. right_shin
    12. right_foot
    13. left_thigh
    14. left_shin
    15. left_foot
```
G1:
```
dof_names:
  Data type: <U26
  Data shape: (29,)
  Joint names:
    1. left_hip_pitch_joint
    2. right_hip_pitch_joint
    3. waist_yaw_joint
    4. left_hip_roll_joint
    5. right_hip_roll_joint
    6. waist_roll_joint
    7. left_hip_yaw_joint
    8. right_hip_yaw_joint
    9. waist_pitch_joint
    10. left_knee_joint
    11. right_knee_joint
    12. left_shoulder_pitch_joint
    13. right_shoulder_pitch_joint
    14. left_ankle_pitch_joint
    15. right_ankle_pitch_joint
    16. left_shoulder_roll_joint
    17. right_shoulder_roll_joint
    18. left_ankle_roll_joint
    19. right_ankle_roll_joint
    20. left_shoulder_yaw_joint
    21. right_shoulder_yaw_joint
    22. left_elbow_joint
    23. right_elbow_joint
    24. left_wrist_roll_joint
    25. right_wrist_roll_joint
    26. left_wrist_pitch_joint
    27. right_wrist_pitch_joint
    28. left_wrist_yaw_joint
    29. right_wrist_yaw_joint

body_names:
  Data type: <U25
  Data shape: (39,)
  Body part names:
    1. pelvis
    2. imu_in_pelvis
    3. left_hip_pitch_link
    4. pelvis_contour_link
    5. right_hip_pitch_link
    6. waist_yaw_link
    7. left_hip_roll_link
    8. right_hip_roll_link
    9. waist_roll_link
    10. left_hip_yaw_link
    11. right_hip_yaw_link
    12. torso_link
    13. left_knee_link
    14. right_knee_link
    15. d435_link
    16. head_link
    17. imu_in_torso
    18. left_shoulder_pitch_link
    19. logo_link
    20. mid360_link
    21. right_shoulder_pitch_link
    22. left_ankle_pitch_link
    23. right_ankle_pitch_link
    24. left_shoulder_roll_link
    25. right_shoulder_roll_link
    26. left_ankle_roll_link
    27. right_ankle_roll_link
    28. left_shoulder_yaw_link
    29. right_shoulder_yaw_link
    30. left_elbow_link
    31. right_elbow_link
    32. left_wrist_roll_link
    33. right_wrist_roll_link
    34. left_wrist_pitch_link
    35. right_wrist_pitch_link
    36. left_wrist_yaw_link
    37. right_wrist_yaw_link
    38. left_rubber_hand
    39. right_rubber_hand
```

================================================
FILE: motions/__init__.py
================================================
# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

"""
AMP Motion Loader and motion files.
"""

from .motion_loader import MotionLoader
from .motion_viewer import MotionViewer

================================================
FILE: motions/data_convert.py
================================================
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

"""
Humanoid Motion Data Converter

This script converts humanoid motion data from CSV format to NPZ format suitable for
Isaac Lab's Adversarial Motion Prior (AMP) training (only suit CSV format in 
https://huggingface.co/datasets/lvhaidong/LAFAN1_Retargeting_Dataset/blob/main/README.md)

USAGE:
    python data_convert.py
    
DESCRIPTION:
    This script performs the following operations:
    1. Reads raw motion capture data from CSV files
    2. Interpolates data from 30fps to 60fps for smoother motion
    3. Performs forward kinematics using Pinocchio to compute body poses
    4. Calculates velocities using central differences and smoothing
    5. Saves the processed data in NPZ format for Isaac Lab training
    
INPUT:
    - CSV file containing motion capture data (joints positions + root pose)
    - URDF file defining the robot model
    - Mesh directory containing 3D models
    
OUTPUT:
    - NPZ file containing:
        - fps: Sampling rate (60 Hz)
        - dof_names: Joint names
        - body_names: Body link names
        - dof_positions: Joint positions over time
        - dof_velocities: Joint velocities over time
        - body_positions: Body positions in world frame
        - body_rotations: Body orientations (quaternions)
        - body_linear_velocities: Body linear velocities
        - body_angular_velocities: Body angular velocities
        
REQUIREMENTS:
    - numpy
    - pandas
    - scipy
    - pinocchio
    
CONFIGURATION:
    - Modify csv_file variable to point to your motion data
    - Adjust start_idx and end_idx to select frame range
    - Update urdf_path and mesh_dir for your robot model
    - Customize joint_names and body_names arrays as needed
"""

import numpy as np
import pandas as pd
from scipy.ndimage import gaussian_filter1d
from scipy.interpolate import interp1d
from scipy.spatial.transform import Rotation as R, Slerp
import pinocchio as pin


# -----------------------------------------------
# 1. Quaternion Helper Functions
# -----------------------------------------------
def quaternion_inverse(q):
    """Input q: (w, x, y, z), returns its inverse."""
    w, x, y, z = q
    norm_sq = w*w + x*x + y*y + z*z
    if norm_sq < 1e-8:
        norm_sq = 1e-8
    return np.array([w, -x, -y, -z], dtype=q.dtype) / norm_sq

def quaternion_multiply(q1, q2):
    """Input/output: (w, x, y, z)"""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    w = w1*w2 - x1*x2 - y1*y2 - z1*z2
    x = w1*x2 + x1*w2 + y1*z2 - z1*y2
    y = w1*y2 - x1*z2 + y1*w2 + z1*x2
    z = w1*z2 + x1*y2 - y1*x2 + z1*w2
    return np.array([w, x, y, z], dtype=q1.dtype)

def compute_angular_velocity(q_prev, q_next, dt, eps=1e-8):
    """
    Compute angular velocity from adjacent quaternions (w, x, y, z):
      - Relative rotation q_rel = inv(q_prev) * q_next
      - Extract rotation angle and axis from q_rel
      - Return (angle / dt) * axis
    """
    q_inv = quaternion_inverse(q_prev)
    q_rel = quaternion_multiply(q_inv, q_next)
    norm_q_rel = np.linalg.norm(q_rel)
    if norm_q_rel < eps:
        return np.zeros(3, dtype=np.float32)
    q_rel /= norm_q_rel

    w = np.clip(q_rel[0], -1.0, 1.0)
    angle = 2.0 * np.arccos(w)
    sin_half = np.sqrt(1.0 - w*w)
    if sin_half < eps:
        return np.zeros(3, dtype=np.float32)
    axis = q_rel[1:] / sin_half
    return (angle / dt) * axis


# -----------------------------------------------
# 2. Helper Function to Build Pinocchio RobotWrapper
# -----------------------------------------------
def build_pin_robot(urdf_path, mesh_dir):
    """
    Load URDF file and construct a pin.RobotWrapper with free-flyer.
    Args:
        urdf_path: Path to the URDF file
        mesh_dir: Directory containing associated mesh files
    Returns:
        robot (pin.RobotWrapper)
    """
    # Note: If URDF already contains floating joint, you can modify this to use BuildFromURDF(urdf_path, ...)
    robot = pin.RobotWrapper.BuildFromURDF(
        urdf_path,
        mesh_dir,
        pin.JointModelFreeFlyer()
    )
    return robot


# -----------------------------------------------
# 3. Main Conversion Pipeline
# -----------------------------------------------
def main():
    # 3.1 Read CSV data and extract desired frame range (changed to frames 250~550)
    csv_file = "g1/dance1_subject2.csv"
    df = pd.read_csv(csv_file, header=None)
    start_idx = 250
    end_idx = 550
    # csv_file = "g1/walk1_subject1.csv"
    # df = pd.read_csv(csv_file, header=None)
    # start_idx = 100
    # end_idx = 300
    data_orig = df.iloc[start_idx:end_idx].to_numpy(dtype=np.float32)
    N_orig = data_orig.shape[0]
    print(f"Loading CSV: {csv_file}, frame range [{start_idx}:{end_idx}], total {N_orig} frames.")

    # Original CSV: first 7 columns are root data, remaining are joint data
    root_data_orig = data_orig[:, :7]      # (N_orig, 7)
    joint_data_orig = data_orig[:, 7:]       # (N_orig, D)

    # 3.2 Define original sampling rate (30fps) and new sampling rate (60fps), construct time series
    fps_orig = 30
    dt_orig = 1.0 / fps_orig
    t_orig = np.linspace(0, (N_orig - 1) * dt_orig, N_orig)

    fps_new = 60
    dt_new = 1.0 / fps_new
    N_new = 2 * N_orig - 1   # Insert one new frame between every two frames
    t_new = np.linspace(0, (N_orig - 1) * dt_orig, N_new)

    # 3.3 Interpolate root_data positions and joint angles
    # Linear interpolation for positions (first three columns)
    root_pos_interp = interp1d(t_orig, root_data_orig[:, 0:3], axis=0, kind='linear')(t_new)

    # Slerp interpolation for quaternions (qx, qy, qz, qw)
    # Note: Quaternions in CSV are stored as (qx, qy, qz, qw), which matches scipy requirements
    rotations_orig = R.from_quat(root_data_orig[:, 3:7])
    slerp = Slerp(t_orig, rotations_orig)
    rotations_new = slerp(t_new)
    root_quat_interp = rotations_new.as_quat()  # (N_new, 4) still (qx, qy, qz, qw)

    # Combine interpolated root data
    root_data = np.hstack((root_pos_interp, root_quat_interp))  # (N_new, 7)

    # Linear interpolation for joint angles (joint_data)
    joint_data = interp1d(t_orig, joint_data_orig, axis=0, kind='linear')(t_new)

    # Update frame count, sampling rate, and time interval
    N = N_new
    fps = fps_new
    dt = dt_new

    # 3.4 Define joint names 
    joint_names = [
        "left_hip_pitch_joint",
        "left_hip_roll_joint",
        "left_hip_yaw_joint",
        "left_knee_joint",
        "left_ankle_pitch_joint",
        "left_ankle_roll_joint",
        "right_hip_pitch_joint",
        "right_hip_roll_joint",
        "right_hip_yaw_joint",
        "right_knee_joint",
        "right_ankle_pitch_joint",
        "right_ankle_roll_joint",
        "waist_yaw_joint",
        "waist_roll_joint",
        "waist_pitch_joint",
        "left_shoulder_pitch_joint",
        "left_shoulder_roll_joint",
        "left_shoulder_yaw_joint",
        "left_elbow_joint",
        "left_wrist_roll_joint",
        "left_wrist_pitch_joint",
        "left_wrist_yaw_joint",
        "right_shoulder_pitch_joint",
        "right_shoulder_roll_joint",
        "right_shoulder_yaw_joint",
        "right_elbow_joint",
        "right_wrist_roll_joint",
        "right_wrist_pitch_joint",
        "right_wrist_yaw_joint"
    ]
    dof_names = np.array(joint_names, dtype=np.str_)

    # 3.5 Get joint positions (excluding Root)
    dof_positions = joint_data.copy()      # shape: (N, D)

    # 3.6 Calculate joint velocities (central differences + boundary forward/backward differences + Gaussian smoothing)
    dof_velocities = np.zeros_like(dof_positions)
    dof_velocities[1:-1] = (dof_positions[2:] - dof_positions[:-2]) / (2 * dt)
    dof_velocities[0] = (dof_positions[1] - dof_positions[0]) / dt
    dof_velocities[-1] = (dof_positions[-1] - dof_positions[-2]) / dt
    dof_velocities_smoothed = gaussian_filter1d(dof_velocities, sigma=1, axis=0)

    # 3.7 Specify link names to record and get their poses in global coordinate frame
    body_names = [
        "pelvis", 
        # "head_link",
        "left_shoulder_pitch_link",
        "right_shoulder_pitch_link",
        "left_elbow_link",
        "right_elbow_link",
        "right_hip_yaw_link",
        "left_hip_yaw_link",
        "right_rubber_hand",
        "left_rubber_hand",
        "right_ankle_roll_link",
        "left_ankle_roll_link"
    ]



    body_names = np.array(body_names, dtype=np.str_)
    B = len(body_names)

    body_positions = np.zeros((N, B, 3), dtype=np.float32)
    body_rotations = np.zeros((N, B, 4), dtype=np.float32)

    # 3.8 Build pin.RobotWrapper
    #    (Please change urdf_path and mesh_dir to your actual paths)
    urdf_path = "robot_description/g1/g1_29dof_rev_1_0.urdf"
    mesh_dir = "robot_description/g1"
    robot = build_pin_robot(urdf_path, mesh_dir)
    model = robot.model
    data_pk = robot.data

    nq = model.nq  # Total DOF (including free-flyer)
    if (7 + joint_data.shape[1]) != nq:
        print(f"Warning: CSV columns={7 + joint_data.shape[1]}, but pinocchio nq={nq}, may need to check or adjust script parsing.")

    # 3.9 Perform forward kinematics (FK) for each frame to get link poses in world coordinate frame
    q_pin = pin.neutral(model)

    for i in range(N):
        # Set free-flyer
        q_pin[0:3] = root_data[i, 0:3]
        # Quaternions stored in root_data are in (qx, qy, qz, qw) order
        q_pin[3:7] = root_data[i, 3:7]
        # Other joints
        dofD = joint_data.shape[1]
        q_pin[7:7 + dofD] = joint_data[i, :]

        # Forward kinematics
        pin.forwardKinematics(model, data_pk, q_pin)
        pin.updateFramePlacements(model, data_pk)

        # Read and save global poses for each link
        for j, link_name in enumerate(body_names):
            fid = model.getFrameId(link_name)
            link_tf = data_pk.oMf[fid]  # Link transformation in world frame

            # Translation
            body_positions[i, j, :] = link_tf.translation
            # Rotation (pin.Quaternion defaults to (x,y,z,w), need to convert to (w,x,y,z))
            quat_xyzw = pin.Quaternion(link_tf.rotation)
            body_rotations[i, j, :] = np.array([quat_xyzw.w,
                                                quat_xyzw.x,
                                                quat_xyzw.y,
                                                quat_xyzw.z],
                                               dtype=np.float32)

    # 3.10 Calculate body linear and angular velocities (in world coordinate frame)
    # -- Linear velocities: central differences --
    body_linear_velocities = np.zeros_like(body_positions)
    body_linear_velocities[1:-1] = (body_positions[2:] - body_positions[:-2]) / (2 * dt)
    body_linear_velocities[0] = (body_positions[1] - body_positions[0]) / dt
    body_linear_velocities[-1] = (body_positions[-1] - body_positions[-2]) / dt
    body_linear_velocities = gaussian_filter1d(body_linear_velocities, sigma=1, axis=0)

    # -- Angular velocities: computed from adjacent quaternions (in world coordinate frame) --
    body_angular_velocities = np.zeros((N, B, 3), dtype=np.float32)
    for j in range(B):
        quats = body_rotations[:, j, :]
        angular_vels = np.zeros((N, 3), dtype=np.float32)
        if N > 1:
            angular_vels[0] = compute_angular_velocity(quats[0], quats[1], dt)
            angular_vels[-1] = compute_angular_velocity(quats[-2], quats[-1], dt)
        for k in range(1, N - 1):
            av1 = compute_angular_velocity(quats[k - 1], quats[k], dt)
            av2 = compute_angular_velocity(quats[k], quats[k + 1], dt)
            angular_vels[k] = 0.5 * (av1 + av2)
        # Smoothing
        body_angular_velocities[:, j, :] = gaussian_filter1d(angular_vels, sigma=1, axis=0)

    # 3.11 Package and save to NPZ
    data_dict = {
        "fps": fps,                                   # int64 scalar, sampling rate
        "dof_names": dof_names,                       # unicode array (D,)
        "body_names": body_names,                     # unicode array (B,)
        "dof_positions": dof_positions,               # float32 (N, D)
        "dof_velocities": dof_velocities_smoothed,    # float32 (N, D)
        "body_positions": body_positions,             # float32 (N, B, 3)
        "body_rotations": body_rotations,             # float32 (N, B, 4) (w,x,y,z)
        "body_linear_velocities": body_linear_velocities,     # float32 (N, B, 3)
        "body_angular_velocities": body_angular_velocities    # float32 (N, B, 3)
    }

    out_filename = "g1.npz"
    np.savez(out_filename, **data_dict)

    print(f"Conversion completed, data saved to {out_filename}")
    print("fps:", fps)
    print("dof_names:", dof_names.shape)
    print("body_names:", body_names.shape)
    print("dof_positions:", dof_positions.shape)
    print("dof_velocities:", dof_velocities_smoothed.shape)
    print("body_positions:", body_positions.shape)
    print("body_rotations:", body_rotations.shape)
    print("body_linear_velocities:", body_linear_velocities.shape)
    print("body_angular_velocities:", body_angular_velocities.shape)


if __name__ == "__main__":
    main()


================================================
FILE: motions/motion_loader.py
================================================
# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

import numpy as np
import os
import torch
from typing import Optional


class MotionLoader:
    """
    Helper class to load and sample motion data from NumPy-file format.
    """

    def __init__(self, motion_file: str, device: torch.device) -> None:
        """Load a motion file and initialize the internal variables.

        Args:
            motion_file: Motion file path to load.
            device: The device to which to load the data.

        Raises:
            AssertionError: If the specified motion file doesn't exist.
        """
        assert os.path.isfile(motion_file), f"Invalid file path: {motion_file}"
        data = np.load(motion_file)

        self.device = device
        self._dof_names = data["dof_names"].tolist()
        self._body_names = data["body_names"].tolist()

        self.dof_positions = torch.tensor(data["dof_positions"], dtype=torch.float32, device=self.device)
        self.dof_velocities = torch.tensor(data["dof_velocities"], dtype=torch.float32, device=self.device)
        self.body_positions = torch.tensor(data["body_positions"], dtype=torch.float32, device=self.device)
        self.body_rotations = torch.tensor(data["body_rotations"], dtype=torch.float32, device=self.device)
        self.body_linear_velocities = torch.tensor(
            data["body_linear_velocities"], dtype=torch.float32, device=self.device
        )
        self.body_angular_velocities = torch.tensor(
            data["body_angular_velocities"], dtype=torch.float32, device=self.device
        )

        self.dt = 1.0 / data["fps"]
        self.num_frames = self.dof_positions.shape[0]
        self.duration = self.dt * (self.num_frames - 1)
        print(f"Motion loaded ({motion_file}): duration: {self.duration} sec, frames: {self.num_frames}")

    @property
    def dof_names(self) -> list[str]:
        """Skeleton DOF names."""
        return self._dof_names

    @property
    def body_names(self) -> list[str]:
        """Skeleton rigid body names."""
        return self._body_names

    @property
    def num_dofs(self) -> int:
        """Number of skeleton's DOFs."""
        return len(self._dof_names)

    @property
    def num_bodies(self) -> int:
        """Number of skeleton's rigid bodies."""
        return len(self._body_names)

    def _interpolate(
        self,
        a: torch.Tensor,
        *,
        b: Optional[torch.Tensor] = None,
        blend: Optional[torch.Tensor] = None,
        start: Optional[np.ndarray] = None,
        end: Optional[np.ndarray] = None,
    ) -> torch.Tensor:
        """Linear interpolation between consecutive values.

        Args:
            a: The first value. Shape is (N, X) or (N, M, X).
            b: The second value. Shape is (N, X) or (N, M, X).
            blend: Interpolation coefficient between 0 (a) and 1 (b).
            start: Indexes to fetch the first value. If both, ``start`` and ``end` are specified,
                the first and second values will be fetches from the argument ``a`` (dimension 0).
            end: Indexes to fetch the second value. If both, ``start`` and ``end` are specified,
                the first and second values will be fetches from the argument ``a`` (dimension 0).

        Returns:
            Interpolated values. Shape is (N, X) or (N, M, X).
        """
        if start is not None and end is not None:
            return self._interpolate(a=a[start], b=a[end], blend=blend)
        if a.ndim >= 2:
            blend = blend.unsqueeze(-1)
        if a.ndim >= 3:
            blend = blend.unsqueeze(-1)
        return (1.0 - blend) * a + blend * b

    def _slerp(
        self,
        q0: torch.Tensor,
        *,
        q1: Optional[torch.Tensor] = None,
        blend: Optional[torch.Tensor] = None,
        start: Optional[np.ndarray] = None,
        end: Optional[np.ndarray] = None,
    ) -> torch.Tensor:
        """Interpolation between consecutive rotations (Spherical Linear Interpolation).

        Args:
            q0: The first quaternion (wxyz). Shape is (N, 4) or (N, M, 4).
            q1: The second quaternion (wxyz). Shape is (N, 4) or (N, M, 4).
            blend: Interpolation coefficient between 0 (q0) and 1 (q1).
            start: Indexes to fetch the first quaternion. If both, ``start`` and ``end` are specified,
                the first and second quaternions will be fetches from the argument ``q0`` (dimension 0).
            end: Indexes to fetch the second quaternion. If both, ``start`` and ``end` are specified,
                the first and second quaternions will be fetches from the argument ``q0`` (dimension 0).

        Returns:
            Interpolated quaternions. Shape is (N, 4) or (N, M, 4).
        """
        if start is not None and end is not None:
            return self._slerp(q0=q0[start], q1=q0[end], blend=blend)
        if q0.ndim >= 2:
            blend = blend.unsqueeze(-1)
        if q0.ndim >= 3:
            blend = blend.unsqueeze(-1)

        qw, qx, qy, qz = 0, 1, 2, 3  # wxyz
        cos_half_theta = (
            q0[..., qw] * q1[..., qw]
            + q0[..., qx] * q1[..., qx]
            + q0[..., qy] * q1[..., qy]
            + q0[..., qz] * q1[..., qz]
        )

        neg_mask = cos_half_theta < 0
        q1 = q1.clone()
        q1[neg_mask] = -q1[neg_mask]
        cos_half_theta = torch.abs(cos_half_theta)
        cos_half_theta = torch.unsqueeze(cos_half_theta, dim=-1)

        half_theta = torch.acos(cos_half_theta)
        sin_half_theta = torch.sqrt(1.0 - cos_half_theta * cos_half_theta)

        ratio_a = torch.sin((1 - blend) * half_theta) / sin_half_theta
        ratio_b = torch.sin(blend * half_theta) / sin_half_theta

        new_q_x = ratio_a * q0[..., qx : qx + 1] + ratio_b * q1[..., qx : qx + 1]
        new_q_y = ratio_a * q0[..., qy : qy + 1] + ratio_b * q1[..., qy : qy + 1]
        new_q_z = ratio_a * q0[..., qz : qz + 1] + ratio_b * q1[..., qz : qz + 1]
        new_q_w = ratio_a * q0[..., qw : qw + 1] + ratio_b * q1[..., qw : qw + 1]

        new_q = torch.cat([new_q_w, new_q_x, new_q_y, new_q_z], dim=len(new_q_w.shape) - 1)
        new_q = torch.where(torch.abs(sin_half_theta) < 0.001, 0.5 * q0 + 0.5 * q1, new_q)
        new_q = torch.where(torch.abs(cos_half_theta) >= 1, q0, new_q)
        return new_q

    def _compute_frame_blend(self, times: np.ndarray) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
        """Compute the indexes of the first and second values, as well as the blending time
        to interpolate between them and the given times.

        Args:
            times: Times, between 0 and motion duration, to sample motion values.
                Specified times will be clipped to fall within the range of the motion duration.

        Returns:
            First value indexes, Second value indexes, and blending time between 0 (first value) and 1 (second value).
        """
        phase = np.clip(times / self.duration, 0.0, 1.0)
        index_0 = (phase * (self.num_frames - 1)).round(decimals=0).astype(int)
        index_1 = np.minimum(index_0 + 1, self.num_frames - 1)
        blend = ((times - index_0 * self.dt) / self.dt).round(decimals=5)
        return index_0, index_1, blend

    def sample_times(self, num_samples: int, duration: float | None = None) -> np.ndarray:
        """Sample random motion times uniformly.

        Args:
            num_samples: Number of time samples to generate.
            duration: Maximum motion duration to sample.
                If not defined samples will be within the range of the motion duration.

        Raises:
            AssertionError: If the specified duration is longer than the motion duration.

        Returns:
            Time samples, between 0 and the specified/motion duration.
        """
        duration = self.duration if duration is None else duration
        assert (
            duration <= self.duration
        ), f"The specified duration ({duration}) is longer than the motion duration ({self.duration})"
        return duration * np.random.uniform(low=0.0, high=1.0, size=num_samples)

    def sample(
        self, num_samples: int, times: Optional[np.ndarray] = None, duration: float | None = None
    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
        """Sample motion data.

        Args:
            num_samples: Number of time samples to generate. If ``times`` is defined, this parameter is ignored.
            times: Motion time used for sampling.
                If not defined, motion data will be random sampled uniformly in time.
            duration: Maximum motion duration to sample.
                If not defined, samples will be within the range of the motion duration.
                If ``times`` is defined, this parameter is ignored.

        Returns:
            Sampled motion DOF positions (with shape (N, num_dofs)), DOF velocities (with shape (N, num_dofs)),
            body positions (with shape (N, num_bodies, 3)), body rotations (with shape (N, num_bodies, 4), as wxyz quaternion),
            body linear velocities (with shape (N, num_bodies, 3)) and body angular velocities (with shape (N, num_bodies, 3)).
        """
        times = self.sample_times(num_samples, duration) if times is None else times
        index_0, index_1, blend = self._compute_frame_blend(times)
        blend = torch.tensor(blend, dtype=torch.float32, device=self.device)

        return (
            self._interpolate(self.dof_positions, blend=blend, start=index_0, end=index_1),
            self._interpolate(self.dof_velocities, blend=blend, start=index_0, end=index_1),
            self._interpolate(self.body_positions, blend=blend, start=index_0, end=index_1),
            self._slerp(self.body_rotations, blend=blend, start=index_0, end=index_1),
            self._interpolate(self.body_linear_velocities, blend=blend, start=index_0, end=index_1),
            self._interpolate(self.body_angular_velocities, blend=blend, start=index_0, end=index_1),
        )

    def get_dof_index(self, dof_names: list[str]) -> list[int]:
        """Get skeleton DOFs indexes by DOFs names.

        Args:
            dof_names: List of DOFs names.

        Raises:
            AssertionError: If the specified DOFs name doesn't exist.

        Returns:
            List of DOFs indexes.
        """
        indexes = []
        for name in dof_names:
            assert name in self._dof_names, f"The specified DOF name ({name}) doesn't exist: {self._dof_names}"
            indexes.append(self._dof_names.index(name))
        return indexes

    def get_body_index(self, body_names: list[str]) -> list[int]:
        """Get skeleton body indexes by body names.

        Args:
            dof_names: List of body names.

        Raises:
            AssertionError: If the specified body name doesn't exist.

        Returns:
            List of body indexes.
        """
        indexes = []
        for name in body_names:
            assert name in self._body_names, f"The specified body name ({name}) doesn't exist: {self._body_names}"
            indexes.append(self._body_names.index(name))
        return indexes


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--file", type=str, required=True, help="Motion file")
    args, _ = parser.parse_known_args()

    motion = MotionLoader(args.file, "cpu")

    print("- number of frames:", motion.num_frames)
    print("- number of DOFs:", motion.num_dofs)
    print("- dt:", motion.dt)
    print("- fps:", 1.0 / motion.dt)
    print("- number of bodies:", motion.num_bodies)


================================================
FILE: motions/motion_replayer.py
================================================
"""
Motion Replayer for G1 Humanoid Robot

This script replays motion data for the G1 humanoid robot in Isaac Sim.
It can also record the simulation data for further analysis.

Usage:
    python motion_replayer.py [options]

Options:
    --motion MOTION_FILE    Motion data file to replay (default: G1_dance.npz)
    --record               Enable recording of simulation data
    --output OUTPUT_FILE   Output file name for recorded data (default: recorded_motion.npz)
    --device DEVICE       Device to run simulation on (default: cuda:0)

Example:
    # Replay a motion file
    python motion_replayer.py --motion G1_dance.npz

    # Record simulation data while replaying
    python motion_replayer.py --motion G1_dance.npz --record --output my_recording.npz
"""

import sys
import os
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))

import argparse
from isaaclab.app import AppLauncher
from record_data import MotionRecorder

# Command line arguments
parser = argparse.ArgumentParser()
AppLauncher.add_app_launcher_args(parser)
parser.add_argument("--motion", type=str, default="G1_dance.npz")
parser.add_argument("--record", action="store_true", help="Enable recording of simulation data")
parser.add_argument("--output", type=str, default="recorded_motion.npz", help="Output filename for recorded data")
args_cli = parser.parse_args()

# Launch Isaac Sim
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

import torch
import isaaclab.sim as sim_utils
from isaaclab.scene import InteractiveScene, InteractiveSceneCfg
from motion_loader import MotionLoader
from g1_cfg import G1_CFG
from record_data import MotionRecorder

# Load motion data and get dt
motion = MotionLoader(args_cli.motion, device=args_cli.device)
num_frames = motion.num_frames
print(f"motion.dt: {motion.dt}")

# Find the index for the root body, typically 'pelvis'
try:
    print(f"Searching for 'pelvis' in the following list of body names: {motion.body_names}")
    root_idx = motion.body_names.index('pelvis')
    print(f"Found root body 'pelvis' at index: {root_idx}")
except (ValueError, AttributeError):
    print("\nError: Could not find 'pelvis' in the motion file's body_names.")
    print("The motion replayer requires 'pelvis' to be defined as the root body.")
    simulation_app.close()
    sys.exit(1)

# Configure simulation with dt matching motion.dt


sim_cfg = sim_utils.SimulationCfg(
    dt=motion.dt, 
    device=args_cli.device,
    gravity=(0.0, 0.0, -9.81),  # Explicitly set gravity
    render_interval=1,          # Render every physics step
    enable_scene_query_support=True,
    use_fabric=True,
    physx=sim_utils.PhysxCfg(
        solver_type=1,                    # TGS solver
        min_position_iteration_count=8,    # Increase solver iterations
        max_position_iteration_count=8,
        min_velocity_iteration_count=4,    # Add velocity iterations
        max_velocity_iteration_count=4,
        enable_ccd=True,                  # Enable continuous collision detection
        enable_stabilization=True,        # Enable additional stabilization
        bounce_threshold_velocity=0.2,    # Lower threshold for more stable contacts
        friction_offset_threshold=0.04,   # Increase friction threshold
        friction_correlation_distance=0.025  # Increase correlation distance
    ),

)
sim = sim_utils.SimulationContext(sim_cfg)
sim.set_camera_view([3.0, 3.0, 3.0], [0.0, 0.0, 0.0])

# Configure scene
scene_cfg = InteractiveSceneCfg(
    num_envs=1, 
    env_spacing=2.0
)
scene_cfg.robot = G1_CFG.replace(prim_path="/World/Robot")
scene = InteractiveScene(scene_cfg)

# Add DomeLight for illumination
light_cfg = sim_utils.DomeLightCfg(intensity=2000.0, color=(0.75, 0.75, 0.75))
light_cfg.func("/World/Light", light_cfg)

# Add black ground plane at -0.5m
ground_cfg = sim_utils.GroundPlaneCfg(color=(0.0, 0.0,-0.5))
# ground_cfg.func("/World/ground", ground_cfg)

ground_cfg.func(
    "/World/ground",
    ground_cfg,
    translation=(0.0, 0.0, -0.5),   # 这里是 (x, y, z)
)
# Reset simulation
sim.reset()
scene.reset()

robot = scene["robot"]
# Align joint order from motion file to robot's joint order
motion_dof_indices = motion.get_dof_index(robot.joint_names)

# Initialize data recorder
# We want to record the robot's state, so we pass the robot's joint names
recorder = MotionRecorder(
    robot,
    dof_names_to_record=robot.joint_names,
    fps=int(round(1.0 / motion.dt)),
    device=args_cli.device
)

if args_cli.record:
    recorder.start_recording()
    print(f"\nStarting data recording, will save to {args_cli.output}")
    print(f"Will record one complete cycle, total {num_frames} frames")

print("\nStarting G1 motion replay loop...")
print("Tip: Close window or press Ctrl+C to exit")

try:
    # First pass: Record data
    if args_cli.record:
        for i in range(num_frames):
            if not simulation_app.is_running():
                break
                
            # Get current frame's joint and root states (aligned order!)
            joint_pos = motion.dof_positions[i, motion_dof_indices].unsqueeze(0)
            joint_vel = motion.dof_velocities[i, motion_dof_indices].unsqueeze(0)
            root_pos = motion.body_positions[i, root_idx].unsqueeze(0)
            root_rot = motion.body_rotations[i, root_idx].unsqueeze(0)
            root_vel = motion.body_linear_velocities[i, root_idx].unsqueeze(0)
            root_ang_vel = motion.body_angular_velocities[i, root_idx].unsqueeze(0)
            root_state = torch.cat([root_pos, root_rot, root_vel, root_ang_vel], dim=-1)

            # Write to simulation
            robot.write_root_link_pose_to_sim(root_state[:, :7], torch.tensor([0], device=args_cli.device))
            robot.write_root_com_velocity_to_sim(root_state[:, 7:], torch.tensor([0], device=args_cli.device))
            robot.write_joint_state_to_sim(joint_pos, joint_vel, None, torch.tensor([0], device=args_cli.device))

            # Step simulation (strict dt synchronization)
            scene.update(dt=sim.get_physics_dt())
            scene.write_data_to_sim()
            sim.step(render=True)
            
            # Record data
            recorder.record_frame(i)
            # Show progress
            if i % 10 == 0:  # Show progress every 10 frames
                print(f"\rRecording progress: {i}/{num_frames} frames", end="", flush=True)

        # Save data after completing one cycle
        print("\n\nCompleted one cycle of recording, saving data...")
        recorder.stop_recording()
        recorder.save_data(args_cli.output)
        print(f"Data saved to {args_cli.output}")
        print("\nContinuing motion replay...")

    # Continue motion replay loop
    while simulation_app.is_running():
        for i in range(num_frames):
            if not simulation_app.is_running():
                break
                
            # Get current frame's joint and root states (aligned order!)
            joint_pos = motion.dof_positions[i, motion_dof_indices].unsqueeze(0)
            joint_vel = motion.dof_velocities[i, motion_dof_indices].unsqueeze(0)
            root_pos = motion.body_positions[i, root_idx].unsqueeze(0)
            root_rot = motion.body_rotations[i, root_idx].unsqueeze(0)
            root_vel = motion.body_linear_velocities[i, root_idx].unsqueeze(0)
            root_ang_vel = motion.body_angular_velocities[i, root_idx].unsqueeze(0)
            root_state = torch.cat([root_pos, root_rot, root_vel, root_ang_vel], dim=-1)

            # Write to simulation
            robot.write_root_link_pose_to_sim(root_state[:, :7], torch.tensor([0], device=args_cli.device))
            robot.write_root_com_velocity_to_sim(root_state[:, 7:], torch.tensor([0], device=args_cli.device))
            robot.write_joint_state_to_sim(joint_pos, joint_vel, None, torch.tensor([0], device=args_cli.device))

            # Step simulation (strict dt synchronization)
            scene.update(dt=sim.get_physics_dt())
            scene.write_data_to_sim()
            sim.step(render=True)

except KeyboardInterrupt:
    print("\nProgram interrupted by user")

finally:
    simulation_app.close() 

================================================
FILE: motions/motion_viewer.py
================================================
# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

import matplotlib
import matplotlib.animation
import matplotlib.pyplot as plt
import numpy as np
import torch

import mpl_toolkits.mplot3d  # noqa: F401

try:
    from .motion_loader import MotionLoader
except ImportError:
    from motion_loader import MotionLoader


class MotionViewer:
    """
    Helper class to visualize motion data from NumPy-file format.
    """

    def __init__(self, motion_file: str, device: torch.device | str = "cpu", render_scene: bool = False) -> None:
        """Load a motion file and initialize the internal variables.

        Args:
            motion_file: Motion file path to load.
            device: The device to which to load the data.
            render_scene: Whether the scene (space occupied by the skeleton during movement)
                is rendered instead of a reduced view of the skeleton.

        Raises:
            AssertionError: If the specified motion file doesn't exist.
        """
        self._figure = None
        self._figure_axes = None
        self._render_scene = render_scene

        # load motions
        self._motion_loader = MotionLoader(motion_file=motion_file, device=device)

        self._num_frames = self._motion_loader.num_frames
        self._current_frame = 0
        self._body_positions = self._motion_loader.body_positions.cpu().numpy()

        print("\nBody")
        for i, name in enumerate(self._motion_loader.body_names):
            minimum = np.min(self._body_positions[:, i], axis=0).round(decimals=2)
            maximum = np.max(self._body_positions[:, i], axis=0).round(decimals=2)
            print(f"  |-- [{name}] minimum position: {minimum}, maximum position: {maximum}")

    def _drawing_callback(self, frame: int) -> None:
        """Drawing callback called each frame"""
        # get current motion frame
        # get data
        vertices = self._body_positions[self._current_frame]
        # draw skeleton state
        self._figure_axes.clear()
        self._figure_axes.scatter(*vertices.T, color="black", depthshade=False)
        # adjust exes according to motion view
        # - scene
        if self._render_scene:
            # compute axes limits
            minimum = np.min(self._body_positions.reshape(-1, 3), axis=0)
            maximum = np.max(self._body_positions.reshape(-1, 3), axis=0)
            center = 0.5 * (maximum + minimum)
            diff = 0.75 * (maximum - minimum)
        # - skeleton
        else:
            # compute axes limits
            minimum = np.min(vertices, axis=0)
            maximum = np.max(vertices, axis=0)
            center = 0.5 * (maximum + minimum)
            diff = np.array([0.75 * np.max(maximum - minimum).item()] * 3)
        # scale view
        self._figure_axes.set_xlim((center[0] - diff[0], center[0] + diff[0]))
        self._figure_axes.set_ylim((center[1] - diff[1], center[1] + diff[1]))
        self._figure_axes.set_zlim((center[2] - diff[2], center[2] + diff[2]))
        self._figure_axes.set_box_aspect(aspect=diff / diff[0])
        # plot ground plane
        x, y = np.meshgrid([center[0] - diff[0], center[0] + diff[0]], [center[1] - diff[1], center[1] + diff[1]])
        self._figure_axes.plot_surface(x, y, np.zeros_like(x), color="green", alpha=0.2)
        # print metadata
        self._figure_axes.set_xlabel("X")
        self._figure_axes.set_ylabel("Y")
        self._figure_axes.set_zlabel("Z")
        self._figure_axes.set_title(f"frame: {self._current_frame}/{self._num_frames}")
        # increase frame counter
        self._current_frame += 1
        if self._current_frame >= self._num_frames:
            self._current_frame = 0

    def show(self) -> None:
        """Show motion"""
        # create a 3D figure
        self._figure = plt.figure()
        self._figure_axes = self._figure.add_subplot(projection="3d")
        # matplotlib animation (the instance must live as long as the animation will run)
        self._animation = matplotlib.animation.FuncAnimation(
            fig=self._figure,
            func=self._drawing_callback,
            frames=self._num_frames,
            interval=1000 * self._motion_loader.dt,
        )
        plt.show()


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--file", type=str, required=True, help="Motion file")
    parser.add_argument(
        "--render-scene",
        action="store_true",
        default=False,
        help=(
            "Whether the scene (space occupied by the skeleton during movement) is rendered instead of a reduced view"
            " of the skeleton."
        ),
    )
    parser.add_argument("--matplotlib-backend", type=str, default="TkAgg", help="Matplotlib interactive backend")
    args, _ = parser.parse_known_args()

    # https://matplotlib.org/stable/users/explain/figure/backends.html#interactive-backends
    matplotlib.use(args.matplotlib_backend)

    viewer = MotionViewer(args.file, render_scene=args.render_scene)
    viewer.show()


================================================
FILE: motions/record_data.py
================================================
"""
Motion Data Recording Tool

This script provides functionality for recording and managing motion data from robots.
It includes classes and functions for:
- Recording motion data in real-time
- Saving motion data to npz files
- Loading motion data from npz files
- Managing motion data structure
- Converting joint orders between different formats

Usage:
    As a module:
        from record_data import MotionRecorder, MotionData
        
        recorder = MotionRecorder(robot, motion_dof_indices, fps=60)
        recorder.start_recording()
        # ... record frames ...
        recorder.stop_recording()
        recorder.save_data("output.npz")
"""

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import numpy as np
import torch
from typing import List, Dict, Optional
import os
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MotionData:
    fps: int
    dof_names: np.ndarray
    body_names: np.ndarray
    dof_positions: np.ndarray
    dof_velocities: np.ndarray
    body_positions: np.ndarray
    body_rotations: np.ndarray
    body_linear_velocities: np.ndarray
    body_angular_velocities: np.ndarray


def smooth_motion_data(motion_data: MotionData, window_size: int = 3) -> MotionData:
    """Apply smoothing to motion data to reduce jitter.
    
    Args:
        motion_data: Original motion data
        window_size: Size of smoothing window
        
    Returns:
        Smoothed motion data
    """
    # Create a copy of the data
    smoothed = MotionData(
        fps=motion_data.fps,
        dof_names=motion_data.dof_names,
        body_names=motion_data.body_names,
        dof_positions=motion_data.dof_positions.copy(),
        dof_velocities=motion_data.dof_velocities.copy(),
        body_positions=motion_data.body_positions.copy(),
        body_rotations=motion_data.body_rotations.copy(),
        body_linear_velocities=motion_data.body_linear_velocities.copy(),
        body_angular_velocities=motion_data.body_angular_velocities.copy()
    )
    
    # Apply smoothing to positions and velocities
    for i in range(window_size, len(smoothed.dof_positions) - window_size):
        # Smooth joint positions
        smoothed.dof_positions[i] = np.mean(
            motion_data.dof_positions[i-window_size:i+window_size+1], 
            axis=0
        )
        # Smooth joint velocities
        smoothed.dof_velocities[i] = np.mean(
            motion_data.dof_velocities[i-window_size:i+window_size+1], 
            axis=0
        )
        # Smooth body positions
        smoothed.body_positions[i] = np.mean(
            motion_data.body_positions[i-window_size:i+window_size+1], 
            axis=0
        )
        # Smooth body rotations (using quaternion averaging)
        smoothed.body_rotations[i] = np.mean(
            motion_data.body_rotations[i-window_size:i+window_size+1], 
            axis=0
        )
        # Normalize quaternions
        smoothed.body_rotations[i] = smoothed.body_rotations[i] / np.linalg.norm(
            smoothed.body_rotations[i], 
            axis=-1, 
            keepdims=True
        )
    
    return smoothed

class MotionRecorder:
    """Motion data recorder for robot movements"""

    def __init__(self, robot, dof_names_to_record: List[str],
                 fps: int = 60, device: str = "cuda", smoothing_window: int = 3):
        """
        Args:
            robot: Robot object for metadata
            dof_names_to_record: List of joint names to record.
            fps: Target frame rate (used to calculate dt)
            device: Device to use (cuda/cpu)
            smoothing_window: Window size for motion smoothing
        """
        self.robot = robot
        self.fps = int(fps)
        self.dt = 1.0 / self.fps
        self.device = device
        self.smoothing_window = smoothing_window

        # Get names directly from robot to avoid empty arrays
        self.dof_names = np.asarray(dof_names_to_record, dtype=np.str_)
        self.body_names = np.asarray(robot.body_names, dtype=np.str_)

        try:
            self.root_body_idx = list(robot.body_names).index('pelvis')
        except ValueError:
            raise ValueError("The robot asset must have a body named 'pelvis' to be used with MotionRecorder.")

        # Create a mapping from the robot's full joint list to the desired recording list
        robot_dof_map = {name: i for i, name in enumerate(robot.joint_names)}
        self.dof_indices = np.array([robot_dof_map[name] for name in dof_names_to_record], dtype=np.int32)
        
        self.recorded_frames: list[dict] = []
        self.is_recording = False

    # 其余接口保持不变 --------------------------
    def start_recording(self):
        self.recorded_frames.clear()
        self.is_recording = True

    def stop_recording(self):
        self.is_recording = False

    def record_frame(self, frame_idx: int):
        """Record single frame data (no need to pass robot/motion_dof_indices externally)"""
        if not self.is_recording:
            return None

        # --- Joints ---
        dof_pos = self.robot.data.joint_pos[0].cpu().numpy()
        dof_vel = self.robot.data.joint_vel[0].cpu().numpy()

        # --- Root ---
        root_pos = self.robot.data.body_pos_w[0, self.root_body_idx].cpu().numpy()
        root_rot = self.robot.data.body_quat_w[0, self.root_body_idx].cpu().numpy()
        root_lin_vel = self.robot.data.body_lin_vel_w[0, self.root_body_idx].cpu().numpy()
        root_ang_vel = self.robot.data.body_ang_vel_w[0, self.root_body_idx].cpu().numpy()

        # --- Full body ---
        body_pos = self.robot.data.body_pos_w[0].cpu().numpy()
        body_rot = self.robot.data.body_quat_w[0].cpu().numpy()
        body_lin_vel = self.robot.data.body_lin_vel_w[0].cpu().numpy()
        body_ang_vel = self.robot.data.body_ang_vel_w[0].cpu().numpy()

        self.recorded_frames.append(
            dict(
                frame_idx=frame_idx,
                dof_positions=dof_pos[self.dof_indices],
                dof_velocities=dof_vel[self.dof_indices],
                root_position=root_pos,
                root_rotation=root_rot,
                root_linear_velocity=root_lin_vel,
                root_angular_velocity=root_ang_vel,
                body_positions=body_pos,
                body_rotations=body_rot,
                body_linear_velocities=body_lin_vel,
                body_angular_velocities=body_ang_vel,
            )
        )
        return self.recorded_frames[-1]

    def get_recorded_data(self) -> MotionData | None:
        if not self.recorded_frames:
            return None

        n = len(self.recorded_frames)
        d = len(self.dof_indices)
        b = len(self.body_names)

        # Pre-allocate arrays
        dof_pos = np.zeros((n, d), np.float32)
        dof_vel = np.zeros_like(dof_pos)
        body_pos = np.zeros((n, b, 3), np.float32)
        body_rot = np.zeros((n, b, 4), np.float32)
        body_lin = np.zeros_like(body_pos)
        body_ang = np.zeros_like(body_pos)

        for i, f in enumerate(self.recorded_frames):
            dof_pos[i] = f["dof_positions"]
            dof_vel[i] = f["dof_velocities"]
            body_pos[i] = f["body_positions"]
            body_rot[i] = f["body_rotations"]
            body_lin[i] = f["body_linear_velocities"]
            body_ang[i] = f["body_angular_velocities"]

        motion_data = MotionData(
            fps=self.fps,
            dof_names=self.dof_names,
            body_names=self.body_names,
            dof_positions=dof_pos,
            dof_velocities=dof_vel,
            body_positions=body_pos,
            body_rotations=body_rot,
            body_linear_velocities=body_lin,
            body_angular_velocities=body_ang,
        )
        
        # Apply smoothing if window size > 1
        if self.smoothing_window > 1:
            motion_data = smooth_motion_data(motion_data, self.smoothing_window)
            
        return motion_data

    def save_data(self, out_file: str, data: MotionData | None = None) -> bool:
        if data is None:
            data = self.get_recorded_data()
        if data is None:
            print("No data to save")
            return False

        os.makedirs(os.path.dirname(os.path.abspath(out_file)), exist_ok=True)
        try:
            np.savez(
                out_file,
                fps=data.fps,
                dof_names=data.dof_names,
                body_names=data.body_names,
                dof_positions=data.dof_positions,
                dof_velocities=data.dof_velocities,
                body_positions=data.body_positions,
                body_rotations=data.body_rotations,
                body_linear_velocities=data.body_linear_velocities,
                body_angular_velocities=data.body_angular_velocities,
                record_time=datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
            )
            print(f"Data saved to {out_file}")
            return True
        except Exception as e:
            print("Error saving data:", e)
            return False


def load_motion_data(file_path: str) -> Optional[MotionData]:
    """Load motion data file
    
    Args:
        file_path: Path to .npz file
        
    Returns:
        MotionData object if successful, None otherwise
    """
    try:
        data = np.load(file_path)
        return MotionData(
            fps=int(data['fps']),
            dof_names=data['dof_names'],
            body_names=data['body_names'],
            dof_positions=data['dof_positions'],
            dof_velocities=data['dof_velocities'],
            body_positions=data['body_positions'],
            body_rotations=data['body_rotations'],
            body_linear_velocities=data['body_linear_velocities'],
            body_angular_velocities=data['body_angular_velocities']
        )
    except Exception as e:
        print(f"Error loading data file: {e}")
        return None


================================================
FILE: motions/test/get_joint_name.py
================================================
#!/usr/bin/env python
import argparse
import os
import time
import numpy as np
import torch

# ========= 解析命令行参数 ==========
parser = argparse.ArgumentParser(
    description="Potential Field Controller Demo in Isaac Lab (Repulsive Only)."
)
parser.add_argument("--robot", type=str, default="franka_panda", help="Name of the robot.")
parser.add_argument("--num_envs", type=int, default=1, help="Number of environments to spawn.")
# 这里添加 AppLauncher 相关的命令行参数
from isaaclab.app import AppLauncher
AppLauncher.add_app_launcher_args(parser)
args_cli = parser.parse_args()

# ========= 启动 Omniverse 应用 ==========
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

# ========= 导入 Isaac Lab 相关模块 ==========
import isaaclab.sim as sim_utils
from isaaclab.assets import AssetBaseCfg, RigidObject, RigidObjectCfg
from isaaclab.managers import SceneEntityCfg
from isaaclab.markers import VisualizationMarkers
from isaaclab.markers.config import FRAME_MARKER_CFG
from isaaclab.scene import InteractiveScene, InteractiveSceneCfg
from isaaclab.utils import configclass
from isaaclab.utils.assets import ISAAC_NUCLEUS_DIR
from isaaclab_assets import FRANKA_PANDA_HIGH_PD_CFG  # isort:skip G1_MINIMAL_CFG
from isaaclab_assets import G1_MINIMAL_CFG  
from isaaclab.actuators import ImplicitActuatorCfg
from isaaclab.assets import ArticulationCfg
# ========= 定义桌面场景配置 ==========
@configclass
class TableTopSceneCfg(InteractiveSceneCfg):
    """桌面场景的配置"""
    ground = AssetBaseCfg(
        prim_path="/World/defaultGroundPlane",
        spawn=sim_utils.GroundPlaneCfg(),
        init_state=AssetBaseCfg.InitialStateCfg(pos=(0.0, 0.0, 0.0)),
    )
    dome_light = AssetBaseCfg(
        prim_path="/World/Light",
        spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75)),
    )

    robot: ArticulationCfg = ArticulationCfg(
        prim_path="/World/Robot",
        spawn=sim_utils.UsdFileCfg(
            usd_path=os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), "usd/g1_29dof_rev_1_0.usd"),
            activate_contact_sensors=True,
            rigid_props=sim_utils.RigidBodyPropertiesCfg(
                disable_gravity=False,
                retain_accelerations=False,
                linear_damping=0.0,
                angular_damping=0.0,
                max_linear_velocity=1000.0,
                max_angular_velocity=1000.0,
                max_depenetration_velocity=1.0,
            ),
            articulation_props=sim_utils.ArticulationRootPropertiesCfg(
                enabled_self_collisions=False, solver_position_iteration_count=8, solver_velocity_iteration_count=4
            ),
        ),
        init_state=ArticulationCfg.InitialStateCfg(
            pos=(0.0, 1.0, 0.8),
            joint_pos={
                ".*_hip_pitch_joint": -0.20,
                ".*_knee_joint": 0.42,
                ".*_ankle_pitch_joint": -0.23,
                # ".*_elbow_pitch_joint": 0.87,
                "left_shoulder_roll_joint": 0.16,
                "left_shoulder_pitch_joint": 0.35,
                "right_shoulder_roll_joint": -0.16,
                "right_shoulder_pitch_joint": 0.35,
            },
            joint_vel={".*": 0.0},
        ),
        soft_joint_pos_limit_factor=0.9,
        actuators={
            "legs": ImplicitActuatorCfg(
                joint_names_expr=[
                    ".*_hip_yaw_joint",
                    ".*_hip_roll_joint",
                    ".*_hip_pitch_joint",
                    ".*_knee_joint",
                    "waist_yaw_joint",
                    "waist_roll_joint",
                    "waist_pitch_joint",
                ],
                effort_limit=300,
                velocity_limit=100.0,
                stiffness={
                    ".*_hip_yaw_joint": 150.0,
                    ".*_hip_roll_joint": 150.0,
                    ".*_hip_pitch_joint": 200.0,
                    ".*_knee_joint": 200.0,
                    "waist_yaw_joint": 200.0,
                    "waist_roll_joint": 200.0,
                    "waist_pitch_joint": 200.0,
                },
                damping={
                    ".*_hip_yaw_joint": 5.0,
                    ".*_hip_roll_joint": 5.0,
                    ".*_hip_pitch_joint": 5.0,
                    ".*_knee_joint": 5.0,
                    "waist_yaw_joint": 5.0,
                    "waist_roll_joint": 5.0,
                    "waist_pitch_joint": 5.0,
                },
                armature={
                    ".*_hip_.*": 0.01,
                    ".*_knee_joint": 0.01,
                    "waist_yaw_joint": 0.01,
                    "waist_roll_joint": 0.01,
                    "waist_pitch_joint": 0.01,
                },
            ),
            "feet": ImplicitActuatorCfg(
                effort_limit=20,
                joint_names_expr=[".*_ankle_pitch_joint", ".*_ankle_roll_joint"],
                stiffness=20.0,
                damping=2.0,
                armature=0.01,
            ),
            "arms": ImplicitActuatorCfg(
                joint_names_expr=[
                    ".*_shoulder_pitch_joint",
                    ".*_shoulder_roll_joint",
                    ".*_shoulder_yaw_joint",
                    ".*_elbow_joint",
                    ".*_wrist_.*",

                ],
                effort_limit=300,
                velocity_limit=100.0,
                stiffness=40.0,
                damping=10.0,
                armature={
                    ".*_shoulder_.*": 0.01,
                    ".*_elbow_.*": 0.01,
                    ".*_wrist_.*": 0.01,

                },
            ),
        },
    )

# ========= 辅助函数:打印机器人资产下所有 prim 的路径 ==========
def list_robot_prims():
    try:
        # 使用 USD Python API 遍历 "/World/Robot" 下的所有子节点
        from pxr import Usd
        stage = Usd.Stage.GetCurrent()
        robot_prim = stage.GetPrimAtPath("/World/Robot")
        if robot_prim:
            print("在 '/World/Robot' 下发现的 prim 路径:")
            for prim in robot_prim.GetChildren():
                print(prim.GetPath())
        else:
            print("在 '/World/Robot' 未找到任何 prim,请检查机器人是否正确加载。")
    except Exception as e:
        print("获取 USD Stage 时出错:", e)

# ========= 运行仿真 ==========
def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
    """主循环:运行仿真并打印出机器人关节与刚体信息(请先观察打印结果确定名称规则)"""
    
    # 先打印出机器人资产下所有 prim 路径,帮助你确认实际的命名规则
    list_robot_prims()
    
    # 如果你在上面的输出中找到了关节和身体对应的命名,可以在下面设置相应的正则表达式。
    # 例如,假设你观察后发现关节名称中包含 "joint" 而刚体名称中包含 "body",可以这样设置:
    robot = scene["robot"]
    
    print("解析到的关节名称列表:", robot.joint_names)
    print("解析到的刚体名称列表:", robot.body_names)
    print(robot.data.joint_pos.shape)
    print(robot.data.joint_vel.shape)
    # self.robot.data.body_pos_w[:, self.ref_body_index],
    # self.robot.data.body_quat_w[:, self.ref_body_index],
    # self.robot.data.body_lin_vel_w[:, self.ref_body_index],
    # self.robot.data.body_ang_vel_w[:, self.ref_body_index],
    # self.robot.data.body_pos_w[:, self.key_body_indexes],

    step_count = 0
    while simulation_app.is_running():
        # 更新场景并执行一步物理仿真
        scene.update(dt=sim.get_physics_dt())
        scene.write_data_to_sim()
        sim.step()
        step_count += 1

# ========= 主函数 ==========
def main():
    # 创建仿真配置,指定仿真步长和设备(例如 "cuda:0" 或 "cpu")
    sim_cfg = sim_utils.SimulationCfg(dt=0.01, device=args_cli.device)
    sim = sim_utils.SimulationContext(sim_cfg)
    sim.set_camera_view([2.5, 2.5, 2.5], [0.0, 0.0, 0.0])

    # 初始化场景(由 InteractiveScene 管理所有实体)
    scene_cfg = TableTopSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)
    scene = InteractiveScene(scene_cfg)

    # 重置仿真环境
    sim.reset()
    print("[INFO]: Setup complete. Starting simulation...")

    # 进入主仿真循环
    run_simulator(sim, scene)

if __name__ == "__main__":
    main()
    simulation_app.close()
# 解析到的关节名称列表: ['left_hip_pitch_joint', 'right_hip_pitch_joint', 'waist_yaw_joint', 'left_hip_roll_joint', 'right_hip_roll_joint', 'waist_roll_joint', 'left_hip_yaw_joint', 'right_hip_yaw_joint', 'waist_pitch_joint', 'left_knee_joint', 'right_knee_joint', 'left_shoulder_pitch_joint', 'right_shoulder_pitch_joint', 'left_ankle_pitch_joint', 'right_ankle_pitch_joint', 'left_shoulder_roll_joint', 'right_shoulder_roll_joint', 'left_ankle_roll_joint', 'right_ankle_roll_joint', 'left_shoulder_yaw_joint', 'right_shoulder_yaw_joint', 'left_elbow_joint', 'right_elbow_joint', 'left_wrist_roll_joint', 'right_wrist_roll_joint', 'left_wrist_pitch_joint', 'right_wrist_pitch_joint', 'left_wrist_yaw_joint', 'right_wrist_yaw_joint']
# 解析到的刚体名称列表: ['pelvis', 'imu_in_pelvis', 'left_hip_pitch_link', 'pelvis_contour_link', 'right_hip_pitch_link', 'waist_yaw_link', 'left_hip_roll_link', 'right_hip_roll_link', 'waist_roll_link', 'left_hip_yaw_link', 'right_hip_yaw_link', 'torso_link', 'left_knee_link', 'right_knee_link', 'd435_link', 'head_link', 'imu_in_torso', 'left_shoulder_pitch_link', 'logo_link', 'mid360_link', 'right_shoulder_pitch_link', 'left_ankle_pitch_link', 'right_ankle_pitch_link', 'left_shoulder_roll_link', 'right_shoulder_roll_link', 'left_ankle_roll_link', 'right_ankle_roll_link', 'left_shoulder_yaw_link', 'right_shoulder_yaw_link', 'left_elbow_link', 'right_elbow_link', 'left_wrist_roll_link', 'right_wrist_roll_link', 'left_wrist_pitch_link', 'right_wrist_pitch_link', 'left_wrist_yaw_link', 'right_wrist_yaw_link', 'left_rubber_hand', 'right_rubber_hand']
# torch.Size([1, 29])
# torch.Size([1, 29])

================================================
FILE: motions/update_pelvis_data.py
================================================
import numpy as np
import os
import argparse
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from scipy.spatial.transform import Rotation as R

def plot_pelvis_data_2d(source_data, target_data, source_pelvis_idx, target_pelvis_idx):
    """
    Generates 2D plots comparing pelvis data (positions, velocities, rotations)
    from source and target files over time.
    """
    print("Generating 2D comparison plots...")

    plots_to_create = [
        ('body_positions', 'Pelvis Position Comparison', 'Position (m)'),
        ('body_linear_velocities', 'Pelvis Linear Velocity Comparison', 'Velocity (m/s)'),
        ('body_angular_velocities', 'Pelvis Angular Velocity Comparison', 'Angular Velocity (rad/s)'),
        ('body_rotations', 'Pelvis Rotation (Quaternion) Comparison', 'Value')
    ]

    fig = make_subplots(
        rows=len(plots_to_create), cols=1,
        subplot_titles=[p[1] for p in plots_to_create],
        vertical_spacing=0.08
    )

    fps = source_data.get('fps', 60)

    for i, (key, title, y_title) in enumerate(plots_to_create, 1):
        for data, idx, name in [(source_data, source_pelvis_idx, 'Source'), (target_data, target_pelvis_idx, 'Target')]:
            if key not in data:
                continue
            
            motion_data = data[key][:, idx]
            num_frames = motion_data.shape[0]
            time = np.arange(num_frames) / fps
            
            dims = motion_data.shape[1]
            labels = ['X', 'Y', 'Z', 'W'] if dims == 4 else ['X', 'Y', 'Z']
            
            for d in range(dims):
                fig.add_trace(
                    go.Scatter(
                        x=time,
                        y=motion_data[:, d],
                        name=f'{name} {labels[d]}',
                        legendgroup=f'group{i}',
                        line=dict(dash='solid' if name == 'Source' else 'dash', width=1.5),
                        showlegend=(d==0) # Show legend only for the first component to avoid clutter
                    ),
                    row=i, col=1
                )
        fig.update_yaxes(title_text=y_title, row=i, col=1)
        fig.update_xaxes(title_text="Time (s)", row=i, col=1)

    fig.update_layout(
        title_text="Pelvis Data 2D Comparison",
        height=350 * len(plots_to_create),
        legend_tracegroupgap = 250
    )
    
    output_html_file = "pelvis_data_2d_comparison.html"
    fig.write_html(output_html_file)
    print(f"2D plot visualization saved to: {os.path.abspath(output_html_file)}")


def visualize_pelvis_trajectories(source_data, target_data, source_pelvis_idx, target_pelvis_idx):
    """
    Visualizes and compares the 3D trajectories and orientations of the pelvis from two motion data sources.
    """
    print("Generating 3D trajectory visualization...")
    
    fig = go.Figure()

    for data, idx, name in [(source_data, source_pelvis_idx, 'Source'), (target_data, target_pelvis_idx, 'Target')]:
        positions = data['body_positions'][:, idx, :]
        rotations_quat = data['body_rotations'][:, idx, :]

        # Add trajectory line
        fig.add_trace(go.Scatter3d(
            x=positions[:, 0], y=positions[:, 1], z=positions[:, 2],
            mode='lines',
            name=f'{name} Trajectory',
            line=dict(width=4)
        ))

        # Add orientation cones
        num_frames = positions.shape[0]
        step = max(1, num_frames // 20) # Show about 20 orientation markers
        
        rotations = R.from_quat(rotations_quat[:, [1, 2, 3, 0]]) # reorder to x,y,z,w for SciPy

        # Get the forward vector (e.g., x-axis) for each orientation
        forward_vectors = rotations.apply([0.1, 0, 0]) 

        fig.add_trace(go.Cone(
            x=positions[::step, 0], y=positions[::step, 1], z=positions[::step, 2],
            u=forward_vectors[::step, 0], v=forward_vectors[::step, 1], w=forward_vectors[::step, 2],
            sizemode="absolute", sizeref=0.1,
            showscale=False,
            name=f'{name} Orientation',
            anchor="tip"
        ))

    fig.update_layout(
        title='Pelvis 3D Trajectory and Orientation Comparison',
        scene=dict(
            xaxis_title='X (m)',
            yaxis_title='Y (m)',
            zaxis_title='Z (m)',
            aspectmode='data'
        ),
        legend_title_text='Data Source'
    )
    
    output_html_file = "pelvis_trajectory_comparison.html"
    fig.write_html(output_html_file)
    print(f"Visualization saved to: {os.path.abspath(output_html_file)}")


def update_pelvis_data(source_file, target_file, dry_run=False):
    """
    Copies the pelvis data from a source motion file to a target motion file.

    This function loads two .npz motion files, finds the 'pelvis' body index in both,
    and then overwrites the pelvis trajectory data (position, rotation, velocities)
    in the target file with the data from the source file.

    Args:
        source_file (str): Path to the .npz file with the source pelvis data.
        target_file (str): Path to the .npz file to be updated.
        dry_run (bool): If True, print data for comparison without saving.
    """
    try:
        # Load the source and target .npz files
        source_data = np.load(source_file)
        target_data = np.load(target_file)
        print(f"Loaded source file: {source_file}")
        print(f"Loaded target file: {target_file}")

        # Find the index of 'pelvis' in the body_names array for both files
        source_body_names = list(source_data['body_names'])
        target_body_names = list(target_data['body_names'])

        source_pelvis_idx = source_body_names.index('pelvis')
        target_pelvis_idx = target_body_names.index('pelvis')
        print(f"Found 'pelvis' at index {source_pelvis_idx} in source file.")
        print(f"Found 'pelvis' at index {target_pelvis_idx} in target file.")

        if dry_run:
            print("\n--- DRY RUN MODE ---")
            print("Displaying pelvis data for the first 5 frames. No files will be changed.")
            
            print("\n--- Source Pelvis Data (first 5 frames) ---")
            print("Body Positions:")
            print(source_data['body_positions'][:5, source_pelvis_idx])
            print("\nBody Rotations:")
            print(source_data['body_rotations'][:5, source_pelvis_idx])
            
            print("\n--- Target Pelvis Data (first 5 frames) ---")
            print("Body Positions:")
            print(target_data['body_positions'][:5, target_pelvis_idx])
            print("\nBody Rotations:")
            print(target_data['body_rotations'][:5, target_pelvis_idx])
            
            visualize_pelvis_trajectories(source_data, target_data, source_pelvis_idx, target_pelvis_idx)
            plot_pelvis_data_2d(source_data, target_data, source_pelvis_idx, target_pelvis_idx)

            print("\n--- END DRY RUN ---")
            return

        # Convert the loaded target data into a mutable dictionary
        target_data_dict = {key: target_data[key] for key in target_data.files}
        
        # Check if the number of frames is consistent
        num_frames_source = source_data['body_positions'].shape[0]
        num_frames_target = target_data_dict['body_positions'].shape[0]
        if num_frames_source < num_frames_target:
            print(f"Warning: Source file has fewer frames ({num_frames_source}) than target file ({num_frames_target}). Target will be truncated.")
            num_frames = num_frames_source
            # Truncate all relevant arrays in the target data
            for key in ['dof_positions', 'dof_velocities', 'body_positions', 'body_rotations', 'body_linear_velocities', 'body_angular_velocities']:
                if key in target_data_dict:
                    target_data_dict[key] = target_data_dict[key][:num_frames]
        else:
            num_frames = num_frames_target
            if num_frames_source > num_frames_target:
                 print(f"Warning: Source file has more frames ({num_frames_source}) than target file ({num_frames_target}). Source data will be truncated.")

        # Copy the pelvis data from source to target
        print("Copying pelvis data...")
        target_data_dict['body_positions'][:, target_pelvis_idx] = source_data['body_positions'][:num_frames, source_pelvis_idx]
        target_data_dict['body_rotations'][:, target_pelvis_idx] = source_data['body_rotations'][:num_frames, source_pelvis_idx]
        # target_data_dict['body_linear_velocities'][:, target_pelvis_idx] = source_data['body_linear_velocities'][:num_frames, source_pelvis_idx]
        # target_data_dict['body_angular_velocities'][:, target_pelvis_idx] = source_data['body_angular_velocities'][:num_frames, source_pelvis_idx]
        
        # Save the updated data back to the target file
        np.savez(
            target_file,
            **target_data_dict
        )
        print(f"Successfully updated pelvis data in {target_file}")

    except FileNotFoundError as e:
        print(f"Error: {e}. Please ensure the file paths are correct.")
    except ValueError:
        print("Error: 'pelvis' not found in one of the files. Cannot proceed.")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")


if __name__ == "__main__":
    # Set to True to print data without modifying files
    DRY_RUN = True  

    # Define the file paths relative to the script's location
    script_dir = os.path.dirname(__file__)
    source_motion_file = os.path.join(script_dir, "G1_dance_old.npz")
    target_motion_file = os.path.join(script_dir, "G1_dance.npz")
    
    update_pelvis_data(source_motion_file, target_motion_file, dry_run=DRY_RUN) 

================================================
FILE: motions/verify_motion.py
================================================
"""
Motion Data Verification Tool

This script verifies and displays detailed information about motion data stored in npz files.
It shows information about:
- FPS and duration
- File size and name
- Data structure and content
- Sample data for each type of motion data

Usage:
    python verify_motion.py --file <motion_file.npz>

Arguments:
    --file    Path to the npz file containing motion data

Example:
    python verify_motion.py --file G1_dance.npz

Output:
    Prints detailed information about the motion data file, including:
    - Basic file information (size, name)
    - FPS and duration
    - Data structure overview
    - Sample data for each type of motion data
"""

import numpy as np
import argparse
import os
np.set_printoptions(threshold=100, precision=3, suppress=True)

def verify_motion_file(npz_file):
    # Load npz file
    data = np.load(npz_file, allow_pickle=True)
    
    # First display FPS information
    if 'fps' in data:
        fps = data['fps'].item()
        print(f"\nFPS: {fps}")
        if 'dof_positions' in data:
            duration = data['dof_positions'].shape[0] / fps
            print(f"Motion duration: {duration:.2f} seconds")
    else:
        print("\nWarning: 'fps' key not found!")
    
    # Get file size
    file_size = os.path.getsize(npz_file) / (1024 * 1024)  # Convert to MB
    print(f"\nFile Information:")
    print(f"Filename: {os.path.basename(npz_file)}")
    print(f"File size: {file_size:.2f} MB")
    
    print("\nData Content Overview:")
    print("-" * 50)
    for key in data:
        arr = data[key]
        print(f"\n{key}:")
        print(f"  Data type: {arr.dtype}")
        print(f"  Data shape: {arr.shape}")
        
        # Display content based on different data types
        if key == 'dof_names':
            print(f"  Joint names:")
            for i, name in enumerate(arr):
                print(f"    {i+1}. {name}")
        elif key == 'body_names':
            print(f"  Body part names:")
            for i, name in enumerate(arr):
                print(f"    {i+1}. {name}")
        elif key == "body_positions":
            print(f"  Body positions sample (first 2 frames, first 3 parts):")
            print(arr[:2, :3, :])  # Show first 2 frames, first 3 parts positions
        elif key == "body_rotations":
            print(f"  Body rotations sample (first 2 frames, first 3 parts, quaternions):")
            print(arr[:2, :3, :])  # Show first 2 frames, first 3 parts quaternion rotations
        elif key == "body_linear_velocities":
            print(f"  Body linear velocities sample (first 2 frames, first 3 parts):")
            print(arr[:2, :3, :])  # Show first 2 frames, first 3 parts linear velocities
        elif key == "body_angular_velocities":
            print(f"  Body angular velocities sample (first 2 frames, first 3 parts):")
            print(arr[:2, :3, :])  # Show first 2 frames, first 3 parts angular velocities
        elif key == "dof_positions":
            print(f"  Joint positions sample (first 2 frames, first 5 joints):")
            print(arr[:2, :5])  # Show first 2 frames, first 5 joints positions
        elif key == "dof_velocities":
            print(f"  Joint velocities sample (first 2 frames, first 5 joints):")
            print(arr[:2, :5])  # Show first 2 frames, first 5 joints velocities

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Verify and display motion data file contents")
    parser.add_argument("--file", type=str, required=True, help="Path to the npz file")
    args = parser.parse_args()
    
    verify_motion_file(args.file)


================================================
FILE: motions/visualize_motion.py
================================================
"""
Motion Data Visualization Tool

This script visualizes motion data from npz files, creating interactive plots for:
- Joint positions
- Joint velocities
- Body linear velocities
- Body angular velocities

Usage:
    python visualize_motion.py --file <motion_file.npz>

Arguments:
    --file    Path to the npz file containing motion data

Example:
    python visualize_motion.py --file G1_dance.npz

Output:
    Generates an HTML file with interactive plots in the same directory as the input file.
    The output filename will be <input_filename>_visualization.html
"""

import numpy as np
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import argparse
import os

def visualize_motion(npz_file):
    # Load data
    data = np.load(npz_file, allow_pickle=True)
    
    # Get basic information
    fps = data['fps'].item() if 'fps' in data else 60
    dof_names = data['dof_names'] if 'dof_names' in data else []
    body_names = data['body_names'] if 'body_names' in data else []
    
    # Create subplot layout
    fig = make_subplots(
        rows=5, cols=1,
        subplot_titles=('Root Link 3D Position (pelvis)',
                       'Joint Positions (DOF Positions)', 
                       'Joint Velocities (DOF Velocities)',
                       'Body Linear Velocities',
                       'Body Angular Velocities'),
        vertical_spacing=0.05,
        specs=[[{"type": "scatter"}],
               [{"type": "scatter"}],
               [{"type": "scatter"}],
               [{"type": "scatter"}],
               [{"type": "scatter"}]],
        row_heights=[0.2, 0.2, 0.2, 0.2, 0.2]
    )
    
    # Time axis
    time = np.arange(len(data['dof_positions'])) / fps if 'dof_positions' in data else np.arange(len(data['body_linear_velocities'])) / fps
    
    # 1. Plot root link 3D position
    if 'body_positions' in data and 'pelvis' in body_names:
        root_idx = body_names.tolist().index('pelvis')
        root_positions = data['body_positions'][:, root_idx, :]
        
        # X component
        fig.add_trace(
            go.Scatter(
                x=time,
                y=root_positions[:, 0],
                name="Pelvis Position (X)",
                mode='lines',
                line=dict(width=1)
            ),
            row=1, col=1
        )
        # Y component
        fig.add_trace(
            go.Scatter(
                x=time,
                y=root_positions[:, 1],
                name="Pelvis Position (Y)",
                mode='lines',
                line=dict(width=1)
            ),
            row=1, col=1
        )
        # Z component
        fig.add_trace(
            go.Scatter(
                x=time,
                y=root_positions[:, 2],
                name="Pelvis Position (Z)",
                mode='lines',
                line=dict(width=1)
            ),
            row=1, col=1
        )
    
    # 2. Plot all joint positions
    if 'dof_positions' in data:
        for i in range(len(dof_names)):
            fig.add_trace(
                go.Scatter(
                    x=time,
                    y=data['dof_positions'][:, i],
                    name=dof_names[i],
                    mode='lines',
                    line=dict(width=1)
                ),
                row=2, col=1
            )
    
    # 3. Plot all joint velocities
    if 'dof_velocities' in data:
        for i in range(len(dof_names)):
            fig.add_trace(
                go.Scatter(
                    x=time,
                    y=data['dof_velocities'][:, i],
                    name=dof_names[i],
                    mode='lines',
                    line=dict(width=1)
                ),
                row=3, col=1
            )
    
    # 4. Plot all body linear velocities (X, Y, Z components)
    if 'body_linear_velocities' in data:
        for i in range(len(body_names)):
            velocities = data['body_linear_velocities'][:, i, :]
            # X component
            fig.add_trace(
                go.Scatter(
                    x=time,
                    y=velocities[:, 0],
                    name=f"{body_names[i]} (X)",
                    mode='lines',
                    line=dict(width=1)
                ),
                row=4, col=1
            )
            # Y component
            fig.add_trace(
                go.Scatter(
                    x=time,
                    y=velocities[:, 1],
                    name=f"{body_names[i]} (Y)",
                    mode='lines',
                    line=dict(width=1)
                ),
                row=4, col=1
            )
            # Z component
            fig.add_trace(
                go.Scatter(
                    x=time,
                    y=velocities[:, 2],
                    name=f"{body_names[i]} (Z)",
                    mode='lines',
                    line=dict(width=1)
                ),
                row=4, col=1
            )
    
    # 5. Plot all body angular velocities (X, Y, Z components)
    if 'body_angular_velocities' in data:
        for i in range(len(body_names)):
            velocities = data['body_angular_velocities'][:, i, :]
            # X component
            fig.add_trace(
                go.Scatter(
                    x=time,
                    y=velocities[:, 0],
                    name=f"{body_names[i]} (X)",
                    mode='lines',
                    line=dict(width=1)
                ),
                row=5, col=1
            )
            # Y component
            fig.add_trace(
                go.Scatter(
                    x=time,
                    y=velocities[:, 1],
                    name=f"{body_names[i]} (Y)",
                    mode='lines',
                    line=dict(width=1)
                ),
                row=5, col=1
            )
            # Z component
            fig.add_trace(
                go.Scatter(
                    x=time,
                    y=velocities[:, 2],
                    name=f"{body_names[i]} (Z)",
                    mode='lines',
                    line=dict(width=1)
                ),
                row=5, col=1
            )
    
    # Update layout
    fig.update_layout(
        title_text="Motion Data Visualization (Complete Data)",
        height=2000,
        showlegend=True,
        legend=dict(
            yanchor="top",
            y=0.99,
            xanchor="left",
            x=1.05,
            font=dict(size=8)
        )
    )
    
    # Update axis labels
    for i in range(1, 6):
        fig.update_xaxes(title_text="Time (seconds)", row=i, col=1)
    
    fig.update_yaxes(title_text="Position (m)", row=1, col=1)
    fig.update_yaxes(title_text="Position (radians)", row=2, col=1)
    fig.update_yaxes(title_text="Velocity (rad/s)", row=3, col=1)
    fig.update_yaxes(title_text="Linear Velocity (m/s)", row=4, col=1)
    fig.update_yaxes(title_text="Angular Velocity (rad/s)", row=5, col=1)
    
    # Generate output file path automatically
    input_dir = os.path.dirname(os.path.abspath(npz_file))
    input_filename = os.path.splitext(os.path.basename(npz_file))[0]
    output_html = os.path.join(input_dir, f"{input_filename}_visualization.html")
    
    # Save visualization results
    fig.write_html(output_html)
    print(f"Visualization results saved to: {output_html}")

def quaternion_to_euler(q):
    """Convert quaternion to Euler angles (roll, pitch, yaw)"""
    # Normalize quaternion
    q = q / np.linalg.norm(q)
    w, x, y, z = q
    
    # Calculate Euler angles
    roll = np.arctan2(2 * (w*x + y*z), 1 - 2 * (x*x + y*y))
    pitch = np.arcsin(2 * (w*y - z*x))
    yaw = np.arctan2(2 * (w*z + x*y), 1 - 2 * (y*y + z*z))
    
    return np.array([roll, pitch, yaw])

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Visualize motion data from npz file")
    parser.add_argument("--file", type=str, required=True, help="Path to the npz file")
    args = parser.parse_args()
    
    visualize_motion(args.file) 

================================================
FILE: play.py
================================================
# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

"""
Script to play a checkpoint of an RL agent from skrl.

Visit the skrl documentation (https://skrl.readthedocs.io) to see the examples structured in
a more user-friendly way.
"""

"""Launch Isaac Sim Simulator first."""

import argparse
import sys

from isaaclab.app import AppLauncher

# add argparse arguments
parser = argparse.ArgumentParser(description="Play a checkpoint of an RL agent from skrl.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument(
    "--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
    "--agent",
    type=str,
    default=None,
    help=(
        "Name of the RL agent configuration entry point. Defaults to None, in which case the argument "
        "--algorithm is used to determine the default agent configuration entry point."
    ),
)
parser.add_argument("--checkpoint", type=str, default=None, help="Path to model checkpoint.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument(
    "--use_pretrained_checkpoint",
    action="store_true",
    help="Use the pre-trained checkpoint from Nucleus.",
)
parser.add_argument(
    "--ml_framework",
    type=str,
    default="torch",
    choices=["torch", "jax", "jax-numpy"],
    help="The ML framework used for training the skrl agent.",
)
parser.add_argument(
    "--algorithm",
    type=str,
    default="PPO",
    choices=["AMP", "PPO", "IPPO", "MAPPO"],
    help="The RL algorithm used for training the skrl agent.",
)
parser.add_argument("--real-time", action="store_true", default=False, help="Run in real-time, if possible.")

# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli, hydra_args = parser.parse_known_args()
# always enable cameras to record video
if args_cli.video:
    args_cli.enable_cameras = True

# clear out sys.argv for Hydra
sys.argv = [sys.argv[0]] + hydra_args
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

"""Rest everything follows."""

import gymnasium as gym
import os
import random
import time
import torch

import skrl
from packaging import version

# check for minimum supported skrl version
SKRL_VERSION = "1.4.3"
if version.parse(skrl.__version__) < version.parse(SKRL_VERSION):
    skrl.logger.error(
        f"Unsupported skrl version: {skrl.__version__}. "
        f"Install supported version using 'pip install skrl>={SKRL_VERSION}'"
    )
    exit()

if args_cli.ml_framework.startswith("torch"):
    from skrl.utils.runner.torch import Runner
elif args_cli.ml_framework.startswith("jax"):
    from skrl.utils.runner.jax import Runner

from isaaclab.envs import (
    DirectMARLEnv,
    DirectMARLEnvCfg,
    DirectRLEnvCfg,
    ManagerBasedRLEnvCfg,
    multi_agent_to_single_agent,
)
from isaaclab.utils.dict import print_dict
from isaaclab.utils.pretrained_checkpoint import get_published_pretrained_checkpoint

from isaaclab_rl.skrl import SkrlVecEnvWrapper

import isaaclab_tasks  # noqa: F401
from isaaclab_tasks.utils import get_checkpoint_path
from isaaclab_tasks.utils.hydra import hydra_task_config

# PLACEHOLDER: Extension template (do not remove this comment)

# config shortcuts
if args_cli.agent is None:
    algorithm = args_cli.algorithm.lower()
    agent_cfg_entry_point = "skrl_cfg_entry_point" if algorithm in ["ppo"] else f"skrl_{algorithm}_cfg_entry_point"
else:
    agent_cfg_entry_point = args_cli.agent
    algorithm = agent_cfg_entry_point.split("_cfg")[0].split("skrl_")[-1].lower()


@hydra_task_config(args_cli.task, agent_cfg_entry_point)
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, experiment_cfg: dict):
    """Play with skrl agent."""
    # grab task name for checkpoint path
    task_name = args_cli.task.split(":")[-1]
    train_task_name = task_name.replace("-Play", "")

    # override configurations with non-hydra CLI arguments
    env_cfg.scene.num_envs = args_cli.num_envs if args_cli.num_envs is not None else env_cfg.scene.num_envs
    env_cfg.sim.device = args_cli.device if args_cli.device is not None else env_cfg.sim.device

    # configure the ML framework into the global skrl variable
    if args_cli.ml_framework.startswith("jax"):
        skrl.config.jax.backend = "jax" if args_cli.ml_framework == "jax" else "numpy"

        # randomly sample a seed if seed = -1
    if args_cli.seed == -1:
        args_cli.seed = random.randint(0, 10000)

    # set the agent and environment seed from command line
    # note: certain randomization occur in the environment initialization so we set the seed here
    experiment_cfg["seed"] = args_cli.seed if args_cli.seed is not None else experiment_cfg["seed"]
    env_cfg.seed = experiment_cfg["seed"]

    # specify directory for logging experiments (load checkpoint)
    log_root_path = os.path.join("logs", "skrl", experiment_cfg["agent"]["experiment"]["directory"])
    log_root_path = os.path.abspath(log_root_path)
    print(f"[INFO] Loading experiment from directory: {log_root_path}")
    # get checkpoint path
    if args_cli.use_pretrained_checkpoint:
        resume_path = get_published_pretrained_checkpoint("skrl", train_task_name)
        if not resume_path:
            print("[INFO] Unfortunately a pre-trained checkpoint is currently unavailable for this task.")
            return
    elif args_cli.checkpoint:
        resume_path = os.path.abspath(args_cli.checkpoint)
    else:
        resume_path = get_checkpoint_path(
            log_root_path, run_dir=f".*_{algorithm}_{args_cli.ml_framework}", other_dirs=["checkpoints"]
        )
    log_dir = os.path.dirname(os.path.dirname(resume_path))

    # set the log directory for the environment (works for all environment types)
    env_cfg.log_dir = log_dir

    # create isaac environment
    env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)

    # convert to single-agent instance if required by the RL algorithm
    if isinstance(env.unwrapped, DirectMARLEnv) and algorithm in ["ppo"]:
        env = multi_agent_to_single_agent(env)

    # get environment (step) dt for real-time evaluation
    try:
        dt = env.step_dt
    except AttributeError:
        dt = env.unwrapped.step_dt

    # wrap for video recording
    if args_cli.video:
        video_kwargs = {
            "video_folder": os.path.join(log_dir, "videos", "play"),
            "step_trigger": lambda step: step == 0,
            "video_length": args_cli.video_length,
            "disable_logger": True,
        }
        print("[INFO] Recording videos during training.")
        print_dict(video_kwargs, nesting=4)
        env = gym.wrappers.RecordVideo(env, **video_kwargs)

    # wrap around environment for skrl
    env = SkrlVecEnvWrapper(env, ml_framework=args_cli.ml_framework)  # same as: `wrap_env(env, wrapper="auto")`

    # configure and instantiate the skrl runner
    # https://skrl.readthedocs.io/en/latest/api/utils/runner.html
    experiment_cfg["trainer"]["close_environment_at_exit"] = False
    experiment_cfg["agent"]["experiment"]["write_interval"] = 0  # don't log to TensorBoard
    experiment_cfg["agent"]["experiment"]["checkpoint_interval"] = 0  # don't generate checkpoints
    runner = Runner(env, experiment_cfg)

    print(f"[INFO] Loading model checkpoint from: {resume_path}")
    runner.agent.load(resume_path)
    # set agent to evaluation mode
    runner.agent.set_running_mode("eval")

    # reset environment
    obs, _ = env.reset()
    timestep = 0
    # simulate environment
    while simulation_app.is_running():
        start_time = time.time()

        # run everything in inference mode
        with torch.inference_mode():
            # agent stepping
            outputs = runner.agent.act(obs, timestep=0, timesteps=0)
            # - multi-agent (deterministic) actions
            if hasattr(env, "possible_agents"):
                actions = {a: outputs[-1][a].get("mean_actions", outputs[0][a]) for a in env.possible_agents}
            # - single-agent (deterministic) actions
            else:
                actions = outputs[-1].get("mean_actions", outputs[0])
            # env stepping
            obs, _, _, _, _ = env.step(actions)
        if args_cli.video:
            timestep += 1
            # exit the play loop after recording one video
            if timestep == args_cli.video_length:
                break

        # time delay for real-time evaluation
        sleep_time = dt - (time.time() - start_time)
        if args_cli.real_time and sleep_time > 0:
            time.sleep(sleep_time)

    # close the simulator
    env.close()


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()


================================================
FILE: pyproject.toml
================================================
[build-system]
requires = ["setuptools>=65.0"]
build-backend = "setuptools.build_meta"

[project]
name = "humanoid-amp"
version = "0.1.0"
description = "Humanoid AMP tasks and utilities for Isaac Lab"
readme = "README.md"
requires-python = ">=3.10"
authors = [
  { name = "Unitree Robotics Hobbyist" }
]
license = "BSD-3-Clause"
classifiers = [
  "Programming Language :: Python :: 3",
  "Programming Language :: Python :: 3.10",
  "Operating System :: POSIX :: Linux"
]

[tool.setuptools]
packages = ["humanoid_amp"]

[tool.setuptools.package-dir]
humanoid_amp = "."

[tool.setuptools.package-data]
humanoid_amp = [
  "motions/**/*",
  "usd/**/*"
]


================================================
FILE: sync_skrl_scripts.sh
================================================
#!/usr/bin/env bash
# Sync skrl train/play scripts from Isaac Lab releases and reapply local tweaks.
set -euo pipefail

BASE_URL="https://raw.githubusercontent.com/isaac-sim/IsaacLab/main/scripts/reinforcement_learning/skrl"
FILES=(train.py play.py)

fetch() {
  local url="$1"
  local dest="$2"
  curl -fsSL "${url}" -o "${dest}"
}

for name in "${FILES[@]}"; do
  url="${BASE_URL}/${name}"
  echo "Fetching ${url}"
  fetch "${url}" "${name}"
done

echo "Done."


================================================
FILE: train.py
================================================
# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

"""
Script to train RL agent with skrl.

Visit the skrl documentation (https://skrl.readthedocs.io) to see the examples structured in
a more user-friendly way.
"""

"""Launch Isaac Sim Simulator first."""

import argparse
import sys

from isaaclab.app import AppLauncher

# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with skrl.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
    "--agent",
    type=str,
    default=None,
    help=(
        "Name of the RL agent configuration entry point. Defaults to None, in which case the argument "
        "--algorithm is used to determine the default agent configuration entry point."
    ),
)
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument(
    "--distributed", action="store_true", default=False, help="Run training with multiple GPUs or nodes."
)
parser.add_argument("--checkpoint", type=str, default=None, help="Path to model checkpoint to resume training.")
parser.add_argument("--max_iterations", type=int, default=None, help="RL Policy training iterations.")
parser.add_argument("--export_io_descriptors", action="store_true", default=False, help="Export IO descriptors.")
parser.add_argument(
    "--ml_framework",
    type=str,
    default="torch",
    choices=["torch", "jax", "jax-numpy"],
    help="The ML framework used for training the skrl agent.",
)
parser.add_argument(
    "--algorithm",
    type=str,
    default="PPO",
    choices=["AMP", "PPO", "IPPO", "MAPPO"],
    help="The RL algorithm used for training the skrl agent.",
)

# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli, hydra_args = parser.parse_known_args()
# always enable cameras to record video
if args_cli.video:
    args_cli.enable_cameras = True

# clear out sys.argv for Hydra
sys.argv = [sys.argv[0]] + hydra_args

# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

"""Rest everything follows."""

import gymnasium as gym
import os
import random
from datetime import datetime

import omni
import skrl
from packaging import version

# check for minimum supported skrl version
SKRL_VERSION = "1.4.3"
if version.parse(skrl.__version__) < version.parse(SKRL_VERSION):
    skrl.logger.error(
        f"Unsupported skrl version: {skrl.__version__}. "
        f"Install supported version using 'pip install skrl>={SKRL_VERSION}'"
    )
    exit()

if args_cli.ml_framework.startswith("torch"):
    from skrl.utils.runner.torch import Runner
elif args_cli.ml_framework.startswith("jax"):
    from skrl.utils.runner.jax import Runner

from isaaclab.envs import (
    DirectMARLEnv,
    DirectMARLEnvCfg,
    DirectRLEnvCfg,
    ManagerBasedRLEnvCfg,
    multi_agent_to_single_agent,
)
from isaaclab.utils.assets import retrieve_file_path
from isaaclab.utils.dict import print_dict
from isaaclab.utils.io import dump_yaml

from isaaclab_rl.skrl import SkrlVecEnvWrapper

import isaaclab_tasks  # noqa: F401
from isaaclab_tasks.utils.hydra import hydra_task_config

# PLACEHOLDER: Extension template (do not remove this comment)

# config shortcuts
if args_cli.agent is None:
    algorithm = args_cli.algorithm.lower()
    agent_cfg_entry_point = "skrl_cfg_entry_point" if algorithm in ["ppo"] else f"skrl_{algorithm}_cfg_entry_point"
else:
    agent_cfg_entry_point = args_cli.agent
    algorithm = agent_cfg_entry_point.split("_cfg")[0].split("skrl_")[-1].lower()


@hydra_task_config(args_cli.task, agent_cfg_entry_point)
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: dict):
    """Train with skrl agent."""
    # override configurations with non-hydra CLI arguments
    env_cfg.scene.num_envs = args_cli.num_envs if args_cli.num_envs is not None else env_cfg.scene.num_envs
    env_cfg.sim.device = args_cli.device if args_cli.device is not None else env_cfg.sim.device

    # multi-gpu training config
    if args_cli.distributed:
        env_cfg.sim.device = f"cuda:{app_launcher.local_rank}"
    # max iterations for training
    if args_cli.max_iterations:
        agent_cfg["trainer"]["timesteps"] = args_cli.max_iterations * agent_cfg["agent"]["rollouts"]
    agent_cfg["trainer"]["close_environment_at_exit"] = False
    # configure the ML framework into the global skrl variable
    if args_cli.ml_framework.startswith("jax"):
        skrl.config.jax.backend = "jax" if args_cli.ml_framework == "jax" else "numpy"

    # randomly sample a seed if seed = -1
    if args_cli.seed == -1:
        args_cli.seed = random.randint(0, 10000)

    # set the agent and environment seed from command line
    # note: certain randomization occur in the environment initialization so we set the seed here
    agent_cfg["seed"] = args_cli.seed if args_cli.seed is not None else agent_cfg["seed"]
    env_cfg.seed = agent_cfg["seed"]

    # specify directory for logging experiments
    log_root_path = os.path.join("logs", "skrl", agent_cfg["agent"]["experiment"]["directory"])
    log_root_path = os.path.abspath(log_root_path)
    print(f"[INFO] Logging experiment in directory: {log_root_path}")
    # specify directory for logging runs: {time-stamp}_{run_name}
    log_dir = datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + f"_{algorithm}_{args_cli.ml_framework}"
    # The Ray Tune workflow extracts experiment name using the logging line below, hence, do not change it (see PR #2346, comment-2819298849)
    print(f"Exact experiment name requested from command line: {log_dir}")
    if agent_cfg["agent"]["experiment"]["experiment_name"]:
        log_dir += f'_{agent_cfg["agent"]["experiment"]["experiment_name"]}'
    # set directory into agent config
    agent_cfg["agent"]["experiment"]["directory"] = log_root_path
    agent_cfg["agent"]["experiment"]["experiment_name"] = log_dir
    # update log_dir
    log_dir = os.path.join(log_root_path, log_dir)

    # dump the configuration into log-directory
    dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
    dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), agent_cfg)

    # get checkpoint path (to resume training)
    resume_path = retrieve_file_path(args_cli.checkpoint) if args_cli.checkpoint else None

    # set the IO descriptors export flag if requested
    if isinstance(env_cfg, ManagerBasedRLEnvCfg):
        env_cfg.export_io_descriptors = args_cli.export_io_descriptors
    else:
        omni.log.warn(
            "IO descriptors are only supported for manager based RL environments. No IO descriptors will be exported."
        )

    # set the log directory for the environment (works for all environment types)
    env_cfg.log_dir = log_dir

    # create isaac environment
    env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)

    # convert to single-agent instance if required by the RL algorithm
    if isinstance(env.unwrapped, DirectMARLEnv) and algorithm in ["ppo"]:
        env = multi_agent_to_single_agent(env)

    # wrap for video recording
    if args_cli.video:
        video_kwargs = {
            "video_folder": os.path.join(log_dir, "videos", "train"),
            "step_trigger": lambda step: step % args_cli.video_interval == 0,
            "video_length": args_cli.video_length,
            "disable_logger": True,
        }
        print("[INFO] Recording videos during training.")
        print_dict(video_kwargs, nesting=4)
        env = gym.wrappers.RecordVideo(env, **video_kwargs)

    # wrap around environment for skrl
    env = SkrlVecEnvWrapper(env, ml_framework=args_cli.ml_framework)  # same as: `wrap_env(env, wrapper="auto")`

    # configure and instantiate the skrl runner
    # https://skrl.readthedocs.io/en/latest/api/utils/runner.html
    runner = Runner(env, agent_cfg)

    # load checkpoint (if specified)
    if resume_path:
        print(f"[INFO] Loading model checkpoint from: {resume_path}")
        runner.agent.load(resume_path)

    # run training
    runner.run()

    # close the simulator
    env.close()


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()


================================================
FILE: usd/g1_29dof_rev_1_0.urdf
================================================
<robot name="g1_29dof_rev_1_0">
  <mujoco>
    <compiler meshdir="meshes" discardvisual="false"/>
  </mujoco>

  <!-- [CAUTION] uncomment when convert to mujoco -->
  <!-- <link name="world"></link>
  <joint name="floating_base_joint" type="floating">
    <parent link="world"/>
    <child link="pelvis"/>
  </joint> -->

  <link name="pelvis">
    <inertial>
      <origin xyz="0 0 -0.07605" rpy="0 0 0"/>
      <mass value="3.813"/>
      <inertia ixx="0.010549" ixy="0" ixz="2.1E-06" iyy="0.0093089" iyz="0" izz="0.0079184"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/pelvis.STL"/>
      </geometry>
      <material name="dark">
        <color rgba="0.2 0.2 0.2 1"/>
      </material>
    </visual>
  </link>
  <link name="pelvis_contour_link">
    <inertial>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <mass value="0.001"/>
      <inertia ixx="1e-7" ixy="0" ixz="0" iyy="1e-7" iyz="0" izz="1e-7"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/pelvis_contour_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/pelvis_contour_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="pelvis_contour_joint" type="fixed">
    <parent link="pelvis"/>
    <child link="pelvis_contour_link"/>
  </joint>

  <!-- Legs -->
  <link name="left_hip_pitch_link">
    <inertial>
      <origin xyz="0.002741 0.047791 -0.02606" rpy="0 0 0"/>
      <mass value="1.35"/>
      <inertia ixx="0.001811" ixy="3.68E-05" ixz="-3.44E-05" iyy="0.0014193" iyz="0.000171" izz="0.0012812"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_hip_pitch_link.STL"/>
      </geometry>
      <material name="dark">
        <color rgba="0.2 0.2 0.2 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_hip_pitch_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_hip_pitch_joint" type="revolute">
    <origin xyz="0 0.064452 -0.1027" rpy="0 0 0"/>
    <parent link="pelvis"/>
    <child link="left_hip_pitch_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-2.5307" upper="2.8798" effort="88" velocity="32"/>
  </joint>
  <link name="left_hip_roll_link">
    <inertial>
      <origin xyz="0.029812 -0.001045 -0.087934" rpy="0 0 0"/>
      <mass value="1.52"/>
      <inertia ixx="0.0023773" ixy="-3.8E-06" ixz="-0.0003908" iyy="0.0024123" iyz="1.84E-05" izz="0.0016595"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_hip_roll_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_hip_roll_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_hip_roll_joint" type="revolute">
    <origin xyz="0 0.052 -0.030465" rpy="0 -0.1749 0"/>
    <parent link="left_hip_pitch_link"/>
    <child link="left_hip_roll_link"/>
    <axis xyz="1 0 0"/>
    <limit lower="-0.5236" upper="2.9671" effort="139" velocity="20"/>
  </joint>
  <link name="left_hip_yaw_link">
    <inertial>
      <origin xyz="-0.057709 -0.010981 -0.15078" rpy="0 0 0"/>
      <mass value="1.702"/>
      <inertia ixx="0.0057774" ixy="-0.0005411" ixz="-0.0023948" iyy="0.0076124" iyz="-0.0007072" izz="0.003149"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_hip_yaw_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_hip_yaw_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_hip_yaw_joint" type="revolute">
    <origin xyz="0.025001 0 -0.12412" rpy="0 0 0"/>
    <parent link="left_hip_roll_link"/>
    <child link="left_hip_yaw_link"/>
    <axis xyz="0 0 1"/>
    <limit lower="-2.7576" upper="2.7576" effort="88" velocity="32"/>
  </joint>
  <link name="left_knee_link">
    <inertial>
      <origin xyz="0.005457 0.003964 -0.12074" rpy="0 0 0"/>
      <mass value="1.932"/>
      <inertia ixx="0.011329" ixy="4.82E-05" ixz="-4.49E-05" iyy="0.011277" iyz="-0.0007146" izz="0.0015168"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_knee_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_knee_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_knee_joint" type="revolute">
    <origin xyz="-0.078273 0.0021489 -0.17734" rpy="0 0.1749 0"/>
    <parent link="left_hip_yaw_link"/>
    <child link="left_knee_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-0.087267" upper="2.8798" effort="139" velocity="20"/>
  </joint>
  <link name="left_ankle_pitch_link">
    <inertial>
      <origin xyz="-0.007269 0 0.011137" rpy="0 0 0"/>
      <mass value="0.074"/>
      <inertia ixx="8.4E-06" ixy="0" ixz="-2.9E-06" iyy="1.89E-05" iyz="0" izz="1.26E-05"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_ankle_pitch_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_ankle_pitch_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_ankle_pitch_joint" type="revolute">
    <origin xyz="0 -9.4445E-05 -0.30001" rpy="0 0 0"/>
    <parent link="left_knee_link"/>
    <child link="left_ankle_pitch_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-0.87267" upper="0.5236" effort="50" velocity="37"/>
  </joint>
  <link name="left_ankle_roll_link">
    <inertial>
      <origin xyz="0.026505 0 -0.016425" rpy="0 0 0"/>
      <mass value="0.608"/>
      <inertia ixx="0.0002231" ixy="2E-07" ixz="8.91E-05" iyy="0.0016161" iyz="-1E-07" izz="0.0016667"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_ankle_roll_link.STL"/>
      </geometry>
      <material name="dark">
        <color rgba="0.2 0.2 0.2 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="-0.05 0.025 -0.03" rpy="0 0 0"/>
      <geometry>
        <sphere radius="0.005"/>
      </geometry>
    </collision>
    <collision>
      <origin xyz="-0.05 -0.025 -0.03" rpy="0 0 0"/>
      <geometry>
        <sphere radius="0.005"/>
      </geometry>
    </collision>
    <collision>
      <origin xyz="0.12 0.03 -0.03" rpy="0 0 0"/>
      <geometry>
        <sphere radius="0.005"/>
      </geometry>
    </collision>
    <collision>
      <origin xyz="0.12 -0.03 -0.03" rpy="0 0 0"/>
      <geometry>
        <sphere radius="0.005"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_ankle_roll_joint" type="revolute">
    <origin xyz="0 0 -0.017558" rpy="0 0 0"/>
    <parent link="left_ankle_pitch_link"/>
    <child link="left_ankle_roll_link"/>
    <axis xyz="1 0 0"/>
    <limit lower="-0.2618" upper="0.2618" effort="50" velocity="37"/>
  </joint>
  <link name="right_hip_pitch_link">
    <inertial>
      <origin xyz="0.002741 -0.047791 -0.02606" rpy="0 0 0"/>
      <mass value="1.35"/>
      <inertia ixx="0.001811" ixy="-3.68E-05" ixz="-3.44E-05" iyy="0.0014193" iyz="-0.000171" izz="0.0012812"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_hip_pitch_link.STL"/>
      </geometry>
      <material name="dark">
        <color rgba="0.2 0.2 0.2 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_hip_pitch_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_hip_pitch_joint" type="revolute">
    <origin xyz="0 -0.064452 -0.1027" rpy="0 0 0"/>
    <parent link="pelvis"/>
    <child link="right_hip_pitch_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-2.5307" upper="2.8798" effort="88" velocity="32"/>
  </joint>
  <link name="right_hip_roll_link">
    <inertial>
      <origin xyz="0.029812 0.001045 -0.087934" rpy="0 0 0"/>
      <mass value="1.52"/>
      <inertia ixx="0.0023773" ixy="3.8E-06" ixz="-0.0003908" iyy="0.0024123" iyz="-1.84E-05" izz="0.0016595"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_hip_roll_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_hip_roll_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_hip_roll_joint" type="revolute">
    <origin xyz="0 -0.052 -0.030465" rpy="0 -0.1749 0"/>
    <parent link="right_hip_pitch_link"/>
    <child link="right_hip_roll_link"/>
    <axis xyz="1 0 0"/>
    <limit lower="-2.9671" upper="0.5236" effort="139" velocity="20"/>
  </joint>
  <link name="right_hip_yaw_link">
    <inertial>
      <origin xyz="-0.057709 0.010981 -0.15078" rpy="0 0 0"/>
      <mass value="1.702"/>
      <inertia ixx="0.0057774" ixy="0.0005411" ixz="-0.0023948" iyy="0.0076124" iyz="0.0007072" izz="0.003149"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_hip_yaw_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_hip_yaw_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_hip_yaw_joint" type="revolute">
    <origin xyz="0.025001 0 -0.12412" rpy="0 0 0"/>
    <parent link="right_hip_roll_link"/>
    <child link="right_hip_yaw_link"/>
    <axis xyz="0 0 1"/>
    <limit lower="-2.7576" upper="2.7576" effort="88" velocity="32"/>
  </joint>
  <link name="right_knee_link">
    <inertial>
      <origin xyz="0.005457 -0.003964 -0.12074" rpy="0 0 0"/>
      <mass value="1.932"/>
      <inertia ixx="0.011329" ixy="-4.82E-05" ixz="4.49E-05" iyy="0.011277" iyz="0.0007146" izz="0.0015168"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_knee_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_knee_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_knee_joint" type="revolute">
    <origin xyz="-0.078273 -0.0021489 -0.17734" rpy="0 0.1749 0"/>
    <parent link="right_hip_yaw_link"/>
    <child link="right_knee_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-0.087267" upper="2.8798" effort="139" velocity="20"/>
  </joint>
  <link name="right_ankle_pitch_link">
    <inertial>
      <origin xyz="-0.007269 0 0.011137" rpy="0 0 0"/>
      <mass value="0.074"/>
      <inertia ixx="8.4E-06" ixy="0" ixz="-2.9E-06" iyy="1.89E-05" iyz="0" izz="1.26E-05"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_ankle_pitch_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_ankle_pitch_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_ankle_pitch_joint" type="revolute">
    <origin xyz="0 9.4445E-05 -0.30001" rpy="0 0 0"/>
    <parent link="right_knee_link"/>
    <child link="right_ankle_pitch_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-0.87267" upper="0.5236" effort="50" velocity="37"/>
  </joint>
  <link name="right_ankle_roll_link">
    <inertial>
      <origin xyz="0.026505 0 -0.016425" rpy="0 0 0"/>
      <mass value="0.608"/>
      <inertia ixx="0.0002231" ixy="-2E-07" ixz="8.91E-05" iyy="0.0016161" iyz="1E-07" izz="0.0016667"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_ankle_roll_link.STL"/>
      </geometry>
      <material name="dark">
        <color rgba="0.2 0.2 0.2 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="-0.05 0.025 -0.03" rpy="0 0 0"/>
      <geometry>
        <sphere radius="0.005"/>
      </geometry>
    </collision>
    <collision>
      <origin xyz="-0.05 -0.025 -0.03" rpy="0 0 0"/>
      <geometry>
        <sphere radius="0.005"/>
      </geometry>
    </collision>
    <collision>
      <origin xyz="0.12 0.03 -0.03" rpy="0 0 0"/>
      <geometry>
        <sphere radius="0.005"/>
      </geometry>
    </collision>
    <collision>
      <origin xyz="0.12 -0.03 -0.03" rpy="0 0 0"/>
      <geometry>
        <sphere radius="0.005"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_ankle_roll_joint" type="revolute">
    <origin xyz="0 0 -0.017558" rpy="0 0 0"/>
    <parent link="right_ankle_pitch_link"/>
    <child link="right_ankle_roll_link"/>
    <axis xyz="1 0 0"/>
    <limit lower="-0.2618" upper="0.2618" effort="50" velocity="37"/>
  </joint>

  <!-- Torso -->
  <link name="waist_yaw_link">
    <inertial>
      <origin xyz="0.003494 0.000233 0.018034" rpy="0 0 0"/>
      <mass value="0.214"/>
      <inertia ixx="0.00010673" ixy="2.703E-06" ixz="-7.631E-06" iyy="0.00010422" iyz="-2.01E-07" izz="0.0001625"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/waist_yaw_link_rev_1_0.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
  </link>
  <joint name="waist_yaw_joint" type="revolute">
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <parent link="pelvis"/>
    <child link="waist_yaw_link"/>
    <axis xyz="0 0 1"/>
    <limit lower="-2.618" upper="2.618" effort="88" velocity="32"/>
  </joint>
  <link name="waist_roll_link">
    <inertial>
      <origin xyz="0 2.3E-05 0" rpy="0 0 0"/>
      <mass value="0.086"/>
      <inertia ixx="7.079E-06" ixy="0" ixz="0" iyy="6.339E-06" iyz="0" izz="8.245E-06"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/waist_roll_link_rev_1_0.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
  </link>
  <joint name="waist_roll_joint" type="revolute">
    <origin xyz="-0.0039635 0 0.044" rpy="0 0 0"/>
    <parent link="waist_yaw_link"/>
    <child link="waist_roll_link"/>
    <axis xyz="1 0 0"/>
    <limit lower="-0.52" upper="0.52" effort="50" velocity="37"/>
  </joint>
  <link name="torso_link">
    <inertial>
      <origin xyz="0.000931 0.000346 0.15082" rpy="0 0 0"/>
      <mass value="6.78"/>
      <inertia ixx="0.05905" ixy="3.3302E-05" ixz="-0.0017715" iyy="0.047014" iyz="-2.2399E-05" izz="0.025652"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/torso_link_rev_1_0.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/torso_link_rev_1_0.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="waist_pitch_joint" type="revolute">
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <parent link="waist_roll_link"/>
    <child link="torso_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-0.52" upper="0.52" effort="50" velocity="37"/>
  </joint>

  <!-- LOGO -->
  <joint name="logo_joint" type="fixed">
    <origin xyz="0.0039635 0 -0.044" rpy="0 0 0"/>
    <parent link="torso_link"/>
    <child link="logo_link"/>
  </joint>
  <link name="logo_link">
    <inertial>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <mass value="0.001"/>
      <inertia ixx="1e-7" ixy="0" ixz="0" iyy="1e-7" iyz="0" izz="1e-7"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/logo_link.STL"/>
      </geometry>
      <material name="dark">
        <color rgba="0.2 0.2 0.2 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/logo_link.STL"/>
      </geometry>
    </collision>
  </link>

  <!-- Head -->
  <link name="head_link">
    <inertial>
      <origin xyz="0.005267 0.000299 0.449869" rpy="0 0 0"/>
      <mass value="1.036"/>
      <inertia ixx="0.004085051" ixy="-2.543E-06" ixz="-6.9455E-05" iyy="0.004185212" iyz="-3.726E-06" izz="0.001807911"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/head_link.STL"/>
      </geometry>
      <material name="dark">
        <color rgba="0.2 0.2 0.2 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/head_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="head_joint" type="fixed">
    <origin xyz="0.0039635 0 -0.044" rpy="0 0 0"/>
    <parent link="torso_link"/>
    <child link="head_link"/>
  </joint>


  <!-- IMU -->
  <link name="imu_in_torso"></link>
  <joint name="imu_in_torso_joint" type="fixed">
    <origin xyz="-0.03959 -0.00224 0.14792" rpy="0 0 0"/>
    <parent link="torso_link"/>
    <child link="imu_in_torso"/>
  </joint>

  <link name="imu_in_pelvis"></link>
  <joint name="imu_in_pelvis_joint" type="fixed">
    <origin xyz="0.04525 0 -0.08339" rpy="0 0 0"/>
    <parent link="pelvis"/>
    <child link="imu_in_pelvis"/>
  </joint>

  <!-- d435 -->
  <link name="d435_link"></link>
  <joint name="d435_joint" type="fixed">
    <origin xyz="0.0576235 0.01753 0.42987" rpy="0 0.8307767239493009 0"/>
    <parent link="torso_link"/>
    <child link="d435_link"/>
  </joint>

  <!-- mid360 -->
  <link name="mid360_link"></link>
  <joint name="mid360_joint" type="fixed">
    <origin xyz="0.0002835 0.00003 0.41618" rpy="0 0.04014257279586953 0"/>
    <parent link="torso_link"/>
    <child link="mid360_link"/>
  </joint>

  <!-- Arm -->
  <link name="left_shoulder_pitch_link">
    <inertial>
      <origin xyz="0 0.035892 -0.011628" rpy="0 0 0"/>
      <mass value="0.718"/>
      <inertia ixx="0.0004291" ixy="-9.2E-06" ixz="6.4E-06" iyy="0.000453" iyz="2.26E-05" izz="0.000423"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_shoulder_pitch_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0.04 -0.01" rpy="0 1.5707963267948966 0"/>
      <geometry>
        <cylinder radius="0.03" length="0.05"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_shoulder_pitch_joint" type="revolute">
    <origin xyz="0.0039563 0.10022 0.24778" rpy="0.27931 5.4949E-05 -0.00019159"/>
    <parent link="torso_link"/>
    <child link="left_shoulder_pitch_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-3.0892" upper="2.6704" effort="25" velocity="37"/>
  </joint>
  <link name="left_shoulder_roll_link">
    <inertial>
      <origin xyz="-0.000227 0.00727 -0.063243" rpy="0 0 0"/>
      <mass value="0.643"/>
      <inertia ixx="0.0006177" ixy="-1E-06" ixz="8.7E-06" iyy="0.0006912" iyz="-5.3E-06" izz="0.0003894"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_shoulder_roll_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="-0.004 0.006 -0.053" rpy="0 0 0"/>
      <geometry>
        <cylinder radius="0.03" length="0.03"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_shoulder_roll_joint" type="revolute">
    <origin xyz="0 0.038 -0.013831" rpy="-0.27925 0 0"/>
    <parent link="left_shoulder_pitch_link"/>
    <child link="left_shoulder_roll_link"/>
    <axis xyz="1 0 0"/>
    <limit lower="-1.5882" upper="2.2515" effort="25" velocity="37"/>
  </joint>
  <link name="left_shoulder_yaw_link">
    <inertial>
      <origin xyz="0.010773 -0.002949 -0.072009" rpy="0 0 0"/>
      <mass value="0.734"/>
      <inertia ixx="0.0009988" ixy="7.9E-06" ixz="0.0001412" iyy="0.0010605" iyz="-2.86E-05" izz="0.0004354"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_shoulder_yaw_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_shoulder_yaw_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_shoulder_yaw_joint" type="revolute">
    <origin xyz="0 0.00624 -0.1032" rpy="0 0 0"/>
    <parent link="left_shoulder_roll_link"/>
    <child link="left_shoulder_yaw_link"/>
    <axis xyz="0 0 1"/>
    <limit lower="-2.618" upper="2.618" effort="25" velocity="37"/>
  </joint>
  <link name="left_elbow_link">
    <inertial>
      <origin xyz="0.064956 0.004454 -0.010062" rpy="0 0 0"/>
      <mass value="0.6"/>
      <inertia ixx="0.0002891" ixy="6.53E-05" ixz="1.72E-05" iyy="0.0004152" iyz="-5.6E-06" izz="0.0004197"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_elbow_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_elbow_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_elbow_joint" type="revolute">
    <origin xyz="0.015783 0 -0.080518" rpy="0 0 0"/>
    <parent link="left_shoulder_yaw_link"/>
    <child link="left_elbow_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-1.0472" upper="2.0944" effort="25" velocity="37"/>
  </joint>
  <joint name="left_wrist_roll_joint" type="revolute">
    <origin xyz="0.100 0.00188791 -0.010" rpy="0 0 0"/>
    <axis xyz="1 0 0"/>
    <parent link="left_elbow_link"/>
    <child link="left_wrist_roll_link"/>
    <limit effort="25" velocity="37" lower="-1.972222054" upper="1.972222054"/>
  </joint>
  <link name="left_wrist_roll_link">
    <inertial>
      <origin xyz="0.01713944778 0.00053759094 0.00000048864" rpy="0 0 0"/>
      <mass value="0.08544498"/>
      <inertia ixx="0.00004821544023" ixy="-0.00000424511021" ixz="0.00000000510599" iyy="0.00003722899093" iyz="-0.00000000123525" izz="0.00005482106541"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_wrist_roll_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_wrist_roll_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_wrist_pitch_joint" type="revolute">
    <origin xyz="0.038 0 0" rpy="0 0 0"/>
    <axis xyz="0 1 0"/>
    <parent link="left_wrist_roll_link"/>
    <child link="left_wrist_pitch_link"/>
    <limit effort="5" velocity="22" lower="-1.614429558" upper="1.614429558"/>
  </joint>
  <link name="left_wrist_pitch_link">
    <inertial>
      <origin xyz="0.02299989837 -0.00111685314 -0.00111658096" rpy="0 0 0"/>
      <mass value="0.48404956"/>
      <inertia ixx="0.00016579646273" ixy="-0.00001231206746" ixz="0.00001231699194" iyy="0.00042954057410" iyz="0.00000081417712" izz="0.00042953697654"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_wrist_pitch_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_wrist_pitch_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_wrist_yaw_joint" type="revolute">
    <origin xyz="0.046 0 0" rpy="0 0 0"/>
    <axis xyz="0 0 1"/>
    <parent link="left_wrist_pitch_link"/>
    <child link="left_wrist_yaw_link"/>
    <limit effort="5" velocity="22" lower="-1.614429558" upper="1.614429558"/>
  </joint>
  <link name="left_wrist_yaw_link">
    <inertial>
      <origin xyz="0.02200381568 0.00049485096 0.00053861123" rpy="0 0 0"/>
      <mass value="0.08457647"/>
      <inertia ixx="0.00004929128828" ixy="-0.00000045735494" ixz="0.00000445867591" iyy="0.00005973338134" iyz="0.00000043217198" izz="0.00003928083826"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_wrist_yaw_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_wrist_yaw_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="left_hand_palm_joint" type="fixed">
    <origin xyz="0.0415 0.003 0" rpy="0 0 0"/>
    <parent link="left_wrist_yaw_link"/>
    <child link="left_rubber_hand"/>
  </joint>
  <link name="left_rubber_hand">
    <inertial>
      <origin xyz="0.05361310808 -0.00295905240 0.00215413091" rpy="0 0 0"/>
      <mass value="0.170"/>
      <inertia ixx="0.00010099485234748" ixy="0.00003618590790516" ixz="-0.00000074301518642" iyy="0.00028135871571621" iyz="0.00000330189743286" izz="0.00021894770413514"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/left_rubber_hand.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
  </link>
  <link name="right_shoulder_pitch_link">
    <inertial>
      <origin xyz="0 -0.035892 -0.011628" rpy="0 0 0"/>
      <mass value="0.718"/>
      <inertia ixx="0.0004291" ixy="9.2E-06" ixz="6.4E-06" iyy="0.000453" iyz="-2.26E-05" izz="0.000423"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_shoulder_pitch_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 -0.04 -0.01" rpy="0 1.5707963267948966 0"/>
      <geometry>
        <cylinder radius="0.03" length="0.05"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_shoulder_pitch_joint" type="revolute">
    <origin xyz="0.0039563 -0.10021 0.24778" rpy="-0.27931 5.4949E-05 0.00019159"/>
    <parent link="torso_link"/>
    <child link="right_shoulder_pitch_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-3.0892" upper="2.6704" effort="25" velocity="37"/>
  </joint>
  <link name="right_shoulder_roll_link">
    <inertial>
      <origin xyz="-0.000227 -0.00727 -0.063243" rpy="0 0 0"/>
      <mass value="0.643"/>
      <inertia ixx="0.0006177" ixy="1E-06" ixz="8.7E-06" iyy="0.0006912" iyz="5.3E-06" izz="0.0003894"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_shoulder_roll_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="-0.004 -0.006 -0.053" rpy="0 0 0"/>
      <geometry>
        <cylinder radius="0.03" length="0.03"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_shoulder_roll_joint" type="revolute">
    <origin xyz="0 -0.038 -0.013831" rpy="0.27925 0 0"/>
    <parent link="right_shoulder_pitch_link"/>
    <child link="right_shoulder_roll_link"/>
    <axis xyz="1 0 0"/>
    <limit lower="-2.2515" upper="1.5882" effort="25" velocity="37"/>
  </joint>
  <link name="right_shoulder_yaw_link">
    <inertial>
      <origin xyz="0.010773 0.002949 -0.072009" rpy="0 0 0"/>
      <mass value="0.734"/>
      <inertia ixx="0.0009988" ixy="-7.9E-06" ixz="0.0001412" iyy="0.0010605" iyz="2.86E-05" izz="0.0004354"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_shoulder_yaw_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_shoulder_yaw_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_shoulder_yaw_joint" type="revolute">
    <origin xyz="0 -0.00624 -0.1032" rpy="0 0 0"/>
    <parent link="right_shoulder_roll_link"/>
    <child link="right_shoulder_yaw_link"/>
    <axis xyz="0 0 1"/>
    <limit lower="-2.618" upper="2.618" effort="25" velocity="37"/>
  </joint>
  <link name="right_elbow_link">
    <inertial>
      <origin xyz="0.064956 -0.004454 -0.010062" rpy="0 0 0"/>
      <mass value="0.6"/>
      <inertia ixx="0.0002891" ixy="-6.53E-05" ixz="1.72E-05" iyy="0.0004152" iyz="5.6E-06" izz="0.0004197"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_elbow_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_elbow_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_elbow_joint" type="revolute">
    <origin xyz="0.015783 0 -0.080518" rpy="0 0 0"/>
    <parent link="right_shoulder_yaw_link"/>
    <child link="right_elbow_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-1.0472" upper="2.0944" effort="25" velocity="37"/>
  </joint>
  <joint name="right_wrist_roll_joint" type="revolute">
    <origin xyz="0.100 -0.00188791 -0.010" rpy="0 0 0"/>
    <axis xyz="1 0 0"/>
    <parent link="right_elbow_link"/>
    <child link="right_wrist_roll_link"/>
    <limit effort="25" velocity="37" lower="-1.972222054" upper="1.972222054"/>
  </joint>
  <link name="right_wrist_roll_link">
    <inertial>
      <origin xyz="0.01713944778 -0.00053759094 0.00000048864" rpy="0 0 0"/>
      <mass value="0.08544498"/>
      <inertia ixx="0.00004821544023" ixy="0.00000424511021" ixz="0.00000000510599" iyy="0.00003722899093" iyz="0.00000000123525" izz="0.00005482106541"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_wrist_roll_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_wrist_roll_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_wrist_pitch_joint" type="revolute">
    <origin xyz="0.038 0 0" rpy="0 0 0"/>
    <axis xyz="0 1 0"/>
    <parent link="right_wrist_roll_link"/>
    <child link="right_wrist_pitch_link"/>
    <limit effort="5" velocity="22" lower="-1.614429558" upper="1.614429558"/>
  </joint>
  <link name="right_wrist_pitch_link">
    <inertial>
      <origin xyz="0.02299989837 0.00111685314 -0.00111658096" rpy="0 0 0"/>
      <mass value="0.48404956"/>
      <inertia ixx="0.00016579646273" ixy="0.00001231206746" ixz="0.00001231699194" iyy="0.00042954057410" iyz="-0.00000081417712" izz="0.00042953697654"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_wrist_pitch_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_wrist_pitch_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_wrist_yaw_joint" type="revolute">
    <origin xyz="0.046 0 0" rpy="0 0 0"/>
    <axis xyz="0 0 1"/>
    <parent link="right_wrist_pitch_link"/>
    <child link="right_wrist_yaw_link"/>
    <limit effort="5" velocity="22" lower="-1.614429558" upper="1.614429558"/>
  </joint>
  <link name="right_wrist_yaw_link">
    <inertial>
      <origin xyz="0.02200381568 -0.00049485096 0.00053861123" rpy="0 0 0"/>
      <mass value="0.08457647"/>
      <inertia ixx="0.00004929128828" ixy="0.00000045735494" ixz="0.00000445867591" iyy="0.00005973338134" iyz="-0.00000043217198" izz="0.00003928083826"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_wrist_yaw_link.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_wrist_yaw_link.STL"/>
      </geometry>
    </collision>
  </link>
  <joint name="right_hand_palm_joint" type="fixed">
    <origin xyz="0.0415 -0.003 0" rpy="0 0 0"/>
    <parent link="right_wrist_yaw_link"/>
    <child link="right_rubber_hand"/>
  </joint>
  <link name="right_rubber_hand">
    <inertial>
      <origin xyz="0.05361310808 0.00295905240 0.00215413091" rpy="0 0 0"/>
      <mass value="0.170"/>
      <inertia ixx="0.00010099485234748" ixy="-0.00003618590790516" ixz="-0.00000074301518642" iyy="0.00028135871571621" iyz="-0.00000330189743286" izz="0.00021894770413514"/>
    </inertial>
    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <mesh filename="meshes/right_rubber_hand.STL"/>
      </geometry>
      <material name="white">
        <color rgba="0.7 0.7 0.7 1"/>
      </material>
    </visual>
  </link>
</robot>

================================================
FILE: usd/g1_29dof_rev_1_0.usd
================================================
[File too large to display: 26.0 MB]

================================================
FILE: usd/instanceable_meshes.usd
================================================
[File too large to display: 26.0 MB]
Download .txt
gitextract_zeak4x_i/

├── .gitignore
├── LICENSE
├── README.md
├── __init__.py
├── agents/
│   ├── __init__.py
│   ├── skrl_dance_amp_cfg.yaml
│   ├── skrl_g1_dance_amp_cfg.yaml
│   ├── skrl_g1_walk_amp_cfg.yaml
│   ├── skrl_run_amp_cfg.yaml
│   └── skrl_walk_amp_cfg.yaml
├── g1_amp_env.py
├── g1_amp_env_cfg.py
├── g1_cfg.py
├── humanoid_amp_env.py
├── humanoid_amp_env_cfg.py
├── motions/
│   ├── G1_dance.npz
│   ├── G1_dance_old.npz
│   ├── G1_walk.npz
│   ├── README.md
│   ├── __init__.py
│   ├── data_convert.py
│   ├── humanoid_dance.npz
│   ├── humanoid_run.npz
│   ├── humanoid_walk.npz
│   ├── motion_loader.py
│   ├── motion_replayer.py
│   ├── motion_viewer.py
│   ├── record_data.py
│   ├── test/
│   │   └── get_joint_name.py
│   ├── update_pelvis_data.py
│   ├── verify_motion.py
│   └── visualize_motion.py
├── play.py
├── pyproject.toml
├── sync_skrl_scripts.sh
├── train.py
└── usd/
    ├── g1_29dof_rev_1_0.urdf
    ├── g1_29dof_rev_1_0.usd
    ├── instanceable_meshes.usd
    └── meshes/
        ├── head_link.STL
        ├── left_ankle_pitch_link.STL
        ├── left_ankle_roll_link.STL
        ├── left_elbow_link.STL
        ├── left_hip_pitch_link.STL
        ├── left_hip_roll_link.STL
        ├── left_hip_yaw_link.STL
        ├── left_knee_link.STL
        ├── left_rubber_hand.STL
        ├── left_shoulder_pitch_link.STL
        ├── left_shoulder_roll_link.STL
        ├── left_shoulder_yaw_link.STL
        ├── left_wrist_pitch_link.STL
        ├── left_wrist_roll_link.STL
        ├── left_wrist_roll_rubber_hand.STL
        ├── left_wrist_yaw_link.STL
        ├── logo_link.STL
        ├── pelvis.STL
        ├── pelvis_contour_link.STL
        ├── right_ankle_pitch_link.STL
        ├── right_ankle_roll_link.STL
        ├── right_elbow_link.STL
        ├── right_hip_pitch_link.STL
        ├── right_hip_roll_link.STL
        ├── right_hip_yaw_link.STL
        ├── right_knee_link.STL
        ├── right_rubber_hand.STL
        ├── right_shoulder_pitch_link.STL
        ├── right_shoulder_roll_link.STL
        ├── right_shoulder_yaw_link.STL
        ├── right_wrist_pitch_link.STL
        ├── right_wrist_roll_link.STL
        ├── right_wrist_roll_rubber_hand.STL
        ├── right_wrist_yaw_link.STL
        ├── torso_link_rev_1_0.STL
        ├── waist_roll_link_rev_1_0.STL
        ├── waist_support_link.STL
        └── waist_yaw_link_rev_1_0.STL
Download .txt
SYMBOL INDEX (80 symbols across 14 files)

FILE: g1_amp_env.py
  class G1AmpEnv (line 22) | class G1AmpEnv(DirectRLEnv):
    method __init__ (line 25) | def __init__(self, cfg: G1AmpEnvCfg, render_mode: str | None = None, *...
    method _setup_scene (line 64) | def _setup_scene(self):
    method _pre_physics_step (line 85) | def _pre_physics_step(self, actions: torch.Tensor):
    method _apply_action (line 89) | def _apply_action(self):
    method _get_observations (line 94) | def _get_observations(self) -> dict:
    method _get_rewards (line 117) | def _get_rewards(self) -> torch.Tensor:
    method _get_dones (line 134) | def _get_dones(self) -> tuple[torch.Tensor, torch.Tensor]:
    method _reset_idx (line 142) | def _reset_idx(self, env_ids: torch.Tensor | None):
    method _reset_strategy_default (line 162) | def _reset_strategy_default(self, env_ids: torch.Tensor) -> tuple[torc...
    method _reset_strategy_random (line 169) | def _reset_strategy_random(
    method collect_reference_motions (line 205) | def collect_reference_motions(self, num_samples: int, current_times: n...
  function quaternion_to_tangent_and_normal (line 236) | def quaternion_to_tangent_and_normal(q: torch.Tensor) -> torch.Tensor:
  function compute_obs (line 247) | def compute_obs(
  function compute_rewards (line 270) | def compute_rewards(

FILE: g1_amp_env_cfg.py
  class G1AmpEnvCfg (line 23) | class G1AmpEnvCfg(DirectRLEnvCfg):
  class G1AmpDanceEnvCfg (line 75) | class G1AmpDanceEnvCfg(G1AmpEnvCfg):
  class G1AmpWalkEnvCfg (line 79) | class G1AmpWalkEnvCfg(G1AmpEnvCfg):

FILE: humanoid_amp_env.py
  class HumanoidAmpEnv (line 22) | class HumanoidAmpEnv(DirectRLEnv):
    method __init__ (line 25) | def __init__(self, cfg: HumanoidAmpEnvCfg, render_mode: str | None = N...
    method _setup_scene (line 52) | def _setup_scene(self):
    method _pre_physics_step (line 77) | def _pre_physics_step(self, actions: torch.Tensor):
    method _apply_action (line 80) | def _apply_action(self):
    method _get_observations (line 84) | def _get_observations(self) -> dict:
    method _get_rewards (line 105) | def _get_rewards(self) -> torch.Tensor:
    method _get_dones (line 108) | def _get_dones(self) -> tuple[torch.Tensor, torch.Tensor]:
    method _reset_idx (line 116) | def _reset_idx(self, env_ids: torch.Tensor | None):
    method _reset_strategy_default (line 136) | def _reset_strategy_default(self, env_ids: torch.Tensor) -> tuple[torc...
    method _reset_strategy_random (line 143) | def _reset_strategy_random(
    method collect_reference_motions (line 179) | def collect_reference_motions(self, num_samples: int, current_times: n...
  function quaternion_to_tangent_and_normal (line 210) | def quaternion_to_tangent_and_normal(q: torch.Tensor) -> torch.Tensor:
  function compute_obs (line 221) | def compute_obs(

FILE: humanoid_amp_env_cfg.py
  class HumanoidAmpEnvCfg (line 24) | class HumanoidAmpEnvCfg(DirectRLEnvCfg):
  class HumanoidAmpDanceEnvCfg (line 78) | class HumanoidAmpDanceEnvCfg(HumanoidAmpEnvCfg):
  class HumanoidAmpRunEnvCfg (line 83) | class HumanoidAmpRunEnvCfg(HumanoidAmpEnvCfg):
  class HumanoidAmpWalkEnvCfg (line 88) | class HumanoidAmpWalkEnvCfg(HumanoidAmpEnvCfg):

FILE: motions/data_convert.py
  function quaternion_inverse (line 63) | def quaternion_inverse(q):
  function quaternion_multiply (line 71) | def quaternion_multiply(q1, q2):
  function compute_angular_velocity (line 81) | def compute_angular_velocity(q_prev, q_next, dt, eps=1e-8):
  function build_pin_robot (line 107) | def build_pin_robot(urdf_path, mesh_dir):
  function main (line 128) | def main():

FILE: motions/motion_loader.py
  class MotionLoader (line 12) | class MotionLoader:
    method __init__ (line 17) | def __init__(self, motion_file: str, device: torch.device) -> None:
    method dof_names (line 51) | def dof_names(self) -> list[str]:
    method body_names (line 56) | def body_names(self) -> list[str]:
    method num_dofs (line 61) | def num_dofs(self) -> int:
    method num_bodies (line 66) | def num_bodies(self) -> int:
    method _interpolate (line 70) | def _interpolate(
    method _slerp (line 101) | def _slerp(
    method _compute_frame_blend (line 161) | def _compute_frame_blend(self, times: np.ndarray) -> tuple[np.ndarray,...
    method sample_times (line 178) | def sample_times(self, num_samples: int, duration: float | None = None...
    method sample (line 198) | def sample(
    method get_dof_index (line 229) | def get_dof_index(self, dof_names: list[str]) -> list[int]:
    method get_body_index (line 247) | def get_body_index(self, body_names: list[str]) -> list[int]:

FILE: motions/motion_viewer.py
  class MotionViewer (line 20) | class MotionViewer:
    method __init__ (line 25) | def __init__(self, motion_file: str, device: torch.device | str = "cpu...
    method _drawing_callback (line 54) | def _drawing_callback(self, frame: int) -> None:
    method show (line 95) | def show(self) -> None:

FILE: motions/record_data.py
  class MotionData (line 34) | class MotionData:
  function smooth_motion_data (line 46) | def smooth_motion_data(motion_data: MotionData, window_size: int = 3) ->...
  class MotionRecorder (line 100) | class MotionRecorder:
    method __init__ (line 103) | def __init__(self, robot, dof_names_to_record: List[str],
    method start_recording (line 136) | def start_recording(self):
    method stop_recording (line 140) | def stop_recording(self):
    method record_frame (line 143) | def record_frame(self, frame_idx: int):
    method get_recorded_data (line 181) | def get_recorded_data(self) -> MotionData | None:
    method save_data (line 223) | def save_data(self, out_file: str, data: MotionData | None = None) -> ...
  function load_motion_data (line 252) | def load_motion_data(file_path: str) -> Optional[MotionData]:

FILE: motions/test/get_joint_name.py
  class TableTopSceneCfg (line 38) | class TableTopSceneCfg(InteractiveSceneCfg):
  function list_robot_prims (line 153) | def list_robot_prims():
  function run_simulator (line 169) | def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveSc...
  function main (line 198) | def main():

FILE: motions/update_pelvis_data.py
  function plot_pelvis_data_2d (line 8) | def plot_pelvis_data_2d(source_data, target_data, source_pelvis_idx, tar...
  function visualize_pelvis_trajectories (line 68) | def visualize_pelvis_trajectories(source_data, target_data, source_pelvi...
  function update_pelvis_data (line 122) | def update_pelvis_data(source_file, target_file, dry_run=False):

FILE: motions/verify_motion.py
  function verify_motion_file (line 33) | def verify_motion_file(npz_file):

FILE: motions/visualize_motion.py
  function visualize_motion (line 30) | def visualize_motion(npz_file):
  function quaternion_to_euler (line 235) | def quaternion_to_euler(q):

FILE: play.py
  function main (line 128) | def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvC...

FILE: train.py
  function main (line 126) | def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvC...
Condensed preview — 77 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (207K chars).
[
  {
    "path": ".gitignore",
    "chars": 55,
    "preview": "**/__pycache__/\n*.pyc\n*.html\nlogs/\noutputs/*\n*.egg-info"
  },
  {
    "path": "LICENSE",
    "chars": 1493,
    "preview": "BSD 3-Clause License\n\nCopyright (c) 2025, Linden\n\nRedistribution and use in source and binary forms, with or without\nmod"
  },
  {
    "path": "README.md",
    "chars": 4313,
    "preview": "![Humanoid AMP](docs/human_amp.png)\n\n---\n\n# Humanoid AMP\n[![IsaacSim](https://img.shields.io/badge/IsaacSim-5.0.0-silver"
  },
  {
    "path": "__init__.py",
    "chars": 2315,
    "preview": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-C"
  },
  {
    "path": "agents/__init__.py",
    "chars": 126,
    "preview": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-C"
  },
  {
    "path": "agents/skrl_dance_amp_cfg.yaml",
    "chars": 2957,
    "preview": "seed: 42\n\n\n# Models are instantiated using skrl's model instantiator utility\n# https://skrl.readthedocs.io/en/latest/api"
  },
  {
    "path": "agents/skrl_g1_dance_amp_cfg.yaml",
    "chars": 3053,
    "preview": "seed: 42\n\n\n# Models are instantiated using skrl's model instantiator utility\n# https://skrl.readthedocs.io/en/latest/api"
  },
  {
    "path": "agents/skrl_g1_walk_amp_cfg.yaml",
    "chars": 2948,
    "preview": "seed: 42\n\n\n# Models are instantiated using skrl's model instantiator utility\n# https://skrl.readthedocs.io/en/latest/api"
  },
  {
    "path": "agents/skrl_run_amp_cfg.yaml",
    "chars": 2955,
    "preview": "seed: 42\n\n\n# Models are instantiated using skrl's model instantiator utility\n# https://skrl.readthedocs.io/en/latest/api"
  },
  {
    "path": "agents/skrl_walk_amp_cfg.yaml",
    "chars": 2956,
    "preview": "seed: 42\n\n\n# Models are instantiated using skrl's model instantiator utility\n# https://skrl.readthedocs.io/en/latest/api"
  },
  {
    "path": "g1_amp_env.py",
    "chars": 12718,
    "preview": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-C"
  },
  {
    "path": "g1_amp_env_cfg.py",
    "chars": 2279,
    "preview": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-C"
  },
  {
    "path": "g1_cfg.py",
    "chars": 3984,
    "preview": "import isaaclab.sim as sim_utils\n\nfrom isaaclab.actuators import ImplicitActuatorCfg\nfrom isaaclab.assets import Articul"
  },
  {
    "path": "humanoid_amp_env.py",
    "chars": 10531,
    "preview": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTOR"
  },
  {
    "path": "humanoid_amp_env_cfg.py",
    "chars": 2651,
    "preview": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTOR"
  },
  {
    "path": "motions/README.md",
    "chars": 4373,
    "preview": "# Motion files\n\nThe motion files are in NumPy-file format that contains data from the skeleton DOFs and bodies that perf"
  },
  {
    "path": "motions/__init__.py",
    "chars": 251,
    "preview": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-C"
  },
  {
    "path": "motions/data_convert.py",
    "chars": 13169,
    "preview": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\"\nHumanoid Motion Data Converter\n\nThis script converts humanoid motion"
  },
  {
    "path": "motions/motion_loader.py",
    "chars": 11754,
    "preview": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-C"
  },
  {
    "path": "motions/motion_replayer.py",
    "chars": 8180,
    "preview": "\"\"\"\nMotion Replayer for G1 Humanoid Robot\n\nThis script replays motion data for the G1 humanoid robot in Isaac Sim.\nIt ca"
  },
  {
    "path": "motions/motion_viewer.py",
    "chars": 5197,
    "preview": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTOR"
  },
  {
    "path": "motions/record_data.py",
    "chars": 9917,
    "preview": "\"\"\"\nMotion Data Recording Tool\n\nThis script provides functionality for recording and managing motion data from robots.\nI"
  },
  {
    "path": "motions/test/get_joint_name.py",
    "chars": 9490,
    "preview": "#!/usr/bin/env python\nimport argparse\nimport os\nimport time\nimport numpy as np\nimport torch\n\n# ========= 解析命令行参数 ======="
  },
  {
    "path": "motions/update_pelvis_data.py",
    "chars": 9726,
    "preview": "import numpy as np\nimport os\nimport argparse\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots"
  },
  {
    "path": "motions/verify_motion.py",
    "chars": 3628,
    "preview": "\"\"\"\nMotion Data Verification Tool\n\nThis script verifies and displays detailed information about motion data stored in np"
  },
  {
    "path": "motions/visualize_motion.py",
    "chars": 8041,
    "preview": "\"\"\"\nMotion Data Visualization Tool\n\nThis script visualizes motion data from npz files, creating interactive plots for:\n-"
  },
  {
    "path": "play.py",
    "chars": 9362,
    "preview": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTOR"
  },
  {
    "path": "pyproject.toml",
    "chars": 650,
    "preview": "[build-system]\nrequires = [\"setuptools>=65.0\"]\nbuild-backend = \"setuptools.build_meta\"\n\n[project]\nname = \"humanoid-amp\"\n"
  },
  {
    "path": "sync_skrl_scripts.sh",
    "chars": 463,
    "preview": "#!/usr/bin/env bash\n# Sync skrl train/play scripts from Isaac Lab releases and reapply local tweaks.\nset -euo pipefail\n\n"
  },
  {
    "path": "train.py",
    "chars": 8868,
    "preview": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTOR"
  },
  {
    "path": "usd/g1_29dof_rev_1_0.urdf",
    "chars": 36227,
    "preview": "<robot name=\"g1_29dof_rev_1_0\">\r\n  <mujoco>\r\n    <compiler meshdir=\"meshes\" discardvisual=\"false\"/>\r\n  </mujoco>\r\n\r\n  <!"
  }
]

// ... and 46 more files (download for full content)

About this extraction

This page contains the full source code of the linden713/humanoid_amp GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 77 files (52.2 MB), approximately 54.2k tokens, and a symbol index with 80 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.

Copied to clipboard!