[
  {
    "path": ".gitignore",
    "content": "**/__pycache__/\n*.pyc\n*.html\nlogs/\noutputs/*\n*.egg-info"
  },
  {
    "path": "LICENSE",
    "content": "BSD 3-Clause License\n\nCopyright (c) 2025, Linden\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n1. Redistributions of source code must retain the above copyright notice, this\n   list of conditions and the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice,\n   this list of conditions and the following disclaimer in the documentation\n   and/or other materials provided with the distribution.\n\n3. Neither the name of the copyright holder nor the names of its\n   contributors may be used to endorse or promote products derived from\n   this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "README.md",
    "content": "![Humanoid AMP](docs/human_amp.png)\n\n---\n\n# Humanoid AMP\n[![IsaacSim](https://img.shields.io/badge/IsaacSim-5.0.0-silver.svg)](https://docs.isaacsim.omniverse.nvidia.com/latest/index.html)\n[![IsaacLab](https://img.shields.io/badge/IsaacLab-2.2.0-silver.svg)](https://isaac-sim.github.io/IsaacLab/main/index.html)\n[![Python](https://img.shields.io/badge/python-3.10-blue.svg)](https://docs.python.org/3/whatsnew/3.10.html)\n[![License](https://img.shields.io/badge/license-BSD--3-yellow.svg)](https://opensource.org/licenses/BSD-3-Clause)\n\n## Prerequisites\n\nThis project requires Isaac Sim and Isaac Lab installed via pip. If you haven't installed them yet, pleanse follow the instructions in [Isaac Lab Installation Guide](https://isaac-sim.github.io/IsaacLab/main/source/setup/installation/pip_installation.html)\n\n\n\n## Setup\n\nNo symbolic links or special setup required! This project works as a standalone Isaac Lab extension.\n\nInstall the repo in editable mode (mirrors the workflow used in unitree_rl_lab):\n\n```bash\npip install -e .\n```\n\nWe copy the train and play script from isaaclab, note you do not need to do it yourself.\n\n```bash\nbash ./sync_skrl_scripts.sh\n```\n\n## Train\n\n```bash\npython -m humanoid_amp.train --task Isaac-G1-AMP-Walk-Direct-v0 --headless --num_envs 4096\n```\n\nor for dance training:\n\n```bash\npython -m humanoid_amp.train --task Isaac-G1-AMP-Dance-Direct-v0 --headless --num_envs 4096\n```\n\nAdditional training options:\n```bash\n# Resume from checkpoint\npython -m humanoid_amp.train --task Isaac-G1-AMP-Walk-Direct-v0 --checkpoint logs/skrl/path/to/checkpoint\n```\n\n## Eval\n\n```bash\npython -m humanoid_amp.play --task Isaac-G1-AMP-Walk-Direct-v0 --num_envs 32 --checkpoint logs/skrl/<run>/checkpoints/Latest.ckpt\n```\n\n## TensorBoard\n\n```bash\npython -m tensorboard.main --logdir logs/skrl/\n```\n\nThen open your browser to http://localhost:6006\n\nWalk training: `master` branch. 
Dance training: **`dance`** branch.\n\nThe usage of each helper script is explained in comments at the beginning of its file.\n\n## Motion Scripts\n- `motion_loader.py` - Load motion data from npz files and provide sampling functionality\n- `motion_viewer.py` - 3D visualization player for motion data\n- `data_convert.py` - Convert CSV motion data to npz format with interpolation and forward kinematics\n- `motion_replayer.py` - Replay motion data in Isaac Sim with optional recording\n- `record_data.py` - Utility classes for recording and managing motion data\n- `verify_motion.py` - Verify and display npz file contents\n- `visualize_motion.py` - Generate interactive HTML charts to visualize motion data\n- `update_pelvis_data.py` - Fix pelvis posture and body-center issues in replayed motion by replacing pelvis data from the source file\n\n## Dataset & URDF\n\n**Note**: The original dataset and URDF files from [Unitree Robotics](https://huggingface.co/datasets/unitreerobotics/LAFAN1_Retargeting_Dataset) have been removed by the official source.\n\nIf you're still looking for the dataset, a third-party mirror is currently available here:  \n[lvhaidong/LAFAN1_Retargeting_Dataset](https://huggingface.co/datasets/lvhaidong/LAFAN1_Retargeting_Dataset)\n\nAlternatively, you can use the data from [AMASS](https://huggingface.co/datasets/ember-lab-berkeley/AMASS_Retargeted_for_G1).\n\n## TODO\n\n- ✅ Dancing motion is working\n- ✅ Test the `test` branch thoroughly and merge it into `dance`\n- ✅ Write a more detailed README to cover the new features and usage\n- [ ] Add clearer comments and explanations in all Python scripts\n\n## Resources\n[![Demo - Walk](https://img.shields.io/badge/Demo-Walk-ff69b4?style=for-the-badge&logo=bilibili)](https://www.bilibili.com/video/BV19cRvYhEL8/?vd_source=5159ce41348cd4fd3d83ef9169dc8dbc)\n[![Demo - Dance 
(Bilibili)](https://img.shields.io/badge/Demo-Dance-ff69b4?style=for-the-badge&logo=bilibili)](https://www.bilibili.com/video/BV1vW36zEEG8/?share_source=copy_web&vd_source=0de36dd681c4f7ffab776ec97939e21f)\n[![Demo - Dance (YouTube)](https://img.shields.io/badge/Demo-Dance-FF0000?style=for-the-badge&logo=youtube)](https://youtu.be/_ItIFkp-Xi4)\n\n[![Documentation](https://img.shields.io/badge/Documentation-DeepWiki-blue?style=for-the-badge&logo=gitbook)](https://deepwiki.com/linden713/humanoid_amp)\n\n**Contributions**, **discussions**, and **stars** are all welcome! ❥(^_-)\n"
  },
  {
    "path": "__init__.py",
    "content": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-Clause\n\n\"\"\"\nAMP Humanoid locomotion environment.\n\"\"\"\n\nimport gymnasium as gym\n\nfrom . import agents\n\n##\n# Register Gym environments.\n##\n\ngym.register(\n    id=\"Isaac-Humanoid-AMP-Dance-Direct-v0\",\n    entry_point=f\"{__name__}.humanoid_amp_env:HumanoidAmpEnv\",\n    disable_env_checker=True,\n    kwargs={\n        \"env_cfg_entry_point\": f\"{__name__}.humanoid_amp_env_cfg:HumanoidAmpDanceEnvCfg\",\n        \"skrl_amp_cfg_entry_point\": f\"{agents.__name__}:skrl_dance_amp_cfg.yaml\",\n        \"skrl_cfg_entry_point\": f\"{agents.__name__}:skrl_dance_amp_cfg.yaml\",\n    },\n)\n\ngym.register(\n    id=\"Isaac-Humanoid-AMP-Run-Direct-v0\",\n    entry_point=f\"{__name__}.humanoid_amp_env:HumanoidAmpEnv\",\n    disable_env_checker=True,\n    kwargs={\n        \"env_cfg_entry_point\": f\"{__name__}.humanoid_amp_env_cfg:HumanoidAmpRunEnvCfg\",\n        \"skrl_amp_cfg_entry_point\": f\"{agents.__name__}:skrl_run_amp_cfg.yaml\",\n        \"skrl_cfg_entry_point\": f\"{agents.__name__}:skrl_run_amp_cfg.yaml\",\n    },\n)\n\ngym.register(\n    id=\"Isaac-Humanoid-AMP-Walk-Direct-v0\",\n    entry_point=f\"{__name__}.humanoid_amp_env:HumanoidAmpEnv\",\n    disable_env_checker=True,\n    kwargs={\n        \"env_cfg_entry_point\": f\"{__name__}.humanoid_amp_env_cfg:HumanoidAmpWalkEnvCfg\",\n        \"skrl_amp_cfg_entry_point\": f\"{agents.__name__}:skrl_walk_amp_cfg.yaml\",\n        \"skrl_cfg_entry_point\": f\"{agents.__name__}:skrl_walk_amp_cfg.yaml\",\n    },\n)\n\ngym.register(\n    id=\"Isaac-G1-AMP-Dance-Direct-v0\",\n    entry_point=f\"{__name__}.g1_amp_env:G1AmpEnv\",\n    disable_env_checker=True,\n    kwargs={\n        \"env_cfg_entry_point\": f\"{__name__}.g1_amp_env_cfg:G1AmpDanceEnvCfg\",\n        \"skrl_amp_cfg_entry_point\": f\"{agents.__name__}:skrl_g1_dance_amp_cfg.yaml\",\n        
\"skrl_cfg_entry_point\": f\"{agents.__name__}:skrl_g1_dance_amp_cfg.yaml\",\n    },\n)\n\ngym.register(\n    id=\"Isaac-G1-AMP-Walk-Direct-v0\",\n    entry_point=f\"{__name__}.g1_amp_env:G1AmpEnv\",\n    disable_env_checker=True,\n    kwargs={\n        \"env_cfg_entry_point\": f\"{__name__}.g1_amp_env_cfg:G1AmpWalkEnvCfg\",\n        \"skrl_amp_cfg_entry_point\": f\"{agents.__name__}:skrl_g1_walk_amp_cfg.yaml\",\n        \"skrl_cfg_entry_point\": f\"{agents.__name__}:skrl_g1_walk_amp_cfg.yaml\",\n    },\n)"
  },
  {
    "path": "agents/__init__.py",
    "content": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-Clause\n"
  },
  {
    "path": "agents/skrl_dance_amp_cfg.yaml",
    "content": "seed: 42\n\n\n# Models are instantiated using skrl's model instantiator utility\n# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html\nmodels:\n  separate: True\n  policy:  # see gaussian_model parameters\n    class: GaussianMixin\n    clip_actions: False\n    clip_log_std: True\n    min_log_std: -20.0\n    max_log_std: 2.0\n    initial_log_std: -2.9\n    fixed_log_std: True\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ACTIONS\n  value:  # see deterministic_model parameters\n    class: DeterministicMixin\n    clip_actions: False\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ONE\n  discriminator:  # see deterministic_model parameters\n    class: DeterministicMixin\n    clip_actions: False\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ONE\n\n\n# Rollout memory\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nmemory:\n  class: RandomMemory\n  memory_size: -1  # automatically determined (same as agent:rollouts)\n\n# AMP memory (reference motion dataset)\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nmotion_dataset:\n  class: RandomMemory\n  memory_size: 200000\n\n# AMP memory (preventing discriminator overfitting)\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nreply_buffer:\n  class: RandomMemory\n  memory_size: 1000000\n\n\n# AMP agent configuration (field names are from AMP_DEFAULT_CONFIG)\n# https://skrl.readthedocs.io/en/latest/api/agents/amp.html\nagent:\n  class: AMP\n  rollouts: 16\n  learning_epochs: 6\n  mini_batches: 2\n  discount_factor: 0.99\n  lambda: 0.95\n  learning_rate: 5.0e-05\n  learning_rate_scheduler: null\n  learning_rate_scheduler_kwargs: null\n  state_preprocessor: RunningStandardScaler\n  state_preprocessor_kwargs: 
null\n  value_preprocessor: RunningStandardScaler\n  value_preprocessor_kwargs: null\n  amp_state_preprocessor: RunningStandardScaler\n  amp_state_preprocessor_kwargs: null\n  random_timesteps: 0\n  learning_starts: 0\n  grad_norm_clip: 0.0\n  ratio_clip: 0.2\n  value_clip: 0.2\n  clip_predicted_values: True\n  entropy_loss_scale: 0.0\n  value_loss_scale: 2.5\n  discriminator_loss_scale: 5.0\n  amp_batch_size: 512\n  task_reward_weight: 0.0\n  style_reward_weight: 1.0\n  discriminator_batch_size: 4096\n  discriminator_reward_scale: 2.0\n  discriminator_logit_regularization_scale: 0.05\n  discriminator_gradient_penalty_scale: 5.0\n  discriminator_weight_decay_scale: 1.0e-04\n  # rewards_shaper_scale: 1.0\n  time_limit_bootstrap: False\n  # logging and checkpoint\n  experiment:\n    directory: \"humanoid_amp_dance\"\n    experiment_name: \"\"\n    write_interval: auto\n    checkpoint_interval: auto\n\n\n# Sequential trainer\n# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html\ntrainer:\n  class: SequentialTrainer\n  timesteps: 80000\n  environment_info: log\n"
  },
  {
    "path": "agents/skrl_g1_dance_amp_cfg.yaml",
    "content": "seed: 42\n\n\n# Models are instantiated using skrl's model instantiator utility\n# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html\nmodels:\n  separate: True\n  policy:  # see gaussian_model parameters\n    class: GaussianMixin\n    clip_actions: False\n    clip_log_std: True\n    min_log_std: -20.0\n    max_log_std: 2.0\n    initial_log_std: -2.9\n    fixed_log_std: True\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ACTIONS\n  value:  # see deterministic_model parameters\n    class: DeterministicMixin\n    clip_actions: False\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ONE\n  discriminator:  # see deterministic_model parameters\n    class: DeterministicMixin\n    clip_actions: False\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ONE\n\n\n# Rollout memory\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nmemory:\n  class: RandomMemory\n  memory_size: -1  # automatically determined (same as agent:rollouts)\n\n# AMP memory (reference motion dataset)\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nmotion_dataset:\n  class: RandomMemory\n  memory_size: 200000\n\n# AMP memory (preventing discriminator overfitting)\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nreply_buffer:\n  class: RandomMemory\n  memory_size: 1000000\n\n\n# AMP agent configuration (field names are from AMP_DEFAULT_CONFIG)\n# https://skrl.readthedocs.io/en/latest/api/agents/amp.html\nagent:\n  class: AMP\n  rollouts: 16\n  learning_epochs: 6\n  mini_batches: 2\n  discount_factor: 0.99\n  lambda: 0.95\n  learning_rate: 5.0e-05\n  # learning_rate_scheduler: null\n  # learning_rate_scheduler_kwargs: null\n  learning_rate_scheduler: KLAdaptiveLR\n  
learning_rate_scheduler_kwargs: \n    kl_threshold: 0.008\n  state_preprocessor: RunningStandardScaler\n  state_preprocessor_kwargs: null\n  value_preprocessor: RunningStandardScaler\n  value_preprocessor_kwargs: null\n  amp_state_preprocessor: RunningStandardScaler\n  amp_state_preprocessor_kwargs: null\n  random_timesteps: 0\n  learning_starts: 0\n  grad_norm_clip: 0.0\n  ratio_clip: 0.2\n  value_clip: 0.2\n  clip_predicted_values: True\n  entropy_loss_scale: 0.02\n  value_loss_scale: 2.5\n  discriminator_loss_scale: 5.0\n  amp_batch_size: 512\n  task_reward_weight: 0\n  style_reward_weight: 1.0\n  discriminator_batch_size: 4096\n  discriminator_reward_scale: 2.0\n  discriminator_logit_regularization_scale: 0.05\n  discriminator_gradient_penalty_scale: 5.0\n  discriminator_weight_decay_scale: 1.0e-04\n  # rewards_shaper_scale: 1.0\n  time_limit_bootstrap: False\n  # logging and checkpoint\n  experiment:\n    directory: \"g1_amp_dance\"\n    experiment_name: \"\"\n    write_interval: auto\n    checkpoint_interval: auto\n\n\n# Sequential trainer\n# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html\ntrainer:\n  class: SequentialTrainer\n  timesteps: 50000\n  environment_info: log\n"
  },
  {
    "path": "agents/skrl_g1_walk_amp_cfg.yaml",
    "content": "seed: 42\n\n\n# Models are instantiated using skrl's model instantiator utility\n# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html\nmodels:\n  separate: True\n  policy:  # see gaussian_model parameters\n    class: GaussianMixin\n    clip_actions: False\n    clip_log_std: True\n    min_log_std: -20.0\n    max_log_std: 2.0\n    initial_log_std: -2.9\n    fixed_log_std: True\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ACTIONS\n  value:  # see deterministic_model parameters\n    class: DeterministicMixin\n    clip_actions: False\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ONE\n  discriminator:  # see deterministic_model parameters\n    class: DeterministicMixin\n    clip_actions: False\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ONE\n\n\n# Rollout memory\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nmemory:\n  class: RandomMemory\n  memory_size: -1  # automatically determined (same as agent:rollouts)\n\n# AMP memory (reference motion dataset)\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nmotion_dataset:\n  class: RandomMemory\n  memory_size: 200000\n\n# AMP memory (preventing discriminator overfitting)\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nreply_buffer:\n  class: RandomMemory\n  memory_size: 1000000\n\n\n# AMP agent configuration (field names are from AMP_DEFAULT_CONFIG)\n# https://skrl.readthedocs.io/en/latest/api/agents/amp.html\nagent:\n  class: AMP\n  rollouts: 16\n  learning_epochs: 6\n  mini_batches: 2\n  discount_factor: 0.99\n  lambda: 0.95\n  learning_rate: 5.0e-05\n  learning_rate_scheduler: null\n  learning_rate_scheduler_kwargs: null\n  state_preprocessor: RunningStandardScaler\n  state_preprocessor_kwargs: 
null\n  value_preprocessor: RunningStandardScaler\n  value_preprocessor_kwargs: null\n  amp_state_preprocessor: RunningStandardScaler\n  amp_state_preprocessor_kwargs: null\n  random_timesteps: 0\n  learning_starts: 0\n  grad_norm_clip: 0.0\n  ratio_clip: 0.2\n  value_clip: 0.2\n  clip_predicted_values: True\n  entropy_loss_scale: 0.0\n  value_loss_scale: 2.5\n  discriminator_loss_scale: 5.0\n  amp_batch_size: 512\n  task_reward_weight: 0\n  style_reward_weight: 1.0\n  discriminator_batch_size: 4096\n  discriminator_reward_scale: 2.0\n  discriminator_logit_regularization_scale: 0.05\n  discriminator_gradient_penalty_scale: 5.0\n  discriminator_weight_decay_scale: 1.0e-04\n  # rewards_shaper_scale: 1.0\n  time_limit_bootstrap: False\n  # logging and checkpoint\n  experiment:\n    directory: \"g1_amp_walk\"\n    experiment_name: \"\"\n    write_interval: auto\n    checkpoint_interval: auto\n\n\n# Sequential trainer\n# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html\ntrainer:\n  class: SequentialTrainer\n  timesteps: 50000\n  environment_info: log\n"
  },
  {
    "path": "agents/skrl_run_amp_cfg.yaml",
    "content": "seed: 42\n\n\n# Models are instantiated using skrl's model instantiator utility\n# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html\nmodels:\n  separate: True\n  policy:  # see gaussian_model parameters\n    class: GaussianMixin\n    clip_actions: False\n    clip_log_std: True\n    min_log_std: -20.0\n    max_log_std: 2.0\n    initial_log_std: -2.9\n    fixed_log_std: True\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ACTIONS\n  value:  # see deterministic_model parameters\n    class: DeterministicMixin\n    clip_actions: False\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ONE\n  discriminator:  # see deterministic_model parameters\n    class: DeterministicMixin\n    clip_actions: False\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ONE\n\n\n# Rollout memory\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nmemory:\n  class: RandomMemory\n  memory_size: -1  # automatically determined (same as agent:rollouts)\n\n# AMP memory (reference motion dataset)\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nmotion_dataset:\n  class: RandomMemory\n  memory_size: 200000\n\n# AMP memory (preventing discriminator overfitting)\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nreply_buffer:\n  class: RandomMemory\n  memory_size: 1000000\n\n\n# AMP agent configuration (field names are from AMP_DEFAULT_CONFIG)\n# https://skrl.readthedocs.io/en/latest/api/agents/amp.html\nagent:\n  class: AMP\n  rollouts: 16\n  learning_epochs: 6\n  mini_batches: 2\n  discount_factor: 0.99\n  lambda: 0.95\n  learning_rate: 5.0e-05\n  learning_rate_scheduler: null\n  learning_rate_scheduler_kwargs: null\n  state_preprocessor: RunningStandardScaler\n  state_preprocessor_kwargs: 
null\n  value_preprocessor: RunningStandardScaler\n  value_preprocessor_kwargs: null\n  amp_state_preprocessor: RunningStandardScaler\n  amp_state_preprocessor_kwargs: null\n  random_timesteps: 0\n  learning_starts: 0\n  grad_norm_clip: 0.0\n  ratio_clip: 0.2\n  value_clip: 0.2\n  clip_predicted_values: True\n  entropy_loss_scale: 0.0\n  value_loss_scale: 2.5\n  discriminator_loss_scale: 5.0\n  amp_batch_size: 512\n  task_reward_weight: 0.0\n  style_reward_weight: 1.0\n  discriminator_batch_size: 4096\n  discriminator_reward_scale: 2.0\n  discriminator_logit_regularization_scale: 0.05\n  discriminator_gradient_penalty_scale: 5.0\n  discriminator_weight_decay_scale: 1.0e-04\n  # rewards_shaper_scale: 1.0\n  time_limit_bootstrap: False\n  # logging and checkpoint\n  experiment:\n    directory: \"humanoid_amp_run\"\n    experiment_name: \"\"\n    write_interval: auto\n    checkpoint_interval: auto\n\n\n# Sequential trainer\n# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html\ntrainer:\n  class: SequentialTrainer\n  timesteps: 80000\n  environment_info: log\n"
  },
  {
    "path": "agents/skrl_walk_amp_cfg.yaml",
    "content": "seed: 42\n\n\n# Models are instantiated using skrl's model instantiator utility\n# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html\nmodels:\n  separate: True\n  policy:  # see gaussian_model parameters\n    class: GaussianMixin\n    clip_actions: False\n    clip_log_std: True\n    min_log_std: -20.0\n    max_log_std: 2.0\n    initial_log_std: -2.9\n    fixed_log_std: True\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ACTIONS\n  value:  # see deterministic_model parameters\n    class: DeterministicMixin\n    clip_actions: False\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ONE\n  discriminator:  # see deterministic_model parameters\n    class: DeterministicMixin\n    clip_actions: False\n    network:\n      - name: net\n        input: STATES\n        layers: [1024, 512]\n        activations: relu\n    output: ONE\n\n\n# Rollout memory\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nmemory:\n  class: RandomMemory\n  memory_size: -1  # automatically determined (same as agent:rollouts)\n\n# AMP memory (reference motion dataset)\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nmotion_dataset:\n  class: RandomMemory\n  memory_size: 200000\n\n# AMP memory (preventing discriminator overfitting)\n# https://skrl.readthedocs.io/en/latest/api/memories/random.html\nreply_buffer:\n  class: RandomMemory\n  memory_size: 1000000\n\n\n# AMP agent configuration (field names are from AMP_DEFAULT_CONFIG)\n# https://skrl.readthedocs.io/en/latest/api/agents/amp.html\nagent:\n  class: AMP\n  rollouts: 16\n  learning_epochs: 6\n  mini_batches: 2\n  discount_factor: 0.99\n  lambda: 0.95\n  learning_rate: 5.0e-05\n  learning_rate_scheduler: null\n  learning_rate_scheduler_kwargs: null\n  state_preprocessor: RunningStandardScaler\n  state_preprocessor_kwargs: 
null\n  value_preprocessor: RunningStandardScaler\n  value_preprocessor_kwargs: null\n  amp_state_preprocessor: RunningStandardScaler\n  amp_state_preprocessor_kwargs: null\n  random_timesteps: 0\n  learning_starts: 0\n  grad_norm_clip: 0.0\n  ratio_clip: 0.2\n  value_clip: 0.2\n  clip_predicted_values: True\n  entropy_loss_scale: 0.0\n  value_loss_scale: 2.5\n  discriminator_loss_scale: 5.0\n  amp_batch_size: 512\n  task_reward_weight: 0.0\n  style_reward_weight: 1.0\n  discriminator_batch_size: 4096\n  discriminator_reward_scale: 2.0\n  discriminator_logit_regularization_scale: 0.05\n  discriminator_gradient_penalty_scale: 5.0\n  discriminator_weight_decay_scale: 1.0e-04\n  # rewards_shaper_scale: 1.0\n  time_limit_bootstrap: False\n  # logging and checkpoint\n  experiment:\n    directory: \"humanoid_amp_walk\"\n    experiment_name: \"\"\n    write_interval: auto\n    checkpoint_interval: auto\n\n\n# Sequential trainer\n# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html\ntrainer:\n  class: SequentialTrainer\n  timesteps: 80000\n  environment_info: log\n"
  },
  {
    "path": "g1_amp_env.py",
    "content": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-Clause\n\nfrom __future__ import annotations\n\nimport gymnasium as gym\nimport numpy as np\nimport torch\n\nimport isaaclab.sim as sim_utils\nfrom isaaclab.assets import Articulation\nfrom isaaclab.envs import DirectRLEnv\nfrom isaaclab.sim.spawners.from_files import GroundPlaneCfg, spawn_ground_plane\nfrom isaaclab.utils.math import quat_apply\n\nfrom .g1_amp_env_cfg import G1AmpEnvCfg\nfrom .motions import MotionLoader\n\n\nclass G1AmpEnv(DirectRLEnv):\n    cfg: G1AmpEnvCfg\n\n    def __init__(self, cfg: G1AmpEnvCfg, render_mode: str | None = None, **kwargs):\n        super().__init__(cfg, render_mode, **kwargs)\n\n        # action offset and scale\n        dof_lower_limits = self.robot.data.soft_joint_pos_limits[0, :, 0]\n        dof_upper_limits = self.robot.data.soft_joint_pos_limits[0, :, 1]\n        self.action_offset = 0.5 * (dof_upper_limits + dof_lower_limits)\n        self.action_scale = dof_upper_limits - dof_lower_limits\n\n\n        # load motion\n        self._motion_loader = MotionLoader(motion_file=self.cfg.motion_file, device=self.device)\n\n        # DOF and key body indexes  \n        key_body_names = [ \"left_shoulder_pitch_link\",\n            \"right_shoulder_pitch_link\",\n            \"left_elbow_link\",\n            \"right_elbow_link\",\n            \"right_hip_yaw_link\",\n            \"left_hip_yaw_link\",\n            \"right_rubber_hand\",\n            \"left_rubber_hand\",\n            \"right_ankle_roll_link\",\n            \"left_ankle_roll_link\"]\n\n        self.ref_body_index = self.robot.data.body_names.index(self.cfg.reference_body)\n        self.key_body_indexes = [self.robot.data.body_names.index(name) for name in key_body_names]\n        # Used to for reset strategy\n        self.motion_dof_indexes = self._motion_loader.get_dof_index(self.robot.data.joint_names)\n        
self.motion_ref_body_index = self._motion_loader.get_body_index([self.cfg.reference_body])[0]\n        self.motion_key_body_indexes = self._motion_loader.get_body_index(key_body_names)\n\n        # reconfigure AMP observation space according to the number of observations and create the buffer\n        self.amp_observation_size = self.cfg.num_amp_observations * self.cfg.amp_observation_space\n        self.amp_observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(self.amp_observation_size,))\n        self.amp_observation_buffer = torch.zeros(\n            (self.num_envs, self.cfg.num_amp_observations, self.cfg.amp_observation_space), device=self.device\n        )\n\n    def _setup_scene(self):\n        self.robot = Articulation(self.cfg.robot)\n        # add ground plane\n        spawn_ground_plane(\n            prim_path=\"/World/ground\",\n            cfg=GroundPlaneCfg(\n                physics_material=sim_utils.RigidBodyMaterialCfg(\n                    static_friction=1.0,\n                    dynamic_friction=1.0,\n                    restitution=0.0,\n                ),\n            ),\n        )\n        # clone and replicate\n        self.scene.clone_environments(copy_from_source=False)\n        # add articulation to scene\n        self.scene.articulations[\"robot\"] = self.robot\n        # add lights\n        light_cfg = sim_utils.DomeLightCfg(intensity=2000.0, color=(0.75, 0.75, 0.75))\n        light_cfg.func(\"/World/Light\", light_cfg)\n\n    def _pre_physics_step(self, actions: torch.Tensor):\n        self.actions = actions.clone()\n        # self.pre_actions = actions.clone()\n\n    def _apply_action(self):\n        # self.pre_actions = self.actions.clone()\n        target = self.action_offset + self.action_scale * self.actions\n        self.robot.set_joint_position_target(target)\n\n    def _get_observations(self) -> dict:\n        # build task observation\n        obs = compute_obs(\n            self.robot.data.joint_pos,\n            
self.robot.data.joint_vel,\n            self.robot.data.body_pos_w[:, self.ref_body_index],\n            self.robot.data.body_quat_w[:, self.ref_body_index],\n            self.robot.data.body_lin_vel_w[:, self.ref_body_index],\n            self.robot.data.body_ang_vel_w[:, self.ref_body_index],\n            self.robot.data.body_pos_w[:, self.key_body_indexes],\n        )\n\n        # update AMP observation history\n        for i in reversed(range(self.cfg.num_amp_observations - 1)):\n            self.amp_observation_buffer[:, i + 1] = self.amp_observation_buffer[:, i]\n        # build AMP observation\n        self.amp_observation_buffer[:, 0] = obs.clone()\n        self.extras = {\"amp_obs\": self.amp_observation_buffer.view(-1, self.amp_observation_size)}\n\n        return {\"policy\": obs}\n\n    # def _get_rewards(self) -> torch.Tensor:\n    #     return torch.ones((self.num_envs,), dtype=torch.float32, device=self.sim.device)\n    def _get_rewards(self) -> torch.Tensor:\n        total_reward, reward_log = compute_rewards(\n            self.cfg.rew_termination,\n            self.cfg.rew_action_l2,\n            self.cfg.rew_joint_pos_limits,\n            self.cfg.rew_joint_acc_l2,\n            self.cfg.rew_joint_vel_l2,\n            self.reset_terminated,\n            self.actions,\n            self.robot.data.joint_pos,\n            self.robot.data.soft_joint_pos_limits,\n            self.robot.data.joint_acc,\n            self.robot.data.joint_vel,    \n        )\n        self.extras[\"log\"] = reward_log\n        return total_reward\n\n    def _get_dones(self) -> tuple[torch.Tensor, torch.Tensor]:\n        time_out = self.episode_length_buf >= self.max_episode_length - 1\n        if self.cfg.early_termination:\n            died = self.robot.data.body_pos_w[:, self.ref_body_index, 2] < self.cfg.termination_height\n        else:\n            died = torch.zeros_like(time_out)\n        return died, time_out\n\n    def _reset_idx(self, env_ids: torch.Tensor | 
None):\n        if env_ids is None or len(env_ids) == self.num_envs:\n            env_ids = self.robot._ALL_INDICES\n        self.robot.reset(env_ids)\n        super()._reset_idx(env_ids)\n\n        if self.cfg.reset_strategy == \"default\":\n            root_state, joint_pos, joint_vel = self._reset_strategy_default(env_ids)\n        elif self.cfg.reset_strategy.startswith(\"random\"):\n            start = \"start\" in self.cfg.reset_strategy\n            root_state, joint_pos, joint_vel = self._reset_strategy_random(env_ids, start)\n        else:\n            raise ValueError(f\"Unknown reset strategy: {self.cfg.reset_strategy}\")\n\n        self.robot.write_root_link_pose_to_sim(root_state[:, :7], env_ids)\n        self.robot.write_root_com_velocity_to_sim(root_state[:, 7:], env_ids)\n        self.robot.write_joint_state_to_sim(joint_pos, joint_vel, None, env_ids)\n\n    # reset strategies\n\n    def _reset_strategy_default(self, env_ids: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n        root_state = self.robot.data.default_root_state[env_ids].clone()\n        root_state[:, :3] += self.scene.env_origins[env_ids]\n        joint_pos = self.robot.data.default_joint_pos[env_ids].clone()\n        joint_vel = self.robot.data.default_joint_vel[env_ids].clone()\n        return root_state, joint_pos, joint_vel\n\n    def _reset_strategy_random(\n        self, env_ids: torch.Tensor, start: bool = False\n    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n        # sample random motion times (or zeros if start is True)\n        num_samples = env_ids.shape[0]\n        times = np.zeros(num_samples) if start else self._motion_loader.sample_times(num_samples)\n        # sample random motions\n        (\n            dof_positions,\n            dof_velocities,\n            body_positions,\n            body_rotations,\n            body_linear_velocities,\n            body_angular_velocities,\n        ) = 
self._motion_loader.sample(num_samples=num_samples, times=times)\n\n        # get root transforms (the humanoid pelvis)\n        motion_pelvis_index = self._motion_loader.get_body_index([\"pelvis\"])[0]\n        root_state = self.robot.data.default_root_state[env_ids].clone()\n        root_state[:, 0:3] = body_positions[:, motion_pelvis_index] + self.scene.env_origins[env_ids]\n        root_state[:, 2] += 0.05  # lift the humanoid slightly to avoid collisions with the ground\n        root_state[:, 3:7] = body_rotations[:, motion_pelvis_index]\n        root_state[:, 7:10] = body_linear_velocities[:, motion_pelvis_index]\n        root_state[:, 10:13] = body_angular_velocities[:, motion_pelvis_index]\n        # get DOFs state\n        dof_pos = dof_positions[:, self.motion_dof_indexes]\n        dof_vel = dof_velocities[:, self.motion_dof_indexes]\n\n        # update AMP observation\n        amp_observations = self.collect_reference_motions(num_samples, times)\n        self.amp_observation_buffer[env_ids] = amp_observations.view(num_samples, self.cfg.num_amp_observations, -1)\n\n        return root_state, dof_pos, dof_vel\n\n    # env methods\n\n    def collect_reference_motions(self, num_samples: int, current_times: np.ndarray | None = None) -> torch.Tensor:\n        # sample random motion times (or use the one specified)\n        if current_times is None:\n            current_times = self._motion_loader.sample_times(num_samples)\n        times = (\n            np.expand_dims(current_times, axis=-1)\n            - self._motion_loader.dt * np.arange(0, self.cfg.num_amp_observations)\n        ).flatten()\n        # get motions\n        (\n            dof_positions,\n            dof_velocities,\n            body_positions,\n            body_rotations,\n            body_linear_velocities,\n            body_angular_velocities,\n        ) = self._motion_loader.sample(num_samples=num_samples, times=times)\n        # compute AMP observation\n        amp_observation = compute_obs(\n 
           dof_positions[:, self.motion_dof_indexes],\n            dof_velocities[:, self.motion_dof_indexes],\n            body_positions[:, self.motion_ref_body_index],\n            body_rotations[:, self.motion_ref_body_index],\n            body_linear_velocities[:, self.motion_ref_body_index],\n            body_angular_velocities[:, self.motion_ref_body_index],\n            body_positions[:, self.motion_key_body_indexes],\n        )\n        return amp_observation.view(-1, self.amp_observation_size)\n\n\n@torch.jit.script\ndef quaternion_to_tangent_and_normal(q: torch.Tensor) -> torch.Tensor:\n    ref_tangent = torch.zeros_like(q[..., :3])\n    ref_normal = torch.zeros_like(q[..., :3])\n    ref_tangent[..., 0] = 1\n    ref_normal[..., -1] = 1\n    tangent = quat_apply(q, ref_tangent)\n    normal = quat_apply(q, ref_normal)\n    return torch.cat([tangent, normal], dim=len(tangent.shape) - 1)\n\n\n@torch.jit.script\ndef compute_obs(\n    dof_positions: torch.Tensor,\n    dof_velocities: torch.Tensor,\n    root_positions: torch.Tensor,\n    root_rotations: torch.Tensor,\n    root_linear_velocities: torch.Tensor,\n    root_angular_velocities: torch.Tensor,\n    key_body_positions: torch.Tensor,\n) -> torch.Tensor:\n    obs = torch.cat(\n        (\n            dof_positions,\n            dof_velocities,\n            root_positions[:, 2:3],  # root body height\n            quaternion_to_tangent_and_normal(root_rotations),\n            root_linear_velocities,\n            root_angular_velocities,\n            (key_body_positions - root_positions.unsqueeze(-2)).view(key_body_positions.shape[0], -1),\n        ),\n        dim=-1,\n    )\n    return obs\n\n\n@torch.jit.script\ndef compute_rewards(\n    rew_scale_termination: float,\n    rew_scale_action_l2: float,\n    rew_scale_joint_pos_limits: float,\n    rew_scale_joint_acc_l2: float,\n    rew_scale_joint_vel_l2: float,\n    reset_terminated: torch.Tensor,\n    actions: torch.Tensor,\n    joint_pos: torch.Tensor,\n    
soft_joint_pos_limits: torch.Tensor,\n    joint_acc: torch.Tensor,\n    joint_vel: torch.Tensor,\n):\n    rew_termination = rew_scale_termination * reset_terminated.float()\n    rew_action_l2 = rew_scale_action_l2 * torch.sum(torch.square(actions), dim=1)\n\n    out_of_limits = -(joint_pos - soft_joint_pos_limits[:, :, 0]).clip(max=0.0)\n    out_of_limits += (joint_pos - soft_joint_pos_limits[:, :, 1]).clip(min=0.0)\n    rew_joint_pos_limits = rew_scale_joint_pos_limits * torch.sum(out_of_limits, dim=1)\n\n    rew_joint_acc_l2 = rew_scale_joint_acc_l2 * torch.sum(torch.square(joint_acc), dim=1)\n    rew_joint_vel_l2 = rew_scale_joint_vel_l2 * torch.sum(torch.square(joint_vel), dim=1)\n    total_reward = rew_termination + rew_action_l2 + rew_joint_pos_limits + rew_joint_acc_l2 + rew_joint_vel_l2\n\n    log = {\n        \"rew_termination\": rew_termination.mean(),\n        \"rew_action_l2\": rew_action_l2.mean(),\n        \"rew_joint_pos_limits\": rew_joint_pos_limits.mean(),\n        \"rew_joint_acc_l2\": rew_joint_acc_l2.mean(),\n        \"rew_joint_vel_l2\": rew_joint_vel_l2.mean(),\n    }\n    return total_reward, log\n"
  },
  {
    "path": "g1_amp_env_cfg.py",
    "content": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-Clause\n\nfrom __future__ import annotations\n\nimport os\nfrom dataclasses import MISSING\nfrom .g1_cfg import G1_CFG\n\n\nfrom isaaclab.envs import DirectRLEnvCfg\nfrom isaaclab.scene import InteractiveSceneCfg\nfrom isaaclab.sim import PhysxCfg, SimulationCfg\nfrom isaaclab.utils import configclass\nfrom isaaclab.assets import ArticulationCfg\n\nMOTIONS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"motions\")\n\n\n@configclass\nclass G1AmpEnvCfg(DirectRLEnvCfg):\n    \"\"\"Humanoid AMP environment config (base class).\"\"\"\n    \n    # reward\n    rew_termination = -0\n    rew_action_l2 = -0.00\n    rew_joint_pos_limits = -0\n    rew_joint_acc_l2 =-0.00\n    rew_joint_vel_l2= -0.00\n\n    # env\n    episode_length_s = 10.0\n    decimation = 2\n\n    # spaces\n    observation_space =  71 + 3 * 10 #TODO\n    action_space = 29\n    state_space = 0\n    num_amp_observations = 2\n    amp_observation_space = 71 + 3 * 10\n\n    early_termination = True\n    termination_height = 0.5\n\n    motion_file: str = MISSING\n    reference_body = \"pelvis\"\n    reset_strategy = \"random\"  # default, random, random-start\n    \"\"\"Strategy to be followed when resetting each environment (humanoid's pose and joint states).\n\n    * default: pose and joint states are set to the initial state of the asset.\n    * random: pose and joint states are set by sampling motions at random, uniform times.\n    * random-start: pose and joint states are set by sampling motion at the start (time zero).\n    \"\"\"\n\n    # simulation\n    sim: SimulationCfg = SimulationCfg(\n        dt=1 / 60,\n        render_interval=decimation,\n        physx=PhysxCfg(\n            gpu_found_lost_pairs_capacity=2**23,\n            gpu_total_aggregate_pairs_capacity=2**23,\n        ),\n    )\n\n    # scene\n    scene: InteractiveSceneCfg = 
InteractiveSceneCfg(num_envs=4096, env_spacing=4.0, replicate_physics=True)\n\n    # robot\n    robot: ArticulationCfg = G1_CFG.replace(prim_path=\"/World/envs/env_.*/Robot\")\n\n\n@configclass\nclass G1AmpDanceEnvCfg(G1AmpEnvCfg):\n    motion_file = os.path.join(MOTIONS_DIR, \"G1_dance.npz\")\n\n\n@configclass\nclass G1AmpWalkEnvCfg(G1AmpEnvCfg):\n    motion_file = os.path.join(MOTIONS_DIR, \"G1_walk.npz\")\n"
  },
  {
    "path": "g1_cfg.py",
    "content": "import isaaclab.sim as sim_utils\n\nfrom isaaclab.actuators import ImplicitActuatorCfg\nfrom isaaclab.assets import ArticulationCfg\nimport os\n\nCURRENT_DIR = os.path.dirname(os.path.abspath(__file__))\n\n\nG1_CFG = ArticulationCfg(\n        prim_path=\"{ENV_REGEX_NS}/Robot\",\n        spawn=sim_utils.UsdFileCfg(\n            usd_path=os.path.join(CURRENT_DIR, \"usd/g1_29dof_rev_1_0.usd\"),\n            activate_contact_sensors=True,\n            rigid_props=sim_utils.RigidBodyPropertiesCfg(\n                disable_gravity=False,\n                retain_accelerations=False,\n                linear_damping=0.0,\n                angular_damping=0.0,\n                max_linear_velocity=3.0,\n                max_angular_velocity=3.0,\n                max_depenetration_velocity=10.0,\n            ),\n            articulation_props=sim_utils.ArticulationRootPropertiesCfg(\n                enabled_self_collisions=False, solver_position_iteration_count=4, solver_velocity_iteration_count=0\n            ),\n        ),\n        init_state=ArticulationCfg.InitialStateCfg(\n            pos=(0.0, 0.0, 0.8),\n            joint_pos={\n                \".*_hip_pitch_joint\": -0.20,\n                \".*_knee_joint\": 0.42,\n                \".*_ankle_pitch_joint\": -0.23,\n                # \".*_elbow_pitch_joint\": 0.87,\n                \"left_shoulder_roll_joint\": 0.16,\n                \"left_shoulder_pitch_joint\": 0.35,\n                \"right_shoulder_roll_joint\": -0.16,\n                \"right_shoulder_pitch_joint\": 0.35,\n            },\n            joint_vel={\".*\": 0.0},\n        ),\n        soft_joint_pos_limit_factor=0.9,\n        actuators={\n            \"legs\": ImplicitActuatorCfg(\n                joint_names_expr=[\n                    \".*_hip_yaw_joint\",\n                    \".*_hip_roll_joint\",\n                    \".*_hip_pitch_joint\",\n                    \".*_knee_joint\",\n                    \"waist_yaw_joint\",\n             
       \"waist_roll_joint\",\n                    \"waist_pitch_joint\",\n                ],\n                effort_limit=300,\n                velocity_limit=100.0,\n                stiffness={\n                    \".*_hip_yaw_joint\": 150.0,\n                    \".*_hip_roll_joint\": 150.0,\n                    \".*_hip_pitch_joint\": 200.0,\n                    \".*_knee_joint\": 200.0,\n                    \"waist_yaw_joint\": 200.0,\n                    \"waist_roll_joint\": 200.0,\n                    \"waist_pitch_joint\": 200.0,\n                },\n                damping={\n                    \".*_hip_yaw_joint\": 5.0,\n                    \".*_hip_roll_joint\": 5.0,\n                    \".*_hip_pitch_joint\": 5.0,\n                    \".*_knee_joint\": 5.0,\n                    \"waist_yaw_joint\": 5.0,\n                    \"waist_roll_joint\": 5.0,\n                    \"waist_pitch_joint\": 5.0,\n                },\n                armature={\n                    \".*_hip_.*\": 0.01,\n                    \".*_knee_joint\": 0.01,\n                    \"waist_yaw_joint\": 0.01,\n                    \"waist_roll_joint\": 0.01,\n                    \"waist_pitch_joint\": 0.01,\n                },\n            ),\n            \"feet\": ImplicitActuatorCfg(\n                effort_limit=20,\n                joint_names_expr=[\".*_ankle_pitch_joint\", \".*_ankle_roll_joint\"],\n                stiffness=20.0,\n                damping=2.0,\n                armature=0.01,\n            ),\n            \"arms\": ImplicitActuatorCfg(\n                joint_names_expr=[\n                    \".*_shoulder_pitch_joint\",\n                    \".*_shoulder_roll_joint\",\n                    \".*_shoulder_yaw_joint\",\n                    \".*_elbow_joint\",\n                    \".*_wrist_.*\",\n\n                ],\n                effort_limit=300,\n                velocity_limit=100.0,\n                stiffness=40.0,\n                damping=10.0,\n         
       armature={\n                    \".*_shoulder_.*\": 0.01,\n                    \".*_elbow_.*\": 0.01,\n                    \".*_wrist_.*\": 0.01,\n                },\n            ),\n        },\n    )"
  },
  {
    "path": "humanoid_amp_env.py",
    "content": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-Clause\n\nfrom __future__ import annotations\n\nimport gymnasium as gym\nimport numpy as np\nimport torch\n\nimport isaaclab.sim as sim_utils\nfrom isaaclab.assets import Articulation\nfrom isaaclab.envs import DirectRLEnv\nfrom isaaclab.sim.spawners.from_files import GroundPlaneCfg, spawn_ground_plane\nfrom isaaclab.utils.math import quat_apply\n\nfrom .humanoid_amp_env_cfg import HumanoidAmpEnvCfg\nfrom .motions import MotionLoader\n\n\nclass HumanoidAmpEnv(DirectRLEnv):\n    cfg: HumanoidAmpEnvCfg\n\n    def __init__(self, cfg: HumanoidAmpEnvCfg, render_mode: str | None = None, **kwargs):\n        super().__init__(cfg, render_mode, **kwargs)\n\n        # action offset and scale\n        dof_lower_limits = self.robot.data.soft_joint_pos_limits[0, :, 0]\n        dof_upper_limits = self.robot.data.soft_joint_pos_limits[0, :, 1]\n        self.action_offset = 0.5 * (dof_upper_limits + dof_lower_limits)\n        self.action_scale = dof_upper_limits - dof_lower_limits\n\n        # load motion\n        self._motion_loader = MotionLoader(motion_file=self.cfg.motion_file, device=self.device)\n\n        # DOF and key body indexes\n        key_body_names = [\"right_hand\", \"left_hand\", \"right_foot\", \"left_foot\"]\n        self.ref_body_index = self.robot.data.body_names.index(self.cfg.reference_body)\n        self.key_body_indexes = [self.robot.data.body_names.index(name) for name in key_body_names]\n        self.motion_dof_indexes = self._motion_loader.get_dof_index(self.robot.data.joint_names)\n        self.motion_ref_body_index = self._motion_loader.get_body_index([self.cfg.reference_body])[0]\n        self.motion_key_body_indexes = self._motion_loader.get_body_index(key_body_names)\n\n        # reconfigure AMP observation space according to the number of 
observations and create the buffer\n        self.amp_observation_size = self.cfg.num_amp_observations * self.cfg.amp_observation_space\n        self.amp_observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(self.amp_observation_size,))\n        self.amp_observation_buffer = torch.zeros(\n            (self.num_envs, self.cfg.num_amp_observations, self.cfg.amp_observation_space), device=self.device\n        )\n\n    def _setup_scene(self):\n        self.robot = Articulation(self.cfg.robot)\n        # add ground plane\n        spawn_ground_plane(\n            prim_path=\"/World/ground\",\n            cfg=GroundPlaneCfg(\n                physics_material=sim_utils.RigidBodyMaterialCfg(\n                    static_friction=1.0,\n                    dynamic_friction=1.0,\n                    restitution=0.0,\n                ),\n            ),\n        )\n        # clone and replicate\n        self.scene.clone_environments(copy_from_source=False)\n        # we need to explicitly filter collisions for CPU simulation\n        if self.device == \"cpu\":\n            self.scene.filter_collisions(global_prim_paths=[\"/World/ground\"])\n\n        # add articulation to scene\n        self.scene.articulations[\"robot\"] = self.robot\n        # add lights\n        light_cfg = sim_utils.DomeLightCfg(intensity=2000.0, color=(0.75, 0.75, 0.75))\n        light_cfg.func(\"/World/Light\", light_cfg)\n\n    def _pre_physics_step(self, actions: torch.Tensor):\n        self.actions = actions.clone()\n\n    def _apply_action(self):\n        target = self.action_offset + self.action_scale * self.actions\n        self.robot.set_joint_position_target(target)\n\n    def _get_observations(self) -> dict:\n        # build task observation\n        obs = compute_obs(\n            self.robot.data.joint_pos,\n            self.robot.data.joint_vel,\n            self.robot.data.body_pos_w[:, self.ref_body_index],\n            self.robot.data.body_quat_w[:, self.ref_body_index],\n         
   self.robot.data.body_lin_vel_w[:, self.ref_body_index],\n            self.robot.data.body_ang_vel_w[:, self.ref_body_index],\n            self.robot.data.body_pos_w[:, self.key_body_indexes],\n        )\n\n        # update AMP observation history\n        for i in reversed(range(self.cfg.num_amp_observations - 1)):\n            self.amp_observation_buffer[:, i + 1] = self.amp_observation_buffer[:, i]\n        # build AMP observation\n        self.amp_observation_buffer[:, 0] = obs.clone()\n        self.extras = {\"amp_obs\": self.amp_observation_buffer.view(-1, self.amp_observation_size)}\n\n        return {\"policy\": obs}\n\n    def _get_rewards(self) -> torch.Tensor:\n        return torch.ones((self.num_envs,), dtype=torch.float32, device=self.sim.device)\n\n    def _get_dones(self) -> tuple[torch.Tensor, torch.Tensor]:\n        time_out = self.episode_length_buf >= self.max_episode_length - 1\n        if self.cfg.early_termination:\n            died = self.robot.data.body_pos_w[:, self.ref_body_index, 2] < self.cfg.termination_height\n        else:\n            died = torch.zeros_like(time_out)\n        return died, time_out\n\n    def _reset_idx(self, env_ids: torch.Tensor | None):\n        if env_ids is None or len(env_ids) == self.num_envs:\n            env_ids = self.robot._ALL_INDICES\n        self.robot.reset(env_ids)\n        super()._reset_idx(env_ids)\n\n        if self.cfg.reset_strategy == \"default\":\n            root_state, joint_pos, joint_vel = self._reset_strategy_default(env_ids)\n        elif self.cfg.reset_strategy.startswith(\"random\"):\n            start = \"start\" in self.cfg.reset_strategy\n            root_state, joint_pos, joint_vel = self._reset_strategy_random(env_ids, start)\n        else:\n            raise ValueError(f\"Unknown reset strategy: {self.cfg.reset_strategy}\")\n\n        self.robot.write_root_link_pose_to_sim(root_state[:, :7], env_ids)\n        self.robot.write_root_com_velocity_to_sim(root_state[:, 7:], 
env_ids)\n        self.robot.write_joint_state_to_sim(joint_pos, joint_vel, None, env_ids)\n\n    # reset strategies\n\n    def _reset_strategy_default(self, env_ids: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n        root_state = self.robot.data.default_root_state[env_ids].clone()\n        root_state[:, :3] += self.scene.env_origins[env_ids]\n        joint_pos = self.robot.data.default_joint_pos[env_ids].clone()\n        joint_vel = self.robot.data.default_joint_vel[env_ids].clone()\n        return root_state, joint_pos, joint_vel\n\n    def _reset_strategy_random(\n        self, env_ids: torch.Tensor, start: bool = False\n    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n        # sample random motion times (or zeros if start is True)\n        num_samples = env_ids.shape[0]\n        times = np.zeros(num_samples) if start else self._motion_loader.sample_times(num_samples)\n        # sample random motions\n        (\n            dof_positions,\n            dof_velocities,\n            body_positions,\n            body_rotations,\n            body_linear_velocities,\n            body_angular_velocities,\n        ) = self._motion_loader.sample(num_samples=num_samples, times=times)\n\n        # get root transforms (the humanoid torso)\n        motion_torso_index = self._motion_loader.get_body_index([\"torso\"])[0]\n        root_state = self.robot.data.default_root_state[env_ids].clone()\n        root_state[:, 0:3] = body_positions[:, motion_torso_index] + self.scene.env_origins[env_ids]\n        root_state[:, 2] += 0.15  # lift the humanoid slightly to avoid collisions with the ground\n        root_state[:, 3:7] = body_rotations[:, motion_torso_index]\n        root_state[:, 7:10] = body_linear_velocities[:, motion_torso_index]\n        root_state[:, 10:13] = body_angular_velocities[:, motion_torso_index]\n        # get DOFs state\n        dof_pos = dof_positions[:, self.motion_dof_indexes]\n        dof_vel = dof_velocities[:, 
self.motion_dof_indexes]\n\n        # update AMP observation\n        amp_observations = self.collect_reference_motions(num_samples, times)\n        self.amp_observation_buffer[env_ids] = amp_observations.view(num_samples, self.cfg.num_amp_observations, -1)\n\n        return root_state, dof_pos, dof_vel\n\n    # env methods\n\n    def collect_reference_motions(self, num_samples: int, current_times: np.ndarray | None = None) -> torch.Tensor:\n        # sample random motion times (or use the one specified)\n        if current_times is None:\n            current_times = self._motion_loader.sample_times(num_samples)\n        times = (\n            np.expand_dims(current_times, axis=-1)\n            - self._motion_loader.dt * np.arange(0, self.cfg.num_amp_observations)\n        ).flatten()\n        # get motions\n        (\n            dof_positions,\n            dof_velocities,\n            body_positions,\n            body_rotations,\n            body_linear_velocities,\n            body_angular_velocities,\n        ) = self._motion_loader.sample(num_samples=num_samples, times=times)\n        # compute AMP observation\n        amp_observation = compute_obs(\n            dof_positions[:, self.motion_dof_indexes],\n            dof_velocities[:, self.motion_dof_indexes],\n            body_positions[:, self.motion_ref_body_index],\n            body_rotations[:, self.motion_ref_body_index],\n            body_linear_velocities[:, self.motion_ref_body_index],\n            body_angular_velocities[:, self.motion_ref_body_index],\n            body_positions[:, self.motion_key_body_indexes],\n        )\n        return amp_observation.view(-1, self.amp_observation_size)\n\n\n@torch.jit.script\ndef quaternion_to_tangent_and_normal(q: torch.Tensor) -> torch.Tensor:\n    ref_tangent = torch.zeros_like(q[..., :3])\n    ref_normal = torch.zeros_like(q[..., :3])\n    ref_tangent[..., 0] = 1\n    ref_normal[..., -1] = 1\n    tangent = quat_apply(q, ref_tangent)\n    normal = 
quat_apply(q, ref_normal)\n    return torch.cat([tangent, normal], dim=len(tangent.shape) - 1)\n\n\n@torch.jit.script\ndef compute_obs(\n    dof_positions: torch.Tensor,\n    dof_velocities: torch.Tensor,\n    root_positions: torch.Tensor,\n    root_rotations: torch.Tensor,\n    root_linear_velocities: torch.Tensor,\n    root_angular_velocities: torch.Tensor,\n    key_body_positions: torch.Tensor,\n) -> torch.Tensor:\n    obs = torch.cat(\n        (\n            dof_positions,\n            dof_velocities,\n            root_positions[:, 2:3],  # root body height\n            quaternion_to_tangent_and_normal(root_rotations),\n            root_linear_velocities,\n            root_angular_velocities,\n            (key_body_positions - root_positions.unsqueeze(-2)).view(key_body_positions.shape[0], -1),\n        ),\n        dim=-1,\n    )\n    return obs\n"
  },
  {
    "path": "humanoid_amp_env_cfg.py",
    "content": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-Clause\n\nfrom __future__ import annotations\n\nimport os\nfrom dataclasses import MISSING\n\nfrom isaaclab_assets import HUMANOID_28_CFG\n\nfrom isaaclab.actuators import ImplicitActuatorCfg\nfrom isaaclab.assets import ArticulationCfg\nfrom isaaclab.envs import DirectRLEnvCfg\nfrom isaaclab.scene import InteractiveSceneCfg\nfrom isaaclab.sim import PhysxCfg, SimulationCfg\nfrom isaaclab.utils import configclass\n\nMOTIONS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"motions\")\n\n\n@configclass\nclass HumanoidAmpEnvCfg(DirectRLEnvCfg):\n    \"\"\"Humanoid AMP environment config (base class).\"\"\"\n\n    # env\n    episode_length_s = 10.0\n    decimation = 2\n\n    # spaces\n    observation_space = 81\n    action_space = 28\n    state_space = 0\n    num_amp_observations = 2\n    amp_observation_space = 81\n\n    early_termination = True\n    termination_height = 0.5\n\n    motion_file: str = MISSING\n    reference_body = \"torso\"\n    reset_strategy = \"random\"  # default, random, random-start\n    \"\"\"Strategy to be followed when resetting each environment (humanoid's pose and joint states).\n\n    * default: pose and joint states are set to the initial state of the asset.\n    * random: pose and joint states are set by sampling motions at random, uniform times.\n    * random-start: pose and joint states are set by sampling motion at the start (time zero).\n    \"\"\"\n\n    # simulation\n    sim: SimulationCfg = SimulationCfg(\n        dt=1 / 60,\n        render_interval=decimation,\n        physx=PhysxCfg(\n            gpu_found_lost_pairs_capacity=2**23,\n            gpu_total_aggregate_pairs_capacity=2**23,\n        ),\n    )\n\n    # scene\n    scene: InteractiveSceneCfg = InteractiveSceneCfg(num_envs=4096, env_spacing=10.0, 
replicate_physics=True)\n\n    # robot\n    robot: ArticulationCfg = HUMANOID_28_CFG.replace(prim_path=\"/World/envs/env_.*/Robot\").replace(\n        actuators={\n            \"body\": ImplicitActuatorCfg(\n                joint_names_expr=[\".*\"],\n                velocity_limit=100.0,\n                stiffness=None,\n                damping=None,\n            ),\n        },\n    )\n\n\n@configclass\nclass HumanoidAmpDanceEnvCfg(HumanoidAmpEnvCfg):\n    motion_file = os.path.join(MOTIONS_DIR, \"humanoid_dance.npz\")\n\n\n@configclass\nclass HumanoidAmpRunEnvCfg(HumanoidAmpEnvCfg):\n    motion_file = os.path.join(MOTIONS_DIR, \"humanoid_run.npz\")\n\n\n@configclass\nclass HumanoidAmpWalkEnvCfg(HumanoidAmpEnvCfg):\n    motion_file = os.path.join(MOTIONS_DIR, \"humanoid_walk.npz\")\n"
  },
  {
    "path": "motions/README.md",
    "content": "# Motion files\n\nThe motion files are in NumPy-file format that contains data from the skeleton DOFs and bodies that perform the motion.\n\nThe data (accessed by key) is described in the following table, where:\n\n* `N` is the number of motion frames recorded\n* `D` is the number of skeleton DOFs\n* `B` is the number of skeleton bodies\n\n| Key | Dtype | Shape | Description |\n| --- | ---- | ----- | ----------- |\n| `fps` | int64 | () | FPS at which motion was sampled |\n| `dof_names` | unicode string | (D,) | Skeleton DOF names |\n| `body_names` | unicode string | (B,) | Skeleton body names |\n| `dof_positions` | float32 | (N, D) | Skeleton DOF positions |\n| `dof_velocities` | float32 | (N, D) | Skeleton DOF velocities |\n| `body_positions` | float32 | (N, B, 3) | Skeleton body positions |\n| `body_rotations` | float32 | (N, B, 4) | Skeleton body rotations (as `wxyz` quaternion) |\n| `body_linear_velocities` | float32 | (N, B, 3) | Skeleton body linear velocities |\n| `body_angular_velocities` | float32 | (N, B, 3) | Skeleton body angular velocities |\n\n## Motion visualization\n\nThe `motion_viewer.py` file allows to visualize the skeleton motion recorded in a motion file.\n\nOpen an terminal in the `motions` folder and run the following command.\n\n```bash\npython motion_viewer.py --file MOTION_FILE_NAME.npz\n```\n\nSee `python motion_viewer.py --help` for available arguments.\n\nHumanoid:\n```\ndof_names:\n  Data type: <U16\n  Data shape: (28,)\n  Joint names:\n    1. abdomen_x\n    2. abdomen_y\n    3. abdomen_z\n    4. neck_x\n    5. neck_y\n    6. neck_z\n    7. right_shoulder_x\n    8. right_shoulder_y\n    9. right_shoulder_z\n    10. right_elbow\n    11. left_shoulder_x\n    12. left_shoulder_y\n    13. left_shoulder_z\n    14. left_elbow\n    15. right_hip_x\n    16. right_hip_y\n    17. right_hip_z\n    18. right_knee\n    19. right_ankle_x\n    20. right_ankle_y\n    21. right_ankle_z\n    22. left_hip_x\n    23. left_hip_y\n    24. 
left_hip_z\n    25. left_knee\n    26. left_ankle_x\n    27. left_ankle_y\n    28. left_ankle_z\n\nbody_names:\n  Data type: <U15\n  Data shape: (15,)\n  Body part names:\n    1. pelvis\n    2. torso\n    3. head\n    4. right_upper_arm\n    5. right_lower_arm\n    6. right_hand\n    7. left_upper_arm\n    8. left_lower_arm\n    9. left_hand\n    10. right_thigh\n    11. right_shin\n    12. right_foot\n    13. left_thigh\n    14. left_shin\n    15. left_foot\n```\nG1:\n```\ndof_names:\n  Data type: <U26\n  Data shape: (29,)\n  Joint names:\n    1. left_hip_pitch_joint\n    2. right_hip_pitch_joint\n    3. waist_yaw_joint\n    4. left_hip_roll_joint\n    5. right_hip_roll_joint\n    6. waist_roll_joint\n    7. left_hip_yaw_joint\n    8. right_hip_yaw_joint\n    9. waist_pitch_joint\n    10. left_knee_joint\n    11. right_knee_joint\n    12. left_shoulder_pitch_joint\n    13. right_shoulder_pitch_joint\n    14. left_ankle_pitch_joint\n    15. right_ankle_pitch_joint\n    16. left_shoulder_roll_joint\n    17. right_shoulder_roll_joint\n    18. left_ankle_roll_joint\n    19. right_ankle_roll_joint\n    20. left_shoulder_yaw_joint\n    21. right_shoulder_yaw_joint\n    22. left_elbow_joint\n    23. right_elbow_joint\n    24. left_wrist_roll_joint\n    25. right_wrist_roll_joint\n    26. left_wrist_pitch_joint\n    27. right_wrist_pitch_joint\n    28. left_wrist_yaw_joint\n    29. right_wrist_yaw_joint\n\nbody_names:\n  Data type: <U25\n  Data shape: (39,)\n  Body part names:\n    1. pelvis\n    2. imu_in_pelvis\n    3. left_hip_pitch_link\n    4. pelvis_contour_link\n    5. right_hip_pitch_link\n    6. waist_yaw_link\n    7. left_hip_roll_link\n    8. right_hip_roll_link\n    9. waist_roll_link\n    10. left_hip_yaw_link\n    11. right_hip_yaw_link\n    12. torso_link\n    13. left_knee_link\n    14. right_knee_link\n    15. d435_link\n    16. head_link\n    17. imu_in_torso\n    18. left_shoulder_pitch_link\n    19. logo_link\n    20. mid360_link\n    21. 
right_shoulder_pitch_link\n    22. left_ankle_pitch_link\n    23. right_ankle_pitch_link\n    24. left_shoulder_roll_link\n    25. right_shoulder_roll_link\n    26. left_ankle_roll_link\n    27. right_ankle_roll_link\n    28. left_shoulder_yaw_link\n    29. right_shoulder_yaw_link\n    30. left_elbow_link\n    31. right_elbow_link\n    32. left_wrist_roll_link\n    33. right_wrist_roll_link\n    34. left_wrist_pitch_link\n    35. right_wrist_pitch_link\n    36. left_wrist_yaw_link\n    37. right_wrist_yaw_link\n    38. left_rubber_hand\n    39. right_rubber_hand\n```"
  },
  {
    "path": "motions/__init__.py",
    "content": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-Clause\n\n\"\"\"\nAMP Motion Loader and motion files.\n\"\"\"\n\nfrom .motion_loader import MotionLoader\nfrom .motion_viewer import MotionViewer"
  },
  {
    "path": "motions/data_convert.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\"\nHumanoid Motion Data Converter\n\nThis script converts humanoid motion data from CSV format to NPZ format suitable for\nIsaac Lab's Adversarial Motion Prior (AMP) training (only suit CSV format in \nhttps://huggingface.co/datasets/lvhaidong/LAFAN1_Retargeting_Dataset/blob/main/README.md)\n\nUSAGE:\n    python data_convert.py\n    \nDESCRIPTION:\n    This script performs the following operations:\n    1. Reads raw motion capture data from CSV files\n    2. Interpolates data from 30fps to 60fps for smoother motion\n    3. Performs forward kinematics using Pinocchio to compute body poses\n    4. Calculates velocities using central differences and smoothing\n    5. Saves the processed data in NPZ format for Isaac Lab training\n    \nINPUT:\n    - CSV file containing motion capture data (joints positions + root pose)\n    - URDF file defining the robot model\n    - Mesh directory containing 3D models\n    \nOUTPUT:\n    - NPZ file containing:\n        - fps: Sampling rate (60 Hz)\n        - dof_names: Joint names\n        - body_names: Body link names\n        - dof_positions: Joint positions over time\n        - dof_velocities: Joint velocities over time\n        - body_positions: Body positions in world frame\n        - body_rotations: Body orientations (quaternions)\n        - body_linear_velocities: Body linear velocities\n        - body_angular_velocities: Body angular velocities\n        \nREQUIREMENTS:\n    - numpy\n    - pandas\n    - scipy\n    - pinocchio\n    \nCONFIGURATION:\n    - Modify csv_file variable to point to your motion data\n    - Adjust start_idx and end_idx to select frame range\n    - Update urdf_path and mesh_dir for your robot model\n    - Customize joint_names and body_names arrays as needed\n\"\"\"\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.ndimage import gaussian_filter1d\nfrom scipy.interpolate import interp1d\nfrom scipy.spatial.transform import 
Rotation as R, Slerp\nimport pinocchio as pin\n\n\n# -----------------------------------------------\n# 1. Quaternion Helper Functions\n# -----------------------------------------------\ndef quaternion_inverse(q):\n    \"\"\"Input q: (w, x, y, z), returns its inverse.\"\"\"\n    w, x, y, z = q\n    norm_sq = w*w + x*x + y*y + z*z\n    if norm_sq < 1e-8:\n        norm_sq = 1e-8\n    return np.array([w, -x, -y, -z], dtype=q.dtype) / norm_sq\n\ndef quaternion_multiply(q1, q2):\n    \"\"\"Input/output: (w, x, y, z)\"\"\"\n    w1, x1, y1, z1 = q1\n    w2, x2, y2, z2 = q2\n    w = w1*w2 - x1*x2 - y1*y2 - z1*z2\n    x = w1*x2 + x1*w2 + y1*z2 - z1*y2\n    y = w1*y2 - x1*z2 + y1*w2 + z1*x2\n    z = w1*z2 + x1*y2 - y1*x2 + z1*w2\n    return np.array([w, x, y, z], dtype=q1.dtype)\n\ndef compute_angular_velocity(q_prev, q_next, dt, eps=1e-8):\n    \"\"\"\n    Compute angular velocity from adjacent quaternions (w, x, y, z):\n      - Relative rotation q_rel = inv(q_prev) * q_next\n      - Extract rotation angle and axis from q_rel\n      - Return (angle / dt) * axis\n    \"\"\"\n    q_inv = quaternion_inverse(q_prev)\n    q_rel = quaternion_multiply(q_inv, q_next)\n    norm_q_rel = np.linalg.norm(q_rel)\n    if norm_q_rel < eps:\n        return np.zeros(3, dtype=np.float32)\n    q_rel /= norm_q_rel\n\n    w = np.clip(q_rel[0], -1.0, 1.0)\n    angle = 2.0 * np.arccos(w)\n    sin_half = np.sqrt(1.0 - w*w)\n    if sin_half < eps:\n        return np.zeros(3, dtype=np.float32)\n    axis = q_rel[1:] / sin_half\n    return (angle / dt) * axis\n\n\n# -----------------------------------------------\n# 2. 
Helper Function to Build Pinocchio RobotWrapper\n# -----------------------------------------------\ndef build_pin_robot(urdf_path, mesh_dir):\n    \"\"\"\n    Load URDF file and construct a pin.RobotWrapper with free-flyer.\n    Args:\n        urdf_path: Path to the URDF file\n        mesh_dir: Directory containing associated mesh files\n    Returns:\n        robot (pin.RobotWrapper)\n    \"\"\"\n    # Note: If URDF already contains floating joint, you can modify this to use BuildFromURDF(urdf_path, ...)\n    robot = pin.RobotWrapper.BuildFromURDF(\n        urdf_path,\n        mesh_dir,\n        pin.JointModelFreeFlyer()\n    )\n    return robot\n\n\n# -----------------------------------------------\n# 3. Main Conversion Pipeline\n# -----------------------------------------------\ndef main():\n    # 3.1 Read CSV data and extract desired frame range (changed to frames 250~550)\n    csv_file = \"g1/dance1_subject2.csv\"\n    df = pd.read_csv(csv_file, header=None)\n    start_idx = 250\n    end_idx = 550\n    # csv_file = \"g1/walk1_subject1.csv\"\n    # df = pd.read_csv(csv_file, header=None)\n    # start_idx = 100\n    # end_idx = 300\n    data_orig = df.iloc[start_idx:end_idx].to_numpy(dtype=np.float32)\n    N_orig = data_orig.shape[0]\n    print(f\"Loading CSV: {csv_file}, frame range [{start_idx}:{end_idx}], total {N_orig} frames.\")\n\n    # Original CSV: first 7 columns are root data, remaining are joint data\n    root_data_orig = data_orig[:, :7]      # (N_orig, 7)\n    joint_data_orig = data_orig[:, 7:]       # (N_orig, D)\n\n    # 3.2 Define original sampling rate (30fps) and new sampling rate (60fps), construct time series\n    fps_orig = 30\n    dt_orig = 1.0 / fps_orig\n    t_orig = np.linspace(0, (N_orig - 1) * dt_orig, N_orig)\n\n    fps_new = 60\n    dt_new = 1.0 / fps_new\n    N_new = 2 * N_orig - 1   # Insert one new frame between every two frames\n    t_new = np.linspace(0, (N_orig - 1) * dt_orig, N_new)\n\n    # 3.3 Interpolate root_data positions 
and joint angles\n    # Linear interpolation for positions (first three columns)\n    root_pos_interp = interp1d(t_orig, root_data_orig[:, 0:3], axis=0, kind='linear')(t_new)\n\n    # Slerp interpolation for quaternions (qx, qy, qz, qw)\n    # Note: Quaternions in CSV are stored as (qx, qy, qz, qw), which matches scipy requirements\n    rotations_orig = R.from_quat(root_data_orig[:, 3:7])\n    slerp = Slerp(t_orig, rotations_orig)\n    rotations_new = slerp(t_new)\n    root_quat_interp = rotations_new.as_quat()  # (N_new, 4) still (qx, qy, qz, qw)\n\n    # Combine interpolated root data\n    root_data = np.hstack((root_pos_interp, root_quat_interp))  # (N_new, 7)\n\n    # Linear interpolation for joint angles (joint_data)\n    joint_data = interp1d(t_orig, joint_data_orig, axis=0, kind='linear')(t_new)\n\n    # Update frame count, sampling rate, and time interval\n    N = N_new\n    fps = fps_new\n    dt = dt_new\n\n    # 3.4 Define joint names \n    joint_names = [\n        \"left_hip_pitch_joint\",\n        \"left_hip_roll_joint\",\n        \"left_hip_yaw_joint\",\n        \"left_knee_joint\",\n        \"left_ankle_pitch_joint\",\n        \"left_ankle_roll_joint\",\n        \"right_hip_pitch_joint\",\n        \"right_hip_roll_joint\",\n        \"right_hip_yaw_joint\",\n        \"right_knee_joint\",\n        \"right_ankle_pitch_joint\",\n        \"right_ankle_roll_joint\",\n        \"waist_yaw_joint\",\n        \"waist_roll_joint\",\n        \"waist_pitch_joint\",\n        \"left_shoulder_pitch_joint\",\n        \"left_shoulder_roll_joint\",\n        \"left_shoulder_yaw_joint\",\n        \"left_elbow_joint\",\n        \"left_wrist_roll_joint\",\n        \"left_wrist_pitch_joint\",\n        \"left_wrist_yaw_joint\",\n        \"right_shoulder_pitch_joint\",\n        \"right_shoulder_roll_joint\",\n        \"right_shoulder_yaw_joint\",\n        \"right_elbow_joint\",\n        \"right_wrist_roll_joint\",\n        \"right_wrist_pitch_joint\",\n        
\"right_wrist_yaw_joint\"\n    ]\n    dof_names = np.array(joint_names, dtype=np.str_)\n\n    # 3.5 Get joint positions (excluding Root)\n    dof_positions = joint_data.copy()      # shape: (N, D)\n\n    # 3.6 Calculate joint velocities (central differences + boundary forward/backward differences + Gaussian smoothing)\n    dof_velocities = np.zeros_like(dof_positions)\n    dof_velocities[1:-1] = (dof_positions[2:] - dof_positions[:-2]) / (2 * dt)\n    dof_velocities[0] = (dof_positions[1] - dof_positions[0]) / dt\n    dof_velocities[-1] = (dof_positions[-1] - dof_positions[-2]) / dt\n    dof_velocities_smoothed = gaussian_filter1d(dof_velocities, sigma=1, axis=0)\n\n    # 3.7 Specify link names to record and get their poses in global coordinate frame\n    body_names = [\n        \"pelvis\", \n        # \"head_link\",\n        \"left_shoulder_pitch_link\",\n        \"right_shoulder_pitch_link\",\n        \"left_elbow_link\",\n        \"right_elbow_link\",\n        \"right_hip_yaw_link\",\n        \"left_hip_yaw_link\",\n        \"right_rubber_hand\",\n        \"left_rubber_hand\",\n        \"right_ankle_roll_link\",\n        \"left_ankle_roll_link\"\n    ]\n\n\n\n    body_names = np.array(body_names, dtype=np.str_)\n    B = len(body_names)\n\n    body_positions = np.zeros((N, B, 3), dtype=np.float32)\n    body_rotations = np.zeros((N, B, 4), dtype=np.float32)\n\n    # 3.8 Build pin.RobotWrapper\n    #    (Please change urdf_path and mesh_dir to your actual paths)\n    urdf_path = \"robot_description/g1/g1_29dof_rev_1_0.urdf\"\n    mesh_dir = \"robot_description/g1\"\n    robot = build_pin_robot(urdf_path, mesh_dir)\n    model = robot.model\n    data_pk = robot.data\n\n    nq = model.nq  # Total DOF (including free-flyer)\n    if (7 + joint_data.shape[1]) != nq:\n        print(f\"Warning: CSV columns={7 + joint_data.shape[1]}, but pinocchio nq={nq}, may need to check or adjust script parsing.\")\n\n    # 3.9 Perform forward kinematics (FK) for each frame to get link 
poses in world coordinate frame\n    q_pin = pin.neutral(model)\n\n    for i in range(N):\n        # Set free-flyer\n        q_pin[0:3] = root_data[i, 0:3]\n        # Quaternions stored in root_data are in (qx, qy, qz, qw) order\n        q_pin[3:7] = root_data[i, 3:7]\n        # Other joints\n        dofD = joint_data.shape[1]\n        q_pin[7:7 + dofD] = joint_data[i, :]\n\n        # Forward kinematics\n        pin.forwardKinematics(model, data_pk, q_pin)\n        pin.updateFramePlacements(model, data_pk)\n\n        # Read and save global poses for each link\n        for j, link_name in enumerate(body_names):\n            fid = model.getFrameId(link_name)\n            link_tf = data_pk.oMf[fid]  # Link transformation in world frame\n\n            # Translation\n            body_positions[i, j, :] = link_tf.translation\n            # Rotation (pin.Quaternion defaults to (x,y,z,w), need to convert to (w,x,y,z))\n            quat_xyzw = pin.Quaternion(link_tf.rotation)\n            body_rotations[i, j, :] = np.array([quat_xyzw.w,\n                                                quat_xyzw.x,\n                                                quat_xyzw.y,\n                                                quat_xyzw.z],\n                                               dtype=np.float32)\n\n    # 3.10 Calculate body linear and angular velocities (in world coordinate frame)\n    # -- Linear velocities: central differences --\n    body_linear_velocities = np.zeros_like(body_positions)\n    body_linear_velocities[1:-1] = (body_positions[2:] - body_positions[:-2]) / (2 * dt)\n    body_linear_velocities[0] = (body_positions[1] - body_positions[0]) / dt\n    body_linear_velocities[-1] = (body_positions[-1] - body_positions[-2]) / dt\n    body_linear_velocities = gaussian_filter1d(body_linear_velocities, sigma=1, axis=0)\n\n    # -- Angular velocities: computed from adjacent quaternions (in world coordinate frame) --\n    body_angular_velocities = np.zeros((N, B, 3), 
dtype=np.float32)\n    for j in range(B):\n        quats = body_rotations[:, j, :]\n        angular_vels = np.zeros((N, 3), dtype=np.float32)\n        if N > 1:\n            angular_vels[0] = compute_angular_velocity(quats[0], quats[1], dt)\n            angular_vels[-1] = compute_angular_velocity(quats[-2], quats[-1], dt)\n        for k in range(1, N - 1):\n            av1 = compute_angular_velocity(quats[k - 1], quats[k], dt)\n            av2 = compute_angular_velocity(quats[k], quats[k + 1], dt)\n            angular_vels[k] = 0.5 * (av1 + av2)\n        # Smoothing\n        body_angular_velocities[:, j, :] = gaussian_filter1d(angular_vels, sigma=1, axis=0)\n\n    # 3.11 Package and save to NPZ\n    data_dict = {\n        \"fps\": fps,                                   # int64 scalar, sampling rate\n        \"dof_names\": dof_names,                       # unicode array (D,)\n        \"body_names\": body_names,                     # unicode array (B,)\n        \"dof_positions\": dof_positions,               # float32 (N, D)\n        \"dof_velocities\": dof_velocities_smoothed,    # float32 (N, D)\n        \"body_positions\": body_positions,             # float32 (N, B, 3)\n        \"body_rotations\": body_rotations,             # float32 (N, B, 4) (w,x,y,z)\n        \"body_linear_velocities\": body_linear_velocities,     # float32 (N, B, 3)\n        \"body_angular_velocities\": body_angular_velocities    # float32 (N, B, 3)\n    }\n\n    out_filename = \"g1.npz\"\n    np.savez(out_filename, **data_dict)\n\n    print(f\"Conversion completed, data saved to {out_filename}\")\n    print(\"fps:\", fps)\n    print(\"dof_names:\", dof_names.shape)\n    print(\"body_names:\", body_names.shape)\n    print(\"dof_positions:\", dof_positions.shape)\n    print(\"dof_velocities:\", dof_velocities_smoothed.shape)\n    print(\"body_positions:\", body_positions.shape)\n    print(\"body_rotations:\", body_rotations.shape)\n    print(\"body_linear_velocities:\", 
body_linear_velocities.shape)\n    print(\"body_angular_velocities:\", body_angular_velocities.shape)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "motions/motion_loader.py",
    "content": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers.\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-Clause\n\nimport numpy as np\nimport os\nimport torch\nfrom typing import Optional\n\n\nclass MotionLoader:\n    \"\"\"\n    Helper class to load and sample motion data from NumPy-file format.\n    \"\"\"\n\n    def __init__(self, motion_file: str, device: torch.device) -> None:\n        \"\"\"Load a motion file and initialize the internal variables.\n\n        Args:\n            motion_file: Motion file path to load.\n            device: The device to which to load the data.\n\n        Raises:\n            AssertionError: If the specified motion file doesn't exist.\n        \"\"\"\n        assert os.path.isfile(motion_file), f\"Invalid file path: {motion_file}\"\n        data = np.load(motion_file)\n\n        self.device = device\n        self._dof_names = data[\"dof_names\"].tolist()\n        self._body_names = data[\"body_names\"].tolist()\n\n        self.dof_positions = torch.tensor(data[\"dof_positions\"], dtype=torch.float32, device=self.device)\n        self.dof_velocities = torch.tensor(data[\"dof_velocities\"], dtype=torch.float32, device=self.device)\n        self.body_positions = torch.tensor(data[\"body_positions\"], dtype=torch.float32, device=self.device)\n        self.body_rotations = torch.tensor(data[\"body_rotations\"], dtype=torch.float32, device=self.device)\n        self.body_linear_velocities = torch.tensor(\n            data[\"body_linear_velocities\"], dtype=torch.float32, device=self.device\n        )\n        self.body_angular_velocities = torch.tensor(\n            data[\"body_angular_velocities\"], dtype=torch.float32, device=self.device\n        )\n\n        self.dt = 1.0 / data[\"fps\"]\n        self.num_frames = self.dof_positions.shape[0]\n        self.duration = self.dt * (self.num_frames - 1)\n        print(f\"Motion loaded ({motion_file}): duration: {self.duration} sec, frames: 
{self.num_frames}\")\n\n    @property\n    def dof_names(self) -> list[str]:\n        \"\"\"Skeleton DOF names.\"\"\"\n        return self._dof_names\n\n    @property\n    def body_names(self) -> list[str]:\n        \"\"\"Skeleton rigid body names.\"\"\"\n        return self._body_names\n\n    @property\n    def num_dofs(self) -> int:\n        \"\"\"Number of skeleton's DOFs.\"\"\"\n        return len(self._dof_names)\n\n    @property\n    def num_bodies(self) -> int:\n        \"\"\"Number of skeleton's rigid bodies.\"\"\"\n        return len(self._body_names)\n\n    def _interpolate(\n        self,\n        a: torch.Tensor,\n        *,\n        b: Optional[torch.Tensor] = None,\n        blend: Optional[torch.Tensor] = None,\n        start: Optional[np.ndarray] = None,\n        end: Optional[np.ndarray] = None,\n    ) -> torch.Tensor:\n        \"\"\"Linear interpolation between consecutive values.\n\n        Args:\n            a: The first value. Shape is (N, X) or (N, M, X).\n            b: The second value. Shape is (N, X) or (N, M, X).\n            blend: Interpolation coefficient between 0 (a) and 1 (b).\n            start: Indexes to fetch the first value. If both, ``start`` and ``end` are specified,\n                the first and second values will be fetches from the argument ``a`` (dimension 0).\n            end: Indexes to fetch the second value. If both, ``start`` and ``end` are specified,\n                the first and second values will be fetches from the argument ``a`` (dimension 0).\n\n        Returns:\n            Interpolated values. 
Shape is (N, X) or (N, M, X).\n        \"\"\"\n        if start is not None and end is not None:\n            return self._interpolate(a=a[start], b=a[end], blend=blend)\n        if a.ndim >= 2:\n            blend = blend.unsqueeze(-1)\n        if a.ndim >= 3:\n            blend = blend.unsqueeze(-1)\n        return (1.0 - blend) * a + blend * b\n\n    def _slerp(\n        self,\n        q0: torch.Tensor,\n        *,\n        q1: Optional[torch.Tensor] = None,\n        blend: Optional[torch.Tensor] = None,\n        start: Optional[np.ndarray] = None,\n        end: Optional[np.ndarray] = None,\n    ) -> torch.Tensor:\n        \"\"\"Interpolation between consecutive rotations (Spherical Linear Interpolation).\n\n        Args:\n            q0: The first quaternion (wxyz). Shape is (N, 4) or (N, M, 4).\n            q1: The second quaternion (wxyz). Shape is (N, 4) or (N, M, 4).\n            blend: Interpolation coefficient between 0 (q0) and 1 (q1).\n            start: Indexes to fetch the first quaternion. If both ``start`` and ``end`` are specified,\n                the first and second quaternions will be fetched from the argument ``q0`` (dimension 0).\n            end: Indexes to fetch the second quaternion. If both ``start`` and ``end`` are specified,\n                the first and second quaternions will be fetched from the argument ``q0`` (dimension 0).\n\n        Returns:\n            Interpolated quaternions. 
Shape is (N, 4) or (N, M, 4).\n        \"\"\"\n        if start is not None and end is not None:\n            return self._slerp(q0=q0[start], q1=q0[end], blend=blend)\n        if q0.ndim >= 2:\n            blend = blend.unsqueeze(-1)\n        if q0.ndim >= 3:\n            blend = blend.unsqueeze(-1)\n\n        qw, qx, qy, qz = 0, 1, 2, 3  # wxyz\n        cos_half_theta = (\n            q0[..., qw] * q1[..., qw]\n            + q0[..., qx] * q1[..., qx]\n            + q0[..., qy] * q1[..., qy]\n            + q0[..., qz] * q1[..., qz]\n        )\n\n        neg_mask = cos_half_theta < 0\n        q1 = q1.clone()\n        q1[neg_mask] = -q1[neg_mask]\n        cos_half_theta = torch.abs(cos_half_theta)\n        cos_half_theta = torch.unsqueeze(cos_half_theta, dim=-1)\n\n        half_theta = torch.acos(cos_half_theta)\n        sin_half_theta = torch.sqrt(1.0 - cos_half_theta * cos_half_theta)\n\n        ratio_a = torch.sin((1 - blend) * half_theta) / sin_half_theta\n        ratio_b = torch.sin(blend * half_theta) / sin_half_theta\n\n        new_q_x = ratio_a * q0[..., qx : qx + 1] + ratio_b * q1[..., qx : qx + 1]\n        new_q_y = ratio_a * q0[..., qy : qy + 1] + ratio_b * q1[..., qy : qy + 1]\n        new_q_z = ratio_a * q0[..., qz : qz + 1] + ratio_b * q1[..., qz : qz + 1]\n        new_q_w = ratio_a * q0[..., qw : qw + 1] + ratio_b * q1[..., qw : qw + 1]\n\n        new_q = torch.cat([new_q_w, new_q_x, new_q_y, new_q_z], dim=len(new_q_w.shape) - 1)\n        new_q = torch.where(torch.abs(sin_half_theta) < 0.001, 0.5 * q0 + 0.5 * q1, new_q)\n        new_q = torch.where(torch.abs(cos_half_theta) >= 1, q0, new_q)\n        return new_q\n\n    def _compute_frame_blend(self, times: np.ndarray) -> tuple[np.ndarray, np.ndarray, np.ndarray]:\n        \"\"\"Compute the indexes of the first and second values, as well as the blending time\n        to interpolate between them and the given times.\n\n        Args:\n            times: Times, between 0 and motion duration, to sample 
motion values.\n                Specified times will be clipped to fall within the range of the motion duration.\n\n        Returns:\n            First value indexes, Second value indexes, and blending time between 0 (first value) and 1 (second value).\n        \"\"\"\n        phase = np.clip(times / self.duration, 0.0, 1.0)\n        index_0 = (phase * (self.num_frames - 1)).round(decimals=0).astype(int)\n        index_1 = np.minimum(index_0 + 1, self.num_frames - 1)\n        blend = ((times - index_0 * self.dt) / self.dt).round(decimals=5)\n        return index_0, index_1, blend\n\n    def sample_times(self, num_samples: int, duration: float | None = None) -> np.ndarray:\n        \"\"\"Sample random motion times uniformly.\n\n        Args:\n            num_samples: Number of time samples to generate.\n            duration: Maximum motion duration to sample.\n                If not defined samples will be within the range of the motion duration.\n\n        Raises:\n            AssertionError: If the specified duration is longer than the motion duration.\n\n        Returns:\n            Time samples, between 0 and the specified/motion duration.\n        \"\"\"\n        duration = self.duration if duration is None else duration\n        assert (\n            duration <= self.duration\n        ), f\"The specified duration ({duration}) is longer than the motion duration ({self.duration})\"\n        return duration * np.random.uniform(low=0.0, high=1.0, size=num_samples)\n\n    def sample(\n        self, num_samples: int, times: Optional[np.ndarray] = None, duration: float | None = None\n    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:\n        \"\"\"Sample motion data.\n\n        Args:\n            num_samples: Number of time samples to generate. 
If ``times`` is defined, this parameter is ignored.\n            times: Motion time used for sampling.\n                If not defined, motion data will be random sampled uniformly in time.\n            duration: Maximum motion duration to sample.\n                If not defined, samples will be within the range of the motion duration.\n                If ``times`` is defined, this parameter is ignored.\n\n        Returns:\n            Sampled motion DOF positions (with shape (N, num_dofs)), DOF velocities (with shape (N, num_dofs)),\n            body positions (with shape (N, num_bodies, 3)), body rotations (with shape (N, num_bodies, 4), as wxyz quaternion),\n            body linear velocities (with shape (N, num_bodies, 3)) and body angular velocities (with shape (N, num_bodies, 3)).\n        \"\"\"\n        times = self.sample_times(num_samples, duration) if times is None else times\n        index_0, index_1, blend = self._compute_frame_blend(times)\n        blend = torch.tensor(blend, dtype=torch.float32, device=self.device)\n\n        return (\n            self._interpolate(self.dof_positions, blend=blend, start=index_0, end=index_1),\n            self._interpolate(self.dof_velocities, blend=blend, start=index_0, end=index_1),\n            self._interpolate(self.body_positions, blend=blend, start=index_0, end=index_1),\n            self._slerp(self.body_rotations, blend=blend, start=index_0, end=index_1),\n            self._interpolate(self.body_linear_velocities, blend=blend, start=index_0, end=index_1),\n            self._interpolate(self.body_angular_velocities, blend=blend, start=index_0, end=index_1),\n        )\n\n    def get_dof_index(self, dof_names: list[str]) -> list[int]:\n        \"\"\"Get skeleton DOFs indexes by DOFs names.\n\n        Args:\n            dof_names: List of DOFs names.\n\n        Raises:\n            AssertionError: If the specified DOFs name doesn't exist.\n\n        Returns:\n            List of DOFs indexes.\n        \"\"\"\n   
     indexes = []\n        for name in dof_names:\n            assert name in self._dof_names, f\"The specified DOF name ({name}) doesn't exist: {self._dof_names}\"\n            indexes.append(self._dof_names.index(name))\n        return indexes\n\n    def get_body_index(self, body_names: list[str]) -> list[int]:\n        \"\"\"Get skeleton body indexes by body names.\n\n        Args:\n            body_names: List of body names.\n\n        Raises:\n            AssertionError: If the specified body name doesn't exist.\n\n        Returns:\n            List of body indexes.\n        \"\"\"\n        indexes = []\n        for name in body_names:\n            assert name in self._body_names, f\"The specified body name ({name}) doesn't exist: {self._body_names}\"\n            indexes.append(self._body_names.index(name))\n        return indexes\n\n\nif __name__ == \"__main__\":\n    import argparse\n\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--file\", type=str, required=True, help=\"Motion file\")\n    args, _ = parser.parse_known_args()\n\n    motion = MotionLoader(args.file, \"cpu\")\n\n    print(\"- number of frames:\", motion.num_frames)\n    print(\"- number of DOFs:\", motion.num_dofs)\n    print(\"- dt:\", motion.dt)\n    print(\"- fps:\", 1.0 / motion.dt)\n    print(\"- number of bodies:\", motion.num_bodies)\n"
  },
  {
    "path": "motions/motion_replayer.py",
    "content": "\"\"\"\nMotion Replayer for G1 Humanoid Robot\n\nThis script replays motion data for the G1 humanoid robot in Isaac Sim.\nIt can also record the simulation data for further analysis.\n\nUsage:\n    python motion_replayer.py [options]\n\nOptions:\n    --motion MOTION_FILE    Motion data file to replay (default: G1_dance.npz)\n    --record               Enable recording of simulation data\n    --output OUTPUT_FILE   Output file name for recorded data (default: recorded_motion.npz)\n    --device DEVICE       Device to run simulation on (default: cuda:0)\n\nExample:\n    # Replay a motion file\n    python motion_replayer.py --motion G1_dance.npz\n\n    # Record simulation data while replaying\n    python motion_replayer.py --motion G1_dance.npz --record --output my_recording.npz\n\"\"\"\n\nimport sys\nimport os\nsys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))\n\nimport argparse\nfrom isaaclab.app import AppLauncher\nfrom record_data import MotionRecorder\n\n# Command line arguments\nparser = argparse.ArgumentParser()\nAppLauncher.add_app_launcher_args(parser)\nparser.add_argument(\"--motion\", type=str, default=\"G1_dance.npz\")\nparser.add_argument(\"--record\", action=\"store_true\", help=\"Enable recording of simulation data\")\nparser.add_argument(\"--output\", type=str, default=\"recorded_motion.npz\", help=\"Output filename for recorded data\")\nargs_cli = parser.parse_args()\n\n# Launch Isaac Sim\napp_launcher = AppLauncher(args_cli)\nsimulation_app = app_launcher.app\n\nimport torch\nimport isaaclab.sim as sim_utils\nfrom isaaclab.scene import InteractiveScene, InteractiveSceneCfg\nfrom motion_loader import MotionLoader\nfrom g1_cfg import G1_CFG\nfrom record_data import MotionRecorder\n\n# Load motion data and get dt\nmotion = MotionLoader(args_cli.motion, device=args_cli.device)\nnum_frames = motion.num_frames\nprint(f\"motion.dt: {motion.dt}\")\n\n# Find the index for the root body, typically 'pelvis'\ntry:\n   
 print(f\"Searching for 'pelvis' in the following list of body names: {motion.body_names}\")\n    root_idx = motion.body_names.index('pelvis')\n    print(f\"Found root body 'pelvis' at index: {root_idx}\")\nexcept (ValueError, AttributeError):\n    print(\"\\nError: Could not find 'pelvis' in the motion file's body_names.\")\n    print(\"The motion replayer requires 'pelvis' to be defined as the root body.\")\n    simulation_app.close()\n    sys.exit(1)\n\n# Configure simulation with dt matching motion.dt\n\n\nsim_cfg = sim_utils.SimulationCfg(\n    dt=motion.dt, \n    device=args_cli.device,\n    gravity=(0.0, 0.0, -9.81),  # Explicitly set gravity\n    render_interval=1,          # Render every physics step\n    enable_scene_query_support=True,\n    use_fabric=True,\n    physx=sim_utils.PhysxCfg(\n        solver_type=1,                    # TGS solver\n        min_position_iteration_count=8,    # Increase solver iterations\n        max_position_iteration_count=8,\n        min_velocity_iteration_count=4,    # Add velocity iterations\n        max_velocity_iteration_count=4,\n        enable_ccd=True,                  # Enable continuous collision detection\n        enable_stabilization=True,        # Enable additional stabilization\n        bounce_threshold_velocity=0.2,    # Lower threshold for more stable contacts\n        friction_offset_threshold=0.04,   # Increase friction threshold\n        friction_correlation_distance=0.025  # Increase correlation distance\n    ),\n\n)\nsim = sim_utils.SimulationContext(sim_cfg)\nsim.set_camera_view([3.0, 3.0, 3.0], [0.0, 0.0, 0.0])\n\n# Configure scene\nscene_cfg = InteractiveSceneCfg(\n    num_envs=1, \n    env_spacing=2.0\n)\nscene_cfg.robot = G1_CFG.replace(prim_path=\"/World/Robot\")\nscene = InteractiveScene(scene_cfg)\n\n# Add DomeLight for illumination\nlight_cfg = sim_utils.DomeLightCfg(intensity=2000.0, color=(0.75, 0.75, 0.75))\nlight_cfg.func(\"/World/Light\", light_cfg)\n\n# Add black ground plane at 
-0.5m\n# Note: the z-offset belongs in the spawn translation, not in the RGB color tuple\nground_cfg = sim_utils.GroundPlaneCfg(color=(0.0, 0.0, 0.0))\nground_cfg.func(\n    \"/World/ground\",\n    ground_cfg,\n    translation=(0.0, 0.0, -0.5),   # translation is (x, y, z)\n)\n# Reset simulation\nsim.reset()\nscene.reset()\n\nrobot = scene[\"robot\"]\n# Align joint order from motion file to robot's joint order\nmotion_dof_indices = motion.get_dof_index(robot.joint_names)\n\n# Initialize data recorder\n# We want to record the robot's state, so we pass the robot's joint names\nrecorder = MotionRecorder(\n    robot,\n    dof_names_to_record=robot.joint_names,\n    fps=int(round(1.0 / motion.dt)),\n    device=args_cli.device\n)\n\nif args_cli.record:\n    recorder.start_recording()\n    print(f\"\\nStarting data recording, will save to {args_cli.output}\")\n    print(f\"Will record one complete cycle, total {num_frames} frames\")\n\nprint(\"\\nStarting G1 motion replay loop...\")\nprint(\"Tip: Close window or press Ctrl+C to exit\")\n\ntry:\n    # First pass: Record data\n    if args_cli.record:\n        for i in range(num_frames):\n            if not simulation_app.is_running():\n                break\n\n            # Get current frame's joint and root states (aligned order!)\n            joint_pos = motion.dof_positions[i, motion_dof_indices].unsqueeze(0)\n            joint_vel = motion.dof_velocities[i, motion_dof_indices].unsqueeze(0)\n            root_pos = motion.body_positions[i, root_idx].unsqueeze(0)\n            root_rot = motion.body_rotations[i, root_idx].unsqueeze(0)\n            root_vel = motion.body_linear_velocities[i, root_idx].unsqueeze(0)\n            root_ang_vel = motion.body_angular_velocities[i, root_idx].unsqueeze(0)\n            root_state = torch.cat([root_pos, root_rot, root_vel, root_ang_vel], dim=-1)\n\n            # Write to simulation\n            robot.write_root_link_pose_to_sim(root_state[:, :7], torch.tensor([0], device=args_cli.device))\n            
robot.write_root_com_velocity_to_sim(root_state[:, 7:], torch.tensor([0], device=args_cli.device))\n            robot.write_joint_state_to_sim(joint_pos, joint_vel, None, torch.tensor([0], device=args_cli.device))\n\n            # Step simulation (strict dt synchronization)\n            scene.update(dt=sim.get_physics_dt())\n            scene.write_data_to_sim()\n            sim.step(render=True)\n            \n            # Record data\n            recorder.record_frame(i)\n            # Show progress\n            if i % 10 == 0:  # Show progress every 10 frames\n                print(f\"\\rRecording progress: {i}/{num_frames} frames\", end=\"\", flush=True)\n\n        # Save data after completing one cycle\n        print(\"\\n\\nCompleted one cycle of recording, saving data...\")\n        recorder.stop_recording()\n        recorder.save_data(args_cli.output)\n        print(f\"Data saved to {args_cli.output}\")\n        print(\"\\nContinuing motion replay...\")\n\n    # Continue motion replay loop\n    while simulation_app.is_running():\n        for i in range(num_frames):\n            if not simulation_app.is_running():\n                break\n                \n            # Get current frame's joint and root states (aligned order!)\n            joint_pos = motion.dof_positions[i, motion_dof_indices].unsqueeze(0)\n            joint_vel = motion.dof_velocities[i, motion_dof_indices].unsqueeze(0)\n            root_pos = motion.body_positions[i, root_idx].unsqueeze(0)\n            root_rot = motion.body_rotations[i, root_idx].unsqueeze(0)\n            root_vel = motion.body_linear_velocities[i, root_idx].unsqueeze(0)\n            root_ang_vel = motion.body_angular_velocities[i, root_idx].unsqueeze(0)\n            root_state = torch.cat([root_pos, root_rot, root_vel, root_ang_vel], dim=-1)\n\n            # Write to simulation\n            robot.write_root_link_pose_to_sim(root_state[:, :7], torch.tensor([0], device=args_cli.device))\n            
robot.write_root_com_velocity_to_sim(root_state[:, 7:], torch.tensor([0], device=args_cli.device))\n            robot.write_joint_state_to_sim(joint_pos, joint_vel, None, torch.tensor([0], device=args_cli.device))\n\n            # Step simulation (strict dt synchronization)\n            scene.update(dt=sim.get_physics_dt())\n            scene.write_data_to_sim()\n            sim.step(render=True)\n\nexcept KeyboardInterrupt:\n    print(\"\\nProgram interrupted by user\")\n\nfinally:\n    simulation_app.close()\n"
  },
  {
    "path": "motions/motion_viewer.py",
    "content": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-Clause\n\nimport matplotlib\nimport matplotlib.animation\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\n\nimport mpl_toolkits.mplot3d  # noqa: F401\n\ntry:\n    from .motion_loader import MotionLoader\nexcept ImportError:\n    from motion_loader import MotionLoader\n\n\nclass MotionViewer:\n    \"\"\"\n    Helper class to visualize motion data from NumPy-file format.\n    \"\"\"\n\n    def __init__(self, motion_file: str, device: torch.device | str = \"cpu\", render_scene: bool = False) -> None:\n        \"\"\"Load a motion file and initialize the internal variables.\n\n        Args:\n            motion_file: Motion file path to load.\n            device: The device to which to load the data.\n            render_scene: Whether the scene (space occupied by the skeleton during movement)\n                is rendered instead of a reduced view of the skeleton.\n\n        Raises:\n            AssertionError: If the specified motion file doesn't exist.\n        \"\"\"\n        self._figure = None\n        self._figure_axes = None\n        self._render_scene = render_scene\n\n        # load motions\n        self._motion_loader = MotionLoader(motion_file=motion_file, device=device)\n\n        self._num_frames = self._motion_loader.num_frames\n        self._current_frame = 0\n        self._body_positions = self._motion_loader.body_positions.cpu().numpy()\n\n        print(\"\\nBody\")\n        for i, name in enumerate(self._motion_loader.body_names):\n            minimum = np.min(self._body_positions[:, i], axis=0).round(decimals=2)\n            maximum = np.max(self._body_positions[:, i], axis=0).round(decimals=2)\n            print(f\"  |-- [{name}] minimum position: {minimum}, maximum position: {maximum}\")\n\n    def _drawing_callback(self, frame: 
int) -> None:\n        \"\"\"Drawing callback called each frame\"\"\"\n        # get current frame data\n        vertices = self._body_positions[self._current_frame]\n        # draw skeleton state\n        self._figure_axes.clear()\n        self._figure_axes.scatter(*vertices.T, color=\"black\", depthshade=False)\n        # adjust axes according to motion view\n        # - scene\n        if self._render_scene:\n            # compute axes limits\n            minimum = np.min(self._body_positions.reshape(-1, 3), axis=0)\n            maximum = np.max(self._body_positions.reshape(-1, 3), axis=0)\n            center = 0.5 * (maximum + minimum)\n            diff = 0.75 * (maximum - minimum)\n        # - skeleton\n        else:\n            # compute axes limits\n            minimum = np.min(vertices, axis=0)\n            maximum = np.max(vertices, axis=0)\n            center = 0.5 * (maximum + minimum)\n            diff = np.array([0.75 * np.max(maximum - minimum).item()] * 3)\n        # scale view\n        self._figure_axes.set_xlim((center[0] - diff[0], center[0] + diff[0]))\n        self._figure_axes.set_ylim((center[1] - diff[1], center[1] + diff[1]))\n        self._figure_axes.set_zlim((center[2] - diff[2], center[2] + diff[2]))\n        self._figure_axes.set_box_aspect(aspect=diff / diff[0])\n        # plot ground plane\n        x, y = np.meshgrid([center[0] - diff[0], center[0] + diff[0]], [center[1] - diff[1], center[1] + diff[1]])\n        self._figure_axes.plot_surface(x, y, np.zeros_like(x), color=\"green\", alpha=0.2)\n        # set labels and title\n        self._figure_axes.set_xlabel(\"X\")\n        self._figure_axes.set_ylabel(\"Y\")\n        self._figure_axes.set_zlabel(\"Z\")\n        self._figure_axes.set_title(f\"frame: {self._current_frame}/{self._num_frames}\")\n        # increase frame counter\n        self._current_frame += 1\n        if self._current_frame >= self._num_frames:\n            self._current_frame = 0\n\n    def 
show(self) -> None:\n        \"\"\"Show motion\"\"\"\n        # create a 3D figure\n        self._figure = plt.figure()\n        self._figure_axes = self._figure.add_subplot(projection=\"3d\")\n        # matplotlib animation (the instance must live as long as the animation will run)\n        self._animation = matplotlib.animation.FuncAnimation(\n            fig=self._figure,\n            func=self._drawing_callback,\n            frames=self._num_frames,\n            interval=1000 * self._motion_loader.dt,\n        )\n        plt.show()\n\n\nif __name__ == \"__main__\":\n    import argparse\n\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--file\", type=str, required=True, help=\"Motion file\")\n    parser.add_argument(\n        \"--render-scene\",\n        action=\"store_true\",\n        default=False,\n        help=(\n            \"Whether the scene (space occupied by the skeleton during movement) is rendered instead of a reduced view\"\n            \" of the skeleton.\"\n        ),\n    )\n    parser.add_argument(\"--matplotlib-backend\", type=str, default=\"TkAgg\", help=\"Matplotlib interactive backend\")\n    args, _ = parser.parse_known_args()\n\n    # https://matplotlib.org/stable/users/explain/figure/backends.html#interactive-backends\n    matplotlib.use(args.matplotlib_backend)\n\n    viewer = MotionViewer(args.file, render_scene=args.render_scene)\n    viewer.show()\n"
  },
  {
    "path": "motions/record_data.py",
"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nMotion Data Recording Tool\n\nThis script provides functionality for recording and managing motion data from robots.\nIt includes classes and functions for:\n- Recording motion data in real-time\n- Saving motion data to npz files\n- Loading motion data from npz files\n- Managing motion data structure\n- Converting joint orders between different formats\n\nUsage:\n    As a module:\n        from record_data import MotionRecorder, MotionData\n\n        recorder = MotionRecorder(robot, dof_names_to_record, fps=60)\n        recorder.start_recording()\n        # ... record frames ...\n        recorder.stop_recording()\n        recorder.save_data(\"output.npz\")\n\"\"\"\n\nimport numpy as np\nfrom typing import List, Optional\nimport os\nfrom dataclasses import dataclass\nfrom datetime import datetime\n\n@dataclass\nclass MotionData:\n    fps: int\n    dof_names: np.ndarray\n    body_names: np.ndarray\n    dof_positions: np.ndarray\n    dof_velocities: np.ndarray\n    body_positions: np.ndarray\n    body_rotations: np.ndarray\n    body_linear_velocities: np.ndarray\n    body_angular_velocities: np.ndarray\n\n\ndef smooth_motion_data(motion_data: MotionData, window_size: int = 3) -> MotionData:\n    \"\"\"Apply smoothing to motion data to reduce jitter.\n\n    Args:\n        motion_data: Original motion data\n        window_size: Size of smoothing window\n\n    Returns:\n        Smoothed motion data\n    \"\"\"\n    # Create a copy of the data\n    smoothed = MotionData(\n        fps=motion_data.fps,\n        dof_names=motion_data.dof_names,\n        body_names=motion_data.body_names,\n        dof_positions=motion_data.dof_positions.copy(),\n        dof_velocities=motion_data.dof_velocities.copy(),\n        body_positions=motion_data.body_positions.copy(),\n        body_rotations=motion_data.body_rotations.copy(),\n        
body_linear_velocities=motion_data.body_linear_velocities.copy(),\n        body_angular_velocities=motion_data.body_angular_velocities.copy()\n    )\n\n    # Apply smoothing to positions and velocities\n    for i in range(window_size, len(smoothed.dof_positions) - window_size):\n        # Smooth joint positions\n        smoothed.dof_positions[i] = np.mean(\n            motion_data.dof_positions[i-window_size:i+window_size+1],\n            axis=0\n        )\n        # Smooth joint velocities\n        smoothed.dof_velocities[i] = np.mean(\n            motion_data.dof_velocities[i-window_size:i+window_size+1],\n            axis=0\n        )\n        # Smooth body positions\n        smoothed.body_positions[i] = np.mean(\n            motion_data.body_positions[i-window_size:i+window_size+1],\n            axis=0\n        )\n        # Smooth body rotations (component-wise quaternion mean; only valid for\n        # nearby, same-hemisphere quaternions, which holds for high-fps motion)\n        smoothed.body_rotations[i] = np.mean(\n            motion_data.body_rotations[i-window_size:i+window_size+1],\n            axis=0\n        )\n        # Normalize quaternions back to unit length\n        smoothed.body_rotations[i] = smoothed.body_rotations[i] / np.linalg.norm(\n            smoothed.body_rotations[i],\n            axis=-1,\n            keepdims=True\n        )\n\n    return smoothed\n\nclass MotionRecorder:\n    \"\"\"Motion data recorder for robot movements\"\"\"\n\n    def __init__(self, robot, dof_names_to_record: List[str],\n                 fps: int = 60, device: str = \"cuda\", smoothing_window: int = 3):\n        \"\"\"\n        Args:\n            robot: Robot object for metadata\n            dof_names_to_record: List of joint names to record.\n            fps: Target frame rate (used to calculate dt)\n            device: Device to use (cuda/cpu)\n            smoothing_window: Window size for motion smoothing\n        \"\"\"\n        self.robot = robot\n        self.fps = int(fps)\n        self.dt = 1.0 / self.fps\n        self.device = device\n        
self.smoothing_window = smoothing_window\n\n        # Get names directly from robot to avoid empty arrays\n        self.dof_names = np.asarray(dof_names_to_record, dtype=np.str_)\n        self.body_names = np.asarray(robot.body_names, dtype=np.str_)\n\n        try:\n            self.root_body_idx = list(robot.body_names).index('pelvis')\n        except ValueError:\n            raise ValueError(\"The robot asset must have a body named 'pelvis' to be used with MotionRecorder.\")\n\n        # Create a mapping from the robot's full joint list to the desired recording list\n        robot_dof_map = {name: i for i, name in enumerate(robot.joint_names)}\n        self.dof_indices = np.array([robot_dof_map[name] for name in dof_names_to_record], dtype=np.int32)\n\n        self.recorded_frames: list[dict] = []\n        self.is_recording = False\n\n    # The remaining interface is unchanged --------------------------\n    def start_recording(self):\n        self.recorded_frames.clear()\n        self.is_recording = True\n\n    def stop_recording(self):\n        self.is_recording = False\n\n    def record_frame(self, frame_idx: int):\n        \"\"\"Record single frame data (no need to pass robot/motion_dof_indices externally)\"\"\"\n        if not self.is_recording:\n            return None\n\n        # --- Joints ---\n        dof_pos = self.robot.data.joint_pos[0].cpu().numpy()\n        dof_vel = self.robot.data.joint_vel[0].cpu().numpy()\n\n        # --- Root ---\n        root_pos = self.robot.data.body_pos_w[0, self.root_body_idx].cpu().numpy()\n        root_rot = self.robot.data.body_quat_w[0, self.root_body_idx].cpu().numpy()\n        root_lin_vel = self.robot.data.body_lin_vel_w[0, self.root_body_idx].cpu().numpy()\n        root_ang_vel = self.robot.data.body_ang_vel_w[0, self.root_body_idx].cpu().numpy()\n\n        # --- Full body ---\n        body_pos = self.robot.data.body_pos_w[0].cpu().numpy()\n        body_rot = self.robot.data.body_quat_w[0].cpu().numpy()\n        body_lin_vel = 
self.robot.data.body_lin_vel_w[0].cpu().numpy()\n        body_ang_vel = self.robot.data.body_ang_vel_w[0].cpu().numpy()\n\n        self.recorded_frames.append(\n            dict(\n                frame_idx=frame_idx,\n                dof_positions=dof_pos[self.dof_indices],\n                dof_velocities=dof_vel[self.dof_indices],\n                root_position=root_pos,\n                root_rotation=root_rot,\n                root_linear_velocity=root_lin_vel,\n                root_angular_velocity=root_ang_vel,\n                body_positions=body_pos,\n                body_rotations=body_rot,\n                body_linear_velocities=body_lin_vel,\n                body_angular_velocities=body_ang_vel,\n            )\n        )\n        return self.recorded_frames[-1]\n\n    def get_recorded_data(self) -> MotionData | None:\n        if not self.recorded_frames:\n            return None\n\n        n = len(self.recorded_frames)\n        d = len(self.dof_indices)\n        b = len(self.body_names)\n\n        # Pre-allocate arrays\n        dof_pos = np.zeros((n, d), np.float32)\n        dof_vel = np.zeros_like(dof_pos)\n        body_pos = np.zeros((n, b, 3), np.float32)\n        body_rot = np.zeros((n, b, 4), np.float32)\n        body_lin = np.zeros_like(body_pos)\n        body_ang = np.zeros_like(body_pos)\n\n        for i, f in enumerate(self.recorded_frames):\n            dof_pos[i] = f[\"dof_positions\"]\n            dof_vel[i] = f[\"dof_velocities\"]\n            body_pos[i] = f[\"body_positions\"]\n            body_rot[i] = f[\"body_rotations\"]\n            body_lin[i] = f[\"body_linear_velocities\"]\n            body_ang[i] = f[\"body_angular_velocities\"]\n\n        motion_data = MotionData(\n            fps=self.fps,\n            dof_names=self.dof_names,\n            body_names=self.body_names,\n            dof_positions=dof_pos,\n            dof_velocities=dof_vel,\n            body_positions=body_pos,\n            body_rotations=body_rot,\n            
body_linear_velocities=body_lin,\n            body_angular_velocities=body_ang,\n        )\n        \n        # Apply smoothing if window size > 1\n        if self.smoothing_window > 1:\n            motion_data = smooth_motion_data(motion_data, self.smoothing_window)\n            \n        return motion_data\n\n    def save_data(self, out_file: str, data: MotionData | None = None) -> bool:\n        if data is None:\n            data = self.get_recorded_data()\n        if data is None:\n            print(\"No data to save\")\n            return False\n\n        os.makedirs(os.path.dirname(os.path.abspath(out_file)), exist_ok=True)\n        try:\n            np.savez(\n                out_file,\n                fps=data.fps,\n                dof_names=data.dof_names,\n                body_names=data.body_names,\n                dof_positions=data.dof_positions,\n                dof_velocities=data.dof_velocities,\n                body_positions=data.body_positions,\n                body_rotations=data.body_rotations,\n                body_linear_velocities=data.body_linear_velocities,\n                body_angular_velocities=data.body_angular_velocities,\n                record_time=datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\"),\n            )\n            print(f\"Data saved to {out_file}\")\n            return True\n        except Exception as e:\n            print(\"Error saving data:\", e)\n            return False\n\n\ndef load_motion_data(file_path: str) -> Optional[MotionData]:\n    \"\"\"Load motion data file\n    \n    Args:\n        file_path: Path to .npz file\n        \n    Returns:\n        MotionData object if successful, None otherwise\n    \"\"\"\n    try:\n        data = np.load(file_path)\n        return MotionData(\n            fps=int(data['fps']),\n            dof_names=data['dof_names'],\n            body_names=data['body_names'],\n            dof_positions=data['dof_positions'],\n            dof_velocities=data['dof_velocities'],\n            
body_positions=data['body_positions'],\n            body_rotations=data['body_rotations'],\n            body_linear_velocities=data['body_linear_velocities'],\n            body_angular_velocities=data['body_angular_velocities']\n        )\n    except Exception as e:\n        print(f\"Error loading data file: {e}\")\n        return None\n"
  },
  {
    "path": "motions/test/get_joint_name.py",
"content": "#!/usr/bin/env python\nimport argparse\nimport os\n\n# ========= Parse command-line arguments ==========\nparser = argparse.ArgumentParser(\n    description=\"Inspect the joint and body names of the G1 robot in Isaac Lab.\"\n)\nparser.add_argument(\"--num_envs\", type=int, default=1, help=\"Number of environments to spawn.\")\n# Append the AppLauncher command-line arguments here\nfrom isaaclab.app import AppLauncher\nAppLauncher.add_app_launcher_args(parser)\nargs_cli = parser.parse_args()\n\n# ========= Launch the Omniverse application ==========\napp_launcher = AppLauncher(args_cli)\nsimulation_app = app_launcher.app\n\n# ========= Import Isaac Lab modules (must come after app launch) ==========\nimport isaaclab.sim as sim_utils\nfrom isaaclab.assets import AssetBaseCfg\nfrom isaaclab.scene import InteractiveScene, InteractiveSceneCfg\nfrom isaaclab.utils import configclass\nfrom isaaclab.actuators import ImplicitActuatorCfg\nfrom isaaclab.assets import ArticulationCfg\n\n# ========= Define the tabletop scene configuration ==========\n@configclass\nclass TableTopSceneCfg(InteractiveSceneCfg):\n    \"\"\"Configuration for the tabletop scene.\"\"\"\n    ground = AssetBaseCfg(\n        prim_path=\"/World/defaultGroundPlane\",\n        spawn=sim_utils.GroundPlaneCfg(),\n        init_state=AssetBaseCfg.InitialStateCfg(pos=(0.0, 0.0, 0.0)),\n    )\n    dome_light = AssetBaseCfg(\n        prim_path=\"/World/Light\",\n        spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75)),\n    )\n\n    robot: ArticulationCfg = ArticulationCfg(\n        prim_path=\"/World/Robot\",\n        
spawn=sim_utils.UsdFileCfg(\n            usd_path=os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), \"usd/g1_29dof_rev_1_0.usd\"),\n            activate_contact_sensors=True,\n            rigid_props=sim_utils.RigidBodyPropertiesCfg(\n                disable_gravity=False,\n                retain_accelerations=False,\n                linear_damping=0.0,\n                angular_damping=0.0,\n                max_linear_velocity=1000.0,\n                max_angular_velocity=1000.0,\n                max_depenetration_velocity=1.0,\n            ),\n            articulation_props=sim_utils.ArticulationRootPropertiesCfg(\n                enabled_self_collisions=False, solver_position_iteration_count=8, solver_velocity_iteration_count=4\n            ),\n        ),\n        init_state=ArticulationCfg.InitialStateCfg(\n            pos=(0.0, 1.0, 0.8),\n            joint_pos={\n                \".*_hip_pitch_joint\": -0.20,\n                \".*_knee_joint\": 0.42,\n                \".*_ankle_pitch_joint\": -0.23,\n                # \".*_elbow_pitch_joint\": 0.87,\n                \"left_shoulder_roll_joint\": 0.16,\n                \"left_shoulder_pitch_joint\": 0.35,\n                \"right_shoulder_roll_joint\": -0.16,\n                \"right_shoulder_pitch_joint\": 0.35,\n            },\n            joint_vel={\".*\": 0.0},\n        ),\n        soft_joint_pos_limit_factor=0.9,\n        actuators={\n            \"legs\": ImplicitActuatorCfg(\n                joint_names_expr=[\n                    \".*_hip_yaw_joint\",\n                    \".*_hip_roll_joint\",\n                    \".*_hip_pitch_joint\",\n                    \".*_knee_joint\",\n                    \"waist_yaw_joint\",\n                    \"waist_roll_joint\",\n                    \"waist_pitch_joint\",\n                ],\n                effort_limit=300,\n                velocity_limit=100.0,\n                stiffness={\n                    \".*_hip_yaw_joint\": 150.0,\n    
                \".*_hip_roll_joint\": 150.0,\n                    \".*_hip_pitch_joint\": 200.0,\n                    \".*_knee_joint\": 200.0,\n                    \"waist_yaw_joint\": 200.0,\n                    \"waist_roll_joint\": 200.0,\n                    \"waist_pitch_joint\": 200.0,\n                },\n                damping={\n                    \".*_hip_yaw_joint\": 5.0,\n                    \".*_hip_roll_joint\": 5.0,\n                    \".*_hip_pitch_joint\": 5.0,\n                    \".*_knee_joint\": 5.0,\n                    \"waist_yaw_joint\": 5.0,\n                    \"waist_roll_joint\": 5.0,\n                    \"waist_pitch_joint\": 5.0,\n                },\n                armature={\n                    \".*_hip_.*\": 0.01,\n                    \".*_knee_joint\": 0.01,\n                    \"waist_yaw_joint\": 0.01,\n                    \"waist_roll_joint\": 0.01,\n                    \"waist_pitch_joint\": 0.01,\n                },\n            ),\n            \"feet\": ImplicitActuatorCfg(\n                effort_limit=20,\n                joint_names_expr=[\".*_ankle_pitch_joint\", \".*_ankle_roll_joint\"],\n                stiffness=20.0,\n                damping=2.0,\n                armature=0.01,\n            ),\n            \"arms\": ImplicitActuatorCfg(\n                joint_names_expr=[\n                    \".*_shoulder_pitch_joint\",\n                    \".*_shoulder_roll_joint\",\n                    \".*_shoulder_yaw_joint\",\n                    \".*_elbow_joint\",\n                    \".*_wrist_.*\",\n                ],\n                effort_limit=300,\n                velocity_limit=100.0,\n                stiffness=40.0,\n                damping=10.0,\n                armature={\n                    \".*_shoulder_.*\": 0.01,\n                    \".*_elbow_.*\": 0.01,\n                    \".*_wrist_.*\": 0.01,\n                },\n            ),\n        },\n    )\n\n# ========= Helper: print all prim paths under the robot asset 
==========\ndef list_robot_prims():\n    try:\n        # Use the USD Python API to traverse all children under \"/World/Robot\"\n        import omni.usd\n        stage = omni.usd.get_context().get_stage()\n        robot_prim = stage.GetPrimAtPath(\"/World/Robot\")\n        if robot_prim:\n            print(\"Prim paths found under '/World/Robot':\")\n            for prim in robot_prim.GetChildren():\n                print(prim.GetPath())\n        else:\n            print(\"No prim found under '/World/Robot'; check that the robot loaded correctly.\")\n    except Exception as e:\n        print(\"Error getting the USD stage:\", e)\n\n# ========= Run the simulation ==========\ndef run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):\n    \"\"\"Main loop: run the simulation and print the robot's joint and body information (inspect the printed output first to determine the naming scheme).\"\"\"\n\n    # First print all prim paths under the robot asset to help confirm the actual naming scheme\n    list_robot_prims()\n\n    # Once the joint and body naming is identified in the output above, matching\n    # regular expressions can be written (e.g. if joint names contain \"joint\" and\n    # body names contain \"body\").\n    robot = scene[\"robot\"]\n\n    print(\"Parsed joint names:\", robot.joint_names)\n    print(\"Parsed body names:\", robot.body_names)\n    print(robot.data.joint_pos.shape)\n    print(robot.data.joint_vel.shape)\n    # self.robot.data.body_pos_w[:, self.ref_body_index],\n    # self.robot.data.body_quat_w[:, self.ref_body_index],\n    # self.robot.data.body_lin_vel_w[:, self.ref_body_index],\n    # self.robot.data.body_ang_vel_w[:, self.ref_body_index],\n    # self.robot.data.body_pos_w[:, self.key_body_indexes],\n\n    step_count = 0\n    while simulation_app.is_running():\n        # Update the scene and advance the physics by one step\n        scene.update(dt=sim.get_physics_dt())\n        scene.write_data_to_sim()\n        sim.step()\n        step_count += 1\n\n# ========= Main function ==========\ndef main():\n    # Create the simulation config, specifying the timestep and device (e.g. \"cuda:0\" or \"cpu\")\n    sim_cfg = sim_utils.SimulationCfg(dt=0.01, device=args_cli.device)\n    sim = sim_utils.SimulationContext(sim_cfg)\n    sim.set_camera_view([2.5, 2.5, 2.5], [0.0, 0.0, 0.0])\n\n    # Initialize the scene (all entities are managed by InteractiveScene)\n    scene_cfg = 
TableTopSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)\n    scene = InteractiveScene(scene_cfg)\n\n    # Reset the simulation environment\n    sim.reset()\n    print(\"[INFO]: Setup complete. Starting simulation...\")\n\n    # Enter the main simulation loop\n    run_simulator(sim, scene)\n\nif __name__ == \"__main__\":\n    main()\n    simulation_app.close()\n# Parsed joint names: ['left_hip_pitch_joint', 'right_hip_pitch_joint', 'waist_yaw_joint', 'left_hip_roll_joint', 'right_hip_roll_joint', 'waist_roll_joint', 'left_hip_yaw_joint', 'right_hip_yaw_joint', 'waist_pitch_joint', 'left_knee_joint', 'right_knee_joint', 'left_shoulder_pitch_joint', 'right_shoulder_pitch_joint', 'left_ankle_pitch_joint', 'right_ankle_pitch_joint', 'left_shoulder_roll_joint', 'right_shoulder_roll_joint', 'left_ankle_roll_joint', 'right_ankle_roll_joint', 'left_shoulder_yaw_joint', 'right_shoulder_yaw_joint', 'left_elbow_joint', 'right_elbow_joint', 'left_wrist_roll_joint', 'right_wrist_roll_joint', 'left_wrist_pitch_joint', 'right_wrist_pitch_joint', 'left_wrist_yaw_joint', 'right_wrist_yaw_joint']\n# Parsed body names: ['pelvis', 'imu_in_pelvis', 'left_hip_pitch_link', 'pelvis_contour_link', 'right_hip_pitch_link', 'waist_yaw_link', 'left_hip_roll_link', 'right_hip_roll_link', 'waist_roll_link', 'left_hip_yaw_link', 'right_hip_yaw_link', 'torso_link', 'left_knee_link', 'right_knee_link', 'd435_link', 'head_link', 'imu_in_torso', 'left_shoulder_pitch_link', 'logo_link', 'mid360_link', 'right_shoulder_pitch_link', 'left_ankle_pitch_link', 'right_ankle_pitch_link', 'left_shoulder_roll_link', 'right_shoulder_roll_link', 'left_ankle_roll_link', 'right_ankle_roll_link', 'left_shoulder_yaw_link', 'right_shoulder_yaw_link', 'left_elbow_link', 'right_elbow_link', 'left_wrist_roll_link', 'right_wrist_roll_link', 'left_wrist_pitch_link', 'right_wrist_pitch_link', 'left_wrist_yaw_link', 'right_wrist_yaw_link', 'left_rubber_hand', 'right_rubber_hand']\n# torch.Size([1, 29])\n# torch.Size([1, 29])"
  },
  {
    "path": "motions/update_pelvis_data.py",
    "content": "import numpy as np\nimport os\nimport argparse\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots\nfrom scipy.spatial.transform import Rotation as R\n\ndef plot_pelvis_data_2d(source_data, target_data, source_pelvis_idx, target_pelvis_idx):\n    \"\"\"\n    Generates 2D plots comparing pelvis data (positions, velocities, rotations)\n    from source and target files over time.\n    \"\"\"\n    print(\"Generating 2D comparison plots...\")\n\n    plots_to_create = [\n        ('body_positions', 'Pelvis Position Comparison', 'Position (m)'),\n        ('body_linear_velocities', 'Pelvis Linear Velocity Comparison', 'Velocity (m/s)'),\n        ('body_angular_velocities', 'Pelvis Angular Velocity Comparison', 'Angular Velocity (rad/s)'),\n        ('body_rotations', 'Pelvis Rotation (Quaternion) Comparison', 'Value')\n    ]\n\n    fig = make_subplots(\n        rows=len(plots_to_create), cols=1,\n        subplot_titles=[p[1] for p in plots_to_create],\n        vertical_spacing=0.08\n    )\n\n    fps = source_data.get('fps', 60)\n\n    for i, (key, title, y_title) in enumerate(plots_to_create, 1):\n        for data, idx, name in [(source_data, source_pelvis_idx, 'Source'), (target_data, target_pelvis_idx, 'Target')]:\n            if key not in data:\n                continue\n            \n            motion_data = data[key][:, idx]\n            num_frames = motion_data.shape[0]\n            time = np.arange(num_frames) / fps\n            \n            dims = motion_data.shape[1]\n            labels = ['X', 'Y', 'Z', 'W'] if dims == 4 else ['X', 'Y', 'Z']\n            \n            for d in range(dims):\n                fig.add_trace(\n                    go.Scatter(\n                        x=time,\n                        y=motion_data[:, d],\n                        name=f'{name} {labels[d]}',\n                        legendgroup=f'group{i}',\n                        line=dict(dash='solid' if name == 'Source' else 'dash', 
width=1.5),\n                        showlegend=(d==0) # Show legend only for the first component to avoid clutter\n                    ),\n                    row=i, col=1\n                )\n        fig.update_yaxes(title_text=y_title, row=i, col=1)\n        fig.update_xaxes(title_text=\"Time (s)\", row=i, col=1)\n\n    fig.update_layout(\n        title_text=\"Pelvis Data 2D Comparison\",\n        height=350 * len(plots_to_create),\n        legend_tracegroupgap = 250\n    )\n    \n    output_html_file = \"pelvis_data_2d_comparison.html\"\n    fig.write_html(output_html_file)\n    print(f\"2D plot visualization saved to: {os.path.abspath(output_html_file)}\")\n\n\ndef visualize_pelvis_trajectories(source_data, target_data, source_pelvis_idx, target_pelvis_idx):\n    \"\"\"\n    Visualizes and compares the 3D trajectories and orientations of the pelvis from two motion data sources.\n    \"\"\"\n    print(\"Generating 3D trajectory visualization...\")\n    \n    fig = go.Figure()\n\n    for data, idx, name in [(source_data, source_pelvis_idx, 'Source'), (target_data, target_pelvis_idx, 'Target')]:\n        positions = data['body_positions'][:, idx, :]\n        rotations_quat = data['body_rotations'][:, idx, :]\n\n        # Add trajectory line\n        fig.add_trace(go.Scatter3d(\n            x=positions[:, 0], y=positions[:, 1], z=positions[:, 2],\n            mode='lines',\n            name=f'{name} Trajectory',\n            line=dict(width=4)\n        ))\n\n        # Add orientation cones\n        num_frames = positions.shape[0]\n        step = max(1, num_frames // 20) # Show about 20 orientation markers\n        \n        rotations = R.from_quat(rotations_quat[:, [1, 2, 3, 0]]) # reorder to x,y,z,w for SciPy\n\n        # Get the forward vector (e.g., x-axis) for each orientation\n        forward_vectors = rotations.apply([0.1, 0, 0]) \n\n        fig.add_trace(go.Cone(\n            x=positions[::step, 0], y=positions[::step, 1], z=positions[::step, 2],\n           
 u=forward_vectors[::step, 0], v=forward_vectors[::step, 1], w=forward_vectors[::step, 2],\n            sizemode=\"absolute\", sizeref=0.1,\n            showscale=False,\n            name=f'{name} Orientation',\n            anchor=\"tip\"\n        ))\n\n    fig.update_layout(\n        title='Pelvis 3D Trajectory and Orientation Comparison',\n        scene=dict(\n            xaxis_title='X (m)',\n            yaxis_title='Y (m)',\n            zaxis_title='Z (m)',\n            aspectmode='data'\n        ),\n        legend_title_text='Data Source'\n    )\n    \n    output_html_file = \"pelvis_trajectory_comparison.html\"\n    fig.write_html(output_html_file)\n    print(f\"Visualization saved to: {os.path.abspath(output_html_file)}\")\n\n\ndef update_pelvis_data(source_file, target_file, dry_run=False):\n    \"\"\"\n    Copies the pelvis data from a source motion file to a target motion file.\n\n    This function loads two .npz motion files, finds the 'pelvis' body index in both,\n    and then overwrites the pelvis trajectory data (position, rotation, velocities)\n    in the target file with the data from the source file.\n\n    Args:\n        source_file (str): Path to the .npz file with the source pelvis data.\n        target_file (str): Path to the .npz file to be updated.\n        dry_run (bool): If True, print data for comparison without saving.\n    \"\"\"\n    try:\n        # Load the source and target .npz files\n        source_data = np.load(source_file)\n        target_data = np.load(target_file)\n        print(f\"Loaded source file: {source_file}\")\n        print(f\"Loaded target file: {target_file}\")\n\n        # Find the index of 'pelvis' in the body_names array for both files\n        source_body_names = list(source_data['body_names'])\n        target_body_names = list(target_data['body_names'])\n\n        source_pelvis_idx = source_body_names.index('pelvis')\n        target_pelvis_idx = target_body_names.index('pelvis')\n        print(f\"Found 'pelvis' 
at index {source_pelvis_idx} in source file.\")\n        print(f\"Found 'pelvis' at index {target_pelvis_idx} in target file.\")\n\n        if dry_run:\n            print(\"\\n--- DRY RUN MODE ---\")\n            print(\"Displaying pelvis data for the first 5 frames. No files will be changed.\")\n            \n            print(\"\\n--- Source Pelvis Data (first 5 frames) ---\")\n            print(\"Body Positions:\")\n            print(source_data['body_positions'][:5, source_pelvis_idx])\n            print(\"\\nBody Rotations:\")\n            print(source_data['body_rotations'][:5, source_pelvis_idx])\n            \n            print(\"\\n--- Target Pelvis Data (first 5 frames) ---\")\n            print(\"Body Positions:\")\n            print(target_data['body_positions'][:5, target_pelvis_idx])\n            print(\"\\nBody Rotations:\")\n            print(target_data['body_rotations'][:5, target_pelvis_idx])\n            \n            visualize_pelvis_trajectories(source_data, target_data, source_pelvis_idx, target_pelvis_idx)\n            plot_pelvis_data_2d(source_data, target_data, source_pelvis_idx, target_pelvis_idx)\n\n            print(\"\\n--- END DRY RUN ---\")\n            return\n\n        # Convert the loaded target data into a mutable dictionary\n        target_data_dict = {key: target_data[key] for key in target_data.files}\n        \n        # Check if the number of frames is consistent\n        num_frames_source = source_data['body_positions'].shape[0]\n        num_frames_target = target_data_dict['body_positions'].shape[0]\n        if num_frames_source < num_frames_target:\n            print(f\"Warning: Source file has fewer frames ({num_frames_source}) than target file ({num_frames_target}). 
Target will be truncated.\")\n            num_frames = num_frames_source\n            # Truncate all relevant arrays in the target data\n            for key in ['dof_positions', 'dof_velocities', 'body_positions', 'body_rotations', 'body_linear_velocities', 'body_angular_velocities']:\n                if key in target_data_dict:\n                    target_data_dict[key] = target_data_dict[key][:num_frames]\n        else:\n            num_frames = num_frames_target\n            if num_frames_source > num_frames_target:\n                 print(f\"Warning: Source file has more frames ({num_frames_source}) than target file ({num_frames_target}). Source data will be truncated.\")\n\n        # Copy the pelvis data from source to target\n        print(\"Copying pelvis data...\")\n        target_data_dict['body_positions'][:, target_pelvis_idx] = source_data['body_positions'][:num_frames, source_pelvis_idx]\n        target_data_dict['body_rotations'][:, target_pelvis_idx] = source_data['body_rotations'][:num_frames, source_pelvis_idx]\n        # target_data_dict['body_linear_velocities'][:, target_pelvis_idx] = source_data['body_linear_velocities'][:num_frames, source_pelvis_idx]\n        # target_data_dict['body_angular_velocities'][:, target_pelvis_idx] = source_data['body_angular_velocities'][:num_frames, source_pelvis_idx]\n        \n        # Save the updated data back to the target file\n        np.savez(\n            target_file,\n            **target_data_dict\n        )\n        print(f\"Successfully updated pelvis data in {target_file}\")\n\n    except FileNotFoundError as e:\n        print(f\"Error: {e}. Please ensure the file paths are correct.\")\n    except ValueError:\n        print(\"Error: 'pelvis' not found in one of the files. 
Cannot proceed.\")\n    except Exception as e:\n        print(f\"An unexpected error occurred: {e}\")\n\n\nif __name__ == \"__main__\":\n    # Set to True to print data without modifying files\n    DRY_RUN = True  \n\n    # Define the file paths relative to the script's location\n    script_dir = os.path.dirname(__file__)\n    source_motion_file = os.path.join(script_dir, \"G1_dance_old.npz\")\n    target_motion_file = os.path.join(script_dir, \"G1_dance.npz\")\n    \n    update_pelvis_data(source_motion_file, target_motion_file, dry_run=DRY_RUN) "
  },
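The frame-count handling in `update_pelvis_data` above reduces to taking the overlapping frame range of the two files and overwriting a single body's slice. A minimal sketch with toy arrays (the shapes and index values are illustrative, not taken from the real motion files):

```python
import numpy as np

# Toy stand-ins for the (frames, bodies, 3) 'body_positions' arrays:
# source has 5 frames and 2 bodies, target has 4 frames and 3 bodies.
source_pos = np.arange(5 * 2 * 3, dtype=float).reshape(5, 2, 3)
target_pos = np.zeros((4, 3, 3))
src_idx, tgt_idx = 0, 1  # hypothetical 'pelvis' index in each file

# Use the overlapping frame range, truncating whichever file is longer.
num_frames = min(source_pos.shape[0], target_pos.shape[0])
target_pos = target_pos[:num_frames]

# Overwrite only the pelvis slice in the target; other bodies are untouched.
target_pos[:, tgt_idx] = source_pos[:num_frames, src_idx]

print(target_pos.shape)        # (4, 3, 3)
print(target_pos[0, tgt_idx])  # [0. 1. 2.]
```

The same slicing applies unchanged to the rotation and velocity arrays, which is why the script loops over a fixed key list when truncating.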
  {
    "path": "motions/verify_motion.py",
    "content": "\"\"\"\nMotion Data Verification Tool\n\nThis script verifies and displays detailed information about motion data stored in npz files.\nIt shows information about:\n- FPS and duration\n- File size and name\n- Data structure and content\n- Sample data for each type of motion data\n\nUsage:\n    python verify_motion.py --file <motion_file.npz>\n\nArguments:\n    --file    Path to the npz file containing motion data\n\nExample:\n    python verify_motion.py --file G1_dance.npz\n\nOutput:\n    Prints detailed information about the motion data file, including:\n    - Basic file information (size, name)\n    - FPS and duration\n    - Data structure overview\n    - Sample data for each type of motion data\n\"\"\"\n\nimport numpy as np\nimport argparse\nimport os\nnp.set_printoptions(threshold=100, precision=3, suppress=True)\n\ndef verify_motion_file(npz_file):\n    # Load npz file\n    data = np.load(npz_file, allow_pickle=True)\n    \n    # First display FPS information\n    if 'fps' in data:\n        fps = data['fps'].item()\n        print(f\"\\nFPS: {fps}\")\n        if 'dof_positions' in data:\n            duration = data['dof_positions'].shape[0] / fps\n            print(f\"Motion duration: {duration:.2f} seconds\")\n    else:\n        print(\"\\nWarning: 'fps' key not found!\")\n    \n    # Get file size\n    file_size = os.path.getsize(npz_file) / (1024 * 1024)  # Convert to MB\n    print(f\"\\nFile Information:\")\n    print(f\"Filename: {os.path.basename(npz_file)}\")\n    print(f\"File size: {file_size:.2f} MB\")\n    \n    print(\"\\nData Content Overview:\")\n    print(\"-\" * 50)\n    for key in data:\n        arr = data[key]\n        print(f\"\\n{key}:\")\n        print(f\"  Data type: {arr.dtype}\")\n        print(f\"  Data shape: {arr.shape}\")\n        \n        # Display content based on different data types\n        if key == 'dof_names':\n            print(f\"  Joint names:\")\n            for i, name in enumerate(arr):\n                
print(f\"    {i+1}. {name}\")\n        elif key == 'body_names':\n            print(f\"  Body part names:\")\n            for i, name in enumerate(arr):\n                print(f\"    {i+1}. {name}\")\n        elif key == \"body_positions\":\n            print(f\"  Body positions sample (first 2 frames, first 3 parts):\")\n            print(arr[:2, :3, :])  # Show first 2 frames, first 3 parts positions\n        elif key == \"body_rotations\":\n            print(f\"  Body rotations sample (first 2 frames, first 3 parts, quaternions):\")\n            print(arr[:2, :3, :])  # Show first 2 frames, first 3 parts quaternion rotations\n        elif key == \"body_linear_velocities\":\n            print(f\"  Body linear velocities sample (first 2 frames, first 3 parts):\")\n            print(arr[:2, :3, :])  # Show first 2 frames, first 3 parts linear velocities\n        elif key == \"body_angular_velocities\":\n            print(f\"  Body angular velocities sample (first 2 frames, first 3 parts):\")\n            print(arr[:2, :3, :])  # Show first 2 frames, first 3 parts angular velocities\n        elif key == \"dof_positions\":\n            print(f\"  Joint positions sample (first 2 frames, first 5 joints):\")\n            print(arr[:2, :5])  # Show first 2 frames, first 5 joints positions\n        elif key == \"dof_velocities\":\n            print(f\"  Joint velocities sample (first 2 frames, first 5 joints):\")\n            print(arr[:2, :5])  # Show first 2 frames, first 5 joints velocities\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(description=\"Verify and display motion data file contents\")\n    parser.add_argument(\"--file\", type=str, required=True, help=\"Path to the npz file\")\n    args = parser.parse_args()\n    \n    verify_motion_file(args.file)\n"
  },
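The npz layout `verify_motion.py` reads (`fps` as a scalar plus per-frame arrays such as `dof_positions` with shape `(frames, dofs)`) can be exercised against a synthetic in-memory file; the toy shapes below are illustrative only:

```python
import io
import numpy as np

# Build a tiny npz with the layout verify_motion.py expects.
buf = io.BytesIO()
np.savez(buf, fps=np.array(30), dof_positions=np.zeros((60, 5)))
buf.seek(0)

data = np.load(buf, allow_pickle=True)
fps = data['fps'].item()
duration = data['dof_positions'].shape[0] / fps
print(f"FPS: {fps}, duration: {duration:.2f} seconds")  # 60 frames @ 30 fps -> 2.00 s
```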
  {
    "path": "motions/visualize_motion.py",
    "content": "\"\"\"\nMotion Data Visualization Tool\n\nThis script visualizes motion data from npz files, creating interactive plots for:\n- Joint positions\n- Joint velocities\n- Body linear velocities\n- Body angular velocities\n\nUsage:\n    python visualize_motion.py --file <motion_file.npz>\n\nArguments:\n    --file    Path to the npz file containing motion data\n\nExample:\n    python visualize_motion.py --file G1_dance.npz\n\nOutput:\n    Generates an HTML file with interactive plots in the same directory as the input file.\n    The output filename will be <input_filename>_visualization.html\n\"\"\"\n\nimport numpy as np\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots\nimport argparse\nimport os\n\ndef visualize_motion(npz_file):\n    # Load data\n    data = np.load(npz_file, allow_pickle=True)\n    \n    # Get basic information\n    fps = data['fps'].item() if 'fps' in data else 60\n    dof_names = data['dof_names'] if 'dof_names' in data else []\n    body_names = data['body_names'] if 'body_names' in data else []\n    \n    # Create subplot layout\n    fig = make_subplots(\n        rows=5, cols=1,\n        subplot_titles=('Root Link 3D Position (pelvis)',\n                       'Joint Positions (DOF Positions)', \n                       'Joint Velocities (DOF Velocities)',\n                       'Body Linear Velocities',\n                       'Body Angular Velocities'),\n        vertical_spacing=0.05,\n        specs=[[{\"type\": \"scatter\"}],\n               [{\"type\": \"scatter\"}],\n               [{\"type\": \"scatter\"}],\n               [{\"type\": \"scatter\"}],\n               [{\"type\": \"scatter\"}]],\n        row_heights=[0.2, 0.2, 0.2, 0.2, 0.2]\n    )\n    \n    # Time axis\n    time = np.arange(len(data['dof_positions'])) / fps if 'dof_positions' in data else np.arange(len(data['body_linear_velocities'])) / fps\n    \n    # 1. 
Plot root link 3D position\n    if 'body_positions' in data and 'pelvis' in body_names:\n        root_idx = body_names.tolist().index('pelvis')\n        root_positions = data['body_positions'][:, root_idx, :]\n        \n        # X component\n        fig.add_trace(\n            go.Scatter(\n                x=time,\n                y=root_positions[:, 0],\n                name=\"Pelvis Position (X)\",\n                mode='lines',\n                line=dict(width=1)\n            ),\n            row=1, col=1\n        )\n        # Y component\n        fig.add_trace(\n            go.Scatter(\n                x=time,\n                y=root_positions[:, 1],\n                name=\"Pelvis Position (Y)\",\n                mode='lines',\n                line=dict(width=1)\n            ),\n            row=1, col=1\n        )\n        # Z component\n        fig.add_trace(\n            go.Scatter(\n                x=time,\n                y=root_positions[:, 2],\n                name=\"Pelvis Position (Z)\",\n                mode='lines',\n                line=dict(width=1)\n            ),\n            row=1, col=1\n        )\n    \n    # 2. Plot all joint positions\n    if 'dof_positions' in data:\n        for i in range(len(dof_names)):\n            fig.add_trace(\n                go.Scatter(\n                    x=time,\n                    y=data['dof_positions'][:, i],\n                    name=dof_names[i],\n                    mode='lines',\n                    line=dict(width=1)\n                ),\n                row=2, col=1\n            )\n    \n    # 3. 
Plot all joint velocities\n    if 'dof_velocities' in data:\n        for i in range(len(dof_names)):\n            fig.add_trace(\n                go.Scatter(\n                    x=time,\n                    y=data['dof_velocities'][:, i],\n                    name=dof_names[i],\n                    mode='lines',\n                    line=dict(width=1)\n                ),\n                row=3, col=1\n            )\n    \n    # 4. Plot all body linear velocities (X, Y, Z components)\n    if 'body_linear_velocities' in data:\n        for i in range(len(body_names)):\n            velocities = data['body_linear_velocities'][:, i, :]\n            # X component\n            fig.add_trace(\n                go.Scatter(\n                    x=time,\n                    y=velocities[:, 0],\n                    name=f\"{body_names[i]} (X)\",\n                    mode='lines',\n                    line=dict(width=1)\n                ),\n                row=4, col=1\n            )\n            # Y component\n            fig.add_trace(\n                go.Scatter(\n                    x=time,\n                    y=velocities[:, 1],\n                    name=f\"{body_names[i]} (Y)\",\n                    mode='lines',\n                    line=dict(width=1)\n                ),\n                row=4, col=1\n            )\n            # Z component\n            fig.add_trace(\n                go.Scatter(\n                    x=time,\n                    y=velocities[:, 2],\n                    name=f\"{body_names[i]} (Z)\",\n                    mode='lines',\n                    line=dict(width=1)\n                ),\n                row=4, col=1\n            )\n    \n    # 5. 
Plot all body angular velocities (X, Y, Z components)\n    if 'body_angular_velocities' in data:\n        for i in range(len(body_names)):\n            velocities = data['body_angular_velocities'][:, i, :]\n            # X component\n            fig.add_trace(\n                go.Scatter(\n                    x=time,\n                    y=velocities[:, 0],\n                    name=f\"{body_names[i]} (X)\",\n                    mode='lines',\n                    line=dict(width=1)\n                ),\n                row=5, col=1\n            )\n            # Y component\n            fig.add_trace(\n                go.Scatter(\n                    x=time,\n                    y=velocities[:, 1],\n                    name=f\"{body_names[i]} (Y)\",\n                    mode='lines',\n                    line=dict(width=1)\n                ),\n                row=5, col=1\n            )\n            # Z component\n            fig.add_trace(\n                go.Scatter(\n                    x=time,\n                    y=velocities[:, 2],\n                    name=f\"{body_names[i]} (Z)\",\n                    mode='lines',\n                    line=dict(width=1)\n                ),\n                row=5, col=1\n            )\n    \n    # Update layout\n    fig.update_layout(\n        title_text=\"Motion Data Visualization (Complete Data)\",\n        height=2000,\n        showlegend=True,\n        legend=dict(\n            yanchor=\"top\",\n            y=0.99,\n            xanchor=\"left\",\n            x=1.05,\n            font=dict(size=8)\n        )\n    )\n    \n    # Update axis labels\n    for i in range(1, 6):\n        fig.update_xaxes(title_text=\"Time (seconds)\", row=i, col=1)\n    \n    fig.update_yaxes(title_text=\"Position (m)\", row=1, col=1)\n    fig.update_yaxes(title_text=\"Position (radians)\", row=2, col=1)\n    fig.update_yaxes(title_text=\"Velocity (rad/s)\", row=3, col=1)\n    fig.update_yaxes(title_text=\"Linear Velocity (m/s)\", row=4, 
col=1)\n    fig.update_yaxes(title_text=\"Angular Velocity (rad/s)\", row=5, col=1)\n    \n    # Generate output file path automatically\n    input_dir = os.path.dirname(os.path.abspath(npz_file))\n    input_filename = os.path.splitext(os.path.basename(npz_file))[0]\n    output_html = os.path.join(input_dir, f\"{input_filename}_visualization.html\")\n    \n    # Save visualization results\n    fig.write_html(output_html)\n    print(f\"Visualization results saved to: {output_html}\")\n\ndef quaternion_to_euler(q):\n    \"\"\"Convert quaternion to Euler angles (roll, pitch, yaw)\"\"\"\n    # Normalize quaternion\n    q = q / np.linalg.norm(q)\n    w, x, y, z = q\n    \n    # Calculate Euler angles\n    roll = np.arctan2(2 * (w*x + y*z), 1 - 2 * (x*x + y*y))\n    pitch = np.arcsin(2 * (w*y - z*x))\n    yaw = np.arctan2(2 * (w*z + x*y), 1 - 2 * (y*y + z*z))\n    \n    return np.array([roll, pitch, yaw])\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(description=\"Visualize motion data from npz file\")\n    parser.add_argument(\"--file\", type=str, required=True, help=\"Path to the npz file\")\n    args = parser.parse_args()\n    \n    visualize_motion(args.file) "
  },
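The `quaternion_to_euler` helper in `visualize_motion.py` assumes the (w, x, y, z) component order. A quick sanity check on a pure 90° yaw rotation about z:

```python
import numpy as np

def quaternion_to_euler(q):
    """Convert a (w, x, y, z) quaternion to Euler angles (roll, pitch, yaw)."""
    q = q / np.linalg.norm(q)
    w, x, y, z = q
    roll = np.arctan2(2 * (w*x + y*z), 1 - 2 * (x*x + y*y))
    pitch = np.arcsin(2 * (w*y - z*x))
    yaw = np.arctan2(2 * (w*z + x*y), 1 - 2 * (y*y + z*z))
    return np.array([roll, pitch, yaw])

# A 90-degree yaw about z: q = (cos(45 deg), 0, 0, sin(45 deg))
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
rpy = quaternion_to_euler(q)
print(np.degrees(rpy))  # approximately [0, 0, 90]
```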
  {
    "path": "play.py",
    "content": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-Clause\n\n\"\"\"\nScript to play a checkpoint of an RL agent from skrl.\n\nVisit the skrl documentation (https://skrl.readthedocs.io) to see the examples structured in\na more user-friendly way.\n\"\"\"\n\n\"\"\"Launch Isaac Sim Simulator first.\"\"\"\n\nimport argparse\nimport sys\n\nfrom isaaclab.app import AppLauncher\n\n# add argparse arguments\nparser = argparse.ArgumentParser(description=\"Play a checkpoint of an RL agent from skrl.\")\nparser.add_argument(\"--video\", action=\"store_true\", default=False, help=\"Record videos during training.\")\nparser.add_argument(\"--video_length\", type=int, default=200, help=\"Length of the recorded video (in steps).\")\nparser.add_argument(\n    \"--disable_fabric\", action=\"store_true\", default=False, help=\"Disable fabric and use USD I/O operations.\"\n)\nparser.add_argument(\"--num_envs\", type=int, default=None, help=\"Number of environments to simulate.\")\nparser.add_argument(\"--task\", type=str, default=None, help=\"Name of the task.\")\nparser.add_argument(\n    \"--agent\",\n    type=str,\n    default=None,\n    help=(\n        \"Name of the RL agent configuration entry point. 
Defaults to None, in which case the argument \"\n        \"--algorithm is used to determine the default agent configuration entry point.\"\n    ),\n)\nparser.add_argument(\"--checkpoint\", type=str, default=None, help=\"Path to model checkpoint.\")\nparser.add_argument(\"--seed\", type=int, default=None, help=\"Seed used for the environment\")\nparser.add_argument(\n    \"--use_pretrained_checkpoint\",\n    action=\"store_true\",\n    help=\"Use the pre-trained checkpoint from Nucleus.\",\n)\nparser.add_argument(\n    \"--ml_framework\",\n    type=str,\n    default=\"torch\",\n    choices=[\"torch\", \"jax\", \"jax-numpy\"],\n    help=\"The ML framework used for training the skrl agent.\",\n)\nparser.add_argument(\n    \"--algorithm\",\n    type=str,\n    default=\"PPO\",\n    choices=[\"AMP\", \"PPO\", \"IPPO\", \"MAPPO\"],\n    help=\"The RL algorithm used for training the skrl agent.\",\n)\nparser.add_argument(\"--real-time\", action=\"store_true\", default=False, help=\"Run in real-time, if possible.\")\n\n# append AppLauncher cli args\nAppLauncher.add_app_launcher_args(parser)\n# parse the arguments\nargs_cli, hydra_args = parser.parse_known_args()\n# always enable cameras to record video\nif args_cli.video:\n    args_cli.enable_cameras = True\n\n# clear out sys.argv for Hydra\nsys.argv = [sys.argv[0]] + hydra_args\n# launch omniverse app\napp_launcher = AppLauncher(args_cli)\nsimulation_app = app_launcher.app\n\n\"\"\"Rest everything follows.\"\"\"\n\nimport gymnasium as gym\nimport os\nimport random\nimport time\nimport torch\n\nimport skrl\nfrom packaging import version\n\n# check for minimum supported skrl version\nSKRL_VERSION = \"1.4.3\"\nif version.parse(skrl.__version__) < version.parse(SKRL_VERSION):\n    skrl.logger.error(\n        f\"Unsupported skrl version: {skrl.__version__}. 
\"\n        f\"Install supported version using 'pip install skrl>={SKRL_VERSION}'\"\n    )\n    exit()\n\nif args_cli.ml_framework.startswith(\"torch\"):\n    from skrl.utils.runner.torch import Runner\nelif args_cli.ml_framework.startswith(\"jax\"):\n    from skrl.utils.runner.jax import Runner\n\nfrom isaaclab.envs import (\n    DirectMARLEnv,\n    DirectMARLEnvCfg,\n    DirectRLEnvCfg,\n    ManagerBasedRLEnvCfg,\n    multi_agent_to_single_agent,\n)\nfrom isaaclab.utils.dict import print_dict\nfrom isaaclab.utils.pretrained_checkpoint import get_published_pretrained_checkpoint\n\nfrom isaaclab_rl.skrl import SkrlVecEnvWrapper\n\nimport isaaclab_tasks  # noqa: F401\nfrom isaaclab_tasks.utils import get_checkpoint_path\nfrom isaaclab_tasks.utils.hydra import hydra_task_config\n\n# PLACEHOLDER: Extension template (do not remove this comment)\n\n# config shortcuts\nif args_cli.agent is None:\n    algorithm = args_cli.algorithm.lower()\n    agent_cfg_entry_point = \"skrl_cfg_entry_point\" if algorithm in [\"ppo\"] else f\"skrl_{algorithm}_cfg_entry_point\"\nelse:\n    agent_cfg_entry_point = args_cli.agent\n    algorithm = agent_cfg_entry_point.split(\"_cfg\")[0].split(\"skrl_\")[-1].lower()\n\n\n@hydra_task_config(args_cli.task, agent_cfg_entry_point)\ndef main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, experiment_cfg: dict):\n    \"\"\"Play with skrl agent.\"\"\"\n    # grab task name for checkpoint path\n    task_name = args_cli.task.split(\":\")[-1]\n    train_task_name = task_name.replace(\"-Play\", \"\")\n\n    # override configurations with non-hydra CLI arguments\n    env_cfg.scene.num_envs = args_cli.num_envs if args_cli.num_envs is not None else env_cfg.scene.num_envs\n    env_cfg.sim.device = args_cli.device if args_cli.device is not None else env_cfg.sim.device\n\n    # configure the ML framework into the global skrl variable\n    if args_cli.ml_framework.startswith(\"jax\"):\n        skrl.config.jax.backend = \"jax\" if 
args_cli.ml_framework == \"jax\" else \"numpy\"\n\n        # randomly sample a seed if seed = -1\n    if args_cli.seed == -1:\n        args_cli.seed = random.randint(0, 10000)\n\n    # set the agent and environment seed from command line\n    # note: certain randomization occur in the environment initialization so we set the seed here\n    experiment_cfg[\"seed\"] = args_cli.seed if args_cli.seed is not None else experiment_cfg[\"seed\"]\n    env_cfg.seed = experiment_cfg[\"seed\"]\n\n    # specify directory for logging experiments (load checkpoint)\n    log_root_path = os.path.join(\"logs\", \"skrl\", experiment_cfg[\"agent\"][\"experiment\"][\"directory\"])\n    log_root_path = os.path.abspath(log_root_path)\n    print(f\"[INFO] Loading experiment from directory: {log_root_path}\")\n    # get checkpoint path\n    if args_cli.use_pretrained_checkpoint:\n        resume_path = get_published_pretrained_checkpoint(\"skrl\", train_task_name)\n        if not resume_path:\n            print(\"[INFO] Unfortunately a pre-trained checkpoint is currently unavailable for this task.\")\n            return\n    elif args_cli.checkpoint:\n        resume_path = os.path.abspath(args_cli.checkpoint)\n    else:\n        resume_path = get_checkpoint_path(\n            log_root_path, run_dir=f\".*_{algorithm}_{args_cli.ml_framework}\", other_dirs=[\"checkpoints\"]\n        )\n    log_dir = os.path.dirname(os.path.dirname(resume_path))\n\n    # set the log directory for the environment (works for all environment types)\n    env_cfg.log_dir = log_dir\n\n    # create isaac environment\n    env = gym.make(args_cli.task, cfg=env_cfg, render_mode=\"rgb_array\" if args_cli.video else None)\n\n    # convert to single-agent instance if required by the RL algorithm\n    if isinstance(env.unwrapped, DirectMARLEnv) and algorithm in [\"ppo\"]:\n        env = multi_agent_to_single_agent(env)\n\n    # get environment (step) dt for real-time evaluation\n    try:\n        dt = env.step_dt\n    except 
AttributeError:\n        dt = env.unwrapped.step_dt\n\n    # wrap for video recording\n    if args_cli.video:\n        video_kwargs = {\n            \"video_folder\": os.path.join(log_dir, \"videos\", \"play\"),\n            \"step_trigger\": lambda step: step == 0,\n            \"video_length\": args_cli.video_length,\n            \"disable_logger\": True,\n        }\n        print(\"[INFO] Recording videos during training.\")\n        print_dict(video_kwargs, nesting=4)\n        env = gym.wrappers.RecordVideo(env, **video_kwargs)\n\n    # wrap around environment for skrl\n    env = SkrlVecEnvWrapper(env, ml_framework=args_cli.ml_framework)  # same as: `wrap_env(env, wrapper=\"auto\")`\n\n    # configure and instantiate the skrl runner\n    # https://skrl.readthedocs.io/en/latest/api/utils/runner.html\n    experiment_cfg[\"trainer\"][\"close_environment_at_exit\"] = False\n    experiment_cfg[\"agent\"][\"experiment\"][\"write_interval\"] = 0  # don't log to TensorBoard\n    experiment_cfg[\"agent\"][\"experiment\"][\"checkpoint_interval\"] = 0  # don't generate checkpoints\n    runner = Runner(env, experiment_cfg)\n\n    print(f\"[INFO] Loading model checkpoint from: {resume_path}\")\n    runner.agent.load(resume_path)\n    # set agent to evaluation mode\n    runner.agent.set_running_mode(\"eval\")\n\n    # reset environment\n    obs, _ = env.reset()\n    timestep = 0\n    # simulate environment\n    while simulation_app.is_running():\n        start_time = time.time()\n\n        # run everything in inference mode\n        with torch.inference_mode():\n            # agent stepping\n            outputs = runner.agent.act(obs, timestep=0, timesteps=0)\n            # - multi-agent (deterministic) actions\n            if hasattr(env, \"possible_agents\"):\n                actions = {a: outputs[-1][a].get(\"mean_actions\", outputs[0][a]) for a in env.possible_agents}\n            # - single-agent (deterministic) actions\n            else:\n                actions = 
outputs[-1].get(\"mean_actions\", outputs[0])\n            # env stepping\n            obs, _, _, _, _ = env.step(actions)\n        if args_cli.video:\n            timestep += 1\n            # exit the play loop after recording one video\n            if timestep == args_cli.video_length:\n                break\n\n        # time delay for real-time evaluation\n        sleep_time = dt - (time.time() - start_time)\n        if args_cli.real_time and sleep_time > 0:\n            time.sleep(sleep_time)\n\n    # close the simulator\n    env.close()\n\n\nif __name__ == \"__main__\":\n    # run the main function\n    main()\n    # close sim app\n    simulation_app.close()\n"
  },
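The real-time pacing at the end of `play.py`'s loop (sleep only for whatever remains of the step period after the agent and environment step) can be isolated as a small helper. `run_realtime`, `step_fn`, and `num_steps` are illustrative names, not part of the script:

```python
import time

def run_realtime(step_fn, dt, num_steps):
    """Pace a loop at period dt, mirroring play.py's --real-time branch."""
    for _ in range(num_steps):
        start = time.time()
        step_fn()
        # Sleep only for the time left in this step's budget, if any;
        # if the step overran dt, continue immediately.
        sleep_time = dt - (time.time() - start)
        if sleep_time > 0:
            time.sleep(sleep_time)

t0 = time.time()
run_realtime(lambda: None, dt=0.01, num_steps=10)
print(f"elapsed: {time.time() - t0:.3f} s")  # roughly 0.1 s
```

If a step takes longer than `dt`, the loop falls behind rather than skipping frames, which matches the behaviour of the original script.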
  {
    "path": "pyproject.toml",
    "content": "[build-system]\nrequires = [\"setuptools>=65.0\"]\nbuild-backend = \"setuptools.build_meta\"\n\n[project]\nname = \"humanoid-amp\"\nversion = \"0.1.0\"\ndescription = \"Humanoid AMP tasks and utilities for Isaac Lab\"\nreadme = \"README.md\"\nrequires-python = \">=3.10\"\nauthors = [\n  { name = \"Unitree Robotics Hobbyist\" }\n]\nlicense = \"BSD-3-Clause\"\nclassifiers = [\n  \"Programming Language :: Python :: 3\",\n  \"Programming Language :: Python :: 3.10\",\n  \"Operating System :: POSIX :: Linux\"\n]\n\n[tool.setuptools]\npackages = [\"humanoid_amp\"]\n\n[tool.setuptools.package-dir]\nhumanoid_amp = \".\"\n\n[tool.setuptools.package-data]\nhumanoid_amp = [\n  \"motions/**/*\",\n  \"usd/**/*\"\n]\n"
  },
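With the `[tool.setuptools.package-data]` globs above, the motion and USD files ship inside the installed package, so they can be located at runtime with `importlib.resources` instead of paths relative to the source tree. A hedged sketch: for this project the anchor would presumably be `"humanoid_amp"` with a subpath like `"motions"`, but since that package may not be installed here, the stdlib `email` package (which has an analogous subdirectory layout) stands in:

```python
from importlib import resources

# For the real project this would be resources.files("humanoid_amp") / "motions";
# the stdlib 'email' package and its 'mime' subdirectory stand in here.
pkg_root = resources.files("email")
print(pkg_root.joinpath("mime").is_dir())  # True
```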
  {
    "path": "sync_skrl_scripts.sh",
    "content": "#!/usr/bin/env bash\n# Sync skrl train/play scripts from Isaac Lab releases and reapply local tweaks.\nset -euo pipefail\n\nBASE_URL=\"https://raw.githubusercontent.com/isaac-sim/IsaacLab/main/scripts/reinforcement_learning/skrl\"\nFILES=(train.py play.py)\n\nfetch() {\n  local url=\"$1\"\n  local dest=\"$2\"\n  curl -fsSL \"${url}\" -o \"${dest}\"\n}\n\nfor name in \"${FILES[@]}\"; do\n  url=\"${BASE_URL}/${name}\"\n  echo \"Fetching ${url}\"\n  fetch \"${url}\" \"${name}\"\ndone\n\necho \"Done.\"\n"
  },
  {
    "path": "train.py",
    "content": "# Copyright (c) 2022-2025, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).\n# All rights reserved.\n#\n# SPDX-License-Identifier: BSD-3-Clause\n\n\"\"\"\nScript to train RL agent with skrl.\n\nVisit the skrl documentation (https://skrl.readthedocs.io) to see the examples structured in\na more user-friendly way.\n\"\"\"\n\n\"\"\"Launch Isaac Sim Simulator first.\"\"\"\n\nimport argparse\nimport sys\n\nfrom isaaclab.app import AppLauncher\n\n# add argparse arguments\nparser = argparse.ArgumentParser(description=\"Train an RL agent with skrl.\")\nparser.add_argument(\"--video\", action=\"store_true\", default=False, help=\"Record videos during training.\")\nparser.add_argument(\"--video_length\", type=int, default=200, help=\"Length of the recorded video (in steps).\")\nparser.add_argument(\"--video_interval\", type=int, default=2000, help=\"Interval between video recordings (in steps).\")\nparser.add_argument(\"--num_envs\", type=int, default=None, help=\"Number of environments to simulate.\")\nparser.add_argument(\"--task\", type=str, default=None, help=\"Name of the task.\")\nparser.add_argument(\n    \"--agent\",\n    type=str,\n    default=None,\n    help=(\n        \"Name of the RL agent configuration entry point. 
Defaults to None, in which case the argument \"\n        \"--algorithm is used to determine the default agent configuration entry point.\"\n    ),\n)\nparser.add_argument(\"--seed\", type=int, default=None, help=\"Seed used for the environment\")\nparser.add_argument(\n    \"--distributed\", action=\"store_true\", default=False, help=\"Run training with multiple GPUs or nodes.\"\n)\nparser.add_argument(\"--checkpoint\", type=str, default=None, help=\"Path to model checkpoint to resume training.\")\nparser.add_argument(\"--max_iterations\", type=int, default=None, help=\"RL Policy training iterations.\")\nparser.add_argument(\"--export_io_descriptors\", action=\"store_true\", default=False, help=\"Export IO descriptors.\")\nparser.add_argument(\n    \"--ml_framework\",\n    type=str,\n    default=\"torch\",\n    choices=[\"torch\", \"jax\", \"jax-numpy\"],\n    help=\"The ML framework used for training the skrl agent.\",\n)\nparser.add_argument(\n    \"--algorithm\",\n    type=str,\n    default=\"PPO\",\n    choices=[\"AMP\", \"PPO\", \"IPPO\", \"MAPPO\"],\n    help=\"The RL algorithm used for training the skrl agent.\",\n)\n\n# append AppLauncher cli args\nAppLauncher.add_app_launcher_args(parser)\n# parse the arguments\nargs_cli, hydra_args = parser.parse_known_args()\n# always enable cameras to record video\nif args_cli.video:\n    args_cli.enable_cameras = True\n\n# clear out sys.argv for Hydra\nsys.argv = [sys.argv[0]] + hydra_args\n\n# launch omniverse app\napp_launcher = AppLauncher(args_cli)\nsimulation_app = app_launcher.app\n\n\"\"\"Rest everything follows.\"\"\"\n\nimport gymnasium as gym\nimport os\nimport random\nfrom datetime import datetime\n\nimport omni\nimport skrl\nfrom packaging import version\n\n# check for minimum supported skrl version\nSKRL_VERSION = \"1.4.3\"\nif version.parse(skrl.__version__) < version.parse(SKRL_VERSION):\n    skrl.logger.error(\n        f\"Unsupported skrl version: {skrl.__version__}. 
\"\n        f\"Install supported version using 'pip install skrl>={SKRL_VERSION}'\"\n    )\n    exit()\n\nif args_cli.ml_framework.startswith(\"torch\"):\n    from skrl.utils.runner.torch import Runner\nelif args_cli.ml_framework.startswith(\"jax\"):\n    from skrl.utils.runner.jax import Runner\n\nfrom isaaclab.envs import (\n    DirectMARLEnv,\n    DirectMARLEnvCfg,\n    DirectRLEnvCfg,\n    ManagerBasedRLEnvCfg,\n    multi_agent_to_single_agent,\n)\nfrom isaaclab.utils.assets import retrieve_file_path\nfrom isaaclab.utils.dict import print_dict\nfrom isaaclab.utils.io import dump_yaml\n\nfrom isaaclab_rl.skrl import SkrlVecEnvWrapper\n\nimport isaaclab_tasks  # noqa: F401\nfrom isaaclab_tasks.utils.hydra import hydra_task_config\n\n# PLACEHOLDER: Extension template (do not remove this comment)\n\n# config shortcuts\nif args_cli.agent is None:\n    algorithm = args_cli.algorithm.lower()\n    agent_cfg_entry_point = \"skrl_cfg_entry_point\" if algorithm in [\"ppo\"] else f\"skrl_{algorithm}_cfg_entry_point\"\nelse:\n    agent_cfg_entry_point = args_cli.agent\n    algorithm = agent_cfg_entry_point.split(\"_cfg\")[0].split(\"skrl_\")[-1].lower()\n\n\n@hydra_task_config(args_cli.task, agent_cfg_entry_point)\ndef main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: dict):\n    \"\"\"Train with skrl agent.\"\"\"\n    # override configurations with non-hydra CLI arguments\n    env_cfg.scene.num_envs = args_cli.num_envs if args_cli.num_envs is not None else env_cfg.scene.num_envs\n    env_cfg.sim.device = args_cli.device if args_cli.device is not None else env_cfg.sim.device\n\n    # multi-gpu training config\n    if args_cli.distributed:\n        env_cfg.sim.device = f\"cuda:{app_launcher.local_rank}\"\n    # max iterations for training\n    if args_cli.max_iterations:\n        agent_cfg[\"trainer\"][\"timesteps\"] = args_cli.max_iterations * agent_cfg[\"agent\"][\"rollouts\"]\n    agent_cfg[\"trainer\"][\"close_environment_at_exit\"] = 
False\n    # configure the ML framework into the global skrl variable\n    if args_cli.ml_framework.startswith(\"jax\"):\n        skrl.config.jax.backend = \"jax\" if args_cli.ml_framework == \"jax\" else \"numpy\"\n\n    # randomly sample a seed if seed = -1\n    if args_cli.seed == -1:\n        args_cli.seed = random.randint(0, 10000)\n\n    # set the agent and environment seed from command line\n    # note: certain randomization occur in the environment initialization so we set the seed here\n    agent_cfg[\"seed\"] = args_cli.seed if args_cli.seed is not None else agent_cfg[\"seed\"]\n    env_cfg.seed = agent_cfg[\"seed\"]\n\n    # specify directory for logging experiments\n    log_root_path = os.path.join(\"logs\", \"skrl\", agent_cfg[\"agent\"][\"experiment\"][\"directory\"])\n    log_root_path = os.path.abspath(log_root_path)\n    print(f\"[INFO] Logging experiment in directory: {log_root_path}\")\n    # specify directory for logging runs: {time-stamp}_{run_name}\n    log_dir = datetime.now().strftime(\"%Y-%m-%d_%H-%M-%S\") + f\"_{algorithm}_{args_cli.ml_framework}\"\n    # The Ray Tune workflow extracts experiment name using the logging line below, hence, do not change it (see PR #2346, comment-2819298849)\n    print(f\"Exact experiment name requested from command line: {log_dir}\")\n    if agent_cfg[\"agent\"][\"experiment\"][\"experiment_name\"]:\n        log_dir += f'_{agent_cfg[\"agent\"][\"experiment\"][\"experiment_name\"]}'\n    # set directory into agent config\n    agent_cfg[\"agent\"][\"experiment\"][\"directory\"] = log_root_path\n    agent_cfg[\"agent\"][\"experiment\"][\"experiment_name\"] = log_dir\n    # update log_dir\n    log_dir = os.path.join(log_root_path, log_dir)\n\n    # dump the configuration into log-directory\n    dump_yaml(os.path.join(log_dir, \"params\", \"env.yaml\"), env_cfg)\n    dump_yaml(os.path.join(log_dir, \"params\", \"agent.yaml\"), agent_cfg)\n\n    # get checkpoint path (to resume training)\n    resume_path = 
retrieve_file_path(args_cli.checkpoint) if args_cli.checkpoint else None\n\n    # set the IO descriptors export flag if requested\n    if isinstance(env_cfg, ManagerBasedRLEnvCfg):\n        env_cfg.export_io_descriptors = args_cli.export_io_descriptors\n    else:\n        omni.log.warn(\n            \"IO descriptors are only supported for manager based RL environments. No IO descriptors will be exported.\"\n        )\n\n    # set the log directory for the environment (works for all environment types)\n    env_cfg.log_dir = log_dir\n\n    # create isaac environment\n    env = gym.make(args_cli.task, cfg=env_cfg, render_mode=\"rgb_array\" if args_cli.video else None)\n\n    # convert to single-agent instance if required by the RL algorithm\n    if isinstance(env.unwrapped, DirectMARLEnv) and algorithm in [\"ppo\"]:\n        env = multi_agent_to_single_agent(env)\n\n    # wrap for video recording\n    if args_cli.video:\n        video_kwargs = {\n            \"video_folder\": os.path.join(log_dir, \"videos\", \"train\"),\n            \"step_trigger\": lambda step: step % args_cli.video_interval == 0,\n            \"video_length\": args_cli.video_length,\n            \"disable_logger\": True,\n        }\n        print(\"[INFO] Recording videos during training.\")\n        print_dict(video_kwargs, nesting=4)\n        env = gym.wrappers.RecordVideo(env, **video_kwargs)\n\n    # wrap around environment for skrl\n    env = SkrlVecEnvWrapper(env, ml_framework=args_cli.ml_framework)  # same as: `wrap_env(env, wrapper=\"auto\")`\n\n    # configure and instantiate the skrl runner\n    # https://skrl.readthedocs.io/en/latest/api/utils/runner.html\n    runner = Runner(env, agent_cfg)\n\n    # load checkpoint (if specified)\n    if resume_path:\n        print(f\"[INFO] Loading model checkpoint from: {resume_path}\")\n        runner.agent.load(resume_path)\n\n    # run training\n    runner.run()\n\n    # close the simulator\n    env.close()\n\n\nif __name__ == \"__main__\":\n    # 
run the main function\n    main()\n    # close sim app\n    simulation_app.close()\n"
  },
  {
    "path": "usd/g1_29dof_rev_1_0.urdf",
    "content": "<robot name=\"g1_29dof_rev_1_0\">\r\n  <mujoco>\r\n    <compiler meshdir=\"meshes\" discardvisual=\"false\"/>\r\n  </mujoco>\r\n\r\n  <!-- [CAUTION] uncomment when convert to mujoco -->\r\n  <!-- <link name=\"world\"></link>\r\n  <joint name=\"floating_base_joint\" type=\"floating\">\r\n    <parent link=\"world\"/>\r\n    <child link=\"pelvis\"/>\r\n  </joint> -->\r\n\r\n  <link name=\"pelvis\">\r\n    <inertial>\r\n      <origin xyz=\"0 0 -0.07605\" rpy=\"0 0 0\"/>\r\n      <mass value=\"3.813\"/>\r\n      <inertia ixx=\"0.010549\" ixy=\"0\" ixz=\"2.1E-06\" iyy=\"0.0093089\" iyz=\"0\" izz=\"0.0079184\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/pelvis.STL\"/>\r\n      </geometry>\r\n      <material name=\"dark\">\r\n        <color rgba=\"0.2 0.2 0.2 1\"/>\r\n      </material>\r\n    </visual>\r\n  </link>\r\n  <link name=\"pelvis_contour_link\">\r\n    <inertial>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.001\"/>\r\n      <inertia ixx=\"1e-7\" ixy=\"0\" ixz=\"0\" iyy=\"1e-7\" iyz=\"0\" izz=\"1e-7\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/pelvis_contour_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/pelvis_contour_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"pelvis_contour_joint\" type=\"fixed\">\r\n    <parent link=\"pelvis\"/>\r\n    <child link=\"pelvis_contour_link\"/>\r\n  </joint>\r\n\r\n  <!-- Legs -->\r\n  <link name=\"left_hip_pitch_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.002741 0.047791 -0.02606\" rpy=\"0 0 0\"/>\r\n      <mass value=\"1.35\"/>\r\n  
    <inertia ixx=\"0.001811\" ixy=\"3.68E-05\" ixz=\"-3.44E-05\" iyy=\"0.0014193\" iyz=\"0.000171\" izz=\"0.0012812\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_hip_pitch_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"dark\">\r\n        <color rgba=\"0.2 0.2 0.2 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_hip_pitch_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_hip_pitch_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 0.064452 -0.1027\" rpy=\"0 0 0\"/>\r\n    <parent link=\"pelvis\"/>\r\n    <child link=\"left_hip_pitch_link\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <limit lower=\"-2.5307\" upper=\"2.8798\" effort=\"88\" velocity=\"32\"/>\r\n  </joint>\r\n  <link name=\"left_hip_roll_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.029812 -0.001045 -0.087934\" rpy=\"0 0 0\"/>\r\n      <mass value=\"1.52\"/>\r\n      <inertia ixx=\"0.0023773\" ixy=\"-3.8E-06\" ixz=\"-0.0003908\" iyy=\"0.0024123\" iyz=\"1.84E-05\" izz=\"0.0016595\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_hip_roll_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_hip_roll_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_hip_roll_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 0.052 -0.030465\" rpy=\"0 -0.1749 0\"/>\r\n    <parent link=\"left_hip_pitch_link\"/>\r\n    <child link=\"left_hip_roll_link\"/>\r\n    <axis xyz=\"1 0 0\"/>\r\n    <limit 
lower=\"-0.5236\" upper=\"2.9671\" effort=\"139\" velocity=\"20\"/>\r\n  </joint>\r\n  <link name=\"left_hip_yaw_link\">\r\n    <inertial>\r\n      <origin xyz=\"-0.057709 -0.010981 -0.15078\" rpy=\"0 0 0\"/>\r\n      <mass value=\"1.702\"/>\r\n      <inertia ixx=\"0.0057774\" ixy=\"-0.0005411\" ixz=\"-0.0023948\" iyy=\"0.0076124\" iyz=\"-0.0007072\" izz=\"0.003149\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_hip_yaw_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_hip_yaw_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_hip_yaw_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.025001 0 -0.12412\" rpy=\"0 0 0\"/>\r\n    <parent link=\"left_hip_roll_link\"/>\r\n    <child link=\"left_hip_yaw_link\"/>\r\n    <axis xyz=\"0 0 1\"/>\r\n    <limit lower=\"-2.7576\" upper=\"2.7576\" effort=\"88\" velocity=\"32\"/>\r\n  </joint>\r\n  <link name=\"left_knee_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.005457 0.003964 -0.12074\" rpy=\"0 0 0\"/>\r\n      <mass value=\"1.932\"/>\r\n      <inertia ixx=\"0.011329\" ixy=\"4.82E-05\" ixz=\"-4.49E-05\" iyy=\"0.011277\" iyz=\"-0.0007146\" izz=\"0.0015168\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_knee_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_knee_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint 
name=\"left_knee_joint\" type=\"revolute\">\r\n    <origin xyz=\"-0.078273 0.0021489 -0.17734\" rpy=\"0 0.1749 0\"/>\r\n    <parent link=\"left_hip_yaw_link\"/>\r\n    <child link=\"left_knee_link\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <limit lower=\"-0.087267\" upper=\"2.8798\" effort=\"139\" velocity=\"20\"/>\r\n  </joint>\r\n  <link name=\"left_ankle_pitch_link\">\r\n    <inertial>\r\n      <origin xyz=\"-0.007269 0 0.011137\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.074\"/>\r\n      <inertia ixx=\"8.4E-06\" ixy=\"0\" ixz=\"-2.9E-06\" iyy=\"1.89E-05\" iyz=\"0\" izz=\"1.26E-05\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_ankle_pitch_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_ankle_pitch_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_ankle_pitch_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 -9.4445E-05 -0.30001\" rpy=\"0 0 0\"/>\r\n    <parent link=\"left_knee_link\"/>\r\n    <child link=\"left_ankle_pitch_link\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <limit lower=\"-0.87267\" upper=\"0.5236\" effort=\"50\" velocity=\"37\"/>\r\n  </joint>\r\n  <link name=\"left_ankle_roll_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.026505 0 -0.016425\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.608\"/>\r\n      <inertia ixx=\"0.0002231\" ixy=\"2E-07\" ixz=\"8.91E-05\" iyy=\"0.0016161\" iyz=\"-1E-07\" izz=\"0.0016667\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_ankle_roll_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"dark\">\r\n        <color rgba=\"0.2 0.2 0.2 1\"/>\r\n      
</material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"-0.05 0.025 -0.03\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <sphere radius=\"0.005\"/>\r\n      </geometry>\r\n    </collision>\r\n    <collision>\r\n      <origin xyz=\"-0.05 -0.025 -0.03\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <sphere radius=\"0.005\"/>\r\n      </geometry>\r\n    </collision>\r\n    <collision>\r\n      <origin xyz=\"0.12 0.03 -0.03\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <sphere radius=\"0.005\"/>\r\n      </geometry>\r\n    </collision>\r\n    <collision>\r\n      <origin xyz=\"0.12 -0.03 -0.03\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <sphere radius=\"0.005\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_ankle_roll_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 0 -0.017558\" rpy=\"0 0 0\"/>\r\n    <parent link=\"left_ankle_pitch_link\"/>\r\n    <child link=\"left_ankle_roll_link\"/>\r\n    <axis xyz=\"1 0 0\"/>\r\n    <limit lower=\"-0.2618\" upper=\"0.2618\" effort=\"50\" velocity=\"37\"/>\r\n  </joint>\r\n  <link name=\"right_hip_pitch_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.002741 -0.047791 -0.02606\" rpy=\"0 0 0\"/>\r\n      <mass value=\"1.35\"/>\r\n      <inertia ixx=\"0.001811\" ixy=\"-3.68E-05\" ixz=\"-3.44E-05\" iyy=\"0.0014193\" iyz=\"-0.000171\" izz=\"0.0012812\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_hip_pitch_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"dark\">\r\n        <color rgba=\"0.2 0.2 0.2 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_hip_pitch_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_hip_pitch_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 -0.064452 -0.1027\" rpy=\"0 0 0\"/>\r\n 
   <parent link=\"pelvis\"/>\r\n    <child link=\"right_hip_pitch_link\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <limit lower=\"-2.5307\" upper=\"2.8798\" effort=\"88\" velocity=\"32\"/>\r\n  </joint>\r\n  <link name=\"right_hip_roll_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.029812 0.001045 -0.087934\" rpy=\"0 0 0\"/>\r\n      <mass value=\"1.52\"/>\r\n      <inertia ixx=\"0.0023773\" ixy=\"3.8E-06\" ixz=\"-0.0003908\" iyy=\"0.0024123\" iyz=\"-1.84E-05\" izz=\"0.0016595\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_hip_roll_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_hip_roll_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_hip_roll_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 -0.052 -0.030465\" rpy=\"0 -0.1749 0\"/>\r\n    <parent link=\"right_hip_pitch_link\"/>\r\n    <child link=\"right_hip_roll_link\"/>\r\n    <axis xyz=\"1 0 0\"/>\r\n    <limit lower=\"-2.9671\" upper=\"0.5236\" effort=\"139\" velocity=\"20\"/>\r\n  </joint>\r\n  <link name=\"right_hip_yaw_link\">\r\n    <inertial>\r\n      <origin xyz=\"-0.057709 0.010981 -0.15078\" rpy=\"0 0 0\"/>\r\n      <mass value=\"1.702\"/>\r\n      <inertia ixx=\"0.0057774\" ixy=\"0.0005411\" ixz=\"-0.0023948\" iyy=\"0.0076124\" iyz=\"0.0007072\" izz=\"0.003149\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_hip_yaw_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      
<geometry>\r\n        <mesh filename=\"meshes/right_hip_yaw_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_hip_yaw_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.025001 0 -0.12412\" rpy=\"0 0 0\"/>\r\n    <parent link=\"right_hip_roll_link\"/>\r\n    <child link=\"right_hip_yaw_link\"/>\r\n    <axis xyz=\"0 0 1\"/>\r\n    <limit lower=\"-2.7576\" upper=\"2.7576\" effort=\"88\" velocity=\"32\"/>\r\n  </joint>\r\n  <link name=\"right_knee_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.005457 -0.003964 -0.12074\" rpy=\"0 0 0\"/>\r\n      <mass value=\"1.932\"/>\r\n      <inertia ixx=\"0.011329\" ixy=\"-4.82E-05\" ixz=\"4.49E-05\" iyy=\"0.011277\" iyz=\"0.0007146\" izz=\"0.0015168\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_knee_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_knee_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_knee_joint\" type=\"revolute\">\r\n    <origin xyz=\"-0.078273 -0.0021489 -0.17734\" rpy=\"0 0.1749 0\"/>\r\n    <parent link=\"right_hip_yaw_link\"/>\r\n    <child link=\"right_knee_link\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <limit lower=\"-0.087267\" upper=\"2.8798\" effort=\"139\" velocity=\"20\"/>\r\n  </joint>\r\n  <link name=\"right_ankle_pitch_link\">\r\n    <inertial>\r\n      <origin xyz=\"-0.007269 0 0.011137\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.074\"/>\r\n      <inertia ixx=\"8.4E-06\" ixy=\"0\" ixz=\"-2.9E-06\" iyy=\"1.89E-05\" iyz=\"0\" izz=\"1.26E-05\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh 
filename=\"meshes/right_ankle_pitch_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_ankle_pitch_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_ankle_pitch_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 9.4445E-05 -0.30001\" rpy=\"0 0 0\"/>\r\n    <parent link=\"right_knee_link\"/>\r\n    <child link=\"right_ankle_pitch_link\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <limit lower=\"-0.87267\" upper=\"0.5236\" effort=\"50\" velocity=\"37\"/>\r\n  </joint>\r\n  <link name=\"right_ankle_roll_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.026505 0 -0.016425\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.608\"/>\r\n      <inertia ixx=\"0.0002231\" ixy=\"-2E-07\" ixz=\"8.91E-05\" iyy=\"0.0016161\" iyz=\"1E-07\" izz=\"0.0016667\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_ankle_roll_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"dark\">\r\n        <color rgba=\"0.2 0.2 0.2 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"-0.05 0.025 -0.03\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <sphere radius=\"0.005\"/>\r\n      </geometry>\r\n    </collision>\r\n    <collision>\r\n      <origin xyz=\"-0.05 -0.025 -0.03\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <sphere radius=\"0.005\"/>\r\n      </geometry>\r\n    </collision>\r\n    <collision>\r\n      <origin xyz=\"0.12 0.03 -0.03\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <sphere radius=\"0.005\"/>\r\n      </geometry>\r\n    </collision>\r\n    <collision>\r\n      <origin xyz=\"0.12 -0.03 -0.03\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <sphere radius=\"0.005\"/>\r\n      
</geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_ankle_roll_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 0 -0.017558\" rpy=\"0 0 0\"/>\r\n    <parent link=\"right_ankle_pitch_link\"/>\r\n    <child link=\"right_ankle_roll_link\"/>\r\n    <axis xyz=\"1 0 0\"/>\r\n    <limit lower=\"-0.2618\" upper=\"0.2618\" effort=\"50\" velocity=\"37\"/>\r\n  </joint>\r\n\r\n  <!-- Torso -->\r\n  <link name=\"waist_yaw_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.003494 0.000233 0.018034\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.214\"/>\r\n      <inertia ixx=\"0.00010673\" ixy=\"2.703E-06\" ixz=\"-7.631E-06\" iyy=\"0.00010422\" iyz=\"-2.01E-07\" izz=\"0.0001625\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/waist_yaw_link_rev_1_0.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n  </link>\r\n  <joint name=\"waist_yaw_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n    <parent link=\"pelvis\"/>\r\n    <child link=\"waist_yaw_link\"/>\r\n    <axis xyz=\"0 0 1\"/>\r\n    <limit lower=\"-2.618\" upper=\"2.618\" effort=\"88\" velocity=\"32\"/>\r\n  </joint>\r\n  <link name=\"waist_roll_link\">\r\n    <inertial>\r\n      <origin xyz=\"0 2.3E-05 0\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.086\"/>\r\n      <inertia ixx=\"7.079E-06\" ixy=\"0\" ixz=\"0\" iyy=\"6.339E-06\" iyz=\"0\" izz=\"8.245E-06\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/waist_roll_link_rev_1_0.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n  </link>\r\n  <joint name=\"waist_roll_joint\" type=\"revolute\">\r\n    <origin xyz=\"-0.0039635 0 0.044\" rpy=\"0 0 0\"/>\r\n    <parent 
link=\"waist_yaw_link\"/>\r\n    <child link=\"waist_roll_link\"/>\r\n    <axis xyz=\"1 0 0\"/>\r\n    <limit lower=\"-0.52\" upper=\"0.52\" effort=\"50\" velocity=\"37\"/>\r\n  </joint>\r\n  <link name=\"torso_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.000931 0.000346 0.15082\" rpy=\"0 0 0\"/>\r\n      <mass value=\"6.78\"/>\r\n      <inertia ixx=\"0.05905\" ixy=\"3.3302E-05\" ixz=\"-0.0017715\" iyy=\"0.047014\" iyz=\"-2.2399E-05\" izz=\"0.025652\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/torso_link_rev_1_0.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/torso_link_rev_1_0.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"waist_pitch_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n    <parent link=\"waist_roll_link\"/>\r\n    <child link=\"torso_link\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <limit lower=\"-0.52\" upper=\"0.52\" effort=\"50\" velocity=\"37\"/>\r\n  </joint>\r\n\r\n  <!-- LOGO -->\r\n  <joint name=\"logo_joint\" type=\"fixed\">\r\n    <origin xyz=\"0.0039635 0 -0.044\" rpy=\"0 0 0\"/>\r\n    <parent link=\"torso_link\"/>\r\n    <child link=\"logo_link\"/>\r\n  </joint>\r\n  <link name=\"logo_link\">\r\n    <inertial>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.001\"/>\r\n      <inertia ixx=\"1e-7\" ixy=\"0\" ixz=\"0\" iyy=\"1e-7\" iyz=\"0\" izz=\"1e-7\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/logo_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"dark\">\r\n        <color rgba=\"0.2 0.2 0.2 1\"/>\r\n      </material>\r\n    </visual>\r\n    
<collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/logo_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n\r\n  <!-- Head -->\r\n  <link name=\"head_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.005267 0.000299 0.449869\" rpy=\"0 0 0\"/>\r\n      <mass value=\"1.036\"/>\r\n      <inertia ixx=\"0.004085051\" ixy=\"-2.543E-06\" ixz=\"-6.9455E-05\" iyy=\"0.004185212\" iyz=\"-3.726E-06\" izz=\"0.001807911\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/head_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"dark\">\r\n        <color rgba=\"0.2 0.2 0.2 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/head_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"head_joint\" type=\"fixed\">\r\n    <origin xyz=\"0.0039635 0 -0.044\" rpy=\"0 0 0\"/>\r\n    <parent link=\"torso_link\"/>\r\n    <child link=\"head_link\"/>\r\n  </joint>\r\n\r\n\r\n  <!-- IMU -->\r\n  <link name=\"imu_in_torso\"></link>\r\n  <joint name=\"imu_in_torso_joint\" type=\"fixed\">\r\n    <origin xyz=\"-0.03959 -0.00224 0.14792\" rpy=\"0 0 0\"/>\r\n    <parent link=\"torso_link\"/>\r\n    <child link=\"imu_in_torso\"/>\r\n  </joint>\r\n\r\n  <link name=\"imu_in_pelvis\"></link>\r\n  <joint name=\"imu_in_pelvis_joint\" type=\"fixed\">\r\n    <origin xyz=\"0.04525 0 -0.08339\" rpy=\"0 0 0\"/>\r\n    <parent link=\"pelvis\"/>\r\n    <child link=\"imu_in_pelvis\"/>\r\n  </joint>\r\n\r\n  <!-- d435 -->\r\n  <link name=\"d435_link\"></link>\r\n  <joint name=\"d435_joint\" type=\"fixed\">\r\n    <origin xyz=\"0.0576235 0.01753 0.42987\" rpy=\"0 0.8307767239493009 0\"/>\r\n    <parent link=\"torso_link\"/>\r\n    <child link=\"d435_link\"/>\r\n  </joint>\r\n\r\n  <!-- mid360 
-->\r\n  <link name=\"mid360_link\"></link>\r\n  <joint name=\"mid360_joint\" type=\"fixed\">\r\n    <origin xyz=\"0.0002835 0.00003 0.41618\" rpy=\"0 0.04014257279586953 0\"/>\r\n    <parent link=\"torso_link\"/>\r\n    <child link=\"mid360_link\"/>\r\n  </joint>\r\n\r\n  <!-- Arm -->\r\n  <link name=\"left_shoulder_pitch_link\">\r\n    <inertial>\r\n      <origin xyz=\"0 0.035892 -0.011628\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.718\"/>\r\n      <inertia ixx=\"0.0004291\" ixy=\"-9.2E-06\" ixz=\"6.4E-06\" iyy=\"0.000453\" iyz=\"2.26E-05\" izz=\"0.000423\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_shoulder_pitch_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0.04 -0.01\" rpy=\"0 1.5707963267948966 0\"/>\r\n      <geometry>\r\n        <cylinder radius=\"0.03\" length=\"0.05\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_shoulder_pitch_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.0039563 0.10022 0.24778\" rpy=\"0.27931 5.4949E-05 -0.00019159\"/>\r\n    <parent link=\"torso_link\"/>\r\n    <child link=\"left_shoulder_pitch_link\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <limit lower=\"-3.0892\" upper=\"2.6704\" effort=\"25\" velocity=\"37\"/>\r\n  </joint>\r\n  <link name=\"left_shoulder_roll_link\">\r\n    <inertial>\r\n      <origin xyz=\"-0.000227 0.00727 -0.063243\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.643\"/>\r\n      <inertia ixx=\"0.0006177\" ixy=\"-1E-06\" ixz=\"8.7E-06\" iyy=\"0.0006912\" iyz=\"-5.3E-06\" izz=\"0.0003894\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_shoulder_roll_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color 
rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"-0.004 0.006 -0.053\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <cylinder radius=\"0.03\" length=\"0.03\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_shoulder_roll_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 0.038 -0.013831\" rpy=\"-0.27925 0 0\"/>\r\n    <parent link=\"left_shoulder_pitch_link\"/>\r\n    <child link=\"left_shoulder_roll_link\"/>\r\n    <axis xyz=\"1 0 0\"/>\r\n    <limit lower=\"-1.5882\" upper=\"2.2515\" effort=\"25\" velocity=\"37\"/>\r\n  </joint>\r\n  <link name=\"left_shoulder_yaw_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.010773 -0.002949 -0.072009\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.734\"/>\r\n      <inertia ixx=\"0.0009988\" ixy=\"7.9E-06\" ixz=\"0.0001412\" iyy=\"0.0010605\" iyz=\"-2.86E-05\" izz=\"0.0004354\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_shoulder_yaw_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_shoulder_yaw_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_shoulder_yaw_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 0.00624 -0.1032\" rpy=\"0 0 0\"/>\r\n    <parent link=\"left_shoulder_roll_link\"/>\r\n    <child link=\"left_shoulder_yaw_link\"/>\r\n    <axis xyz=\"0 0 1\"/>\r\n    <limit lower=\"-2.618\" upper=\"2.618\" effort=\"25\" velocity=\"37\"/>\r\n  </joint>\r\n  <link name=\"left_elbow_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.064956 0.004454 -0.010062\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.6\"/>\r\n      <inertia ixx=\"0.0002891\" ixy=\"6.53E-05\" ixz=\"1.72E-05\" 
iyy=\"0.0004152\" iyz=\"-5.6E-06\" izz=\"0.0004197\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_elbow_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_elbow_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_elbow_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.015783 0 -0.080518\" rpy=\"0 0 0\"/>\r\n    <parent link=\"left_shoulder_yaw_link\"/>\r\n    <child link=\"left_elbow_link\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <limit lower=\"-1.0472\" upper=\"2.0944\" effort=\"25\" velocity=\"37\"/>\r\n  </joint>\r\n  <joint name=\"left_wrist_roll_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.100 0.00188791 -0.010\" rpy=\"0 0 0\"/>\r\n    <axis xyz=\"1 0 0\"/>\r\n    <parent link=\"left_elbow_link\"/>\r\n    <child link=\"left_wrist_roll_link\"/>\r\n    <limit effort=\"25\" velocity=\"37\" lower=\"-1.972222054\" upper=\"1.972222054\"/>\r\n  </joint>\r\n  <link name=\"left_wrist_roll_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.01713944778 0.00053759094 0.00000048864\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.08544498\"/>\r\n      <inertia ixx=\"0.00004821544023\" ixy=\"-0.00000424511021\" ixz=\"0.00000000510599\" iyy=\"0.00003722899093\" iyz=\"-0.00000000123525\" izz=\"0.00005482106541\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_wrist_roll_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh 
filename=\"meshes/left_wrist_roll_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_wrist_pitch_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.038 0 0\" rpy=\"0 0 0\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <parent link=\"left_wrist_roll_link\"/>\r\n    <child link=\"left_wrist_pitch_link\"/>\r\n    <limit effort=\"5\" velocity=\"22\" lower=\"-1.614429558\" upper=\"1.614429558\"/>\r\n  </joint>\r\n  <link name=\"left_wrist_pitch_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.02299989837 -0.00111685314 -0.00111658096\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.48404956\"/>\r\n      <inertia ixx=\"0.00016579646273\" ixy=\"-0.00001231206746\" ixz=\"0.00001231699194\" iyy=\"0.00042954057410\" iyz=\"0.00000081417712\" izz=\"0.00042953697654\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_wrist_pitch_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_wrist_pitch_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_wrist_yaw_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.046 0 0\" rpy=\"0 0 0\"/>\r\n    <axis xyz=\"0 0 1\"/>\r\n    <parent link=\"left_wrist_pitch_link\"/>\r\n    <child link=\"left_wrist_yaw_link\"/>\r\n    <limit effort=\"5\" velocity=\"22\" lower=\"-1.614429558\" upper=\"1.614429558\"/>\r\n  </joint>\r\n  <link name=\"left_wrist_yaw_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.02200381568 0.00049485096 0.00053861123\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.08457647\"/>\r\n      <inertia ixx=\"0.00004929128828\" ixy=\"-0.00000045735494\" ixz=\"0.00000445867591\" iyy=\"0.00005973338134\" iyz=\"0.00000043217198\" izz=\"0.00003928083826\"/>\r\n    
</inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_wrist_yaw_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_wrist_yaw_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"left_hand_palm_joint\" type=\"fixed\">\r\n    <origin xyz=\"0.0415 0.003 0\" rpy=\"0 0 0\"/>\r\n    <parent link=\"left_wrist_yaw_link\"/>\r\n    <child link=\"left_rubber_hand\"/>\r\n  </joint>\r\n  <link name=\"left_rubber_hand\">\r\n    <inertial>\r\n      <origin xyz=\"0.05361310808 -0.00295905240 0.00215413091\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.170\"/>\r\n      <inertia ixx=\"0.00010099485234748\" ixy=\"0.00003618590790516\" ixz=\"-0.00000074301518642\" iyy=\"0.00028135871571621\" iyz=\"0.00000330189743286\" izz=\"0.00021894770413514\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/left_rubber_hand.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n  </link>\r\n  <link name=\"right_shoulder_pitch_link\">\r\n    <inertial>\r\n      <origin xyz=\"0 -0.035892 -0.011628\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.718\"/>\r\n      <inertia ixx=\"0.0004291\" ixy=\"9.2E-06\" ixz=\"6.4E-06\" iyy=\"0.000453\" iyz=\"-2.26E-05\" izz=\"0.000423\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_shoulder_pitch_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n 
     <origin xyz=\"0 -0.04 -0.01\" rpy=\"0 1.5707963267948966 0\"/>\r\n      <geometry>\r\n        <cylinder radius=\"0.03\" length=\"0.05\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_shoulder_pitch_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.0039563 -0.10021 0.24778\" rpy=\"-0.27931 5.4949E-05 0.00019159\"/>\r\n    <parent link=\"torso_link\"/>\r\n    <child link=\"right_shoulder_pitch_link\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <limit lower=\"-3.0892\" upper=\"2.6704\" effort=\"25\" velocity=\"37\"/>\r\n  </joint>\r\n  <link name=\"right_shoulder_roll_link\">\r\n    <inertial>\r\n      <origin xyz=\"-0.000227 -0.00727 -0.063243\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.643\"/>\r\n      <inertia ixx=\"0.0006177\" ixy=\"1E-06\" ixz=\"8.7E-06\" iyy=\"0.0006912\" iyz=\"5.3E-06\" izz=\"0.0003894\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_shoulder_roll_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"-0.004 -0.006 -0.053\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <cylinder radius=\"0.03\" length=\"0.03\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_shoulder_roll_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 -0.038 -0.013831\" rpy=\"0.27925 0 0\"/>\r\n    <parent link=\"right_shoulder_pitch_link\"/>\r\n    <child link=\"right_shoulder_roll_link\"/>\r\n    <axis xyz=\"1 0 0\"/>\r\n    <limit lower=\"-2.2515\" upper=\"1.5882\" effort=\"25\" velocity=\"37\"/>\r\n  </joint>\r\n  <link name=\"right_shoulder_yaw_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.010773 0.002949 -0.072009\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.734\"/>\r\n      <inertia ixx=\"0.0009988\" ixy=\"-7.9E-06\" ixz=\"0.0001412\" iyy=\"0.0010605\" iyz=\"2.86E-05\" 
izz=\"0.0004354\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_shoulder_yaw_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_shoulder_yaw_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_shoulder_yaw_joint\" type=\"revolute\">\r\n    <origin xyz=\"0 -0.00624 -0.1032\" rpy=\"0 0 0\"/>\r\n    <parent link=\"right_shoulder_roll_link\"/>\r\n    <child link=\"right_shoulder_yaw_link\"/>\r\n    <axis xyz=\"0 0 1\"/>\r\n    <limit lower=\"-2.618\" upper=\"2.618\" effort=\"25\" velocity=\"37\"/>\r\n  </joint>\r\n  <link name=\"right_elbow_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.064956 -0.004454 -0.010062\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.6\"/>\r\n      <inertia ixx=\"0.0002891\" ixy=\"-6.53E-05\" ixz=\"1.72E-05\" iyy=\"0.0004152\" iyz=\"5.6E-06\" izz=\"0.0004197\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_elbow_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_elbow_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_elbow_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.015783 0 -0.080518\" rpy=\"0 0 0\"/>\r\n    <parent link=\"right_shoulder_yaw_link\"/>\r\n    <child link=\"right_elbow_link\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <limit lower=\"-1.0472\" upper=\"2.0944\" effort=\"25\" velocity=\"37\"/>\r\n  </joint>\r\n  
<joint name=\"right_wrist_roll_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.100 -0.00188791 -0.010\" rpy=\"0 0 0\"/>\r\n    <axis xyz=\"1 0 0\"/>\r\n    <parent link=\"right_elbow_link\"/>\r\n    <child link=\"right_wrist_roll_link\"/>\r\n    <limit effort=\"25\" velocity=\"37\" lower=\"-1.972222054\" upper=\"1.972222054\"/>\r\n  </joint>\r\n  <link name=\"right_wrist_roll_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.01713944778 -0.00053759094 0.00000048864\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.08544498\"/>\r\n      <inertia ixx=\"0.00004821544023\" ixy=\"0.00000424511021\" ixz=\"0.00000000510599\" iyy=\"0.00003722899093\" iyz=\"0.00000000123525\" izz=\"0.00005482106541\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_wrist_roll_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_wrist_roll_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_wrist_pitch_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.038 0 0\" rpy=\"0 0 0\"/>\r\n    <axis xyz=\"0 1 0\"/>\r\n    <parent link=\"right_wrist_roll_link\"/>\r\n    <child link=\"right_wrist_pitch_link\"/>\r\n    <limit effort=\"5\" velocity=\"22\" lower=\"-1.614429558\" upper=\"1.614429558\"/>\r\n  </joint>\r\n  <link name=\"right_wrist_pitch_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.02299989837 0.00111685314 -0.00111658096\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.48404956\"/>\r\n      <inertia ixx=\"0.00016579646273\" ixy=\"0.00001231206746\" ixz=\"0.00001231699194\" iyy=\"0.00042954057410\" iyz=\"-0.00000081417712\" izz=\"0.00042953697654\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      
<geometry>\r\n        <mesh filename=\"meshes/right_wrist_pitch_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_wrist_pitch_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_wrist_yaw_joint\" type=\"revolute\">\r\n    <origin xyz=\"0.046 0 0\" rpy=\"0 0 0\"/>\r\n    <axis xyz=\"0 0 1\"/>\r\n    <parent link=\"right_wrist_pitch_link\"/>\r\n    <child link=\"right_wrist_yaw_link\"/>\r\n    <limit effort=\"5\" velocity=\"22\" lower=\"-1.614429558\" upper=\"1.614429558\"/>\r\n  </joint>\r\n  <link name=\"right_wrist_yaw_link\">\r\n    <inertial>\r\n      <origin xyz=\"0.02200381568 -0.00049485096 0.00053861123\" rpy=\"0 0 0\"/>\r\n      <mass value=\"0.08457647\"/>\r\n      <inertia ixx=\"0.00004929128828\" ixy=\"0.00000045735494\" ixz=\"0.00000445867591\" iyy=\"0.00005973338134\" iyz=\"-0.00000043217198\" izz=\"0.00003928083826\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_wrist_yaw_link.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n    <collision>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_wrist_yaw_link.STL\"/>\r\n      </geometry>\r\n    </collision>\r\n  </link>\r\n  <joint name=\"right_hand_palm_joint\" type=\"fixed\">\r\n    <origin xyz=\"0.0415 -0.003 0\" rpy=\"0 0 0\"/>\r\n    <parent link=\"right_wrist_yaw_link\"/>\r\n    <child link=\"right_rubber_hand\"/>\r\n  </joint>\r\n  <link name=\"right_rubber_hand\">\r\n    <inertial>\r\n      <origin xyz=\"0.05361310808 0.00295905240 0.00215413091\" rpy=\"0 0 0\"/>\r\n      <mass 
value=\"0.170\"/>\r\n      <inertia ixx=\"0.00010099485234748\" ixy=\"-0.00003618590790516\" ixz=\"-0.00000074301518642\" iyy=\"0.00028135871571621\" iyz=\"-0.00000330189743286\" izz=\"0.00021894770413514\"/>\r\n    </inertial>\r\n    <visual>\r\n      <origin xyz=\"0 0 0\" rpy=\"0 0 0\"/>\r\n      <geometry>\r\n        <mesh filename=\"meshes/right_rubber_hand.STL\"/>\r\n      </geometry>\r\n      <material name=\"white\">\r\n        <color rgba=\"0.7 0.7 0.7 1\"/>\r\n      </material>\r\n    </visual>\r\n  </link>\r\n</robot>"
  }
]